\section{Extensions} \label{sec:extensions} In this section we introduce two extensions to our relaxed-rigidity trajectory planner--joint acceleration smoothness and collision avoidance. We additionally propose an object pose feedback controller to compensate for errors encountered during execution of the in-grasp plan. \subsection{Joint Acceleration} We find that the linear interpolation cost term \[\sum\limits_{t=0}^{T-1}E_{obj}(\Theta_t,W_t)\] in Eq.~\ref{eq:costs} aids our trajectory optimization in finding a path to the desired object pose; however, it imposes two limitations: \begin{enumerate} \item The planner prefers linear object paths to the desired pose, which may not always be feasible. \item The planner prefers a constant object velocity during the manipulation, which can cause sudden jerks in the object motion and thereby in the joint control. \end{enumerate} We explore an alternative cost which prefers smooth paths in the joint space. We minimize the acceleration between time steps following the sum-of-squares formulation from~\cite{toussaint2017tutorial}. This allows for smoother paths, while not encouraging the object to follow a linear path to the desired pose. We replace the linear waypoint interpolation cost term with the following cost term, \begin{flalign*} \alpha_1\sum\limits_{t=0}^{T+1}||\Theta_{t-2}-2\Theta_{t-1}+\Theta_{t}||_2^2 \numberthis \end{flalign*} where we force $\Theta_{T+1}=\Theta_{T}$, and $\Theta_{-1}=\Theta_{-2}=\Theta_{0}$. We discuss the empirical effects of this cost in Section~\ref{sec:smoothness-results}. \subsection{Collision Avoidance} As noted in Section~\ref{sec:prob_def}, our planner does not avoid collisions between the object and the robot palm or between the object and the environment. We now propose adding an obstacle-based cost function to the optimization in order to obtain collision-free plans while moving the object to the desired pose.
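As a concrete sketch, the joint-acceleration smoothing term above, with its boundary conditions $\Theta_{T+1}=\Theta_T$ and $\Theta_{-1}=\Theta_{-2}=\Theta_0$, could be evaluated as follows (the function name and array layout are our illustrative choices, not from the paper):

```python
import numpy as np

def acceleration_cost(theta, alpha1=0.01):
    """Sum of squared second-order finite differences over a joint trajectory.

    theta: (T+1, n_joints) array holding Theta_0 .. Theta_T.
    Boundary handling follows the text: Theta_{-1} = Theta_{-2} = Theta_0
    and Theta_{T+1} = Theta_T, so a constant trajectory costs zero.
    """
    padded = np.vstack([theta[0], theta[0], theta, theta[-1]])
    # Row t is Theta_{t-2} - 2*Theta_{t-1} + Theta_t for t = 0 .. T+1.
    acc = padded[:-2] - 2.0 * padded[1:-1] + padded[2:]
    return alpha1 * np.sum(acc ** 2)
```

A linear joint-space path incurs cost only at the two boundary steps, so smooth non-linear paths are no longer disfavored relative to linear object-space motion.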
We use signed distance functions to measure the distance between the grasped object and the environment, motivated by other trajectory optimization approaches for motion planning~\citep{Zucker2013,Schulman2014}. The signed distance computes the shortest distance between a point \(p\) and the mesh \(M\). The sign denotes whether \(p\) lies within the mesh (negative) or outside the mesh (positive). Given the object mesh, $M$, in the palm frame, the hand joint configuration,~$\Theta$, and the environment as a set of objects,~$W$, the truncated signed distance function can be written as: \begin{flalign*} C(\Theta,M,W)&=\alpha_2\sum_{w\in W}(\beta-\min(\beta,SD(M,w)))\numberthis \end{flalign*} which penalizes the object when it comes within $\beta$ distance of any obstacle in the environment. Ideally, collision functions should be used as constraints, as in~\cite{Schulman2014}. However, we found that formulating the collision constraint as a cost term with a large scalar weight~$\alpha_2$ provided better trajectories and quicker solutions. Hence we add it as a cost term. This collision cost can be used to avoid collisions with both the environment and the hand. We add this as an additional cost term to Eq.~\ref{eq:costs} and perform trajectory optimization as before. \subsection{Object Pose Feedback Controller} \label{sec:feedback} While our purely kinematic trajectory optimization performs well in practice, it still suffers some error during execution caused by friction, contact dynamics, and other unmodeled effects. Explicitly modeling these variables proves difficult and complex on real-world objects and impossible prior to interaction with a novel object. As such, we propose compensating for these errors through a feedback controller based on visually tracking the object's pose. We use as targets the desired object pose trajectory $\mathbf{X}_D$ from the initial pose $X_0$ to the desired object pose $X_G$ generated by our ``relaxed-rigidity'' planner.
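The truncated signed-distance cost above can be sketched directly; here `signed_distance` is a hypothetical stand-in for the GJK/EPA query (computed with libccd in our implementation), returning the shortest distance between the object mesh and an obstacle, negative under penetration:

```python
def collision_cost(object_mesh, obstacles, signed_distance,
                   alpha2=1000.0, beta=0.005):
    """Truncated signed-distance penalty C(Theta, M, W).

    Each obstacle contributes only once the object comes within beta of it;
    penetration (negative distance) is penalized most heavily.
    """
    return alpha2 * sum(beta - min(beta, signed_distance(object_mesh, w))
                        for w in obstacles)
```

Obstacles farther than $\beta$ contribute zero, so the cost, combined with the large weight $\alpha_2$, only shapes the trajectory near contact.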
We define our object pose feedback controller to only affect the thumb joints, as we assume only the thumb attaches rigidly to the object during planning. To ensure the object remains in the robot's grasp, we track the planned joint trajectory $\Theta_D$ for the remaining fingers. As long as the object does not deviate from the planned trajectory by a large margin, thumb-only feedback should prove sufficient to maintain grasp of the object. (We validate this claim in Sec.~\ref{sec:fb_results}.) The robot receives as input the joint position configuration $U[t]$ at every time step $t$. We define this as a combination of the feedforward planned joint trajectory~$\Theta_D[t+1]$ and the object pose feedback term~$\dot{\Theta}_{fb}$, \begin{align*} U[t]&=\Theta_D[t+1]+\lambda_{fb}\dot{\Theta}_{fb}[t] \numberthis \end{align*} where the positive weight $\lambda_{fb}$ allows for tuning the feedback compensation. The feedback input~$\dot{\Theta}_{fb}[t]$ corrects for errors between the planned fingertip pose and the predicted contact pose of the fingertip on the object. The planned fingertip pose at time step $t+1$ is given by $FK(\Theta_D[t+1])$. The predicted contact pose at time step $t+1$ is computed from the desired object pose~$X_D[t+1]$ and the observed transformation matrix from fingertip to object frame,~$\prescript{O}{}{\hat{T}}_f$, as \begin{align*} H(X_D[t+1],\prescript{O}{}{\hat{T}}_f)&=Q(R(X_D[t+1])\cdot\prescript{O}{}{\hat{T}}_f) \numberthis \end{align*} where $R(\cdot)$ converts a pose into a homogeneous transformation matrix and $Q(\cdot)$ transforms a homogeneous matrix back into a pose. This essentially accounts for changes in the rigid transformation between the object and the fingertip.
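A minimal sketch of these two computations, representing poses directly as $4\times4$ homogeneous matrices so that the $R(\cdot)$ and $Q(\cdot)$ conversions fold away (function names and the default $\lambda_{fb}=50$ follow our notation, not a library API):

```python
import numpy as np

def predicted_contact_pose(T_palm_obj_desired, T_obj_finger):
    """H(X_D[t+1], ^O T_hat_f): compose the desired object pose with the
    observed fingertip-in-object transform to predict the fingertip pose."""
    return T_palm_obj_desired @ T_obj_finger

def control_command(theta_d_next, theta_dot_fb, lambda_fb=50.0):
    """U[t] = Theta_D[t+1] + lambda_fb * Theta_dot_fb[t]."""
    return np.asarray(theta_d_next) + lambda_fb * np.asarray(theta_dot_fb)
```

The feedforward term keeps the controller anchored to the planned trajectory; the weighted feedback term nudges the thumb toward the predicted contact pose.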
We define our feedback law by transforming the Cartesian-space object pose error into the joint space using the inverse of the finger's Jacobian: \begin{align*} \dot{\Theta}_{fb}[t] &= -J^{-1}_{\hat{\Theta}[t]}(FK(\Theta_D[t+1])-H(X_D[t+1],\prescript{O}{}{\hat{T}}_f)) \numberthis \end{align*} We found that approximating the Jacobian inverse by its transpose, rather than the Moore-Penrose pseudoinverse, performed better for our underactuated fingers. \section{Implementation Details \& Experimental Protocol} \label{sec:exp} We now describe important details relating to our implementation and the setup of our experiments. \subsection{Trajectory Generation and Feedback Implementation} Direct methods for trajectory optimization, such as sequential quadratic programming (SQP), have shown promising results in robotics~\citep{Schulman2014,posa-ijrr2014,posa-icra2016}. We solve our trajectory optimization problem using SNOPT~\citep{Gill2005}, an SQP solver designed for sparsely constrained problems. We run the solver with a maximum limit of 5000 iterations using analytical gradients for the costs and constraints. The computer used to run the solver and experiments has an Intel i7-7700K CPU with 32 GB of RAM running Ubuntu 16.04 with ROS Kinetic~\citep{quigley2009ros}. The robot used is the Allegro Hand, which has four fingers with 4 joints each\footnote{http://www.simlab.co.kr/Allegro-Hand.htm}. We solve for \(T=10\) time steps with each time step being \(\Delta t=0.167\)s long. We expand the obtained solution to a higher resolution of 100 time steps by linearly interpolating the joint trajectories. We limit the joint velocities to be less than 0.6rad/s. Our approach has four weights: three scaling the importance of each cost term, $k_1,k_2,k_3$, and a projection weight $\psi$ for the orientation cost term $E_{or}$. The orientation cost $E_{or}$ reduces orientation changes along the weight vector $\psi$.
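The feedback law at the start of this section, with the transpose standing in for $J^{-1}$ as described, reduces to a few lines. This is a sketch with illustrative names; the Jacobian is assumed already evaluated at $\hat{\Theta}[t]$:

```python
import numpy as np

def thumb_feedback(J, planned_tip, predicted_tip):
    """Theta_dot_fb[t] = -J^T (FK(Theta_D[t+1]) - H(X_D[t+1], ^O T_hat_f)).

    J maps thumb joint velocities to Cartesian fingertip velocities; its
    transpose approximates the inverse, which we found behaves better than
    the pseudoinverse for our underactuated fingers.
    """
    error = np.asarray(planned_tip) - np.asarray(predicted_tip)
    return -J.T @ error
```

The negative sign drives the thumb toward the predicted contact pose, shrinking the Cartesian error before it accumulates along the trajectory.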
Ideally, we would want to reduce the impact of any contact model on the manipulation task by penalizing orientation changes across all three dimensions. However, this would reduce the reachable workspace of the manipulation task, as the Allegro hand is under-actuated with respect to 6 DOF poses. Hence, we chose to reduce orientation changes along the single axis that covers the largest workspace of fingertip positions. This is the $y$-axis with respect to the palm for the index, middle and ring fingers, making $\psi=\begin{bmatrix}0& 1 & 0\end{bmatrix}$. We consider this weight a trade-off between allowing a larger manipulation workspace and enforcing smaller changes at contact points. We see in Sec.~\ref{sec:reach-feas-object} that restricting orientation changes along one axis improves the position error over assuming a point contact model. The remaining three weights model the relative importance of the cost terms in our optimization. We want the robot to always maintain contacts close to the initial grasp during manipulation; a large value for $k_2$ enforces this. Keeping the initial orientation is less important, allowing $k_3$ to be less than $k_2$. The weight for waypoints, $k_1$, should help guide the fingers to the goal pose, while being low enough to allow for non-linear trajectories when linear trajectories are not feasible. We examined various weights under this scaling and found $k_1=0.09,k_2=100,k_3=1$ to work well across a variety of trajectories and objects. For $k_2<1.0$, the hand dropped the object when unreachable object poses were given. The chosen weights, however, were able to maintain the object in-grasp while still moving the object towards the desired pose. The weights chosen for the extensions are $\alpha_1=0.01,\alpha_2=1000,\beta=0.005,\lambda_{fb}=50$.
For the collision avoidance experiments, the grasped object and the environment are approximately decomposed into convex groups using~\citep{mamou2009simple} to speed up the signed distance computation. We compute signed distances using libccd\footnote{https://github.com/danfis/libccd}, based on a combination of the Gilbert-Johnson-Keerthi~(GJK) algorithm and the expanding polytope algorithm~(EPA); extensive details can be found in~\citep{van2001proximity}. For the object pose feedback controller, we use a GPU-based particle tracker from~\cite{GarciaCifuentes.RAL} to track the object using an NVIDIA GTX 1060 GPU. \begin{figure*} \centering \begin{tabularx}{0.95\textwidth}{>{\setlength\hsize{1\hsize}\centering}X >{\setlength\hsize{1\hsize}\centering}X >{\setlength\hsize{1\hsize}\centering}X >{\setlength\hsize{1\hsize}\centering}X >{\setlength\hsize{1\hsize}\centering}X } t=0s & t=0.4s & t=0.8s & t=1.2s & t=1.6s \end{tabularx} \subfloat{ % \includegraphics[width=0.95\textwidth]{tex_figs/tuna_traj}% } \\[-0.01ex] \subfloat{ % \includegraphics[width=0.95\textwidth]{tex_figs/lego_traj}% }\\[-0.01ex] \subfloat{ % \includegraphics[width=0.95\textwidth]{tex_figs/jello_traj}% }\\[-0.01ex] \subfloat{ % \includegraphics[width=0.95\textwidth]{tex_figs/banana_traj}% } \caption{Images showing manipulation of objects during trajectory execution. The trajectories are generated from our method~(``Relaxed-Rigidity''). The frame ``O'' represents the current object pose and the ``G'' frame is the desired object pose. \emph{Tuna}, being heavier than all other objects, has a larger error due to the PD controller of the hand being insufficient to counteract the gravitational forces. \emph{Banana}, having a complex surface, also shows a larger error than objects with a flat surface. Markers are used for ground-truth collection only.
Additional executions with different objects are shown at \url{https://youtu.be/Gn-yMRjbmPE}.} \label{fig:manipulation} \end{figure*} \subsection{Experimental Protocol} We selected objects of different size, texture and shape from the YCB dataset~\cite{Calli2015}, shown in Fig.~\ref{fig:obj}, as a benchmarking set. The ten objects used are: \emph{screwdriver}, \emph{Lego}, \emph{fork}, \emph{banana}, \emph{spatula}, \emph{toy plane}, \emph{Jello}, \emph{tuna}, \emph{apple}, and \emph{orange}. A variety of three-fingered grasps were performed across the objects to show the reachability of the proposed method; examples can be seen in Fig.~\ref{fig:grasps}. The set of experiments consists of moving the grasped object to a goal pose. Finding feasible desired poses given an initial grasp is a complex problem~\citep{Rojas2016, Hertkorn2013a} and we do not formalize a method to obtain them. Instead, we focus on obtaining trajectories to a reachable pose and not on finding reachable poses. We obtain goal poses by having a human move the object in-grasp to the desired pose with the robot in gravity compensation mode. Any other method could be used to obtain desired poses. The Euclidean distances from the initial object positions to the desired positions range from 0.8cm to 8.33cm with a mean of 4.87cm. Desired poses with small positional change have a large orientation change. One trajectory for each goal pose was generated. The ground truth of the object pose is obtained using Aruco markers~\cite{Aruco2014}. The initial pose of the object is obtained by placing the object in the hand and forming a grasp manually. Once the grasp is set, the joint angles are recorded and the object pose with respect to the palm link is obtained using the Aruco markers. We align the object with the initial pose used for trajectory generation using the markers and robot forward kinematics. Executions of all trials are recorded~(video, robot frames, and object poses).
All associated data is available~(\url{https://robot-learning.cs.utah.edu/project/in_hand_manipulation}) to facilitate direct comparison. Relatively little empirical evaluation has been performed for in-hand manipulation on real robot hands. The lack of a common benchmarking scheme prohibits us from comparing directly with the methods described in Sec.~\ref{sec:related_work}. The Allegro hand we use for physical validation has hemispherical fingertips, which could cause rolling motion on the grasped object. Modeling rolling motion~(\cite{cutkosky1986friction}) between the fingertips and the grasped object requires extensive information about the object~(surface geometry, friction) and precise force control of the fingertips. The Allegro hand's lack of joint-level torque sensing prevents us from comparing our method to methods that use force control. We compare to the ``point contact with friction'' model, which admits an approximate kinematic solution for object manipulation~\citep{Li1989}. We formulate this as a trajectory optimization problem similar to our method with different cost terms. Specifically, we attempt to keep the fingertip positions fixed with respect to the object, while allowing the relative orientation to change. The cost function can be found in~\cite{sundaralingam2017relaxed}. We define the following error metrics for evaluating in-grasp manipulation. The position error is computed as the Euclidean distance between the reached position and the desired position of the object. Additionally, we report the position error normalized with respect to the length of the trajectory as ``Position Error\%''. The second metric measures the final orientation error, calculated using quaternions, as the difference between rotation frames is not well defined using Euler angles~\cite{Huynh2009}.
\[err_{orient}=100\times\frac{\min(||q_d-q||_2,||q_d+q||_2)}{\sqrt{2}} \numberthis\] where $q_d$ is the unit quaternion of the desired object pose and $q$ is the unit quaternion of the reached object pose. This error lies in the range $[0,\sqrt{2}]$, hence it is normalized by $\sqrt{2}$ and reported as ``Orientation error\%''. Finally, where appropriate, we report as failed attempts trials where the robot dropped the object during execution. Ten unique reachable goal poses and two initial grasps per object are chosen to validate our planner. To account for variation in execution and evaluate robustness, 5 trials are run for each trajectory, giving a total of 50 trials per object. The difference in initial position between trials has a mean error of 0.59cm with an associated variance of 0.09cm. A total of 2000 trials are run across different methods to evaluate our proposed method. To evaluate the joint acceleration extension to our planner and the object pose feedback controller, we conduct experiments with three objects: \emph{Apple}, \emph{Banana} and \emph{Jello}. We choose 5 goal poses per object across two initial grasps per object. We run three trials per generated trajectory. To validate collision avoidance, we show two applications on a physical robot. \section{Introduction and Motivation} The problem of robotic in-hand manipulation--changing the relative pose between a robot hand and object, without placing the object down--remains largely unsolved. Research in in-hand manipulation has focused largely on using full knowledge of the mechanical properties of the objects of interest in finding solutions~\citep{Li1989,Mordatch2012,Han1998,Andrews2013}. This reliance on object-specific modeling makes in-hand manipulation expensive and sometimes infeasible in real-world scenarios, where robots may lack high-fidelity object models.
Learning-based approaches to the problem have also been proposed~\citep{kumar-icra2016,vanhoof-ichr2015-in-hand-rl}; however, these methods require significant experience with the object of interest and learn only a single motion primitive~(e.g. movement to a specific goal pose). Solving the general in-hand manipulation problem using real-world robotic hands will require a variety of manipulation skills~\citep{bicchi2000hands}. As such, we focus on a subproblem of in-hand manipulation: in-grasp manipulation, where the robot moves a grasped object to a desired pose without changing the initial grasp. We explore a purely kinematic planning approach for in-grasp manipulation motivated by recent successes in kinematic grasp planning~\citep{ciocarlie2007dexterous,carpin2016multi}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{tex_figs/intro_v5} \caption{Example trajectory produced by our method executed on the Allegro hand. The trajectory moves the Lego from its initially grasped pose to a desired pose. The robot follows the joint space trajectory produced from our trajectory optimizer with a PD-based joint position controller. The images on the right show frames from execution, where \(t\) refers to the timestep.} \label{fig:intro} \end{figure} Giving robots the ability to perform in-grasp manipulation would allow for changing a grasped object's pose without requiring full arm movement or complex finger gaiting~\citep{Hong1990}. Many tasks requiring a change in relative pose between the hand and the grasped object do not require a large workspace and can be performed without changing the grasp. Example tasks include turning a dial, reorienting objects for insertion, or assembling small parts such as watch gears. This is especially beneficial in cluttered environments where small movements are preferred to avoid collisions. In this paper, we propose a kinematic planner for in-grasp manipulation through trajectory optimization.
The proposed planner gives a joint space trajectory that would move the object to the desired pose without losing grasp of the object. By attempting to maintain the contact points from the initial grasp during manipulation, we do not require any detailed models of the grasped object. The in-grasp manipulation problem is under-actuated, as the object's states are not fully or directly controllable. As such, it does not immediately offer a kinematic solution. A naive approach would be to model all contacts between the object and robot as rigid links and plan for a desired object pose as if the robot were a parallel mechanism. However, in most robotic hands, the fingers have fewer degrees of freedom (DOF) than necessary to control a 6 DOF world pose. Thus, we introduce a novel cost function which relaxes the rigidity constraints between the object and fingers. This cost function penalizes the robot fingertips for changing the relative positions and orientations between each other from those used in the initial grasp. We name this cost function the relaxed-rigidity constraint. We combine relaxed-rigidity constraints for all fingers with cost terms that encourage the object's movement to the desired pose. This combined cost function defines the objective for our purely kinematic trajectory optimization. The result allows for small position and orientation changes at the contact locations, while maintaining a stable grasp as the object moves toward the desired pose. Fig.~\ref{fig:intro} shows an example trajectory from our planner. This kinematic planner successfully performed in-grasp manipulation with 10 objects across 500 trials without dropping the object. Our approach to in-grasp manipulation directly solves for a joint space trajectory to reach a task space goal, in contrast to previous methods~\citep{Mordatch2012,li2013integrating} which rely on separate inverse kinematic (IK) solvers to obtain joint space trajectories. 
Our direct solution is attractive, as IK solutions become complex when a robot is under-actuated in terms of the dimensions of the task space (i.e. the end-effector of a 4-joint manipulator cannot reach all orientations in a 6-dimensional task space for a given position). Our approach additionally handles hard constraints on the robot's joint positions and velocities. The problem is efficiently solved as a direct optimization using a sequential quadratic programming (SQP) solver. Our method allows for changing the object's pose without the need to know the dynamic properties of the object or the contact forces on the fingers. Solving directly in the joint space also allows us to have costs in the input space, such as smooth joint accelerations between time steps for smooth operation of the robot during manipulation. The use of trajectory optimization also allows us to apply advancements in collision-free manipulator motion planning~\citep{Schulman2014} to our in-grasp manipulation problem, and we show how our planner can avoid collisions with the environment during manipulation. In addition, we compensate for error in trajectory execution online by incorporating an object pose feedback control scheme. Our ``Relaxed-Rigidity'' planner makes the following contributions validated with real-world experiments: \begin{enumerate} \item We demonstrate that a purely kinematic trajectory optimization sufficiently solves a large set of in-grasp manipulation tasks with a real robot hand. \item We enable this kinematic solution by introducing a novel relaxation of rigid-contact constraints to a soft constraint on rigidity expressed as a cost function. We name this the relaxed-rigidity constraint. \item Our method directly solves for joint configurations at all time steps, without the need of a separate inverse kinematics solver, a novel contribution over previous trajectory optimization approaches for in-hand manipulation (e.g.~\cite{Mordatch2012}).
\item We are the first to extensively validate an in-grasp manipulation planner on a real robot hand. We do so with multiple objects from the YCB dataset~\citep{Calli2015} and introduce relevant error metrics, paving the way for a unified testing scheme in future works (c.f. Section~\ref{sec:exp}). \end{enumerate} This article makes the following contributions over our previous work~\citep{sundaralingam2017relaxed}: \begin{enumerate} \item We introduce a joint acceleration cost to prefer smooth joint space paths, leading to lower excitation of the object dynamics. \item We enable collision-free manipulation planning of the object in cluttered environments by including a signed distance cost function. \item We compensate for error online during trajectory execution through an object pose feedback controller. \end{enumerate} We organize the remainder of the paper as follows. We discuss in-hand manipulation research related to our approach in Section~\ref{sec:related_work}. We follow this with a formal definition of the in-grasp manipulation problem and a detailed explanation of our in-grasp planner in Section~\ref{sec:prob_def}. We present our extensions to the initial planner in Section~\ref{sec:extensions}. We then discuss implementation details and define our experimental protocol in Section~\ref{sec:exp}. We analyze the results of extensive robot experiments in Section~\ref{sec:results}. We discuss the limitations of our approach in Section~\ref{sec:discussion} and conclude in Section~\ref{sec:conclusion}. \section{Discussion} \label{sec:discussion} Extensive validation of our in-grasp manipulation planner raised several open questions, which we discuss below. \subsection{Improving Manipulation Accuracy} \label{sec:online-replanning} Our planner was able to achieve an average position error of 13mm without feedback of the object pose. While this might seem large, there are many tasks that could be performed with this accuracy.
One task we explore in this paper is moving a spoon into a cup~(Fig.~\ref{fig:c_env}). If only the arm is used to move the spoon, a very precise arm controller or visual servoing is required to move it inside the cup. With in-grasp manipulation, the spoon is moved inside the cup without visual servoing, using the dexterity of the fingers. At a broader scale, in-grasp manipulation cannot achieve large object pose changes, as the fingers have limited reachability. We have started exploring methods to switch to a different fingertip grasp to extend the reachable object poses~(\cite{sundaralingam2018regrasp}). Two potential bottlenecks prevent us from improving the accuracy through online replanning: slow planning time and poor object pose tracking accuracy. Our current trajectory optimization implementation takes on average 2 seconds to generate a trajectory. The optimization is computationally expensive, as the reachability of the fingertips and the objective function are highly non-convex. This led us to use a Jacobian-based object pose feedback controller~(Sec.~\ref{sec:feedback}). The feedback controller was unable to reduce the median object position error to less than 1cm. Upon further analysis, we found that the object pose tracker was not precise to less than 1cm. We will explore improving the object pose tracking system and study the effect on manipulation accuracy. We will re-evaluate whether the Jacobian controller is sufficient or online replanning is necessary to improve accuracy. \subsection{Losing Contact During Manipulation} When four fingers were used, physical experiments showed some of the fingers losing contact with the object during manipulation and regaining contact before the manipulation completed, as seen in Fig.~\ref{fig:4f}. This did not lead to dropping of the object. We will explore adding tactile feedback to maintain contact with the object. We never observed the object slipping from the grasp during manipulation.
\subsection{Cost vs Constraints} Our approach formulates the ``relaxed-rigidity'' terms as part of the cost, as we want to minimize changes to the initial grasp as much as possible. Another perspective would be to formulate them as inequality constraints with thresholds~(i.e. maximum allowed deviations). Formulating them as constraints provides a potential advantage of faster planning times. However, finding thresholds for the ``relaxed-rigidity'' terms that would lead to successful executions on the physical robot is not straightforward. Additionally, a constraint-based approach treats all feasible solutions equally, while our approach attempts to minimize the deviation when possible. \section{Conclusion} \label{sec:conclusion} We presented an in-grasp manipulation planner which, given only the initial joint angles, the joint limits, and the initial object pose, solves for a joint-level trajectory to move the object to a desired goal pose. We implemented and experimentally validated the proposed method on a physical robot hand with ground-truth error analysis. The results show that our relaxed-rigidity constraint allows better real-world performance than assuming a point contact model. We show how to use our planner with a collision avoidance cost to manipulate the grasped object in a cluttered environment. We show the ability to reduce unmodeled dynamic effects by adding a cost for smooth joint space paths. We show that the use of an object pose feedback controller reduces the variance in trajectory execution. \section{Results} \label{sec:results} We now discuss the results of our empirical experiments. We first validate our ``Relaxed-Rigidity'' planner on a real robot, comparing with alternative formulations for in-grasp manipulation. We then discuss results from our extensions to the ``Relaxed-Rigidity'' planner. In all plots, results correspond to objects grasped with three fingers, unless otherwise stated.
For every trajectory that is run on the robot, the position error and orientation error are recorded. The errors are plotted as a box plot (showing first quartile, median error, third quartile) with whiskers connecting the extreme values. \subsection{Relaxed-Rigidity Physical Robot Validation} \label{sec:reach-feas-object} \begin{figure} \centering \subfloat{ % \includegraphics[width=0.48\textwidth]{tex_figs/position_error} % }\\[-0.01ex] \subfloat{ % \includegraphics[trim={0 0 0 0.73cm},clip,width=0.48\textwidth]{tex_figs/position_error_perc} % }\\[-0.01ex] \subfloat{ % \includegraphics[trim={0 0 0 0.73cm},clip,width=0.48\textwidth]{tex_figs/orientation_error} } \caption{A comparison of the relaxed-rigidity constraint performance with alternative formulations. Top: position error. Middle: position error\%. Bottom: orientation error\%. The median position error decreases for all objects with our method. Except for \emph{Banana} and \emph{Jello}, the orientation error\% improves with our method for all objects.} \label{fig:pose_error} \end{figure} The position error and orientation error for all trials across all objects are shown in Fig.~\ref{fig:pose_error}. Our method has the lowest median position error across all objects. The maximum error across all objects is also much smaller for our method than for the point contact model with friction. The ``Position Error\%'' plot shows that our method~(``Relaxed-Rigidity'') brings the object closer to the desired pose than the initial pose in all trials, with a maximum error of 75\%. In contrast, ``PC'' obtains errors greater than 100\% in several trials, showing the object moving further away from the desired pose than it was at the initial pose. Additionally, one can see that our method has a lower variance in final position than the competing methods across nearly all objects. Four samples from our experiments are shown in Fig.~\ref{fig:manipulation} with overlaid current object pose and desired object pose.
\begin{table} \centering \caption{Summary of results with the best value in bold text. The errors are the median of all trials. ``relaxed-1'' refers to ``relaxed-position'' and ``relaxed-2'' refers to ``relaxed-position-orientation''.} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Suc.\%} & Pos. &\multicolumn{2}{c|}{Error\%}\\ \cline{4-5} & & Error(cm) & Pos. & Orient. \\ \hline PC & 95 & 1.69 & 36.81 & \textbf{9.74} \\ \hline relaxed-1 & 91 & 1.64 & 30.95 & 10.43 \\ \hline relaxed-2 & 93 & 1.54 & 29.19 & 9.84 \\ \hline Relaxed-Rigidity & \textbf{100} & \textbf{1.32} & \textbf{28.67} & 9.86 \\ \hline \end{tabular} \label{tab:results} \end{table} Table~\ref{tab:results} shows the success rate and the median errors across all these methods. The success rate and position error improve as we add the additional costs from our method. Our method also performs better than assuming a point contact model. The point contact model resulted in dropping the object in 25 out of 500 trials, while our proposed method never dropped an object. The orientation error for all methods remains low across all objects except for \emph{Fork}, where the fingertips are larger than the object, causing it to roll with very small orientation changes at the fingertips. For all objects except \emph{Banana} and \emph{Jello}, the orientation error\% improves with our method. A large improvement in orientation error is seen in \emph{Spatula}, an object for which the point contact model with friction achieves relatively high orientation error. To show that our method generalizes to n-fingered grasps, we show results for 2-fingered and 4-fingered grasps in Fig.~\ref{fig:4f}. We note that 2-fingered grasps tend to shake the object more during trajectory execution than 3-fingered grasps. With 4-fingered grasps, the ring finger sometimes loses and regains contact, adding little benefit over 3-fingered grasps.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{4f_exp} \caption{Execution of in-grasp manipulation for four-fingered and two-fingered grasps. Frame ``O'' is the object pose and frame ``G'' is the goal pose. With the \emph{Banana}, the ring finger loses contact during execution at $t=0.4$s but makes contact again at $t=0.8$s and the object reaches the desired pose.} \label{fig:4f} \end{figure} \section{In-Grasp Manipulation Planning through Relaxed-Rigidity Constraints} \label{sec:prob_def} \begin{figure*} \centering \subfloat[\(t=0\)]{\includegraphics[width=0.38\textwidth]{tex_figs/approach_a}}\hspace{0.1\textwidth}\subfloat[\(t=T\)]{\includegraphics[width=0.38\textwidth]{tex_figs/approach_b}} \caption{Depiction of a trajectory optimization solution at the initial and final time steps. The thumb-tip frame is shown as frame \(i\), finger \(f\)'s tip frame is \(f\), and \(w\) defines the world frame. The initial pose of the object with the grasp is shown in (a); the object has to reach the goal pose (b). Our approach models the thumb frame as rigidly attached to the object during the trajectory, while finger \(f\) has a relaxed-rigidity constraint. The effect can be seen in (b), where the relative orientation and position between frames \(i\) and \(f\) have changed from the initial grasp at \(t=0\). The \(\prescript{i}{0}{P}_f\), \(\prescript{i}{T}{P}_f\), \(c_0\) and \(c_T\) terms from Sec.~\ref{sec:relaxed} are shown as green, blue, orange and red vectors respectively. \(\uptheta_{\text{i1-i3}},\uptheta_{\text{f1-f3}}\) are the joint angles of the thumb and finger~\(f\).} \label{fig:approach} \end{figure*} We define the problem of in-grasp manipulation planning as finding a trajectory of joint angles $\mathbf{\Theta}=[\Theta_1,\ldots,\Theta_T]$ that moves the object from its initial pose $X_0$ at time $0$ to a desired object pose $X_g$ at time $T$ without changing the fingertip contact points on the object. 
We address this problem under the following simple assumptions: \begin{enumerate} \item The object's pose can only be affected by the robot and gravity, i.e.~no external systems act on the object. \item The object is rigid. \item The initial grasp is a stable grasp of the object. \item The desired object pose is in the reachable workspace of the fingertips. \end{enumerate} We formulate our solution as a nonlinear, non-convex constrained kinematic trajectory optimization problem: \begin{flalign*} \min_{\mathbf{\Theta}}\hspace{5pt}&E_{obj}(\Theta_T, X_g)+k_1\sum\limits_{t=0}^{T-1}E_{obj}(\Theta_t,W_t)\\ &+k_2\sum\limits_{t=0}^{T}E_{pos}(\Theta_t)+k_3\sum\limits_{t=0}^{T}E_{or}(\Theta_t)\numberthis\label{eq:costs}\\ \text{s.t.}&\\ &\Theta_{min}\preceq \Theta_t \preceq\Theta_{max},\forall t\in[0,T] \numberthis\label{eq:position_limit}\\ &-\dot{\Theta}_{max}\preceq \frac{\Theta_{t-1}-\Theta_t}{\Delta t}\preceq \dot{\Theta}_{max},\forall t\in[1,T] \numberthis\label{eq:velcoity_limit} \end{flalign*} The first constraint enforces the joint limits of the robot hand, while the second inequality constraint limits the velocity of the joints to prevent rapid movements. The scalar weights, $k_1,k_2,k_3$, on each cost term allow us to tune the trade-off between the four cost components. $W_t$ defines the waypoint of the object pose at time $t$, computed automatically as described below. In order to achieve a purely kinematic formulation, we plan with a number of approximations, which we validate with experiments in Sec.~\ref{sec:results}. We now describe the components of the cost function in detail. \subsection{Object Pose Cost} \label{sec:obj_pose} The first term in the cost function~(Eq.~\ref{eq:costs}), \(E_{obj}(\Theta_T, X_g)\), is designed to minimize the Euclidean distance between the planned object pose at the final time step, \(X_T\), and the desired final object pose \(X_g\). 
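Before detailing the individual cost terms further, the overall structure of the optimization can be illustrated with a drastically simplified sketch: a planar two-link ``thumb'' with the object point rigidly attached to its tip, optimized with SciPy's SLSQP solver under joint limits and the velocity bound of Eq.~\ref{eq:velcoity_limit}. All link lengths, weights, and limits below are assumed illustrative values, and the relaxed-rigidity terms $E_{pos}$ and $E_{or}$ are omitted; this is a sketch of the problem structure, not the planner used in our experiments.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative planar 2-link "thumb"; all dimensions and weights are assumptions.
L1, L2 = 1.0, 1.0

def fk(q):
    """Fingertip position for joint angles q = [q1, q2]."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

T, d = 10, 2                 # time steps and joints
dt, vmax = 0.1, 2.0          # step duration and joint-velocity limit
q0 = np.array([0.3, 0.4])    # initial grasp configuration Theta_0 (held fixed)
x0 = fk(q0)                  # object "pose" ~ thumb-tip point (rigid attachment)
xg = np.array([0.8, 1.2])    # desired object pose X_g
W = [x0 + (t / T) * (xg - x0) for t in range(T + 1)]  # linear waypoints W_t

k1 = 0.01                    # low weight on the waypoint-interpolation term

def cost(flat):
    Th = np.vstack([q0, flat.reshape(T, d)])          # [Theta_0, ..., Theta_T]
    c = np.sum((fk(Th[-1]) - xg) ** 2)                # E_obj(Theta_T, X_g)
    c += k1 * sum(np.sum((fk(Th[t]) - W[t]) ** 2) for t in range(T))
    return c

def vel_ineq(flat):          # velocity limits expressed as g(x) >= 0 for SLSQP
    Th = np.vstack([q0, flat.reshape(T, d)])
    return (vmax - np.abs(np.diff(Th, axis=0)) / dt).ravel()

res = minimize(cost, np.tile(q0, T), method="SLSQP",
               bounds=[(-np.pi, np.pi)] * (T * d),
               constraints=[{"type": "ineq", "fun": vel_ineq}])
Theta = np.vstack([q0, res.x.reshape(T, d)])          # resulting joint trajectory
```

In the full planner the decision variables cover all fingers of the hand and the cost additionally contains the relaxed-rigidity terms described next.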
Our kinematic trajectory optimization approach assumes no knowledge of the dynamic properties of the object; as such, we cannot directly simulate the object pose \(X_t\) during our optimization. Instead, we leverage the fact that in-grasp manipulation assumes no breaking or making of contacts during execution, meaning, in the ideal case, the contact points between the robot and the object remain fixed. In our approach we thus plan as if the contact between the thumb-tip\footnote{The choice of thumb is arbitrary and made only to clarify the discussion. Any fingertip could be chosen to define the reference frame for the object.} and the object were rigid. This allows us to define a reference frame for the object $X$ with respect to the thumb-tip $i$ such that the transformation between the thumb-tip and the object remains fixed during execution. As the thumb-tip moves with respect to the world frame $w$, we compute the transform to the object frame as \[ \prescript{w}{}{T}_X =\prescript{w}{}{T}_i\cdot\prescript{i}{}{T}_X \numberthis \] where the superscript refers to the reference frame and the subscript to the target frame. The object's transformation matrix with reference to the thumb-tip $i$ is represented by $\prescript{i}{}{T}_X$. We can now transform the desired object pose $X_g$ into a desired thumb pose $G_i$ in the world frame $w$. The cost function \(E_{obj}\) can now be formally defined as \[ E_{obj}(\Theta_T,X_g)=||X_{g}\cdot\prescript{X}{}{T}_{i}-FK(\Theta_T,i)||^2_2 \numberthis\] where \(X_{g}\cdot\prescript{X}{}{T}_{i}\) and \(FK(\Theta_T,i)\) both give the pose of the thumb-tip $i$ with reference to the world frame. Thus, by using the forward kinematics (FK) internally within the cost function, we can directly solve for the joint angles of the thumb at the desired object pose. The second term, \(\sum\limits_{t=0}^{T-1}E_{obj}(\Theta_t,W_t)\), in the cost function~(Eq.~\ref{eq:costs}) encourages shorter paths to the desired pose. 
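The transform bookkeeping above can be checked numerically. In the NumPy sketch below (all poses are assumed example values), \(\prescript{i}{}{T}_X\) is fixed from the initial grasp and a desired object pose \(X_g\) is converted into the thumb-tip target \(G_i = X_g\cdot\prescript{X}{}{T}_i\); placing the thumb at \(G_i\) places the object exactly at \(X_g\).

```python
import numpy as np

def rotz(a):
    """Rotation matrix about the z axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def hom(R, p):
    """4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# Assumed example poses at grasp time (rotations about z plus translations).
w_T_i = hom(rotz(0.2), [0.10, 0.00, 0.30])   # thumb-tip pose in world frame w
w_T_X = hom(rotz(0.5), [0.12, 0.03, 0.28])   # object pose in world frame w

# Rigid thumb-object attachment: i_T_X is fixed from the initial grasp.
i_T_X = np.linalg.inv(w_T_i) @ w_T_X

# Desired object pose X_g in w maps to the thumb-tip target G_i = X_g * X_T_i.
X_g = hom(rotz(0.9), [0.15, 0.05, 0.35])
G_i = X_g @ np.linalg.inv(i_T_X)             # note X_T_i = inv(i_T_X)

# Sanity check: placing the thumb at G_i puts the object exactly at X_g.
assert np.allclose(G_i @ i_T_X, X_g)
```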
We define the waypoints $W_t$ for every time step $t$ to be linearly interpolated from the initial object pose to the desired object pose $X_g$, equally spaced across all time steps. We weight this term at a very low scale relative to the other cost terms to encourage a shorter path, since a linear path is not always feasible. \subsection{Relaxed-Rigidity Constraints} \label{sec:relaxed} Since most robotic fingers are under-actuated with respect to possible 6-DOF fingertip poses, we cannot apply the same rigid-contact constraint with respect to the object pose, as we did for the thumb, to all the remaining fingers. Doing so would reduce the reachable space for the remaining fingers, resulting in a smaller manipulable workspace for the object (the manipulable workspace being the workspace covering all possible object poses for a given grasp). Instead, we relax the rigid-contact constraint for all other fingers in the grasp, allowing for a larger manipulable workspace. The remaining two terms in the cost function~(Eq.~\ref{eq:costs}), \(E_{pos}\) and \(E_{or}\), define our novel relaxed-rigidity constraint. The combined effect of these terms encourages the fingertips to remain at the same contact points on the object throughout the trajectory. We define the cost term $E_{pos}(\Theta_t)$ to maintain the initial relative positions between the thumb, \(i\), and the remaining fingertips, \(f\), throughout execution: \[ E_{pos}(\Theta_t)=\sum\limits_{f=1}^{n}||\prescript{i}{0}{P}_{f}-\prescript{i}{}{T}_w\cdot FK_{P}(\Theta_t,f)||_2^2 \numberthis\] where \(\prescript{i}{}{T}_w\cdot FK_{P}(\Theta_t,f)\) defines the fingertip position for finger $f$ in the thumb frame $i$ at time $t$ and \(FK_{P}(\Theta_t, f)\) computes the position of fingertip \(f\) for joint configuration \(\Theta_t\). Combined with the object pose cost, which moves the thumb towards the goal pose, this cost minimizes deviation from the initial grasp while moving towards the goal pose. 
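A minimal numerical sketch of $E_{pos}$ follows (Python, with assumed fingertip positions; the thumb frame is simplified to a pure translation so that \(\prescript{i}{}{T}_w\) reduces to a subtraction). Note that the cost is invariant to translating the whole grasp, but penalizes any finger that moves relative to the thumb.

```python
import numpy as np

# Assumed fingertip positions at the initial grasp; purely illustrative numbers.
p_thumb_0 = np.array([0.0, 0.0, 0.10])
p_fingers_0 = [np.array([0.04, 0.0, 0.12]),
               np.array([-0.04, 0.0, 0.12])]

# i_0 P_f: relative fingertip positions fixed from the initial grasp. In this
# sketch the thumb frame is a pure translation, so the full transform i_T_w
# of the paper reduces to subtracting the thumb position.
rel_0 = [p - p_thumb_0 for p in p_fingers_0]

def e_pos(p_thumb_t, p_fingers_t):
    """Relaxed-rigidity position cost: sum of squared deviations of the
    current relative fingertip positions from the initial ones."""
    return sum(float(np.sum(((p - p_thumb_t) - r0) ** 2))
               for p, r0 in zip(p_fingers_t, rel_0))
```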
The last cost term, \(E_{or}(\Theta_t)\), encourages the other fingers to maintain the same relative orientation to the thumb as in the initial grasp. Maintaining this cost across all three orientation dimensions would again over-constrain the problem to the full rigidity constraint. We relax this constraint by introducing a weight vector $\psi$ which defines a relative preference for deviation in different orientation dimensions: \[E_{or}(\Theta_t)=\sum\limits_{f=1}^{n}||(FK^{i}_{RPY}(\Theta_t,f)-c^f_0)\cdot\psi||^2_2 \numberthis\] where \(FK^{i}_{RPY}(\Theta_t,f)\) computes the roll, pitch, and yaw of the unit vector between the thumb, \(i\), and finger $f$ at time $t$. Fig.~\ref{fig:approach} illustrates the vectors used in the relaxed-rigidity constraints. \section{Related Work} \label{sec:related_work} In-hand manipulation has been studied extensively~\citep{Li1989,bicchi-icra1995,fearing-ijra1986, Hartl1995}. The topic is often referred to as dexterous manipulation (e.g.~\cite{Han1998}) or fine manipulation (e.g.~\cite{Hong1990}). We choose the term in-hand manipulation to highlight the fact that the operations happen with respect to the hand and not the world or other parts of the robot. We believe that dexterity can be leveraged for a number of tasks which do not fundamentally deal with in-hand manipulation, and that a robot can finely manipulate objects without the need for multi-fingered hands or grasping. This section covers those methods that are most relevant to our approach and does not discuss in detail methods for finger gaiting (e.g.~\cite{Hong1990,rus-icra1992}) or dynamic in-hand manipulation~\citep{srinavasa-iros2005,Bai2014}. \cite{salisbury1982articulated} explore grasping of objects with different hand designs. \cite{salisbury1983kinematic} explore gripping forces on grasped objects with three-finger, three-joint hand designs. 
Their work on force control with tendon-driven articulated hands showed the need for dexterity near the end-effector for manipulation of grasped objects. \cite{Li1989} developed a computed torque controller for coordinated movement of multiple fingers on a robot hand. The controller takes as input a desired object motion and contact forces and outputs the set of finger torques necessary to create this change. The controller requires models of the object dynamics (mass and inertia matrix) in order to compute the necessary control commands. The authors demonstrate in simulation the controller's ability to make a planar object, grasped between two fingers, follow a desired trajectory. \cite{Hartl1995} accounts for forces on objects and force-to-joint-torque conversions to perform in-hand manipulation, handling slippage and rolling. An analytical treatment of dynamic object manipulation is explored, with ways of reducing the computations required. Han et al.~\citep{han-icra1997,Han1998} attempt in-hand manipulation with rolling contacts and finger gaiting, which requires knowledge of the object surface. They solve for Cartesian-space fingertip and object poses. Results for rolling contacts are demonstrated using flat fingertips to manipulate a spherical ball. The robot tracks the end-effector velocities using the manipulator Jacobian to determine the joint velocities. \cite{bicchi-icra1995} analyze the kinematics of rolling an object grasped between fingers. The authors present a planner for rolling a sphere between two large plates acting as fingers. This is achieved by constructing a state feedback law from vector flow fields. All these early methods require extensive details about the object, which are hard to obtain in the real world, making them inefficient when attempting to manipulate novel objects. In-hand manipulation research has since diverged in terms of approaches. \cite{Mordatch2012} formalize in-hand manipulation as an optimization problem. 
They solve for a task-space trajectory and obtain joint-space trajectories for the robot using an IK solver independent of their optimization. The trajectory optimization approach factors in force closure, but uses a joint-level position controller to perform the manipulation, assuming a perfect robot dynamics model to convert end-effector forces to positions. Experimental evaluation is shown only in simulation. Similar to our approach, \cite{Hertkorn2013a} seek to find a trajectory to a desired object pose without changing the grasp. Their approach additionally solves for an initial grasp configuration to perform the desired motion in space. They discretize the problem by creating configuration-space graphs for different costs and use a union of these graphs to choose a stable grasp. They perform an exhaustive search through this union of graphs to find the desired trajectory. As the authors state, their approach does not scale to even simple 3D problems with multi-fingered robot hands, and they show no real-world results. Their method is computationally inefficient: the reported time for finding a single feasible trajectory was 60 minutes for a two-fingered robot with two joints per finger. In contrast, we efficiently and directly solve for joint positions in the continuous domain using trajectory optimization. \cite{Andrews2013} take a hierarchical approach to in-hand manipulation by splitting the problem into three phases: approach, actuation, and release. Their method uses an evolutionary algorithm to optimize the individual motions. This requires many forward simulations of the full dynamical system and does not leverage gradient information in the optimization. Another drawback, as stated by the authors, is that their approach cannot be applied to objects with complex geometry. \cite{Hang2016} explore grasp adaptation to maintain a stable grasp and compensate for slippage or external disturbances with tactile feedback. 
They use the object's surface geometry to choose contact points for grasping and when performing finger gaiting to maintain a stable grasp. Their method could be used to obtain an initial stable grasp, which could then serve as input to our approach for moving the object to a desired pose. \cite{li2013integrating} use two KUKA arms to emulate in-hand manipulation with tactile and visual feedback to move objects to a desired pose. The use of flat contact surfaces limits the possible trajectories of the object. The use of a 7-joint manipulator as a finger also allows for reaching a larger workspace than common robotic hands, which are mostly limited to 4-joint fingers. The evaluation is limited to position experiments and single-axis orientation changes. \cite{scarcia2015local} perform in-grasp manipulation as a coordinated manipulation problem by adding arm motion planning. They assume a point contact with friction model between the object and the fingertips. They use enumeration to determine the reachability of an object pose. They do not perform extensive experiments with objects. \cite{Rojas2016} present a method to analyze the kinematic motion of a hand with respect to a grasped object. This tool could be used to find feasible goal poses for an object without changing the current grasp, similar to~\cite{Hertkorn2013a}. However, the authors are motivated by designing dexterous robot hands and do not perform any planning with their technique. \cite{kumar-icra2014} examine the use of model-predictive control for a number of tasks including in-hand manipulation. They rely on hand synergies and full models of the robot and object dynamics to compute their optimal controllers. However, they recently built on this approach~\citep{kumar-icra2016} and used machine learning to construct dynamics models for the object-hand system. These models could then be used to create a feedback controller to track a specific learned trajectory. 
They show results on a real robot hand with a high number of states and actuators. However, their method requires retraining when manipulating a new object or moving to a new goal pose. \cite{vanhoof-ichr2015-in-hand-rl} use reinforcement learning to learn a policy for rolling an object in an under-actuated hand. The resulting policy leverages tactile feedback to adapt to different objects; however, a new policy must be learned if the desired goal changes. Finally, while these learning-based methods show promise in rapidly converging to a desired controller, they still require multiple runs on the robot. In contrast, we perform in-grasp manipulation on a physical robot hand with novel objects, without requiring extensive object information or performing any iterative learning. \subsection{Effect of Joint Acceleration} \label{sec:smoothness-results} The inclusion of the joint acceleration cost term gives a smooth velocity profile for the object during in-grasp manipulation, as shown in Fig.~\ref{fig:obj_velocity}. Linear interpolation produces sudden jerks in the object trajectory if the goal pose is not reachable along the linear path, as seen in Fig.~\ref{fig:obj_velocity}. There was no significant difference in planning time and offline convergence errors between the two formulations. However, physical robot validation shows that the joint acceleration cost generates a lower maximum position error and a median position error similar to linear interpolation, as shown in Fig.~\ref{fig:fb_res}. The orientation error for \emph{Banana} sees a significant improvement with ``joint-acc'', as it prevented rolling of the object during manipulation. The \emph{Jello} object sees a significant reduction in position error, as the smooth path reduces the inertial effects of the powder moving inside the box. 
We infer the following from our validation: excluding the linear interpolation waypoint cost allows finding a smooth trajectory to the desired pose; the smooth acceleration reduces rolling of the object caused by rapid changes in object velocity; and objects with non-rigidly attached parts have lower error, as the smooth acceleration keeps inertial effects to a minimum. \subsection{Object Pose Feedback Controller} \label{sec:fb_results} We now show results for incorporating the object pose feedback controller on the original relaxed-rigidity planner with linear interpolation costs. Fig.~\ref{fig:fb_res} shows that the feedback controller drastically reduced the variance in the position and orientation error. We note that nontrivial noise on the object pose persists, caused by the RGB-D-based object tracker. This manifests as a lack of error reduction by the feedback controller once the error falls below 1~cm. Objects with an axis of symmetry, such as the \emph{Apple}, proved particularly difficult to track, since the particle filter was unable to find a unique pose. \subsection{Collision Avoidance} An interesting application of in-grasp manipulation is avoiding collisions in a cluttered environment by making small changes to the object pose. We set up two such experiments and used our collision avoidance extension to generate trajectories. Fig.~\ref{fig:c_env} shows our in-grasp planner avoiding collisions with the environment while reaching the desired pose. This shows the effectiveness of making small changes to the object pose to avoid obstacles in the environment which would otherwise require large motions with the arm. Adding the collision avoidance cost increased the planning time, as we compute the signed distance in every iteration between the grasped object and the environment, which we decompose into many convex obstacles. It took approximately 120 seconds to generate each collision-free plan.
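The truncated signed-distance cost used for these experiments can be sketched in a few lines. The following Python fragment uses assumed values for $\beta$ and $\alpha_2$, and spherical obstacles with sampled object points in place of the mesh-based signed distance of our implementation; it returns zero while the object keeps a clearance of at least $\beta$ from every obstacle and grows as the clearance shrinks.

```python
import numpy as np

beta, alpha2 = 0.02, 100.0   # clearance margin and penalty weight; assumed values

def sphere_sd(center, radius, p):
    """Signed distance from point p to a sphere obstacle (negative inside)."""
    return np.linalg.norm(p - center) - radius

def collision_cost(object_points, obstacles):
    """Truncated signed-distance penalty: zero while every sampled object point
    keeps at least beta clearance from each obstacle, growing as it shrinks."""
    c = 0.0
    for center, radius in obstacles:
        sd = min(sphere_sd(center, radius, p) for p in object_points)
        c += beta - min(beta, sd)
    return alpha2 * c
```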
\section{Introduction} \label{intro} In this paper we consider nonlinear equations of the form \begin{align} \F u = \fst, \label{eq:maineq} \end{align} where $ \F: \ix \supset \DF \to \ix $ is a nonlinear operator in a \bhsl{real separable Hilbert space} $ \ix $ with inner product $ \skp{\cdot}{\cdot}: \ix \times \ix \to \reza $, and $ \fst \in \R(F) = F(\DF) $. It is assumed that equation \refeq{maineq} is \bhsl{ill-posed} in one of the concepts considered in \cite{Hofmann_Plato[18]}, \ie it is unstably solvable at $ \fst $ or locally ill-posed at each solution of \refeq{maineq}; see also \cite{Bot_Hofmann[16]}. If not specified otherwise, throughout the paper we restrict the considerations to the following class of operators. \begin{definition} \label{th:mononote} The operator $ \F: \ix \supset \DF \to \ix $ is called \emph{monotone} on a set $ \Mset \subset \DF $ if \begin{align} \skp{\F\myu-\F\myv}{\myu- \myv} \ge 0 \foreach \;\myu, \myv \in \Mset. \label{eq:monotone} \end{align} \end{definition} In the following we assume that equation \refeq{maineq} has a solution $ \ust \in \Mset $. Moreover, we suppose that the \rhs of \refeq{maineq} is only approximately given as $\fdelta \in \ix$ satisfying \begin{align} \norm{ \fst - \fdelta } \le \delta, \label{eq:noisy_data} \end{align} where $ \delta \ge 0 $ is a given noise level. For the regularization of the considered equation \refeq{maineq} with noisy data as in \refeq{noisy_data}, \lavmet \begin{align} (\F + \para I) \uparadeltab = \fdelta, \label{eq:lavmet} \end{align} may be considered, where $ \para > 0 $ is a regularization parameter. 
Solvability of equation \refeq{lavmet} on $ \Mset $ is a critical issue and can only be guaranteed under additional assumptions on the operator $ \F $ and the set $ \Mset$, \eg \begin{myenumerate_indent} \item \label{it:hemi_M_H} $ \F $ is hemicontinuous and $ \DF = \Mset = \ix $, or \item \label{it:maxmonot} $ \F $ is maximal monotone on $ \Mset $, or \item \label{it:ball} $ \F $ is hemicontinuous, $ \Mset$ is a closed ball, centered at a solution of \refeq{maineq} and with sufficiently large radius, and $ \tfrac{\delta}{\para} $ is sufficiently small. \end{myenumerate_indent} For \ref{it:maxmonot} we refer e.g.~to Deimling~\cite[Theorem 12.5]{Deimling[85]} and note that \ref{it:hemi_M_H} is a special case of \ref{it:maxmonot} (cf.~e.g.~Showalter~\cite[p.~39]{Showalter[97]}). The case \ref{it:ball} is considered in Tautenhahn~\cite{Tautenhahn[02]}, with some clarification given by Neubauer~\cite{Neubauer[16]}. There exist examples, however, where none of these conditions \ref{it:hemi_M_H} -- \ref{it:ball} on $ \F $ and $ \Mset $ is necessarily satisfied. For other examples, maximal monotonicity in \ref{it:maxmonot} is hard to verify, e.g.~for operators on $ \ix = L^2(\Omega) $ with $ \Omega \subset \reza^n $, and $ \Mset \subset \{ f \in \ix \mid f \ge 0 \ \textup{a.e.} \}$. In such cases, a variational formulation (see formula \refeq{lavmet-vi} below) seems to be a reasonable alternative to \refeq{lavmet}. Proving this fact is one of the goals of the present paper. We conclude this section with some references on the regularizing properties of \refeq{lavmet}: see, \eg Alber and Ryazantseva~\cite{Alber_Ryazantseva[06]}, Bo\c{t} and Hofmann~\cite{Bot_Hofmann[16]}, Hofmann, Kaltenbacher and Resmerita~\cite{Hofmann_Kaltenbacher_Resmerita[16]}, Janno~\cite{Janno[00]}, Liu and Nashed~\cite{Liu_Nashed[96]}, Tautenhahn~\cite{Tautenhahn[02]}, as well as Mahale and Nair~\cite{Mahale_Nair[13]}. 
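The stabilizing effect of the term $ \para I $ can be illustrated with a one-dimensional toy computation. The Python sketch below (all numbers are illustrative assumptions) applies \refeq{lavmet} to the monotone operator $ F(u) = u^3 $ on the real line: since $ F + \para I $ is strictly increasing, the regularized equation has a unique solution for every $ \para > 0 $, and the effect of a data perturbation of size $ \delta $ on the solution stays within $ \delta/\para $, consistent with the stability estimate proved in the next section.

```python
# Toy 1D Lavrentiev regularization: F(u) = u^3 is monotone on R, so
# (F + alpha*I)u = f has a unique solution for every alpha > 0.
# All numbers below are illustrative assumptions.

def F(u):
    return u ** 3

def solve_scalar(g, lo=-10.0, hi=10.0, iters=200):
    """Bisection for the unique root of a strictly increasing function g."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lavrentiev(f, alpha):
    """Solve F(u) + alpha*u = f for the given right-hand side f."""
    return solve_scalar(lambda u: F(u) + alpha * u - f)

u_star = 0.7                       # "true" solution
f_exact = F(u_star)                # exact right-hand side
delta, alpha = 1e-3, 0.05          # noise level and regularization parameter

u_a = lavrentiev(f_exact, alpha)            # u_alpha (exact data)
u_ad = lavrentiev(f_exact + delta, alpha)   # u_alpha^delta (noisy data)
# Stability: |u_ad - u_a| <= delta / alpha.
```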
\section{\Vilavmet{} -- Basic notations} \label{varlavmet} We introduce the following assumptions and notations. \begin{assumption} \label{th:assump_1} Let $ \F: \ix \supset \DF \to \ix $ be a \bhsl{\demicont bounded operator} in the real separable Hilbert space $ \ix $ which is \bhsl{monotone} on a given \bhsl{closed convex subset} $ \Mset \subset \ix $, with $ \Mset \subset \DF $. In addition, let $ \fst, \fdel \in \ix $ satisfy the noise model \refeq{noisy_data}. Furthermore, we suppose that the equation $ \F \myu = \fst $ has a solution which belongs to $ \Mset $. \end{assumption} Throughout the present paper, we assume that Assumption \ref{th:assump_1} holds. Recall that, in the sense of Deimling~\cite[Definition 11.2]{Deimling[85]} and Showalter~\cite[p.~36]{Showalter[97]}, the operator $ F $ is \begin{mylist_indent} \item \emph{\demicont{}}, if for each $ \myx \in \DF $ and for each sequence $ (\xn) \subset \DF $ with $ \xn \to \myx $ as $ n \to \infty $, we have weak convergence $ \F \xn \rightharpoonup \F \myx $ as $ n \to \infty $, \item \emph{bounded}, if for each bounded set $ \Nset \subset \DF $, the set $ \F(\Nset) \subset \ix $ is bounded. \end{mylist_indent} Instead of \lavmet \refeq{lavmet}, in what follows we consider the \vilavmet \refeq{lavmet-vi}: let, for $ \para > 0$, $ \upardel \in \Mset $ satisfy \begin{align} \skp{\F \upardel + \alpha \upardel -\fdel}{\myv-\upardel} \ge 0 \foreach \myv \in \Mset. \label{eq:lavmet-vi} \end{align} For technical purposes, we use for $ \para > 0 $ the notation \begin{align} \upara = \uparzer \label{eq:upara} \end{align} for the noise-free case $\delta=0$, which means that the approximation obtained by the \vilavmet has been derived on the basis of exact data $ \fdelta = \fst $. The approach \refeq{lavmet-vi} can be considered a variational inequality formulation of \lavmet \refeq{lavmet}. 
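Concretely, for a one-dimensional toy problem the \vilavmet \refeq{lavmet-vi} can be solved by a projected fixed-point iteration, since the penalized operator is strongly monotone. The following Python sketch (the operator, step size, and all numbers are illustrative assumptions) uses $ \Mset = [0,\infty) $ with projection $ \max(\,\cdot\,,0) $; for data of the form $ -1 $ the constraint becomes active and the solution sits on the boundary of $ \Mset $.

```python
import math

# Toy monotone operator on H = R with constraint set M = [0, inf).
# The operator and all numbers are illustrative assumptions.
def F(u):
    return u + math.tanh(u)       # monotone and Lipschitz on R

def proj_M(u):
    return max(u, 0.0)            # convex projection onto M = [0, inf)

def solve_vi(f, alpha, mu=0.2, iters=500):
    """Fixed-point iteration u <- P_M(u - mu*(F(u) + alpha*u - f)); its fixed
    points are exactly the solutions of the penalized variational inequality.
    For this operator the map is a contraction, so the iteration converges."""
    u = 0.0
    for _ in range(iters):
        u = proj_M(u - mu * (F(u) + alpha * u - f))
    return u

alpha, delta = 0.1, 1e-3
f_exact = F(0.5)                  # data generated from the solution u* = 0.5
u_a = solve_vi(f_exact, alpha)            # exact data
u_ad = solve_vi(f_exact + delta, alpha)   # noisy data; |u_ad - u_a| <= delta/alpha
```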
A solution to the \varineq \refeq{lavmet-vi} with the penalized operator always exists on $ \Mset $ and depends stably on $ \fdelta $: \begin{theorem} \label{th:noise-amplific} Let Assumption \ref{th:assump_1} be satisfied. Then for each parameter $\para>0 $, the \vipert \refeq{lavmet-vi} has a unique solution $ \upardel \in \Mset $. In addition, the following stability estimate is satisfied, \begin{align} \norm{\upardel - \upara} \le \mfrac{\delta}{\para}, \label{eq:noise-amplific} \end{align} where $ \upara \in \Mset $ is given by \refeq{upara}. \end{theorem} \proof Consider, for $ \para > 0 $ fixed, the nonlinear operator $ \Fpara: \ix \supset \DF \to \ix$ which maps $ \myu \mapsto \F \myu + \para \myu $. Then we obviously have $ \skp{\Fpara \myu - \Fpara \myv}{\myu-\myv} \ge \para \normqua{\myu - \myv} $ for each $ \myu, \myv \in \Mset $, \ie the nonlinear operator $ \Fpara$ is strongly monotone on the set $ \Mset $. Existence thus follows, \eg from Showalter~\cite[proof of Theorem 2.3 in Chapter II]{Showalter[97]}. The mentioned proof in that reference may be applied, using the notations from there, with $ \mathcal{A} = \Fpara $ and $ v_0 = \ust $, where again $ \ust \in \Mset $ satisfies $ \F \ust = \fst $. Notice that the operator $ \mathcal{A} $ considered in \cite{Showalter[97]} is assumed to be pseudo-monotone on the whole Hilbert space. However, the proof given there carries over straightforwardly under the assumptions made in the present paper. We next verify estimate \refeq{noise-amplific}. For notational convenience we set \begin{align*} \mydiffdel = \upardel - \upara \foreach \para > 0. \end{align*} We have \begin{align*} \skp{\F \upara + \para \upara - \fst}{ \mydiffdel } \ge 0, \quad \skp{\F \upardel + \para \upardel - \fdelta}{ -\mydiffdel } \ge 0. 
\end{align*} Summing these two inequalities gives \begin{align*} 0 & \le \skp{\F \upara + \para \upara - \fst}{ \mydiffdel } -\skp{\F \upardel + \para \upardel - \fdelta}{ \mydiffdel } \\ & = -\skp{\F \upardel -\F \upara}{ \mydiffdel } -\para \skp{\mydiffdel}{\mydiffdel} + \skp{\fdelta -\fst}{ \mydiffdel } \le 0 -\para \normqua{\mydiffdel} + \delta \norm{\mydiffdel}, \end{align*} and the statement of the theorem follows by rearranging terms. \proofend \begin{remark} Existence results for \varineqs (either for similar, more general or more specific situations) may also be found in other papers and monographs. See, e.g.~Barbu and Precupanu~\cite[Theorem 2.67 and subsequent remark]{Barbu_Precupanu[12]} and Kinderlehrer and Stampacchia~\cite[Corollary 1.8 of Chapter III]{Kinderlehrer_Stampacchia[00]}. In Bakushinsky, Kokurin and Kokurin~\cite[Lemma 6.1.3]{Bakushinsky_Kokurin_Kokurin[18]} and Br\'{e}zis~\cite[Proposition 31]{Brezis[68]}, a simple proof is given for the special case that the operator $ \F $ satisfies a Lipschitz condition on the monotonicity set $ \Mset $. Using more refined arguments, it is possible to weaken the assumptions of Theorem \ref{th:noise-amplific} without changing the statement of the theorem: the condition ``separable'' on the Hilbert space $ \ix $ can in fact be removed, and the assumption ``\demicont, bounded'' on the operator $ \F $ can be replaced by the weaker property ``hemicontinuous''; \cf Browder~\cite{Browder[65]} or Br\'{e}zis~\cite[Theorem 24]{Brezis[68]}. \remarkend \end{remark} \bn Below we consider the overall regularization error $ \upardel - \ust $, where $ \ust \in \Mset $ denotes a classical or generalized solution of the equation $ \F \myu = \fst $, \cf \refeq{maineq} above or \refeq{vi-unpert} below. This overall error can be decomposed into the regularization error $ \upara - \ust $ and the noise amplification term $ \upardel - \upara $. 
The latter term has already been estimated in \refeq{noise-amplific}, and we thus have \begin{align} \norm{\upardel - \ust} \le \norm{\upara - \ust} + \mfrac{\delta}{\para} \;\foreach \;\para > 0. \label{eq:regularization-error} \end{align} Below we thus may focus on the estimation of the bias norm $ \norm{\upara - \ust} $. \section{Convergence of regularized solutions} \label{convergence} In this section, we consider strong convergence of the elements $ \upara $ generated by the \vilavmet \refeq{lavmet-vi} as $ \para \to 0 $. We continue to assume that the conditions stated in Assumption \ref{th:assump_1} are satisfied. \begin{comment} We start with a useful lemma. \begin{lemma} \label{th:lavmet_lemma_1} We have $ \normqua{\uparaa} \le \skp{\uparab}{\uparaa}\; $ for $\; 0 < \parab \le \paraa $. \end{lemma} \proof We have by definition \begin{align*} \skp{\F \uparab + \parab \uparab - \fst}{\uparaa - \uparab} \ge 0, \qquad \skp{\F \uparaa + \paraa \uparaa - \fst}{\uparab - \uparaa} \ge 0, \end{align*} and summation gives with $\skp{\F \uparab - \F \uparaa}{\uparab - \uparaa}\ge 0$ \begin{align*} 0 & \le \skp{\F \uparaa - \F \uparab + \paraa \uparaa - \parab \uparab}{\uparab - \uparaa} \\ &= \skp{\paraa \uparaa - \parab \uparab}{\uparab - \uparaa} - \skp{\F \uparab - \F \uparaa}{\uparab - \uparaa} \\ & \le \skp{\paraa \uparaa - \parab \uparab}{\uparab - \uparaa} = \kla{\paraa-\parab}\skp{\uparaa}{\uparab - \uparaa} - \parab \normqua{\uparab-\uparaa} \\ & \le \kla{\paraa-\parab}\skp{\uparaa}{\uparab - \uparaa} \end{align*} and thus $ \skp{\uparaa}{\uparab - \uparaa} \ge 0 $. \proofend \begin{corollary} \label{th:lavmet_cor_1} We have \begin{myenumerate_indent} \item $ \normqua{\uparaa- \uparab} \le \normqua{\uparab} - \normqua{\uparaa}\; $ for $ \;0 < \parab \le \paraa $. \item $ \norm{\uparaa} \le \norm{\uparab} \;$ for $ \;0 < \parab \le \paraa $, \ie the function $ \para \mapsto \norm{\upara} $ is non-increasing. 
\end{myenumerate_indent} \end{corollary} \proof \begin{myenumerate} \item This easily follows from Lemma \ref{th:lavmet_lemma_1}: \begin{align*} \normqua{\uparaa- \uparab} = \normqua{\uparaa} -2\skp{\uparaa}{\uparab} + \normqua{\uparab} \le \normqua{\uparab} - \normqua{\uparaa}. \end{align*} \item Follows immediately from part (a) of the present corollary. \proofend \end{myenumerate} \begin{corollary} \label{th:lavmet_cor_2} Let $ \vst \in \Mset $ be any solution of the \varineq \begin{align} \skp{\F \vst -\fst}{\myv-\vst} \ge 0 \foreach \myv \in \Mset. \label{eq:vi-unpert} \end{align} For each parameter $ \para > 0 $ we have \begin{align*} \normqua{\upara} \le \skp{\vst}{\upara}, \qquad \normqua{\upara- \vst} \le \normqua{\vst} - \normqua{\upara}, \qquad \norm{\upara} \le \norm{\vst}. \end{align*} \end{corollary} \proof The assertions of the corollary follow immediately from the proofs of Lemma \ref{th:lavmet_lemma_1} and Corollary \ref{th:lavmet_cor_1}, with $ \beta = 0 $ there, respectively. \proofend \end{comment} As a preparation, we consider the unperturbed, unpenalized version of the \vilavmet \refeq{lavmet-vi}, i.e.~the determination of an $ \vst \in \Mset $ which satisfies the \varineq \begin{align} \skp{\F \vst -\fst}{\myv-\vst} \ge 0 \foreach \myv \in \Mset. \label{eq:vi-unpert} \end{align} \begin{remark} \label{th:varineq-comments} \begin{myenumerate} \item We note that any classical solution of \refeq{maineq} obviously satisfies the \varineq \refeq{vi-unpert}, so the set of solutions of \refeq{vi-unpert} is by assumption non-empty. \item The \varineq \refeq{vi-unpert} is equivalent to $ \skp{\F \myv -\fst}{\myv-\vst} \ge 0 $ for each $ \myv \in \Mset $, \cf \eg Showalter~\cite[Corollary~2.4]{Showalter[97]} or Browder~\cite[Lemma 1]{Browder[65]}. This in particular means that the set of solutions satisfying the \varineq \refeq{vi-unpert} is closed and convex, and thus it has a unique element of minimal norm $ \ustst $. 
\item Let the operator $ F $ be strictly monotone on $ \Mset $, \ie in \refeq{monotone} we may replace ``$ \ge $'' by strict inequality ``$>$'' for each $ \myu, \myv \in \Mset $ with $ \myu \neq \myv $. Then \refeq{vi-unpert} and also \refeq{maineq} have at most one solution, respectively. \item Any element $ \vst \in \Mset $ solves the \varineq \refeq{vi-unpert} if and only if the identity $ \vst = \PM (\vst - \mu \kla{\F \vst-\fst}) $ holds for each $ \mu \ge 0 $, where $ \PM: \ix \to \ix $ denotes the convex projection onto the set $ \Mset $. This follows from a standard variational formulation for convex projections, see \eg Kinderlehrer and Stampacchia~\cite[Theorem 2.3 of Chapter I]{Kinderlehrer_Stampacchia[00]}. A similar statement holds for the \vilavmet \refeq{lavmet-vi}. \remarkend \end{myenumerate} \end{remark} \begin{comment} \begin{theorem} \label{th:convergence} \begin{myenumerate_indent} \item $ \lim_{\para \to 0} \norm{\upara} $ exists. \item We have $ \upara \to \ustst $ as $ \para \to 0 $, where $ \ustst \in \Mset $ denotes the minimum norm solution of the \varineq \refeq{vi-unpert}. \end{myenumerate_indent} \end{theorem} \proof \begin{myenumerate} \item It follows from the last estimate in Corollary \ref{th:lavmet_cor_2} that $ \norm{\upara} $ is bounded as $ \para \to 0 $. Convergence of $ \norm{\upara} $ as $ \para \to 0 $ now follows from the monotonicity of the function $\para \mapsto \|u_\para\|$, \cf (b) in Corollary \ref{th:lavmet_cor_1}. \item It follows from part (a) of this theorem and part (a) in Corollary \ref{th:lavmet_cor_1} that $ \upara $ is Cauchy as $ \para \to 0 $. Let $ \ustst \defeq \lim_{\para \to 0} \upara $. The set $ \Mset $ is by assumption closed, thus we have $ \ustst \in \Mset $. In addition, it follows from \refeq{lavmet-vi} and the \demiconty of $ F $ that this limit $ \ustst $ is a solution of the \varineq \refeq{vi-unpert}. 
Finally, the last estimate in Corollary \ref{th:lavmet_cor_2} implies $ \norm{\ustst} \le \norm{\ust} $ for any solution $ \ust \in \Mset $ of the \varineq \refeq{vi-unpert}. \proofend \end{myenumerate} \end{comment} \begin{theorem} \label{th:convergence} Let Assumption \ref{th:assump_1} be satisfied. We have $ \upara \to \ustst $ as $ \para \to 0 $, where $ \ustst \in \Mset $ denotes the minimum norm solution of the \varineq \refeq{vi-unpert}. \end{theorem} \proof This follows, \eg by combining the steps of the proof of Theorem 3 in Ryazantseva~\cite{Ryazantseva[76]}. \proofend \begin{remark} Convergence of the \vilavmet is in fact the subject of many research papers and monographs, see \eg Alber and Ryazantseva~\cite[Theorem 4.1.1]{Alber_Ryazantseva[06]}, Bakushinsky, Kokurin and Kokurin~\cite[Lemma 6.1.4]{Bakushinsky_Kokurin_Kokurin[18]}, Khan, Tammer and Zalinescu~\cite{Khan_Tammer_Zalinescu[15]}, Liu and Nashed~\cite{Liu_Nashed[98]}, and Ryazantseva~\cite{Ryazantseva[76]}, and the references therein. The literature frequently considers more general situations than the present paper does, \eg perturbations of the considered convex set $ \Mset $ in \refeq{vi-unpert}, or set-valued operators $ \F $ in Banach spaces. On the other hand, the assumptions made in Theorem \ref{th:convergence} are weaker in some aspects. For example, we allow the monotonicity set in \refeq{monotone} to be a nontrivial subset of $\ix $, with a possibly empty interior, and in addition no Lipschitz continuity of the operator $ \F $ is required in Theorem \ref{th:convergence}. \remarkend \end{remark} \bn As an immediate consequence of Theorem \ref{th:convergence} and estimate \refeq{regularization-error}, we obtain the following result. \begin{corollary} Let Assumption \ref{th:assump_1} be satisfied.
For any a priori parameter choice $ \para = \para(\delta) $ with $ \para(\delta) \to 0 $ and $ \tfrac{\delta}{\para(\delta)} \to 0 $ as $ \delta \to 0 $, we have \begin{align} \upardeldel \to \ustst \quad \textup{as } \ \delta \to 0, \end{align} where $ \ustst $ is as in Theorem \ref{th:convergence}. \end{corollary} \section{Convergence rates for regularized solutions} \label{convergence_rates} In this section, we provide convergence rates of $ \upara $ as $ \para \to 0 $ under adjoint source conditions. We continue to assume that the conditions stated in Assumption \ref{th:assump_1} are satisfied. In addition, the following class of operators will be of importance, \cf Bauschke and Combettes~\cite[Definition 4.4]{Bauschke_Combettes[85]}. \begin{definition} \label{th:cocoercive} An operator $ \F: \ix \supset \DF \to \ix $ in a Hilbert space $ \ix $ is called \emph{\cocoercive} on a subset $ \Mset \subset \DF $ if, for some constant $ \tau > 0 $, we have \begin{align} \skp{\F \myu - \F \myv}{\myu-\myv} \ge \tau \normqua{\F \myu - \F \myv} \foreach \myu, \myv \in \Mset. \label{eq:cocoercive} \end{align} \end{definition} A \cocoercive operator is sometimes called inverse strongly monotone. For $ \tau > 0 $ fixed, an operator $ \F $ is \cocoercive on $ \Mset $ with constant $ \tau $ if and only if $ I - \mu F $ is nonexpansive for each $ 0 \le \mu \le 2\tau $. Cocoerciveness obviously implies monotonicity. An example of a \cocoercive operator may be found in Liu and Nashed~\cite[Example 3]{Liu_Nashed[98]}. Another example is given in section~\ref{paraesti} of the present paper. Below, we frequently make use of the following Lipschitz condition. \begin{assumption} \label{th:assump_2} Let $ \DF \subset \ix $ be an open subset, and let $ \F $ be \frechet differentiable on $ \DF $.
In addition, let the following Lipschitz condition be satisfied on a given subset $ \Mset \subset \DF $, \begin{align} \norm{\prim{\F}(\myu)-\prim{\F}(\myv)} \le L \norm{\myu-\myv} \foreach \myu, \myv \in \Mset, \label{eq:lipschitz-prime} \end{align} where $ L \ge 0 $ denotes some finite constant. \end{assumption} The following proposition provides a useful tool for the verification of cocoerciveness of a nonlinear operator. \begin{proposition} \label{th:coco-diffable} Let Assumptions \ref{th:assump_1} and \ref{th:assump_2} be satisfied. Let $ \prim{F}(\myu) $ be \cocoercive on $ \ix $, uniformly for $ \myu \in \Mset $, \ie there exists some constant $ \tau > 0 $ such that for each $ \myu \in \Mset $ \begin{align} \skp{\prim{F}(\myu)h}{h} \ge \tau \normqua{\prim{F}(\myu)h} \quad \forall h \in \ix, \label{eq:coco-diffable} \end{align} holds. Then $ \F $ is \cocoercive on $ \Mset $, with constant $ \tau $. \end{proposition} \proof Let $ \myu \in \Mset $ and $ h \in \ix $ with $ \myu + h \in \Mset $; by convexity of $ \Mset $ we then have $ \myu + th \in \Mset $ for each $ 0 \le t \le 1 $. \frechet differentiability gives $ \F(\myu+h) - \F(\myu) = \inttxt{0}{1}{ \prim{\F}(\myu+th)h}{dt} $, and uniform cocoerciveness of $ \prim{F} $, combined with the Cauchy--Schwarz inequality on $ [0,1] $, thus yields \begin{align*} & \skp{\F(\myu+h) - \F(\myu)}{h} = \ints{0}{1}{ \skp{\prim{\F}(\myu+th)h}{h}}{dt} \ge \tau \ints{0}{1}{ \normqua{\prim{\F}(\myu+th)h}}{dt} \\ & \quad \ge \tau \klabi{\ints{0}{1}{ \norm{\prim{\F}(\myu+th)h}}{dt}}^2 \ge \tau \normqua{\ints{0}{1}{ \prim{\F}(\myu+th)h}{dt}} = \tau \normqua{\F(\myu+h) - \F(\myu)}. \proofend \end{align*} \begin{remark} \label{th:monotone-diffable} \begin{myenumerate} \item If $ \prim{F}(\myu) $ is a monotone operator on $ \ix $ for each $ \myu \in \Mset $, then $ F $ is monotone on $ \Mset $. This immediately follows from the proof of Proposition \ref{th:coco-diffable} by considering the case $ \tau = 0 $ there.
\item It is evident from the proof of Proposition \ref{th:coco-diffable} that in \refeq{coco-diffable}, ``$ \forall h \in \ix $'' can be replaced by the weaker condition ``$ \forall h \in \ix $ satisfying $ \myu + t h \in \Mset $ for $ t > 0 $ sufficiently small'', without changing the statement of the proposition. One can show that this in fact yields an equivalent condition for \cocoerciveness. \remarkend \end{myenumerate} \end{remark} For ill-posed problems, convergence rates can only be obtained under additional conditions on the solution. In this section we assume that there exists a solution of equation \refeq{maineq} which belongs to $ \Mset $ and satisfies an adjoint source condition, \ie \begin{align} \ust \in \Mset, \quad \F \ust = \fst, \quad \ust = \Fprime(\ust)^* z, \quad \norm{z} =: \varrho, \label{eq:adjoint-source-condition} \end{align} for some $ z \in \ix $. This completes the formulation of the basic assumptions needed in this section. For the proof of the main result of this section, \cf Theorem \ref{th:epara-speed} below, we need the following lemma. For any element $ \myu \in \Mset $ consider \begin{align} \eparau \defeq \eparau(\myu) = \upara - \myu, \quad \rpara = \F \upara - \fst, \quad \epara = \eparau(\ust) \for \para > 0, \label{eq:difference-notations} \end{align} where $ \upara \in \Mset $ is introduced in \refeq{upara}. \begin{lemma} \label{th:vi-lemma} Let Assumption \ref{th:assump_1} be satisfied. For any $ \myu \in \Mset $ we have, with the notations from \refeq{difference-notations}, \begin{align} \skp{\rpara}{\eparau} + \para \normqua{\eparau} \le - \para \skp{\myu}{\eparau} \for \para > 0. \label{eq:vi-lemma} \end{align} \end{lemma} \proof We consider \refeq{lavmet-vi} with $ \delta = 0 $, which means $ \fdel = \fst $ in fact: \begin{align*} \skp{\F \upara - \fst + \para \upara}{\upara -\myu} = \skp{\rpara + \para \upara}{\eparau} = \skp{\rpara}{\eparau} + \para \skp{\upara}{\eparau} \le 0. 
\end{align*} From this we obtain \begin{align*} \skp{\rpara}{\eparau} + \para \normqua{\eparau} = \skp{\rpara}{\eparau} + \para \skp{\upara}{\eparau} -\para \skp{\myu}{\eparau} \le - \para \skp{\myu}{\eparau}, \end{align*} which is \refeq{vi-lemma}. \proofend \bn We are now in a position to formulate the main result of this section. \begin{theorem} \label{th:epara-speed} Let Assumptions \ref{th:assump_1} and \ref{th:assump_2} be fulfilled. If $ \F $ is \cocoercive on $ \Mset $, and if in addition the adjoint source condition \refeq{adjoint-source-condition} is satisfied with $ \varrho L < 2 $, then \begin{align} \norm{ \upara - \ust } &= \Landauno{\para^{1/2}}, \qquad \norm{\F \upara - \fst} = \Landauno{\para} \as \para \to 0. \label{eq:epara-rpara-speed} \end{align} \end{theorem} \proof We proceed with \refeq{vi-lemma} for $ \myu = \ust $. From \refeq{adjoint-source-condition} we obtain, with the notations introduced in \refeq{difference-notations}, \begin{align} -\skp{\ust}{\epara} = -\skp{\Fprime(\ust)^* z}{\epara} = -\skp{z}{\Fprime(\ust) \epara} \le \varrho \norm{\Fprime(\ust) \epara}. \label{eq:epara-speed-a} \end{align} For a further estimation of \refeq{epara-speed-a}, we need to consider the first order remainder $ \R = \R_{\ust} $ of a Taylor expansion at $ \ust \in \DF $: \begin{align*} \R(\myu) &= \F(\myu)- \F(\ust) - \prim{\F}(\ust)\kla{\myu-\ust}, \quad \myu \in \DF. \end{align*} For $ h \in \ix $ such that the line segment from $ \ust $ to $ \ust + h $ belongs to $ \DF $, we have $ \R(\ust+h) = \inttxt{0}{1}{ \kla{ \prim{\F}(\ust+th)-\prim{\F}(\ust)}h}{dt} $ and thus $ \norm{\R(\ust+h)} \le \tfrac{L}{2} \normqua{h} $. This gives $ \Fprime(\ust) \epara = \F(\upara) - \F(\ust) - \R(\upara) = \rpara - \R(\upara) $ with $ \norm{\R(\upara)} \le \tfrac{L}{2} \normqua{\epara} $. 
We are now in a position to proceed with the upper bound in \refeq{epara-speed-a}: \begin{align} \norm{\Fprime(\ust) \epara } \le \norm{\rpara} + \norm{\R(\upara)} \le \norm{\rpara} + \mfrac{L}{2} \normqua{\epara}. \label{eq:epara-speed-b} \end{align} The estimates \refeq{vi-lemma} for $ \myu = \ust $ and \refeq{epara-speed-a} -- \refeq{epara-speed-b} finally give \begin{align*} \skp{\rpara}{\epara} + \para \normqua{\epara} \le - \para \skp{\ust}{\epara} \le \varrho \para \norm{\Fprime(\ust) \epara} \le \varrho \para \klabi{\norm{\rpara} + \mfrac{L}{2} \normqua{\epara}}, \end{align*} and thus \begin{align} \skp{\rpara}{\epara} + \para \klabi{1- \tfrac{\varrho L}{2}} \normqua{\epara} \le \varrho \para \norm{\rpara}. \label{eq:epara-speed-c} \end{align} This in particular means $ \skp{\rpara}{\epara} \le \varrho \para \norm{\rpara} $, while \cocoerciveness, \cf \refeq{cocoercive}, gives $ \skp{\rpara}{\epara} \ge \tau \normqua{\rpara} $. We thus obtain \begin{align} \tau \norm{\rpara} \le \varrho \para, \label{eq:epara-speed-d} \end{align} \ie $ \norm{\rpara} = \Landauno{\para} $ as $ \para \to 0 $. From \refeq{epara-speed-c} and \refeq{epara-speed-d} we finally obtain \begin{align*} \tau \klabi{1- \tfrac{\varrho L}{2}} \normqua{\epara} \le \tau \varrho \norm{\rpara} \le \varrho^2 \para, \end{align*} which yields the first statement in \refeq{epara-rpara-speed}. \proofend \begin{remark} \begin{myenumerate} \item From Theorem \ref{th:epara-speed} and Theorem \ref{th:convergence} it follows that any $ \ust $ satisfying the conditions in \refeq{adjoint-source-condition} is the minimum norm solution of the \varineq \refeq{vi-unpert}. \item Theorem \ref{th:epara-speed} improves results in Liu and Nashed~\cite[Theorem 6]{Liu_Nashed[98]}, where only the rate $ \norm{\eparalong} = \Landauno{\para^{1/3}} $ as $ \para \to 0 $ is obtained (under more general assumptions, however, \eg allowing set perturbations).
\item The first error estimate in Theorem \ref{th:epara-speed} remains valid if in \refeq{adjoint-source-condition}, the identity $ \F \ust = \fst $ is replaced by the weaker assumption that $ \ust \in \Mset $ satisfies the \varineq \refeq{vi-unpert}. In the proof of Theorem \ref{th:epara-speed}, then one only has to make additional use of the fact that the inequality $ \skp{\F \upara - \F \ust }{\eparalong} \le \skp{\F \upara - \fst }{\eparalong} $ holds. The second error estimate in Theorem~\ref{th:epara-speed} has to be replaced by $ \norm{\F \upara - F \ust} = \Landauno{\para} $ then. \item Using some ideas of Tautenhahn~\cite{Tautenhahn[02]} and Janno~\cite{Janno[00]}, one may obtain convergence rates for source conditions of the form $ \ust = \Fprime(\ust) z $, \ie the adjoint source condition is replaced by a classical one. This topic, however, goes beyond the scope of the present study and will be considered elsewhere. \item For recent results on adjoint source conditions for linear problems, see Plato, Hofmann, and Math\'{e}~\cite{Plato_Hofmann_Mathe[16]}. \remarkend \end{myenumerate} \end{remark} \begin{corollary} \label{th:apriori} Under the conditions of Theorem \ref{th:epara-speed} we have, for any a priori parameter choice $\paradelta \sim \delta^{2/3}$, the convergence rate result \begin{align} \norm{u_{\paradelta}^\delta - \ust} = \Landau(\delta^{1/3}) \quad \textup{ as }\;\; \delta \to 0. \label{eq:apriori} \end{align} \end{corollary} \begin{remark} \begin{myenumerate} \item The rate \refeq{apriori} is identical with rates obtained in \cite[Theorem 3, Remark 4]{Hofmann_Kaltenbacher_Resmerita[16]} for \lavmet \refeq{lavmet} with variational source conditions. \item The rate of convergence in \refeq{apriori} is higher than those obtained by Liu and Nashed~\cite{Liu_Nashed[98]}, Thuy~\cite{Thuy[11]}, and Buong~\cite{Buong[05]} for the \vilavmet under similar source conditions. 
Note that, on the other hand, the results in those papers are established in more general frameworks, \eg in Banach spaces or allowing set perturbations, and for a~posteriori parameter choice strategies. \remarkend \end{myenumerate} \end{remark} \section{Modified \vilavmet} \label{lavmet-translate} Occasionally it may be useful to consider a modified version of the \vilavmet \refeq{lavmet-vi}. For this purpose let $ \myubar \in \ix $ be fixed. For $ \para > 0 $ let $ \upardel \in \Mset $ satisfy \begin{align} \skp{\F \upardel + \para( \upardel - \myubar) -\fdel}{\myv-\upardel} \ge 0 \foreach \myv \in \Mset. \label{eq:lavmet-vi-translate} \end{align} We denote by $ \upara = \uparzer $ the approximation obtained by the modified \vilavmet \refeq{lavmet-vi-translate} with exact data $ \fdelta = \fst $. Method \refeq{lavmet-vi-translate} can be considered as a variational inequality formulation of the translated \lavmet $ \F \uparadeltab + \para (\uparadeltab - \myubar) = \fdelta $. The results of sections \ref{varlavmet}--\ref{convergence_rates} can easily be applied to the modified \vilavmet by a translation: replace the operator $ \F $ and the monotonicity set $ \Mset $ there by \begin{align*} \Ftil: \ix \supset - \myubar + \DF \to \ix, \ v \mapsto \F(\myubar + v), \quad \Mtil = - \myubar + \Mset, \end{align*} respectively. We briefly formulate the relevant results under the general assumption that the conditions stated in Assumption \ref{th:assump_1} are satisfied. \begin{myenumerate_indent} \item The modified \vilavmet \refeq{lavmet-vi-translate} has a unique solution $ \upardel \in \Mset $ which satisfies $ \norm{\upardel - \upara} \le \tfrac{\delta}{\para} $ for each $ \para > 0 $. \item We have $ \upara \to \ustst $ as $ \para \to 0 $, where $ \ustst \in \Mset $ denotes the solution of the \varineq \refeq{vi-unpert} having minimal distance to $ \myubar $.
In addition, for any a priori parameter choice $ \para = \paradelta $ with $ \paradelta \to 0 $ and $ \tfrac{\delta}{\paradelta} \to 0 $ as $ \delta \to 0 $, we have $ \upardeldel \to \ustst $ as $ \delta \to 0 $. \item If Assumption \ref{th:assump_2} is fulfilled and $ \F $ is \cocoercive on $ \Mset $, and if in addition the adjoint source condition \begin{align} \ust \in \Mset, \quad \F \ust = \fst, \quad \ust - \myubar = \Fprime(\ust)^* z, \quad \varrho \defeq \norm{z}, \label{eq:adjoint-source-cond-mod} \end{align} is satisfied with some $ z \in \ix $ and $ \varrho L < 2 $, then \begin{align*} \norm{\upara - \ust} = \Landauno{\para^{1/2}} \ \textup{ as } \para \to 0, \qquad \norm{\upardeldel - \ust} = \Landau(\delta^{1/3}) \ \textup{ as } \delta \to 0, \end{align*} for any a priori parameter choice $\paradelta \sim~\delta^{2/3}$. \end{myenumerate_indent} An appropriate choice of $ \myubar $ guarantees that $ \ust - \myubar $ belongs to the range of $ \Fprime(\ust)^* $, which typically requires, besides sufficient smoothness, that appropriate conditions on a subset of the boundary of the domain of definition $ \DF $ are satisfied. \section{An example, and numerical illustrations} \subsection{A parameter estimation problem} \label{paraesti} We consider the estimation of the coefficient $ u \in L^2(0,1) $ in the following initial value problem: \begin{align*} \prim{f} + u f = 0 \; \text{\ a.e.~on } [0,1], \quad f(0) = -\czer < 0, \end{align*} where $ f \in H^1(0,1) $; cf.~Groetsch~\cite{Groetsch[93]}, Hofmann~\cite{Hofmann[99]}, or Tautenhahn~\cite{Tautenhahn[02]}. The initial value $ -\czer $ with $ \czer > 0 $ is assumed to be known exactly. This problem can be written as $ F u = f $, with \begin{align} (F u)(\myt) \defeq -\czer e^{-U(\myt)}, \quad U(\myt) = \ints{0}{\myt}{ u(\mys) } { d\mys }, \quad 0 \le \myt \le 1. 
\label{eq:fex} \end{align} The operator $ F : L^2(0,1) \to L^2(0,1) $ is bounded and \frechet differentiable on $ L^2(0,1) $, with \frechet derivative \begin{align} [\prim{\F}(u)h](t) = - (\F u)(t) H(t) \for h \in L^2(0,1), \ H(\myt) = \ints{0}{\myt}{ h(\mys) } { d\mys }, \ 0 \le \myt \le 1. \label{eq:fex-prime} \end{align} Let \begin{align} \Dsetc = \inset{ u \in L^2(0,1) \mid u \ge \mykapa\; \text{\ a.e.~on}\;[0,1] }, \label{eq:Muzer} \end{align} where $ \mykapa \in \reza $. \begin{proposition} \label{th:Fu-ex-frechet-mono-coco} The operator $ \F $ in \refeq{fex} is monotone on $ \Dsetc[0] $. For any $ \mykapa > 0 $, it is \cocoercive on $ \Dsetc $, with constant $ \tau = \tfrac{\mykapa}{2\czer} $. \end{proposition} \proof We shall make use of Proposition \ref{th:coco-diffable} and Remark \ref{th:monotone-diffable}. Let $ \myu \in L^2(0,1), \fex \defeq -Fu $, and $ h \in L^2(0,1) $. From \refeq{fex-prime} it follows that \begin{align*} \skp{\prim{F}(u)h}{h} & = \ints{0}{1}{ (\fex H) \prim{H} } { d\myt } = \fex H^2 \vert_0^1 - \ints{0}{1}{ \prim{(\fex H)} H}{ d\myt } \ge - \ints{0}{1}{ \prim{\fex} H^2 }{ d\myt } -\skp{\prim{F}(u)h}{h}, \end{align*} and thus \begin{align} 2 \skp{\prim{F}(u)h}{h} \ge - \ints{0}{1}{ \prim{\fex} H^2 }{ d\myt } = \ints{0}{1}{ \fex \myu H^2 }{ d\myt }, \label{eq:Fu-ex-monotone-a} \end{align} where the properties $ \fex(1) H^2(1) \ge 0 $ and $ H(0) = 0 $ have been used. Estimate \refeq{Fu-ex-monotone-a} implies for each $ \myu \in \Dsetc[0] $ that $ \skp{\prim{F}(u)h}{h} \ge 0 $ for each $ h \in \ix $, and the monotonicity statement for $ \F $ immediately follows from Remark \ref{th:monotone-diffable}. Now let $ \mykapa > 0 $ be fixed. 
For any $ \myu \in \Dsetc $ we proceed with \refeq{Fu-ex-monotone-a}: \begin{align*} 2 \skp{\prim{F}(u)h}{h} \ge \mykapa \ints{0}{1}{ \fex H^2 }{ d\myt } \ge \tfrac{\mykapa}{\czer} \ints{0}{1}{ (\fex H)^2 }{ d\myt } = \tfrac{\mykapa}{\czer} \ints{0}{1}{ (\prim{F}(u)h)^2 }{ d\myt } = \tfrac{\mykapa}{\czer} \normqua{\prim{F}(u)h}, \end{align*} where the estimate $ \fex \le \czer $ has been applied. The cocoerciveness statement for $ \F $ now follows from Proposition \ref{th:coco-diffable}. \proofend \subsection{Numerical experiments} The theoretical results are finally illustrated by some numerical experiments for the operator $ F : L^2(0,1) \to L^2(0,1) $ considered in \refeq{fex}, with $ \czer = 1 $ there. We give a few preparatory notes on the numerical tests first. \begin{mylist} \item In each of our numerical experiments we choose a convex closed subset $ \Mset = \Dsetc $ of the form \refeq{Muzer} with some lower bound $ \mykapa > 0 $. The setting \refeq{Muzer} guarantees \cocoerciveness (cf.~Proposition~\ref{th:Fu-ex-frechet-mono-coco}), and Lipschitz continuity \refeq{lipschitz-prime} of the operator $ \Fprime $ on $ \Mset $ holds with $ L = \czer = 1 $. We consider some $ \ust \in H^1(0,1) $ with $ \ust \in \Mset $, and then the adjoint source condition \refeq{adjoint-source-cond-mod} is satisfied for $ \myubar \equiv \ust(1) $. The solution $ \ust $ and the set $ \Mset $ are always chosen in such a way that the condition $ \varrho L < 2 $ is satisfied, \cf \refeq{adjoint-source-cond-mod} and the subsequent conclusion there. \item We consider the a priori parameter choice $ \paradelta = \delta^{2/3} $, for different values of $ \delta $. 
\item The modified \vilavmet \refeq{lavmet-vi-translate} is approximately solved by using a fixed point iteration for the corresponding fixed point equation \begin{align*} \uparadelta = \PM (\uparadelta- \mu \kla{\F \uparadelta + \para (\uparadelta - \myubar) -\fdelta}), \with \para = \paradelta, \end{align*} and the initial guess is the function $ \myubar $. Notice that the underlying fixed point operator is contractive, with contraction constant $ 1-\mu \para $, provided that the step size satisfies $ 0 < \mu < 2\tau = \mykapa $, \cf the remarks following Definition \ref{th:cocoercive}, and Proposition \ref{th:coco-diffable}. In addition, the regularization parameter must satisfy $ 0 < \para \le \tfrac{1}{\mu} - \tfrac{1}{\mykapa} $. In our numerical experiments we always choose $ \mu = \tfrac{\mykapa}{2} $. The iteration is stopped as soon as the norm difference of two consecutive iterates satisfies an estimate of the form $ \le c \delta $, with some constant $ c > 0 $. This stopping criterion ensures that the resulting approximation $ \upardeldeltil \in \Mset $ satisfies $ \norm{\upardeldeltil - \upardeldel} = \Landau(\delta^{1/3}) $, which is of sufficient accuracy. \item The problem is discretized using a backward rectangular rule for the integrals, and replacing each considered (continuous) function $ \psi:[0,1] \to \reza $ by $ (\psi(nh))_{n=0,\ldots, N} $, with step size $ h = \tfrac{1}{N} $ for $ N = 200 $. This leads to a fully discretized nonlinear problem in $ \reza^{N+1} $. \item In the numerical experiments we consider perturbations of the form $ f_n^\delta = f(nh) + \Delta_n, \ n = 0,1,\ldots, N $, with uniformly distributed random values $ \Delta_n $ satisfying $ \modul{\Delta_n} \le \delta $.
\end{mylist} \begin{example} \label{th:example1} We first consider the equation $ \F u = \fst $, with \rhs \begin{align*} \fst(\myt) = -\exp(-\tfrac{\myalpha}{2} \myt^2 -\mybeta \myt) \for 0 \le \myt \le 1, \end{align*} with $ \myalpha = \mybeta = \tfrac{1}{2} $. The exact solution is then given by \begin{align*} \ust(\myt) = \myalpha \myt + \mybeta \for 0 \le \myt \le 1. \end{align*} We may consider the set $ \Mset = \Dsetc $ in \refeq{Muzer} with $ \mykapa = \mybeta $. The numerical results are given in Table \ref{tab:num1}. \begin{table}[h] \hfill \begin{tabular}{|| c | c |@{\hspace{5mm} } c | c ||} \hline \hline $ \delta $ & $ 100 \myast \delta/\norm{f} $ & $ \ \norm{\upardeldel - \ust} $ & $ \ \norm{\upardeldel - \ust} \ / \delta^{1/3} \ $ \\ \hline \hline $1.0 \myast 10^{-2}$ & $1.33 \myast 10^{0}$ & $9.87 \myast 10^{-2}$ & $0.46$ \\ $5.0 \myast 10^{-3}$ & $6.66 \myast 10^{-1}$ & $8.23 \myast 10^{-2}$ & $0.48$ \\ $2.5 \myast 10^{-3}$ & $3.33 \myast 10^{-1}$ & $6.72 \myast 10^{-2}$ & $0.50$ \\ $1.2 \myast 10^{-3}$ & $1.67 \myast 10^{-1}$ & $5.42 \myast 10^{-2}$ & $0.50$ \\ $6.2 \myast 10^{-4}$ & $8.33 \myast 10^{-2}$ & $4.17 \myast 10^{-2}$ & $0.49$ \\ $3.1 \myast 10^{-4}$ & $4.16 \myast 10^{-2}$ & $3.26 \myast 10^{-2}$ & $0.48$ \\ $1.6 \myast 10^{-4}$ & $2.08 \myast 10^{-2}$ & $3.26 \myast 10^{-2}$ & $0.61$ \\ $7.8 \myast 10^{-5}$ & $1.04 \myast 10^{-2}$ & $2.72 \myast 10^{-2}$ & $0.64$ \\ $3.9 \myast 10^{-5}$ & $5.21 \myast 10^{-3}$ & $2.53 \myast 10^{-2}$ & $0.75$ \\ \hline \hline \end{tabular} \hfill \mbox{} \caption{Numerical results for Example \ref{th:example1}} \label{tab:num1} \end{table} \remarkend \end{example} \begin{example} \label{th:example2} We next consider the equation $ \F u = \fst $ with \rhs \begin{align*} \fst(\myt) = -\exp(\tfrac{\myalpha}{\pi} (\cos \pi \myt -1) - \mybeta \myt) \for 0 \le \myt \le 1, \end{align*} with $ \myalpha = \tfrac{1}{4}, \ \mybeta = \tfrac{1}{3} $. 
The exact solution is then given by \begin{align*} \ust(\myt) = \myalpha \sin \pi \myt + \mybeta \for 0 \le \myt \le 1. \end{align*} We may consider the set $ \Mset = \Dsetc $ in \refeq{Muzer} with $ \mykapa = \mybeta $. The numerical results are shown in Table \ref{tab:num2}. \begin{table}[h] \hfill \begin{tabular}{|| c | c |@{\hspace{5mm} } c | c ||} \hline \hline $ \delta $ & $ 100 \myast \delta/\norm{f} $ & $ \ \norm{\upardeldel - \ust} $ & $ \ \norm{\upardeldel - \ust} \ / \delta^{1/3} \ $ \\ \hline \hline $1.0 \myast 10^{-2}$ & $1.25 \myast 10^{0}$ & $7.00 \myast 10^{-2}$ & $0.32$ \\ $5.0 \myast 10^{-3}$ & $6.25 \myast 10^{-1}$ & $4.66 \myast 10^{-2}$ & $0.27$ \\ $2.5 \myast 10^{-3}$ & $3.12 \myast 10^{-1}$ & $3.87 \myast 10^{-2}$ & $0.29$ \\ $1.2 \myast 10^{-3}$ & $1.56 \myast 10^{-1}$ & $3.01 \myast 10^{-2}$ & $0.28$ \\ $6.2 \myast 10^{-4}$ & $7.81 \myast 10^{-2}$ & $2.22 \myast 10^{-2}$ & $0.26$ \\ $3.1 \myast 10^{-4}$ & $3.90 \myast 10^{-2}$ & $1.60 \myast 10^{-2}$ & $0.24$ \\ $1.6 \myast 10^{-4}$ & $1.95 \myast 10^{-2}$ & $1.08 \myast 10^{-2}$ & $0.20$ \\ $7.8 \myast 10^{-5}$ & $9.76 \myast 10^{-3}$ & $7.54 \myast 10^{-3}$ & $0.18$ \\ $3.9 \myast 10^{-5}$ & $4.88 \myast 10^{-3}$ & $4.70 \myast 10^{-3}$ & $0.14$ \\ \hline \hline \end{tabular} \hfill \mbox{} \caption{Numerical results for Example \ref{th:example2}} \label{tab:num2} \end{table} \remarkend \end{example}
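The numerical procedure described in the preparatory notes can be sketched in a few lines of Python. The following is a minimal sketch of the setting of Example \ref{th:example1}: the backward rectangular discretization of $ F $ from \refeq{fex} with $ \czer = 1 $ and $ N = 200 $, the projected fixed point iteration with $ \mykapa = \mybeta = \tfrac{1}{2} $, $ \mu = \tfrac{\mykapa}{2} $, $ \myubar \equiv \ust(1) $, the a priori choice $ \para = \delta^{2/3} $, and the stopping rule with $ c = 1 $. All variable names are ours, and the random noise realization differs from the one underlying Table \ref{tab:num1}, so the computed errors only roughly match the tabulated values.

```python
import math
import random

N = 200              # grid t_n = n*h, n = 0, ..., N
h = 1.0 / N
kappa = 0.5          # lower bound defining M = {u >= kappa}
mu = kappa / 2.0     # step size, 0 < mu < 2*tau = kappa

def apply_F(u):
    """Backward rectangular discretization of (Fu)(t) = -exp(-U(t))."""
    out, U = [], 0.0
    for n in range(N + 1):
        if n > 0:
            U += h * u[n]          # U(t_n) ~ h * (u_1 + ... + u_n)
        out.append(-math.exp(-U))
    return out

def l2norm(v):
    return math.sqrt(h * sum(x * x for x in v))

def solve(delta, seed=1):
    """Return the error ||u - u*|| for Example 1 at noise level delta."""
    rng = random.Random(seed)
    t = [n * h for n in range(N + 1)]
    ustar = [0.5 * s + 0.5 for s in t]                   # u*(t) = t/2 + 1/2
    fdelta = [-math.exp(-0.25 * s * s - 0.5 * s)
              + rng.uniform(-delta, delta) for s in t]   # noisy data
    alpha = delta ** (2.0 / 3.0)                         # a priori choice
    ubar = 1.0                                           # ubar = u*(1)
    u = [ubar] * (N + 1)                                 # initial guess
    for _ in range(50000):
        Fu = apply_F(u)
        # projected fixed point step u <- P_M(u - mu*(Fu + alpha*(u - ubar) - f_delta)),
        # where P_M is the pointwise projection max(kappa, .)
        unew = [max(kappa, u[n] - mu * (Fu[n] + alpha * (u[n] - ubar) - fdelta[n]))
                for n in range(N + 1)]
        diff = l2norm([a - b for a, b in zip(unew, u)])
        u = unew
        if diff <= delta:                                # stopping rule, c = 1
            break
    return l2norm([a - b for a, b in zip(u, ustar)])
```

For instance, `solve(1e-2)` yields an error of roughly the same magnitude as the first row of Table \ref{tab:num1}, and the error decreases as $ \delta $ decreases, in accordance with the rate $ \Landau(\delta^{1/3}) $.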
\begin{document}
\newtheorem{Theorem}{Theorem}[section] \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Example}[Theorem]{Example} \newtheorem{Assumption}[Theorem]{Assumption} \newtheorem{Claim}[Theorem]{Claim} \newtheorem{Question}[Theorem]{Question} \newtheorem{theorem}[Theorem]{Theorem} \newtheorem{lemma}[Theorem]{Lemma} \newtheorem{remark}[Theorem]{Remark} \newtheorem{definition}[Theorem]{Definition} \def\indeq{\qquad{}}

\title{Robust Optimal Control Using Conditional Risk Mappings in Infinite Horizon}
\author{Kerem U\u{g}urlu}
\date{}
\maketitle

\begin{center}
Department of Applied Mathematics, University of Washington, Seattle, WA 98195\\
e-mail: keremu@uw.edu
\end{center}

\begin{abstract} We use one-step conditional risk mappings to formulate a risk-averse version of a total cost problem on a controlled Markov process in discrete time on an infinite horizon. The nonnegative one-step costs are assumed to be lower semi-continuous but not necessarily bounded. We derive conditions for the existence of optimal strategies and solve the problem explicitly by giving the robust dynamic programming equations under very mild conditions. We further give an $\epsilon$-optimal approximation to the solution and illustrate our algorithm on two examples, an optimal investment problem and an LQ regulator problem. \end{abstract}

\section{Introduction} Controlled Markov decision processes have been an active research area in sequential decision-making problems in operations research and mathematical finance. We refer the reader to \cite{HL,HL2,SB} for an extensive treatment of the theoretical background. Classically, the evaluation operator has been the expectation operator, and the optimal control problem is solved via Bellman's dynamic programming \cite{key-6}.
This approach and the corresponding problems continue to be an active research area in various scenarios (see e.g. the recent works \cite{SN,NST,GHL} and the references therein). On the other hand, the expectation operator alone is often not appropriate for measuring the performance of a risk-averse agent. Hence, expected criteria with utility functions have been used extensively in the literature (see e.g. \cite{FS1,FS2} and the references therein). Beyond evaluating performance via utility functions, coherent risk measures were introduced in the seminal paper \cite{key-1} to put risk aversion into an axiomatic framework. \cite{FSH1} removed the positive homogeneity assumption of a coherent risk measure and called the resulting operator a convex risk measure (see \cite{FSH2} for an extensive treatment of this subject). However, these operators bring up another difficulty: deriving dynamic programming equations with them in multistage optimization problems is challenging, and in many problems impossible, because Bellman's optimality principle does not necessarily hold for this type of operator. That is to say, the resulting optimization problems are not \textit{time-consistent}. A multistage stochastic decision problem is time-consistent if, upon resolving the problem at later stages (i.e., after observing some random outcomes), the original solution remains optimal for those stages. We refer the reader to \cite{key-141, key-334, key-148, AS, BMM} for further elaboration and examples of this type of inconsistency. Hence, the literature on multi-period optimal control using risk measures on bounded and unbounded costs is not vast; some works in this direction are \cite{key-149,key-150,key-151, BO}. To overcome this deficit, dynamic extensions of convex/coherent risk measures, so-called conditional risk measures, were introduced in \cite{FR} and studied extensively in \cite{RS06}.
In \cite{key-3}, so-called Markov risk measures are introduced and an optimization problem is solved in a controlled Markov decision framework in both finite and discounted infinite horizon, where the cost functions are assumed to be bounded. This idea is extended to transient models in \cite{CR1,CR2}, to unbounded costs with $w$-weighted bounds in \cite{LS,CZ,SSO}, to so-called \textit{process-based} measures in \cite{FR1}, and to partially observable Markov chain frameworks in \cite{FR2}. In this paper, we derive \textit{robust} dynamic programming equations in discrete time on an infinite horizon using one-step conditional risk mappings, which are dynamic analogues of coherent risk measures. We assume that our one-step costs are nonnegative, but they may well be unbounded from above. We show the existence of an optimal policy via dynamic programming under very mild assumptions. Since our methodology is based on dynamic programming, our optimal policy is by construction time-consistent. We further give a recipe for constructing an $\epsilon$-optimal policy for the infinite horizon problem and illustrate our theory on two examples, an optimal investment problem and an LQ regulator control problem. To the best of our knowledge, this is the first work solving the optimal control problem in infinite horizon under the minimal assumptions stated in our model. The rest of the paper is organized as follows. In Section 2, we briefly review the theoretical background on coherent risk measures and their dynamic analogues in the multistage setting, and describe the controlled Markov chain framework that we work with. In Section 3, we state our main result on the existence of the optimal policy and of the optimality equations. In Section 4, we prove our main theorem and present an $\epsilon$-optimal algorithm for our control problem. In Section 5, we illustrate our results with two examples, one on an optimal investment problem and the other on an LQ regulator control problem.
\section{Theoretical Background} In this section, we recall the necessary background on static coherent risk measures, and then extend these operators to the dynamic setting in a controlled Markov chain framework in discrete time. \subsection{Coherent Risk Measures} Consider an atomless probability space $(\Omega,{\cal F}, {\mathbb{P}})$ and the space ${\cal Z} := L^1(\Omega,{\cal F},{\mathbb{P}})$ of measurable functions $Z:\Omega \rightarrow \mathbb{R}$ (random variables) having finite first moment, i.e. ${\mathbb{E}}^{{\mathbb{P}}}[|Z|] < \infty$, where ${\mathbb{E}}^{\mathbb{P}}[\cdot]$ stands for the expectation with respect to the probability measure ${\mathbb{P}}$. A mapping $\rho:{\cal Z} \rightarrow \mathbb{R}$ is said to be a \textit{coherent risk measure} if it satisfies the following axioms: \begin{itemize} \item (A1) (Convexity) $\rho(\lambda X+(1-\lambda)Y)\leq\lambda\rho(X)+(1-\lambda)\rho(Y)$, $\forall\lambda\in(0,1)$, $X,Y \in {\cal Z}$. \item (A2) (Monotonicity) If $X \preceq Y$, then $\rho(X) \leq \rho(Y)$, for all $X,Y \in {\cal Z}$. \item (A3) (Translation Invariance) $\rho(c+X) = c + \rho(X)$, $\forall c\in\mathbb{R}$, $X\in {\cal Z}$. \item (A4) (Positive Homogeneity) $\rho(\beta X)=\beta\rho(X)$, $\forall X\in {\cal Z}$, $\beta \geq 0$. \end{itemize} The notation $X \preceq Y$ means that $X(\omega) \leq Y(\omega)$ for ${\mathbb{P}}$-almost every $\omega$. Risk measures $\rho:{\cal Z} \rightarrow \mathbb{R}$ that satisfy only (A1)-(A3) are called convex risk measures. We remark that under the fourth property (positive homogeneity), the first property (convexity) is equivalent to sub-additivity. We call the risk measure $\rho:{\cal Z}\rightarrow \mathbb{R}$ law invariant if $\rho(X) = \rho(Y)$ whenever $X$ and $Y$ have the same distribution.
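The axioms (A1)-(A4) can be checked numerically on finite samples. The following sketch, which is illustrative only and not part of the formal development, verifies them for the Average-Value-at-Risk measure introduced below, computed via the standard Rockafellar-Uryasev representation ${\sf AV@R}_\alpha(Z) = \min_t \{ t + {\mathbb{E}}[(Z-t)_+]/(1-\alpha) \}$, whose minimum is attained at the left $\alpha$-quantile; the function names are our own.

```python
import math

def var_left(z, p):
    """Left quantile V@R_p(Z) = inf{t : P(Z <= t) >= p} for equally likely samples."""
    s = sorted(z)
    return s[max(1, math.ceil(p * len(s))) - 1]

def avar(z, alpha):
    """AV@R_alpha(Z) = q + E[(Z - q)_+]/(1 - alpha) with q = V@R_alpha(Z)."""
    q = var_left(z, alpha)
    return q + sum(max(x - q, 0.0) for x in z) / (len(z) * (1.0 - alpha))

x = [1.0, 2.0, 3.0, 4.0]   # four equally likely outcomes
y = [6.0, 3.5, 2.5, 2.0]   # another random variable on the same sample points
a, lam = 0.5, 0.3

# (A2) monotonicity: x <= x + 1 pointwise
assert avar(x, a) <= avar([v + 1.0 for v in x], a)
# (A3) translation invariance
assert abs(avar([v + 5.0 for v in x], a) - (avar(x, a) + 5.0)) < 1e-9
# (A4) positive homogeneity
assert abs(avar([2.0 * v for v in x], a) - 2.0 * avar(x, a)) < 1e-9
# (A1) convexity, applied to the pointwise mixture lam*x + (1-lam)*y
mix = [lam * u + (1.0 - lam) * v for u, v in zip(x, y)]
assert avar(mix, a) <= lam * avar(x, a) + (1.0 - lam) * avar(y, a) + 1e-9
```

When $\alpha n$ is an integer, this reduces to the average of the worst $(1-\alpha)n$ outcomes; e.g. for the sample above, ${\sf AV@R}_{0.5} = (3+4)/2 = 3.5$.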
We pair the space ${\cal Z} = L^1(\Omega,{\cal F}, {\mathbb{P}})$ with ${\cal Z}^* = L^\infty(\Omega,{\cal F}, {\mathbb{P}})$, and the corresponding scalar product \begin{equation} \langle \zeta, Z \rangle = \int_\Omega \zeta(\omega)Z(\omega)\,d{\mathbb{P}}(\omega), \; \zeta\in {\cal Z}^*,Z\in{\cal Z}. \end{equation} By \cite{key-14}, we know that real-valued law-invariant convex risk measures are continuous, hence lower semi-continuous (l.s.c.), in the norm topology of the space $L^1(\Omega,{\cal F}, {\mathbb{P}})$. Hence, it follows by the Fenchel-Moreau theorem that \begin{equation} \label{eqn12} \rho(Z) = \sup_{\zeta \in {\cal Z}^*} \{ \langle \zeta, Z \rangle - \rho^*(\zeta) \},\;\textrm{for all } Z \in {\cal Z}, \end{equation} where $\rho^*(\zeta) = \sup_{Z \in {\cal Z}} \{ \langle \zeta, Z \rangle - \rho(Z) \}$ is the corresponding conjugate functional (see \cite{RW}). If the risk measure $\rho$ is convex and positively homogeneous, hence coherent, then $\rho^*$ is the indicator function of a convex and closed set ${\mathfrak A} \subset {\cal Z}^*$ in the respective paired topology. The dual representation in Equation \eqref{eqn12} then takes the form \begin{equation} \label{eqn13} \rho(Z) = \sup_{\zeta \in {\mathfrak A}} \langle \zeta,Z \rangle ,\;Z\in {\cal Z}, \end{equation} where the set ${\mathfrak A}$ consists of probability density functions $\zeta:\Omega\rightarrow \mathbb{R}$, i.e. with $\zeta \succeq 0$ and $\int\zeta \,d{\mathbb{P}} = 1$. A fundamental example of a law invariant coherent risk measure is the Average-Value-at-Risk measure (also called the Conditional-Value-at-Risk or Expected Shortfall measure). Average-Value-at-Risk at level $\alpha \in (0,1)$ for $Z \in {\cal Z}$ is defined as \begin{equation} \label{eqn016} {\sf AV@R}_\alpha (Z) = \frac{1}{1-\alpha}\int_\alpha^1 {\sf V@R}_p(Z)dp, \end{equation} where \begin{equation} {\sf V@R}_p(Z) = \inf \{ z \in \mathbb{R}: {\mathbb{P}}(Z \leq z) \geq p \} \end{equation} is the corresponding left-side quantile.
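As a numerical sanity check (again illustrative only, with hypothetical function names), the quantile-integral definition of ${\sf AV@R}_\alpha$ can be compared against a dual representation of the form \eqref{eqn13}: for an empirical distribution, the supremum over densities $\zeta$ with $0 \preceq \zeta \preceq 1/(1-\alpha)$ and $\int \zeta\,d{\mathbb{P}} = 1$ is attained by loading the density cap onto the largest outcomes, and the two computations agree.

```python
def avar_primal(z, alpha):
    """AV@R_alpha(Z) = (1/(1-alpha)) * int_alpha^1 V@R_p(Z) dp, computed exactly:
    for n equally likely outcomes, V@R_p is constant on each interval ((k-1)/n, k/n]."""
    s, n = sorted(z), len(z)
    total = 0.0
    for k, zk in enumerate(s, start=1):
        lo, hi = max(alpha, (k - 1) / n), k / n
        if hi > lo:
            total += zk * (hi - lo)
    return total / (1.0 - alpha)

def avar_dual(z, alpha):
    """sup <zeta, Z> over densities 0 <= zeta <= 1/(1-alpha) with E[zeta] = 1:
    greedily assign the maximal density to the largest outcomes."""
    n, cap = len(z), 1.0 / (1.0 - alpha)
    budget, val = 1.0, 0.0            # remaining probability mass under zeta dP
    for zk in sorted(z, reverse=True):
        w = min(cap / n, budget)      # weight zeta_k / n, capped by leftover mass
        val += w * zk
        budget -= w
        if budget <= 1e-12:
            break
    return val

z = [1.0, 2.0, 3.0, 4.0, 5.0]
for a in (0.2, 0.3, 0.5, 0.8):
    assert abs(avar_primal(z, a) - avar_dual(z, a)) < 1e-9
```

The greedy dual solution makes the role of the cap transparent: as $\alpha \uparrow 1$ the admissible densities concentrate all mass on the single worst outcome, recovering the essential supremum.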
The corresponding dual representation for ${\sf AV@R}_\alpha(Z)$ is \begin{equation} {\sf AV@R}_\alpha(Z) = \sup_{m \in {\cal A}}\langle m,Z \rangle , \end{equation} with \begin{equation} {\cal A} = \{ m \in L^\infty(\Omega,{\cal F},{\mathbb{P}}): m \succeq 0,\; \int_\Omega m\,d{\mathbb{P}} =1,\; \lVert m \rVert_\infty \leq \frac{1}{1-\alpha} \}. \end{equation} Next, we give a representation characterizing any law invariant coherent risk measure, first presented by Kusuoka \cite{K} for random variables in $L^\infty(\Omega, {\cal F}, {\mathbb{P}})$, and later further investigated in ${\cal Z}^p = L^p(\Omega, {\cal F}, {\mathbb{P}})$ for $1 \leq p < \infty$ in \cite{PR}. \begin{lemma} \label{lem11}\cite{K} Any law invariant coherent risk measure $\rho: {\cal Z}^p \rightarrow \mathbb{R}$ can be represented in the following form \begin{equation} \rho(Z) = \sup_{\nu \in {\mathfrak M}}\int_0^1 {\sf AV@R}_\alpha(Z)d\nu(\alpha), \end{equation} where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$. \end{lemma} \subsection{Controlled Markov Chain Framework} Next, we introduce the controlled Markov chain framework on which we study our problem. We take the control model $\mathcal{M} = \{ \mathcal{M}_n, n \in \mathbb{N}_0 \}$, where for each $n \geq 0$, we have \begin{equation} \label{eqn270} {\cal M}_n := (X_n, A_n, \mathbb{K}_n, Q_n, F_n, c_n) \end{equation} with the following components: \begin{itemize} \item $X_n$ and $A_n$ denote the state and action (or control) spaces, which are assumed to be complete separable metric spaces with their corresponding Borel $\sigma$-algebras ${\cal B}(X_n)$ and ${\cal B}(A_n)$. \item For each $x_n \in X_n$, let $A_n(x_n) \subset A_n$ be the set of all admissible controls in the state $x_n$. Then \begin{equation} \mathbb{K}_n := \{ (x_n,a_n): x_n \in X_n,\; a_n \in A_n(x_n) \} \end{equation} stands for the set of feasible state-action pairs at time $n$.
\item We let \begin{equation} \label{eqn23} x_{i+1} = F_i(x_i,a_i, \xi_i), \end{equation} for all $i = 0,1,\ldots$ with $x_i \in X_i$ and $a_i \in A_i(x_i)$ as described above, where $(\xi_i)_{i \geq 0}$ are independent random variables on the atomless probability spaces \begin{equation} \label{eqn2100} (\Omega^i,\mathcal{G}^i, {\mathbb{P}}^i). \end{equation} We assume that $\xi_i \in S_i$, where the $S_i$ are Borel spaces. Moreover, we assume that the system equation \begin{equation} \label{eqn678} F_i: \mathbb{K}_i \times S_i \rightarrow X_{i+1} \end{equation} as in Equation \eqref{eqn23} is continuous. \item We let \beal \label{eqn27} \Omega &= \otimes_{i=0}^{\infty} \Omega^i \eal where $\Omega^i$ is as defined in Equation \eqref{eqn2100}. For $n\geq 0$, we let \beal \label{eqn28} {\cal F}_n &= \sigma\big(\sigma\big({\textstyle \cup_{i=0}^n} \mathcal{G}^i\big) \cup \sigma(X_0,A_0,X_1,A_1,\ldots,A_{n-1},X_n)\big) \\ {\cal F} &= \sigma(\cup_{i=0}^\infty {\cal F}_{i}) \eal be the filtration of increasing $\sigma$-algebras. Furthermore, we define the corresponding probability measure on $(\Omega,{\cal F})$ as \beal \label{eqn2140} {\mathbb{P}} &= \prod_{i=0}^{\infty} {\mathbb{P}}^i, \eal where the existence of ${\mathbb{P}}$ is justified by the Kolmogorov extension theorem (see \cite{HL}). We assume that for any $n\geq 0$, the random vector $\xi_{[n]} = (\xi_0,\xi_1,\ldots,\xi_n)$ and $\xi_{n+1}$ are independent on $(\Omega,{\cal F},{\mathbb{P}})$. \item The transition law is denoted by $Q_{n+1}(B_{n+1}|x_n,a_n)$, where $B_{n+1} \in \mathcal{B}(X_{n+1})$ and $(x_n,a_n) \in \mathbb{K}_n$; it is a stochastic kernel on $X_{n+1}$ given $\mathbb{K}_n$ (see \cite{SB,HL} for further details). We remark here that at each $n\geq 0$ the stochastic kernel depends only on $(x_n,a_n)$ rather than on ${\cal F}_n$.
That is, for each pair $(x_n,a_n) \in \mathbb{K}_n$, $Q_{n+1}(\cdot|x_n,a_n)$ is a probability measure on $X_{n+1}$, and for each $B_{n+1} \in \mathcal{B}(X_{n+1})$, $Q_{n+1}(B_{n+1}|\cdot,\cdot)$ is a measurable function on $\mathbb{K}_n$. Let $x_0 \in X_0$ be given together with a policy $\pi = (\pi_n)_{n\geq0}$. By the Ionescu Tulcea theorem (see e.g. \cite{HL}), there exists a unique probability measure ${\mathbb{P}}^\pi$ on $(\Omega, {\cal F})$ such that given $x_0 \in X_0$, a measurable set $B_{n+1} \subset X_{n+1}$ and $(x_n,a_n) \in \mathbb{K}_n$, for any $n\geq 0$, we have \beal \label{eqn2150} {\mathbb{P}}^{\pi}_{n+1} (x_{n+1} \in B_{n+1}) &\triangleq Q_{n+1}(B_{n+1} | x_n,a_n). \eal \item Let $\mathbb{F}_n$ be the family of measurable functions $\pi_n:X_n \rightarrow A_n$ with $\pi_n(x_n) \in A_n(x_n)$ for every $x_n \in X_n$. A sequence $( \pi_n )_{n\geq 0}$ of functions $\pi_n \in \mathbb{F}_n$, $n \geq 0$, is called a control policy (or simply a policy), and the function $\pi_n(\cdot)$ is called the decision rule or control at time $n\geq 0$. We denote by $\Pi$ the set of all control policies. For notational convenience, for every $n \in \mathbb{N}_0$ and $(\pi_n)_{n \geq 0} \in \Pi$, we write \begin{align*} c_n(x_n,\pi_n) &:= c_n(x_n,\pi_n(x_n))\\ &:= c_n(x_n,a_n). \end{align*} We denote by ${\mathfrak P}(A_n(x_n))$ the set of probability measures on $A_n(x_n)$ for each time $n\geq0$. A randomized Markovian policy $(\pi_n)_{n \geq 0}$ is a sequence of measurable functions such that $\pi_n(x_n) \in {\mathfrak P}(A_n(x_n))$ for all $x_n \in X_n$, i.e. $\pi_n(x_n)$ is a probability measure on $A_n(x_n)$. The policy $(\pi_n)_{n \geq 0}$ is called deterministic if $\pi_n(x_n) = a_n$ with $a_n \in A_n(x_n)$. \item $c_n(x_n,a_n): \mathbb{K}_n \rightarrow \mathbb{R}_{+}$ is the real-valued cost-per-stage function at stage $n \in \mathbb{N}_0$ with $(x_n,a_n) \in \mathbb{K}_n$.
\end{itemize} \begin{definition} \label{defn31} A real valued function $v$ on $\mathbb{K}_n$ is said to be inf-compact on $\mathbb{K}_n$ if the set \begin{equation} \{ a_n \in A_n(x_n) \,|\, v(x_n,a_n) \leq r \} \end{equation} is compact for every $x_n \in X_n$ and $r \in \mathbb{R}$. For example, if the sets $A_n(x_n)$ are compact and $v(x_n,a_n)$ is l.s.c. in $a_n\in A_n(x_n)$ for every $x_n \in X_n$, then $v(\cdot,\cdot)$ is inf-compact on $\mathbb{K}_n$. Conversely, if $v$ is inf-compact on $\mathbb{K}_n$, then $v$ is l.s.c. in $a_n \in A_n(x_n)$ for every $x_n \in X_n$. \end{definition} We make the following assumption about the transition laws $(Q_n)_{n\geq1}$. \begin{Assumption}\label{ass31} For any $n \geq 0$, the transition law $Q_{n+1}$ is weakly continuous; i.e. for any continuous and bounded function $u(\cdot)$ on $X_{n+1}$, the map \begin{equation} (x_n,a_n) \rightarrow \int_{X_{n+1}} u(y)\,dQ_{n+1}(y|x_n,a_n) \end{equation} is continuous on $\mathbb{K}_n$. \end{Assumption} Furthermore, we make the following assumptions on the one-step cost functions and action sets. \begin{Assumption} \label{ass32} For every $n \geq 0$, \begin{itemize} \item the real valued non-negative cost function $c_n(\cdot,\cdot)$ is l.s.c. in $(x_n,a_n)$; that is, for any sequence $(x^k_n,a^k_n) \rightarrow (x_n,a_n)$ as $k \rightarrow \infty$, we have \begin{equation} c_n(x_n,a_n) \leq \liminf_{k \rightarrow \infty} c_n(x^k_n,a^k_n). \end{equation} \item The multifunction (also known as a correspondence or point-to-set function) $x_n \rightarrow A_n(x_n)$, from $X_n$ to $A_n$, is upper semicontinuous (u.s.c.); that is, if $\{x_n^l\} \subset X_n$ and $\{ a_n^l\} \subset A_n$ are sequences such that $x_n^l \rightarrow \bar{x}_n$, $a_n^l \in A_n(x_n^l)$ for all $l$, and $a_n^l \rightarrow \bar{a}_n$, then $\bar{a}_n \in A_n(\bar{x}_n)$. \item For every state $x_n \in X_n$, the admissible action set $A_n(x_n)$ is compact.
\end{itemize} \end{Assumption} \subsection{Conditional Risk Mappings} In order to construct dynamic models of risk, we extend the concept of static coherent risk measures to the dynamic setting. For any $n \geq 1$, we denote by ${\cal Z}_n := L^1(\Omega,{\cal F}_n,{\mathbb{P}}^\pi_{n})$ the space of measurable functions $Z:\Omega \rightarrow \mathbb{R}$ (random variables) having finite first moment, i.e. ${\mathbb{E}}^{{\mathbb{P}}^\pi_{n}}[|Z|] < \infty$ ${\mathbb{P}}^\pi_{n}$-a.s., where ${\mathbb{E}}^{{\mathbb{P}}^\pi_{n}}$ stands for the conditional expectation at time $n$ with respect to the conditional probability measure ${\mathbb{P}}^\pi_{n}$ as defined in Equation \eqref{eqn2150}. \begin{definition}\label{def21} Let $X,Y \in {\cal Z}_{n+1}$. We say that a mapping $\rho_n:{\cal Z}_{n+1} \rightarrow {\cal Z}_n$ is a one-step conditional risk mapping if it satisfies the following properties: \begin{itemize} \item (a1) Let $\gamma \in [0,1]$. Then \begin{equation} \rho_{n}(\gamma X + (1-\gamma) Y) \preceq \gamma \rho_{n}(X) + (1-\gamma)\rho_{n}(Y). \end{equation} \item (a2) If $X \preceq Y$, then $\rho_{n}(X) \preceq \rho_{n}(Y)$. \item (a3) If $Y \in {\cal Z}_{n}$ and $X \in {\cal Z}_{n+1}$, then $\rho_{n}(X + Y) = \rho_{n}(X) + Y$. \item (a4) For $\lambda \succeq 0$ with $\lambda \in {\cal Z}_n$ and $X \in {\cal Z}_{n+1}$, we have $\rho_{n}(\lambda X) = \lambda \rho_{n}(X)$. \end{itemize} \end{definition} Here, the relation $Y \preceq X$ means that $Y(\omega) \leq X(\omega)$ for ${\mathbb{P}}^{\pi}_n$-almost every $\omega$. We next state the analogue of the representation theorem in Equation \eqref{eqn13} for conditional risk mappings (see also \cite{RS06}). \begin{theorem} \label{thm21} Let $\rho_n: {\cal Z}_{n+1} \rightarrow {\cal Z}_n$ be a law-invariant conditional risk mapping satisfying the assumptions stated in Definition \ref{def21}. Let $Z \in {\cal Z}_{n+1}$.
Then \begin{equation} \label{eqn2220} \rho_n(Z) = \sup_{\mu \in {\mathfrak A}_{n+1}} \langle \mu, Z \rangle , \end{equation} where ${\mathfrak A}_{n+1}$ is a convex closed set of conditional probability measures on $(\Omega, {\cal F}_{n+1})$ that are absolutely continuous with respect to ${\mathbb{P}}^{\pi}_{n+1}$. \end{theorem} Next, we give the Kusuoka representation for conditional risk mappings analogous to Lemma \ref{lem11}. \begin{lemma} \label{lem22} Let $\rho_n:{\cal Z}_{n+1} \rightarrow {\cal Z}_{n}$ be a law invariant one-step conditional risk mapping satisfying properties (a1)-(a4) as in Definition \ref{def21}. Let $Z \in {\cal Z}_{n+1}$. Conditional Average-Value-at-Risk at level $0 < \alpha < 1$ is defined as \begin{equation} \label{eqn216} {\sf AV@R}^{n}_\alpha (Z) \triangleq \frac{1}{1-\alpha}\int_\alpha^1 {\sf V@R}^{n}_p(Z)dp, \end{equation} where \begin{equation} \label{eqn212} {\sf V@R}^{n}_p(Z) \triangleq \operatornamewithlimits{ess\,inf} \{ z \in \mathbb{R}: {\mathbb{P}}^{\pi}_{n+1}(Z \leq z) \geq p \}. \end{equation} Here, we note that ${\sf V@R}^{n}_p(Z)$ is ${\cal F}_{n}$-measurable by the definition of the essential infimum (see \cite{FSH2} for a definition of the essential infimum and essential supremum). Then, we have \begin{equation} \label{eqn2130} \rho_{n}(Z) = \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}}\int_0^1{\sf AV@R}^n_\alpha(Z)d\nu(\alpha), \end{equation} where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$. \end{lemma} \begin{remark} By Equations \eqref{eqn216}, \eqref{eqn212} and \eqref{eqn2130}, it is easy to see that the corresponding optimal controls at each time $n \geq 0$ are deterministic if the one-step conditional risk mappings are ${\sf AV@R}^n_\alpha: {\cal Z}_{n+1} \rightarrow {\cal Z}_{n}$ as defined in \eqref{eqn216}. On the other hand, by the Kusuoka representation in Equation \eqref{eqn2130}, it is clear that for other coherent risk mappings randomized policies might be optimal.
In this paper, we restrict our study to deterministic policies. \end{remark} \begin{definition}\label{defn230} A policy $\pi \in \Pi$ is called admissible if, for any $n\geq 0$, we have \beal &c_n(x_n,a_n) + \lim_{N\rightarrow \infty}\gamma\rho_n\big( c_{n+1}(x_{n+1},a_{n+1}) \\ &\indeq + \gamma\rho_{n+1}( c_{n+2}(x_{n+2},a_{n+2}) + \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))) \big) < \infty,\; {\mathbb{P}}^\pi_{n}\textrm{-a.s. } \eal The set of all admissible policies is denoted by $\Pi_{\mathrm{ad}}$. \end{definition} \section{Main Problem} Under Assumptions \ref{ass31} and \ref{ass32}, our control problem reads as \beal \label{eqn321} &\inf_{\pi \in \Pi_{\textrm{ad}}} \bigg( c_0(x_0,a_0) + \lim_{N\rightarrow \infty}\gamma\rho_0\big( c_1(x_1,a_1) + \gamma\rho_{1}\big( c_2(x_2,a_2) \\ &\indeq + \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))\big)\big) \bigg). \eal Namely, our objective is to find a policy $(\pi^*_n)_{ n \geq 0}$ such that the value function in Equation \eqref{eqn321} is minimized. For convenience, we introduce the following notation, used in the rest of the paper: \begin{align*} \varrho_{n-1} \Big( \sum_{t=n}^\infty c_t(x_t,a_t) \Big) &:= \lim_{N\rightarrow \infty}\gamma\rho_{n-1} \big(c_n(x_n,a_n) + \gamma\rho_{n}\big( c_{n+1}(x_{n+1},a_{n+1}) \\ &\indeq + \ldots + \gamma\rho_{N-1}(c_N(x_N,a_N))\big)\big), \\ V_n(x,\pi) &:= c_n(x_n,a_n) + \varrho_n\Big(\sum_{t=n+1}^\infty c_t(x_t,a_t)\Big), \\ V_n^*(x) &:= \inf_{\pi \in \Pi_{\textrm{ad}}} V_n(x,\pi), \\ V_{n,N}(x,\pi) &:= c_n(x_n,a_n) + \varrho_n\Big(\sum_{t=n+1}^{N-1}c_t(x_t,a_t)\Big), \\ V_{N,\infty}(x,\pi) &:= c_N(x_N,a_N) + \varrho_N\Big(\sum_{t=N+1}^\infty c_t(x_t,a_t)\Big), \\ V_{n,N}^*(x) &:= \inf_{\pi \in \Pi_{\textrm{ad}}} V_{n,N}(x,\pi). \end{align*} For the control problem to be nontrivial, we need the following assumption on the existence of an admissible policy.
\begin{Assumption} \label{ass41} There exists a policy $\pi \in \Pi_{\textrm{ad}}$ such that \begin{equation} c_0(x_0, a_0) + \varrho_0\Big(\sum_{t=1}^\infty c_t(x_t,a_t)\Big) < \infty. \end{equation} \end{Assumption} We are now ready to state our main theorem. \begin{theorem} \label{thm41} Let $0 < \gamma < 1$. Suppose that Assumptions \ref{ass31}, \ref{ass32} and \ref{ass41} are satisfied. Then, \begin{itemize} \item[(a)] the optimal cost functions $V_n^*$ are the pointwise minimal solutions of the optimality equations; that is, for every $n \in \mathbb{N}_0$ and $x_n \in X_n$, \begin{equation} \label{eqn33} V_n^*(x_n) = \inf_{a_n \in A_n(x_n)} \bigg(c_n(x_n,a_n) + \gamma\rho_n(V_{n+1}^*(x_{n+1}))\bigg). \end{equation} \item[(b)] There exists a policy $\pi^* = (\pi^*_n)_{n \geq 0}$ such that for each $n \geq 0$ the control attains the minimum in \eqref{eqn33}, namely for $x_n \in X_n$ \begin{equation} V_n^*(x_n) = c_n(x_n,\pi^*_n) + \gamma\rho_n(V_{n+1}^*(x_{n+1}) ). \end{equation} \end{itemize} \end{theorem} \section{Proof of Main Result} \begin{lemma} \cite{key-35} \label{lem31} Fix an arbitrary $n \in \mathbb{N}_0$. Let $\mathbb{K}$ be defined as \begin{equation} \mathbb{K} := \{ (x,a)\,|\, x \in X, a \in A(x) \}, \end{equation} where $X$ and $A$ are complete separable metric spaces, and let $v: \mathbb{K} \rightarrow \mathbb{R}$ be a given ${\cal B}(X \times A)$-measurable function. For $x \in X$, define \begin{equation} v^*(x) := \inf_{a \in A(x)} v(x,a). \end{equation} If $v$ is non-negative, l.s.c. and inf-compact on $\mathbb{K}$ as defined in Definition \ref{defn31}, then there exists a measurable mapping $\pi_n: X \rightarrow A$ such that for any $x \in X$ \begin{equation} v^*(x) = v(x,\pi_n(x)), \end{equation} and $v^*(\cdot):X\rightarrow \mathbb{R}$ is measurable and l.s.c. \end{lemma} \begin{lemma} \label{lem52} For any $n \geq 1$, let $c_n(x_n,a_n)$ be in ${\cal Z}_n$. Then $\rho_{n-1}(c_n(x_n,a_n))$ is an element of ${\cal Z}_{n-1} = L^1(\Omega,{\cal F}_{n-1},{\mathbb{P}}^\pi_{n-1})$.
\end{lemma} \begin{proof} Let $\mu \in {\mathfrak A}_{n}$ be as in Theorem \ref{thm21}. By non-negativity of the one-step cost function $c_n(\cdot, \cdot)$ and by Fatou's lemma, we have \begin{equation} \label{eqn433} \langle \mu,c_n(x_n,a_n) {\rangle} \leq \liminf_{(x^k_n,a^k_n) \rightarrow (x_n,a_n)} \langle \mu,c_n(x^k_n,a^k_n) {\rangle}. \end{equation} Hence, $\langle \mu,c_n(x_n,a_n) {\rangle}$ is l.s.c. ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Then, by Equation \eqref{eqn2220}, we have \begin{equation} \label{eqn434} \rho_{n-1}(c_n(x_n,a_n)) = \operatornamewithlimits{ess\,sup}_{\mu \in {\mathfrak A}_{n}} \langle \mu, c_n(x_n,a_n) \rangle . \end{equation} Hence, by Equations \eqref{eqn433} and \eqref{eqn434}, and since a supremum of l.s.c. functions is again l.s.c., we conclude that for fixed $\omega$, $\rho_{n-1}(c_n(x_n(\omega),a_n(\omega)))$ is l.s.c. with respect to $(x_n,a_n)$. Next, we show that $\rho_{n-1}(c_n(x_n,a_n))$ is ${\cal F}_{n-1}$ measurable. By Lemma \ref{lem22}, we have \beal \label{eqn018} &\rho_{n-1}(c_n(x_{n},a_{n})) = \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} {\sf AV@R}^{n-1}_\alpha(c_n(x_{n},a_{n})) d\nu(\alpha),\\ &\indeq= \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} \frac{1}{1-\alpha}\int_\alpha^1 {\sf V@R}_p^{n-1}(c_n(x_{n},a_{n})) dp\; d\nu(\alpha)\\ &\indeq= \operatornamewithlimits{ess\,sup}_{\nu \in {\mathfrak M}} \int_{[0,1]} \frac{1}{1-\alpha}\int_\alpha^1 \operatornamewithlimits{ess\,inf} \big\{ z \in \mathbb{R}: {\mathbb{P}}^\pi_n(c_n(x_{n},a_{n})\leq z) \geq p \big\} dp\; d\nu(\alpha), \eal where ${\mathfrak M}$ is a set of probability measures on the interval $[0,1]$. Noting that for any $p \in [\alpha,1]$, $\operatornamewithlimits{ess\,inf} \big\{ z \in \mathbb{R}: {\mathbb{P}}^\pi_n(c_n(x_{n},a_{n})\leq z) \geq p \big\}$ is ${\cal F}_{n-1}$-measurable, integrating from $\alpha$ to $1$ and multiplying by $\frac{1}{1-\alpha}$ preserves ${\cal F}_{n-1}$-measurability.
Similarly, in Equation \eqref{eqn018}, integrating with respect to a probability measure $\nu$ on $[0,1]$ and taking the supremum of the integrals preserve ${\cal F}_{n-1}$-measurability. Hence, we conclude the proof. \end{proof} \begin{Corollary} Let $n \geq 1$, $x_n \in X_n$ and $a_n \in A_n$, where $X_n$ and $A_n$ are as introduced in Equation \eqref{eqn270}. Then, \label{cor51} \begin{equation} \min_{a_n \in A(x_n)} \rho_{n-1}(c_n(x_n,a_n)) \end{equation} is l.s.c. in $x_n$ ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Furthermore, $\displaystyle\min_{a_n \in A(x_n)} \rho_{n-1}(c_n(x_n,a_n))$ is ${\cal F}_{n-1}$ measurable. \end{Corollary} \begin{proof} We know by Lemma \ref{lem52} that $\rho_{n-1}(c_n(x_n,a_n))$ is l.s.c. ${\mathbb{P}}^{\pi}_{n-1}$-a.s. Hence, by Lemma \ref{lem31}, \begin{equation} \min_{a_n \in A(x_n)} \rho_{n-1}(c_n(x_n,a_n)) \end{equation} is l.s.c. in $x_n$ for any $x_n \in X_n$ ${\mathbb{P}}^{\pi}_{n-1}$-a.s. for $n \geq 1$. Furthermore, by Lemma \ref{lem31}, we know that there exists a $\pi^* \in \Pi$ such that \beal \min_{a_n \in A(x_n)} \rho_{n-1}(c_n(x_n,a_n)) &= \rho_{n-1}(c_n(x_n,\pi^*(x_n)))\\ &= \rho_{n-1}(c_n(F_{n-1}(x_{n-1},a_{n-1},\xi_{n-1}),\\ &\indeq \pi^*(F_{n-1}(x_{n-1},a_{n-1},\xi_{n-1})))), \eal where $F_{n-1}$ is as defined in Equation \eqref{eqn23}, and we know that $\rho_{n-1}(c_n(x_n,\pi^*(x_n)))$ is ${\cal F}_{n-1}$ measurable. Hence, the result follows by Lemma \ref{lem52}. \end{proof} For every $n \geq 0$, let $L_n(X_n)$ and $L_n(X_n,A_n)$ be the families of non-negative measurable mappings on $X_n$ and on $X_n \times A_n$, respectively. Denote \begin{equation} \label{eqn41} (T_{n}v_{n+1})(x_n) := \min_{a_n \in A(x_n)} \big\{ c_n(x_n,a_n) + \gamma\rho_n(v_{n+1}(F_n(x_n,a_n, \xi_n)))\big\}. \end{equation} \begin{lemma} \label{lem53} Suppose that Assumptions \ref{ass31}, \ref{ass32} and \ref{ass41} hold, then for every $n \geq 0$, we have \begin{itemize} \item[(a)] $T_n$ maps $L_{n+1}(X_{n+1})$ into $L_{n}(X_{n})$.
\item[(b)] For every $v_{n+1} \in L_{n+1}(X_{n+1})$, there exists a policy $\pi^*_n$ such that for any $x_n \in X_n$, $\pi^*_n(x_n) \in A_n(x_n)$ attains the minimum in \eqref{eqn41}, namely \begin{equation} \label{eqn316} (T_{n}v_{n+1})(x_n) = c_n(x_n,\pi^*_n) + \gamma \rho_n(v_{n+1}(F_n(x_n,\pi^*_n, \xi_n))). \end{equation} \end{itemize} \end{lemma} \begin{proof} By assumption, our one-step cost functions $c_n(x_n,a_n)$ are in $L_n(X_n,A_n)$. By Corollary \ref{cor51}, $\gamma \rho_n(v_{n+1}(F_n(x_n,a_n, \xi_n)))$ is in $L_n(X_n,A_n)$ as well, and hence so is their sum. The result then follows via Lemma \ref{lem31}. \end{proof} By Lemma \ref{lem53}, we express the optimality equations \eqref{eqn33} as \begin{equation} V_n^* = T_nV^*_{n+1}\; \textrm{ for }n \geq 0. \end{equation} Next, we continue with the following lemma. \begin{lemma} \label{lem54} Under the Assumptions \ref{ass31} and \ref{ass32}, for $n \geq 0$, let $v_n \in L_n(X_n)$ and $v_{n+1} \in L_{n+1}(X_{n+1})$. \begin{itemize} \item[(a)] If $v_n \geq T_n (v_{n+1})$, then $v_n \geq V_n^*$. \item[(b)] If $v_n \leq T_n (v_{n+1})$ and in addition, \begin{equation} \lim_{N\rightarrow \infty}v_{N}(x_{N}(\omega)) = 0, \end{equation} ${\mathbb{P}}$-a.s., then $v_n \leq V_n^*$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[(a)] By Lemma \ref{lem53}, there exists a policy $\pi=(\pi_n)_{n \geq 0}$ such that for all $n \geq 0$, \begin{equation} v_{n}(x_n) \geq c_n(x_n,\pi_n) + \gamma\rho_n( v_{n+1}(F_n(x_n,\pi_n, \xi_n))). \end{equation} By iterating the right hand side and by monotonicity of $\varrho_n(\cdot)$, we get \begin{equation} v_{n}(x_{n}) \geq c_n(x_n,\pi_n) + \varrho_n\Big(\sum_{i = n+1}^{N-1}c_i(x_i,\pi_i) + v_{N}(x_{N})\Big).
\end{equation} Since $v_{N}(x_{N}) \geq 0$, we have \begin{equation} v_{n}(x_{n}) \geq c_n(x_n,\pi_n) + \varrho_n\Big(\sum_{i=n+1}^{N-1}c_i(x_i,\pi_i)\Big), \textrm{ a.s.} \end{equation} Hence, letting $N \rightarrow \infty$, we obtain $v_{n}(x_n) \geq V_n(x_n,\pi)$ and so $v_{n}(x_n) \geq V_n^*(x_n)$. \item[(b)] Suppose that $v_{n} \leq T_nv_{n+1}$ for $n \geq 0$, so that \begin{equation} v_{n}(x_n) \leq c_n(x_n,\pi_n) + \gamma\rho_n(v_{n+1}(x_{n+1})) \end{equation} for any $\pi \in \Pi_{\textrm{ad}}$, ${\mathbb{P}}^\pi_n$-a.s. Iterating $N-1$ times gives \beal &v_{n}(x_n) \leq c_n(x_n,\pi_n) + \varrho_n\Big(\sum_{i=1}^{N-1}c_{n+i}(x_{n+i},\pi_{n+i}) \\ &\indeq + \varrho_{n+N-1}\Big(\sum_{i=n+N}^\infty c_i(x_i,\pi_i)\Big)\Big). \eal Letting $N\rightarrow \infty$ and using $\pi \in \Pi_{\mathrm{ad}}$, we get that \begin{equation} \lim_{N\rightarrow \infty}{\varrho}_n(v_{n+N}) = 0, \end{equation} so that we have \begin{equation} v_n(x_n) \leq V_{n}(x_n,\pi). \end{equation} Taking the infimum over $\pi \in \Pi_{\mathrm{ad}}$, we have \begin{equation} v_n(x_n) \leq V_{n}^*(x_n). \end{equation} Thus, we conclude the proof. \end{itemize} \end{proof} To further proceed, we need the following technical lemma. \begin{lemma} \label{lem34}\cite{HL} For every $N > n \geq 0$, let $X_n,A_n$ be complete separable metric spaces and $\mathbb{K}_n:= \{(x_n,a_n):x_n\in X_n,a_n\in A_n\}$, and let $w_n$ and $w_{n,N}$ be functions on $\mathbb{K}_n$ that are non-negative, l.s.c. and inf-compact on $\mathbb{K}_n$. If $w_{n,N} \uparrow w_n$ as $N \rightarrow \infty$, then \begin{equation} \lim_{N\rightarrow \infty}\min_{a_n \in A_n}w_{n,N}(x_n,a_n) = \min_{a_n \in A_n} w_n(x_n,a_n), \end{equation} for all $x_n \in X_n$. \end{lemma} The next result gives the validity of the convergence of value iteration. \begin{theorem}\label{thm51} Suppose that Assumptions \ref{ass31} and \ref{ass32} are satisfied. Then, for every $n \geq 0$ and $x_n \in X_n$, \begin{equation} V_{n,N}^*(x_n) \uparrow V_n^*(x_n)\;{\mathbb{P}}\textrm{-a.s.\ as 
}N\rightarrow\infty \end{equation} and $V_n^*(x_n)$ is l.s.c. ${\mathbb{P}}$-a.s. \end{theorem} \begin{proof} We obtain $V_{n,N}^*$ by the usual dynamic programming. Indeed, let $J_{N+1}(x_{N+1}) \equiv 0$ for all $x_{N+1} \in X_{N+1}$ a.s. and going backwards in time for $n=N,N-1,\ldots$, let \begin{equation} \label{eqn328} J_{n}(x_{n}) := \inf_{a_n \in A(x_n)} \big( c_n(x_n,a_n) + \gamma\rho_n(J_{n+1}(F_n(x_n,a_n, \xi_n))) \big). \end{equation} Since $J_{N+1}(\cdot) \equiv 0$ is l.s.c., by backward induction, each $J_n$ is l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}_n$-measurable. Moreover, by Lemma \ref{lem31}, for every $t = N-1,...,n$, there exists $\pi_t^N$ such that $\pi_t^N(x_t) \in A_t(x_t)$ attains the minimum in Equation \eqref{eqn328}. Hence $\{ \pi_n^{N},...,\pi_{N-1}^N \}$ is an optimal policy for the $(N-n)$-stage problem. We note that $c_n(x_n,a_n)$ as well as $\gamma\rho_n(J_{n+1}(F_n(x_n,a_n, \xi_n)))$ is l.s.c., ${\cal F}_n$ measurable, inf-compact and non-negative; their sum preserves these properties. Furthermore, $J_n$ is the optimal $(N-n)$-stage cost by construction. Hence, $J_n(x) = V_{n,N}^*(x)$, and since $J_n(x)$ is l.s.c., so is $V_{n,N}^*(x_n)$, with \begin{equation} \label{eqn329} V_{n,N}^*(x_n) = \inf_{a_n \in A(x_n)} \bigg(c_n(x_n,a_n) + \gamma\rho_n(V_{n+1,N}^*(x_{n+1})) \bigg). \end{equation} By the non-negativity assumption on $c_n(\cdot,\cdot)$ for all $n \geq 0$, the sequence $N \mapsto V_{n,N}^*$ is non-decreasing and $V_{n,N}^*(x_n) \leq V_n^*(x_n)$, for every $x_n \in X_n$ and $N > n$. Hence, denoting \begin{equation} v_n(x_n) := \sup_{N>n}V_{n,N}^*(x_n) \textrm{ for all }x_n \in X_n, \end{equation} $v_n$, being a supremum of l.s.c. functions, is itself l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}_n$-measurable. Letting $N\rightarrow \infty$ in \eqref{eqn329}, by Lemma \ref{lem34} we have that \begin{equation} \label{eqn410} v_{n}(x_n) = \inf_{a_n \in A(x_n)} \bigg( c_n(x_n,a_n) + \gamma\rho_n(v_{n+1}(x_{n+1})) \bigg) \end{equation} for all $n \in \mathbb{N}_0$ and $x_n \in X_n$.
Hence, the $v_n$ are solutions of the optimality equations, $v_n = T_nv_{n+1}$, and so by Lemma \ref{lem54}(a), $v_n(x_n) \geq V_n^*(x_n)$. Since also $v_n(x_n) \leq V_n^*(x_n)$ by construction, this gives $v_n(x_n) = V_n^*(x_n)$. Hence, $V_{n,N}^* \uparrow V_n^*$ and $V_n^*$ is l.s.c. \end{proof} Now, we are ready to prove our main theorem. \begin{proof}[Proof of Theorem \ref{thm41}] \begin{itemize} \item[(a)] By Theorem \ref{thm51}, the sequence $(V_n^*)_{n \geq 0}$ is a solution to the optimality equations. By Lemma \ref{lem54}(a), it is the minimal such solution. \item[(b)] By Theorem \ref{thm51}, the functions $V_n^*$ are l.s.c. ${\mathbb{P}}$-a.s. and ${\cal F}_n$-measurable. Therefore, \begin{equation} \label{eqn32} c_n(x_n,a_n) + \gamma\rho_n(V^*_{n+1}(x_{n+1})) \end{equation} is non-negative, l.s.c. ${\mathbb{P}}$-a.s., ${\cal F}_n$-measurable and inf-compact on $\mathbb{K}_n$ for every $n \geq 0$. Thus, the existence of an optimal policy $\pi_n^*$ follows from Lemma \ref{lem31}. Iterating the optimality equation \eqref{eqn33} gives \begin{align} V_n^*(x_n) &= c_n(x_n, \pi_n^*) + \varrho_n\bigg(\sum_{t=n+1}^{N-1}c_t(x_t, \pi_t^*) + V_{N}^*(x_{N})\bigg)\\ &\geq V_{n,N}(x_n,\pi^*). \end{align} Letting $N \rightarrow \infty$, we conclude that $V_n^*(x_n) \geq V_n(x_n,\pi^*)$. But by definition of $V_n^*(x_n)$, we have $V_n^*(x_n) \leq V_n(x_n,\pi^*)$. Hence, $V_n^*(x_n) = V_n(x_n,\pi^*)$, and we conclude the proof. \end{itemize} \end{proof} \subsection{An $\epsilon$-Optimal Approximation to Optimal Value} We note that the value iteration scheme validated in Theorem \ref{thm51} is computationally ineffective for problems with a large horizon $N$, since the dynamic programming equations have to be solved for each time horizon $n \leq N$. To overcome this difficulty, we propose the following methodology, which requires solving the dynamic programming equations of the optimal control problem only once and gives an $\epsilon$-optimal approximation to the original problem.
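When the one-step costs admit a uniform bound $\bar{c}$ (an illustrative assumption beyond what the paper requires), the truncation horizon can even be computed in closed form: the stages beyond $N_0$ contribute at most $\bar{c}\,\gamma^{N_0+1}/(1-\gamma)$ to the discounted objective. A minimal sketch of this horizon computation:

```python
def truncation_horizon(c_bar, gamma, eps):
    """Smallest N0 with c_bar * gamma**(N0 + 1) / (1 - gamma) < eps.
    Under a uniform cost bound c_bar (an assumption made only for
    this sketch), stages beyond N0 contribute less than eps to the
    discounted objective, so solving the dynamic programming
    equations up to N0 and following any admissible policy
    afterwards yields an eps-optimal policy."""
    assert 0 < gamma < 1 and eps > 0 and c_bar > 0
    n0 = 0
    while c_bar * gamma ** (n0 + 1) / (1 - gamma) >= eps:
        n0 += 1
    return n0
```

The loop terminates because $\gamma^{n}\rightarrow 0$; a logarithm would give the same answer in one step, but the loop keeps the sketch free of edge cases.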
By Assumption \ref{ass41}, there exist an admissible policy $\pi^0 \in \Pi_{\mathrm{ad}}$ and $N_0$ such that \begin{equation} \label{eqn470} \varrho_{N_0}\Big(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^0_n)\Big) < \epsilon \;{\mathbb{P}} \textrm{-a.s.} \end{equation} Then, for the optimal policy $(\pi^*_n)_{n\geq0}$ justified in Theorem \ref{thm41}, monotonicity of ${\varrho}$ gives \begin{equation} {\varrho}_{N_0}\Big(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^*_n)\Big) \leq {\varrho}_{N_0}\Big(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^0_n)\Big) \leq \epsilon\;{\mathbb{P}} \textrm{-a.s.}, \end{equation} since the optimal policy yields a smaller tail cost than the policy in Equation \eqref{eqn470}. Hence, by solving the optimal control problem up to time $N_0$ via dynamic programming and combining the resulting decision rules $(\pi^*_0, \pi^*_1,\pi^*_2,...,\pi^*_{N_0})$ with the decision rules of $\pi^0$ from time $N_0+1$ onwards, we obtain an $\epsilon$-optimal policy. Hence, we have proved the following theorem. \bethm\label{thm42} Suppose that Assumptions \ref{ass31}, \ref{ass32} and \ref{ass41} hold. Let $\pi^0 \in \Pi_{\textrm{ad}}$ be the policy in Assumption \ref{ass41} such that \begin{equation} \varrho_{N_0}\big(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^0_n)\big) < \epsilon\;{\mathbb{P}} \textrm{-a.s.} \end{equation} Then, we have for the optimal policy \begin{equation} \varrho_{N_0}\big(\sum_{n=N_0+1}^\infty c_n(x_n, \pi^*_n)\big) \leq \epsilon\;{\mathbb{P}} \textrm{-a.s.} \end{equation} Hence $\pi^* = \{\pi^*_0, \pi^*_1,\pi^*_2,...,\pi^*_{N_0}, \pi^0_{N_0+1},\pi^0_{N_0+2},\pi^0_{N_0+3},\dots \}$ is an $\epsilon$-optimal policy for the original problem. \end{theorem} \section{Applications} \subsection{An Optimal Investment Problem} In this section, we are going to study a variant of mean-variance utility optimization (see e.g. \cite{BMZ}). The framework is as follows.
We consider a financial market on an infinite time horizon $[0,\infty)$. The market consists of a risky asset $S_n$ and a riskless asset $R_n$, whose dynamics are given by \begin{align*} &S_{n+1} - S_n = \mu S_n + \sigma S_n \xi_n\\ &R_{n+1} - R_n = r R_n \end{align*} with $R_0 = 1, S_0=s_0$, where $(\xi_n)_{n\geq 0}$ are i.i.d.\ standard normal random variables having distribution function $\Phi$ on $\mathbb{R}$ with ${\cal Z} = L^1(\mathbb{R},{\mathfrak B}(\mathbb{R}),\Phi)$ and $\mu,r,\sigma > 0$. We consider a self-financing portfolio composed of $S$ and $R$. We let $(\widetilde{\pi}_n)_{n \geq 0}$ denote the amount of money invested in the risky asset $S_n$ at time $n$ and $X_n$ denote the investor's wealth at time $n$. The remaining wealth $X^{\widetilde{\pi}}_n - \widetilde{\pi}_n$ is held in the riskless asset, so that \beal X^{\widetilde{\pi}}_{n+1}-X^{\widetilde{\pi}}_n &= \widetilde{\pi}_n\frac{S_{n+1} - S_n}{S_n} + (X^{\widetilde{\pi}}_n - \widetilde{\pi}_n) r. \eal For each $n \geq 0$, we denote $\widetilde{\pi}_n = X^{\widetilde{\pi}}_n \pi_n$ so that $\pi_n$ stands for the fraction of wealth that is put in the risky asset. Hence, the wealth dynamics are governed by \begin{align} X^\pi_{n+1}-X^\pi_n &= X^\pi_n\big[r + (\mu - r)\pi_n + \sigma \pi_n \xi_n\big] \end{align} with initial wealth $X^\pi_0 = x_0$. We further assume $|\pi_n| \leq C$ for some constant $C > 0$ at each time $n \geq 0$. The particular coherent risk measure used in this example is the mean-deviation risk measure that is in \textit{static setting} defined on ${\cal Z}$ as \begin{equation} {\varrho}(X) := {\mathbb{E}}^{{\mathbb{P}}}[X] + \gamma g(X), \end{equation} where $\gamma > 0$ and \begin{equation} g(X) := {\mathbb{E}}^{{\mathbb{P}}}\big(|X - {\mathbb{E}}^{{\mathbb{P}}}[X]| \big), \end{equation} for $X \in {\cal Z}$, where ${\mathbb{E}}^{{\mathbb{P}}}$ stands for the expectation taken with respect to the measure ${\mathbb{P}}$. Hence $\gamma$ determines our \textit{risk averseness} level.
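On an equally weighted finite sample, the static mean-deviation measure can be computed directly from its definition, and the value can be cross-checked against its representation as a supremum over perturbed densities $m = 1 + h - {\mathbb{E}}[h]$ with $\|h\|_\infty \leq \gamma$ (made precise in the next display of the text); that the supremum is attained at $h = \gamma\,\mathrm{sign}(X - {\mathbb{E}}[X])$ is a standard fact used here as an assumption. A minimal sketch:

```python
import numpy as np

def mean_deviation_risk(x, gamma):
    """rho(X) = E[X] + gamma * E|X - E[X]| on an equally weighted sample."""
    x = np.asarray(x, dtype=float)
    return float(x.mean() + gamma * np.abs(x - x.mean()).mean())

def dual_mean_deviation(x, gamma):
    """The same quantity via sup_m E[m X] over densities
    m = 1 + h - E[h] with ||h||_inf <= gamma; the maximiser is
    h = gamma * sign(X - E[X])."""
    x = np.asarray(x, dtype=float)
    h = gamma * np.sign(x - x.mean())
    m = 1.0 + h - h.mean()          # a valid density: m.mean() == 1
    return float((m * x).mean())
```

For $\gamma = 0$ both reduce to the plain expectation; for $\gamma \leq 1/2$ the densities $m$ stay non-negative, matching the monotonicity restriction discussed next.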
For ${\varrho}$ to satisfy the properties of a coherent risk measure, it is necessary that $\gamma$ is in $[0,1/2]$. In fact, $\gamma$ being in $[0,1/2]$ is both necessary and sufficient for ${\varrho}$ to satisfy monotonicity (see \cite{key-14}). Hence, for fixed $0 \leq \gamma \leq 1/2$ with $X \in {\cal Z}$, we have that \begin{equation} {\varrho}(X) = \sup_{m \in {\mathfrak A}}\langle m, X{\rangle}, \end{equation} where ${\mathfrak A}$ is a subset of the probability measures, that are of the form (identifying them with their corresponding densities) \beal \label{eqn662} &{\mathfrak A} = \bigg\{ m \in L^\infty(\mathbb{R}, {\cal B}(\mathbb{R}), \Phi): \int_\mathbb{R} m(x) d\Phi(x) = 1,\\ &\indeq m(x) = 1 + h(x) - \int_\mathbb{R} h(x)d\Phi(x),\; \| h \|_\infty \leq \gamma\;\Phi\textrm{-a.s.} \bigg\} \eal for some $h \in L^\infty(\mathbb{R}, {\cal B}(\mathbb{R}), \Phi)$. Then, we \textit{define} for each time $n \geq 0$, the dynamic correspondent of ${\varrho}$ as $\rho_n: {\cal Z}_{n+1} \rightarrow {\cal Z}_n$ with \begin{align} \rho_n (X_{n+1}) &= \sup_{m_{n+1} \in {\mathfrak A}_{n+1} }\langle m_{n+1}, X_{n+1}\rangle , \end{align} as in Equations \eqref{eqn27}, \eqref{eqn28} and \eqref{eqn2140}, using $(\mathbb{R},{\cal B}(\mathbb{R}),\Phi)$. Hence, the controlled one step conditional risk mapping has the representation \begin{equation} \sup_{m_{n+1} \in {\mathfrak A}_{n+1} } \langle m_{n+1},X^\pi_{n+1} \rangle , \end{equation} and our optimization problem reads as \begin{equation} \label{eqn567} \min_{\pi \in \Pi_{\mathrm{ad}}} \sup_{m_{n+1} \in {\mathfrak A}_{n+1} } \langle m_{n+1},X^\pi_{n+1} \rangle , \end{equation} where ${\mathfrak A}_{n+1}$ are the sets of conditional probabilities analogous to Equation \eqref{eqn662}, with $\Pi_{\textrm{ad}}$ as defined in Definition \ref{defn230}.
Namely, ${\mathfrak A}_{n+1}$ is a subset of the conditional probability measures at time $n+1$ that are of the form (identifying them with their corresponding densities) \beal \label{eqn565} &{\mathfrak A}_{n+1} = \bigg\{ m_{n+1} \in L^\infty(\Omega, {\cal F}_{n+1}, {\mathbb{P}}^\pi_{n+1}): \int_\Omega m_{n+1} d{{\mathbb{P}}}^\pi_{n+1} = 1,\\ &\indeq m_{n+1} = 1 + h - \int_\Omega h d{{\mathbb{P}}}^\pi_{n+1},\; \| h \|_\infty \leq \gamma\;{\mathbb{P}}^\pi_{n}\textrm{-a.s.} \bigg\} \eal for some $h \in L^\infty(\Omega, {\cal F}_{n+1}, {\mathbb{P}}^\pi_{n+1})$, where ${\mathbb{P}}^\pi_{n+1}$ stands for the conditional probability measure on $\Omega$ at time $n+1$ as constructed in \eqref{eqn2150}. Our one-step cost functions are $c_n(x_n,a_n) = x_n$ for $n \geq 0$, with discount factor $0 < \gamma < 1$; they are l.s.c. (in fact continuous) in $(x_n,a_n)$ for $n \geq 0$. Hence, starting with initial wealth at time 0, denoted by $x_0$, the investor's control problem reads as \beal \label{exprob} &x_0 + \min_{\pi \in \Pi_{\mathrm{ad}}} \varrho_0\bigg( \sum_{n = 1}^\infty X^\pi_n \bigg) \\ &\triangleq x_0 + \min_{\pi \in \Pi_{\mathrm{ad}}} \lim_{N\rightarrow \infty} \bigg( c_0(x_0,a_0) + \gamma\rho_0(c_1(x_1,a_1) + \ldots+\gamma \rho_{N-1}(c_N(x_N,a_N) )\ldots)\bigg). \eal We note that $\Pi_{\mathrm{ad}}$ is not empty, so that our example satisfies Assumption \ref{ass41}. Indeed, by choosing $\pi_n \equiv 0$ for $n \geq 0$, i.e.\ investing all the current wealth into the riskless asset $R_n$ for $n\geq 0$ (taking, for the bound, $r=0$ so that $x_n \equiv x_0$), we have that \begin{equation} \varrho_0\bigg( \sum_{n = 0}^\infty \gamma^n x_0 \bigg) = \frac{x_0}{1 - \gamma} < \infty. \end{equation} Hence, as in Theorem \ref{thm42}, we find $N_0$ such that \begin{equation} x_0 \sum_{n = N_0}^\infty \gamma^n < \epsilon. \end{equation} Thus, we write the corresponding \textit{robust} dynamic programming equations as follows.
Starting with $V^*_{N_0+1} \equiv 0$, for $n = N_0, N_0-1, \ldots, 1$ we have, by Equation \eqref{eqn567}, \begin{align} V^*_{n}(X^\pi_{n}) &= \min_{|\pi_n| \leq C} \Big( X^\pi_n + \gamma \rho_{n}(V^*_{n+1}(X^\pi_{n+1})) \Big) \\ &= \min_{|\pi_n| \leq C} \Big( X_n^\pi + \gamma \sup_{m_{n+1} \in {\mathfrak A}_{n+1}}\langle m_{n+1}, V^*_{n+1}(X_{n+1}^\pi) \rangle \Big). \end{align} Going backwards iteratively, at the first stage the problem to solve is \begin{align} V^*_0(x_0) &= \min_{ |\pi_0| \leq C}\Big( x_0 + \gamma\rho_0( V^*_1( X^\pi_1) )\Big)\\ &= x_0 + \gamma \min_{|\pi_0| \leq C}\sup_{m_1 \in {\mathfrak A}_{1}}\langle m_1, V^*_1( X_1^\pi) \rangle. \end{align} Hence, the corresponding policy \begin{equation} \widetilde{\pi} = \{ \pi^*_0, \pi^*_1,\pi^*_2,\ldots,\pi^*_{N_0},0,0,0,\ldots \} \end{equation} is $\epsilon$-optimal, with value $V_0(x_0,\widetilde{\pi})$ within $\epsilon$ of the optimal value of our example optimization problem \eqref{exprob}. \subsection{The Discounted LQ-Problem} We consider the linear-quadratic regulator problem on an infinite horizon. We refer the reader to \cite{HL} for its study using expectation performance criteria. Instead of the expected value, we use the ${\sf AV@R}$ operator to evaluate total discounted performance. For $n\geq 0$, we consider the scalar, linear system \begin{equation} x_{n+1} = x_n + a_n + \xi_n, \end{equation} with $x_0$ given, where the disturbances $(\xi_n)_{n \geq 0}$ are independent, identically distributed random variables on ${\cal Z}_n^2 = L^2(\mathbb{R},{\cal B}(\mathbb{R}),{\mathbb{P}}^n)$ with mean zero and variance ${\mathbb{E}}^{{\mathbb{P}}^n}[\xi^2_n] = \sigma^2 < \infty$.
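In the normalisation used below for the set ${\mathfrak A}_n$ (densities bounded by $1/\alpha$), the sample ${\sf AV@R}_\alpha$ is the mean of the worst $\alpha$-fraction of outcomes, and the bound ${\sf AV@R}_\alpha(Z) \leq \frac{1}{\alpha}{\mathbb{E}}[Z]$ for $Z \geq 0$ follows immediately. A minimal sketch on an equally weighted sample (the discrete estimator is an illustrative assumption, not the conditional construction of the text):

```python
import numpy as np

def avar(x, alpha):
    """Sample AV@R_alpha in the dual normalisation with densities
    0 <= m <= 1/alpha and E[m] = 1: the supremum of E[m X] puts
    mass 1/alpha on the worst alpha-fraction, i.e. AV@R is the
    mean of the largest ceil(alpha * n) of n equally weighted
    outcomes."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(np.ceil(alpha * len(x))))
    return float(x[-k:].mean())
```

As $\alpha \rightarrow 1$ this recovers the plain mean, and for small $\alpha$ it approaches the worst-case outcome.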
The control problem reads as \beal \label{orn2} &\min_{\pi \in \Pi_{\mathrm{ad}}} \varrho_0\bigg( \sum_{n = 0}^\infty \big( x^2_n + a^2_n \big) \bigg) \\ &\triangleq \min_{\pi \in \Pi_{\mathrm{ad}}} \lim_{N\rightarrow \infty} \bigg( (x^2_0+ a^2_0) + \gamma\rho_0( (x^2_1+ a^2_1) \\ &\indeq + \ldots +\gamma\rho_{N-1}((x^2_N+ a^2_N) )\ldots)\bigg), \eal where $\rho_n(\cdot): {\cal Z}_{n+1}^2 \rightarrow {\cal Z}_{n}^2$ is the dynamic ${\sf AV@R}_\alpha: {\cal Z}_{n+1}^2 \rightarrow {\cal Z}_{n}^2$ operator defined as \beal \rho_n(Z) &\triangleq \sup_{m_{n+1} \in {\mathfrak A}_{n+1}} \langle m_{n+1},Z \rangle , \eal with \beal \label{eqn05999} {\mathfrak A}_n = \big\{ &m_n \in L^{\infty}(\Omega,{\cal F}_n,{\mathbb{P}}^\pi_n): \int_\Omega m_n d{\mathbb{P}}^\pi_n = 1, \\ &\indeq 0 \leq \lVert m_n \rVert_\infty \leq \frac{1}{\alpha}, {\mathbb{P}}^\pi_{n-1}-\textrm{a.s.} \big\}. \eal We note that $\Pi_{\textrm{ad}}$ is not empty. Indeed, choose $\pi_n \equiv 0$ for $n \geq 0$ so that \begin{equation} x_n = x_0 + \sum_{i=0}^{n-1}\xi_i, \end{equation} with \beal \varrho_0\Big(\sum_{n=0}^\infty x^2_n \Big) &\leq \frac{2x_0^2}{1-\gamma} + 2\sum_{n=0}^\infty \gamma^n\, {\sf AV@R}_\alpha\Big(\big(\textstyle\sum_{i=0}^{n-1}\xi_i\big)^2\Big) \\ &\leq \frac{2x_0^2}{1-\gamma} + \frac{2}{\alpha}\sum_{n=0}^\infty \gamma^n\, {\mathbb{E}}^{{\mathbb{P}}}\Big[\big(\textstyle\sum_{i=0}^{n-1}\xi_i\big)^2\Big]\\ &= \frac{2x_0^2}{1-\gamma} + \frac{2\sigma^2}{\alpha}\sum_{n=0}^\infty n\gamma^n\\ &< \infty, \eal where we used Equation \eqref{eqn05999} in the second inequality and the independence of the $(\xi_i)$ in the last equality. Hence, we find $N_0$ such that \begin{equation} \frac{2\sigma^2}{\alpha}\sum_{n=N_0}^\infty n\gamma^n < \epsilon.
\end{equation} Starting with $J_{N_0+1} \equiv 0$, the corresponding $\epsilon$-optimal policy for $n=0,1,\ldots,N_0$ is found via \beal J_n(x_n) = \min_{|a_n|\leq C}\bigg( (x^2_n + a^2_n) + \gamma {\sf AV@R}^n_{\alpha}\big(J_{n+1}(x_{n} + a_{n} + \xi_n)\big) \bigg), \eal so that at the final stage, we have \beal J_0(x_0) &= \min_{ |a_0| \leq C}\Big( x_0^2 + a_0^2 + \gamma{\sf AV@R}_\alpha( J_1( x_0 + a_0 + \xi_0) )\Big)\\ &= \min_{|a_0| \leq C}\Big( x_0^2 + a_0^2 + \gamma \sup_{m_1 \in {\mathfrak A}_{1}}\langle m_1, J_1( x_0 + a_0 + \xi_0) \rangle \Big), \eal where ${\mathfrak A}_1$ is as defined in Equation \eqref{eqn05999}. Thus, the corresponding policy \begin{equation} \widetilde{\pi} = \{ \pi^*_0, \pi^*_1,\pi^*_2,\ldots,\pi^*_{N_0},0,0,0,\ldots \} \end{equation} is $\epsilon$-optimal, with value within $\epsilon$ of the optimal value of problem \eqref{orn2}.
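The backward recursion above can be carried out numerically by discretising state and control and estimating ${\sf AV@R}_\alpha$ from sampled disturbances. A rough sketch, in which the grid ranges, sample size, noise scale and nearest-neighbour interpolation are all illustrative assumptions:

```python
import numpy as np

def avar(x, alpha):
    # mean of the worst alpha-fraction (densities bounded by 1/alpha)
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(np.ceil(alpha * len(x))))
    return x[-k:].mean()

def lq_avar_dp(N0, gamma, alpha, C=1.0):
    """Backward recursion J_n(x) = min_{|a|<=C} x^2 + a^2
    + gamma * AV@R_alpha(J_{n+1}(x + a + xi)) on a state grid,
    starting from J_{N0+1} identically 0.  Returns the stage-0
    value function and greedy control on the grid."""
    rng = np.random.default_rng(0)
    xs = np.linspace(-3.0, 3.0, 61)          # state grid
    acts = np.linspace(-C, C, 21)            # control grid
    noise = rng.normal(0.0, 0.3, size=200)   # sampled disturbances
    J = np.zeros_like(xs)                    # J_{N0+1} == 0
    a_star = np.zeros_like(xs)
    for _ in range(N0 + 1):                  # n = N0, N0-1, ..., 0
        Jn = np.empty_like(xs)
        for i, x in enumerate(xs):
            vals = []
            for a in acts:
                nxt = np.clip(x + a + noise, xs[0], xs[-1])
                idx = np.clip(np.searchsorted(xs, nxt), 0, len(xs) - 1)
                vals.append(x * x + a * a + gamma * avar(J[idx], alpha))
            j = int(np.argmin(vals))
            Jn[i], a_star[i] = vals[j], acts[j]
        J = Jn
    return J, a_star
```

As expected for a regulator, the greedy control near the origin is close to zero and the value function is smallest there.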
\subsubsection*{Literature} The foremost paper on costly learning of prices is \cite{diamond1971}, where competing firms set the monopoly price. A monopoly price or above is also found in \cite{diamond1987,axell1977,reinganum1979,klemperer1987} and \cite{garcia+2017}. A number of solutions to the Diamond paradox have been proposed. With a positive fraction of consumers having zero learning cost, as in \cite{butters1977,stahl1996,klemperer1987} and \cite{benabou1993}, firms put a positive probability on the competitive price. A similar idea to zero learning cost is that consumers observe multiple prices with positive probability, for example by seeing price advertisements \citep{salop+stiglitz1977,burdett+judd1983,robert+stahl1993}. If consumers have private taste shocks, then that generates search and below-monopoly pricing \citep{wolinsky1986,anderson+renault1999,zhou2014}. Prices below the monopoly level also occur with repeat purchases, as in \cite{salop+stiglitz1982,bagwell+ramey1992}. The current paper does not rely on zero learning cost, multiple free price observations, taste shocks or repeat purchases. To the author's knowledge, this work is the first to combine signalling and consumer search costs. The informative price difference between firm types endogenously gives consumers the incentive to learn, in contrast with the exogenous incentive created by taste shocks or a zero search cost. Multiple free price observations constitute exogenous learning, also differing from an endogenous motivation to learn. In the current work, the incentive for firm types to set different and low prices is endogenous, driven by consumer beliefs responding to the price. This differs from \cite{salop+stiglitz1982} where firms are indifferent between selling two units at a lower price or one unit at a higher, and these prices are determined by the exogenous willingness to pay of consumers. 
In \cite{bagwell+ramey1992}, the motivation for a low price is that in the infinitely repeated game, consumers start to boycott firms that raise their price. This motivation is endogenous, but different from the current paper. Downward\footnote{As opposed to the upward price signalling (higher-quality firm sets a higher price) studied by the large literature following \cite{milgrom+roberts1986}. } price signalling by a single firm has been studied in \cite{shieh1993}. A similar idea is in \cite{simester1995}, where multiproduct firms (whose prices for all products are positively correlated) signal by a low price on one product. In \cite{rhodes2015}, a multiproduct monopolist stocking more products (better for the consumers) charges lower prices. The result is similar to a higher-quality firm charging less, but the mechanism is different: adding a product attracts additional customers with relatively low valuations for the other products. When the average valuation of customers falls, the monopoly price falls. The receivers of the price signal are the consumers in this paper, which differs from limit pricing (as in \cite{milgrom+roberts1982b} and the literature following) where the receivers are potential entrants. The next section sets up the model. Section~\ref{sec:existence} constructs an equilibrium with near-competitive pricing in a market with consumer search costs, and shows that this equilibrium is the unique one that survives the Intuitive Criterion of \cite{cho+kreps1987}. The robustness of the results to relaxing various assumptions is discussed in Section~\ref{sec:robust}. \section{Price competition with costly learning of prices} \label{sec:setup} There are two firms indexed by $i\in\set{X,Y}$, each with a type $\theta\in\set{G,B}$ (good and bad, respectively). Each firm knows its own type, but not that of the other. Types are i.i.d.\ with $\Pr(G)=\mu_0\in(0,1)$.
There is a continuum of consumers of mass $1$ with types $v\in[0,\overline{v}]$ distributed according to the strictly positive continuous pdf $f_v$, with cdf $F_v$, independently of firm types. Firms and consumers know their own type, but only have a common prior belief over the types of others. The timeline of the game is as follows. \begin{enumerate} \setlength\itemsep{0.0pt} \item Nature draws independent types for firms and consumers, and assigns half the consumers to one firm, half to the other, independently of types. Each player observes his own type, but not the types of the others. \item Firms simultaneously set prices. \item Each consumer observes the price of his assigned firm and chooses either to buy from this firm, learn the price of the other firm, or leave the market. \item Each consumer who chose to learn observes both firms' prices and chooses either to buy from his assigned firm, buy from the other firm, or leave the market. \end{enumerate} A type $G$ firm has marginal cost $c_{G}$ normalised to $0$, and type $B$ has $c_{B}>0$. The quality of a type $G$ firm is higher. Specifically, a type $v$ consumer values firm type $B$'s product at $v$ and $G$'s product at $h(v)\geq v$, with $h'>1$, $h(\overline{v})<\infty$. To ensure that demand for $B$'s good is positive, assume $\overline{v}>c_{B}$. Consumers and firms are risk-neutral. Each consumer has unit demand. After the firms' cost and quality are determined, the firms simultaneously set prices $P_{X},P_{Y}\in S_P:=\{0,m,2m,\ldots,Nm\}$, where $m>0$ is the smallest monetary unit.\footnote{ Using a discrete price grid avoids problems with equilibrium existence (explained in Section~\ref{sec:robust}), which are not the focus of this paper. An alternative to the grid is to restrict prices to $ \mathbb{R}_+\setminus (c_{B}-\rho,c_{B}+\rho)$ for some $\rho>0$. 
} Assume $c_{B}=km\geq h(0)-m$ for some $k\in\mathbb{N}$ (costs are measured in terms of the minimal monetary unit, and not all consumers buy at a price just below the bad type's cost). Assume $Nm\geq h(\overline{v})$. Prices above $Nm$ are unavailable w.l.o.g., because no consumer buys at any $P>h(\overline{v})$. For a set $S$, denote the set of probability distributions on $S$ by $\Delta S$. A behavioural strategy of firm $i$ is $\sigma_i:\set{G,B}\rightarrow \Delta S_P$, so $\sigma_i(\theta)(P)$ is the probability that type $\theta$ of firm $i$ puts on price $P$. A consumer sees the price that his assigned firm sets and can learn the price of the other firm at cost $c_{\ell}>0$. Define $c_{B}^{+}:=c_{B}+m$. Assume that $c_{\ell}\leq \mu_0 (h(c_{B}^{+})-c_{B})$, i.e.\ the learning cost is small relative to the prior probability of the good type firm and the valuation difference between consumer type $c_{B}^{+}$ for a good type firm and consumer type $c_{B}$ for a bad type firm. Assume that $m<\min\{c_{\ell},\;\frac{\mu_0 c_{B}}{1-\mu_0},\;\overline{v}-c_{B}\}$, i.e.\ the minimal monetary unit is small relative to the costs, the prior, and the maximal valuation for the bad type. The cost difference $c_{B}-0$ between the types, as well as the quality difference $h(0)-0$ may be small, provided the learning cost and minimal monetary unit are even smaller. After seeing the price of his assigned firm, a consumer decides whether to buy from this firm (denoted $b$), learn the other firm's price ($\ell$) or not buy at all ($n$). Upon learning the price of the other firm, the consumer decides whether to buy from firm $X$ (denoted $b_{X}$), firm $Y$ ($b_{Y}$) or not buy at all ($n_{\ell}$). 
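The parameter restrictions above can be checked mechanically for a candidate specification; a small sketch with a purely hypothetical parameterisation ($h(v)=2v+1$, $\overline{v}=2$, $m=0.01$, and so on; none of these numbers come from the text):

```python
def check_assumptions(m, k, N, mu0, c_ell, vbar, h):
    """Verify the model's parameter restrictions for a candidate
    specification: c_B = k*m, c_B+ = c_B + m."""
    cB = k * m
    cBp = cB + m
    return {
        "cB >= h(0) - m":        cB >= h(0) - m,
        "grid covers h(vbar)":   N * m >= h(vbar),
        "positive demand for B": vbar > cB,
        "learning cost small":   c_ell <= mu0 * (h(cBp) - cB),
        "monetary unit small":   m < min(c_ell, mu0 * cB / (1 - mu0), vbar - cB),
    }

# hypothetical parameterisation (not from the text): h(v) = 2v + 1
example = check_assumptions(m=0.01, k=100, N=500, mu0=0.5,
                            c_ell=0.1, vbar=2.0, h=lambda v: 2 * v + 1)
```

Each entry of the returned dictionary corresponds to one of the displayed assumptions, so a violated restriction is immediately visible.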
A consumer's behavioural strategy consists of $\sigma_1:[0,\overline{v}]\times S_P\rightarrow\Delta\set{b,n,\ell}$ and $\sigma_2:[0,\overline{v}]\times S_P^2\rightarrow \Delta\set{b_{X},b_{Y},n_{\ell}}$, so that e.g.\ $\sigma_2(v,P_i,P_j)(b_{j})$ is the probability that a consumer type $v$ initially at firm $i$ buys from $j\neq i$ after learning $P_j$. A type $\theta$ firm's \emph{ex post} payoff if mass $D$ of consumers buy from it at price $P$ is $(P-c_{\theta})D$. Assume that the full-information monopoly profit function $P[1-F_v(h^{-1}(P))]$ of firm type $G$ strictly increases in $P$ on $[0,c_{B}+m]$, so that the full-information monopoly price $P_{G}^{m}$ of $G$ is strictly above $c_{B}$ (this is relaxed in Section~\ref{sec:robust}). A consumer's posterior belief about firm $i$ after observing its price $P_i$ and expecting the firm to choose strategy $\sigma_i^*$ is \begin{align} \label{mu} \mu_i(P_i) :=\frac{\mu_0\sigma_i^*(G)(P_i)}{\mu_0\sigma_i^*(G)(P_i) +(1-\mu_0)\sigma_i^*(B)(P_i)} \end{align} whenever $\mu_0\sigma_i^*(G)(P_i) +(1-\mu_0)\sigma_i^*(B)(P_i)>0$, and arbitrary otherwise. The gain from trade that consumer type $v$ expects from buying from firm $i$ at price $P$ is denoted $w(v,i,P):=\mu_i(P)h(v)+(1-\mu_i(P))v-P$. The solution concept used is perfect Bayesian equilibrium (PBE), hereafter simply called equilibrium. Later, a unique equilibrium is selected using the Intuitive Criterion of \cite{cho+kreps1987}. 
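The posterior in Equation~\eqref{mu} is a direct Bayes computation; a minimal sketch, with the off-path case (zero probability of the observed price) flagged as arbitrary:

```python
def posterior(mu0, sG, sB):
    """Consumer posterior that firm i is the good type after seeing
    a price P: sG and sB are the probabilities sigma_i*(G)(P) and
    sigma_i*(B)(P) that each type sets the observed price."""
    denom = mu0 * sG + (1 - mu0) * sB
    if denom == 0:
        return None      # off-path price: the belief is arbitrary
    return mu0 * sG / denom
```

In a separating profile (each type sets a distinct price with probability one), the posterior jumps to $0$ or $1$, which is exactly the belief structure used in the equilibrium constructed in the next section.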
\begin{defi} \label{def:mix} An equilibrium consists of $\sigma_{X}^*,\sigma_{Y}^*,\sigma_1^*,\sigma_2^*$ and $\mu_{X},\mu_{Y}:S_{P}\rightarrow [0,1]$ satisfying the following for $\theta\in\set{G,B}$, $v\in[0,\overline{v}]$, $i,j\in\set{X,Y}$, $i\neq j$: \begin{enumerate}[(a)] \item if $w(v,i,P_{i})\geq \max\set{0,\;w(v,j,P_{j})}$, then $\sigma_2^*(v,P_{i},P_{j})(b_i)=1$, and if in addition $w(v,i,P_{i})> w(v,j,P_{j})$, then $\sigma_2^*(v,P_{j},P_{i})(b_i)=1$, \item if $\max\set{w(v,i,P_{i}),\; w(v,j,P_{j})}<0$, then $\sigma_2^*(v,P_{i},P_{j})(n_{\ell})=1$, \item if $w(v,i,P_{i})> \max\{0,\; \sum_{P_{j}\in S_P}\max\{w(v,i,P_{i}),\;w(v,j,P_{j})\}[\mu_0\sigma_j^*(G)(P_{j}) +(1-\mu_0)\sigma_j^*(B)(P_{j})] -c_{\ell}\}$, then $\sigma_1^*(v,P_{i})(b)=1$, \item if $w(v,i,P_{i})\leq \sum_{P_{j}\in S_P}\max\{0,\;w(v,i,P_{i}),\;w(v,j,P_{j})\}[\mu_0\sigma_j^*(G)(P_{j}) +(1-\mu_0)\sigma_j^*(B)(P_{j})] -c_{\ell}\geq 0$, then $\sigma_1^*(v,P_{i})(\ell)=1$, \item if $\max\{w(v,i,P_{i}),\; \sum_{P_{j}\in S_P}\max\{0,\;w(v,j,P_{j})\}[\mu_0\sigma_j^*(G)(P_{j}) +(1-\mu_0)\sigma_j^*(B)(P_{j})] -c_{\ell}\}< 0$, then $\sigma_1^*(v,P_{i})(n)=1$, \item if $\sigma_i^*(\theta)(P_{i})>0$, then $P_{i}\in\arg\max_{P\in S_P} (P-c_{\theta})D_i(P)$, where \begin{align} \label{demand} &D_i(P) :=\frac{1}{2}\int_0^{\overline{v}} \sum_{P_{j}\in S_P} \{\sigma_1^*(v,P)(b) +\sigma_1^*(v,P)(\ell)\sigma_2^*(v,P,P_{j})(b_i) \\&\notag+\sigma_1^*(v,P_{j})(\ell)\sigma_2^*(v,P_{j},P)(b_i)\}[\mu_0\sigma_j^*(G)(P_j) +(1-\mu_0)\sigma_j^*(B)(P_j)] dF_v(v), \end{align} \item if $\sigma_i^*(G)(P)>0$ or $\sigma_i^*(B)(P)>0$, then $\mu_i(P)$ is derived from~(\ref{mu}). \end{enumerate} \end{defi} The equilibrium profit of type $\theta$ of firm $i$ is denoted $\pi_{i\theta}^*$; it equals $(P-c_{\theta})D(P)$ for any $P$ s.t.\ $\sigma_i^*(\theta)(P)>0$. Some tie-breaking rules are built into the equilibrium definition, e.g.\ a consumer indifferent between $b$ and $\ell$ chooses $\ell$. 
The results remain substantially the same if the tie-breaking rules are modified, as discussed in Section~\ref{sec:robust}. The next section constructively proves equilibrium existence by guessing and verifying. \section{Equilibrium} \label{sec:existence} This section constructs an equilibrium in which consumers put probability one on a firm being the good type if the price is at most the bad type's cost, and probability one on the bad type otherwise. The good type firm sets a price equal to the bad type's cost. The bad type's price is its cost plus $m$. A consumer initially facing a price no greater than the bad type's cost either buys (when his valuation for the good type is at least the price) or leaves the market. A consumer who initially sees a price strictly above the bad type's cost learns (when his expected gain from buying at the other firm is at least $c_{\ell}$) or leaves the market. After learning, all consumers buy from the lower-priced firm or leave the market. The formal definition of the \textbf{guessed equilibrium} is the following: \begin{enumerate} \item Beliefs: $P\leq c_{B}\Rightarrow\mu_i(P)=1$ and $P> c_{B}\Rightarrow \mu_i(P)=0$ for $i\in\set{X,Y}$. \item Each firm's type $G$ sets price $c_{B}$ and type $B$ sets $c_{B}^{+}$. \item If $\mu_i(P)=1$, then $h(v)\geq P \Rightarrow \sigma_1^*(v,P)(b)=1$ and $h(v)< P\Rightarrow \sigma_1^*(v,P)(n)=1$. \item If $\mu_i(P)=0$ and $\mu_0 (h(v)-c_{B})+(1-\mu_0)(v-c_{B}^{+})-c_{\ell}\geq0$, then $\sigma_1^*(v,P)(\ell)=1$. \\If $\mu_i(P)=0$ and $\mu_0 (h(v)-c_{B})+(1-\mu_0)(v-c_{B}^{+})-c_{\ell}< 0$, then $\sigma_1^*(v,P)(n)=1$. \item If $w(v,i,P_{i})\geq \max\set{0,\;w(v,j,P_{j})}$, then $\sigma_2^*(v,P_{i},P_{j})(b_i)=1$, and if in addition $w(v,i,P_{i})> w(v,j,P_{j})$, then $\sigma_2^*(v,P_{j},P_{i})(b_i)=1$. If $\max\set{w(v,i,P_{i}),\; w(v,j,P_{j})}<0$, then $\sigma_2^*(v,P_{i},P_{j})(n_{\ell})=1$. \end{enumerate} Part 1 of the guessed equilibrium includes Definition~1(g) and also specifies beliefs at off-path prices. 
Beliefs are consistent with Bayes' rule~(\ref{mu}). Part 2 specialises Definition~1(f) to the guessed equilibrium. Parts 3--5 are simply the rewriting of Definition~1(a)--(e). Appendix~\ref{sec:existenceproof} proves that no player can profitably deviate from the guessed equilibrium. The idea of the proof is as follows. Consumers are clearly best responding to their belief, which is consistent with firm strategies. The bad type does not price below $c_{B}$, because it is weakly dominated by $c_{B}^{+}$. If consumers at a bad type learn and the other firm is the good type, then all consumers leave the bad type. Otherwise, the two bad types are in Bertrand competition over the consumers who learn. So the bad types undercut each other until pricing at $c_{B}^{+}$. A good type does not increase price above $c_{B}$, because the resulting fall in belief reduces expected profit below that obtained at price $c_{B}$. The reason is twofold. If the other firm is the good type, then all consumers leave. If the other firm is the bad type, then no consumers are drawn away from that firm, which would happen at price $c_{B}$ or below. At prices less than $c_{B}$, the Diamond paradox reasoning applies to the good types: each can raise its price to $m$ above that of the other without losing demand. This is because the consumers' learning cost is above $m$, so a price $m$ higher than expected does not motivate them to learn and switch, unless the price increase changes their belief. The guessed equilibrium already partly resolves the Diamond paradox, because its outcome differs from monopoly pricing and no search. Prices in the guessed equilibrium are close to competitive. Type $B$ prices the same as under Bertrand competition between the $B$ types with zero search cost and complete information. Type $G$ prices below $B$. 
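The consumer's first-stage choice in the guessed equilibrium (parts 3--4 of its definition) can be codified directly. A minimal sketch, assuming for illustration $h(v)=1.5v+0.2$ and round parameter values; both the functional form and the numbers are hypothetical, chosen only to satisfy $h'>1$ and $h(v)\geq v$, and are not calibrated to anything in the paper.

```python
# Consumer's first-stage choice in the guessed equilibrium: buy (b),
# learn the other firm's price (ell), or exit (n).  The quality
# function h and all parameter values are illustrative assumptions.

def h(v):
    return 1.5 * v + 0.2          # good type's quality premium, h' > 1

def sigma1(v, P, mu0, c_B, m, c_ell):
    c_B_plus = c_B + m
    if P <= c_B:                   # belief mu = 1: firm revealed good
        return "b" if h(v) >= P else "n"
    # belief mu = 0: firm revealed bad; learn iff the expected gain
    # from sampling the other firm covers the learning cost
    gain = mu0 * (h(v) - c_B) + (1 - mu0) * (v - c_B_plus)
    return "ell" if gain - c_ell >= 0 else "n"

# A high-valuation consumer buys at the good type's price c_B ...
assert sigma1(0.9, 0.3, 0.5, 0.3, 0.01, 0.05) == "b"
# ... but learns after seeing the bad type's price c_B + m:
assert sigma1(0.9, 0.31, 0.5, 0.3, 0.01, 0.05) == "ell"
# A low-valuation consumer exits even at the revealing low price:
assert sigma1(0.05, 0.3, 0.5, 0.3, 0.01, 0.05) == "n"
```

The tie-breaking convention of Definition~\ref{def:mix} (indifference between $b$ and $\ell$ resolved towards $\ell$) is reflected in the weak inequality on the learning condition.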
For a stronger resolution of the Diamond paradox, subsequent results will show that the guessed equilibrium introduced above is the unique one that survives the Intuitive Criterion. Without refinement, belief threats support other equilibria. For example, for high enough $\mu_0$, both firms pool on $c_{B}^{+}$, justified by the belief $\mu_i(c_{B}^{+})=\mu_0$ and $\mu_i(P)=0$ for all $P\neq c_{B}^{+}$. The following lemma shows the monotonicity of equilibrium demand and prices. Given the ranking of the costs and qualities of the types, the results are intuitive---the lower-cost type $G$ sets a lower price and the higher-quality type $G$ receives higher demand. By Lemma~\ref{lem:D2}, there cannot be two prices on which both types put positive probability and at one of which demand is positive. \begin{lemma} \label{lem:D2} In any equilibrium, if $\sigma_i^*(G)(P_{G})>0$ and $\sigma_i^*(B)(P_{B})>0$, then $D_i(P_{G})\geq D_i(P_{B})$, and if in addition $0<D_i(P_{B})\leq D_i(P_{G})$, then $P_{G}\leq P_{B}$. \end{lemma} The proofs of this and subsequent results are in Appendix~\ref{sec:proofs}. The next lemma shows that in any equilibrium satisfying the Intuitive Criterion in which not all consumers buy at price $c_{B}$ and belief $\mu_0$, neither firm sets a price at which demand is zero. Both types of both firms make positive profit, and the types set different prices with positive probability. To state the lemma, define $v(x)$ as the (unique) consumer valuation $v$ that satisfies $\mu_0h(v)+(1-\mu_0)v =x$, and define \begin{align} \label{mstar} m^*:=\min_{x\in [c_{B}, \mu_0h(\overline{v})+(1-\mu_0)\overline{v}-m]}x\left(\frac{1-F_{v}(h^{-1}(x))}{1-F_{v}(v(x))}-1\right). \end{align} The function $\frac{1-F_{v}(h^{-1}(\cdot))}{1-F_{v}(v(\cdot))}$ is the ratio of demand at belief $1$ to demand at belief $\mu_0$. This function only depends on the primitives $F_{v},h,\mu_0$, so $m^*$ only depends on exogenous parameters. 
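For concreteness, $m^*$ in~(\ref{mstar}) can be evaluated numerically once the primitives are fixed. A sketch under the illustrative assumptions $F_{v}$ uniform on $[0,1]$ and $h(v)=av+q$; these primitives and all parameter values are hypothetical, chosen only to satisfy $h'>1$ and $h(v)\geq v$, not taken from the paper.

```python
import numpy as np

# Numerical evaluation of m* in eq. (mstar) for illustrative primitives:
# F_v uniform on [0, 1] and h(v) = a*v + q with a > 1, so h' > 1 and
# h(v) > v.  Nothing here is calibrated; the point is the mechanics.
a, q = 1.5, 0.2
mu0, c_B, m, vbar = 0.5, 0.3, 0.01, 1.0

F = lambda v: np.clip(v, 0.0, 1.0)                     # uniform cdf
h_inv = lambda x: (x - q) / a                          # h^{-1}(x)
v_of = lambda x: (x - mu0 * q) / (mu0 * a + 1 - mu0)   # solves mu0*h(v)+(1-mu0)*v = x

# Grid over the minimisation range [c_B, mu0*h(vbar) + (1-mu0)*vbar - m]
xs = np.linspace(c_B, mu0 * (a * vbar + q) + (1 - mu0) * vbar - m, 2001)
ratio = (1 - F(h_inv(xs))) / (1 - F(v_of(xs)))         # demand at belief 1 / belief mu0
m_star = float(np.min(xs * (ratio - 1)))

# h(v) > v implies h^{-1}(x) < v(x), so the ratio exceeds 1 and m* > 0.
assert m_star > 0
```

Because demand at belief $1$ strictly exceeds demand at belief $\mu_0$ wherever the latter is positive, the grid minimum stays bounded away from zero, in line with the claim that $m^*>0$.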
The ratio of demands is strictly greater than $1$ and continuous when the denominator is positive (as is the case when $x\leq \mu_0h(\overline{v})+(1-\mu_0)\overline{v}-m$), so $m^*>0$. \begin{lemma} \label{lem:posprofit} For any $m< m^*$, $i\in\set{X,Y}$ and $\theta\in\set{G,B}$, in any equilibrium satisfying the Intuitive Criterion, we have $\pi_{i\theta}^*>0$ and there exists $P_i\geq c_{B}^{+}$ s.t.\ $D_i(P_i)>0$ and $\sigma_i^*(B)(P_i)>0=\sigma_i^*(G)(P_i)$. \end{lemma} Lemma~\ref{lem:posprofit} provides the first component of the race to the bottom, namely the good types separating (at least partially) from the bad by setting a lower price. The Intuitive Criterion drives the separation, because it eliminates belief threats at low prices, which would otherwise deter the good types from price-cutting. The next lemma establishes a lower bound on the equilibrium price by showing that the good types price weakly above the cost of the bad type. \begin{lemma} \label{lem:Hprice} For any $m< m^*$ and $i\in\set{X,Y}$, in any equilibrium satisfying the Intuitive Criterion, if $P_i< c_{B}$, then $\sigma_i^*(G)(P_i)=0$. \end{lemma} The intuition for Lemma~\ref{lem:Hprice} is that the firms' good types are in a \emph{race to the top} at prices in $[0,c_{B})$.\footnote{A similar race occurs in \cite{diamond1971} at all prices below the monopoly level.} Neither firm's good type loses customers to the other firm when raising price slightly, because the small price difference does not motivate the customers to pay the search cost. The reason that a good type does not increase price from $c_{B}$ to $c_{B}^{+}$ is that the bad type is choosing $c_{B}^{+}$, thus belief and demand are significantly lower at $c_{B}^{+}$. In the unique equilibrium surviving the Intuitive Criterion, each good type sets price $c_{B}$ and each bad type $c_{B}^{+}$, as shown in the following Theorem. 
The proof provides the second component of the race to the bottom: a bad type reduces price to deter its customers from learning and to undercut the other firm's bad type. The motive for the customers to learn comes from the good types separating (the first component of the race, Lemma~\ref{lem:posprofit}), which makes the other firm's price informative and smaller in expectation than a bad type's price. \begin{thm} \label{thm:unique} For any $m< m^*$ and $i\in\set{X,Y}$, in the unique equilibrium satisfying the Intuitive Criterion, $\sigma_i^*(G)(c_{B})=1=\sigma_i^*(B)(c_{B}^{+})$. \end{thm} Theorem~\ref{thm:unique} shows that the unique equilibrium outcome that satisfies the Intuitive Criterion is the guessed equilibrium from above. Prices are close to competitive and there is positive, but small price dispersion. The equilibrium outcome is robust to changing the prior, the learning cost, the distribution of consumer valuations and the good type's cost in a range of parameters\footnote{The range defined by $h(v)\geq v$, $h'>1$, $h(\overline{v})<\infty$, $0<m<\min\{c_{\ell},\;\overline{v}-c_{B},\;\frac{\mu_0 c_{B}}{1-\mu_0},\;m^*\}$, $c_{\ell}\leq \mu_0 (h(c_{B}^{+})-c_{B})$, $c_{B}=km\geq h(0)-m>0$ for some $k\in\mathbb{N}$, $\frac{d}{dP}P[1-F_v(h^{-1}(P))]>0$ for $P\in[0,c_{B}+m]$. } (Section~\ref{sec:robust} discusses cases outside that range). The equilibrium in Theorem~\ref{thm:unique} is distinct from signalling by a monopoly, because a bad type monopolist does not have an incentive to cut price when the good type's price is low enough. This is because there is no competing firm for the customers to learn about and leave to. The bad type sets its monopoly price. Under the Intuitive Criterion, Lemmas~\ref{lem:D2}--\ref{lem:Hprice} still apply, so the good type monopolist sets a price between $c_{B}$ and $P_{G}^{m}$. 
Separation from the bad type usually requires the good type's price to be strictly below $P_{G}^{m}$, so unobservable type has some of the same pro-competitive effect with one firm as with two. However, more than one firm is needed for both types' prices to be close to competitive. Section~\ref{sec:completeinfo} below contrasts Theorem~\ref{thm:unique} with competition when the type is observed together with the price. The comparisons to monopoly and observed type show that the combination of signalling and multiple firms is necessary as well as sufficient to overcome the effect of the positive search cost. Bertrand competition under zero learning cost between two known bad or two known good types leads to equal profits (close to zero) for the firms and no price dispersion, unlike in the equilibrium in Theorem~\ref{thm:unique}. Bertrand competition between a good and a bad firm yields zero demand for the bad firm, but positive demand and profit for the good firm, which sets a strictly higher price than the bad. This differs from the outcome in Theorem~\ref{thm:unique} where a firm that sets a strictly lower price is preferred by the consumers and gets greater demand and profit. If some consumers have zero and others positive learning cost, but there is no quality or cost uncertainty, then the firms mix over an interval of prices between the competitive and the monopoly price. The price distribution depends strongly on the density of the learning costs at zero, and whether there is an atom at zero. In the current paper, each firm sets a single price and the equilibrium is robust to perturbing the parameters within a range. With consumer taste shocks (horizontal differentiation of firms), there is no price dispersion, and for each firm, some consumers initially at it learn another firm's price and leave. This differs from the current paper, which models vertical differentiation and shows that consumers initially at a good firm do not learn or leave. 
Models of repeat purchases have many equilibria, some of which replicate the pricing patterns found in this paper. However, the markets described by repeated games with high discount factors differ from the markets studied in this paper, which involve infrequent buying (repair services, insurance, durable goods such as cars) and are thus closer to one-shot interactions. The next section relaxes some of the assumptions made above. The equilibrium remains qualitatively similar, in particular the Diamond paradox is still resolved. \section{Robustness} \label{sec:robust} Relaxing the assumption that the full-information monopoly price $P_{G}^{m}$ of the good type is above the cost of the bad type, the equilibrium price of the good type is either $c_{B}$ as above (if $P_{G}^{m}= c_{B}$), or $P_{G}^{m}<c_{B}$. In the latter case, the only modification of the equilibrium in Section~\ref{sec:existence} is that $G$ sets price $P_{G}^{m}\in(0,c_{B})$. The proofs simplify, because $G$ fully separates. If the learning cost is large enough ($c_{\ell}> \mu_0[h(c_{B}^{+})-c_{B}]$), then some customers initially at a bad type setting price $c_{B}^{+}$ buy immediately instead of learning the other firm's price. These customers are called \emph{captive}.\footnote{The captive customers correspond to the uninformed customers in \cite{varian1980}.} The mass of captive customers depends on $c_{\ell}- \mu_0[h(c_{B}^{+})-c_{B}]$. If this is large, then the bad type sets price $P>c_{B}^{+}$ with positive probability, because extracting more revenue from captive customers outweighs losing some non-captive ones to the competitor. The probability that the bad type puts on $P>c_{B}^{+}$ and the maximal $\hat{P}$ with $\sigma_i(B)(\hat{P})>0$ increase in the mass of captive customers. As $c_{\ell}- \mu_0[h(c_{B}^{+})-c_{B}]$ increases, eventually the good type starts putting positive probability on $c_{B}^{+}$. 
The qualitative features of the model are preserved, in that price is lower than with complete information, and there is price dispersion. If there is a distribution of learning costs with $\min c_{\ell}>m$ and $\max c_{\ell}\leq \mu_0[h(c_{B}^{+})-c_{B}]$, then the equilibrium outcome is unchanged. Learning costs strictly greater than $\mu_0[h(c_{B}^{+})-c_{B}]$ create captive consumers, as discussed above. Nonpositive learning costs for some consumers eliminate the Diamond paradox even without incomplete information, as the previous literature showed. In the current model, enough consumers with a nonpositive learning cost make the good types reduce price, but the positive probability of the other firm having a bad type ensures that the good types do not reach zero price (their marginal cost). The customers initially at a bad type are captive for the other firm's good type. If all consumers buy at price $c_{B}^{+}$ and belief $\mu_0$ (formally, $h(0)\geq c_{B}^{+}/\mu_0$), then there is no reason for a good type to reduce price below $c_{B}^{+}$ to increase belief. Both firms pooling on $P_0:=\max\{P\in S_P:P\leq \mu_0h(0)\}$ survives the Intuitive Criterion, because if belief at any $P_1>P_0$ is set to $1$ and the good type wants to deviate to $P_1$, then the bad type also wants to deviate. If the bad, but not the good type wants to deviate to a price, then the Intuitive Criterion sets belief at that price to $0$, which deters deviations. The results remain unchanged if the tie-breaking rule for $\sigma_2(v,P_i,P_j)$ in Definition~\ref{def:mix} depends on the belief or the price, e.g.\ if $\mu_{X}(P_{X})h(v)+(1-\mu_{X}(P_{X}))v-P_{X} =\mu_{Y}(P_{Y})h(v)+(1-\mu_{Y}(P_{Y}))v-P_{Y}$, then the customer buys from the firm with the greater $\mu_i(P_i)$ (or smaller $P_i$) with probability $p\in[0,1]$. The results also do not change if ties are always broken in favour of a particular firm, say $X$. 
A slight change in equilibrium is possible if the tie-breaking rule can depend on both the price and the firm, e.g.\ if both firms set $P=c_{B}+2m$, then ties are broken in favour of $X$, but if both set $P=c_{B}^{+}$, then in favour of $Y$. In this case, the equilibrium features $\sigma_{X}(B)(c_{B}+2m)=1 =\sigma_{Y}(B)(c_{B}^{+})$. Firm $Y$'s type $B$ has no incentive to raise price, because then it would lose all customers. If $X$ cuts price to $c_{B}^{+}$, it still gets zero demand. The price at which trade occurs when both firms are of type $B$ is still $c_{B}^{+}$. Other parts of the equilibrium are unchanged. A small asymmetry between firms has a similar effect to asymmetric tie-breaking. Denote type $\theta$ of firm $i$ by $i\theta $. If consumers slightly prefer $\xB $ to $\yB$, other things equal (interpreted as $\yB$ having lower quality), then $\yB $ gets zero demand and profit at equal price to $\xB $, because consumers at $iB$ learn. Then $\xB $ sets either the same price as $\yB $, or higher by just enough to deter consumers from switching to $\yB $. Consumers initially at a good type do not learn, unless the quality is lower or price higher than that expected from the other firm's good type, and the difference multiplied by the prior outweighs the learning cost. If consumers do not learn, then they cannot switch firms, so both firms' good types set price $c_{B}$, as before. Now suppose the firms have the same quality, but the costs satisfy $0=c_{\xG }\leq c_{\yG}\leq c_{\xB } \leq c_{\yB}-m$, and the full-information monopoly price of $X\theta$ is above $c_{Y\theta}$. Then $\xB$ sets price $c_{\yB}$, because all consumers at $\xB $ learn, so there is asymmetric Bertrand competition between $\xB $ and $\yB$. Consumers at $\xG $ do not learn, so the price of $\xG$ is at least $c_{\yG}$. The good types are in a race to the top, as in Section~\ref{sec:existence}, so the good types set price $c_{\xB}$. 
If the firms can set any price in $[c_{B}-\rho,c_{B}+\rho]$ for some $\rho>0$ (not constrained to a grid), then an equilibrium satisfying the Intuitive Criterion does not exist. The proofs of Lemmas~\ref{lem:D2}--\ref{lem:Hprice} still work, but in Theorem~\ref{thm:unique}, the bad types Bertrand compete down to price $c_{B}$. Then the belief at $c_{B}$ is strictly lower than $1$, which is the belief at any $P<c_{B}$. This makes the payoff of a good type drop discontinuously at $c_{B}$, so a best response of a good type does not exist. Without refining with the Intuitive Criterion, equilibria exist, e.g.\ pooling on $c_{B}+\epsilon$ for $\epsilon\in(0,\rho)$ small. This is supported by zero belief for any $P\neq c_{B}+\epsilon$. Having more than two firms only strengthens competition. Because the bad types do not set the weakly dominated price $c_{B}$, and consumers initially at the good types do not learn, pricing cannot get more competitive than with two firms. The outcome is the same as in Section~\ref{sec:existence}. More than two types (with higher quality implying lower cost) are conceptually similar to two, but notationally cumbersome. The worst type (highest cost, lowest quality) behaves like $B$. In particular, the worst types undercut each other in Bertrand fashion, until they price $m$ above their cost. Consumers initially facing the worst type's price learn the price of the other firm, hoping to meet a better type with a lower price. The Intuitive Criterion imposes (partial) separation of types, so the gain from learning is positive. The best type acts similarly to $G$, setting a price equal to the second-best type's cost. The reason is a race to the top among the best types, as in the baseline model. Types other than the worst and the best set prices between the second-best and the worst type's cost and may mix, because customers who switch away from the worst type of the other firm are captive for types other than the worst. 
Two-dimensional types with combinations of cost and quality $(c_{G},\hat{q}_{G})$, $(c_{G},\hat{q}_{B})$, $(c_{B},\hat{q}_{G})$ and $(c_{B},\hat{q}_{B})$ are similar to the two-type case when cost and quality are negatively correlated. A type $(c_{\theta},\hat{q}_{G})$ cannot separate from $(c_{\theta},\hat{q}_{B})$ for any $\theta\in\set{G,B}$ in any equilibrium, because $(c_{\theta},\hat{q}_{B})$ can imitate any pricing strategy of $(c_{\theta},\hat{q}_{G})$. The type $(c_{\theta},\hat{q}_{B})$ strictly prefers to imitate and get exactly the same payoff as $(c_{\theta},\hat{q}_{G})$, because demand is based on the quality that the consumers expect, given a price. Demand is thus greater at prices set by $(c_{\theta},q_{G})$. The model with multidimensional types and negative correlation of cost and quality thus reduces to the two-type model in Section~\ref{sec:setup}, with $q_{\theta} =\hat{q}_{G}\Pr(\hat{q}_{G}|c_{\theta}) +\hat{q}_{B}\Pr(\hat{q}_{B}|c_{\theta})$ for $\theta\in\set{G,B}$. If the correlation of cost and quality is positive, then the four-type model reduces to the case of two types with higher cost implying higher quality. Price signalling is then directed upward (the high-quality type sets a higher price). The race to the bottom does not occur. Each type sets a price weakly greater than its monopoly price. If the correlation of cost and quality is zero, then signalling is impossible in either direction. Consumers expect the average quality after each price set in equilibrium and each type of firm sets its monopoly price given the expected quality. Suppose that the firms can advertise as well as signal by price. If ads reveal prices to some consumers, then competition increases and the good types cut prices below $c_{B}$. The bad types still set price $c_{B}^{+}$. If all consumers see both firms' prices, then the good types Bertrand compete to price $m$. 
If ads do not reveal prices, but are just wasteful signalling which for some reason is cheaper for the good type, then the results depend on the noisiness, timing and cost of the ads. If consumers cannot see the advertising expenditure, but must infer it from noisily observed ad quality and quantity, then ads seen before the prices only change the prior. The results are unaffected by the prior $\mu_0$ if $\mu_0>\max\{\frac{m}{m+c_{B}},\; \frac{c_{\ell}}{h(c_{B}^{+})-c_{B}}\}$. Ads seen after the prices have no effect, because the prices already reveal the types. Even if ads are free for the good type, the good type still signals by price, because ads are noisy, so revealing the type via price discretely increases demand. Suppose that ads are perfect signals of the money spent on them. Then the relative cost to the types per unit of ads vs per unit of price decrease determines which signalling channel the good type uses. If revealing the type via ads is relatively cheaper, then the good type sets its full-information monopoly price and signals using ads. If the ad costs for the types are similar relative to the difference between the profits lost by cutting price, then ads are not used and the outcome is the equilibrium found above. A similar reasoning applies to any other way to signal, e.g.\ warranties, hiring independent quality testers, etc. If each firm trembles when setting price, and prices are the only way to signal, then the results depend on the trembles. Denote by $\Pr(P_1|P_2)$ the probability that the consumers see price $P_1$ when the firm tries to set $P_2$. A natural benchmark has $\Pr(P_1|P_2)$ strictly decreasing in $|P_1-P_2|$, and $\Pr(P_1|P_2)>0$ for all $P_1,P_2\in S_P$. Reasoning similar to Lemma~\ref{lem:D2} shows that in any equilibrium, the good type tries to set a lower price than the bad. 
Pooling cannot occur, because then the posterior belief equals the prior at every price, motivating the good type to set a strictly smaller price than the bad. If the trembles are small enough, i.e.\ $\Pr(P|P)\approx 1$ for all $P$, then the distinct prices of the types motivate the consumers to learn. This starts the race to the bottom discussed in Section~\ref{sec:existence}, leading to the same outcome. \subsection{Comparison to observable types} \label{sec:completeinfo} In this section, the only difference from Section~\ref{sec:setup} is that the type is not inferred from the price, but seen directly. The consumers initially at firm $i$ see the price and type of firm $i$, but have to pay $c_{\ell}$ to learn the price and type of firm $j$. In such a market, prices are not competitive, as shown below. The equilibrium definition omits part (g) of Definition~\ref{def:mix} and replaces $\mu_i(P_i)$ with $1$ if firm $i$ is of type $G$ and $0$ if $B$. The following Proposition puts a lower bound on the price of type $G$. \begin{prop} \label{prop:complete} In any equilibrium with observable types, $\pi_{iG}^*>0$, and if $\sigma_i(G)(P)>0$, then $P\geq \min\{P_{G}^m,\; h(c_{B}^{+})-m\}$ for $i\in\set{X,Y}$. \end{prop} The idea for Proposition~\ref{prop:complete} is that the race to the top between the good types now continues at prices above $c_{B}$, as long as the profit increases in the price and consumers initially at a good type do not learn. If the consumers learn, then with positive probability they switch to the other firm (otherwise there would be no reason to pay the learning cost) and the good type loses demand. The prices of the good types stay close to each other throughout the race to the top, so the motive for a consumer to learn is to find a bad type of the other firm at a price low enough to compensate for the quality difference and the learning cost. So the good types can price above $c_{B}^{+}$ by at least the quality difference plus the learning cost. 
The race to the top may end at the good type's monopoly price or below that. If the race ends below $P_{G}^{m}$, then consumers initially at a good type learn and switch with positive probability. The bad type then gets positive demand, even when pricing above the other firm's bad type. The captive customers of the bad type then motivate it to raise price above $c_{B}^{+}$. In summary, if quality is seen together with the price, then either the good type sets its monopoly price or both types set a price strictly greater than with unobservable types. \section{Conclusion} \label{sec:conclusion} The famous paradox of \cite{diamond1971} is that a market with multiple firms need not be competitive if consumers have to pay a cost to learn the prices of firms. However, as shown in the current paper, negatively correlated production cost and quality that are private information restore competitive pricing. This result holds for a wide range of quality and cost differences between firms. There are several mechanisms that make cost and quality negatively correlated across firms, for example economies of scale, regulation or differing managerial talent. These mechanisms operate in many markets. Private information about cost and quality, as well as prices close to the competitive level are empirically reasonable in skilled services, insurance and durable goods, among others. The previous literature resolves the Diamond paradox assuming either (a) zero learning cost for a positive fraction of consumers, (b) that consumers observe multiple prices at once, (c) large private taste shocks, or (d) repeat purchases. The current paper models markets in which a given consumer purchases rarely, e.g.\ cars, insurance, repair services, and in which the vertical quality difference is more important than the horizontal taste shock. 
The predictions of the current paper differ from zero search costs and observing multiple prices at once, because the firms set deterministic prices instead of mixing, and the mark-up and profit are larger for a lower-price firm. The current paper assumes no repeat buying of the same good (insurance policies and car models change by the time the consumer purchases a replacement), which distinguishes the model from the literature on repeat purchases. With taste shocks, prices decrease in the number of firms and the degree of product differentiation. In the current paper, prices stay constant when the number of firms rises above two or when the quality difference changes within some bounds. If lower cost implies higher quality, then a low-cost firm would like to tell consumers about its cost level. A cheap talk message about low cost does not work, for the same reason as cheap talk about high quality has little effect. On the other hand, a low price is a credible signal, because it is differentially costly to the firm types. In some markets, other costly signals are available, e.g.\ warranties or advertising. In other applications like insurance, warranties are uncommon, so price signalling is more likely. Even if feasible, signalling by ads or warranties may not be optimal, for example when price signals are cheaper or more precise. Signalling by a low price resembles limit pricing, in which an incumbent tries to keep an entrant out of the market. The incumbent sets a low price to convince the entrant that the incumbent has a low cost and is likely to start a price war. The low price in limit pricing is anti-competitive. In the current work, the low price results from competition, thus has different policy implications. A regulator maximising total or consumer surplus should encourage the race to the bottom in prices, for example by punishing low quality or checking the quality of a firm with a larger market share more frequently.
\section{Introduction}\label{sect:intro} The growth of super-massive black holes (SMBH) at the centres of galaxies and the properties and evolution of the interstellar medium (ISM) in their hosts are expected to be connected \citep[e.g.][]{DiMatteo05,Hopkins08}. There are in fact well-established correlations between the black hole mass and the physical properties of the host galaxy \citep{Kormendy&Ho13}, such as the bulge mass or velocity dispersion, suggesting that the energy output of the accretion onto the SMBH may be communicated to the surrounding ISM\ and affect star formation (SF). Indeed, active galactic nuclei (AGN) feedback onto their host galaxies is expected to proceed via kpc-scale, wide-angle outflows \citep{Menci08,Faucher-Giguere12}, capable of heating and removing gas, thereby suppressing SF. AGN feedback is one of the main mechanisms invoked in cosmological simulations to prevent an excessive growth of massive galaxies and make gas-rich starburst galaxies quickly evolve to quiescence. Growing observational evidence of massive AGN-driven outflows has been collected, involving different gas phases (ionised, atomic and molecular) extending from sub-pc to kpc scales. While recent works, based on local AGN, use a multi-phase study of outflows to quantify their impact on the host galaxy \citep[e.g.][]{Feruglio15,Tombesi15,Veilleux17,Longinotti18}, at high redshift ($z\sim1-3$) studies of outflows are still mostly limited to the ionised phase \citep[see][and references therein]{Fiore17}. There are only a few detections of fast molecular gas observed in CO high-\textit{J} rotational transitions \citep{Carniani17,Feruglio17,Vayner17,Brusa18}. However, massive, quiescent systems and old (aged 2$-$3 Gyr) galaxies have been observed already at $z\sim2-3$ \citep{Cimatti04,Whitaker13,Straatman14}, indicating that a feedback mechanism must have been in place even at very early epochs, around $z\sim5-6$. 
Observations of AGN-driven outflows at high redshift have targeted the [CII]\ fine-structure emission line at 158 $\mu$m, which is generally the strongest emission line in galaxies at far infrared (FIR) wavelengths. [CII]\ is typically a tracer of the neutral atomic gas, primarily in Photo-Dissociation Regions (PDRs), but it can also be partly emitted by the (partly) ionised medium \citep[e.g.][]{Carilli13}. Since PDRs are produced by the UV radiation emitted by young stars, [CII]\ has also been used as a tracer of SF \citep{Maiolino05,DeLooze11,Carniani13,Carniani18a}. Recently, [CII]\ has also been exploited to trace cold gas in galactic outflows. Indeed, broad [CII]\ wings have been observed in the hyper-luminous quasi-stellar object (QSO) J1148$+$5251 at $z\sim6.4$ by \cite{Maiolino12} and \cite{Cicone15}, revealing outflowing gas extended up to $\sim30$ kpc and escaping with velocities in excess of 1000 km s$^{-1}$. The Herschel Space Observatory has enabled the detection of cold outflows through broad wings of the [CII]\ line also in local active galaxies \citep{Janssen16}. The exploitation of the bright [CII]\ line at high redshift has been increasing in the last few years with the advent of ALMA. In particular, the population of high-$z$ luminous QSOs with detected [CII]\ emission has been rapidly growing. Previous works have exploited the [CII]\ emission to investigate the properties of their host galaxies, such as the SFR, the dynamical mass, and the presence of merging companions \citep[e.g.][]{Wang13,Wang16,Venemans16,Venemans17,Willott15,Willott17,Trakhtenbrot17,Decarli17,Decarli18}. In none of these high-$z$ QSOs has evidence of [CII]\ outflows been reported. However, most of the [CII]\ observations in distant QSOs are still rather short (10$-$20 minutes of on-source time), with a sensitivity generally not yet adequate to individually detect weak [CII]\ broad wings.
We have collected a sample of 48 QSOs with ALMA [CII]\ detections to investigate the presence of outflows, as traced by weak [CII]\ broad wings, by performing a stacking analysis. We will show that the stacked data achieve a sensitivity that is more than an order of magnitude deeper than that reached in the previous [CII]\ outflow detection by \cite{Maiolino12,Cicone15} and enable us to reveal very broad wings tracing cold outflows associated with distant QSOs. \section{Sample and data reduction}\label{sect:sample} \begin{table*} \caption{Main information about the ALMA observations and source properties of the QSOs in our sample. Columns give the following information: (1) source ID, (2) [CII]-based redshift, (3) beam size, (4) continuum rms sensitivity, (5) representative rms of the [CII]\ spectral region for a channel width of 30 km s$^{-1}$, (6) continuum flux, (7) [CII]\ luminosity, (8) FWHM of the [CII]\ line, (9) FIR luminosity in the range 8-1000 $\mu$m\ derived from the ALMA continuum flux, (10) AGN bolometric luminosity, (11) labels of the stacked subsamples (see Sect. \ref{sect:subsamples}) including the source.} \centering \small \makebox[1\textwidth]{ \setlength{\tabcolsep}{3 pt} \begin{tabular}{lcccccccccc} \toprule Source ID & $z_{\rm [CII]}^*$ & Beam & Cont rms & [CII]\ rms & $f_{\rm cont}$ & $L_{\rm [CII]}^{\rm core}$ & FWHM$_{\rm [CII]}^{\rm core}$ & Log($L_{\rm FIR}$/L$_\odot$) & Log($L_{\rm AGN}$/erg s$^{-1}$)$^{**}$ & Stack\\ & & [arcsec] & [mJy/beam] & [mJy/beam] & [mJy/beam] & [$10^9$ L$_\odot$] & [km s$^{-1}$] & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \midrule PJ007+04 & 6.001 & 0.47$\times$0.69 & 0.05 & 0.58 & 1.87$\pm$0.04 & 1.30$\pm$0.15 & 365$\pm$45 & 12.89 & 46.77 & BF\\ PJ009-10 & 6.003 & 0.45$\times$0.66 & 0.05 & 0.49 & 2.43$\pm$0.12 & 2.28$\pm$0.15 & 290$\pm$35 & 12.98 & 46.73 & AF\\ J0055+0146 & 6.005 & 0.60$\times$0.72 & 0.03 & 0.30 & 0.22$\pm$0.02 & 0.60$\pm$0.07 & 330$\pm$40 & 11.80 & 46.04 & AE\\ J0109-3047 & 6.790 & 0.51$\times$0.80 & 0.05 & 0.84 & 0.58$\pm$0.04 & 1.64$\pm$0.18 &
310$\pm$40 & 12.30 & 46.37 & AE\\ J0129-0035 & 5.779 & 0.36$\times$0.45 & 0.02 & 0.24 & 3.04$\pm$0.05 & 1.73$\pm$0.04 & 200$\pm$30 & 12.91 & 45.67& AF\\ J0142-3327 & 6.337 & 0.75$\times$0.87 & 0.04 & 0.61 & 1.70$\pm$0.06 & 2.94$\pm$0.16 & 300$\pm$30 & 12.71 & 47.24& BF\\ J0210-0456 & 6.433 & 0.61$\times$0.90 & 0.03 & 0.29 & 0.16$\pm$0.03 & 0.37$\pm$0.04 & 185$\pm$35 & 11.70 & 45.93& AE\\ J0305-3150 & 6.615 & 0.51$\times$0.72 & 0.03 & 0.37 & 3.20$\pm$0.05 & 2.34$\pm$0.09 & 215$\pm$30 & 13.02 & 46.59& AF\\ J0331-0741 & 4.737 & 0.31$\times$0.40 & 0.05 & 0.45 & 3.75$\pm$0.07 & 2.84$\pm$0.11 & 475$\pm$35 & 12.84 & 47.39& DF\\ PJ065-26 & 6.187 & 0.87$\times$1.11 & 0.05 & 0.76 & 1.05$\pm$0.07 & 1.94$\pm$0.19 & 405$\pm$40 & 12.48 & 47.01& DE\\ PJ065-19 & 6.125 & 0.75$\times1.09$ & 0.04 & 1.32 & 0.42$\pm$0.04 & 1.80$\pm$0.40 & 315$\pm$60 & 12.11 & 46.76& BE\\ J0454-4448 & 6.058 & 0.80$\times$1.18 & 0.04 & 0.63 & 0.68$\pm$0.05 & 0.62$\pm$0.10 & 360$\pm$70 & 12.30 & 46.68& AE\\ J0807+1328 & 4.879 & 0.25$\times$0.40 & 0.03 & 0.67 & 6.64$\pm$0.13 & 2.44$\pm$0.19 & 435$\pm$38 & 13.14 & 47.07 & DF\\ J0842+1218 & 6.076 & 1.14$\times$1.27 & 0.05 & 0.77 & 0.57$\pm$0.04 & 1.62$\pm$0.22 & 480$\pm$55 & 12.24 & 46.88 & DE\\ J0923+0247 & 4.655 & 0.29$\times$0.51 & 0.04 & 0.30 & 2.94$\pm$0.08 & 2.55$\pm$0.09 & 325$\pm$30 & 12.76 & 46.96 & BF\\ J0935+0801 & 4.682 & 0.29$\times$0.55 & 0.04 & 0.29 & 1.39$\pm$0.05 & 0.70$\pm$0.07 & 385$\pm$40 & 12.48 & 47.25 & BE\\ J1017+0327 & 4.949 & 0.30$\times$0.38 & 0.03 & 0.32 & 1.23$\pm$0.07 & 1.02$\pm$0.05 & 270$\pm$30 & 12.42 & 46.27& AE\\ PJ159-02 & 6.381 & 0.99$\times$1.27 & 0.04 & 0.55 & 0.60$\pm$0.05 & 1.24$\pm$0.15 & 385$\pm$45 & 12.27 & 46.83 & BE\\ J1044-0125 & 5.785 & 0.66$\times$0.72 & 0.02 & 0.29 & 3.07$\pm$0.03 & 1.62$\pm$0.08 & 470$\pm$35 & 12.92 & 47.07 & DF\\ J1048-0109 & 6.676 & 1.00$\times$1.43 & 0.03 & 0.51 & 2.57$\pm$0.03 & 2.42$\pm$0.13 & 350$\pm$35 & 12.94 & 46.51 & AF\\ PJ167-13 & 6.515 & 0.98$\times$1.27 & 0.04 & 0.51 & 
0.69$\pm$0.04 & 3.15$\pm$0.19 & 480$\pm$35 & 12.35 & 46.36 & CE\\ J1120+0641 & 7.086 & 0.29$\times$0.32 & 0.01 & 0.13 & 0.40$\pm$0.02 & 0.69$\pm$0.05 & 540$\pm$40 & 12.19 & 46.77 & DE\\ J1152+0055 & 6.365 & 1.02$\times$1.36 & 0.04 & 0.70 & 0.50$\pm$0.06 & 0.51$\pm$0.10 & 140$\pm$50 & 12.23 & 46.17 & AE\\ J1207+0630 & 6.037 & 0.89$\times$1.63 & 0.06 & 0.90 & 0.56$\pm$0.04 & 1.16$\pm$0.18 & 490$\pm$95 & 12.20 & 46.77 & DE\\ PJ183+05 & 6.439 & 1.06$\times1.24$ & 0.04 & 0.59 & 4.62$\pm$0.05 & 6.02$\pm$0.19 & 370$\pm$30 & 13.17 & 46.93 & BF\\ J1306+0356 & 6.033 & 0.98$\times$1.17 & 0.05 & 0.74 & 1.20$\pm$0.05 & 1.87$\pm$0.17 & 265$\pm$35 & 12.53 & 46.84 & BE\\ J1319+0950 & 6.132 & 1.10$\times$1.26 & 0.03 & 0.43 & 5.00$\pm$0.05 & 3.85$\pm$0.18 & 520$\pm$35 & 13.17 & 46.93 & DF\\ J1321+0038 & 4.722 & 0.34$\times$0.39 & 0.02 & 0.19 & 1.49$\pm$0.04 & 1.19$\pm$0.06 & 560$\pm$35 & 12.47 & 46.70 & CE\\ J1328-0224 & 4.646 & 0.31$\times$0.48 & 0.04 & 0.37 & 1.58$\pm$0.04 & 1.56$\pm$0.06 & 300$\pm$30 & 12.49 & 47.05 & BE\\ J1341+0141 & 4.700 & 0.30$\times$0.38 & 0.06 & 0.45 & 17.74$\pm$0.33 & 3.06$\pm$0.15 & 435$\pm$35 & 13.55 & 47.50 & DF\\ J1404+0314 & 4.924 & 0.34$\times$0.40 & 0.05 & 0.58 & 10.98$\pm$0.20 & 3.14$\pm$0.15 & 515$\pm$35 & 13.37 & 47.02 & DF\\ PJ217-16 & 6.149 & 0.92$\times$1.19 & 0.05 & 0.71 & 0.40$\pm$0.02 & 0.89$\pm$0.17 & 510$\pm$70 & 12.14 & 46.89& DE\\ J1433+0227 & 4.727 & 0.34$\times$0.44 & 0.05 & 0.43 & 7.69$\pm$0.13 & 2.52$\pm$0.08 & 415$\pm$30 & 13.19 & 47.37& DF\\ J1509-1749 & 6.122 & 0.92$\times$1.43 & 0.04 & 0.67 & 1.34$\pm$0.04 & 1.72$\pm$0.20 & 615$\pm$75 & 12.59 & 46.9& DE\\ J1511+0408 & 4.679 & 0.31$\times$0.53 & 0.06 & 0.46 & 10.08$\pm$0.19 & 2.78$\pm$0.18 & 580$\pm$45 & 13.30 & 47.25& DF\\ PJ231-20 & 6.587 & 0.94$\times$1.29 & 0.04 & 0.66 & 3.80$\pm$0.10 & 2.97$\pm$0.21 & 410$\pm$35 & 13.09 & 46.99& DF\\ J1554+1937 & 4.627 & 0.74$\times$1.26 & 0.16 & 1.67 & 11.98$\pm$0.42 & 6.86$\pm$0.43 & 800$\pm$45 & 13.37 & 47.70& DF\\ PJ308-21 & 6.234 & 
0.68$\times$0.89 & 0.03 & 0.54 & 1.12$\pm$0.08 & 2.17$\pm$0.18 & 575$\pm$45 & 12.53 & 46.65& CE\\ J2054-0005 & 6.039 & 0.73$\times$0.76 & 0.02 & 0.40 & 2.89$\pm$0.04 & 2.46$\pm$0.07 & 230$\pm$30 & 12.92 & 46.60& AF\\ J2100-1715 & 6.082 & 0.66$\times$0.78 & 0.05 & 0.60 & 0.46$\pm$0.02 & 1.27$\pm$0.17 & 390$\pm$60 & 12.22 & 46.33& AE\\ J2229+1457 & 6.151 & 0.70$\times$0.80 & 0.03 & 0.42 & 0.14$\pm$0.02 & 0.34$\pm$0.06 & 240$\pm$50 & 11.62 & 46.03& AE\\ J2244+1346 & 4.661 & 0.33$\times$0.40 & 0.03 & 0.31 & 3.26$\pm$0.05 & 1.74$\pm$0.04 & 270$\pm$30 & 12.80 & 46.58& AF\\ W2246-0526 & 4.601 & 0.35$\times$0.37 & 0.05 & 0.52 & 7.18$\pm$0.12 & 6.12$\pm$0.19 & 740$\pm$35 & 13.14 & 48.12& DF \\ J2310+1855 & 6.002 & 0.79$\times$1.18 & 0.04 & 0.41 & 7.62$\pm$0.12 & 5.74$\pm$0.15 & 405$\pm$30 & 13.34 & 47.23& DF\\ J2318-3029 & 6.148 & 0.75$\times$0.87 & 0.05 & 0.82 & 2.87$\pm$0.05 & 1.73$\pm$0.18 & 275$\pm$35 & 12.93 & 46.60& AF\\ J2318-3113 & 6.444 & 0.79$\times$0.89 & 0.06 & 0.92 & 0.32$\pm$0.04 & 1.26$\pm$0.20 & 305$\pm$55 & 12.00 & 46.56& AE\\ J2348-3054 & 6.902 & 0.62$\times$0.82 & 0.05 & 0.79 & 1.90$\pm$0.05 & 1.52$\pm$0.23 & 455$\pm$65 & 12.82 & 46.43& CF\\ PJ359-06 & 6.172 & 0.64$\times$1.14 & 0.07 & 0.77 & 0.76$\pm$0.05 & 1.66$\pm$0.17 & 305$\pm$40 & 12.46 & 46.83& BE\\ \bottomrule \end{tabular} } \flushleft $^*$ Given the statistical error on the centroid of the best-fit Gaussian modelling the [CII] line and the systematics associated with the 30 km s$^{-1}$\ channel width of our ALMA spectra, the typical error on redshift is $\Delta z_{\rm [CII]}=0.001$.\\ $^{**}$ We consider as error on $L_{\rm AGN}$\ the 0.1 dex scatter associated with the bolometric correction by \citet{Runnoe12}. \label{tab:sample} \end{table*} \begin{figure} \centering \includegraphics[width=1\columnwidth]{rms-beam-dist-newcal} \caption{Sensitivity and beam size distributions of the ALMA [CII]\ observations for the high-$z$ QSOs in our sample. 
\textit{Left panel:} number of sources as a function of the mean (averaged over the spectral range covered by the stack) sensitivity for a 30 km s$^{-1}$\ channel. \textit{Right panel:} histogram of the mean beam axis size.} \label{fig:rmshisto} \end{figure} \begin{figure} \centering \includegraphics[width=0.55\columnwidth]{z-dist-newcal} \includegraphics[width=1\columnwidth]{lcii-fwhm-dist-newcal} \includegraphics[width=1\columnwidth]{lbol-lfir-dist-newcal} \caption{Properties of the high-$z$ QSO sample considered in this work. \textit{Top panel:} redshift distribution. \textit{Middle panel:} luminosity and FWHM of the [CII]\ core emission. \textit{Bottom panel:} AGN luminosity and FIR luminosity.} \label{fig:sample-distrib} \end{figure} We collected all [CII]\ observations of $z>4.5$ QSOs publicly available on the ALMA archive as of March 2018 and selected the sources with a [CII]\ detection significant at $\gtrsim 5\sigma$. Specifically, we used data from ALMA projects 2011.0.00243.S (P.I. C. Willott), 2012.1.00604.S (P.I. A. Kimball), 2012.1.00676.S (P.I. C. Willott), 2012.1.00882.S (P.I. B. Venemans), 2013.1.01153.S (P.I. P. Lira), 2015.1.01115.S (P.I. F. Walter) and 2016.1.01515.S (P.I. P. Lira). Details about individual QSOs in our sample, for those that have been published, can be found in \cite{Wang13}, \cite{Willott13}, \cite{Willott15}, \cite{Willott17}, \cite{Kimball15}, \cite{Diaz-Santos16}, \cite{Venemans16}, \cite{Venemans17}, \cite{Decarli17}, \cite{Decarli18}, and \cite{Trakhtenbrot17}; however, we also included some not yet published ALMA archival data from project 2015.1.00997.S (P.I. R. Maiolino, Carniani et al. in prep.). The assembled sample consists of the most luminous QSOs with rest-frame absolute UV magnitude $-28.5 \lesssim M_{1450\AA} \lesssim -23.9$ mag and black hole masses $10^8 \lesssim M_{\rm BH} \lesssim 10^{10}$ M$_\odot$.
As mentioned, in total we combined ALMA data for 48 QSOs, equivalent to a total of $\sim34$ hours of on-source observing time. Observations were carried out in ALMA band 6 or 7, depending on the redshift of the individual source. The distribution of the average rms sensitivity, representative of the [CII]\ spectral region, and that of the size of the ALMA beam are shown in Fig. \ref{fig:rmshisto}. Individual values are listed in Table \ref{tab:sample}. Except for a few outliers, the bulk of the observations have similar rms sensitivities from $\sim 0.3$ to $\sim 0.8$ mJy/beam for a 30 km s$^{-1}$ channel. The angular resolutions, computed as the average beam axis, range from $\sim 0.3$ to 1.2 arcsec. Data were calibrated using the CASA 4.7.2 software \citep{McMullin07} in manual or pipeline mode. The default phase, bandpass and flux calibrators were used unless otherwise indicated by the ALMA observatory. Where necessary, extra flagging and improvements in the flux calibration were applied. Data cubes were produced with the CASA task clean, using the Hogbom algorithm with no cleaning mask, a number of iterations $N_{\rm iter}=500-1000$ according to the significance of the detection, and a threshold of three times the sensitivity limit given by the rms. We chose a natural weighting to maximise the sensitivity of the individual observations, a common pixel size of 0.05$''$ and a common spectral bin of 30 km s$^{-1}$. Continuum maps were obtained by averaging over all four spectral windows and excluding the spectral range covered by the [CII]\ emission and possible [CII]\ broad wings. Continuum flux densities were derived by fitting a 2D Gaussian model to the ALMA maps. Furthermore, to model the continuum emission we combined the two adjacent spectral windows of the sideband containing the [CII]\ line to increase the available spectral range, for a total of $\sim3.7$ GHz.
We did not consider the two additional spectral windows in the sideband not including [CII] because of the large spectral separation ($\sim15$ GHz in the observed frame). The expected intrinsic differences in the QSO continuum flux ($\sim15-20$\%) across this large spectral range and, mainly, the systematics in the relative calibration of distant spectral windows may affect the detection of broad wings. We thus fitted a zeroth-order continuum model in the UV plane to all the available channels (of the spectral windows adjacent to [CII], where the QSO continuum variation is expected to be $<1$\%) with a velocity $|v| > 1500$ km s$^{-1}$ with respect to the centroid of the (core) [CII] emission. This choice represents a trade-off between maximising the number of channels ($\sim1/4$ of each spectral window) available to the fit and avoiding spectral regions where broad [CII]\ wings might be present. Moreover, spectral regions corresponding to an atmospheric transmission $<0.5$ for a 1 mm precipitable water vapour were excluded from the fit. We verified that modelling the continuum emission with a first-order polynomial did not significantly affect our results, given the limited frequency range covered by our stack. To determine the properties of the host galaxy emission, we extracted the continuum-subtracted [CII]\ spectra from a region with an area of four beams (see Sect. \ref{sect:methods}). The line parameters describing the [CII]\ core emission were derived by fitting each spectrum with a single Gaussian model. Specifically, redshifts ($z_{\rm [CII]}$) were derived from the centroid of the best-fit [CII]\ model (see Table \ref{tab:sample}). The main properties of our high-$z$ QSO sample are shown in Fig. \ref{fig:sample-distrib} and listed in Table \ref{tab:sample}. The QSOs in the sample are distributed in two main redshift bins, i.e. a first group at $4.5<z<5$ and a second, higher-$z$ group at $z\gtrsim6$.
The bulk of the sample is characterised by a luminosity of the [CII]\ core emission in the range Log($L_{\rm [CII]}$/L$_\odot$) $\sim9.0-9.5$ and [CII]\ line profiles with a Full Width Half Maximum (FWHM$_{\rm [CII]}^{\rm core}$) in the range between 300 and 500 km s$^{-1}$. We computed the FIR luminosity by using an Mrk231-like template \citep{Polletta07} normalised to the observed continuum flux density at (rest frame) 158 $\mu$m. The resulting $L_{\rm FIR}$\ (see Fig. \ref{fig:sample-distrib}) span almost two orders of magnitude, i.e. Log($L_{\rm FIR}$/L$_\odot$)$=11.6-13.4$. The AGN bolometric luminosity ($L_{\rm AGN}$) was derived from the monochromatic luminosity at 1450 \AA\ by applying the bolometric correction from \cite{Runnoe12}. All sources in our sample are luminous and hyper-luminous QSOs with $L_{\rm AGN}$\ $\gtrsim10^{46}$ erg s$^{-1}$, with an average $L_{\rm AGN}$\ of $6.3\times10^{46}$ erg s$^{-1}$. \section{Methods} \label{sect:methods} In order to investigate the presence of high velocity wings of the [CII]\ emission, we performed a stacking analysis of the distant QSOs in our sample. The stacking technique has the potential to greatly increase the sensitivity of the stacked spectrum or stacked cube and, therefore, favours the detection of even modest outflows traced by weak [CII]\ wings. As a first step, the cubes were aligned at the [CII]\ rest frequency (1900.5369 GHz) according to $z_{\rm [CII]}$\ and spatially centred on the peak of the QSO continuum emission. We did not include in the stack spectral regions corresponding to an atmospheric transmission $<0.5$ for a 1 mm precipitable water vapour. 
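The alignment of the first step above can be sketched as follows (a self-contained illustration; the helper names are ours, not the authors' pipeline):

```python
# Sketch of the spectral alignment before stacking: each cube is shifted to the
# [CII] rest frame using its z_[CII].
NU_CII = 1900.5369   # GHz, [CII] rest frequency quoted in the text
C_KMS = 299792.458   # speed of light in km/s

def to_velocity(nu_obs_ghz, z_cii):
    """Velocity (km/s) relative to systemic [CII] for a source at z_cii."""
    nu_sys = NU_CII / (1.0 + z_cii)          # expected observed [CII] frequency
    return C_KMS * (nu_sys - nu_obs_ghz) / nu_sys

# A channel at the systemic frequency of a z = 6 QSO lands at v = 0;
# lower observed frequencies map to positive (redshifted) velocities.
```

Once each frequency axis is expressed on this common velocity scale, the spectra of sources at different redshifts can be co-added channel by channel.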
Then, we combined the data from the 48 sources in our sample according to the relation below, defining the weighted intensity $I^\prime_k$ of a generic spatial pixel ($x^\prime, y^\prime$) in the stacked cube for each spectral channel $k$, and the relative weight $W^\prime_{\rm k}$ as follows \citep{Fruchter&Hook02}: \begin{equation}\label{eq:stack_1} W^\prime_{\rm k} = \sum_{j=1}^n w_{\rm j,k} = \sum_{j=1}^n \frac{1}{\sigma_{\rm j,k}^2} = \frac{1}{\sigma^{\prime 2}_k} \end{equation} \begin{equation}\label{eq:stack_2} I^\prime_k = \frac{\sum_{j=1}^n \left(i_{\rm j,k}\cdot w_{\rm j,k}\right)}{W^\prime_k} \end{equation} \noindent where $i_{\rm j,k}$ is the intensity at the same spatial pixel ($x_j, y_j$) and same spectral channel $k$ of source $j$, and $n=48$. We applied a standard variance-weighted stacking, i.e. we used a weighting factor $w_{\rm j,k}=1/\sigma_{\rm j,k}^2$, where $\sigma_{\rm j,k}$ is the rms noise estimated channel by channel from cube $j$. Furthermore, with this method we accounted for the noise variation with frequency in the spectral range covered by the ALMA [CII]\ spectra, i.e. $\sim 3.7$ GHz, and considered only the contribution of sources with available spectral coverage in our weighted mean. We performed the stacking in two alternative, complementary ways: by stacking the 1D spectra extracted from the individual cubes and by stacking the 3D cubes into a single stacked cube. \begin{table*} \centering \caption{Variance-weighted properties of the stacked QSO samples and the corresponding [CII]\ emission properties. Specifically, rows give the following information: (1) rms sensitivity representative of the [CII]\ spectral region for a channel width of 30 km s$^{-1}$, (2) average $L_{\rm AGN}$\ in the (sub-)sample, (3) average FIR-based SFR, (4) FWHM of the [CII] core, (5) average luminosity of the broad [CII]\ wings, (6) their significance, (7) FWHM, and (8) velocity shift. 
(9) peak and (10) integrated flux density ratios of the broad [CII]\ with respect to the core [CII]\ emission.} \setlength{\tabcolsep}{3 pt} \begin{tabular}{llccccccc} \toprule & & \textbf{Whole} & \multicolumn{6}{c}{Subsamples} \\\cline{4-9} & & \textbf{sample} & A & B & C & D & E & F \\ \midrule (1) & rms [mJy beam$^{-1}$] & \textbf{0.06} & 0.11 & 0.16 & 0.16 & 0.09 & 0.09 & 0.10 \\ (2) & Log($L_{\rm AGN}$/erg s$^{-1}$) & \textbf{47.0} & 46.3 & 47.1 & 46.7 & 47.2 & 46.7 & 47.1 \\ (3) & SFR$_{\rm FIR}$ [M$_\odot$ yr$^{-1}$] & \textbf{790} & 570 & 540 & 360 & 1270 & 260 & 1750 \\ (4) & FWHM$_{\rm [CII]}^{\rm core}$ [km s$^{-1}$] & \textbf{390$\pm$30} & 210$\pm$30 & 330$\pm$30 & 600$\pm$40 & 510$\pm$40 & 390$\pm$30 & 360$\pm$30\\ (5) & $L_{\rm [CII]}^{\rm broad}$\ [10$^8$ L$_\odot$] & \textbf{4.1$\pm$0.7} & 2.5$\pm$0.8 & 4.6$\pm$1.9 & 3.7$\pm$1.2 & 6.9$\pm$1.5 & 4.2$\pm$1.1 & 3.8$\pm$0.8 \\ (6) & SNR$_{\rm [CII]}^{\rm broad}$ & \textbf{5.6$^{\ast}$, 10.2$^{\ast\ast}$, 7.2$^{\ast\ast\ast}$} & 3.0, 5.6, 3.5 & 2.4, 7.5, 3.3 & 3.0, 2.9, 2.4 & 4.6, 7.0, 9.8 & 3.7, 6.5, 5.1 & 4.8, 7.8, 5.4 \\ (7) & FWHM$_{\rm [CII]}^{\rm broad}$ [km s$^{-1}$] & \textbf{1730 $\pm$ 210} & 850 $\pm$ 160 & 710 $\pm$ 130 & 2360 $\pm$ 640 & 1920 $\pm$ 250 & 2210 $\pm$ 430 & 1380 $\pm$ 200 \\ (8) & $\Delta v$ [km s$^{-1}$] & \textbf{-90 $\pm$ 40} & -110 $\pm$ 70 & -70 $\pm$ 50 & -10 $\pm$ 100 & -180 $\pm$ 70 & 130 $\pm$ 100 & -240 $\pm$ 90 \\ (9) & $p_{\rm[CII]}$ & \textbf{0.05 $\pm$ 0.01} & 0.05 $\pm$ 0.01 & 0.1 $\pm$ 0.03 & 0.05 $\pm$ 0.02 & 0.07 $\pm$ 0.01 & 0.07 $\pm$ 0.01 & 0.04 $\pm$ 0.01\\ (10) & $f_{\rm[CII]}$ & \textbf{0.22 $\pm$ 0.04} & 0.18 $\pm$ 0.06 & 0.23 $\pm$ 0.08 & 0.22 $\pm$ 0.07 & 0.31 $\pm$ 0.06 & 0.39 $\pm$ 0.09 & 0.14 $\pm$ 0.02 \\ \bottomrule \end{tabular} \flushleft $^{\ast}$ Computed from the errors on the fit parameters; accounts for the uncertainty in modelling the narrow component.\\ $^{\ast\ast}$ Computed from the pure statistical uncertainty.
\\ $^{\ast\ast\ast}$ Computed as in $^{\ast\ast}$, but excluding the central channels affected by the [CII]\ core.\\ \label{tab:outflow} \end{table*} In the first case the continuum-subtracted spectrum of each target was extracted from an elliptical aperture with the same position angle as the beam, but with an area four times larger. This approach allows us to collect most of the flux from the QSO (for a point source, $\sim$ 94\% of the flux lies within two beam axes) and limits the contamination of possible companions. The angular size of the systemic [CII] emission is comparable to the ALMA beam for most of the QSOs in our sample \citep[e.g.][]{Venemans16, Venemans17, Decarli18}. The chosen extraction areas therefore maximise the significance of possible high-velocity [CII] wings if outflowing and systemic gas are distributed over similar scales \citep{Cicone14}. As a drawback of our approach, emission from different physical scales may contribute to the stacked spectra. The individual spectra were stacked according to Eqs. 1 and 2. In the second approach, the continuum-subtracted {\it cubes} of the single sources were stacked by applying Eqs. 1 and 2 to each spaxel. This resulted in a stacked data cube, containing the contribution of each source to the different channels and spatial positions. Table \ref{tab:outflow} reports the statistical uncertainties of the stacked spectrum and cube in spectral channels of 30 km s$^{-1}$, which have been estimated excluding those spaxels contaminated by the QSO emission. We note that the statistical uncertainty of a spectrum extracted from an area of $N$ beams was computed as $\sqrt{N}$ times the rms. Similarly, the sensitivity of a map integrated over $K$ channels was derived as $\left(\sum_{i=1}^K \sigma_i^{-2}\right)^{-1/2}$, where $\sigma_i$ is the rms within each channel slice.
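Eqs. (1) and (2) amount to an inverse-variance weighted mean per channel; a minimal numpy sketch (ours, not the authors' code, with NaNs marking channels where a source has no spectral coverage) is:

```python
import numpy as np

def stack(intensities, rms):
    """Variance-weighted stack of Eqs. (1)-(2).

    intensities, rms: (n_sources, n_channels) arrays; NaN intensities mark
    channels without spectral coverage for that source.
    """
    covered = np.isfinite(intensities)
    w = np.where(covered, 1.0 / rms**2, 0.0)   # w_{j,k} = 1/sigma_{j,k}^2
    i = np.where(covered, intensities, 0.0)
    W = w.sum(axis=0)                          # Eq. (1): W'_k
    stacked = (i * w).sum(axis=0) / W          # Eq. (2): I'_k
    stacked_rms = 1.0 / np.sqrt(W)             # sigma'_k = (W'_k)^{-1/2}
    return stacked, stacked_rms
```

With uniform noise this reduces to a plain average, and the stacked rms decreases as $1/\sqrt{n}$, consistent with the $\sim0.06$ mJy/beam noise quoted for the full stack.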
To ensure that the presence of broad [CII]\ wings in the stacked spectrum is not an artefact of the stacking procedure, we extracted individual integrated spectra from 100 ``empty'' positions randomly selected within the ALMA field of view and stacked them as described above. An upper limit on the significance of the integrated flux associated with a spurious broad component can be derived by fitting the stacked noise spectra with one broad Gaussian profile with a FWHM $>500$ km s$^{-1}$\ and centroid in the velocity range $v\in[-500,+500]$ km s$^{-1}$. This resulted in an average spurious signal-to-noise ratio of $\sim0.4$ for the stack of the total sample. We also verified that the presence of broad [CII]\ wings was not driven by a few QSOs but is instead a general property of our sample. For this purpose, we recomputed the stack of the integrated spectrum 1000 times on different subgroups, each time excluding a combination of five randomly selected sources (i.e. $\sim10\%$ of the sample). The resulting rms variation of the [CII]\ wings in the velocity range $400<|v|<1500$ km s$^{-1}$\ corresponds to $\sim$20\% of the peak flux density of the broad [CII]\ wings presented in Sect. \ref{sect:totstack}. The average luminosity variation of the [CII]\ wings is $\sim11$\%, with a maximum variation of 40\%. The uncertainty on the continuum fitting of the individual spectra would result in a simple pedestal, as we modelled the continuum emission with a zeroth-order polynomial. However, the fitting of the total stacked spectrum does include a continuum component which is fully consistent with zero, confirming on average a proper continuum subtraction in the individual spectra. \begin{figure} \centering \includegraphics[width=1\columnwidth]{total-2Gauss-newcal.pdf} \caption{Whole sample stacked integrated spectrum. \textit{First panel from top:} number of sources contributing to the stack at different velocities.
\textit{Second panel from top:} [CII]\ flux density as a function of velocity, in spectral bins of 60 km s$^{-1}$. The red curve represents the best-fit 2 Gaussian components model: the combination of a core component (blue) and a broad component (green) is needed to properly reproduce the data. Labels indicate the number of stacked sources and the luminosity of the broad [CII]\ wings. The inset shows a zoom on the broad component. \textit{Third panel from top:} residuals from the subtraction of the core component (blue line in the second panel). The green curve shows the best fit broad component. \textit{Fourth panel:} residuals from the two Gaussian components fitting. The 1$\sigma$ rms of the spectrum is also indicated by the shaded region.} \label{fig:stackedspec} \end{figure} \section{Results} \subsection{Stacked spectrum} \label{sect:totstack} The integrated spectrum resulting from the stack of all 48 QSO individual (1D) spectra in our sample is shown in Fig. \ref{fig:stackedspec}. The stacked spectrum reveals very broad wings beneath the narrow line core, tracing fast outflows of cold gas. We first modelled the spectrum with a single Gaussian component. The $\chi^2$ minimisation of the fit was performed by using for each channel the weight $W^\prime_{\rm k}$ (see Eq. 1), and all the model parameters were free to vary in the fit with no constraints. However, a single Gaussian could not account for the emission at $|v|>500$ km s$^{-1}$. This can be seen in the third panel from top of Fig. \ref{fig:stackedspec}, as also indicated by the resulting large $\chi^{2}_{\nu\rm,1G} = 8.6$. The addition of a second unconstrained Gaussian component gives a $\chi^2_\nu = 3.7$ (see Fig. \ref{fig:stackedspec}), i.e. a factor of $\gtrsim2$ smaller, which indicates that the second, broad Gaussian component is required at a very high confidence level ($>99.9$\%).
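The model comparison can be illustrated with a toy fit (ours, not the authors' code; the mock core and wing parameters echo the values in Table 2, while the noise level is chosen low purely for clarity):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, a, v0, s):
    return a * np.exp(-0.5 * ((v - v0) / s) ** 2)

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    return gauss(v, a1, v1, s1) + gauss(v, a2, v2, s2)

rng = np.random.default_rng(1)
v = np.arange(-3000.0, 3000.0, 60.0)          # 60 km/s bins, as in Fig. 2
noise = 0.01                                  # mJy (illustrative)
# Mock profile: core FWHM ~ 390 km/s, broad FWHM ~ 1730 km/s, 5% peak ratio
flux = gauss(v, 1.0, 0.0, 390 / 2.355) + gauss(v, 0.05, -90.0, 1730 / 2.355)
flux += rng.normal(0.0, noise, v.size)

p1, _ = curve_fit(gauss, v, flux, p0=[1.0, 0.0, 200.0])
p2, _ = curve_fit(two_gauss, v, flux, p0=[1.0, 0.0, 200.0, 0.1, 0.0, 700.0])
chi2_1 = np.sum((flux - gauss(v, *p1)) ** 2) / noise**2 / (v.size - 3)
chi2_2 = np.sum((flux - two_gauss(v, *p2)) ** 2) / noise**2 / (v.size - 6)
# chi2_1 >> chi2_2: the broad component is required to reproduce the wings
```

The single-Gaussian fit cannot absorb the high-velocity wings, which drives its reduced $\chi^2$ well above that of the two-component model, mirroring the behaviour described above.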
The reduced $\chi^2$ is still larger than unity, suggesting that the line profile might be more complex than two simple Gaussian components. Details on the fitting procedure, the uncertainties, and the confidence ellipses associated with the parameters of the broad Gaussian modelling the [CII]\ wings are reported in Appendix A. In Appendix A we also show the results of the single-Gaussian model fitting, which further indicate the reliability of the broad component. The significance estimated through the fitting analysis is $5.6\sigma$. The simple integration of the flux associated with the broad component (with the statistical uncertainty calculated over the same spectral region) gives a significance of $\sim10\sigma$. This is a pure statistical significance of the broad signal, higher than the confidence obtained from the fit, as it does not take into account the uncertainties associated with the subtraction of the narrow component. However, even ignoring the central channels affected by the core, the statistical significance of the wings alone is $\sim7\sigma$ (see Table \ref{tab:outflow}). The median rms of the stacked spectrum is $\sim0.06$ mJy/beam (see Table \ref{tab:outflow}), which is consistent with the noise expected by stacking the original spectra if the noise is Gaussian. To give an idea of the significant improvement in sensitivity obtained with the stack, the sensitivity level reached in this work is a factor of $\sim14$ lower than that of the J1148$+$5251 observations of \cite{Cicone15}, where a massive [CII]\ outflow was found.
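The pure statistical significance quoted above follows from noise adding in quadrature across channels; schematically (a toy sketch with illustrative numbers, not the measured fluxes):

```python
import numpy as np

def integrated_snr(flux, rms):
    """S/N of the flux summed over channels, for per-channel noise rms."""
    return np.sum(flux) / np.sqrt(np.sum(np.asarray(rms) ** 2))

# e.g. a 0.03 mJy wing spanning 25 channels of 0.06 mJy rms each:
snr = integrated_snr(np.full(25, 0.03), np.full(25, 0.06))   # 0.75/0.30 = 2.5
```

For uniform channel noise $\sigma$ this reduces to $S/(\sigma\sqrt{K})$ over $K$ channels, the same quadrature scaling used for the map sensitivities in Sect. 3.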
\begin{figure*}[] \centering \includegraphics[width=0.367\textwidth]{A-2Gauss-newcal.pdf} \includegraphics[width=0.367\textwidth]{B-2Gauss-newcal.pdf} \includegraphics[width=0.367\textwidth]{C-2Gauss-newcal.pdf} \includegraphics[width=0.367\textwidth]{D-2Gauss-newcal.pdf} \caption{Stacked integrated spectra for the different QSOs subgroups \textit{A, B, C, D} (properties of the individual samples are indicated in the top labels). For each plot, the \textit{first panel from top} shows the [CII]\ flux density as a function of velocity, in bins of 60 km s$^{-1}$. The red curve represents the best-fit 2 Gaussian components model; the two individual components are shown with blue and green curves. Labels indicate the number of stacked sources and the luminosity of the broad [CII]\ wings. The inset shows a zoom on the broad component. \textit{Second panel from top:} residuals from the subtraction of the core component (blue line in first panel). The green curve shows the best fit broad component. \textit{Third panel:} residuals from the two Gaussian components fitting. The 1$\sigma$ rms of the spectrum is also indicated by the shaded region.} \label{fig:s-fwhm-l-lbol-stack-fit} \end{figure*} In the stacked spectrum the core emission component has a width of FWHM$^{\rm core}_{\rm[CII]}$ = 390 $\pm30$ km s$^{-1}$, while the underlying very broad component has a width of FWHM$^{\rm broad}_{\rm[CII]}$ = 1730 $\pm$ 210 km s$^{-1}$ (see Table \ref{tab:outflow}). The broad wings are not symmetric, the blue side being much more prominent than the red side, resulting in the overall broad Gaussian used to fit the broad component being slightly blueshifted (by $\sim$ 90 km s$^{-1}$, see Table \ref{tab:outflow}) with respect to the systemic [CII]\ emission. This might be an artefact resulting from the asymmetric distribution of the data, with the red wing being contributed by fewer spectra than the blue wing (top panel of Fig. \ref{fig:stackedspec}). 
Alternatively, at such early epochs, the host galaxies of these hyper-luminous QSOs may be similar to extreme ultra-luminous infrared galaxies (ULIRGs), which have been found to be optically thick even at far-IR and sub-millimetre wavelengths \citep{Papadopoulos10,Neri14,Gullberg15}; this may result in absorption of the receding (redshifted) component of the outflow, even at the wavelength of [CII]. The latter interpretation is supported by the fact that, as we will show in Sect. \ref{sect:subsamples}, when we produce stacks by splitting the sample between galaxies with high and low SFR (hence high/low gas and dust content), the stack associated with low SFR (hence low dust content) does not show a blueshift of the broad component, while the sample with high SFR (hence high dust content) exhibits a large blueshift of the [CII] broad component. The peak flux density of the broad [CII]\ component is about 5\% of that of the core, while the integrated broad-to-narrow [CII]\ flux density ratio is $f_{\rm [CII]}\sim0.22$ (see Table \ref{tab:outflow}). In order to estimate the luminosity of the [CII] broad component representative of our sample, we computed a weighted $L_{\rm [CII]}^{\rm stack}$ by applying Eqs. (1) and (2) to the individual narrow [CII]\ luminosities of our targets. We therefore derived the luminosity associated with the broad [CII]\ wings as $L_{\rm [CII]}^{\rm broad}=L_{\rm [CII]}^{\rm stack}\times f_{\rm[CII]}$. The individual [CII]\ spectra contributing to the stacked spectrum are characterised by FWHM of the line core in the range $\sim150-800$ km s$^{-1}$\ (Table \ref{tab:sample}). The combination of line profiles with different widths may result in a stacked profile similar to the combination of a narrow and a broad Gaussian curve. Although the latter could not be as broad as the wings observed in the stacked spectrum of Fig. \ref{fig:stackedspec}, this could still contribute to $L_{\rm [CII]}^{\rm broad}$.
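This width-mixing effect can be checked with a toy stack (ours; uniform weights and a uniform spread of FWHM across the sample range are simplifying assumptions):

```python
import numpy as np

def gauss(v, fwhm):
    """Unit-peak Gaussian of the given FWHM."""
    s = fwhm / 2.355
    return np.exp(-0.5 * (v / s) ** 2)

v = np.arange(-3000.0, 3030.0, 30.0)
fwhms = np.linspace(150.0, 800.0, 48)      # core widths spanning the sample range
stacked = np.mean([gauss(v, f) for f in fwhms], axis=0)

# The mixed profile develops weak shoulders, but nothing like a
# FWHM ~ 1700 km/s wing: at |v| = 1500 km/s it is already negligible.
wing = stacked[np.argmin(np.abs(v - 1500.0))]
```

Even the widest core in the mix contributes essentially nothing beyond $|v|\sim1500$ km s$^{-1}$, consistent with the statement that mixed narrow profiles cannot mimic the observed wings, though they can still add some flux to the broad component.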
We quantified this contribution by stacking Gaussian curves with the same FWHM and $L_{\rm [CII]}$\ distributions as the QSOs in our sample, according to Eqs. (1) and (2). In the stacked profile, the [CII]\ emission in excess of a single Gaussian curve amounts to $\sim17$\% of $L_{\rm [CII]}^{\rm broad}$, indicating that the effect mentioned above can give only a marginal contribution to the flux of the broad component. \begin{figure}[] \centering \includegraphics[width=1\columnwidth]{lbol-lbroad-newcal.pdf} \caption{Broad [CII]\ wings luminosity as a function of the AGN bolometric luminosity for the different stacks performed (indicated by the top labels). Error bars on $L_{\rm AGN}$\ correspond to the 0.1 dex uncertainty associated with the UV-based bolometric correction by \cite{Runnoe12}.} \label{fig:summary} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.8\columnwidth]{E-2Gauss-newcal.pdf} \includegraphics[width=0.8\columnwidth]{F-2Gauss-newcal.pdf} \caption{Stacked integrated spectra for the low-SFR (\textit{E}) and high-SFR (\textit{F}) subgroups (properties of the individual samples are indicated in the top labels). For each plot, the \textit{first panel from top} shows the [CII]\ flux density as a function of velocity, in bins of 60 km s$^{-1}$, and the associated two-Gaussian best-fit model; the two individual components are shown with blue and green curves. Labels indicate the number of stacked sources and the luminosity of the broad [CII]\ wings. The inset shows a zoom on the broad component. \textit{Second panel from top:} residuals from the subtraction of the core component (blue line in the first panel). The green curve shows the best-fit broad component. \textit{Third panel:} residuals from the two-Gaussian fit.
The 1$\sigma$ rms of the spectrum is also indicated by the shaded region.} \label{fig:sfr-stack} \end{figure} \subsection{Outflow relation with QSO-galaxy properties}\label{sect:subsamples} In this section, we study the relation between the presence of cold outflows traced by the [CII]\ wings and the properties of the QSO-host galaxy system, such as AGN luminosity and SFR. Furthermore, to investigate the presence of broad [CII]\ wings without the shortcomings of combining [CII]\ profiles with significantly different widths, we separate the QSOs into two subsamples with FWHM$_{\rm [CII]}$$<400$ km s$^{-1}$\ (the median linewidth of the whole sample) and FWHM$_{\rm [CII]}$$>400$ km s$^{-1}$, respectively. This roughly corresponds to discriminating between less and more massive systems, given that the [CII]\ linewidth is a proxy of the dynamical mass of the galaxy (modulo disc inclination effects). We further separate our sample into two AGN luminosity bins: $L_{\rm AGN}$$<10^{46.8}$ erg s$^{-1}$\ (the median $L_{\rm AGN}$\ of the whole sample) and $L_{\rm AGN}$$>10^{46.8}$ erg s$^{-1}$. This allows us to investigate the relation between the [CII]\ outflow strength and $L_{\rm AGN}$. For simplicity, hereafter the different subsamples will be referred to as: \begin{itemize} \item \textit{A}:\hspace{0.1cm} FWHM$_{\rm [CII]}$$<$ 400 km s$^{-1}$, $L_{\rm AGN}$$<10^{46.8}$ erg s$^{-1}$ \item \textit{B}:\hspace{0.1cm} FWHM$_{\rm [CII]}$$<$ 400 km s$^{-1}$, $L_{\rm AGN}$$>10^{46.8}$ erg s$^{-1}$ \item \textit{C}:\hspace{0.1cm} FWHM$_{\rm [CII]}$$>$ 400 km s$^{-1}$, $L_{\rm AGN}$$<10^{46.8}$ erg s$^{-1}$ \item \textit{D}:\hspace{0.1cm} FWHM$_{\rm [CII]}$$>$ 400 km s$^{-1}$, $L_{\rm AGN}$$>10^{46.8}$ erg s$^{-1}$ \end{itemize} \noindent The stacked spectra for the different subsamples are shown in Fig. \ref{fig:s-fwhm-l-lbol-stack-fit}.
Owing to the smaller statistics, the sensitivity improvement is modest compared to the stack of the whole sample (see Table \ref{tab:outflow}) and the contribution of individual sources to the stacked spectrum is more evident. This is particularly clear in Fig. \ref{fig:s-fwhm-l-lbol-stack-fit} for stacks \textit{C} and \textit{D}, where the core of the stacked [CII]\ profile is broadened by a few sources exhibiting a rotation pattern in their [CII]\ spectra. We fit stacks \textit{A} and \textit{B} with a two-Gaussian model, while for stacks \textit{C} and \textit{D} we use a combination of two Gaussians to account for the broadening of the [CII]\ core, and a third Gaussian to reproduce the [CII]\ wings. As in Sect. \ref{sect:stacked-cube}, all parameters were left free to vary in the fit, with no constraints. The best-fit models are shown in Fig. \ref{fig:s-fwhm-l-lbol-stack-fit}. A faint broad [CII]\ emission component is still observed in the stacked spectra of the subgroups, although with lower significance than in the whole-sample stack presented in Sect. \ref{sect:totstack} and, in a few cases, with only marginal significance (see Table \ref{tab:outflow}). In sources with small FWHM of the [CII]\ core emission (stacks \textit{A} and \textit{B}), the wings are characterised by FWHM$_{\rm[CII]}^{\rm broad}$ up to $\sim850$ km s$^{-1}$, while broader wings with FWHM$_{\rm[CII]}^{\rm broad} \sim 2000$ km s$^{-1}$\ are observed in the subsamples with broader [CII]\ cores (stacks \textit{C} and \textit{D}). Similarly to what we found in the whole-sample stack, the peak of the broad [CII]\ wings is 5\% to 10\% of the core peak flux density, while the integrated flux of the broad component corresponds to $20-30$\% of the core component. Following the same method presented in Sect. \ref{sect:methods} for the stack of the total sample, the average signal-to-noise ratio of a spurious broad component is $\sim0.4-0.6$.
For each subsample, $L_{\rm [CII]}^{\rm broad}$\ and $L_{\rm AGN}$\ have been computed following the same method as in Sect. \ref{sect:totstack} (see Table \ref{tab:outflow}). We observe an increased $L_{\rm [CII]}^{\rm broad}$\ in the high-$L_{\rm AGN}$\ sources (see Fig. \ref{fig:s-fwhm-l-lbol-stack-fit}), despite the limited luminosity range spanned by the sources considered in our analysis. Indeed, Fig. \ref{fig:summary} shows that the stacked $L_{\rm [CII]}^{\rm broad}$\ follows a trend with $L_{\rm AGN}$\ similar to that observed by previous works in individual sources at lower redshift \citep[e.g.][]{Cicone14,Fiore17,Flutsch18}, which found that the outflow strength correlates with the AGN luminosity. This result indicates that the observed [CII]\ outflows are primarily QSO-driven. In contrast, we see only marginal variations of $L_{\rm [CII]}^{\rm broad}$\ with the width of the line core, indicating that the dynamics of the galactic disc does not significantly affect the detectability of the broad [CII]\ components associated with the outflow. An alternative driving mechanism of the fast [CII]\ emission could be the starburst in the QSO host galaxy, through supernovae and radiation pressure. To investigate this possibility in more detail, we considered two subgroups according to their SFR, as inferred from their $L_{\rm FIR}$\ (computed following \cite{Kennicutt12}), assuming that the bulk of the far-IR emission is associated with SF in the host galaxy: \begin{itemize} \item\textit{E}: SFR$_{\rm FIR}$ $<$ 600 M$_\odot$ yr$^{-1}$ \item\textit{F}: SFR$_{\rm FIR}$ $>$ 600 M$_\odot$ yr$^{-1}$ \end{itemize} The corresponding stacked spectra are shown in Fig. \ref{fig:sfr-stack}. It is evident that the [CII]\ core emission is mainly associated with SF activity, confirming that [CII]\ is a tracer of star formation, as previously found by e.g. \cite{DeLooze14,HerreraCamus15}.
The [CII]\ flux density of the core in stack \textit{E}, characterised by a variance-weighted SFR of 260 M$_\odot$ yr$^{-1}$, is in fact a factor of $\sim3.5$ lower than that of the highly star-forming sources stacked in \textit{F} (whose variance-weighted SFR is $\sim$ 1750 M$_\odot$ yr$^{-1}$). However, the broad [CII]\ wings are clearly present in both stacks, with comparable luminosity $L_{\rm [CII]}^{\rm broad}$$\sim4\times10^{8}$ L$_\odot$, indicating that SF does not significantly contribute to the outflows in the hosts of these powerful QSOs. Interestingly, as mentioned above, we observe the largest blueshift ($\sim240$ km s$^{-1}$) of the broad [CII]\ wings in the high-SFR QSOs, which are those hosted in dustier galaxies, possibly corroborating the interpretation that the blueshift of the [CII]\ broad component is associated with heavy obscuration by the host galaxy. For stacks {\it E} and {\it F} we calculate an average signal-to-noise ratio of a spurious broad component of $\sim0.4$ and $\sim0.5$, respectively (see Sect. \ref{sect:methods}). Moreover, we estimate the contribution to $L_{\rm [CII]}^{\rm broad}$\ due to the combination of [CII]\ profiles with different FWHM to be only $\sim10$\% for stack \textit{E} and $\sim20$\% for stack \textit{F}. \subsection{Outflow detectability}\label{sect:detectability} As mentioned in Sect. \ref{sect:intro}, to date a few tens of high-z QSOs have been targeted in [CII], some of them with deep ALMA observations. Despite this, J1148$+$5251 remains the only source in which a massive cold outflow has been detected, by \cite{Maiolino12} and \cite{Cicone15}. Among the deepest observations, \cite{Venemans17} targeted the [CII]\ emission in the $z=7.1$ QSO J1120$+$0641, with no detection of fast [CII]\ emission. The sensitivity reached by \cite{Venemans17} is comparable to that of our \textit{B} and \textit{C} subgroups, where the broad [CII]\ wings are only marginally detected (see Sect. \ref{sect:subsamples}).
A forthcoming work reaching similar depths (Carniani et al., in prep.), but exploiting configurations more sensitive to extended, diffuse emission (hence more suitable to detect extended outflows), will present two QSOs where fast [CII]\ emission associated with AGN-driven outflows may be present. Similarly to our work, \cite{Decarli18} computed the variance-weighted stacked spectrum of a sample of 23 ALMA [CII]-detected QSOs, finding no emission in excess of a Gaussian profile. However, the observations presented by \cite{Decarli18} consist of very short ($\sim8$ min) integrations and, therefore, sensitivities from $\sim$0.5 to 1.0 mJy beam$^{-1}$, covering the high-rms half of our sample. These observations correspond to $\sim10\%$ of the total on-source time covered by the QSOs in our sample. By applying our stacking procedure (Sect. \ref{sect:methods}) to the \citet{Decarli18} sample alone, we derive a median rms sensitivity of 0.17 mJy beam$^{-1}$, comparable to that reached by subsamples \textit{B} and \textit{C}, in which we find only a marginal presence of [CII]\ wings. We note that our approach differs from that of \citet{Decarli18}: they extract the individual QSO spectra from the brightest pixel, while we choose extraction apertures of four ALMA beams, recovering emission from extended scales. By fitting a two-Gaussian model to the resulting stacked spectrum, we find SNR$_{\rm [CII]}^{\rm broad}\sim2$. Accordingly, we agree with \citet{Decarli18} in finding no clear evidence of outflow signatures when stacking only their sample. \begin{figure*}[htb] \centering \includegraphics[width=1\linewidth]{chmap_total_newcal_small.pdf} \caption{Channel maps of the whole-sample stacked cube, corresponding to the central $6''\times6''$ in the velocity range $v\in$ [$-$1000, 1000] km s$^{-1}$\ (in bins of 80 km s$^{-1}$, as indicated by the top labels). The bulk of the [CII]\ core emission is collapsed in the channel $v\in$ [$-$390, 390] km s$^{-1}$.
Contours correspond to [-3, -2, 2, 3, 4, 5, 6]$\sigma$, where $\sigma$ is the rms sensitivity evaluated for each channel.} \label{fig:stackedcube} \end{figure*} In our stacked spectra we limited the contamination from companions, which are observed around a fraction of high-z, high-luminosity QSOs that can be as high as 50\% \citep[e.g.][]{Trakhtenbrot17, Fan2018}. Companions might in fact mimic a tail in the [CII]\ line profile similar to the broad [CII]\ wings indicative of outflowing gas. Most of these companions are located at much larger angular separations than the extraction regions of our spectra (see Sect. \ref{sect:totstack}) and, therefore, do not contaminate our spectra. However, a few of the high-z QSOs in our sample are known to have close companions with angular separations of about 1-2 arcsec which, despite the small extraction region used in our work, may partly contaminate the QSO emission. Specifically, the QSOs PJ231$-$20 and PJ308$-$21 from \cite{Decarli17} show a close companion galaxy with a [CII]\ luminosity comparable to that of the QSO. In both cases the companion is slightly redshifted and may consequently contaminate the red wing of the [CII]\ stacked spectrum. The QSO PJ167$-$13 presented by \cite{Willott17} is also likely associated with a companion at 0.9 arcsec separation, whose blueshifted ($\sim270$ km s$^{-1}$) [CII]\ emission corresponds to about 20\% of the QSO [CII]\ luminosity. As mentioned in Sect. \ref{sect:methods}, individual sources do not significantly affect the luminosity of the whole-sample stacked spectrum. As a further verification, we also excluded these three QSOs from the stack and found no significant variation in the luminosity of the broad [CII]\ wings. In our stacking procedure we assumed no relation between the luminosity of the broad [CII]\ wings and the AGN-host galaxy properties, such as the luminosity of the core of the [CII]\ emission line. Therefore, as mentioned in Sect.
\ref{sect:methods}, we performed a standard variance-weighted stack in flux density units. However, as our sample spans a factor of $\sim1.7$ in luminosity distance, we verified that our results persist when performing the stack in luminosity density (see Appendix B). We derived $L_{\rm [CII]}^{\rm broad}$$\sim4.7\times10^8$ L$_\odot$, comparable to that found in the original stacked spectrum (Table \ref{tab:outflow}). \begin{figure*}[htb] \centering \includegraphics[width=0.8\linewidth]{nobinsum-map-3-total-ok-kpc.png} \caption{\textit{Top:} luminosity maps of the high-velocity [CII]\ emission derived from the whole-sample stacked cube. From left to right, panels correspond to emission at increasing absolute velocities, specifically $|v|>400$ km s$^{-1}$, $|v|>550$ km s$^{-1}$\ and $|v|>700$ km s$^{-1}$. Maps were obtained by summing the emission at $>3\sigma$ in 80 km s$^{-1}$\ channel maps for at least three channels (i.e. $\gtrsim$ 250 km s$^{-1}$). The variance-weighted beam of the stacked cube is also indicated in the first map (solid line), together with the smallest beam contributing to the stack (dashed line). The thick solid contour encloses the region from which 50\% of $L_{\rm [CII]}^{\rm broad}$\ arises. \textit{Bottom:} signal-to-noise maps associated with the different velocity bins.} \label{fig:summaps} \end{figure*} \begin{figure*}[htb] \centering \floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,center},capbesidewidth=5cm}}]{figure}[\FBwidth] {\caption{Channel maps of the stacked cube obtained by removing the higher-resolution ($<0.6$ arcsec) observations (see Sect. \ref{sect:sample}) and tapering the remaining data to a common resolution of 1.2 arcsec. The displayed region corresponds to the central $6''\times6''$ in the velocity range $v\in$ [$-$1000, 1000] km s$^{-1}$\ (in bins of 80 km s$^{-1}$, as indicated by the top labels). The bulk of the [CII]\ core emission is collapsed in the channel $v\in$ [$-$390, 390] km s$^{-1}$.
Contours correspond to [-3, -2, 2, 3, 4, 5, 6]$\sigma$, where $\sigma$ is the rms sensitivity evaluated for each channel.}\label{fig:tapered_maps}} {\includegraphics[width=0.7\textwidth]{taper06-chmap-ok.png}} \vspace{-0.5cm} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.7\textwidth]{nobinsum-map-3-taper06-nohigh-snrok.png} \caption{\textit{Top:} luminosity maps of the high-velocity [CII]\ emission derived from the stacked cube of the $\gtrsim0.6$ arcsec sample, after tapering to a common 1.2 arcsec resolution. From left to right, panels correspond to emission at increasing absolute velocities, specifically $|v|>400$ km s$^{-1}$, $|v|>550$ km s$^{-1}$\ and $|v|>700$ km s$^{-1}$. Maps were obtained by summing the emission at $>3\sigma$ in 80 km s$^{-1}$\ channel maps for at least three channels (i.e. $\gtrsim$ 250 km s$^{-1}$). The 1.2 arcsec beam is also indicated in the first map (solid line). The thick solid contour encloses the region from which 50\% of $L_{\rm [CII]}^{\rm broad}$\ arises. \textit{Bottom:} signal-to-noise maps associated with the different velocity bins.} \label{fig:tapered_total} \end{figure*} \subsection{Stacked Cube}\label{sect:stacked-cube} In this section we present the results from the stacking of the ALMA data cubes for the QSOs in our sample. We produced a stacked cube by applying the stacking technique presented in Sect. \ref{sect:methods}, i.e. we used Eqs. (1) and (2) to compute the variance-weighted stacked flux density of each spaxel in the final cube. This is a very simple approach, primarily aimed at investigating the spatial scale of the [CII]\ outflows in the host galaxies of high-$z$ QSOs. Differently from the analysis of the integrated spectra in Sect. \ref{sect:totstack}, here we did not choose an extraction region but simply piled up the emission contributions to each pixel of the map.
However, the application of this stacking method to heterogeneous observations may lead to a few issues, as discussed in the following: \begin{itemize} \item Combining observations with different angular resolutions (see Fig. \ref{fig:rmshisto}) implies that emission from different physical scales may contribute to the total flux density of the same pixel. Degrading the observations to the lowest angular resolution would allow us to stack emission arising from similar physical scales (given that the physical-to-angular scale ratio changes only by a factor $\lesssim1.3$ in the redshift range of our sources). However, in interferometric data this would imply tapering the visibilities, i.e. lowering the weight of the extended baselines in the final map, at the expense of sensitivity. Therefore, we preferred not to modify the angular resolutions. Nevertheless, by computing the variance-weighted beamsize of our stacked cube, it is possible to obtain an indication of the angular scale above which most of the emission is resolved. For the all-sample stacked cube we computed an average angular resolution $\theta_{\rm res} = 0.52'' \times 0.68''$. In the case of point-source emission, the flux density contribution beyond the scale of the beam axes is only $\sim$6\%. We may therefore safely assume that emission on larger scales is mainly associated with extended, resolved emission. \item The lack of {\it a priori} information about the structure and orientation of possible extended [CII]\ emission, in particular at high velocities, may cause anisotropic or clumpy outflowing [CII]\ emission to be diluted in the stack. As a consequence, the true fraction of [CII]\ emission associated with extended structures may be significantly higher. \item The different angular resolutions of the interferometric observations are the result of different array configurations, which may filter out emission on different large angular scales.
As a rough estimate, the largest angular scale ($LAS$) that can be recovered by interferometric observations is $LAS\sim(4-6)\times\theta_{\rm res}$, where $\theta_{\rm res}$ is the angular resolution. In the case of our stacked cube, the flux loss of extended emission due to filtering starts to become important at $\sim2$ arcsec. \end{itemize} \noindent Keeping these potential issues in mind, Fig. \ref{fig:stackedcube} shows the central $6'' \times 6''$ region of the stacked cube obtained by combining all the high-$z$ QSOs in our sample. Specifically, channel maps of the [CII]\ emission are shown for the velocity range $v\in$ [$-$1000, 1000] km s$^{-1}$, in bins of 80 km s$^{-1}$. The bulk of the [CII]\ core emission is in the central $v\in$ [$-$390, 390] km s$^{-1}$. Compact [CII]\ emission is observed in almost all channels at significances from $\gtrsim2\sigma$ up to $\sim6\sigma$, in addition to a few offset clumps. At $|v|\sim400-600$ km s$^{-1}$\ there is also some indication of extended [CII]\ emission. The channel maps of Fig. \ref{fig:stackedcube} suggest that we might be observing [CII]\ emission clumps moving at different velocities and characterised by a range of velocity dispersions. To build a global picture of the [CII]\ outflows, we created an integrated luminosity map of the high-velocity [CII]\ emission by summing the emission contributions, in the 80 km s$^{-1}$\ channel maps of the whole-sample stacked cube, detected at $> 3\sigma$ significance for at least three channels (i.e. $\gtrsim$ 250 km s$^{-1}$). The result is shown in Fig. \ref{fig:summaps}, where the maps corresponding to the velocity bins $|v|>400$ km s$^{-1}$, $|v|>550$ km s$^{-1}$\ and $|v|>700$ km s$^{-1}$\ are displayed. We also plot the associated signal-to-noise ratio maps. As expected, most of the fast [CII]\ emission arises from the central regions, where all sources contribute to the stack.
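The channel-selection step just described can be sketched as follows; this is a schematic implementation only, and the array shapes and per-channel rms handling are assumptions rather than the actual pipeline.

```python
import numpy as np

def high_velocity_map(cube, rms_per_chan, nsigma=3.0, min_consec=3):
    """Sum, per spatial pixel, the emission detected above nsigma
    in at least `min_consec` consecutive velocity channels.
    cube: (nchan, ny, nx); rms_per_chan: (nchan,)."""
    det = cube > nsigma * rms_per_chan[:, None, None]
    keep = np.zeros_like(det)
    # flag only detections belonging to runs of >= min_consec channels
    for k in range(det.shape[0] - min_consec + 1):
        run = det[k:k + min_consec].all(axis=0)
        keep[k:k + min_consec] |= run
    return np.where(keep, cube, 0.0).sum(axis=0)
```

Applied separately to the channel subsets with $|v|>400$, $|v|>550$ and $|v|>700$ km s$^{-1}$, this selection yields maps analogous to the panels of Fig. \ref{fig:summaps}.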
At the highest velocities ($|v|>700$ km s$^{-1}$) the nuclear outflow is still present at $\sim3\sigma$ significance. At moderate velocities, $|v|\sim400-550$ km s$^{-1}$, we observe extended emission out to $\sim1.5$ arcsec, corresponding to $\sim9$ kpc at $\langle z_{\rm stack}\rangle = 5.8$, fully resolved when compared with the average beam of the observations in the stack. We cannot exclude that part of this extended emission is due to contamination from the [CII]\ core emission. Marginally resolved emission is observed also at higher velocities. However, we stress that we might be losing a significant part of the extended emission in our stack. Interestingly, the apparent size of the outflow appears to decrease as a function of velocity. This is what is expected in an approximately spherically symmetric outflow as a consequence of projection effects. On the other hand, the stacked cube is a combination of outflows which may differ in size, morphology and orientation between sources. The modest significance of the stacked data does not allow us to draw conclusions on the outflow geometry. Nonetheless, we can compute the spatial scale at which the bulk of the observed fast [CII]\ is emitted as the half-light radius of the $|v|>400$ km s$^{-1}$\ map (see Fig. \ref{fig:summaps}), which has the highest SNR. This radius corresponds to the average extent of the region enclosing 50\% of the [CII]\ emission, indicated by the black contour in Fig. \ref{fig:summaps}. We derived a beam-deconvolved half-light radius $R_{\rm out} = \sqrt{R_{\rm 50\%}^2 - R_{\rm beam}^2}\sim 0.60$ arcsec, where $R_{\rm beam}$ is the weighted beam radius of the stacked cube. $R_{\rm out}$ corresponds to $\sim3.5$ kpc at $\langle z_{\rm stack}\rangle$.
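As a quick numerical check, the quadrature beam deconvolution and the angular-to-physical conversion can be reproduced as follows; the input radii and the kpc-per-arcsec scale at $z=5.8$ are illustrative assumed values, not the measured ones.

```python
import math

R50 = 0.67     # arcsec, half-light radius of the |v| > 400 km/s map (assumed)
R_beam = 0.30  # arcsec, weighted beam radius of the stacked cube (assumed)

# Gaussian (quadrature) beam deconvolution
R_out = math.sqrt(R50**2 - R_beam**2)

# approximate angular scale at z ~ 5.8 for a flat LCDM cosmology (assumed)
kpc_per_arcsec = 5.87
R_out_kpc = R_out * kpc_per_arcsec
```

With these inputs the deconvolved radius is $\sim$0.60 arcsec, i.e. $\sim$3.5 kpc, matching the order of magnitude quoted above.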
To ensure that the presence of spatially extended [CII]\ emission in the stacked cube was not an artefact of the combination of different ALMA beam sizes, we repeated the whole-sample stack of the data cubes after degrading the observations to a common angular resolution. We produced ALMA cubes at the worst angular resolution in our sample by tapering the visibilities to 1.2 arcsec. We did not consider observations with angular resolution $<0.6''$, as tapering to a resolution worse by a factor of $>2$ would result in a major loss of the original flux. The channel maps from the resulting stacked cube are shown in Fig. \ref{fig:tapered_maps}. The corresponding integrated luminosity map of the high-velocity emission is shown in Fig. \ref{fig:tapered_total}. In agreement with Fig. \ref{fig:summaps}, we find that the high-velocity [CII]\ emission is mainly located in a bright central component and extends up to $\sim1.5-2$ arcsec. Indeed, combining the observations associated with the shorter baselines in our sample allowed us to recover some extended [CII]\ emission also in the high-velocity ($|v|>550$ km s$^{-1}$\ and $|v|>700$ km s$^{-1}$) bins. By following the same procedure presented in Sect. \ref{sect:stacked-cube}, we derived a beam-corrected half-light radius $R_{\rm out}^{\rm taper} \sim 4.6$ kpc at the representative redshift of the stack, $\langle z_{\rm stack}^{\rm taper}\rangle\simeq6.2$, slightly larger than the radius inferred from the stack of the total sample. \section{Discussion} As mentioned in Sect. \ref{sect:intro}, most of the [CII]\ emission in IR-bright galaxies is expected to arise from PDRs \citep[e.g.][]{Sargsyan12}, accounting for about 70\% of the total [CII]\ emission. Under the assumption that the [CII]\ emission is optically thin, it is possible to link the luminosity of the broad [CII]\ wings to the mass of the outflowing atomic gas. In the case of optically thick [CII], the true outflowing gas mass would be larger.
It is therefore possible to estimate the typical energetics of [CII]\ outflows in high-redshift, high-luminosity QSOs, in the central $\sim3$ kpc regions (see Sect. \ref{sect:stacked-cube}). Specifically, to compute the outflow mass of atomic neutral gas we use the relation from \cite{Hailey-Dunsheath10}: \begingroup\makeatletter\def\f@size{9.2}\check@mathfonts \begin{equation} M_{\rm out}/{\rm M_\odot} = 0.77 \left( \frac{0.7L_{\rm [CII]}}{\rm L_\odot} \right)\left( \frac{1.4\times10^{-4}}{X_{\rm C^+}} \right)\times\frac{1 + 2e^{-91\,{\rm K}/T}+n_{\rm crit}/n}{2e^{-91\,{\rm K}/T}} \label{eq:m_cii} \end{equation} \endgroup \noindent where $X_{\rm C^+}$ is the C$^+$ fraction per hydrogen atom, $T$ is the gas temperature, $n$ is the gas density and $n_{\rm crit}\sim3\times10^3$ cm$^{-3}$ is the [CII]$\lambda$158 $\mu$m critical density. We use Eq. \ref{eq:m_cii} in the approximation $n \gg n_{\rm crit}$, thus deriving a lower limit on the outflowing gas mass. This choice is in agreement with \cite{Maiolino05}, who estimated a gas density of $\sim10^5$ cm$^{-3}$ in J1148$+$5251, and is also supported by the large densities typically observed in QSO outflows \citep{Aalto12,Aalto15}; it moreover allows us to directly compare with the energetics of the outflow detected in that QSO. Following \cite{Maiolino12} and \cite{Cicone15} we consider a conservative $X_{\rm C^+}\sim10^{-4}$ and a gas temperature of 200 K, both typical of PDRs \citep{Hailey-Dunsheath10}. We recall that, although the molecular gas phase in the ISM typically has lower temperatures, even the molecular gas in the outflow is expected to reach higher temperatures, of a few hundred K \citep{RichingsGiguere18a}. Assuming a temperature between 100 K and 1000 K would imply a variation of only 20\% in the resulting gas mass. The 0.7 factor in the first parenthesis of Eq.
\ref{eq:m_cii} accounts for the fraction of [CII]\ emission typically arising from the neutral medium in PDRs, while the remaining $\sim$30\% typically comes from the partially ionised phase \citep{Stacey10, Maiolino12, Cicone14}. By applying Eq. \ref{eq:m_cii} to the stack of the whole sample we infer a mass of the outflowing neutral gas $M_{\rm out} = (3.7\pm0.7) \times10^8$ M$_\odot$ (see Table \ref{tab:outrate}). To compute the energetics of the [CII]\ outflows of the high-$z$ QSOs in our sample, we assume the scenario of time-averaged expelled shells or clumps \citep{Rupke&Veilleux05b}: \begin{equation}\label{eq:moutrate} \dot{M}_{\rm out} = \frac{v_{\rm out} \times M_{\rm out}}{R_{\rm out}} \end{equation} where $v_{\rm out}= |\Delta v_{\rm broad}| + \rm FWHM_{[CII]}^{broad}/2$ (see Table \ref{tab:outrate}), $\Delta v_{\rm broad}$ is the velocity shift of the centroid of the broad [CII]\ wings with respect to the systemic emission, and $R_{\rm out}\sim3.5$ kpc is the extension of the [CII]\ broad wings derived from the stacked cube in Sect. \ref{sect:stacked-cube}. We calculate the kinetic power associated with the [CII]\ outflows as: \begin{equation} \dot{E}_{\rm out} = \frac{1}{2} \dot{M}_{\rm out} \times v_{\rm out}^2 \end{equation} and the momentum load: \begin{equation}\label{eq:pout} \dot{P}_{\rm out}/\dot{P}_{\rm AGN} = \frac{\dot{M}_{\rm out}\times v_{\rm out}}{L_{\rm AGN}/c} \end{equation} where $\dot{P}_{\rm AGN}=L_{\rm AGN}/c$ is the AGN radiation momentum rate. This approach allows us to directly compare our findings with the collection of 30 low-redshift AGN by \cite{Flutsch18}, for which the energetics of spatially resolved molecular and (in $\sim$ one third of the sources) neutral [CII]\ and ionised outflows have been homogeneously calculated.
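The chain from Eq. \ref{eq:m_cii} to Eq. \ref{eq:pout} can be sketched numerically as follows, in the $n \gg n_{\rm crit}$ limit with $X_{\rm C^+}=10^{-4}$ and $T=200$ K as adopted above. The unit conversions are standard constants, the whole-sample inputs are the rounded values quoted in the text, and the representative $L_{\rm AGN}=10^{47}$ erg s$^{-1}$ is an assumed illustrative value, so the outputs are indicative only.

```python
import math

def m_out_cii(L_cii, X_C=1e-4, T=200.0):
    """Outflowing atomic gas mass (Msun) from the broad-[CII]
    luminosity (Lsun), in the n >> n_crit limit of Eq. (m_cii)."""
    boltz = 2.0 * math.exp(-91.0 / T)
    return 0.77 * (0.7 * L_cii) * (1.4e-4 / X_C) * (1.0 + boltz) / boltz

def outflow_energetics(M_out, v_out, R_out_kpc, L_agn):
    """M_out [Msun], v_out [km/s], R_out [kpc], L_agn [erg/s] ->
    (Mdot [Msun/yr], Edot [erg/s], momentum load)."""
    KPC_YR_PER_KMS = 1.0227e-9          # 1 km/s expressed in kpc/yr
    MSUN_G, C_CMS, YR_S = 1.989e33, 2.998e10, 3.156e7
    Mdot = M_out * v_out * KPC_YR_PER_KMS / R_out_kpc
    mdot_cgs = Mdot * MSUN_G / YR_S     # g/s
    v_cgs = v_out * 1e5                 # cm/s
    Edot = 0.5 * mdot_cgs * v_cgs**2    # erg/s
    p_load = mdot_cgs * v_cgs / (L_agn / C_CMS)
    return Mdot, Edot, p_load

# whole-sample stack: M_out = 3.7e8 Msun, v_out = 960 km/s, R_out = 3.5 kpc;
# L_AGN = 1e47 erg/s is an assumed representative luminosity
Mdot, Edot, p_load = outflow_energetics(3.7e8, 960.0, 3.5, 1e47)
```

With these inputs the sketch gives a mass outflow rate of order 100 M$_\odot$ yr$^{-1}$, a kinetic power of a few $10^{43}$ erg s$^{-1}$ and a momentum load of order 0.2, consistent (within the rounding of the inputs) with the values listed in Table \ref{tab:outrate}.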
\begin{figure*}[] \centering \includegraphics[width=0.48\textwidth]{mout-lbol-errorbars.pdf} \includegraphics[width=0.48\textwidth]{vout-lbol-newcal.pdf} \includegraphics[width=0.96\textwidth]{pout-vout-newcal.pdf} \caption{[CII]\ outflow parameters. \textit{(a):} mass outflow rate as a function of $L_{\rm AGN}$\ for the different stacked spectra (stars, see legend for details), compared to the sample of 30 low-redshift AGN from \cite{Flutsch18} for which spatially resolved molecular (blue) and, in one third of the sample, ionised (green) outflows have been observed. We also include the compilation of ionised outflows (hollow green squares) with spatial information in $z\sim0.1-3$ AGN from \cite{Fiore17}, recomputed according to Eqs. \ref{eq:moutrate}-\ref{eq:pout}. Purple squares are local systems for which the outflow has been traced in [CII]\ through observations with the \textit{Herschel Space Observatory} \citep{Janssen16,Flutsch18}. By applying the atomic-to-molecular outflowing gas mass correction by \cite{Flutsch18}, the molecular+atomic mass outflow rates are shown with circles. The typical $\sim0.3$ dex uncertainty on $\dot{M}_{\rm out}$ for the [CII]\ outflows found in our $z\sim6$ QSOs (similar to that of outflows in the atomic neutral and molecular phases in low-$z$ AGN) is shown by the black solid line, while the uncertainty on $\dot{M}_{\rm out}$ for the ionised outflows is shown by the green line. \textit{(b):} outflow velocity as a function of $L_{\rm AGN}$. \textit{(c):} kinetic power as a function of $L_{\rm AGN}$. The dotted, dashed, solid and dot-dashed curves indicate kinetic powers equal to 10\%, 1\%, 0.1\% and 0.01\% of the AGN luminosity. \textit{(d):} momentum load factor as a function of the outflow velocity. The horizontal line corresponds to $\dot{P}_{\rm out}=\dot{P}_{\rm AGN}$.} \label{fig:mout-lbol} \end{figure*} \begin{table*} \centering \caption{Outflow parameters associated with the different stacked integrated spectra.
Values for the whole-sample stack are listed in boldface. Columns give the following information: (1) stacked sample, (2) outflow velocity, (3) atomic gas mass associated with the broad [CII]\ wings, (4) mass outflow rate, computed following \cite{Flutsch18}, (5) kinetic power and (6) momentum load factor of the outflow.} \makebox[1\linewidth]{ \setlength{\tabcolsep}{3 pt} \begin{tabular}{lccccccc} \toprule Stack & \multicolumn{2}{c}{} & $v_{\rm out}$ & $M_{\rm out}$ & $\dot{M}_{\rm out}$ & $\dot{E}_{\rm out}$ & $\dot{P}_{\rm out}$/$\dot{P}_{\rm AGN}$ \\ & & & [km s$^{-1}$] & [10$^8$ M$_\odot$] & [M$_\odot$ yr$^{-1}$] & [10$^{43}$ erg s$^{-1}$] & \\ (1) & & & (2) & (3) & (4) & (5) & (6)\\ \midrule \textbf{Whole sample} & \multicolumn{2}{c}{} & \textbf{960 $\pm$ 120} & \textbf{3.7 $\pm$ 0.7} & \textbf{100 $\pm$ 20} & \textbf{2.6 $\pm$ 0.7} & \textbf{0.20 $\pm$ 0.05} \\ A & \multicolumn{2}{c}{FWHM$_{\rm [CII]}$$<$ 400 km s$^{-1}$, $L_{\rm AGN}$$<10^{46.8}$ erg s$^{-1}$} & 550 $\pm$ 110 & 2.4 $\pm$ 0.9 & 35 $\pm$ 15 & 0.30 $\pm$ 0.15 & 0.17 $\pm$ 0.07 \\ B & \multicolumn{2}{c}{FWHM$_{\rm [CII]}$$<$ 400 km s$^{-1}$, $L_{\rm AGN}$$>10^{46.8}$ erg s$^{-1}$} & 440 $\pm$ 90 & 4.6 $\pm$ 1.5 & 55 $\pm$ 20 & 0.33 $\pm$ 0.15 & 0.04 $\pm$ 0.02 \\ C & \multicolumn{2}{c}{FWHM$_{\rm [CII]}$$>$ 400 km s$^{-1}$, $L_{\rm AGN}$$<10^{46.8}$ erg s$^{-1}$} & 1180 $\pm$ 380 & 3.2 $\pm$ 1.0 & 115 $\pm$ 50 & 5.0 $\pm$ 2.3 & 0.58 $\pm$ 0.24 \\ D & \multicolumn{2}{c}{FWHM$_{\rm [CII]}$$>$ 400 km s$^{-1}$, $L_{\rm AGN}$$>10^{46.8}$ erg s$^{-1}$} & 1100 $\pm$ 140 & 6.2 $\pm$ 1.2 & 185 $\pm$ 35 & 7.4 $\pm$ 2.0 & 0.28 $\pm$ 0.07 \\ E & \multicolumn{2}{c}{SFR$_{\rm FIR}$ $<$ 600 M$_\odot$ yr$^{-1}$} & 1210 $\pm$ 230 & 3.9 $\pm$ 1.1 & 135 $\pm$ 40 & 3.0 $\pm$ 0.7 & 0.50 $\pm$ 0.12 \\ F & \multicolumn{2}{c}{SFR$_{\rm FIR}$ $>$ 600 M$_\odot$ yr$^{-1}$} & 930 $\pm$ 140 & 3.6 $\pm$ 0.8 & 95 $\pm$ 30 & 2.5 $\pm$ 0.8 & 0.12 $\pm$ 0.03 \\ \bottomrule \end{tabular} } \label{tab:outrate} \end{table*} The resulting outflow
parameters for the whole sample stack and the different subsamples considered (see Sect. \ref{sect:subsamples}) are listed in Table \ref{tab:outrate}. We derive a mass outflow rate of $\dot{M}_{\rm out} = 100\pm20$ M$_\odot$ yr$^{-1}$\ for the stack of the whole sample, while for the large-FWHM, high-$L_{\rm AGN}$\ subgroup (stack $D$) we find $\dot{M}_{\rm out}\sim200$ M$_\odot$ yr$^{-1}$. These outflow rates refer only to the atomic neutral component. \cite{Flutsch18} found that, for AGN-driven outflows, the molecular mass outflow rates are of the same order as the atomic neutral outflow rates, while the contribution from the ionised gas is negligible, at least in the luminosity range probed by them. They find that the molecular-to-ionised outflow rate ratio increases with luminosity, in contrast with the findings of \cite{Fiore17}; the discrepancy may originate from the fact that the latter study investigates disjoint samples, or from the different luminosity ranges sampled. If we assume that the relations found by \cite{Flutsch18} also apply to these distant luminous QSOs, then the implied total outflow rate is twice the value inferred from [CII]. Fig. \ref{fig:mout-lbol}a shows the mass outflow rate as a function of the AGN bolometric luminosity. Stars show the atomic neutral outflow rate inferred from the [CII] broad wings for the various stacked spectra, as indicated in the legend. The circles, connected to the stars through dashed lines, indicate the outflow rate inferred by also accounting for the molecular gas content in the outflow, assuming the relation given by \cite{Flutsch18}. Blue, green and purple squares show the molecular, ionised and atomic neutral outflow rates measured by \cite{Flutsch18} in local AGN.
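For orientation, the stacked values in Table \ref{tab:outrate} can be reproduced with the standard time-averaged thin-shell estimates ($\dot{M}_{\rm out}=M_{\rm out}v_{\rm out}/R_{\rm out}$, $\dot{E}_{\rm out}=\frac{1}{2}\dot{M}_{\rm out}v_{\rm out}^2$, $\dot{P}_{\rm out}/\dot{P}_{\rm AGN}=\dot{M}_{\rm out}v_{\rm out}c/L_{\rm AGN}$). The sketch below is purely illustrative; the outflow radius $R_{\rm out}\sim3.5$ kpc is taken from the stacked-cube analysis, and the adopted $L_{\rm AGN}=10^{47}$ erg s$^{-1}$ is an assumed representative luminosity, not a measured quantity:

```python
# Thin-shell outflow estimates from the whole-sample stack values.
# Mdot = M*v/R, Edot = 0.5*Mdot*v^2, momentum load = Mdot*v / (L_AGN/c).
MSUN = 1.989e33          # solar mass [g]
YR   = 3.156e7           # year [s]
KPC  = 3.086e21          # kiloparsec [cm]
C    = 2.998e10          # speed of light [cm/s]

v     = 960e5            # outflow velocity [cm/s]
M_out = 3.7e8 * MSUN     # outflow gas mass [g]
R_out = 3.5 * KPC        # outflow radius [cm] (stacked-cube estimate)
L_AGN = 1e47             # ASSUMED representative AGN luminosity [erg/s]

mdot = M_out * v / R_out            # mass outflow rate [g/s]
edot = 0.5 * mdot * v**2            # kinetic power [erg/s]
load = (mdot * v) / (L_AGN / C)     # momentum load factor

print(f"Mdot ~ {mdot / (MSUN / YR):.0f} Msun/yr")   # ~100 Msun/yr
print(f"Edot ~ {edot:.1e} erg/s")                   # ~3e43 erg/s
print(f"P_out/P_AGN ~ {load:.2f}")
```

With these inputs the estimates land close to the tabulated whole-sample values ($\dot{M}_{\rm out}\approx100$ M$_\odot$ yr$^{-1}$, $\dot{E}_{\rm out}\approx3\times10^{43}$ erg s$^{-1}$, load factor $\approx0.2$).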
In the latter case the neutral component is obtained through [CII]\ observations of local galaxies performed with the \textit{Herschel} infrared space telescope \citep{Janssen16}, and purple circles show the effect of correcting the atomic outflow rate as discussed above. Hollow green squares show the ionised outflow rates inferred by \cite{Fiore17}; these are from a disjoint sample (with no measurements for the molecular and atomic phases) and may be subject to different selection effects, but they have the advantage of extending to much higher luminosities than the sample in \cite{Flutsch18}. Fig. \ref{fig:mout-lbol}a illustrates the well-known phenomenon that the outflow rate increases with the AGN luminosity and that the outflow rate is generally dominated by the neutral phases (atomic and molecular). However, at the very high luminosities probed by our stacked spectra of the most distant QSOs, the outflow rates associated with the neutral phase appear to deviate from the trend observed locally, and seem similar to those observed in the ionised phase. For completeness, Fig. \ref{fig:mout-lbol}b shows the outflow velocity as a function of the AGN luminosity, illustrating that the velocity of the outflow observed in the stacked spectra is consistent with the trend observed in other AGN and in other phases, further confirming that these outflows are QSO-driven. Fig. \ref{fig:mout-lbol}c shows the kinetic power as a function of the AGN luminosity, with the same symbols as in Fig.~\ref{fig:mout-lbol}a. For our stacked spectra, the kinetic power is between 0.01\% and 0.5\% of $L_{\rm AGN}$, i.e. much lower than what is expected from AGN ``energy-driven'' outflow models \citep[$\dot{E}_{\rm out}\sim0.05\times L_{\rm AGN}$, e.g.][]{DiMatteo05,ZubovasKing2012}, which ascribe outflows to nuclear winds that expand in an energy-conserving way. Fig. \ref{fig:mout-lbol}d shows the outflow momentum load factor, i.e.
the outflow momentum rate relative to $\dot{P}_{\rm AGN}$, as a function of the outflow velocity. For our stacked spectra $\dot{P}_{\rm out}/\dot{P}_{\rm AGN}\lesssim1$, while ``energy-driven'' outflow models would predict momentum load factors of $\sim20$. These results suggest that the outflows in these powerful quasars are either energy-driven but with poor coupling to the ISM of the host galaxy, or driven by direct radiation pressure onto the dusty clouds \citep[e.g.][]{Ishibashi18}. In either case the outflow is unlikely to be in its ``ejective'' mode, i.e. very effective in removing gas from the entire galaxy and hence in completely suppressing star formation \citep{Costa15,Costa17,Bourne14,Bourne15,Roos15,Gabor14}, although such an ejective mode can be effective in clearing the gas content and quenching star formation in the central regions. Moreover, the outflow can be effective in heating the circumgalactic medium and therefore preventing further accretion of fresh gas onto the galaxy, resulting in a delayed quenching of the galaxy by ``starvation'' \citep{Peng&Maiolino15}. It could also be that, in contrast with what is observed in low-luminosity local AGN, in these very luminous, distant QSOs the bulk of the outflow is highly ionised. The observation that in other very luminous QSOs the ionised outflow rate, kinetic power and momentum rate are similar to the same quantities locally observed in the molecular phase (Fig.~\ref{fig:mout-lbol}) does suggest that the balance between the various phases is different in these systems \citep{Bischetti17,Fiore17}. However, as illustrated in Fig. \ref{fig:mout-lbol}, even the ionised phase does not seem to be massive and powerful enough to match the requirements of the energy-driven scenario with high coupling. Alternatively, the interferometric data used in our stack of the [CII]\ emission may miss extended, diffuse emission associated with outflows.
Indeed, a large fraction of the data have angular resolution higher than 0.7$''$, which may prevent them from detecting emission on scales larger than $\sim3-4''$. The lack of sensitivity to extended, diffuse emission may indeed be a major problem in very distant systems, owing to the rapid cosmological dimming of the surface brightness, which decreases as $\rm \sim (1+z)^4$. This scenario may also explain why the [CII]\ outflow rate and kinetic power in the stacked data of distant QSOs do not seem to increase significantly with respect to the local, lower-luminosity AGN (purple square symbols in Fig.~\ref{fig:mout-lbol}) whose [CII]\ broad wings were observed with \textit{Herschel}. Within this context it is interesting to note that in the QSO J1148$+$5251 at z$=$6.4, \cite{Maiolino12} and \cite{Cicone15} did detect a very extended outflow on scales of $\sim 6''$, by exploiting low angular resolution observations. J1148$+$5251 (black square in Fig. \ref{fig:mout-lbol}) is indeed characterised by a larger outflow rate and higher kinetic power with respect to the stacked measurements. However, even for J1148$+$5251 the kinetic power and momentum rate appear to be significantly lower than what is expected in the simple scenario of energy-driven outflows with high coupling to the ISM. \section{Conclusions} In this work we have presented the stacking analysis of a sample of 48 QSOs at $4.5<z<7.1$ detected in [CII]\ by ALMA, equivalent to an observation of $\sim34$ hours on-source, aimed at investigating the presence and the properties of broad [CII]\ wings tracing cold outflows. The stack allows us to reach an improvement in sensitivity by a factor of $\sim14$ with respect to the previous observation of a massive [CII]\ outflow in J1148$+$5251 at $z\sim6.4$ \citep{Maiolino12,Cicone15}. \begin{itemize} \item From the stacked integrated spectra, we clearly detect broad [CII]\ wings, tracing cold outflows associated with $z\sim6$ QSOs and whose velocities exceed 1000 km s$^{-1}$.
This weak, broad component has not been previously detected in individual observations (except for the case of J1148$+$5251) because of insufficient sensitivity. The same limitation applies to the stack recently performed by \cite{Decarli18} on the sample of 23 $z\sim6$ QSOs with ALMA [CII]\ detections, which were mostly observed with very short (few minutes) exposures. In fact, similarly to \cite{Decarli18}, we find no significant broad [CII]\ wings in the stacked spectrum of their sources alone. \item The redshifted [CII] wing is fainter than the blueshifted [CII] wing. This may be associated with the asymmetric distribution of the spectral coverage of the spectra used in the stacked spectrum. However, if confirmed with additional data, this asymmetry would suggest that in these systems the dusty gas in the host galaxy has a column density high enough to obscure the receding component of the outflows with respect to our line of sight. High dust column densities, capable of absorbing even at far-IR and sub-mm wavelengths, have been observed in local ULIRGs. \item By splitting the sample into AGN luminosity and SFR bins, we observe that the strength of the stacked broad component correlates with the AGN luminosity, but does not depend on the SFR. This indicates that the QSOs are the primary driving mechanism of the [CII]\ outflows in these systems. Moreover, we find that the broad component is strongly blueshifted in the stack with high SFR and nearly symmetric in the stack with low SFR. Since the SFR correlates with the gas and dust content of the galaxy, this finding corroborates the interpretation that the blueshift of the [CII] broad component might be associated with heavy dust absorption. \item By stacking the ALMA data cubes, we investigate the morphology of the [CII]\ outflows in our sample and find that the high-velocity [CII]\ emission extends up to $R_{\rm out}\sim3.5$ kpc.
However, we cannot exclude that additional, more extended emission is present but missed by the interferometric data used for the stacking. Moreover, averaging outflows with different orientations and clumpiness may result in dilution effects affecting the observed intensity and extension of the [CII]\ broad wings in the stacked cube. \item From the stacked cube we infer an average atomic mass outflow rate $\dot{M}_{\rm out} \sim100$ M$_\odot$ yr$^{-1}$, which doubles for the stack of the most luminous sources. By correcting for the atomic-to-molecular gas ratio found by \cite{Flutsch18}, the former value translates into a total mass outflow rate of about 200 M$_\odot$ yr$^{-1}$. The associated kinetic powers are consistent with 0.1\% of $L_{\rm AGN}$\ for most stacks, while the momentum load factors span the range $0.1-1$; these $\dot{M}_{\rm out}$ are lower than what is observed in cold outflows associated with local, lower-luminosity AGN, and lower than the expectations of standard energy-driven outflow models (hence indicating either a low coupling with the ISM and/or a different driving mechanism, such as direct radiation pressure on the dusty clouds). As a consequence, QSO-driven outflows in the early universe may not have been very effective in clearing galaxies of their gas content, although they may have been effective in clearing and quenching the central regions, and in heating the galaxy halo, hence resulting in a delayed star formation quenching as a consequence of starvation. \end{itemize} Future deep ALMA follow-up observations will allow us to confirm the presence of [CII]\ outflows in individual high-$z$ QSOs. Furthermore, the increasing number of sources available in the ALMA archive will improve the statistics, enabling us to reduce the uncertainties on the cold outflow parameters in the early Universe. \begin{acknowledgements} We are grateful to the anonymous referee for valuable feedback that helped us to improve the paper.
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00243.S, 2012.1.00604.S, 2012.1.00676.S, 2012.1.00882.S, 2013.1.01153.S, 2015.1.01115.S and 2016.1.01515.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. MB and EP acknowledge financial support from ASI and INAF under the contract 2017-14-H.0 ASI-INAF. RM and AF acknowledge ERC Advanced Grant 695671 "QUENCH" and support by the Science and Technology Facilities Council (STFC). FF and EP acknowledge financial support from INAF under the contract PRIN INAF 2016 FORECAST. \end{acknowledgements} \bibpunct{(}{)}{;}{a}{}{,} \bibliographystyle{aa}
{\it Introduction.}--- In Weyl semimetals and Weyl superconductors, low-energy excitations behave as Weyl fermions characterized by nonzero Berry curvatures in momentum space, which stem from monopole charges at Weyl points \cite{vishwanath,balents,balents2,murakami,volovik,volovik2,volovik3,Hosur_review,PhysRevB.86.054504}. This feature results in various intriguing electromagnetic responses associated with the chiral anomaly. For instance, in the case of Weyl semimetals, the chiral anomaly gives rise to the anomalous Hall effect, the chiral magnetic effect, and negative magnetoresistivity \cite{NN_1983, Fukushima_CME2008,Vazifeh2013,Zyuzin_AHE2012,Liu2013,PhysRevB.88.104412, Goswami2013,Lucas9463}, some of which have already been experimentally verified in real materials \cite{volovik4,Xu613,LvTaAs2015,TaAsWeng2015,huang-nat,XiaochunTaAs2015,Shekhar2015}. For Weyl superconductors, however, chiral anomaly phenomena cannot be realized by simply applying electromagnetic fields, because Weyl-Bogoliubov quasiparticles do not carry definite charges. Instead, the chiral anomaly in the superconducting state can be induced by emergent electromagnetic fields generated by spatially inhomogeneous textures of order parameters, or by lattice strain \cite{guinea,PhysRevB.92.165131,T.Hughes2011,T.Hughes2013,Parrikar2014,Chandia,Shitade2014,Gromov2014,PhysRevLett.115.177202,PhysRevLett.116.166601,PhysRevB.94.241405,PhysRevX.6.041046,PhysRevX.6.041021,PhysRevB.96.081110,PhysRevB.95.041201,PhysRevB.96.224518}. In this Letter, we demonstrate that negative magnetoresistivity of longitudinal thermal currents induced by an emergent magnetic field can be a signature of the chiral anomaly; i.e., the thermal conductivity of Weyl quasiparticles increases as the emergent magnetic field parallel to the temperature gradient increases, even when pair-breaking effects due to magnetic fields are negligibly small. We examine two scenarios for realizing emergent magnetic fields.
One is the field induced by vortex textures in the mixed state, and the other is a chiral magnetic field arising from lattice strain \cite{PhysRevB.92.165131,PhysRevB.95.041201,PhysRevB.96.224518}. We establish the above-mentioned result by combining an argument based on the semiclassical equation of motion with Berry curvatures characterizing Weyl fermions, and a microscopic analysis using the quasiclassical theory of the Keldysh Green function. Our finding is relevant to putative Weyl superconductors such as multi-layer systems \cite{PhysRevB.86.054504}, and uranium-based systems, URu$_2$Si$_2$, UPt$_3$, UCoGe, U$_{1-x}$Th$_{x}$Be$_{13}$~\cite{sato-fujimoto,PhysRevLett.99.116402,doi:10.7566/JPSJ.85.033704,PhysRevB.91.140506,URuSi1,goswami,PhysRevB.92.214504,Schemm190,PhysRevLett.108.066403,PhysRevB.66.134504,shimizu,mizushimaPRB18,machidaJPSJ18}. {\it Semiclassical argument for thermal transport with Berry curvature.}--- We first present a semiclassical argument for thermal transport. This approach is useful for a qualitative understanding of chiral anomaly effects. We consider a paradigmatic model of Weyl superconductors which describes a three-dimensional (3D) chiral $p_x+ip_y$ pairing state of spinless fermions, though our basic idea can be generalized to any Weyl superconductor. The superconducting gap function for the homogeneous case is given by $\Delta_{\bm{k}}=\Delta(k_x-ik_y)/k_F$. In this system, low-energy excitations from the point nodes of the superconducting gap at $\bm{k}=(0,0,\pm k_F)$ behave as Weyl fermions. The model Hamiltonian for low-energy Weyl quasiparticles with monopole charge $s=\pm 1$ in the presence of spatial inhomogeneity is given by \begin{eqnarray} \mathcal{H}_{s}(\bm{k},\bm{r})=s e^{\mu}_aV^a_b\tau^b(k_{\mu} - sk_{0\mu}), \label{eq:ham1} \end{eqnarray} where $V^a_b=\mbox{diag}[\frac{\Delta}{k_F},\frac{\Delta}{k_F},v_F]$ with $v_F$ the Fermi velocity, and $\tau^a$ are the Pauli matrices in particle-hole space.
Spatial inhomogeneity is described in terms of the vielbein $e^{\mu}_a$. We use Greek indices $\mu=1,2,3$ as space indices for the laboratory frame, and Roman indices $a=\bar{1},\bar{2},\bar{3}$ for a local orthogonal frame. As mentioned above, the spatial inhomogeneity gives rise to an emergent magnetic field $\bm{\mathcal{B}}=\bm{T}^{\mu}k_{\mu}$ with the torsion field, $ (\bm{T}^{\mu})^{\nu}=\frac{1}{2}\epsilon^{\nu\lambda\rho}T^a_{\lambda\rho}e_a^{\mu} $, $ T^a_{\mu\nu}=\partial_{\mu} e^a_{\nu}-\partial_{\nu} e^a_{\mu} $, where $e^a_{\mu}$ is the inverse of $e^{\mu}_a$ \cite{PhysRevB.92.165131,T.Hughes2011,T.Hughes2013,Parrikar2014,SM}. It is noted that $\bm{\mathcal{B}}$ plays the role of a chiral magnetic field when $\bm{T}^{z}$ is nonzero, since the sign of $k_z$ at the Weyl points of the model (\ref{eq:ham1}) corresponds to the chirality of the Weyl fermions. There are several ways of realizing nonzero $\bm{\mathcal{B}}$ in superconductors. For instance, a vortex line texture parallel to the $z$-axis, i.e. $\Delta=\Delta_0e^{i\phi}$, generates the emergent magnetic field $\bm{\mathcal{B}}=(0,0,\mathcal{B}_z)$ with $ \mathcal{B}_z=T_{12}^{\mu}k_{\mu}=(k_y\cos \phi -k_x \sin\phi )/r, $ which does not depend on $k_z$; it is therefore not a chiral magnetic field, but imitates a usual magnetic field. Also, lattice strain, such as a twist of the crystal structure with rotation axis parallel to the $z$-direction, gives rise to an emergent chiral magnetic field along the $z$-axis. In the following, we consider the magnetoresistivity of thermal currents for these two cases.
By using the semiclassical equation of motion with Berry curvatures for Weyl quasiparticles~\cite{SM}, and the Boltzmann equation, we obtain the chiral anomaly contribution to the local thermal current $\bm{J}_H(\bm{r})$ up to leading order in $\bm{\mathcal{B}}$, \begin{eqnarray} \bm{J}_H(\bm{r}) &=&\sum_{s=\pm 1}\sum_{\bm{k}}(\bm{v}_{\bm{k}s}\cdot\bm{\Omega}_{\bm{kk}s})^2\varepsilon_{\bm{k}s}^2\left(\frac{\partial f}{\partial \varepsilon_{\bm{k}s}}\right)\tau_{\bm{k}s} \nonumber \\ &&\times\left(\frac{\nabla T}{T}\cdot\bm{\mathcal{B}}\right)\bm{\mathcal{B}}, \label{eq:hc} \end{eqnarray} where $\varepsilon_{\bm{k}s}=\sqrt{v^2(k_z-sk_{0z})^2+\Delta^2(k_x^2+k_y^2)/k_F^2}$, $\bm{v}_{\bm{k}s}=\partial \varepsilon_{\bm{k}s}/\partial\bm{k}$, $\tau_{\bm{k}s}$ is the relaxation time, $f$ is the Fermi distribution function, and $\bm{\Omega}_{\bm{k}\bm{k}s}$ is the Berry curvature generated by the monopole charge at the Weyl point, which characterizes the chiral anomaly contribution. Equation (\ref{eq:hc}) evidences the negative thermal magnetoresistivity (NTMR) due to the emergent magnetic field $\bm{\mathcal{B}}$. It is noted that the chiral anomaly contribution to the thermal conductivity, $\kappa_A$, extracted from Eq.~(\ref{eq:hc}) exhibits a singular temperature dependence. In the case of a constant relaxation time, we have \begin{eqnarray} \kappa_A \propto 1/T, \label{eq:tc2} \end{eqnarray} for low $T$. If one takes into account the temperature dependence of $\tau_{\bm{k}s}$ more precisely, the low-temperature behavior becomes even more singular. This behavior is due to the singularity of the Berry curvature in the vicinity of the Weyl points, i.e. $\Omega_{\bm{kk}s} \sim 1/|\delta\bm{k}|^2$ for the deviation from the Weyl points $|\delta\bm{k}|\rightarrow 0$.
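The $1/T$ scaling of Eq.~(\ref{eq:tc2}) can be seen by power counting near a node: with $|\delta\bm{k}|\propto\varepsilon$, one has $(\bm{v}\cdot\bm{\Omega})^2\propto\varepsilon^{-4}$ while the density of states contributes $\varepsilon^{2}$, so for constant $\tau$ the energy dependence cancels and $\kappa_A\propto T^{-1}\int d\varepsilon\,(-\partial f/\partial\varepsilon)$. The following numerical check of this cancellation is purely illustrative (arbitrary units, schematic prefactors, not the full angular integral of Eq.~(\ref{eq:hc})):

```python
import numpy as np

def kappa_A(T, n=20001, cutoff=50.0):
    """Schematic chiral-anomaly conductivity after power counting:
    kappa_A(T) ~ (1/T) * int d_eps [ eps^2 * eps^-4 * eps^2 * (-df/deps) ],
    where eps^2 is the nodal density of states and eps^-4 the (v.Omega)^2
    factor; the eps-dependence cancels, leaving a pure 1/T law."""
    eps = np.linspace(1e-6 * T, cutoff * T, n)
    mdf = np.exp(eps / T) / (T * (np.exp(eps / T) + 1) ** 2)  # -df/deps
    integrand = eps**2 * eps**-4 * eps**2 * mdf               # == mdf
    # trapezoidal quadrature (avoids version-dependent np.trapz/trapezoid)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(eps))
    return integral / T

Ts = [0.05, 0.1, 0.2, 0.4]
print([round(kappa_A(T) * T, 3) for T in Ts])  # constant -> kappa_A ∝ 1/T
```

The product $T\,\kappa_A(T)$ comes out independent of $T$ (equal to $f(0)-f(\infty)=1/2$ in these units), reproducing Eq.~(\ref{eq:tc2}) for a constant relaxation time.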
The characteristic $T$-dependence of (\ref{eq:tc2}) can be utilized to discriminate the chiral anomaly contribution from the usual contribution to the thermal conductivity of nodal excitations, $\kappa_0 \propto T$ for $T\rightarrow 0$. However, we must be careful about the applicability of Eq.~(\ref{eq:hc}). The divergent behavior of (\ref{eq:tc2}) implies that it cannot be used in the low-temperature limit, for which the adiabatic approximation postulated in the derivation of the Berry curvature formula breaks down. Thus, Eq.~(\ref{eq:tc2}) is applicable only in the intermediate temperature region. To investigate thermal transport over the whole temperature range, we exploit alternative approaches based on the Keldysh formalism in the following. {\it Keldysh-Eilenberger approach for cases with vortex textures.}--- To confirm the prediction obtained above, and to go beyond the adiabatic approximation, which breaks down in the low-temperature region, we exploit the Keldysh formalism of the quasiclassical Eilenberger equation. We consider the 3D chiral $p_x+ip_y$ pairing model again, and first examine the case of an emergent magnetic field generated by vortex textures of the superconducting order parameter. The case of strain-induced chiral magnetic fields will be considered later. A merit of the scenario of a vortex-induced emergent magnetic field is that it can be easily realized in any type-II superconductor. Transport properties of systems with inhomogeneous textures are described in terms of the quasiclassical Green function $\check{g}(\hat{\boldsymbol{k}},\boldsymbol{r},\epsilon)$, with $\hat{\bm{k}}$ a unit vector parallel to the Fermi momentum \cite{SM,rainer,graf,eschrig}.
Using the Keldysh Green function $\hat{g}^K$, we can express the thermal current as \begin{equation} \boldsymbol{J}_H(\boldsymbol{r})=N_F\int_{-\infty}^{\infty}{\frac{d\epsilon}{4\pi i}} \int{d\hat{\boldsymbol{k}}\,\epsilon {\bm v}_{F}\frac{1}{2}\mathrm{Tr}\left[\hat{g}^K(\hat{\boldsymbol{k}},\boldsymbol{r},\epsilon)\right]}, \label{eq:hc2} \end{equation} where $N_F$ is the density of states at the Fermi level, ${\bm v}_{\rm F}$ is the Fermi velocity, and $\int d\hat{\bm k}\cdots$ denotes the normalized Fermi surface average. In this paper, we consider a spherical Fermi surface with ${\bm v}_{\rm F}=v_{\rm F}\hat{\bm k}$. Effects of emergent magnetic fields arising from spatial inhomogeneity can be incorporated via a spatial gradient expansion of the Eilenberger equation, which gives higher-order quantum corrections to the quasiclassical approximation. Up to first order in $1/(k_F\xi)$, with $\xi$ the coherence length, the Eilenberger equation with quantum corrections is given by~\cite{SM} \begin{align} [(\epsilon+e\bm{v}_F\cdot\bm{A})\tau_3-\check{h},\check{g}]+i\boldsymbol{v}_F\cdot\boldsymbol{\nabla}_{\boldsymbol{r}}\check{g}=\frac{i}{2}\{\check{h}\cdot\check{g}\}-\frac{i}{2}\{\check{g}\cdot\check{h}\}, \label{eq:ee1} \end{align} where $ \{\check{a}\cdot\check{b}\}=\boldsymbol{\nabla}_{\boldsymbol{r}}\check{a}\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\check{b}-\boldsymbol{\nabla}_{\boldsymbol{k}}\check{a}\cdot\boldsymbol{\nabla}_{\boldsymbol{r}}\check{b} $, $\bm{A}$ is the vector potential of an external magnetic field, and $\check{h}=\check{\Delta}+\check{\sigma}_{\rm imp}$, with $\check{\Delta}$ the gap function and $\check{\sigma}_{\rm imp}$ the self-energy due to impurity scattering, which determines the relaxation time $\tau$ \cite{SM}. The nonzero right-hand side of (\ref{eq:ee1}) describes the leading quantum corrections. For simplicity, we assume that $\check{\sigma}_{\rm imp}$ does not depend on temperature $T$.
In general, $\check{\sigma}_{\rm imp}$ should depend on $T$, because of the energy dependence of the density of states of Weyl quasiparticles and the $T$-dependence of the gap function. However, this simplification is useful for investigating the characteristic $T$-dependence of the thermal conductivity arising from the chiral anomaly, which is predicted by the semiclassical analysis (\ref{eq:tc2}). Effects of an emergent magnetic field caused by vortex textures are included in the right-hand side of Eq.~(\ref{eq:ee1}). We deal with this term perturbatively. We expand the Green function up to second order in $1/(k_F\xi)$: $\check{g}=\check{g}_0+\check{g}_1+ \check{g}_2$. The non-perturbative part $\check{g}_0$ can be easily calculated from the standard Eilenberger equation without quantum corrections, supplemented with the normalization condition $\check{g}_0^2=-\pi^2$~\cite{richardPRB16}. The correction terms $\check{g}_1$ and $\check{g}_2$ are obtained from an inhomogeneous Eilenberger equation with leading quantum corrections, \begin{align} [(\epsilon+e\bm{v}_F\cdot\bm{A})\tau_3-\check{h},\check{g}_{n}]&+ i\boldsymbol{v}_F\cdot\boldsymbol{\nabla}_{\boldsymbol{r}}\check{g}_n = \nonumber \\ \frac{i}{2}\{\check{h}\cdot\check{g}_{n-1}\}& - \frac{i}{2}\{\check{g}_{n-1}\cdot\check{h}\}. \label{eq:ee2} \end{align} The thermal conductivity $\kappa = J^z_H/(-\partial _zT)$ is obtained by substituting the solution $\check{g}=\check{g}_0+\check{g}_1+\check{g}_2+\cdots$ into Eq.~\eqref{eq:hc2}. The temperature gradient along the vortex line is incorporated through the boundary condition on the Keldysh component at $z=\pm\infty$, $g^{K}_n(\pm\infty)=-2\pi(g^{R}_n-g^{A}_n)\tanh[\epsilon/2T(\pm \infty)]$~\cite{SM}, where $g^{R,A}_n$ are calculated in the absence of the temperature gradient. We first consider the case of a single vortex with vorticity $m$, i.e. $\Delta(\bm{r})=\Delta_0(T)[\tanh(r/\xi)]^{|m|}e^{im \phi}$ with $r=\sqrt{x^2+y^2}$.
In this case, we can neglect the vector potential $\bm{A}$ in Eq.~(\ref{eq:ee2}). Solving Eq.~(\ref{eq:ee2}) numerically for $\check{g}_1$ and $\check{g}_2$, we find that the contribution from $\check{g}_1$ to the thermal current is negligible. The leading quantum correction associated with the vortex-induced emergent magnetic field arises from $\check{g}_2$. The calculated results for this quantum correction to the thermal conductivity, $\kappa_2$, for vorticity $m=1,2,3$ are shown in FIG.~\ref{kappa-1}(a), where $\kappa _2$ is spatially averaged over the core region within $r\le 5\xi$. In this calculation, the BCS-type temperature dependence of the gap function is assumed, the energy unit is scaled by $2\pi T_c$, and the parameters are set as $v_F=20$, $k_F=1$, $\xi=20$, $\Delta_0(0)=1.765T_c$, and $1/\tau=0.002$. It is noted that $\kappa_2$ increases as the vorticity increases. Since the emergent magnetic field is proportional to the vorticity, this behavior implies negative magnetoresistivity of thermal currents. Furthermore, the $T$-dependence of $\kappa_2$ remarkably exhibits an upturn in the intermediate temperature region, which is indeed in agreement with the prediction of the semiclassical analysis, Eq.~(\ref{eq:tc2}). However, in contrast to the semiclassical result, which breaks down in the low-temperature limit, the $T$-dependence turns to decreasing behavior in the low-temperature region, consistent with the third law of thermodynamics. Thus, it is concluded that the negative magnetoresistivity of thermal currents is a signature of the chiral anomaly of Weyl quasiparticles. We comment here on the $T$-dependence of the normal self-energy neglected in our calculations. If one takes into account the $T$-dependence due to the energy dependence of the density of states, the increase of the thermal conductivity is further magnified in the intermediate $T$-region, because of the longer relaxation time.
Thus, the detection of the chiral anomaly effect becomes more feasible. \begin{figure} \begin{center} \includegraphics[width=70mm]{NTM-fig1.eps} \end{center} \caption{(a) $\kappa_2$ versus $T$ in the case of a single vortex with vorticity $m=1,2,3$. (b) $\kappa_2$ versus $T$ in the case of a vortex lattice for $H=0.08,~0.09,~0.10,~0.11,~0.12$ from bottom to top. } \label{kappa-1} \end{figure} We next perform the calculation for the case of a vortex lattice. For simplicity, a square lattice structure of vortices is assumed \cite{SM,ichiokaPRB02}. The calculated results for $\kappa_2$, spatially averaged over the unit cell, are shown in FIG.~\ref{kappa-1}(b). The qualitative features are similar to those for the single-vortex case. The thermal conductivity increases as a function of the magnetic field, and the $T$-dependence qualitatively coincides with the Berry phase formula (\ref{eq:tc2}) in the intermediate $T$-region, signifying the chiral anomaly effect. We also calculated the spatial distribution of thermal currents, and found that they are mainly carried by bulk quasiparticles, rather than by bound states in vortex cores, confirming that the increase of $\kappa_2$ is due to the chiral anomaly of Weyl quasiparticles. It is noted that the NTMR in this scenario is free from the issue of current jetting, which disturbs the detection of negative magnetoresistivity as a signature of chiral anomaly in the case of Weyl semimetals \cite{armitage}. Current jetting is caused by an inhomogeneous current distribution due to strong Landau quantization. Since the wave function in the vortex state is a Bloch function, current jetting is absent in this case.
We stress that the characteristic temperature dependence found in FIG.~\ref{kappa-1} cannot be realized in any non-Weyl (non-Dirac) superconductor, as revealed by numerous previous studies on thermal transport in the vortex state \cite{maki1,maki2,maki3,vek1,vekhter,franz-high,vafek,tesa,takigawa,matsuda,adachi,voron,golubov}. Thus, the NTMR with this characteristic temperature dependence is a unique feature of Weyl (Dirac) superconductors. Although the above results establish the NTMR as a signature of the chiral anomaly, the chiral anomaly contribution shown in FIG.~\ref{kappa-1}(b), which corresponds to the case of high magnetic fields, is about 0.1 $\%$ of the total contribution. The calculation for low fields is not attainable because of numerical costs. It is known that for small magnetic fields close to the lower critical field and for $\bm{J}_H \parallel \bm{H}$, the field dependence of the thermal conductivity due to usual pair-breaking is quite small. Thus, in this case, the experimental detection of the chiral anomaly contribution is still feasible by measuring the field-dependent part of the thermal conductivity. A more promising approach for the detection of the chiral anomaly effect is to utilize an emergent chiral magnetic field induced by lattice strain. We consider this scenario in the following. \begin{figure} \begin{center} \includegraphics[width=70mm]{NTM-fig2.eps} \end{center} \caption{(a) $\kappa$ versus $T$ for $e\mathcal{B}_C=0.0,~0.00125,~0.0025,~0.00375,~0.005$ from bottom to top. For the temperature region $T< T_L$, in which the quasiclassical approximation breaks down, the results are shown as dotted lines. Inset: $\kappa_2$ versus $T$ for $e\mathcal{B}_C= 0.00125,~0.0025,~0.00375,~0.005$ from bottom to top. (b) $\kappa$ versus $T$ for $e\mathcal{B}_C=0.0,~0.01,~0.015,~0.02$ from bottom to top. The results for $T< T_L$ are shown as dotted lines.
} \label{kappa-2} \end{figure} {\it Case of strain-induced chiral magnetic fields.}--- We now explore the case in which lattice strain induces a chiral magnetic field $\bm{\mathcal{B}}_{\rm C}$ in the 3D chiral $p_x+ip_y$-wave spinless superconductor. To simplify the analysis, we introduce the strain-induced chiral vector potential by hand in the model, though the realization of a strain-induced magnetic field requires multi-orbital degrees of freedom \cite{PhysRevB.92.165131,PhysRevX.6.041021}. Since a chiral magnetic field causes neither the Meissner effect nor the vortex state, the pair-breaking effect due to the chiral magnetic field is remarkably weak \cite{SM}. In fact, for the parameters used in our calculations, the superconducting state survives up to a chiral magnetic field $e\mathcal{B}_C \lesssim 0.03$, and thus we can expect enormous NTMR due to a large value of $\mathcal{B}_C$. The chiral magnetic field in superconductors gives rise to a pseudo-Lorentz force, which is obtained from the right-hand side of Eq.~(\ref{eq:ee1}) \cite{matsushita}. For simplicity, we assume a uniform chiral magnetic field parallel to the $z$-axis, $\bm{\mathcal{B}}_{\rm C}=(0,0,\mathcal{B}_C)$. Then, we end up with the Eilenberger equation, \begin{eqnarray} [\epsilon \tau_3 - \check{h}, \check{g}]+ i\bm{v}_{\rm F}\cdot{\nabla}_{\bm{r}}\check{g} +ie\bm{v}_{\rm F}\times\bm{\mathcal{B}}_{\rm C}\cdot\frac{\partial}{\partial\bm{k}_{\parallel}}\check{g} =0. \label{eq:ee-cm} \end{eqnarray} The last term of (\ref{eq:ee-cm}) is the pseudo-Lorentz force term. Since this equation is homogeneous, we need an additional normalization condition to solve for $\check{g}$, i.e., $\check{g}^2=-\pi^2$. To derive an approximate analytic solution of (\ref{eq:ee-cm}), we expand $\check{g}$ in terms of $1/(\xi k_F)$ and $\bm{\mathcal{B}}_{\rm C}$ up to second order.
An explicit expression for the quantum corrections of $\check{g}$ due to $\bm{\mathcal{B}}_{\rm C}$ is given in the Supplemental Material \cite{SM}. Although the superconducting state is robust against large values of $\mathcal{B}_C$, one cannot neglect the Landau quantization of quasiparticles for a sufficiently strong chiral magnetic field, which cannot be treated within the quasiclassical approximation. Thus, the temperature range in which our method is valid is limited to $T>T_L\equiv \sqrt{2e\mathcal{B}_C}\Delta/k_F$, for which the Landau levels are smeared by the temperature-broadening effect. We calculate the thermal current from Eq.(\ref{eq:hc2}) up to linear order in $\nabla T$ \cite{graf,voron}. Numerical results for the thermal conductivity $\kappa=\kappa_0+\kappa_2$, with $\kappa_0$ the unperturbed zero-field part and $\kappa_2$ the field-dependent quantum correction, are shown in FIG.~\ref{kappa-2}. In this calculation, we use the BCS-type $T$-dependence of the gap function and the same parameters as in the case of vortex-induced magnetic fields. As seen in FIG.~\ref{kappa-2}, the thermal conductivity increases as $\mathcal{B}_C$ increases, signifying the NTMR. Furthermore, for $e\mathcal{B}_C \gtrsim 0.01$, the quantum correction dominates, and hence, the total thermal conductivity exhibits a remarkable increase as the temperature is lowered in the intermediate temperature region, which is a characteristic feature of chiral anomaly contributions. The positions of the peaks of $\kappa$ for the different values of $\mathcal{B}_C$ shown in FIG.~\ref{kappa-2}(b) are roughly $T_c \times \Delta /E_F$, and thus independent of $\mathcal{B}_C$. It is noted that the prominent increase of the thermal conductivity appears even for temperatures well above $T_L$ for sufficiently large $e\mathcal{B}_C$, implying that this increasing behavior is not an artifact of the quasiclassical approximation. 
For putative Weyl superconductors of uranium-based systems with lattice constants $4\sim 9$ \AA, $\mathcal{B}_C \approx 2 \sim 5$ Tesla (T) can be realized by torsional distortion around the $c$-axis by $2\pi$ per $\sim 1$ $\mu$m. On the other hand, for a lattice constant $\sim 4$ \AA, $e\mathcal{B}_C=0.00125$ in FIG.~\ref{kappa-2} corresponds to $\mathcal{B}_C\sim 5$ T. In such cases, the magnitude of the chiral anomaly part of the thermal conductivity is more than 10\% of the total thermal conductivity, and thus, it is feasible to detect the characteristic $T$-dependence of $\kappa_2$ experimentally by extracting the $\mathcal{B}_C$-dependent part of the thermal conductivity. We also note that the current-jetting issue \cite{armitage} can be avoided in this case, because the results in FIG.~\ref{kappa-2} show that the characteristic signature of the chiral anomaly, i.e., the upturn of the thermal conductivity in the intermediate temperature region, appears even for sufficiently small chiral magnetic fields which do not cause an inhomogeneous current distribution due to strong Landau quantization. {\it Conclusion.}--- We have investigated thermal transport in Weyl superconductors with emergent (chiral) magnetic fields. It is established that the NTMR is a signature of the chiral anomaly of Weyl quasiparticles, and that its experimental detection is feasible. This work was supported by Grants-in-Aid for Scientific Research from MEXT of Japan [Grants No.~JP17K05517, No.~25220711, and No.~JP16K05448] and KAKENHI on Innovative Areas ``Topological Materials Science'' [No.~JP15H05852, No.~JP15H05855] and ``J-Physics'' [No.~JP18H04318].
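The field scale quoted above can be checked with a short script. This is our own sketch, not part of the paper, and it assumes that the dimensionless $e\mathcal{B}_C$ is expressed in units of $\hbar/a^2$ for lattice constant $a$:

```python
# Sketch (assumption: e*B_C in the figures is dimensionless in units of
# hbar/a^2 for lattice constant a). Converts a laboratory field in Tesla
# to that dimensionless combination.
e_charge = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34       # reduced Planck constant, J*s

def dimensionless_field(B_tesla, a_metres):
    """Dimensionless e*B*a^2/hbar for field B and lattice constant a."""
    return e_charge * B_tesla * a_metres**2 / hbar

# B = 5 T with a = 4 Angstrom lands close to the quoted e*B_C = 0.00125
print(dimensionless_field(5.0, 4e-10))
```

With these constants the result is about $1.2\times 10^{-3}$, consistent with the correspondence stated in the text.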
\section{Introduction}\label{intro} Let $x \in [0,1] \setminus \mathbb{Q}$. It is well known that there exists a sequence $\{i_n\}_{n \in \mathbb{N}}$, known as the \emph{continued fraction expansion} of $x$, that satisfies $$x=\cfrac{1}{i_1+\cfrac{1}{i_2+\cfrac{1}{i_3+\ldots}}}.$$ Continued fractions are closely related to the \emph{Gauss map}, which is defined as $T: [0,1] \setminus \mathbb{Q} \to [0,1] \setminus \mathbb{Q}$, $$T(x)=\frac{1}{x} \mod 1.$$ Let $\Sigma=\mathbb{N}^{\mathbb{N}}$, let $\Sigma^{\ast}$ denote the set of all words of finite length with entries in $\mathbb{N}$ and let $\sigma: \Sigma \to \Sigma$ be the shift map given by $\sigma((i_n)_{n \in \mathbb{N}})=(i_{n+1})_{n \in \mathbb{N}}$. Often we will let $\i$ denote a point $\i=(i_n)_{n \in \mathbb{N}} \in \Sigma$. $T$ is `coded' by $(\Sigma, \sigma)$, meaning that $T \circ \Pi= \Pi \circ \sigma$ where the `coding map' $\Pi: \Sigma \to [0,1] \setminus \mathbb{Q}$ is given by $$\Pi(\i)= \lim_{n \to \infty} T_{i_1}^{-1} \circ \ldots \circ T_{i_n}^{-1}([0,1])= \cfrac{1}{i_1+\cfrac{1}{i_2+\cfrac{1}{i_3+\ldots}}}.$$ It is well known that $T$ has an absolutely continuous invariant probability measure $\mu_T$ given by $$\mu_T(A)= \frac{1}{\log 2} \int_A \frac{1}{1+x} \textup{d}x.$$ By using the coding map $\Pi$, we can construct many more $T$-invariant measures by `pushing forward' $\sigma$-invariant measures from $\Sigma$. In particular, if $m$ is a $\sigma$-invariant measure then $\mu=m \circ \Pi^{-1}$ is a $T$-invariant measure. In this paper we will be focused on pushforward \emph{Bernoulli measures}. Given a countable probability vector $\p=(p_n)_{n \in \mathbb{N}}$, let $m_{\p}$ denote the Bernoulli measure on $\Sigma$ which satisfies $m_{\p}([i_1\ldots i_n])=p_{i_1} \ldots p_{i_n}$, where $[ i_1 \ldots i_n]=\{\j \in \Sigma: j_1=i_1, \ldots ,j_n=i_n\}$ denotes the cylinder set for the word $i_1 \ldots i_n$. We define $\mu_{\p}=m_{\p} \circ \Pi^{-1}$ and we will also call this a Bernoulli measure. 
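As an illustration (ours, not part of the paper), both the digit extraction via the Gauss map and the $T$-invariance of $\mu_T$ can be checked numerically:

```python
import math
from fractions import Fraction

def cf_digits(x, n):
    """First n continued-fraction digits of x, read off by iterating the
    Gauss map: i_{k+1} = floor(1/x), then x -> 1/x mod 1."""
    digits = []
    for _ in range(n):
        y = 1.0 / x
        digits.append(int(y))
        x = y - int(y)
    return digits

def cf_value(digits):
    """Evaluate the finite continued fraction [0; i_1, ..., i_n] exactly."""
    v = Fraction(0)
    for i in reversed(digits):
        v = 1 / (Fraction(i) + v)
    return float(v)

x = (math.sqrt(5) - 1) / 2            # golden-ratio conjugate: digits all 1
print(cf_digits(x, 8))
print(abs(cf_value(cf_digits(x, 12)) - x) < 1e-4)

# T-invariance of mu_T on [0, t]: the preimage T^{-1}([0, t]) meets the
# n-th branch domain in [1/(n+t), 1/n], and the total mass telescopes
# back to mu_T([0, t]) = log(1+t)/log 2.
t = 0.37
lhs = math.log1p(t) / math.log(2)
rhs = sum(math.log1p(1 / n) - math.log1p(1 / (n + t))
          for n in range(1, 200000)) / math.log(2)
print(abs(lhs - rhs) < 1e-4)
```

The truncation depth and the tail cutoff are arbitrary choices; floating-point digit extraction is only reliable for moderately many digits because the Gauss map expands errors.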
We will be interested in the \emph{Hausdorff dimension} of Bernoulli measures, where the Hausdorff dimension of a Borel probability measure $\mu$ is defined as $$\dim \mu= \inf\{\dim A: \mu(A)=1\}$$ where $\dim A$ denotes the Hausdorff dimension of the set $A$. By the work of Walters \cite{walters}, $\mu_T$ is the unique absolutely continuous invariant probability measure for $T$ and realises the supremum \begin{eqnarray} h(\mu_T)- \int \log|T^{\prime}|\textup{d}\mu_T= \sup_{\mu \in \mathcal{M}(T)}\left\{ h(\mu)- \int \log|T^{\prime}|\textup{d}\mu: \int \log|T^{\prime}|\textup{d}\mu< \infty\right\}=0 \label{vp} \end{eqnarray} where $\mathcal{M}(T)$ denotes all $T$-invariant probability measures and $h(\cdot)$ denotes the measure-theoretic \emph{entropy}. As a direct consequence of (\ref{vp}) we deduce that for any $\p$ for which $h(\mu_{\p})< \infty$, \begin{eqnarray} \dim \mu_{\p}= \frac{h(\mu_{\p})}{\chi(\mu_{\p})} < 1 \label{less1} \end{eqnarray} where the formula $\dim (\cdot)=\frac{h(\cdot)}{\chi(\cdot)}$ is known to hold for all finite entropy ergodic measures and $\chi(\mu_{\p})= \int \log|T^{\prime}| \textup{d}\mu_{\p}$ is known as the \emph{Lyapunov exponent} of $\mu_{\p}$. What is not clear from (\ref{less1}) is whether there is a `\emph{dimension gap}' at 1. We say that there is a dimension gap if there exists some $c>0$ for which $$\sup_{\p \in \mathcal{P}} \dim \mu_{\p} \leq 1-c$$ where $\mathcal{P}$ denotes the simplex of all probability vectors. In this paper we will prove the following result. \begin{thm} \label{main} There exists $c>0$ such that $$\sup_{\p \in \mathcal{P}} \dim \mu_{\p} \leq 1-c.$$ \end{thm} Theorem \ref{main} was already proved by Kifer, Peres and Weiss \cite{kpw} who showed that \begin{eqnarray} \sup_{\p \in \mathcal{P}} \dim \mu_{\p} \leq 1-10^{-7}. \label{kpwgap} \end{eqnarray} We briefly sketch their proof. 
Given $\mathbf{w} \in \Sigma^{\ast}$ and $\delta>0$ let $\Gamma_{\mathbf{w}}^{\delta}$ be defined by $$\Gamma_{\mathbf{w}}^{\delta}=\left\{ x \in (0,1): \limsup_{n \to \infty} \left|\frac{1}{n}\sum_{i=0}^{n-1} \one_{\mathbf{w}}(T^ix)-\mu_T(\Pi([\mathbf{w}]))\right|>\delta\right\}$$ which is the set of points whose orbits visit the interval $\Pi([\mathbf{w}])$ with an asymptotic frequency which differs by $\delta$ from the one prescribed by $\mu_T$. By using the ergodic theorem it is not difficult to show that for some $\delta_0>0$, $\dim \mu_{\p} \leq \max\{\dim \Gamma_{1}^{\delta_0}, \dim \Gamma_{11}^{\delta_0}\}$ for all $\p$. Also define $J_n(x)= \Pi([i_1\ldots i_n])$ if $ x \in \Pi([i_1\ldots i_n])$, that is, $J_n(x)$ is the `level $n$' projected cylinder that $x$ belongs to, let $|J_n(x)|$ denote the diameter of $J_n(x)$ and consider the set \begin{eqnarray} \label{e1} \mathcal{E}_{\lambda}= \bigcap_{j=1}^{\infty} \bigcup_{n=j}^{\infty} \left\{x \in (0,1): |J_n(x)| \leq \exp(-\lambda n)\right\} \end{eqnarray} which is the set of points whose orbits `frequently' visit a `small' neighbourhood of 0. Kifer, Peres and Weiss showed that for some $\lambda_0>0$, $\dim \mathcal{E}_{\lambda_0}<1$, which allowed them to reduce the problem to finding an upper bound for the dimension of the set of points in $\Gamma_{1}^{\delta_0}$ and $\Gamma_{11}^{\delta_0}$ which \emph{don't} belong to $\mathcal{E}_{\lambda_0}$. They then showed that for any $\delta>0$, $$\sup_{\w \in \Sigma^{\ast}} \dim (\Gamma_{\w}^{\delta}\setminus \mathcal{E}_{\lambda_0}) < 1$$ which completed the proof. Another proof of Theorem \ref{main} was given by the author and Baker in \cite{bj}, where it was shown that there exists a Bernoulli measure $\mu_{\q}$ such that $$\dim \mu_{\q}= \sup_{\p \in \mathcal{P}} \dim \mu_{\p}.$$ Notice that by (\ref{less1}) this immediately implies the existence of a dimension gap; however, it gives no quantitative information about the size of the gap. 
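For a concrete feel for the dimension formula $\dim \mu_{\p}=h(\mu_{\p})/\chi(\mu_{\p})$, one can estimate it numerically for a finitely supported $\p$, using $\log|T^{\prime}(x)|=2\log(1/x)$ and sampling the Bernoulli measure. The following Python sketch is our own illustration; the sample sizes are arbitrary choices:

```python
import math
import random

def cf_value(digits):
    """Evaluate [0; i_1, ..., i_n] in floating point."""
    v = 0.0
    for i in reversed(digits):
        v = 1.0 / (i + v)
    return v

def dim_bernoulli(p, n_digits=40, n_samples=5000, seed=0):
    """Monte-Carlo estimate of dim mu_p = h(mu_p) / chi(mu_p) for a
    probability vector p = (p_1, ..., p_k) supported on the digits 1..k.
    Uses chi(mu_p) = E[2 log(1/Pi(i))] with digits i.i.d. according to p."""
    rng = random.Random(seed)
    digits = range(1, len(p) + 1)
    h = -sum(q * math.log(q) for q in p if q > 0)   # entropy of mu_p
    chi = 0.0
    for _ in range(n_samples):
        seq = rng.choices(digits, weights=p, k=n_digits)
        chi += 2.0 * math.log(1.0 / cf_value(seq))
    chi /= n_samples
    return h / chi

# Equal weights on the digits {1, 2}: the estimate is strictly below 1
print(dim_bernoulli([0.5, 0.5]))
```

For $\p=(1/2,1/2)$ the estimate comes out around $0.5$, comfortably below $1$, as the dimension gap predicts for this class of measures.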
In this paper we propose a new proof of the dimension gap. All objects which have been discussed so far have some interpretation in the language of thermodynamic formalism; for instance $\mu_{T}$ and $\mu_{\p}$ are \emph{Gibbs measures}, the dimension can typically be written in terms of the \emph{entropy}, and the \emph{variational principle} (\ref{vp}) describes the existence and uniqueness of a measure of maximal dimension. Therefore, it is natural to ask what a \emph{dimension gap} means within the framework of thermodynamic formalism. As a consequence of the new proof given in this paper we demonstrate that a dimension gap corresponds to the existence of uniform lower bounds for the \emph{asymptotic variance} of a class of potentials. This is of particular interest since it appears to be a rare example of an application of lower bounds for the variance. We remark that while our approach does give some information about the size of the gap, since it does not improve on \cite{kpw} we will not make it explicit, in order to keep our arguments concise. Throughout the paper we will assume that if $h(\mu_{\p})=-\sum_{n \in \mathbb{N}} p_n \log p_n < \infty$ then the entries $(p_n)_{n \in \mathbb{N}}$ of the probability vector $\p$ are decreasing and satisfy $p_n =O(\frac{1}{n^2})$ (meaning that there exists a constant $K>0$ such that $p_n \leq \frac{K}{n^2}$ for all $n$). To see that we can make the first assumption, suppose that for some $k \in \mathbb{N}$, $p_{k} < p_{k+1}$. Define $\p^{\prime}$ to be the probability vector given by \begin{equation*} \begin{array}{ccc} p_n^{\prime}=p_n & \textnormal{if} & n \notin \{k, k+1\} \\ p_n^{\prime}= \frac{p_k+p_{k+1}}{2} &\textnormal{if} & n \in \{k, k+1\}. \end{array} \end{equation*} Then since $h(\mu_{\p^{\prime}})> h(\mu_{\p})$ and $\chi(\mu_{\p^{\prime}})< \chi(\mu_{\p})$ (see for instance \cite[Lemma 3.5]{bj}), it follows that $\dim \mu_{\p^{\prime}}> \dim \mu_{\p}$. 
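The entropy half of this monotonicity claim is just strict concavity of $x \mapsto -x\log x$: averaging two unequal weights strictly increases $h$. A quick numerical sanity check (our own, with an arbitrary example vector):

```python
import math

def H(p):
    """Entropy -sum p_n log p_n of a probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

p  = [0.2, 0.5, 0.3]      # not decreasing: p_1 < p_2
pp = [0.35, 0.35, 0.3]    # replace the offending pair by its average
print(H(pp) > H(p))       # averaging the unequal pair increases entropy
```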
We can make the second assumption since given any probability vector $\p$ and any $\epsilon>0$, we can choose some probability vector $\q$ with the property that $q_n=0$ for all $n$ sufficiently large whose dimension `approximates' the dimension of $\mu_{\p}$, that is, $|\dim \mu_{\q}-\dim \mu_{\p}|< \epsilon$ (see for instance \cite[Proposition 3.6]{bj}). Since $\mu_{\q}$ is finitely supported, trivially $q_n=O(\frac{1}{n^2})$. Therefore it is sufficient to consider probability vectors that satisfy both assumptions on their weights. Throughout this paper we denote \begin{equation} \psi= \frac{1}{|T^{\prime}(z_1)|^{\frac{1}{4}}} \in (0,1) \label{psi}. \end{equation} Morally there are similarities with \cite{kpw} in the way in which the new proposed proof will be organised. To be precise, while Kifer, Peres and Weiss showed that it was enough to consider the dimension of the set of points in $\Gamma_{1}^{\delta_0}$ and $\Gamma_{11}^{\delta_0}$ which did not belong to $\mathcal{E}_{\lambda_0}$, we'll show that it is actually sufficient to study the dimension of Bernoulli measures whose probability vectors satisfy the following hypothesis for some constant $0<\epsilon< \psi <1$. \begin{hyp} The probability vector $\p$ satisfies $-\sum p_n \log p_n < \infty$ and additionally either \begin{itemize} \item[(a)] $p_1, p_2 > \epsilon$ or \item[(b)] $p_1> \psi$. \end{itemize} \label{hyp1} \end{hyp} In particular, we'll show that there exists $c>0$ and $0<\epsilon< \psi <1$ such that whenever $\p$ does not satisfy Hypothesis \ref{hyp1} for this choice of $\epsilon$ then $\dim \mu_{\p}< 1-c$. Essentially this is down to the fact that if Hypothesis \ref{hyp1} is not satisfied, $\mu_{\p}$ must assign a lot of mass to a small neighbourhood of 0 (since the entries $p_n$ are decreasing) which allows us to bound the dimension of the measure directly from the fact that the Lyapunov exponent is forced to be large. 
Consequently this allows us to restrict our attention to $\p$ which satisfy Hypothesis \ref{hyp1}. Fixing such $\p$, by using tools from thermodynamic formalism we will show that we can relate $\dim \mu_{\p}$ to the derivative of a particular function $\beta_{\p}(t)$ at $t=1$. By using the properties of $\beta_{\p}$ we will show that the problem reduces to obtaining a lower bound on $\beta_{\p}^{\prime\prime}(t)$ which holds uniformly for all $t$ belonging to a compact interval and all $\p$ which satisfy Hypothesis \ref{hyp1}. In turn, this reduces to studying lower bounds on the asymptotic variance of a particular class of potentials, which comprises the main body of work in this paper. The paper is organised as follows. In section 2 we provide some preliminaries, including the necessary tools from thermodynamic formalism and some useful properties of the Gauss map. In section 3 we will show that there exist some constants $c, \epsilon_0>0$ such that if $h(\mu_{\p})< \infty$ and $\p$ does not satisfy Hypothesis \ref{hyp1} for $\epsilon=\epsilon_0$ or if $h(\mu_{\p})= \infty$ then $\dim \mu_{\p}< 1-c$. In particular, this will allow us to assume that Hypothesis \ref{hyp1} holds for $\epsilon=\epsilon_0$ for the remainder of the paper. In section 4 we obtain a bound on the dimension of measures which satisfy Hypothesis \ref{hyp1} (for $\epsilon=\epsilon_0$). In section 5 we tie the last two sections together to provide a proof of Theorem \ref{main}. Finally in section 6 we discuss a generalisation of Theorem \ref{main}. \section{Preliminaries} \subsection{Symbolic coding.} Let $\Sigma$, $\Sigma^{\ast}$, $\sigma$, $\Pi$ be defined as before. For $\i \in \Sigma^{\ast}$ let $|\i|$ denote the length of the word $\i$. For $\i, \j \in \Sigma$ let $\i \wedge \j \in \Sigma \cup \Sigma^{\ast}$ denote the longest initial block common to both $\i$ and $\j$. 
We equip $\Sigma$ with the metric $d$ given by $d(\i, \j)=\exp(-|\i \wedge \j|)$ if $|\i \wedge \j|< \infty$ and $d(\i, \j)=0$ otherwise. Given $\i=(i_n)_{n \in \mathbb{N}} \in \Sigma$, we let $\i|_n=i_1 \ldots i_n$ denote the finite word obtained by truncating $\i$ after $n$ digits. Given $i_1 \ldots i_n \in \Sigma^{\ast}$ let $(i_1 \ldots i_n)^{\infty}$ denote the unique periodic point $\i \in \Sigma$ of period $n$ for which $\i|_n=i_1 \ldots i_n$. Given a finite word $i_1 \ldots i_n$, denote $\I_{i_1\ldots i_n}= \Pi([i_1 \ldots i_n])$. Note that since $\I_n= [\frac{1}{n+1}, \frac{1}{n})$, $|\I_n|= \frac{1}{n(n+1)}=O(\frac{1}{n^2})$. \subsection{Function spaces on $[0,1]$.} Let $C([0,1])$ denote all continuous functions $f:[0,1] \to \mathbb{R}$. Let $$[f]_1= \sup_{x \neq y} \frac{|f(x)-f(y)|}{|x-y|}$$ denote the Lipschitz constant of a function $f:[0,1] \to \mathbb{R}$. We say that $f$ is Lipschitz (continuous) if $[f]_1< \infty$. Let $\lip$ denote the space of all bounded Lipschitz continuous functions and equip this with the norm $\norm{\cdot}_{0,1}=[\cdot]_1+\norm{\cdot}_{\infty}$. We say that a potential $f: [0,1] \to \mathbb{R}$ is \emph{locally H\"older} if there exist constants $C>0$ and $0<\alpha<1$ such that for all $n \geq 1$ the variations $\textnormal{var}_n(f)$ decay exponentially: \begin{eqnarray} \textnormal{var}_n(f)= \sup_{i_1\ldots i_n \in \mathbb{N}^n} \left\{ |f(x)-f(y)|: x,y \in \I_{i_1\ldots i_n} \right\} \leq C\alpha^n. \label{variations} \end{eqnarray} Note that $f$ being locally H\"older does not necessarily imply that it is bounded. We define $$\mathcal{H}_{\alpha}=\left\{f:[0,1] \to \mathbb{R} : \textnormal{$f$ is bounded and $\sup_n \frac{\textnormal{var}_n(f)}{\alpha^n}< \infty$}\right\} $$ and denote the space of all bounded locally H\"older functions by $\H= \cup_{0<\alpha<1} \H_{\alpha}$. 
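Since the projected cylinders satisfy $\I_{i_1 \ldots i_n} = T_{i_1}^{-1} \circ \ldots \circ T_{i_n}^{-1}([0,1])$ with inverse branches $T_i^{-1}(y)=1/(i+y)$, their endpoints can be computed exactly. The following sketch (an illustration of ours, not part of the paper) verifies the formula for $|\I_n|$ and the nesting of cylinders:

```python
from fractions import Fraction

def cyl_interval(word):
    """Exact endpoints of the projected cylinder I_{i_1...i_n}, obtained by
    applying the inverse branches T_i^{-1}(y) = 1/(i + y) to [0, 1],
    innermost digit first (each branch is decreasing, so endpoints swap)."""
    a, b = Fraction(0), Fraction(1)
    for i in reversed(word):
        a, b = Fraction(1) / (i + b), Fraction(1) / (i + a)
    return a, b

# Level-1 cylinders: I_n has endpoints 1/(n+1) and 1/n, length 1/(n(n+1))
for n in range(1, 5):
    a, b = cyl_interval([n])
    assert (a, b) == (Fraction(1, n + 1), Fraction(1, n))
    assert b - a == Fraction(1, n * (n + 1))

# Cylinders are nested: I_{12} sits inside I_1
a1, b1 = cyl_interval([1])
a12, b12 = cyl_interval([1, 2])
print(a1 <= a12 and b12 <= b1)
```

(We only track endpoints; whether each endpoint is included is the same half-open convention as in the text.)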
If $f \in \mathcal{H}_{\alpha}$, define the seminorm $[f]_{\alpha}$ to be the smallest constant $C$ that one can take in (\ref{variations}) and we equip $\mathcal{H}_{\alpha}$ with the norm $\norm{\cdot}_{\alpha}= [\cdot]_{\alpha}+ \norm{\cdot}_{\infty}$. We say that a locally H\"older potential $f: [0,1] \to \mathbb{R}$ is \emph{summable} if \begin{eqnarray} \sum_{n \in \mathbb{N}} \exp(\sup f|_{\I_{n}}) < \infty. \label{summable} \end{eqnarray} \subsection{Thermodynamic formalism.} We can define the \emph{topological pressure} of a potential $g$ as follows. \begin{defn}[Topological pressure] Let $g: [0,1] \to \mathbb{R}$ be a locally H\"older potential. Then the pressure of $g$ is given by $$P(g) = \lim_{n \to \infty} \frac{1}{n} \log \left( \sum_{x: T^nx=x } \exp(S_ng(x))\right)$$ where $S_ng(x)$ denotes the Birkhoff sum $S_ng(x)=g(x)+ \ldots+ g(T^{n-1}x)$. \end{defn} In general, the pressure of $g$ can be either finite or infinite, but if $g$ is summable then $P(g)< \infty$. Given a locally H\"older potential $g:[0,1] \to \mathbb{R}$, we say that a measure $\mu_g$ is a \emph{Gibbs measure} for $g$ if there exist constants $C>0$ and $P \in \mathbb{R}$ such that for all $n \in \mathbb{N}$, $\i \in \Sigma^{\ast}$ with $|\i|=n$ and $x \in \I_{\i}$, \begin{eqnarray} C^{-1} \leq \frac{\mu_g(\I_{\i})}{\exp(S_ng(x)-nP)} \leq C. \label{gibbs} \end{eqnarray} Note that we do not require $\mu_g$ to be invariant. By \cite[Corollary 2.10]{murpf} we have the following existence result for $T$-invariant Gibbs measures. \begin{prop}[Existence of Gibbs measures] Let $g:[0,1] \to \mathbb{R}$ be a locally H\"older summable potential. Then there exists a unique $T$-invariant (probability) Gibbs measure $\mu_g$ for $g$. Moreover, the constant $P$ in (\ref{gibbs}) is given by $P=P(g)$. \label{exist} \end{prop} Gibbs measures have a useful characterisation via the Ruelle-Perron-Frobenius theorem, see \cite[Corollary 2.10]{murpf}. 
\begin{prop}[Ruelle-Perron-Frobenius theorem] \label{rpf} Let $g:[0,1] \to \mathbb{R}$ be a locally H\"older potential with $P(g)=0$ and let $\l_g: \mathcal{H} \to \mathcal{H}$ be the transfer operator given by $$\l_g f(x)=\sum_{Ty=x} \exp(g(y))f(y).$$ Then there exists a unique (positive) eigenfunction $h$ satisfying $\l_gh=h$ and a unique eigenmeasure $\tilde{\mu}$ satisfying $\l_g^{\ast} \tilde{\mu}= \tilde{\mu}$, where $\l^{\ast}_g$ denotes the dual of $\l_g$. Moreover $\tilde{\mu}$ is a Gibbs measure for $g$. Let $\m_g: \mathcal{H} \to \mathcal{H}$ be the normalised operator defined by $$\m_g f(x)=\frac{1}{h(x)}\sum_{Ty=x} h(y) \exp(g(y))f(y)$$ so that $\m_g \one =\one$. Then $\textup{d}\mu= h \textup{d} \tilde{\mu}$ is the unique $T$-invariant Gibbs measure for $g$ and $\m_g^{\ast} \mu=\mu$. \end{prop} Given $u \in \mathcal{H}$ we call $u -u \circ T$ a \emph{coboundary}. We say that two locally H\"older functions $f, g: [0,1] \to \mathbb{R}$ are \emph{cohomologous} (denoted by $f \sim g$) if there exists some function $u \in \H$ such that $$f=g+u-u\circ T.$$ \subsection{Regularity of $T$.} It is easy to check that for all $x \in [0,1]$, $|(T^2)^{\prime}(x)| \geq \frac{9}{4}$. That means that although $T$ is itself not uniformly expanding, the second iterate $T^2$ is. Since $T^{\prime}(x)= -x^{-2}$ and $T^{\prime\prime}(x)=2x^{-3}$ it follows easily that \begin{eqnarray} \sup_{n \in \mathbb{N}} \sup_{x, y, z \in \I_n} \left|\frac{T^{\prime\prime}(x)}{T^{\prime}(y)T^{\prime}(z)}\right| = 16. \label{renyi1} \end{eqnarray} Consequently, one can use (\ref{renyi1}) to show that $-\log|T^{\prime}|$ is locally H\"older; in particular $-\log|T^{\prime}| \in \H_{\frac{2}{3}}$. Throughout the rest of the paper we fix $\alpha= \frac{2}{3}$. A consequence of the H\"older regularity of $-\log|T^{\prime}|$ is the following useful \emph{bounded distortion} property, see for instance \cite[\S 7.4 Lemma 2]{cfs}. 
\begin{prop}[Bounded distortion property] \label{bd} There exists some $C>0$ such that for all $n \in \mathbb{N}$, $i_1 \ldots i_n \in \Sigma^{\ast}$ and $x, y \in \I_{i_1 \ldots i_n}$, $$C^{-1} \leq \frac{(T^n)^{\prime}(x)}{(T^n)^{\prime}(y)} \leq C.$$ In particular for any $x \in \I_{i_1 \ldots i_n}$, $$C^{-1} \leq \frac{|(T^n)^{\prime}(x)|}{|\I_{i_1 \ldots i_n}|^{-1}} \leq C.$$ \end{prop} \section{Measures that do not satisfy Hypothesis \ref{hyp1}} \label{tail} In this section we show that there exist some $c, \epsilon>0$ such that if $\p$ does not satisfy Hypothesis \ref{hyp1} for this choice of $\epsilon$, then $\dim \mu_{\p}< 1-c$. Given $\lambda>0$ recall that $\mathcal{E}_{\lambda}$ was defined to be \begin{eqnarray} \label{e} \mathcal{E}_{\lambda}= \bigcap_{j=1}^{\infty} \bigcup_{n=j}^{\infty} \left\{x \in (0,1): |J_n(x)| \leq \exp(-\lambda n)\right\}. \end{eqnarray} For $\frac{1}{2}<s<1$ we denote \begin{eqnarray} \kappa(s)= \log \left( \sup_{x \in (0,1)} \sum_{n \in \mathbb{N}} \frac{1}{|T^{\prime}(T_n^{-1}x)|^s} \right) < \infty. \label{q} \end{eqnarray} By \cite[Theorem 4.1]{kpw}, for any $\lambda>0$ and $\frac{1}{2}<s<1$ \begin{eqnarray} \dim \mathcal{E}_{\lambda} \leq s+\frac{\kappa(s)}{\lambda}. \label{kpw bound} \end{eqnarray} We begin by using (\ref{kpw bound}) to show that any measure with infinite entropy (and therefore infinite Lyapunov exponent) will have dimension at most $\frac{1}{2}$. \begin{lma} Let $\mu_{\p}$ be a Bernoulli measure such that $h(\mu_{\p})= \infty$. Then \[\dim \mu_{\p} \leq \frac{1}{2}.\] \label{infinite thm} \end{lma} \begin{proof} Let $h(\mu_{\p})= \infty$. Then $\chi(\mu_{\p})=\infty$. Thus for $\mu_{\p}$ almost every $x$, $$\liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \log |T^{\prime}(T^k(x))| = \infty.$$ Fix $\lambda>0$. Then for $\mu_{\p}$ almost every $x$ \begin{eqnarray} \frac{1}{n} \sum_{k=0}^{n-1} \log |T^{\prime}(T^k(x))| > 2\lambda \label{lambda} \end{eqnarray} for all $n$ sufficiently large. 
By rearranging (\ref{lambda}) we obtain that for all $x$ that satisfy (\ref{lambda}), there exists a subsequence $n_k$ such that $$|(T^{n_k})^{\prime}(x)|^{-1} < \exp(-2\lambda n_k)$$ for all $k \in \mathbb{N}$. By Proposition \ref{bd} this implies that $$|J_{n_k}(x)| \leq C|(T^{n_k})^{\prime}(x)|^{-1} \leq C\exp(-2\lambda n_k) \leq \exp(-\lambda n_k)$$ along the subsequence $n_k$, for all $k$ sufficiently large. Therefore $x \in \mathcal{E}_{\lambda}$ which implies that $\mu_{\p}(\mathcal{E}_{\lambda})=1$ since we were considering $x$ that belong to a set of full measure. Let $\frac{1}{2}< s<1$. By (\ref{kpw bound}), $\dim \mathcal{E}_{\lambda} \leq s+ \frac{\kappa(s)}{\lambda}$. Since $\mu_{\p}(\mathcal{E}_{\lambda})=1$ for all $\lambda$, it follows that $\dim \mu_{\p} \leq s+ \frac{\kappa(s)}{\lambda}$ where $\kappa(s)$ is given by (\ref{q}). Since $s$ can be chosen arbitrarily close to $\frac{1}{2}$ and $\lambda$ can be chosen to be arbitrarily large, the result follows. \end{proof} We can use similar ideas to consider measures with finite entropy whose associated probability vectors do not satisfy Hypothesis \ref{hyp1}. \begin{lma} Fix $\frac{1}{2}<s_0<1$. Let $\lambda_0>0$ be such that $s_0+\frac{\kappa(s_0)}{\lambda_0}<1$. Then there exists $\epsilon_0>0$ such that if $h(\mu_{\p})< \infty$ and $\p$ does not satisfy Hypothesis \ref{hyp1} for $\epsilon=\epsilon_0$ then $$\dim \mu_{\p} \leq s_0+\frac{\kappa(s_0)}{\lambda_0}.$$ \label{tail lemma} \end{lma} \begin{proof} Fix $\lambda_0$ sufficiently large that $s_0+\frac{\kappa(s_0)}{\lambda_0}<1$. Fix $N$ sufficiently large that $$\frac{1-\psi}{2} \inf_{x \in \I_N} \log |T^{\prime}(x)|> 2\lambda_0,$$ where $\psi$ was defined in (\ref{psi}). Fix $\epsilon_0$ sufficiently small that $\epsilon_0< \frac{1-\psi}{2N}$. Since $\p$ does not satisfy Hypothesis \ref{hyp1} we have $p_1 \leq \psi$ and $\min\{p_1, p_2\} \leq \epsilon_0$; as the $p_n$ are decreasing this gives $p_n \leq \epsilon_0$ for all $n \geq 2$, and hence $\sum_{n=N+2}^{\infty} p_n \geq 1-\psi-N \epsilon_0$. 
Thus since $\epsilon_0< \frac{1-\psi}{2N}$, $$\int \log|T^{\prime}|d\mu_{\p} \geq \frac{1-\psi}{2} \inf_{x \in \I_N} \log |T^{\prime}(x)|> 2\lambda_0.$$ As in the proof of Lemma \ref{infinite thm} this implies that $\mu_{\p}$ almost every $x$ belongs to $\mathcal{E}_{\lambda_0}$ and therefore $\dim \mu_{\p} \leq s_0+\frac{\kappa(s_0)}{\lambda_0}$. \end{proof} \section{Measures that satisfy Hypothesis \ref{hyp1}} \label{top} Throughout this section we fix $\epsilon=\epsilon_0$ given by Lemma \ref{tail lemma} and we fix a probability vector $\p$ that satisfies the following hypothesis. \begin{hyp} \label{hyp} The probability vector $\p$ satisfies that $\dim \mu_{\p}> \frac{3}{4}$ and additionally either \begin{itemize} \item[(a)] $p_1, p_2 \geq \epsilon$ or \item[(b)] $p_1> \psi$. \end{itemize} \end{hyp} If $\p$ satisfies Hypothesis \ref{hyp} we may also say that $\mu_{\p}$ satisfies Hypothesis \ref{hyp}. Note that by Lemma \ref{infinite thm}, $\dim \mu_{\p}> \frac{3}{4}$ implies that $h(\mu_{\p})< \infty$ and so Hypothesis \ref{hyp} is slightly stronger than Hypothesis \ref{hyp1} (in particular, if $\p$ satisfies Hypothesis \ref{hyp1} then either $\dim \mu_{\p} \leq \frac{3}{4}$ or $\p$ satisfies Hypothesis \ref{hyp}). Also, since $h(\mu_{\p})< \infty$ we have $\dim \mu_{\p}= \frac{h(\mu_{\p})}{\chi(\mu_{\p})}$. To make our arguments clearer we also assume that $p_n >0$ for all $n$, although the proof could be easily adapted without this extra assumption. The main result in this section is that we can obtain a uniform upper bound on the dimension of any measure $\mu_{\p}$ whose probability vector satisfies Hypothesis \ref{hyp}. 
\begin{lma} There exists $\eta_1>0$ such that for any $\mu_{\p}$ that satisfies Hypothesis \ref{hyp}, $$\dim \mu_{\p} \leq 1-\eta_1.$$ \label{top bound} \end{lma} The method used in this section is based on an approach which was proposed by Kesseb\"ohmer, Stratmann and Urba\'nski and was outlined in a talk given by Kesseb\"ohmer in \cite{kess}. For a fixed probability vector $\p$ define the Bernoulli potential $f_{\p} : [0,1] \setminus \mathbb{Q} \to (-\infty, 0]$ by $$f_{\p}= \sum_{n \in \mathbb{N}} \log p_n \one_{\I_n}.$$ Notice that $f_{\p}$ is the Gibbs potential for the Bernoulli measure $\mu_{\p}$. We are now ready to introduce the function $\beta_{\p}$. \begin{defn} Fix a probability vector $\p$ that satisfies Hypothesis \ref{hyp}. We can define the function $\beta_{\p}: [0,1] \to [0,1]$ where $\beta_{\p}(t)$ is defined implicitly as the solution to \begin{eqnarray} P(-\beta_{\p}(t)\log |T^{\prime}|+tf_{\p})=0. \label{bp} \end{eqnarray} \end{defn} Note that it is not immediately obvious that $\beta_{\p}$ should be well-defined; this fact will follow from Proposition \ref{bp properties}. \begin{figure} \centering \includegraphics[width=80mm]{beta_preview} \caption{A typical graph of $\beta_{\p}(t)$.} \label{beta} \end{figure} We denote the function that appears inside the pressure in (\ref{bp}) by $g_{\p,t}: [0,1] \setminus \mathbb{Q} \to \mathbb{R}$ \begin{eqnarray} g_{\p,t}= -\beta_{\p}(t)\log |T^{\prime}| + tf_{\p} . \label{gpt} \end{eqnarray} By Proposition \ref{exist} we know that there exists a unique invariant Gibbs measure for $g_{\p,t}$ which we will denote by $\mu_{\p,t}$. The function $\beta_{\p}$ will be the object of our focus throughout this section. In the following proposition we summarise its important properties. \begin{prop} \label{bp properties} The function $\beta_{\p}: [0,1] \to [0,1]$ satisfies the following properties: \begin{enumerate} \item $\beta_{\p}(t)$ is convex and decreasing on $[0,1]$. 
\item $\beta_{\p}(0) = 1$ and $\beta_{\p}(1)=0$.\item $\beta_{\p}(t)$ is analytic for $t \in [0,1]$. Moreover the first derivative of $\beta_{\p}$ (with respect to $t$) is given by \begin{eqnarray} \beta_{\p}^{\prime}(t)= \frac{-\int f_{\p}d\mu_{\p,t}}{\int \log |T^{\prime}|d\mu_{\p,t}} \label{beta1} \end{eqnarray} (so in particular $\dim \mu_{\p}= |\beta_{\p}^{\prime}(1)|$) and the second derivative is given by \begin{eqnarray} \beta_{\p}^{\prime\prime}(t) = \frac{\sigma^2_{\mu_{\p,t}}(-\beta_{\p}^{\prime}(t)\log |T^{\prime}|+f_{\p})}{\int \log|T^{\prime}|d\mu_{\p,t}} \label{beta2} \end{eqnarray} where the variance $\sigma_{\mu_{\p,t}}^2(-\beta_{\p}^{\prime}(t)\log |T^{\prime}|+f_{\p})$ is given by \begin{eqnarray} \label{variance} \sigma^2_{\mu_{\p,t}}(f_{\p,t})= \int f_{\p,t}^2\textup{d}\mu_{\p,t}+ 2\sum_{n=1}^{\infty}\left(\int f_{\p,t} \cdot f_{\p,t} \circ T^n \textup{d}\mu_{\p,t}\right) . \end{eqnarray} \end{enumerate} Moreover, these properties determine the graph of $\beta_{\p}(t)$; see Figure \ref{beta}. 
\end{prop} \begin{proof} It is easy to see that $\beta_{\p}$ is decreasing, since $$P(-\beta_{\p}(t)\log|T^{\prime}|+tf_{\p})= \lim_{n \to \infty} \frac{1}{n} \log \left( \sum_{i_1,\ldots, i_n \in \mathbb{N}^n}\frac{(p_{i_1} \ldots p_{i_n})^t}{|(T^n)^{\prime}(\Pi((i_1 \ldots i_n)^{\infty}))|^{\beta_{\p}(t)}}\right).$$ To see that $\beta_{\p}$ is convex, notice that for any $n \in \mathbb{N}$, and $a, u, t \in (0,1)$ \begin{eqnarray*} \sum_{i_1,\ldots, i_n \in \mathbb{N}^n} \frac{p_{i_1}^{at}\ldots p_{i_n}^{at}}{|(T^n)^{\prime}(\Pi((i_1\ldots i_n)^{\infty}))|^{a\beta_{\p}(t)}}\frac{p_{i_1}^{(1-a)u}\ldots p_{i_n}^{(1-a)u}}{|(T^n)^{\prime}(\Pi((i_1\ldots i_n)^{\infty}))|^{(1-a)\beta_{\p}(u)}} \leq \\ \left(\sum_{i_1,\ldots, i_n \in \mathbb{N}^n} \frac{p_{i_1}^{t}\ldots p_{i_n}^{t}}{|(T^n)^{\prime}(\Pi((i_1\ldots i_n)^{\infty}))|^{\beta_{\p}(t)}}\right)^a\left(\sum_{i_1,\ldots, i_n \in \mathbb{N}^n} \frac{p_{i_1}^{u}\ldots p_{i_n}^{u}}{|(T^n)^{\prime}(\Pi((i_1\ldots i_n)^{\infty}))|^{\beta_{\p}(u)}}\right)^{1-a} \end{eqnarray*} by H\"older's inequality. Therefore \begin{align*} P(-(a\beta_{\p}(t)&+(1-a)\beta_{\p}(u))\log|T^{\prime}|+(at+(1-a)u)f_{\p}) \\ &\leq aP(-\beta_{\p}(t)\log |T^{\prime}|+tf_{\p}) +(1-a)P(-\beta_{\p}(u)\log |T^{\prime}|+uf_{\p})=0. \end{align*} Therefore it follows that $\beta_{\p}(at+(1-a)u) \leq a\beta_{\p}(t)+(1-a)\beta_{\p}(u)$ since when $t$ is fixed, $P(-b\log |T^{\prime}|+tf_{\p})$ is decreasing in $b$. For the second part, by using Proposition \ref{bd} it is easy to see that $P(-\log |T^{\prime}|)=0$ which implies that $\beta_{\p}(0)=1$. Similarly it is easy to see that $P(f_{\p})=0$ thus it follows that $\beta_{\p}(1)=0$. To prove the third part, we begin by showing that $\beta_{\p}$ is analytic in a neighbourhood of 1. Let $r< \frac{1}{2}$. Since $p_n =O( \frac{1}{n^2})$ and $2(1-r)>1$, $$\sum_{n \in \mathbb{N}} p_n^{1-r} =O \left(\sum_{n \in \mathbb{N}} \frac{1}{n^{2(1-r)}}\right) < \infty. $$ 
Let $(t, b) \in [1-\frac{r}{2}, 1+\frac{r}{2}] \times [-\frac{r}{2}, \frac{r}{2}]$. Then since $p_n=O(\frac{1}{n^2})$ there exists a constant $K>0$ such that \begin{align*} P(-b\log &|T^{\prime}|+tf_{\p}) \\ &\leq \lim_{n \to \infty} \frac{1}{n} \log \left( \sum_{i_1\ldots i_n \in \mathbb{N}^n} (p_{i_1}\ldots p_{i_n})^{1-\frac{r}{2}} |(T^n)^{\prime}(\Pi(i_1\ldots i_n)^{\infty})|^{\frac{r}{2}} \right) \\ &\leq \lim_{n \to \infty} \frac{1}{n} \log \left( \sum_{i_1\ldots i_n \in \mathbb{N}^n} (p_{i_1}\ldots p_{i_n})^{1-r} (p_{i_1}\ldots p_{i_n})^{\frac{r}{2}}(|\I_{i_1}|\ldots |\I_{i_n}|)^{-\frac{r}{2}} C^{n\frac{r}{2}} \right) \\ &\leq \lim_{n \to \infty} \frac{1}{n} \log \left( \sum_{i_1\ldots i_n \in \mathbb{N}^n} (p_{i_1}\ldots p_{i_n})^{1-r} K^{n}C^{n\frac{r}{2}} \right) \\ &= \frac{r}{2}\log C + \log K + \log\left(\sum_{n \in \mathbb{N}} p_n^{1-r}\right) < \infty \end{align*} where the second inequality follows by Proposition \ref{bd}. Therefore, by \cite[Theorem 2.6.12]{mu} $P(-b\log |T^{\prime}|+tf_{\p})$ is analytic for all $(t, b) \in [1-\frac{r}{2}, 1+\frac{r}{2}] \times [-\frac{r}{2}, \frac{r}{2}]$ and by the implicit function theorem $\beta_{\p}(t)$ is analytic for all $t \in (1-\frac{r}{2}, 1+\frac{r}{2})$. We will return to show that $\beta_{\p}$ is analytic on the whole interval $[0,1]$ after verifying that (\ref{beta1}) holds for any $t \in (1-\frac{r}{2}, 1+\frac{r}{2})$ (and indeed for all $t$ at which $\beta_{\p}(t)$ is analytic). To verify (\ref{beta1}) we follow the arguments of Ruelle \cite{ruelle}. Fix $t$ such that $\beta_{\p}$ is analytic at $t$. We differentiate (\ref{bp}) and apply \cite[Proposition 2.6.13]{mu} and the implicit function theorem to deduce that \begin{eqnarray} -\beta_{\p}^{\prime}(t) \int \log |T^{\prime}|d\mu_{\p,t} + \int f_{\p} d\mu_{\p,t} =0. 
\label{deriv} \end{eqnarray} In particular, since $\beta_{\p}(1)=0$, it follows that $\mu_{\p,1}=\mu_{\p}$ and therefore $$\dim \mu_{\p}= \frac{h(\mu_{\p})}{\chi(\mu_{\p})}=-\frac{\int f_{\p}d\mu_{\p,1}}{\int \log|T^{\prime}|d\mu_{\p,1}}= -\beta_{\p}^{\prime}(1)=|\beta_{\p}^{\prime}(1)|.$$ Using this we can now show that in fact $\beta_{\p}(t)$ is analytic for \emph{all} $t \in [0,1]$. By Hypothesis \ref{hyp} $|\beta_{\p}^{\prime}(1)|=\dim \mu_{\p}> \frac{1}{2}$, therefore it follows by convexity of $\beta_{\p}$ that $\beta_{\p}(t)> \frac{1}{2} (1-t)$ for all $t \in [0,1]$. In particular for all $t \in [0,1]$ \begin{eqnarray} \beta_{\p}(t)+t \geq \beta_{\p}(t)+\frac{1}{2} t> \frac{1}{2}. \label{beta lb} \end{eqnarray} Fix $t$ and choose $\epsilon$ sufficiently small so that $\beta_{\p}(t)+t-2\epsilon> \frac{1}{2}$. Then for all $(u, b) \in (t-\epsilon, t+\epsilon) \times (\beta_{\p}(t)-\epsilon, \beta_{\p}(t)+\epsilon)$, \begin{eqnarray*} P(-b\log|T^{\prime}|+uf_{\p})&=& \lim_{n \to \infty} \frac{1}{n} \log \left(\sum_{\i \in \mathbb{N}^n} \frac{p_{\i}^u}{|(T^n)^{\prime}(\Pi((\i)^{\infty}))|^{b}}\right) \\ &\leq& C'+ \log \left(\sum_{n \in \mathbb{N}} \frac{1}{n^{2(b+u)}}\right)< \infty \end{eqnarray*} where $C'$ is a constant coming from Proposition \ref{bd} and the fact that $p_n=O(\frac{1}{n^2})$, and the final inequality is because $b+u \geq \beta_{\p}(t)+t-2\epsilon>\frac{1}{2}$. By the implicit function theorem and \cite[Theorem 2.6.12]{mu}, $\beta_{\p}(t)$ is analytic for all $t \in [0,1]$, and the derivative $\beta_{\p}^{\prime}(t)$ satisfies (\ref{beta1}) by the same arguments as before. 
To verify (\ref{beta2}) we differentiate (\ref{deriv}) to obtain $$\beta_{\p}^{\prime\prime}(t) \int \log|T^{\prime}| \textup{d}\mu_{\p,t} + \beta_{\p}^{\prime}(t) \frac{\textup{d} \left( \int \log |T^{\prime}| \textup{d}\mu_{\p,t}\right)}{\textup{d} t} - \frac{\textup{d}\left(\int f_{\p} \textup{d}\mu_{\p,t}\right)}{\textup{d} t}=0.$$ By \cite[Proposition 2.6.14]{mu}, $$\frac{\textup{d} \left( \int \log |T^{\prime}| \textup{d}\mu_{\p,t}\right)}{\textup{d} t}= \sigma^2_{\mu_{\p,t}}( -\beta_{\p}^{\prime}(t)\log|T^{\prime}|+f_{\p}, \log|T^{\prime}|) $$ and $$\frac{\textup{d}\left(\int f_{\p} \textup{d}\mu_{\p,t}\right)}{\textup{d} t}= \sigma^2_{\mu_{\p,t}}( -\beta_{\p}^{\prime}(t)\log|T^{\prime}|+f_{\p}, f_{\p}) $$ and therefore \begin{eqnarray} \beta_{\p}^{\prime\prime}(t) = \frac{\sigma^2_{\mu_{\p,t}}(-\beta_{\p}^{\prime}(t)\log |T^{\prime}|+f_{\p})}{\int \log|T^{\prime}|\textup{d}\mu_{\p,t}} \geq 0. \label{deriv2} \end{eqnarray} By (\ref{deriv}), $\mu_{\p,t}(-\beta_{\p}^{\prime}(t)\log|T^{\prime}|+f_{\p})=0$ for all $\p$ and $t$ (where $\mu_{\p,t}(f)$ denotes $\int f \textup{d} \mu_{\p,t}$), thus $\sigma_{\mu_{\p,t}}^2(f_{\p,t})$ is given by (\ref{variance}). \end{proof} By rewriting $\dim \mu_{\p}$ as the absolute value of the derivative of $\beta_{\p}$ at 1, we are now able to exploit the tools of calculus to find an upper bound on $|\beta_{\p}^{\prime}(1)|=\dim \mu_{\p}$. In particular we are interested in showing that $\beta_{\p}$ is `uniformly convex' in some compact interval of $t$. Therefore we need to obtain lower bounds on $\beta_{\p}^{\prime\prime}(t)$ which are uniform over all $\p$ which satisfy Hypothesis \ref{hyp} and all $t$ belonging to some compact interval. From now on we shall denote $f_{\p,t}: [0,1] \setminus \mathbb{Q} \to \mathbb{R}$ by \begin{eqnarray} f_{\p,t}= -\beta_{\p}^{\prime}(t)\log |T^{\prime}|+f_{\p}. 
\label{fpt} \end{eqnarray} By (\ref{deriv2}), we are interested in finding an upper bound for the Lyapunov exponent $\chi(\mu_{\p,t})$ and a lower bound for the variance $\sigma^2_{\mu_{\p,t}}(f_{\p,t})$, which henceforth we will denote by $\sigma_{\p,t}^2(f_{\p,t})$. The Lyapunov exponent is not difficult to estimate from above, but we will delay this until Lemma \ref{e2 proof}. Instead, our primary focus will be obtaining a lower bound for the variance. It is well known that the variance satisfies \begin{eqnarray} \label{gk fpt} \sigma^2_{\p,t}(f_{\p,t})= \int \tilde{f}_{\p,t}^2\textup{d}\mu_{\p,t}+ 2\sum_{n=1}^{\infty}\left(\int \tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n \textup{d}\mu_{\p,t}\right) \end{eqnarray} for any function $\tilde{f}_{\p,t}$ which is cohomologous to $f_{\p,t}$. The second term on the right hand side of (\ref{gk fpt}) is what makes it difficult to study lower bounds on the variance. Our aim, therefore, is to find a coboundary $U_{\p,t}-U_{\p,t} \circ T$ such that if we substitute $\tilde{f}_{\p,t}=f_{\p,t}+U_{\p,t}-U_{\p,t}\circ T$ into (\ref{gk fpt}) then the second term on the right hand side vanishes. To this end, in the first part of this section we introduce a family of transfer operators which will aid us in obtaining an appropriate function $U_{\p,t}$ for which $\sigma_{\p,t}^2(f_{\p,t})= \int \tilde{f}_{\p,t}^2\textup{d}\mu_{\p,t}$.
\begin{defn} For a fixed $\p$ and $t$ we define the bounded linear operator $\l_{\p,t}: \mathcal{H} \to \mathcal{H}$ by $$\l_{\p, t} w(x)= \sum_{Ty=x} \exp(g_{\p,t}(y))w(y).$$ Note that this can be written alternatively as $$\l_{\p, t} w(x)= \sum_{n \in \mathbb{N}} \exp(g_{\p,t}(T_n^{-1}x))w(T_n^{-1}x).$$ \end{defn} Notice that each operator in the family above is well-defined since $$\sum_{n \in \mathbb{N}} \exp(g_{\p,t}(T_n^{-1}x))= \sum_{n \in \mathbb{N}} \frac{p_n^t}{|T^{\prime}(T_n^{-1}x)|^{\beta_{\p}(t)}}< \infty.$$ It will be more convenient for us to work with the normalised transfer operator. \begin{prop} \label{rpf2} There exists a normalised operator $\m_{\p,t}: \mathcal{H} \to \mathcal{H}$ given by $$\m_{\p,t} w= h_{\p,t}^{-1}\l_{\p,t}(h_{\p,t} w)$$ such that $\m_{\p,t} \one=\one$, where $h_{\p,t}$ is the unique fixed point of $\l_{\p,t}$. Equivalently $\m_{\p,t}w(x)=\sum_{Ty=x} \exp(\tilde{g}_{\p,t}(y))w(y)$ where $\tilde{g}_{\p,t}=g_{\p,t}+\log h_{\p,t}-\log h_{\p,t} \circ T$. Moreover, $\m_{\p,t}^{\ast} \mu_{\p,t}= \mu_{\p,t}$ and $\textup{d}\mu_{\p,t}=h_{\p,t}\textup{d} \tilde{\mu}_{\p,t}$ where $\l_{\p,t}^{\ast}\tilde{\mu}_{\p,t}=\tilde{\mu}_{\p,t}$. \end{prop} \begin{proof} This is essentially a restatement of Proposition \ref{rpf}. \end{proof} When seeking \emph{upper} estimates on the variance, the second term on the right hand side in (\ref{gk fpt}) can easily be dealt with: for instance, it can be bounded above given an explicit rate for the decay of the correlation functions. However, when one is interested in \emph{lower} estimates, this term makes the variance difficult to bound from below. Since (\ref{gk fpt}) holds for any $\tilde{f}_{\p,t}$ which is cohomologous to $f_{\p,t}$, it would be useful if we could find some $\tilde{f}_{\p,t} \sim f_{\p,t}$ for which $$\int \tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n \textup{d}\mu_{\p,t}=0$$ for all $n \in \mathbb{N}$.
Since $\m_{\p,t}^{\ast} \mu_{\p,t}=\mu_{\p,t}$ and $\m_{\p,t}^n(\tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n)= \tilde{f}_{\p,t} \m_{\p,t}^n(\tilde{f}_{\p,t})$ we can rewrite the above as \begin{eqnarray*} \int \tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n \textup{d}\mu_{\p,t}&=& \int \m_{\p,t}^n(\tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n)\textup{d}\mu_{\p,t} \\ &=& \int \tilde{f}_{\p,t} \cdot \m_{\p,t}^n(\tilde{f}_{\p,t}) \textup{d}\mu_{\p,t}=0. \end{eqnarray*} Writing $\tilde{f}_{\p,t}=f_{\p,t} + U_{\p,t} - U_{\p,t} \circ T$ for some coboundary $ U_{\p,t} - U_{\p,t} \circ T$, it transpires that the property we want is $\m_{\p,t}(f_{\p,t} + U_{\p,t} - U_{\p,t} \circ T)=0$. This leads us to the following definition for $U_{\p,t}$, which we now fix. \begin{defn} Define $U_{\p,t}:[0,1] \setminus \mathbb{Q} \to \mathbb{R}$ by $$U_{\p,t}= \sum_{n=1}^{\infty} \m_{\p,t}^n\left(f_{\p,t}\right)$$ and $\tilde{f}_{\p,t}:[0,1] \setminus \mathbb{Q} \to \mathbb{R}$ by $$\tilde{f}_{\p,t}= f_{\p,t} + U_{\p,t}- U_{\p,t} \circ T.$$ \label{coboundary} \end{defn} It will be a consequence of Lemma \ref{lemma r} that $U_{\p,t} \in \H_{\alpha}$ (although it is already not difficult to see this: it is easy to show that $\m_{\p,t} f_{\p,t} \in \H_{\alpha}$, and therefore by \cite[Theorem 2.4.6]{mu} one can deduce that $\norm{\m_{\p,t}^n f_{\p,t}}_{\alpha}$ decays exponentially fast in $n$). As suggested above, it turns out that this definition for $U_{\p,t}$ fits our purposes. 
\begin{lma} \label{coboundary good} For all $\p$ and $t$, $$\m_{\p,t}(\tilde{f}_{\p,t})=\m_{\p,t}(f_{\p,t}+U_{\p,t}-U_{\p,t} \circ T)=0.$$ \end{lma} \begin{proof} It follows from the definitions that \begin{eqnarray*} \m_{\p,t}(\tilde{f}_{\p,t}) &=& \m_{\p,t}(f_{\p,t})+ \m_{\p,t}(U_{\p,t}) - \m_{\p,t}(U_{\p,t} \circ T) \\ &=& \m_{\p,t}(f_{\p,t})+ \sum_{n=2}^{\infty} \m_{\p,t}^n(f_{\p,t}) - \sum_{n=2}^{\infty} \m_{\p,t}^n(f_{\p,t} \circ T) \\ &=& \sum_{n=1}^{\infty} \m_{\p,t}^n(f_{\p,t}) -\sum_{n=2}^{\infty} \m_{\p,t}^n(f_{\p,t} \circ T) \\ &=& \sum_{n=1}^{\infty} \m_{\p,t}^n(f_{\p,t}) -\sum_{n=2}^{\infty} \m_{\p,t}^{n-1}(\m_{\p,t}(f_{\p,t} \circ T))\\ &=& \sum_{n=1}^{\infty} \m_{\p,t}^n(f_{\p,t})- \sum_{n=2}^{\infty} \m_{\p,t}^{n-1}(f_{\p,t} \cdot \m_{\p,t}(\mathbf{1})) \\ &=& \sum_{n=1}^{\infty} \m_{\p,t}^n(f_{\p,t})- \sum_{n=1}^{\infty} \m_{\p,t}^{n}(f_{\p,t}) \\ &=& 0, \end{eqnarray*} where the penultimate equality uses that $\m_{\p,t}(\mathbf{1})=\mathbf{1}$ and reindexes the second sum. \end{proof} As an immediate corollary to the above, we can write the variance as a single integral as we intended. \begin{cor} We can write $$\sigma_{\p,t}^2(f_{\p,t})=\int \tilde{f}_{\p,t}^2 \textup{d}\mu_{\p,t}.$$ \label{rewrite2} \end{cor} \begin{proof} By (\ref{gk fpt}), \begin{eqnarray*} \sigma_{\p,t}^2(f_{\p,t}) &=& \int \tilde{f}_{\p,t}^2 \textup{d}\mu_{\p,t} + 2\sum_{n=1}^{\infty} \int \tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t}\circ T^n \textup{d}\mu_{\p,t} \\ &=& \int \tilde{f}_{\p,t}^2 \textup{d}\mu_{\p,t} + 2\sum_{n=1}^{\infty} \int \m_{\p,t}^n(\tilde{f}_{\p,t} \cdot \tilde{f}_{\p,t} \circ T^n)\textup{d}\mu_{\p,t} \\ &=& \int \tilde{f}_{\p,t}^2 \textup{d}\mu_{\p,t} +2 \sum_{n=1}^{\infty} \int \tilde{f}_{\p,t} \cdot \m_{\p,t}^n(\tilde{f}_{\p,t}) \textup{d}\mu_{\p,t} \\ &=& \int \tilde{f}_{\p,t}^2 \textup{d}\mu_{\p,t} \end{eqnarray*} since, by Lemma \ref{coboundary good}, $\m_{\p,t}^n(\tilde{f}_{\p,t})=0$ for all $n \in \mathbb{N}$.
\end{proof} Now that we have managed to find a cohomologous function $\tilde{f}_{\p,t} \sim f_{\p,t}$ with the property that $\sigma^2_{\p,t}(f_{\p,t})= \int \tilde{f}_{\p,t}^2\textup{d}\mu_{\p,t}$, we can shift our focus to estimating $\int \tilde{f}^2_{\p,t}\textup{d}\mu_{\p,t}$ in a uniform way. Let $I=[\frac{1}{8}, \frac{1}{4}]$ and notice that for all $t \in I$, \begin{eqnarray} \beta_{\p}(t) \geq \beta_{\p}\left(\frac{1}{4}\right)= \frac{\beta_{\p}(\frac{1}{4})}{1-\frac{1}{4}} \cdot \left(1-\frac{1}{4}\right) \geq \frac{3}{4} |\beta_{\p}^{\prime}(1)| \geq \frac{9}{16} \label{916} \end{eqnarray} since $\beta_{\p}$ is convex and $\dim \mu_{\p} \geq \frac{3}{4}.$ Let $\mathcal{Z}$ be a finite set of periodic points of $T$. Suppose there exist constants $c_1$ and $c_2$ such that for all $\p$ and $t \in I$, \begin{enumerate} \item there exists a periodic point $z \in \mathcal{Z}$ of period $n$ such that $\frac{1}{n}|S_n \tilde{f}_{\p,t}(z)|\geq c_1$, \item $[\tilde{f}_{\p,t}]_{\alpha}\leq c_2 $. \end{enumerate} Then we can bound $\int \tilde{f}^2_{\p,t}\textup{d}\mu_{\p,t}$ from below by a `strip' of the integral which is determined by an interval centred at an appropriate point $z'$ in the orbit of $z=\Pi((\i)^{\infty})$ for which $|\tilde{f}_{\p,t}(z')| \geq c_1$; since $z'$ is again a periodic point, after relabelling $\i$ by a cyclic permutation we may assume that $z'=z$, so that $|\tilde{f}_{\p,t}(z)| \geq c_1$. We simply need to make the interval width sufficiently small so that $\tilde{f}_{\p,t}$ remains large within the interval, which we can do by using the H\"older properties of $\tilde{f}_{\p,t}$. In particular if $m$ is large enough that $\alpha^m \leq \frac{c_1}{2c_2}$ then for any $y \in \I_{\i|_m}$, $$|\tilde{f}_{\p,t}(y)-\tilde{f}_{\p,t}(z)| \leq \frac{c_1}{2}$$ so it follows that for all $y \in \I_{\i|_m}$, $|\tilde{f}_{\p,t}(y)| \geq \frac{c_1}{2}$. Therefore \begin{eqnarray} \label{strategy} \sigma_{\p,t}^2(f_{\p,t})= \int \tilde{f}^2_{\p,t}\textup{d}\mu_{\p,t} \geq \frac{c_1^2}{4} \mu_{\p,t}(\I_{\i|_m}).
\end{eqnarray} Therefore we see that a uniform lower bound on $\sigma_{\p,t}^2(f_{\p,t})$ depends on the following three lemmas. In what follows each statement holds uniformly for all $\p$ that satisfy Hypothesis \ref{hyp} and all $t \in I$. \begin{lma} \label{lemma periodic} Given $\i \in \Sigma^{\ast}$ let $z_{\i}$ denote the periodic point for $T$ given by $z_{\i}=\Pi((\i)^{\infty})$. There exists a constant $c_1>0$, independent of $\p$ and $t$, such that for any $t$ and $\p$ there exists $z \in \{z_1, z_2, z_{12}\}$ for which $$\left|\frac{1}{2}S_{2} \tilde{f}_{\p, t}(z)\right| \geq c_1.$$ Moreover, for any $\p$ which satisfies $p_1> \psi$, $|f_{\p,t}(z_1)| \geq c_1$. \end{lma} \begin{lma} The function $U_{\p,t}\in \lip$ for all $t$ and $\p$. Moreover, there exists a uniform constant $c_2$ such that $[\tilde{f}_{\p,t}]_{\alpha} \leq c_2$. \label{lemma r} \end{lma} \begin{lma} \label{measure lemma} There exists $c_3>0$ such that for any $\p$ and $t$, $$c_3^{-1} \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}} \leq \mu_{\p,t}(\I_{i_1\ldots i_n}) \leq c_3 \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}}$$ for any $n \in \mathbb{N}$, $i_1\ldots i_n \in \mathbb{N}^n$ and $z \in \I_{i_1\ldots i_n}$. \end{lma} In particular, if Lemmas \ref{lemma periodic}--\ref{measure lemma} hold then for each $\p$ and $t$ one can find $z \in \{z_1, z_2, z_{12}, z_{21}\}$ for which $|\tilde{f}_{\p,t}(z)| \geq c_1$. Therefore, by fixing $m$ sufficiently large that $\alpha^m \leq \frac{c_1}{2c_2}$ it follows that $$\sigma_{\p,t}^2(f_{\p,t}) \geq \frac{c_1^2}{4}c_3^{-1}\frac{\epsilon^{\frac{m}{4}}}{9^m}$$ for all $t \in I$ and $\p$ that satisfy Hypothesis \ref{hyp}. \subsection{Proof of Lemma \ref{lemma periodic}} \label{periodic} We begin by proving Lemma \ref{lemma periodic}. Essentially this boils down to two key observations.
Firstly observe that $S_n \tilde{f}_{\p,t}(z)=S_n f_{\p,t}(z)$ for any periodic point $z=T^nz$ since $f_{\p,t}$ and $\tilde{f}_{\p,t}$ are cohomologous. Secondly observe that by the non-linearity of $T$, $-\log|T^{\prime}|$ is not locally constant whereas $f_{\p}$ is. In particular, this means that $$\left|\log \frac{T^{\prime}(z_1)T^{\prime}(z_2)}{T^{\prime}(z_{12})T^{\prime}(z_{21})}\right|\neq 0.$$ \vspace{5mm} \noindent \emph{Proof of Lemma \ref{lemma periodic}.} Fix $t$ and $\mathbf{p}= (p_1, p_2, \ldots)$. Recall that by the convexity of $\beta_{\p}$, $|\beta_{\p}^{\prime}(t)| \geq |\beta_{\p}^{\prime}(1)|=\dim \mu_{\p} >\frac{3}{4}$. Put $$c_{11}= \frac{1}{8} \left|\log \frac{T^{\prime}(z_1)T^{\prime}(z_2)}{T^{\prime}(z_{12})T^{\prime}(z_{21})}\right|>0.$$ Without loss of generality we can assume that both \begin{eqnarray} |f_{\p,t}(z_1)|=|-\beta_{\p}^{\prime}(t)\log|T^{\prime}(z_1)|+\log p_1| < c_{11} \label{z1} \end{eqnarray} and \begin{eqnarray} |f_{\p,t}(z_2)|=|-\beta_{\p}^{\prime}(t)\log|T^{\prime}(z_2)|+\log p_{2}| < c_{11} \label{z2} \end{eqnarray} since otherwise we are done. We will show that this forces $|\frac{1}{2}S_2 f_{\p,t}(z_{12})|>c_{11}$, which will complete the proof. By (\ref{z1}) and (\ref{z2}) it follows that $$\frac{1}{2}|-\beta_{\p}^{\prime}(t)\log |T^{\prime}(z_1)T^{\prime}(z_2)| + \log p_1p_{2}| \leq c_{11}.$$ Moreover \begin{eqnarray*} 4|\beta_{\p}^{\prime}(t)|c_{11}&=&\frac{|\beta_{\p}^{\prime}(t)|}{2}\left|\log \frac{T^{\prime}(z_1)T^{\prime}(z_2)}{T^{\prime}(z_{12})T^{\prime}(z_{21})}\right| \\ &\leq& \frac{1}{2}\left|-\beta_{\p}^{\prime}(t)\log |T^{\prime}(z_1)T^{\prime}(z_2)|+\log p_1p_{2}\right| \\ & & +\frac{1}{2}\left|-\beta_{\p}^{\prime}(t)\log |T^{\prime}(z_{12})T^{\prime}(z_{21})|+\log p_1p_{2}\right| \\ &\leq &\frac{1}{2}\left|-\beta_{\p}^{\prime}(t)\log |T^{\prime}(z_{12})T^{\prime}(z_{21})|+\log p_1p_{2}\right|+c_{11}. 
\end{eqnarray*} Therefore \begin{eqnarray*} \frac{1}{2}\left|-\beta_{\p}^{\prime}(t)\log |T^{\prime}(z_{12})T^{\prime}(z_{21})|+\log p_1p_{2}\right| \geq 4|\beta_{\p}^{\prime}(t)|c_{11}- c_{11} \geq 2c_{11} \end{eqnarray*} where the final inequality holds because $|\beta^{\prime}_{\p}(t)| \geq \frac{3}{4}$. Next, put $c_{12}= \frac{1}{2} \log|T^{\prime}(z_1)|$. Recall that by the definition of $\psi$ in (\ref{psi}), if $p_1> \psi$ then $\log p_1 \geq -\frac{1}{4} \log|T^{\prime}(z_1)|$. Therefore, since $-\beta_{\p}^{\prime}(t) \geq \frac{3}{4}$ it follows that $$|f_{\p,t}(z_1)| \geq \frac{3}{4} \log|T^{\prime}(z_1)|-\frac{1}{4} \log|T^{\prime}(z_1)|= \frac{1}{2}\log|T^{\prime}(z_1)|=c_{12}.$$ Finally, putting $c_1=\min\{c_{11}, c_{12}\}$ completes the proof. \qed \subsection{Proof of Lemma \ref{lemma r}} In this section we will prove Lemma \ref{lemma r}. By \cite[Theorem 2.4.6]{mu} we know that for each $\p$ and $t$ there exist constants $c_{\p,t}>0$, $0<\rho_{\p,t}<1$ such that for all $f \in \mathcal{H}_{\alpha}$ with $\mu_{\p,t}(f)=0$, $$\norm{\m_{\p,t}^nf}_{\alpha} \leq c_{\p,t}\rho_{\p,t}^n \norm{f}_{\alpha}.$$ We would like to prove a uniform (in $\p$ and $t$) version of the above property. In fact we will work with the Lipschitz norm instead, and show that we can choose uniform $c_4>0$, $0<\rho<1$ such that for all $\p$ and $t$ and $f \in \lip$ with $\mu_{\p,t}(f)=0$, \begin{eqnarray} \norm{\m_{\p,t}^nf}_{0,1} \leq c_4\rho^n \norm{f}_{0,1}. \label{c3 thing} \end{eqnarray} To do this we will make use of `Hilbert-Birkhoff cone theory' \cite{liverani, viana}, which yields particularly explicit estimates for the rate of decay of norms under transfer operators and will thereby allow us to verify that a uniform property such as (\ref{c3 thing}) holds. The result will then follow by obtaining upper bounds on $\norm{\m_{\p,t}f_{\p,t}}_{0,1}$ and $\norm{\m_{\p,t}^2f_{\p,t}}_{0,1}$.
We begin this section by summarising the tools from Hilbert-Birkhoff cone theory and how these can be applied to transfer operators. For more details the reader is directed to \cite{liverani, viana}. For $a>0$ define \begin{eqnarray} \mathcal{C}_a &=& \left\{ w \in \cont: w \geq 0 \textnormal{ and } w(x) \leq e^{a|x-y|} w(y) \textnormal{ for all } x, y \in [0,1]\right\}. \label{cone} \end{eqnarray} Then $\c$ is a closed convex cone; this means that $\lambda w \in \c$ and $w_1 + w_2 \in \c$ for all $\lambda >0$ and all $w, w_1, w_2 \in \c$. We can define a partial ordering $\preceq$ on $\cont$ by \begin{eqnarray*} \begin{array}{ccc} v \preceq w & \Leftrightarrow & w-v \in \c \cup \{0\}. \end{array} \end{eqnarray*} Moreover, using this partial ordering one can define the \emph{projective metric} $\Theta$ on $\c$; we will not actually require an explicit characterisation of this metric but it is defined and discussed in \cite[Proposition 2.2 and Example 2.3]{viana}. The following proposition follows from \cite[Propositions 2.3 and 2.5]{viana}. \begin{prop}\label{contract prop} Let $L:C([0,1]) \to C([0,1])$ be a linear operator and $\c$ be the cone as defined in (\ref{cone}), equipped with the projective metric $\Theta$. Suppose there exist $a>0$ and $\lambda \in (0,1)$ such that $L(\c) \subset \mathcal{C}_{\lambda a}$. Then \begin{enumerate} \item \begin{equation} D=\sup_{v, w \in \c} \Theta(L(v), L(w)) < \infty, \label{diam} \end{equation} \item there exists $r \in (0,1)$ that depends only on $D$ such that for all $v, w \in \c$, $$ \Theta(L(v), L(w)) \leq r\Theta(v,w).$$ \end{enumerate} \end{prop} The following is an easy modification of \cite[Lemma 1.3]{liverani}. \begin{prop} \label{transfer} Let $\norm{\cdot}_1$, $\norm{\cdot}_2$ be two norms on $C([0,1])$ and consider the cone $\c$ which induces the partial ordering $\preceq$.
Suppose there exists $C \geq 1$ such that for all $f, g \in C([0,1])$ \begin{eqnarray*} \begin{array}{ccc} -f \preceq g \preceq f & \Rightarrow & \norm{g}_1 \leq \norm{f}_1 \\ & & \norm{g}_2 \leq C\norm{f}_2. \end{array} \end{eqnarray*} Then given any $f, g \in \c$ for which $\norm{f}_1=\norm{g}_1$, $$\norm{f-g}_2 \leq C^2(e^{\Theta(f,g)}-1)\norm{f}_2.$$ \end{prop} We also note that it is easy to check that $-f \preceq g \preceq f$ implies that $\norm{g}_{\infty} \leq \norm{f}_{\infty}$ and $\norm{g}_{L^1(m)} \leq \norm{f}_{L^1(m)}$ for any measure $m$ on $[0,1]$. Additionally one can check that $-f \preceq g \preceq f$ implies that $\norm{g}_{0,1} \leq (1+a)^2 \norm{f}_{0,1}$. We will now apply Proposition \ref{contract prop} to the operator $\m_{\p,t}^2$ to deduce that it strictly contracts the projective metric $\Theta$ (with a view to later combining this with Proposition \ref{transfer} in order to prove (\ref{c3 thing})). The following lemma will also provide us with uniform regularity properties for the fixed points $h_{\p,t}$. \begin{lma} There exist $a>0$, $D< \infty$ and $r \in (0,1)$ such that \begin{eqnarray} \sup_{v, w \in \c} \Theta(\m^2_{\p,t}(v), \m^2_{\p,t}(w)) \leq D \label{diamm} \end{eqnarray} and, for all $v, w \in \c$, \begin{eqnarray} \Theta(\m^2_{\p,t}(v), \m^2_{\p,t}(w)) \leq r\Theta(v,w). \label{contract} \end{eqnarray} Moreover, for all $x, y \in [0,1] \setminus \mathbb{Q}$ and all $\p$, $t$, \begin{eqnarray} \exp(-a|x-y|) \leq \frac{h_{\p,t}(x)}{h_{\p,t}(y)} \leq \exp(a|x-y|). \label{fp} \end{eqnarray} \label{fixed pt} \end{lma} \begin{proof} We begin by proving that the analogues of (\ref{diamm}) and (\ref{contract}) hold for $\l_{\p,t}^2$ for some $a_0$, $r_0$ and $D_0$. Since $f_{\p}$ is locally constant, $$[g_{\p,t}]_{\alpha}=[\beta_{\p}(t)\log|T^{\prime}|]_{\alpha} \leq [\log|T^{\prime}|]_{\alpha}$$ so there exists $\kappa< \infty$ such that $[g_{\p,t}]_{\alpha} \leq \kappa$ for all $\p$ and $t$.
Let $a_0>0$, $w \in \mathcal{C}_{a_0}$ and $x, y \in [0,1]$. Recall that for all $x$, $|(T^2)^{\prime}(x)| \geq \frac{1}{\alpha^2}=\frac{9}{4}$. In particular, this means that any local inverse branch of $T^2$ is a contraction with Lipschitz constant at most $\alpha^2$. Thus \begin{eqnarray*} (\l^2_{\p,t} w)(x) &=&\sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}x))w\left(T_{\mathbf{n}}^{-1}x\right)\\ &\leq&\sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}y))w\left(T_{\mathbf{n}}^{-1}y\right)\exp((2\kappa +a_0)|T_{\mathbf{n}}^{-1}x- T_{\mathbf{n}}^{-1}y|) \\ &\leq& \sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}y))w\left(T_{\mathbf{n}}^{-1}y\right)\exp(\alpha^2(2\kappa +a_0)|x-y|). \end{eqnarray*} Choose $\alpha^2< \lambda_0 <1$ and $a_0 \geq \frac{2\alpha^2\kappa}{\lambda_0 - \alpha^2}$. Then it follows that \begin{eqnarray} (\l^2_{\p,t} w)(x) &\leq & (\l^2_{\p,t} w)(y) \exp(a_0\lambda_0 |x-y|). \label{forcont} \end{eqnarray} Clearly $\l_{\p,t}^2w \geq 0$ and $\l_{\p,t}^2 w \in C([0,1])$. Therefore $\l_{\p,t}^2 \mathcal{C}_{a_0} \subset \mathcal{C}_{\lambda_0a_0}$. Hence by Proposition \ref{contract prop} there exists $D_0< \infty$ such that $\sup_{v,w \in \mathcal{C}_{a_0}}\Theta(\l^2_{\p,t}(v), \l^2_{\p,t}(w)) \leq D_0$ and there exists some $r_0\in (0,1)$ which depends only on $D_0$ for which \begin{eqnarray} \Theta(\l^2_{\p,t}(v), \l^2_{\p,t}(w)) \leq r_0\Theta(v,w) \label{l contract} \end{eqnarray} for all $v, w \in \mathcal{C}_{a_0}$. In particular $r_0$ and $D_0$ are independent of $\p$ and $t$. Using (\ref{l contract}) we can prove (\ref{fp}) for $a=a_0$, as follows. Let $N \in \mathbb{N}$ and consider integers $m, n \geq N$. By (\ref{l contract}), $$\Theta(\l_{\p,t}^{2n+k} \one, \l_{\p,t}^{2m+k} \one) \leq r_0^N\Theta(\l_{\p,t}^{2(n-N)+k} \one, \l_{\p,t}^{2(m-N)+k}\one) \leq D_0r_0^N$$ for each $k \in \{0,1\}$. Let $L^1=L^1(\tilde{\mu}_{\p,t})$.
Since $\norm{\l_{\p,t}^j \one}_{L^1}= \norm{\one}_{L^1}$ for all $j \in \mathbb{N}$, we can apply Proposition \ref{transfer} to the norms $\norm{\cdot}_1=\norm{\cdot}_{L^1}$ and $\norm{\cdot}_2= \norm{\cdot}_{\infty}$ to deduce that for all $n, m \geq N$, \begin{eqnarray*} \norm{\l_{\p,t}^{2n+k} \one- \l_{\p,t}^{2m+k} \one}_{\infty} &\leq& \exp(\Theta(\l_{\p,t}^{2n+k} \one,\l_{\p,t}^{2m+k} \one))-1 \\ &\leq& \exp(D_0r_0^N)-1 \\ &\leq& D_0\exp(D_0)r_0^N. \end{eqnarray*} This implies that $\l_{\p,t}^n \one$ is a Cauchy sequence in the uniform norm $\norm{\cdot}_{\infty}$. Thus the limit $\lim_{n \to \infty} \l_{\p,t}^n \one$ exists, belongs to $\mathcal{C}_{a_0}$, and is a fixed point of $\l_{\p,t}$. In particular, since the fixed point is unique, this means that $h_{\p,t}=\lim_{n \to \infty} \l_{\p,t}^n \one$ and therefore $h_{\p,t}$ satisfies (\ref{fp}) for $a=a_0$. We now use this fact to prove (\ref{diamm}) and (\ref{contract}). Let $a_1>0$ and $w \in \mathcal{C}_{a_1}$. Then \begin{eqnarray*} & &(\m^2_{\p,t} w)(x) =h_{\p,t}^{-1}(x)\sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}x))w\left(T_{\mathbf{n}}^{-1}x\right)h_{\p,t}\left(T_{\mathbf{n}}^{-1}x\right)\\ & &\leq h_{\p,t}^{-1}(x)\sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}y))w\left(T_{\mathbf{n}}^{-1}y\right)h_{\p,t}(T_{\mathbf{n}}^{-1}y)\exp((2\kappa+a_0+a_1)|T_{\mathbf{n}}^{-1}x- T_{\mathbf{n}}^{-1}y|) \\ & &\leq h_{\p,t}^{-1}(y) \sum_{\mathbf{n} \in \mathbb{N}^2} \exp(g^2_{\p,t}(T_{\mathbf{n}}^{-1}y))w\left(T_{\mathbf{n}}^{-1}y\right)h_{\p,t}(T_{\mathbf{n}}^{-1}y)\exp\left((a_0+\alpha^2(2\kappa+ a_0+a_1))|x-y|\right) . \end{eqnarray*} Choose $\alpha^2< \lambda_1 <1$ and $a_1 \geq \frac{a_0+2\alpha^2\kappa+\alpha^2 a_0}{\lambda_1 - \alpha^2}$. Then it follows that \begin{eqnarray} (\m^2_{\p,t} w)(x) \leq (\m^2_{\p,t} w)(y) \exp(a_1\lambda_1 |x-y|). \label{forcont2} \end{eqnarray} Fix $a= \max\{1, a_1\}$. Clearly $\m_{\p,t}^2w \geq 0$ and $\m_{\p,t}^2 w \in C([0,1])$.
Therefore $\m_{\p,t}^2 \mathcal{C}_{a} \subset \mathcal{C}_{\lambda_1 a}$. Thus by Proposition \ref{contract prop}, (\ref{contract}) holds. Moreover since $a>a_0$, $h_{\p,t} \in \c$ and so (\ref{fp}) holds. \end{proof} Next we obtain a uniform upper bound on the operator norm of $\m_{\p,t}$ when restricted to the cone $\c$. \begin{lma} There exists $A>0$ such that for all $f \in \c$ and all $\p$, $t$, $$\norm{\m_{\p,t}^{2n}f}_{0,1} \leq A\norm{f}_{0,1}.$$ \label{op norm} \end{lma} \begin{proof} Firstly, we can immediately see that $\norm{\m_{\p,t}^kf}_{\infty} \leq \norm{f}_{\infty}$ for all $k \in \mathbb{N}$. Next, since $f \in \c$, by Lemma \ref{fixed pt} it follows that $\m_{\p,t}^{2n} f \in \c$ as well and therefore setting $F=\m_{\p,t}^{2n} f$, $$-(e^{a|x-y|}-1)F(x) \leq F(x)-F(y) \leq (e^{a|x-y|}-1)F(y)$$ for all $x, y \in [0,1]$ which implies that $$|F(x)-F(y)| \leq ae^a \norm{F}_{\infty} |x-y|$$ that is, $F$ is Lipschitz with Lipschitz constant $[F]_1 \leq ae^a \norm{F}_{\infty}$. Thus $[\m_{\p,t}^{2n}f]_1 \leq ae^a\norm{\m_{\p,t}^{2n}f}_{\infty} \leq ae^a\norm{f}_{\infty}$, so the lemma holds with $A=1+ae^{a}$. \end{proof} Now, using Lemmas \ref{fixed pt} and \ref{op norm}, we can apply Proposition \ref{transfer} to the operator $\m_{\p,t}^2$ to deduce that (\ref{c3 thing}) holds. \begin{lma} There exist constants $c_4>0$ and $0<\rho<1$ such that for all $\p$, $t$ and $f \in \lip$ with $\mu_{\p,t}(f)=0$, $$\norm{\m_{\p,t}^nf}_{0,1} \leq c_4\rho^n \norm{f}_{0,1}.$$ \label{decay} \end{lma} \begin{proof} Let $f \in \lip$ for which $\mu_{\p,t}(f)=0$. If $f$ is constant then $f=0$, since its integral is 0, and the result follows trivially. If $f$ is not constant, $\norm{f}_{0,1} >0$. Let $f_1$ and $f_2$ be the positive and negative parts of $f$ respectively, so that $f=f_1-f_2$ with $f_1, f_2 \geq 0$. We can guarantee that they belong to a cone by adding a constant.
In particular, $f_i+ \norm{f}_{0,1} \in \mathcal{C}_1$ for each $i$ since \begin{eqnarray*} \frac{f_i(x)+ \norm{f}_{0,1}}{f_i(y)+ \norm{f}_{0,1}} &=& \exp \left(\log \left( \frac{f_i(x)+ \norm{f}_{0,1}}{f_i(y)+ \norm{f}_{0,1}}\right)\right) \\ &=& \exp \left(\log \left( \frac{f_i(x)-f_i(y)}{f_i(y)+ \norm{f}_{0,1}}+1\right)\right) \\ &\leq& \exp \left(\log \left( \frac{\norm{f}_{0,1}|x-y|}{f_i(y)+ \norm{f}_{0,1}}+1\right)\right) \\ &\leq& \exp \left( \frac{\norm{f}_{0,1}|x-y|}{f_i(y)+ \norm{f}_{0,1}}\right) \\ &\leq& \exp \left( \frac{\norm{f}_{0,1}|x-y|}{\norm{f}_{0,1}}\right) \\ &=& \exp (|x-y|) \end{eqnarray*} where the fourth line follows because $\log(1+z) \leq z$ for any $z>-1$. Denote $\eta= \norm{f}_{0,1}$. Then $f_i + \eta \in \c$. Then since $\mu_{\p,t}(f_1)= \mu_{\p,t}(f_2)$ we have \begin{eqnarray*} \norm{\m_{\p,t}^{2n}f}_{0,1} &=& \norm{\m_{\p,t}^{2n}(f_1+\eta)-\m_{\p,t}^{2n}(f_2+\eta)}_{0,1} \\ &\leq& \norm{\m_{\p,t}^{2n}(f_1+\eta)-\mu_{\p,t} (f_1+\eta)}_{0,1} +\norm{\m_{\p,t}^{2n}(f_2+\eta)-\mu_{\p,t}(f_2+\eta)}_{0,1}. \end{eqnarray*} Denoting $L^1=L^1(\mu_{\p,t})$, we can apply Proposition \ref{transfer} for $\norm{\cdot}_1=\norm{\cdot}_{L^1}$ and $\norm{\cdot}_2=\norm{\cdot}_{0,1}$ to obtain \begin{eqnarray*} \norm{\m_{\p,t}^{2n}f}_{0,1} &\leq& (1+a)^2(\exp(\Theta(\m_{\p,t}^{2n}( f_1+\eta), \mu_{\p,t}( f_1+\eta)\one))-1)\norm{\m_{\p,t}^{2n} (f_1+\eta)}_{0,1} \\ & & + (1+a)^2(\exp(\Theta(\m_{\p,t}^{2n} (f_2+\eta), \mu_{\p,t}(f_2+\eta)\one))-1)\norm{\m_{\p,t}^{2n} (f_2+\eta)}_{0,1} . 
\end{eqnarray*} Next we can apply (\ref{contract}) to get \begin{eqnarray*} \norm{\m_{\p,t}^{2n}f}_{0,1} &\leq& (1+a)^2(\exp(r^n\Theta(f_1+\eta, \mu_{\p,t}(f_1+\eta)\one))-1)\norm{\m_{\p,t}^{2n} (f_1+\eta)}_{0,1} \\ & & +(1+a)^2(\exp(r^n\Theta( f_2+\eta, \mu_{\p,t}(f_2+\eta)\one))-1)\norm{\m_{\p,t}^{2n} (f_2+\eta)}_{0,1}\\ &\leq& (1+a)^2(\exp(r^nD)-1)A(\norm{f_1+\eta}_{0,1}+\norm{f_2+\eta}_{0,1}) \\ &\leq& (1+a)^2ADr^n\exp(Dr^n) (\norm{f_1}_{0,1}+\norm{f_2}_{0,1}+2\eta)\\ &\leq& 4(1+a)^2ADe^Dr^n\norm{f}_{0,1} \end{eqnarray*} where $A$ is the uniform constant from Lemma \ref{op norm}. \end{proof} Before we can use the above result to prove Lemma \ref{lemma r}, we need uniform bounds on $\norm{\m_{\p,t}f_{\p,t}}_{0,1}$ and $\norm{\m_{\p,t}^2f_{\p,t}}_{0,1}$ for all $\p$ that satisfy Hypothesis \ref{hyp} and $t \in I$. Observe that for $t \in I$ we have both $|\beta_{\p}^{\prime}(t)| \leq 8$ and $|p_n^t \log p_n| \leq 8$. The first bound holds since, by convexity of $\beta_{\p}$ together with $\beta_{\p}(0)=1$ and $\beta_{\p} \geq 0$, $$|\beta_{\p}^{\prime}(t)| \leq \frac{\beta_{\p}(0)-\beta_{\p}(t)}{t} \leq \frac{1}{t} \leq 8.$$ To see that the second bound holds, define $\varphi_t(x)= x^t\log x$ for $x \in [0,1]$. Differentiating with respect to $x$ we obtain $$\frac{\textup{d}}{\textup{d}x} (\varphi_t(x))= tx^{t-1}\log x+x^{t-1}= x^{t-1}(t\log x+1).$$ Clearly the only turning point in $(0,1)$ is $x= e^{-\frac{1}{t}}$ and since $\varphi_t(0)=\varphi_t(1)=0$ this is a minimum for $\varphi_t$, that is, a maximum for $|x^t\log x|$. Moreover, for $t \geq \delta>0$, $$\varphi_t(e^{-\frac{1}{t}})= e^{-1}\log e^{-\frac{1}{t}}=-\frac{1}{t}e^{-1} \geq -\frac{1}{\delta}e^{-1}= \varphi_{\delta}(e^{-\frac{1}{\delta}}).$$ Therefore, $$|x^t\log x| \leq |\varphi_t(e^{-\frac{1}{t}})| \leq |\varphi_{\delta}(e^{-\frac{1}{\delta}})| = \frac{1}{\delta}e^{-1} \leq \frac{1}{\delta}.$$ The claim follows because $I= [\frac{1}{8}, \frac{1}{4}]$. \begin{lma} There exists a constant $c_5>0$ such that for all $\p$ that satisfy Hypothesis \ref{hyp}, all $t \in I$ and $k \in \{1,2\}$ \begin{eqnarray*} \norm{\m_{\p,t}^kf_{\p,t}}_{0,1} \leq c_5.
\end{eqnarray*} \label{e1e2} \end{lma} \begin{proof} Observe that $$\m_{\p,t}f_{\p,t}(x)= h_{\p,t}^{-1}(x) \sum_{n \in \mathbb{N}} \frac{-p_n^t \beta_{\p}^{\prime}(t)\log|T^{\prime}(T_n^{-1}(x))| +p_n^t\log p_n}{|T^{\prime}(T_n^{-1}x)|^{\beta_{\p}(t)}} h_{\p,t}(T_n^{-1}x).$$ By (\ref{fp}) and Proposition \ref{bd}, there exists a uniform constant $C$ which is independent of $\p$, $t$ and $x$ such that $$|\m_{\p,t}f_{\p,t}(x)| \leq C\sum_{n \in \mathbb{N}} \frac{\log n}{n^{2\beta_{\p}(t)}}.$$ Thus we get a uniform upper bound for $\norm{\m_{\p,t}f_{\p,t}}_{\infty}$ and $\norm{\m_{\p,t}^2f_{\p,t}}_{\infty}$ by recalling that $\norm{\m_{\p,t}^2f_{\p,t}}_{\infty}\leq \norm{\m_{\p,t}f_{\p,t}}_{\infty}$ and because $\beta_{\p}(t) \geq \frac{9}{16}$ by (\ref{916}). To obtain the desired bound on $[\m_{\p,t}f_{\p,t}]_1$, we use the fact that $T_n^{-1}x=(x+n)^{-1}$ and $|T^{\prime}(T_n^{-1}x)|=(x+n)^{2}$ to write $\m_{\p,t}f_{\p,t}$ in the form $$\m_{\p,t}f_{\p,t}(x)= \sum_{n \in \mathbb{N}} h_n(x) u_n(x)$$ where $h_n(x)= h_{\p,t}^{-1}(x)h_{\p,t}((x+n)^{-1})$ and $u_n$ is given by $$u_n(x)= \frac{-2p_n^t\beta_{\p}^{\prime}(t)\log(x+n)+p_n^t\log p_n}{(x+n)^{2\beta_{\p}(t)}}.$$ Observe that since $h_{\p,t} \in \c$, we can use the same arguments as in the proof of Lemma \ref{op norm} to deduce that $|h_{\p,t}(x)-h_{\p,t}(y)| \leq ae^a\norm{h_{\p,t}}_{\infty}|x-y|$. Therefore using (\ref{fp}) and the inequality $$|h_n(x)-h_n(y)| \leq \norm{h_{\p,t}^{-1}}_{\infty} |h_{\p,t}((x+n)^{-1})-h_{\p,t}((y+n)^{-1})|+ \norm{h_{\p,t}}_{\infty}|h_{\p,t}^{-1}(x)-h_{\p,t}^{-1}(y)|$$ we obtain \begin{eqnarray*} |h_n(x)-h_n(y)| &\leq& ae^a\norm{h_{\p,t}}_{\infty}\norm{h_{\p,t}^{-1}}_{\infty}|x-y| +ae^a\norm{h_{\p,t}}_{\infty}^2\norm{h^{-1}_{\p,t}}_{\infty}^2|x-y|\\ &\leq& 2ae^{3a}|x-y|.
\end{eqnarray*} Using this, (\ref{fp}) and the inequality $$|h_n(x)u_n(x)-h_n(y)u_n(y)| \leq \norm{h_n}_{\infty} |u_n(x)-u_n(y)|+ \norm{u_n}_{\infty}|h_n(x)-h_n(y)|,$$ we obtain \begin{eqnarray} |\m_{\p,t}f_{\p,t}(x)-\m_{\p,t}f_{\p,t}(y)| \leq \sum_{n \in \mathbb{N}} e^a|u_n(x)-u_n(y)|+\sum_{n \in \mathbb{N}} 2ae^{3a}\norm{u_n}_{\infty}|x-y|. \label{new} \end{eqnarray} There exists a uniform constant $C$ which is independent of $\p$, $t$ and $x$ such that $$u_n(x) \leq C \frac{\log n}{n^{2\beta_{\p}(t)}}$$ therefore the second sum in (\ref{new}) is uniformly bounded for all $\p$ and $t$. To verify that the first sum is bounded by a constant multiple of $|x-y|$, observe that $$\frac{\textup{d}}{\textup{d}x} u_n(x)= \frac{2p_n^t(\beta_{\p}^{\prime}(t)-2\beta_{\p}^{\prime}(t)\beta_{\p}(t) \log(x+n)-\beta_{\p}(t) \log p_n)}{(x+n)^{2\beta_{\p}(t)+1}}.$$ As before, there exists some uniform constant $C$ which is independent of $\p$, $t$ and $x$ such that $$\frac{\textup{d}}{\textup{d}x} u_n(x) \leq C \frac{\log n}{n^{2\beta_{\p}(t)+1}}$$ from which it follows that the first sum is also uniformly bounded by some constant multiple of $|x-y|$ which is independent of $\p$ and $t$. We can bound $|\m^2_{\p,t}f_{\p,t}(x)-\m^2_{\p,t}f_{\p,t}(y)|$ similarly, thus the result follows. \end{proof} We are now in a position to prove Lemma \ref{lemma r}. \vspace{5mm} \noindent \emph{Proof of Lemma \ref{lemma r}.} For all $\p$ that satisfy Hypothesis \ref{hyp} and $t \in I$ and $n \geq 1$, \begin{eqnarray*} [\m_{\p,t}^{2n} f_{\p,t}]_{1} &\leq& \norm{\m_{\p,t}^{2n} f_{\p,t}}_{0,1} \\ &\leq& c_4 \rho^{n-1} \norm{\m^2_{\p,t}f_{\p,t}}_{0,1} \\ &\leq& c_4c_5 \rho^{n-1} \end{eqnarray*} where the penultimate inequality follows by Lemma \ref{decay} and the last inequality follows by Lemma \ref{e1e2}. We can obtain an analogous upper bound for $[\m_{\p,t}^{2n+1} f_{\p,t}]_1$ for $n \geq 1$. 
Therefore, $[U_{\p,t}]_{1}$ is uniformly bounded in $\p$ and $t$, which in turn implies that $[U_{\p,t}]_{\alpha}$ is uniformly bounded in $\p$ and $t$. Since $$[f_{\p,t}]_{\alpha}=[\beta_{\p}^{\prime}(t)\log|T^{\prime}|]_{\alpha} \leq \left[\beta_{\p}^{\prime}\left(\frac{1}{8}\right)\log|T^{\prime}|\right]_{\alpha} \leq 8[\log|T^{\prime}|]_{\alpha}$$ the result follows. \qed \subsection{Proof of Lemma \ref{measure lemma}} In this section we investigate the Gibbs properties of $\mu_{\p,t}$ and thus prove Lemma \ref{measure lemma}. Consequently this also allows us to deduce that $\int \log|T^{\prime}|d\mu_{\p,t}$, which appears in the expression for $\beta_{\p}^{\prime\prime}(t)$ in (\ref{beta2}), is uniformly bounded above for any $\p$ that satisfies Hypothesis \ref{hyp} and all $t \in I$. By definition, $\mu_{\p,t}$ is a Gibbs measure for the potential $g_{\p,t}$ and therefore we know that for each $\p$ and $t$ there exists a constant $0<C_{\p,t}< \infty$ such that for all $n \in \mathbb{N}$ and all $i_1 \ldots i_n \in \Sigma^{\ast}$, \begin{equation} C_{\p,t}^{-1} \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}} \leq \mu_{\p,t}(\I_{i_1\ldots i_n}) \leq C_{\p,t} \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}}. \label{gibbs pt} \end{equation} We'll prove that in fact we can choose a uniform constant $c_3$ such that (\ref{gibbs pt}) becomes \begin{equation} c_3^{-1} \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}} \leq \mu_{\p,t}(\I_{i_1\ldots i_n}) \leq c_3 \frac{(p_{i_1}\cdots p_{i_n})^t}{|T^{\prime}(z) \cdots T^{\prime}(T^{n-1}z)|^{\beta_{\p}(t)}} \label{gibbs pt2} \end{equation} uniformly for all $\p$ and $t$. \vspace{5mm} \noindent \emph{Proof of Lemma \ref{measure lemma}.} Recall that $\tilde{g}_{\p,t}=g_{\p,t}+h_{\p,t}-h_{\p,t} \circ T$ (see Proposition \ref{rpf2}). 
Since $f_{\p}$ is locally constant, $$[g_{\p,t}]_{\alpha}=[-\beta_{\p}(t)\log|T^{\prime}|]_{\alpha} \leq [\log|T^{\prime}|]_{\alpha}$$ and therefore $[g_{\p,t}]_{\alpha}$ can be bounded above by a constant which is independent of $\p$ and $t$. Also if $x, y \in \I_{i_1\ldots i_n}$ then by Lemma \ref{fixed pt} $$|\log h_{\p,t}(x)-\log h_{\p,t}(y)|= \left|\log \frac{ h_{\p,t}(x)}{ h_{\p,t}(y)}\right| \leq a|x-y| $$ and therefore $[h_{\p,t}]_{\alpha}$ can be bounded above by a constant which is independent of $\p$ and $t$. Therefore there exists $\tau>0$ such that $[\tilde{g}_{\p,t}]_{\alpha} \leq \tau$ for all $\p$ and $t$. Now we can apply arguments similar to \cite{bowen}. Let $n \in \mathbb{N}$ and any $i_1\ldots i_n \in \mathbb{N}^n$. Then \begin{eqnarray*} \mu_{\p,t}(\I_{i_2,\ldots,i_n}) &=& \int \one_{\I_{i_2,\ldots,i_n}}(x) \textup{d}\mu_{\p,t}(x) \\ &=& \int \sum_{Ty=x} \one_{\I_{i_1, i_2,\ldots,i_n}}(y) \textup{d}\mu_{\p,t}(x) \\ &=& \int \sum_{Ty=x} \exp(\tilde{g}_{\p,t}(y))\one_{\I_{i_1, \ldots, i_n}}(y)\exp(-\tilde{g}_{\p,t}(y))\textup{d}\mu_{\p,t}(x) \\ &=& \int \m_{\p,t}(\one_{\I_{i_1, \ldots, i_n}}(x)\exp(-\tilde{g}_{\p,t}(x)))\textup{d}\mu_{\p,t}(x) \\ &=& \int_{\I_{i_1, \ldots, i_n}} \exp(-\tilde{g}_{\p,t}(x))\textup{d}\mu_{\p,t}(x) \end{eqnarray*} where the final line follows because $\m_{\p,t}^{\ast} \mu_{\p,t}=\mu_{\p,t}$. Let $z \in \I_{i_1\ldots i_n}$. 
Then $$\mu_{\p,t}(\I_{i_2, \ldots, i_n})\exp(\tilde{g}_{\p,t}(z)) \leq \exp(\alpha^n[\tilde{g}_{\p,t}]_{\alpha}) \mu_{\p,t}(\I_{i_1, \ldots, i_n})$$ so that $$\frac{\mu_{\p,t}(\I_{i_1, \ldots, i_n})}{\mu_{\p,t}(\I_{i_2, \ldots, i_n})} \exp(-\tilde{g}_{\p,t}(z)) \geq \exp(-\alpha^n[\tilde{g}_{\p,t}]_{\alpha}).$$ Moreover, we can proceed to obtain the following sequence of inequalities \begin{eqnarray*} \frac{\mu_{\p,t}(\I_{i_2, \ldots, i_n})}{\mu_{\p,t}(\I_{i_3, \ldots, i_n})} \exp(-\tilde{g}_{\p,t}(Tz)) &\geq& \exp(-\alpha^{n-1}[\tilde{g}_{\p,t}]_{\alpha})\\ \vdots \\ \mu_{\p,t}(\I_{i_n}) \exp(-\tilde{g}_{\p,t}(T^{n-1}z)) &\geq& \exp(-\alpha[\tilde{g}_{\p,t}]_{\alpha}). \end{eqnarray*} Multiplying these all together we obtain \begin{eqnarray} \frac{\mu_{\p,t}(\I_{i_1, \ldots, i_n})}{\exp(S_n\tilde{g}_{\p,t}(z))} &\geq& \exp\left(-\frac{[\tilde{g}_{\p,t}]_{\alpha}}{1-\alpha}\right). \label{gibbs tilde} \end{eqnarray} Now, \begin{eqnarray*} S_n( \log h_{\p,t}-\log h_{\p,t} \circ T)(z)&=& \log h_{\p,t}(z)-\log h_{\p,t}(Tz) \\ & & + \log h_{\p,t}(Tz)-\log h_{\p,t}(T^2z) \\ & & \vdots \\ & & + \log h_{\p,t}(T^{n-1}z)-\log h_{\p,t}(T^nz) \\ &=& \log \frac{h_{\p,t}(z)}{h_{\p,t}(T^n z)} \geq -a. \end{eqnarray*} Plugging this into (\ref{gibbs tilde}) we obtain \begin{eqnarray} \frac{\mu_{\p,t}(\I_{i_1, \ldots, i_n})}{\exp(S_ng_{\p,t}(z))} &\geq& \exp\left(-\frac{[\tilde{g}_{\p,t}]_{\alpha}}{1-\alpha}-a\right) \geq \exp\left(-\frac{\tau}{1-\alpha}-a\right). \label{gibbs gpt} \end{eqnarray} By rearranging this inequality and expanding the ergodic sum we obtain the desired lower bound. The upper bound follows by an analogous argument. \qed We can now deduce that $\int \log|T^{\prime}|d\mu_{\p,t}$ is uniformly bounded above for all $\p$ and $t$. 
\begin{lma} There exists a uniform constant $L$ such that for all $\p$ that satisfy Hypothesis \ref{hyp} and $t\in I$, $$\int \log|T^{\prime}|d\mu_{\p,t} \leq L.$$ \label{e2 proof} \end{lma} \begin{proof} By Lemma \ref{measure lemma}, $\mu_{\p,t}(\I_n) \leq c_3 \frac{p_n^t}{|T^{\prime}(x)|^{\beta_{\p}(t)}}$ for any $\p$ which satisfies Hypothesis \ref{hyp}, any $t \in I$, any $n \in \mathbb{N}$ and $x \in \I_n$. Therefore for all $\p$ and $t$ we have \begin{eqnarray*} \int \log|T^{\prime}|d\mu_{\p,t}& \leq& c_3\sum_{n \in \mathbb{N}} \sup_{x \in \I_n}\frac{\log|T^{\prime}(x)|}{|T^{\prime}(x)|^{\beta_{\p}(t)}}\leq c_3 \sum_{n \in \mathbb{N}}\frac{2\log (n+1)}{n^{2\beta_{\p}(t)}}. \end{eqnarray*} Since by (\ref{916}) we know $\beta_{\p}(t) \geq \frac{9}{16}$, the result follows. \end{proof} \subsection{Proof of Lemma \ref{top bound}} Suppose $\p$ satisfies $p_1, p_2 \geq \epsilon$. By Lemma \ref{lemma periodic} there exists $z \in \{z_{1}, z_{2}, z_{12}\}$ such that $\frac{1}{2}|S_2 \tilde{f}_{\p,t}(z)| \geq c_1$. Let $z=\Pi(\i)$. Fix $m$ sufficiently large that $\alpha^m \leq \frac{c_1}{2c_2}$. By Lemma \ref{measure lemma} and the fact that $\inf_{x \in \I_1 \cup \I_2} \frac{1}{|T^{\prime}(x)|}=\frac{1}{9}$, $$\mu_{\p,t}(\I_{i_1 \ldots i_m}) \geq c_3^{-1} \frac{\epsilon^{\frac{m}{4}}}{9^m}.$$ By (\ref{beta2}), (\ref{strategy}), Corollary \ref{rewrite2} and Lemma \ref{e1e2} \begin{eqnarray} \beta_{\p}^{\prime\prime}(t) \geq \frac{c_1^2 \epsilon^{\frac{m}{4}}}{4 \cdot 9^mc_3L}. \label{gamma} \end{eqnarray} Similarly, if instead $\p$ satisfies $p_1> \psi$, by Lemma \ref{lemma periodic} it follows that $f_{\p,t}(z_1) \geq c_1$. Let $z_1=\Pi(\i)$. Again by Lemma \ref{measure lemma}, $$\mu_{\p,t}(\I_{i_1 \ldots i_m}) \geq c_3^{-1} \frac{\psi^{\frac{m}{4}}}{9^m} \geq c_3^{-1}\frac{\epsilon^{\frac{m}{4}}}{9^m}$$ and therefore (\ref{gamma}) also holds. Set $\gamma_{\epsilon}= \frac{\epsilon^{\frac{m}{4}}}{9^m}$.
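The constant $r$ appearing in the next display can be tracked explicitly; the following is a sketch, using only (\ref{gamma}) together with the convexity of $\beta_{\p}$ (that is, $\beta_{\p}^{\prime\prime} \geq 0$ on $[0,1]$, which follows from the variance form of $\beta_{\p}^{\prime\prime}$ in (\ref{beta2})). For $t \in [0,\frac{1}{8}]$, $$\beta_{\p}^{\prime}(t)= \beta_{\p}^{\prime}(1)-\int_t^1 \beta_{\p}^{\prime\prime}(s) \textup{d}s \leq \beta_{\p}^{\prime}(1)-\int_{\frac{1}{8}}^{\frac{1}{4}} \beta_{\p}^{\prime\prime}(s) \textup{d}s \leq \beta_{\p}^{\prime}(1)- \frac{c_1^2}{32c_3L}\gamma_{\epsilon},$$ while $\beta_{\p}^{\prime}(t) \leq \beta_{\p}^{\prime}(1)$ for every $t \in [0,1]$. Integrating these two bounds over $[0,\frac{1}{8}]$ and $[\frac{1}{8},1]$ respectively shows that one may take $r= \frac{c_1^2}{256c_3L}$.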
Then \begin{eqnarray*} -1=\beta_{\p}(1)-\beta_{\p}(0)= \int_0^1 \beta_{\p}^{\prime}(t) \textup{d}t \leq \beta_{\p}^{\prime}(1)- r \gamma_{\epsilon} \end{eqnarray*} for some constant $r$. The result follows from the fact that $-\beta_{\p}^{\prime}(1)=\dim \mu_{\p}$. \qed \section{Proof of Theorem \ref{main}} By Lemma \ref{infinite thm} we can restrict to the case where $h(\mu_{\p})< \infty$. The proof then follows from Lemmas \ref{tail lemma} and \ref{top bound}. Fix $\epsilon=\epsilon_0>0$ that satisfies Lemma \ref{tail lemma}. If $\p$ does not satisfy Hypothesis \ref{hyp} then either $\dim \mu_{\p} \leq \frac{3}{4}$ or $$\dim \mu_{\p} \leq s+\frac{\kappa(s)}{\lambda_0}<1$$ by Lemma \ref{tail lemma}. Otherwise, by Lemma \ref{top bound} $$\dim \mu_{\p} \leq 1-r\gamma_{\epsilon_0}$$ which proves the existence of a dimension gap. \section{Generalisations} The method used in this paper can be generalised to prove the existence of (and bounds on) a dimension gap for more general countable branch expanding maps under a suitable `non-linearity' assumption on the map. In particular, let $\{\mathcal{I}_n\}_{n \in \mathbb{N}}$ be a countable collection of open non-empty disjoint subintervals of $[0,1]$ such that $(0,1) \subset \bigcup_{n \in \mathbb{N}} \overline{\I_n}$ and let $T_n: \overline{\I}_n \to [0,1]$ be a sequence of expanding bijective $C^2$ maps (so $|T_n^{\prime}|>1$). Define $T: [0,1] \to [0,1]$ as \begin{eqnarray*} \begin{array}{ccccc} T(x)&=& T_n(x) & \textnormal{if}& x \in \overline{\I}_n \\ T(0)&=&0 & & \end{array} \end{eqnarray*} where we put $T(x)=T_k(x)$ for $k= \min\{n: x \in \overline{\mathcal{I}}_n\}$ if $x$ is a common endpoint of two intervals. Similarly, we adopt the convention that $T^{\prime}(x)= T_k^{\prime}(x)$ where $k=\min\{n: x \in \overline{\mathcal{I}}_n\}$. Let $T:[0,1] \to [0,1]$ be a countable branch expanding (Markov) map as described above.
Additionally assume that $T$ satisfies the following conditions: \begin{enumerate} \item \textbf{Some iterate of $T$ is uniformly expanding.} There exist $l \in \mathbb{N}$ and $\Lambda>1$ for which $$|(T^l)^{\prime}(x)| \geq \Lambda >1$$ for all $x \in [0,1]$. \item \textbf{R\'enyi condition.} There exists $\kappa<\infty$ such that \begin{eqnarray} \sup_{n \in \mathbb{N}} \sup_{x, y, z \in \I_n} \left| \frac{T^{\prime\prime}(x)}{T^{\prime}(y)T^{\prime}(z)} \right| = \kappa < \infty. \label{renyi} \end{eqnarray} \item \textbf{Fast decaying interval lengths.} There exists $s<1$ such that $$\sum_{n \in \mathbb{N}} |\I_n|^s < \infty.$$ \item \textbf{Non-linearity assumption.} \begin{eqnarray} T^{\prime}(z_1)T^{\prime}(z_2) \neq T^{\prime}(z_{12})T^{\prime}(z_{21}). \label{nonlin} \end{eqnarray} \end{enumerate} Then there exists some $\eta>0$ for which $$\sup_{\p \in \mathcal{P}} \dim \mu_{\p} \leq 1-\eta$$ and $\eta$ depends on $\Lambda, l, \kappa, s$ and a `non-linearity' constant $\theta$ which is given by $$\theta=\left|\log \frac{T^{\prime}(z_1)T^{\prime}(z_2)}{T^{\prime}(z_{12})T^{\prime}(z_{21})}\right| \neq 0. $$ We now make some remarks about assumptions (1)--(4). Firstly, (2) guarantees that $-\log|T^{\prime}|$ is locally H\"older, which we saw was crucial for the proof. This in turn allows one to show that an analogue of Proposition \ref{bd} holds for $T$, which is also utilised at many points throughout the proof. In fact, the reason why our method yields a particularly poor estimate on the dimension gap when $T$ is the Gauss map is precisely because the constant $\kappa$ in (\ref{renyi}) is given by $\kappa=16$, which ends up appearing in several exponents throughout the proof. Next, we note that (3) is a sharp condition. To see this, suppose there does not exist $s<1$ for which $ \sum_{n \in \mathbb{N}} |\mathcal{I}_n|^s < \infty.$ Let $0<t<1$ be arbitrary.
By assumption $\sum_{n=1}^{\infty} |\I_n|^t= \infty.$ Thus, we can choose some large $N$ for which $\sum_{n=N}^{k} |\I_n|^t \geq 1$ for some $k>N$. Fix $\mathbf{p}_N= (p_1, p_2, \ldots)$ where \begin{eqnarray*} p_n= \left\{ \begin{array}{cccc} 0 & n < N &\textnormal{or}& n > k \\ c|\I_n|^t & N \leq n \leq k & & \\ \end{array} \right. \end{eqnarray*} where $c$ is a normalising constant so that $\sum_{n=N}^{k}c|\I_n|^t=1$. Consider the Bernoulli measure $\mu_{\p_N}$. Since $h(\mu_{\p_N}) < \infty$ it follows that $\dim \mu_{\p_N}= \frac{h(\mu_{\p_N})}{\chi(\mu_{\p_N})}$. Applying the analogue of Proposition \ref{bd} it follows that there exists a uniform constant $C>0$ for which $\log |T^{\prime}(x)| \leq -\log|\I_n|+ C$ for all $n \in \mathbb{N}$ and all $x \in \I_n$. Therefore \begin{eqnarray*} \dim \mu_{\p_N} &\geq& \frac{- \sum_{n=N}^{k} c|\I_n|^t \log c|\I_n|^t}{-\sum_{n=N}^{k} c|\I_n|^t(\log|\I_n|-C)} \\ &=& \frac{-t \sum_{n=N}^{k} (c|\I_n|^t \log |\I_n|)- \log c}{-\sum_{n=N}^{k}( c|\I_n|^t\log|\I_n|)+ C}. \end{eqnarray*} Since $N$ can be chosen arbitrarily large to make $-\sum_{n=N}^{k}( c|\I_n|^t\log|\I_n|)$ arbitrarily large, we deduce that $\dim \mu_{\p_N} \to t$ as $N \to \infty$. Therefore, for all $0<t<1$ we can choose a Bernoulli measure with dimension greater than $t$, proving that a dimension gap does not exist. Finally, (4) describes the fact that $T$ sees some non-linearity on one of the first two branches. This is precisely the property that was used in Lemma \ref{periodic} and would be sufficient (though not necessary) to prove an analogue of Lemma \ref{periodic} for a more general map $T$. \vspace{0.5cm} \noindent \textbf{Acknowledgements.} This paper is part of the author's PhD thesis conducted at the University of Warwick. The author is extremely grateful to her supervisor Mark Pollicott for suggesting the problem which is studied in this paper and for many useful discussions.
This paper was written while the author was supported by a \emph{Leverhulme Trust Research Project Grant} (RF-2016-194). The author would also like to thank Marc Kesseb\"ohmer, Thomas Jordan and Simon Baker for helpful conversations.
\section{Introduction} \label{introduction} With the advent of globalization affecting the design and manufacturing process of integrated circuits (ICs), hardware security has emerged as a critical concern. Exposure to various adversaries, who may reverse engineer (RE) ICs, counterfeit them, steal their intellectual property (IP), inject hardware Trojans, or leak and extract sensitive data at runtime, has escalated~\cite{rostami14}. Next, we briefly review IP protection schemes and attacks in general. \textbf{IC camouflaging} seeks to mitigate RE attacks, wherein the layout-level appearance of the IC is altered such that it becomes intractable to decipher its underlying functionality and IP. For CMOS integration, various techniques have been proposed, e.g., look-alike gates~\cite{rajendran13_camouflage}, threshold-dependent camouflaging~\cite{nirmala16, erbagci16}, and obfuscated interconnects~\cite{patnaik17_Camo_BEOL_ICCAD}. \textbf{Logic locking/encryption} obfuscates the IP functionality rather than the device-level layout~\cite{yasin16_SARLock, xie16_SAT}. The so-called key gates are carefully tailored into the IP/chip, where only the correct key can ``unlock'' the original functionality. \textbf{Analytical attacks} targeting camouflaged (or locked) ICs were initially introduced in~\cite{subramanyan15, massad15}. These attacks are based on Boolean satisfiability (SAT) and the fact that a small set of discriminating input patterns (DIPs) may suffice to infer the camouflaged functionality (or locking key). Several SAT-attack resilient techniques were recently proposed~\cite{yasin16_SARLock, xie16_SAT, li16_camouflaging}; however, most of these techniques are still vulnerable to advanced analytical attacks such as~\cite{shamsi17, shen17, bypass-attack2017}.
\textbf{Physical attacks} range from non-invasive (e.g., power side-channel attacks) and semi-invasive (e.g., localized fault-injection attacks) to invasive attacks (e.g., RE, microprobing the frontside/backside)~\cite{Wang17probing}. Such attacks are also promising for extracting sensitive data at runtime, even from secured chips, e.g.,~\cite{skorobogatov12,courbon16}. \textbf{Emerging devices}, e.g., nanowire transistors and carbon- or spin-based devices, may offer lower power dissipation and higher integration density compared to their CMOS counterparts~\cite{nikonov2013overview}. Additionally, emerging devices can augment the CMOS technology to improve hardware security~\cite{ghosh2016spintronics,bi16_JETC, parveen2017hybrid}. The most promising aspect of many emerging devices is \emph{polymorphism}: a polymorphic gate can readily implement different Boolean functions at runtime, where the functionality is determined by an internal/external control mechanism~\cite{ parveen2017hybrid}. It is important to note that polymorphic gates can inherently support both camouflaging and locking for the following two reasons. First, owing to their uniform device-level layout, the actual function of a polymorphic gate is hard to determine, particularly when optical-imaging-based RE techniques are used. Second, the actual function is dependent on the control input, which can act as a key input. \textbf{In this work}, we use the giant spin-Hall effect (GSHE) switch, first proposed in~\cite{datta2012non}, to build polymorphic gates for advanced protection. More specifically, we leverage the GSHE switch recently designed and analyzed by Rangarajan \emph{et al.\ }\cite{rangarajan2017energy} in the context of probabilistic computing. We emphasize that the notions of locking and camouflaging are interchangeable in this work due to the polymorphic nature of the proposed primitive, unlike for CMOS-centric approaches.
The contributions of this work can be summarized as follows. \begin{enumerate} \item We leverage a polymorphic, GSHE-based device to propose a versatile security primitive. The primitive provides strong camouflaging capabilities---given two inputs, all 16 possible Boolean functions can be cloaked within a single instance. We elaborate on the device as well as the proposed primitive in detail in Sec.~\ref{device_model}. \vspace{1.25mm} \item We analyze the protection provided by the primitive against attacks such as imaging- and electron-microscopy-based RE, side-channel attacks, and analytical SAT attacks (Sec.~\ref{security}). As for SAT attacks, a comprehensive study is conducted and benchmarked against prior state-of-the-art techniques. Immunity to SAT attacks for probabilistic computing, directly supported by the primitive, is also discussed. \vspace{1.25mm} \item We outline the prospects of hybrid CMOS-GSHE designs for industrial benchmarks. We observe that delay-aware protection can provide strong resilience (against SAT attacks) with negligible layout overheads. \end{enumerate} \section{Background: Prior Art and Limitations} \label{sec:background} In~\cite{zhang2015giant}, the authors implemented a low-power and versatile gate using a GSHE-based magnetic tunnel junction (MTJ) as the basic switching element. However, this device is not explicitly tailored for security; it is unable to support logic locking by itself, as it is not truly polymorphic. More concerning is the limitation to only four possible Boolean functions, which renders this primitive weak against SAT attacks (Sec.~\ref{security}). Alasad~\emph{et al.}~\cite{alasad2017leveraging} use all-spin logic (ASL) to design three different security primitives, supporting three sets of camouflaged functionalities: INV/BUF, XOR/XNOR, and AND/NAND/OR/NOR. 
The layouts of the three primitives are unique; they can be readily distinguished by imaging-based RE tools, which also eases subsequent SAT attacks (Sec.~\ref{security}). Winograd~\emph{et al.}~\cite{winograd2016hybrid} introduced a spin-transfer torque (STT)-based reconfigurable lookup table (LUT), explicitly addressing hardware security. However, their approach falls short in terms of resilience against SAT attacks. (Note that the authors did not report on any SAT attack themselves.) We protect the \emph{s38584} benchmark according to their technique and observe that the protected layout can be decamouflaged in less than 30 seconds on average (over 100 runs of camouflaging and SAT attacks). This weak resilience stems from the limited use of their STT-LUT primitive to curb power, performance, and area (PPA) overheads. As for CMOS-centric camouflaging, most schemes incur a high layout cost. For example, the look-alike NAND-NOR-XOR gate proposed by Rajendran~\emph{et al.}~\cite{rajendran13_camouflage} induces 4$\times$ area, 5.5$\times$ power, and 1.6$\times$ delay (compared to a regular two-input NAND gate), whereas the threshold-dependent full-chip camouflaging as proposed in \cite{erbagci16} still induces overheads of 14\%, 82\%, and 150\% in PPA, respectively. As a result, most schemes are limited to a cost-constrained and selective application, which has severe implications for security (Sec.~\ref{security}). \section{Device-Level Design of Spin-Based Primitive} \label{device_model} Protection schemes based on emerging devices can be competitive, even when compared to regular CMOS. While the GSHE switch leveraged in this work is still in the nascent stage of fabrication~\cite{penumatcha2016impact}, it is nevertheless promising because of its small scale and low power (Section~\ref{properties_GSHEswitch}). As for the relatively large delay, the GSHE-based primitive is still applicable without inducing significant delay overheads (Sec.~\ref{security}).
\subsection{Structure and Operating Principle of the GSHE Switch} The GSHE switch, which is at the heart of the proposed primitive, is shown in Fig.~\ref{GSHE_switch}. Above the heavy metal spin-Hall layer (purple, bottom) are the write (W; red, bottom) and read (R; red, top) nanomagnets (NM). These nanomagnets (W-NM and R-NM) exhibit a negative mutual dipolar coupling. On top of the R-NM sit two fixed ferromagnetic layers (dark green) with anti-parallel magnetization directions. \begin{figure}[tb] \centering \includegraphics[width=.89\textwidth]{GSHE.png} \caption{ Structure of the GSHE switch. The concept is derived from~\cite{rangarajan2017energy}, but here we adopt a stacked integration to maximize the dipolar coupling.} \label{GSHE_switch} \vspace{-2mm} \end{figure} Applying a charge current to the bottom layer (large golden arrow in Fig.~\ref{GSHE_switch}) results in spin accumulation of one polarity (green spin-up spheres) in the transverse direction (pink arrows)~\cite{rangarajan2017energy}. This spin-polarized current then imparts a spin-transfer torque (STT) to the W-NM~\cite{slonczewski1996current}. The STT switches the W-NM from one stable state to the other which, in turn, switches the R-NM in the opposite sense.\footnote{That is because in the presence of negative magnetic dipolar coupling, the minimum energy state is the one in which the W and R nanomagnets are anti-parallel to each other~\cite{datta2012non}. \vspace{-1ex} } Now, the magnetization direction of the R-NM will be parallel to one of the fixed ferromagnets on top and anti-parallel to the other. The parallel path offers a lower resistance for a charge current passing from/to the respective top contact to/from the output terminal. This read-out phase commences once voltages are applied to the top contacts ($V^+$ and $V^-$). 
Depending on the polarity of the voltage applied to the low-resistance path, the output current either flows inward or outward---this represents the binary result of the GSHE switch operation (see Fig.~\ref{GSHE_NAND_NOR}). \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{NAND_NOR_currents.png} \caption{The current-centric truth tables for NAND and NOR functionalities, with inputs A and B (X is a control signal). As is always the case for our GSHE-based primitive, logic 1/0 is represented by an output current +I/-I. \label{GSHE_NAND_NOR} } \vspace{-2mm} \end{figure} \subsection{Characterization and Comparison of the GSHE Switch} \label{properties_GSHEswitch} The conceptual layout of the GSHE switch (Fig.~\ref{fig:Layout_GSHE}) is drawn based on the design rules for beyond-CMOS devices~\cite{nikonov2013overview}, i.e., in units of maximum misalignment length $\lambda$. The area of the GSHE switch is accordingly estimated to be $0.0016 \mu$m$^2$. The material parameters are given in Table~\ref{parameters}. Notably, a spin current ($I_{S}$) of at least 20$\mu$A is required in this work to guarantee a deterministic switching behavior.
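The conductance entries in Table~\ref{parameters} follow from the quoted $RAP$ and $TMR$ values; as a quick numerical cross-check (a sketch; it assumes the relevant junction area is the $28 \times 15$ nm$^2$ nanomagnet footprint):

```python
# Cross-check of the parallel/anti-parallel conductances from RAP and TMR.
A = 28e-9 * 15e-9        # junction area [m^2] (assumed: nanomagnet footprint)
RAP = 1e-12              # resistance-area product, 1 Ohm*um^2, in [Ohm*m^2]
TMR = 1.70               # tunneling magnetoresistance, 170%

G_P = A / RAP            # parallel conductance, G_P = A / RAP
G_AP = G_P / (1 + TMR)   # from G_P / G_AP = 1 + TMR

print(f"G_P = {G_P*1e6:.1f} uS, G_AP = {G_AP*1e6:.1f} uS")
```

which reproduces the tabulated $G_P = 420\>\mu$S and $G_{AP} \approx 155.6\>\mu$S.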
\begin{table}[tb] \scriptsize \renewcommand{\arraystretch}{1.1} \caption{Material Parameters of the GSHE Switch } \vspace{-2mm} \begin{center} \setlength{\tabcolsep}{1.4mm} \begin{tabular}{c|c} \hline {\textbf{Parameter}} & {\textbf{Value}} \\ \hline \hline Volume of nanomagnets (NM) & ($28\times 15\times 2$) nm$^3$~\cite{rangarajan2017energy} \\ \hline \multirow{2}{*}{Saturation magnetization $M_s$ of NM } & $10^6$ A/m (W-NM)~\cite{rangarajan2017energy}\\ & $5\times 10^5$ A/m (R-NM)~\cite{rangarajan2017energy}\\ \hline \multirow{2}{*}{Uniaxial energy density $K_u$ of NM} & $2.5\times 10^4$ J/m$^3$ (W-NM)~\cite{rangarajan2017energy} \\ & $5\times 10^3$ J/m$^3$ (R-NM)~\cite{rangarajan2017energy} \\ \hline Spin current $I_{S}$, determ.\ switching & 20 $\mu$A~\cite{rangarajan2017energy}\\ \hline Resistance area product $RAP$ &$1\>\> \Omega \mu$m$^2$ \cite{maehara2011tunnel}\\ \hline Tunneling magnetoresistance $TMR$ &$170\%$ \cite{maehara2011tunnel}\\ \hline Parallel conductance $G_P$ & $420\>\>\mu$S\\ \hline Anti-parallel conductance $G_{AP}$ & $155.6\>\>\mu$S\\ \hline Resistivity of heavy metal (HM) $\rho$ &$5.6\times 10^{-7} \Omega$--m\\ \hline Spin-Hall angle $\theta_{SH}$ of HM &$0.4$\\ \hline Thickness $t_{HM}$ of HM & $1$ nm\\ \hline Internal gain $\beta$ of HM & $0.4\times(15\;$nm$/1\;$nm$)$\\ $\beta = \theta_{SH}\times(w_{NM}/t_{HM})$ & $=6$ \\ \hline Resistance $r$ of HM & $\approx 1\>\> k\Omega$ \\ \hline \end{tabular} \label{parameters} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[width=.75\textwidth]{GSHE_layout.pdf} \vspace{-2mm} \caption{The conceptual layout of the GSHE switch (main part), and the equivalent circuit (inset, derived from~\cite{datta2012non}). 
The power dissipation of the latter is dictated by the resistance $r$ of the heavy metal as well as the conductances of the anti-parallel, high-resistance path ($G_{AP}$) and the parallel, low-resistance path ($G_P$) formed by the fixed ferromagnets.} \label{fig:Layout_GSHE} \end{figure} The performance of the switch is determined by the nanomagnetic dynamics, which is simulated using the stochastic Landau-Lifshitz-Gilbert-Slonczewski equation~\cite{d2006midpoint}. Three simulated delay distributions are illustrated in Fig.~\ref{fig:delay_profile}. For the propagation delay of the primitive, we subsequently assume a mean delay of 1.55~ns obtained for $I_{S} = 20$~$\mu$A. \begin{figure}[tb] \centering \includegraphics[width=.85\textwidth]{delay_security.pdf} \vspace{-2mm} \caption{Delay distributions for the GSHE switch at various spin currents ($I_{S}$). The distributions are obtained from 100,000 simulations. Although the delays incurred in switching are stochastic, the switching process itself is still deterministic. Note that the spread and mean delay diminish with increasing $I_{S}$, however, at the cost of higher power dissipation.} \label{fig:delay_profile} \end{figure} The power dissipation for the read-out phase is derived according to the equivalent circuit shown in Fig.~\ref{fig:Layout_GSHE} (inset). Using the following equations and the parameters listed in Table~\ref{parameters}, the power dissipation of the GSHE switch (including leakage) is derived as 0.2125 $\mu$W.
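This figure can be reproduced numerically from the same relations and the Table~\ref{parameters} values; a minimal sketch (ours; it assumes $r$ is exactly 1 k$\Omega$, while Table~\ref{parameters} only states $r \approx 1$ k$\Omega$):

```python
# Read-out power of the GSHE switch from the equivalent-circuit relations.
# Assumes r = 1 kOhm exactly (the table lists r ~ 1 kOhm).
I_S = 20e-6          # spin current for deterministic switching [A]
beta = 6.0           # internal gain of the heavy metal
r = 1e3              # heavy-metal resistance [Ohm]
G_P = 420e-6         # parallel conductance [S]
G_AP = 155.6e-6      # anti-parallel conductance [S]

V_OUT = I_S * r / beta
V_SUP = (I_S / beta) * (1 + r * (G_P + G_AP)) / (G_P - G_AP)
P = V_OUT**2 / r + (V_SUP - V_OUT)**2 * G_P + (V_OUT + V_SUP)**2 * G_AP

print(f"P = {P*1e6:.3f} uW")  # ~0.21 uW
```

The small residual with respect to the quoted 0.2125 $\mu$W is within the rounding of $r$.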
\vspace{-2ex} \begin{subequations} \small \begin{equation*} P = \frac{{V_{OUT}}^{2}}{r} + (V_{SUP}-V_{OUT})^{2}G_{P} + (V_{OUT}+V_{SUP})^{2}G_{AP} \end{equation*} \begin{equation*} V_{SUP} = \left|V^{+/-}\right| = \left(\frac{I_{S}}{\beta}\right)\left(\frac{1+r(G_{P}+G_{AP})}{G_{P}-G_{AP}}\right); \; V_{OUT} = \frac{I_{S}\>\> r}{\beta} \end{equation*} \begin{equation*} \frac{G_{P}}{G_{AP}} = 1 + TMR; \; G_{P} = \frac{A(nanomagnets)}{RAP} \end{equation*} \end{subequations} In Table~\ref{tab:devices}, we compare the metrics of the GSHE switch against those of existing devices, including ones that are not necessarily security-oriented. The switch is superior in terms of energy/power but is limited in terms of delay. As for security, the number of possible functions is the relevant metric; here, the GSHE switch significantly outperforms prior art. Moreover, a delay-aware application can provide adequate security without any significant overheads (Sec.~\ref{security}). \begin{table}[tb] \centering \scriptsize \caption{Comparison of Selected Emerging-Device Primitives} \label{tab:devices} \vspace{-2mm} \setlength{\tabcolsep}{1.2mm} \begin{tabular}{c|c|c|c|c} \hline \textbf{Publication} & \textbf{\# Functions} & \textbf{Energy} & \textbf{Power} & \textbf{Delay}\\ \hline \hline ~\cite{bi16_JETC} SiNW & NAND/NOR & 0.05--0.1 fJ & 1.13--1.77 $\mu$W & 42--56 ps \\ \hline ~\cite[a]{alasad2017leveraging} ASL & NAND/NOR/AND/OR & 0.58 pJ & 351.52 $\mu$W & 1.65 ns \\ \hline ~\cite[b]{alasad2017leveraging} ASL & XOR/XNOR & 1.16 pJ & 351.52 $\mu$W & 3.3 ns \\ \hline ~\cite[c]{alasad2017leveraging} ASL & INV/BUF & 0.13 pJ & 342.11 $\mu$W & 0.38 ns \\ \hline ~\cite{huang2016magnetic} DWM & AND/OR & 67.72 fJ & 60.46 $\mu$W & 1.12 ns \\ \hline \multirow{2}{*}{~\cite{parveen2017hybrid} DWM} & NAND/NOR/XOR/XNOR/ & \multirow{2}{*}{N/A} & \multirow{2}{*}{N/A} & \multirow{2}{*}{N/A} \\ & AND/OR/INV & & & \\ \hline ~\cite{zhang2015giant} GSHE & AND/OR/NAND/NOR & N/A & N/A & N/A \\ \hline 
\multirow{2}{*}{~\cite{winograd2016hybrid} STT} & NAND/NOR/XOR/ & \multirow{2}{*}{N/A} & \multirow{2}{*}{N/A} & \multirow{2}{*}{N/A} \\ & XNOR/AND/OR & & & \\ \hline \textbf{This work} & \textbf{All 16} & \textbf{0.33 fJ} & \textbf{0.2125 $\mu$W} & \textbf{1.55 ns} \\ \hline \end{tabular} \end{table} \subsection{Security Primitive: Cloaking of all 16 Boolean Functions} All 16 possible Boolean functions implemented by the proposed primitive are illustrated in Fig.~\ref{fig:GSHE_gates}. To realize NAND/NOR, e.g., three charge currents are fed into the bottom layer of the GSHE switch at once: two currents represent the logic signals A and B, and the third current (X) acts as the ``tie-breaking'' control input (recall Fig.~\ref{GSHE_NAND_NOR}). For the XOR/XNOR functionalities, one signal is provided as input current, whereas the other signal and its inverse are provided as input voltages at the $V^+$ and $V^-$ terminals of the fixed ferromagnets.\footnote{Toward this end, magneto-electric transducers~\cite{manipatruni15} may be placed in the interconnects. Such transducers can be tailored for uniform, indistinguishable layouts, and can be used to convert (i)~charge currents to their reverse (+I to -I, or B to B'), (ii)~voltages to charge currents (high/low voltage to +/-I), and (iii)~charge currents to voltages (+/-I to high/low voltages). } Swapping the voltage polarities switches between the complementary functions. Note that three wires are used for the input terminal for all 16 Boolean gates (recall Fig.~\ref{fig:Layout_GSHE}); this renders the layout of the primitive indistinguishable for optical-imaging-based RE, irrespective of the actual functionality. As such, some gates will require dummy wires. 
Depending on the threat model and concept for chip-level implementation (Sec.~\ref{sec:threat_and_concept}), one may implement these dummy wires using RE-resilient interconnects in the BEOL~\cite{patnaik17_Camo_BEOL_ICCAD}, or with the help of additional MUXes and key bits to seemingly switch between real/dummy wires at the FEOL. Similar protection is required for the assignment of the different input voltages and control signals. Finally, in addition to the 16 functions illustrated in Fig.~\ref{fig:GSHE_gates}, we can readily extend our primitive to cloak latches and flip-flops, by applying the clock signal to the fixed ferromagnets' terminals. Besides, the primitive can readily implement multi-input gates (i.e., $>$2 signal inputs) as well. \section{Threat Model and Concept for Secure Chip-Level Implementation} \label{sec:threat_and_concept} We assume the fab and the end-user to be untrusted; the ultimate goal for any adversary is to understand the true functionality of a camouflaged/locked chip. Our threat model represents a notable advancement over prior work related to camouflaging, where the IP holder traditionally \emph{must} trust the fab because of the device/circuit-level protection mechanism. To hinder fab-based adversaries, we outline two equally promising options for secure implementation: either (a) leverage split manufacturing~\cite{mccants11} or (b) provision for a tamper-proof memory. For option (a), the wires for the control inputs and the ferromagnet terminals shall remain protected from the untrusted FEOL fab. Hence, these wires have to be routed at least partially through the BEOL, which must be manufactured by a separate, trusted fab. For option (b), the tamper-proof memory holds a secret key that defines (using some additional circuitry) the correct assignment of control inputs and voltages for all devices. The key must be loaded (by the IP holder) into the memory only after fabrication. 
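For option (b), the role of the secret key can be sketched in software. The following is a minimal, hypothetical model, not the actual circuitry: it assumes only what is stated above, namely that each primitive cloaks all 16 two-input Boolean functions and that the key fixes the assignment for every device; all function names, the key encoding, and the example netlist are ours.

```python
# Hypothetical sketch of option (b): a secret key, loaded after
# fabrication, resolves the true function of each camouflaged gate.
# Since each GSHE-based primitive cloaks all 16 two-input Boolean
# functions, 4 key bits per gate suffice. All names and the example
# netlist below are illustrative assumptions, not the actual design.

def apply_func(f: int, a: int, b: int) -> int:
    """Evaluate two-input Boolean function f (0..15), where bit (2a+b)
    of f is the output for the input pair (a, b)."""
    return (f >> (2 * a + b)) & 1

def decode_gate(key: int, gate_idx: int) -> int:
    """Extract the 4 key bits selecting the function of gate gate_idx."""
    return (key >> (4 * gate_idx)) & 0xF

def run_netlist(key: int, x: int, y: int, z: int) -> int:
    """Toy 2-gate netlist: out = g1(g0(x, y), z)."""
    g0 = apply_func(decode_gate(key, 0), x, y)
    return apply_func(decode_gate(key, 1), g0, z)

# key = 0b0110_1000 resolves g0 to AND (0b1000) and g1 to XOR (0b0110),
# i.e., out = (x AND y) XOR z; any other key yields a different circuit.
```

In this model, an adversary who sees only the uniform primitives faces $16$ candidate functions per gate; without the key, all assignments are a priori possible, which is precisely the solution space that analytical attacks must prune.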
A malicious end-user can obtain the design specifics of the chip through RE and side-channel attacks. She/he can also use a working chip as an oracle for analytical attacks. In the remainder of this paper, we focus on the malicious end-user. \begin{figure*}[tb] \centering \includegraphics[width=.9\textwidth]{GSHE_gates.png} \caption{All 16 possible Boolean functionalities for two inputs, A and B, implemented using the proposed primitive. If required, X serves as control signal, not as regular input. Note that BUF and INV capture two functionalities each. \label{fig:GSHE_gates} } \vspace{-2mm} \end{figure*} \begin{table}[b] \centering \scriptsize \caption{Characteristics of Synthesized Benchmarks (Italics: \emph{EPFL Suite}~\cite{EPFL15}; Bold: \emph{IBM Superblue Suite}~\cite{viswanathan2011ispd}) } \label{tab:benchmarks} \vspace{-2mm} \setlength{\tabcolsep}{0.7mm} \begin{tabular}{c|c|c|c||c|c|c|c} \hline \textbf{Benchmark } & \textbf{Inputs} & \textbf{Outputs} & \textbf{Gates } & \textbf{Benchmark } & \textbf{Inputs} & \textbf{Outputs} & \textbf{Gates } \\ \hline \hline \emph{aes\_{core}} & 789 & 668 & 39,014 & \emph{log2} & 32 & 32 & 51,627 \\ \hline b14 & 277 & 299 & 11,028 & \textbf{sb1} & 8,320 & 13,025 & 856,403 \\ \hline b21 & 522 & 512 & 22,715 & \textbf{sb5} & 11,661 & 9,617 & 741,483 \\ \hline c7552 & 207 & 108 & 4,045 & \textbf{sb10} & 10,454 & 23,663 & 1,117,846 \\ \hline ex1010 & 10 & 10 & 5,066 & \textbf{sb12} & 1,936 & 4,629 & 1,523,108 \\ \hline \emph{pci\_bridge32} & 3,520 & 3,528 & 35,992 & \textbf{sb18} & 3,921 & 7,465 & 659,511 \\ \hline \end{tabular} \end{table} \section{Security Analysis} \label{security} \subsection{Study on Large-Scale IP Protection Against SAT Attacks} \textbf{Setup:} We model the proposed primitive and those of selected prior art~\cite{rajendran13_camouflage, parveen2017hybrid, alasad2017leveraging, zhang16, bi16_JETC, nirmala16, winograd2016hybrid, zhang2015giant} as outlined in~\cite{yasin15_IDT, massad15}.
Although the proposed primitive also supports locking, here we contrast it only to camouflaging primitives; logic locking and camouflaging are transformable notions without loss of generality~\cite{yasin15_IDT}. Note that we also contrast to CMOS-centric techniques; this is meaningful as any scheme hinges on the number and composition of its cloaked functionalities~\cite{massad15,subramanyan15}, not its implementation (i.e., at least for analytical attacks). For a fair evaluation, the same set of gates is protected---gates are randomly selected once for each benchmark, memorized, and then reapplied across all techniques. We evaluate all techniques against powerful SAT attacks~\cite{subramanyan15, code_pramod, shen17}, run on an Intel Xeon server (2.3 GHz, 4 GB per task allowed). The time-out (``t-o'') is set to 48 hours. \textbf{Benchmarks:} We conduct our experiments on traditional benchmark suites (\emph{ISCAS-85}, \emph{MCNC}, and \emph{ITC-99}), on the large-scale \emph{EPFL suite}~\cite{EPFL15}, and on the industrial \emph{IBM superblue} circuits~\cite{viswanathan2011ispd} (Table~\ref{tab:benchmarks}). For the \emph{IBM superblue} circuits, we leverage~\cite{kahng14} to synthesize and generate the layouts for further analysis. As for SAT attacks, we pre-process the sequential circuits (\emph{IBM superblue}) as follows: the inputs (and outputs) of all flip-flops become primary outputs (and inputs); thereafter, the flip-flops are removed. (Doing so is essential to mimic access to scan chains for the SAT attacks~\cite{massad15}.) \textbf{On provably secure versus large-scale schemes:} Contrary to \emph{provably secure} schemes such as~\cite{yasin16_SARLock, xie16_SAT, li16_camouflaging}, one may find it difficult to engage in ``plain'' but large-scale camouflaging.
The key reason for this concern is that the solution space $C$---covering all possible functionalities of a camouflaged design and thereby defining the computational efforts for SAT attacks---is hard to quantify precisely~\cite{massad15,subramanyan15,li16_camouflaging}. More specifically, $C$ depends primarily on (i)~the number and the composition of functions cloaked by each primitive and (ii)~the number and selection of gates protected with a primitive. Recall that prior art is limited in both (i) and (ii) by cost considerations. In contrast, thanks to the innate polymorphism of the proposed primitive, we are free to pursue large-scale and even full-chip camouflaging. Moreover, the primitive cloaks all 16 possible functionalities. Intuitively, our scheme should thus impose maximal efforts for SAT attacks. We believe that this renders our scheme competitive, on par with provably secure techniques, and we substantiate this statement with a comprehensive study below. \begin{table*}[tb] \centering \scriptsize \setlength{\tabcolsep}{1mm} \caption{Runtime for Our SAT Attacks (Using~\cite{subramanyan15,code_pramod}), in Seconds (Time-Out t-o is 172,800 Seconds, i.e., 48 Hours) }\label{tab:satattackcomparison} \vspace{-2mm} \begin{tabular}{*{14}{c|}c} \hline \multirow{3}{*}{\textbf{Benchmark}} & \multicolumn{7}{|c|}{\textbf{10\% IP Protection}} & \multicolumn{7}{|c}{\textbf{20\% IP Protection}} \\ \cline{2-15} & \textbf{\cite{rajendran13_camouflage}} & \textbf{\cite{nirmala16,winograd2016hybrid}} & \textbf{\cite{bi16_JETC}} & \textbf{\cite[c]{alasad2017leveraging}, \cite{zhang16}} & \textbf{\cite{zhang2015giant}, \cite[a]{alasad2017leveraging}} & \textbf{\cite{parveen2017hybrid}} & \textbf{Our} & \multirow{2}{*}{\textbf{\cite{rajendran13_camouflage}}} & \multirow{2}{*}{\textbf{\cite{nirmala16,winograd2016hybrid}}} & \multirow{2}{*}{\textbf{\cite{bi16_JETC}$^\dag$}} & \multirow{2}{*}{\textbf{\cite[c]{alasad2017leveraging}, \cite{zhang16}}} &
\multirow{2}{*}{\textbf{\cite{zhang2015giant}, \cite[a]{alasad2017leveraging}}} & \multirow{2}{*}{\textbf{\cite{parveen2017hybrid}$^\ddag$}} & \multirow{2}{*}{\textbf{Our}} \\ & (3)$^*$ & (6)$^*$ & (4)${^*}{^\dag}$ & (2)$^*$ & (4)$^*$ & (7+1)${^*}{^\ddag}$ & (16)$^*$ & & & & & & & \\ \hline \hline \emph{aes\_{core}} & 610 & 4,710 & 890 & 132 & 536 & 6,229 & 25,890 & 4,319 & 41,844 & 11,306 & 407 & 9,432 & t-o & t-o \\ \hline b14 & 2,078 & 20,603 & 11,465 & 6,884 & 17,634 & 27,438 & 60,306 & 56,155 & t-o & 64,145 & 8,426 & t-o & t-o & t-o \\ \hline b21 & 7,813 & 162,324 & 45,465 & 3,977 & 24,035 & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline c7552 & 37 & 210 & 74 & 12 & 66 & 371 & 2,289 & 169 & 14,575 & 1,153 & 110 & 1,327 & 172,548 & t-o \\ \hline ex1010 & 62 & 215 & 82 & 12 & 73 & 295 & 922 & 171 & 1,047 & 274 & 38 & 250 & 1,310 & 4,701 \\ \hline \emph{log2} & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline \emph{pci\_bridge32} & 1,119 & t-o & 9,011 & 1,325 & 2,690 & t-o & t-o & 54,577 & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline & \multicolumn{7}{|c|}{\textbf{30\% IP Protection}} & \multicolumn{7}{|c}{\textbf{40--100\% IP Protection$^\S$}} \\ \hline \hline \emph{aes\_{core}} & 17,148 & t-o & 31,601 & 2,020 & 26,498 & t-o & t-o & t-o & t-o & t-o & 8,206 & t-o & t-o & t-o \\ \hline b14 & 56,787 & t-o & t-o & 38,495 & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline b21 & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline c7552 & 1,786 & t-o & t-o & 766 & t-o & t-o & t-o & t-o & t-o & t-o & 41,721 & t-o & t-o & t-o \\ \hline ex1010 & 448 & 4,357 & 938 & 87 & 719 & 11,736 & 24,727 & 1,703 & t-o & 129,290 & 169---7,073$^\S$ & 1,950 & t-o & t-o \\ \hline \emph{log2} & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o \\ \hline \emph{pci\_bridge32} & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o & t-o 
& t-o & t-o \\ \hline \end{tabular} \\[1mm] $^*$Number of cloaked functions; refer to Table~\ref{tab:devices} or the related publication for the actual sets of cloaked functions. Prior art covering the same set is grouped into one column. $^\dag$Here we refer to the camouflaging primitive, not the polymorphic gate reported on in Table~\ref{tab:devices}. $^\ddag$Here we also assume BUF to be available. $^\S$The benchmark ex1010 can be resolved for 100\% IP protection when the primitives of~\cite[c]{alasad2017leveraging}, \cite{zhang16} are used. The related runtime range is for 40--100\% protection; all other runtimes are for 40\% protection. \end{table*} \textbf{Results:} Table~\ref{tab:satattackcomparison} contrasts the resilience (against~\cite{subramanyan15,code_pramod}) of all considered schemes for large-scale application. For the same number of gates protected, we observe that the more functions a primitive can cloak, the more resilient it becomes in practice. More importantly, the runtimes required for decamouflaging (if possible at all) tend to scale exponentially with the percentage of gates being camouflaged. Our primitive induces by far the highest efforts across all benchmarks. Except for \emph{ex1010}, none of the benchmarks could be resolved within 48 hours once we protect 20\% or more of all gates. To confirm this superior resilience, we conducted further attacks running for 240 hours for full-chip protection using the proposed primitive---the designs could still not be resolved. Moreover, we also observe some computational failures;\footnote{E.g., ``\emph{internal error in 'lglib.c': more than 134,217,724 variables}''.} this hints at another practical limitation w.r.t.\ scalability for SAT attacks, as one can reasonably expect~\cite{massad15}. Besides the attacks of~\cite{subramanyan15,code_pramod}, we also leverage \emph{Double DIP}~\cite{shen17}.
The key advancement of this attack is that it rules out at least two incorrect keys in each iteration. Conducting the very same set of experiments as in Table~\ref{tab:satattackcomparison}, we observe that the runtimes are on average higher across all benchmarks. For example, decamouflaging \emph{aes\_core} (for 10\% protection using our primitive) requires $\approx$7 hours using~\cite{subramanyan15}, but $\approx$15 hours using~\cite{shen17}.\footnote{Due to lack of space, we refrain from providing all detailed results on~\cite{shen17}.} This finding suggests that large-scale camouflaging can indeed be on par with provably secure schemes. Independent of our study, note that some prior art (e.g.,~\cite{winograd2016hybrid}) proposed cost-limited protection schemes. Here we have demonstrated that an overly limited protection cannot withstand powerful SAT attacks (also recall Sec.~\ref{sec:background} for~\cite{winograd2016hybrid}). Next, we outline the prospects for camouflaging of industrial circuits. Recall that the delay of the GSHE switch is considerably higher when compared to CMOS (Sec.~\ref{device_model}). Interestingly, large-scale circuits typically exhibit biased distributions of delay paths, with most paths having short delays but only a few paths having dominant, critical delays (Fig.~\ref{fig:delay_superblue}). In an experimental study on those \emph{IBM superblue} circuits, we replace CMOS gates in the non-critical paths with the GSHE-based primitive such that no delay overheads can be expected.\footnote{We anticipate such hybrid designs to be practical, given the CMOS-compatible manufacturing of spin-based devices~\cite{Matsunaga2008}. The main focus of this work, however, is hardware security, not circuit design. Hence, to mimic hybrid designs, we replace the delay numbers of selected CMOS gates in non-critical timing paths with that of the GSHE switch, i.e., 1.55 ns.} On average, we can camouflage 5--15\% of all gates this way.
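The gate-selection criterion behind this hybrid-design experiment can be sketched as follows. This is a simplified, illustrative model under our own assumptions (a per-gate slack check, with no re-timing after each replacement, which a real timing flow would perform); only the 1.55 ns GSHE delay is taken from the text, and all other names and numbers are hypothetical.

```python
# Illustrative sketch: replace a CMOS gate by the GSHE primitive only if
# no timing path through it loses its slack. Per-gate check only; a real
# flow would re-time after every replacement. All path/clock numbers are
# assumptions; only the 1.55 ns GSHE delay comes from the text.

GSHE_DELAY_NS = 1.55

def replaceable_gates(paths, clock_period_ns):
    """paths: list of (gate_names, gate_delays_ns) pairs, one per timing
    path. Returns the gates whose replacement violates no path."""
    blocked = set()
    for gates, delays in paths:
        slack = clock_period_ns - sum(delays)
        for g, d in zip(gates, delays):
            # Swapping gate g adds (GSHE_DELAY_NS - d) to this path.
            if GSHE_DELAY_NS - d > slack:
                blocked.add(g)
    return {g for gates, _ in paths for g in gates} - blocked
```

With the biased delay distributions noted above (many short paths, few critical ones), most gates on short paths pass this check, which is consistent with a sizable fraction of gates being replaceable at zero delay overhead.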
Conducting SAT attacks~\cite{subramanyan15,code_pramod} on those protected designs, we observe that they cannot be resolved within 240 hours; in fact, most runs incur failures similar to those discussed above. This indicates that the proposed primitive can help to strongly protect industrial circuits without excessive layout (PPA) overheads. \subsection{On Stochastic Switching to Hinder SAT Attacks} So far, we have leveraged the primitive in the context of classical, deterministic computation. Note, however, that the underlying GSHE switch supports tunable probabilistic computation~\cite{rangarajan2017energy}. Interestingly, the implications of probabilistic computation on hardware security are largely unexplored. Recall the general principle of SAT attacks, i.e., to carefully apply input patterns on a working chip and to observe the output patterns, throughout multiple sampling iterations, until the correct assignment for all key bits can be derived (by ruling out incorrect keys via disagreement). Now consider a scenario where the GSHE switch (or any probabilistic device, for that matter) is tuned for 95\% accuracy. This implies that 5\% of the patterns observed by the SAT attack are incorrect. We believe that most, if not all, proposed SAT attacks will fail in such scenarios.\footnote{The most promising contender here is arguably \emph{AppSAT}~\cite{shamsi17}, which is based on the probably-approximately-correct (PAC) paradigm. The attack as outlined in~\cite{shamsi17}, however, requires a consistent solution space regarding the input-output queries---probabilistic computation violates this assumption. The attack was not available to us for an experimental study at this time.} That is because they have not been tailored to account for incorrect output patterns. Even if they were, distinguishing incorrect patterns from correct ones is difficult when only given a ``probabilistic black-box oracle.'' Naturally, one might want to leverage machine learning (ML) toward this end.
We argue, however, that it remains to be seen whether ML-based attacks will be sufficiently robust and capable. Here we would like to point out that (i)~the GSHE switch experiences thermally induced stochasticity~\cite{rangarajan2017energy}, (ii)~the error rate for any switch can be tuned individually, and (iii)~those individual distributions superpose with each other while they propagate throughout the entire design, resulting in stochastically correlated behavior at the primary outputs. \begin{figure}[tb] \centering \includegraphics[width=.95\textwidth]{Superblue.pdf} \vspace{-2mm} \caption{Delay distributions of selected \emph{IBM superblue} circuits. The paths with the longest, critical delays are marked by crosses for clarity. \label{fig:delay_superblue} } \end{figure} \subsection{Preventing Reverse Engineering and Side-Channel Attacks} \textbf{Layout identification and read-out attacks:} Recall that the layout of the proposed primitive is uniform (Sec.~\ref{device_model}), hence indistinguishable for optical-imaging-based RE. A more sophisticated attacker might, however, leverage electron microscopy (EM) for identification and read-out attacks~\cite{courbon16}. While such attacks are yet to be demonstrated on switching devices at runtime, we believe that the proposed primitive can prevent them. First, the dimensions of the GSHE switch are significantly smaller than those of CMOS devices, which is a challenge regarding the spatial resolution for EM-based analysis~\cite{courbon16}. Second, the primitive is truly polymorphic, i.e., its functionality can be switched at runtime; see also below. \textbf{Polymorphism at the chip-level:} Given truly polymorphic gates and some circuitry to judiciously switch the functionalities of gates, we can implement \emph{runtime polymorphism} at the chip-level.
Then, internal functionalities are not static (possibly even for static input patterns), whereupon an RE-centric attacker is bound to misinterpret parts of the layout---it is virtually impossible to resolve all dynamic features on full-chip scale at once.\footnote{In~\cite{courbon16}, e.g., it took 50 ns to read out one pixel of one memory cell, which is well above the 1.55 ns speed of the GSHE device.} Independent of RE threats, runtime polymorphism at the chip-level can also enable dynamic protection, e.g., as recently proposed by Koteshwara \emph{et al.}~\cite{koteshwara17}. Their idea is to alter the key dynamically, thereby rendering runtime-intensive attacks ineffective (SAT attacks in particular). \textbf{Photonic side-channel attacks:} While CMOS devices emit photons during operation, making them vulnerable to powerful attacks such as~\cite{Schlösser2012}, the GSHE switch itself does not emit any photons. The fundamentally different switching principle hence makes the proposed primitive inherently resilient to read-out attacks based on photons. Still, we caution that an assessment against such attacks shall be performed in the future. \textbf{Magnetic and temperature attacks:} Ghosh~\emph{et al.}~\cite{ghosh2016spintronics} outlined attacks on spintronic (memory) devices using magnetic fields and temperature curves. The design of the GSHE switch shall ensure a robust coupling between the W and R nanomagnets~\cite{rangarajan2017energy}. This would naturally be disturbed by any external magnetic fields. Hence, an attacker leveraging a magnetic probe may induce stuck-at-faults which are, however, hardly controllable due to multiple factors (very small size of switches, accordingly large magnetic fields required for the probe, state of W and R magnetizations, the orientation of the fixed magnets, voltage polarities on the fixed magnets). This implies that sensitization attacks such as~\cite{rajendran13_camouflage} will be difficult, if practical at all.
Regarding temperature-driven attacks, note that the retention time of the switch will be impacted. The resulting disturbances, however, are likely stochastic due to the inherent thermal noise in the nanomagnets. \section{Conclusion} \label{conclusion} We explore the security aspects of the GSHE switch: a versatile spin-based polymorphic device which can support both camouflaging and logic locking. Through a comprehensive study using SAT attacks, we show the strong resilience of our deterministic primitive as compared to prior art. We further discuss the resilience of our primitive against various classes of side-channel attacks. Finally, we lay the foundations for promising security concepts: truly polymorphic behavior at runtime, and stochastic behavior to thwart analytical attacks. \section*{Acknowledgements} This work was carried out in part on the High Performance Computing resources at New York University Abu Dhabi.
\section{Introduction}\label{intro} Neural networks (NNs) in machine learning systems are critical drivers of new technologies such as image processing and speech recognition. Modern NNs are built as graphs with millions of trainable parameters \cite{Krizhevsky2012,Szegedy2015,He2016}, which are tuned until the network converges. This parameter explosion demands large amounts of memory for storage and logic blocks for operation, which make the process of training difficult to perform \emph{on-chip}. As a result, most hardware architectures for NNs perform training off-chip on power-hungry CPUs/GPUs or the cloud, and only support inference capabilities on the final FPGA or ASIC device \cite{Chen2014DN,Chen2015,Han2016EIE,Zhou2016,Yufei2017,Han2017ESE,Wang2018}. Unfortunately, off-chip training results in a non-reconfigurable network being implemented on-chip which cannot support training time optimizations over model architecture and hyperparameters. This severely hinders the development of \emph{independent NN devices} which a) dynamically adapt themselves to new models and data, and b) do not outsource their training to costly cloud computation resources or data centers which exacerbate problems of large energy consumption \cite{Shehabi2016}. Training a network with too many parameters makes it likely to overfit \cite{Denil2013}, and memorize undesirable noise patterns \cite{Zhang2016_2}. Recent works \cite{Dey2017_ICANN,Dey2018_ITA,Aghasi2017,Ullrich2017} have shown that the number of parameters in NNs can be significantly reduced without degradation in performance. This motivates our present work, which is to train NNs with reduced complexity and easy reconfigurability on FPGAs. This is achieved by using \emph{pre-defined sparsity} \cite{Dey2017_ICANN,Dey2017_Asilomar,Dey2018_ITA}. 
Compared to other methods of parameter reduction such as \cite{Chen2015,Srivastava2014,Han2016DC,Gong2014,Wang2018}, pre-defined sparsity does not require additional computations or processing to decide which parameters to remove. Instead, most of the weights are always absent, i.e. sparsity is enforced \emph{prior to training}. This results in a sparse network of lower complexity than a conventional fully connected (FC) network. Therefore, the memory and computational burdens posed on hardware resources are reduced, which enables us to accomplish training on-chip. Section \ref{arch} describes pre-defined sparsity in more detail, along with a hardware architecture introduced in \cite{Dey2017_ICANN} which exploits it. A key factor in NN hardware implementation is the effect of finite bit widths. A previous FPGA implementation \cite{KaanK2017} used fixed point adders, but more resource-intensive floating point multipliers and floating-to-fixed-point converters. Another previous implementation \cite{Suyog2017} used probabilistic fixed point rounding techniques, which incurred additional DSP resources. Keeping hardware simplicity in mind, our implementation uses only fixed point arithmetic with clipping of large values. The major contributions of the present work are summarized here and described in detail in Section \ref{fpga}: \begin{itemize} \item The first implementation of NNs which can perform both training and inference on FPGAs by exploiting parallel edge processing. The design is parametrized and can be easily reconfigured to fit on FPGAs of varying capacity. \item A low complexity design which uses pre-defined sparsity while maintaining good network performance. To the best of our knowledge, this is the first NN implementation on FPGA exploiting pre-defined sparsity. \item Theoretical analysis and simulation results which show that sparsity leads to reduced dynamic range and is more tolerant to finite bit width effects in hardware.
\end{itemize} \section{Sparse Hardware Architecture}\label{arch} \subsection{Pre-defined Sparsity}\label{pds} Our notation treats the input of a NN as layer 0 and the output as layer $L$. The number of neurons in the layers are $\{N_0,N_1,\cdots,N_L\}$. The NN has $L$ \emph{junctions} in between the layers, with $N_{i-1}$ and $N_i$ respectively being the number of neurons in the earlier (left) and later (right) layers of junction $i$. Every left neuron has a fixed number of edges (or weights) going from it to the right, and every right neuron has a fixed number of edges coming into it from the left. These numbers are defined as out-degree ($d^{\mathrm{out}}_i$) and in-degree ($d^{\mathrm{in}}_i$), respectively. For FC layers, $d^{\mathrm{out}}_i = N_i$ and $d^{\mathrm{in}}_i = N_{i-1}$. In contrast, pre-defined sparsity leads to sparsely connected (SC) layers, where $d^{\mathrm{out}}_i < N_i$ and $d^{\mathrm{in}}_i < N_{i-1}$, such that $N_{i-1}\times d^{\mathrm{out}}_i = N_i\times d^{\mathrm{in}}_i = W_i$, which is the total number of weights in junction $i$. Having a fixed $d^{\mathrm{out}}_i$ and $d^{\mathrm{in}}_i$ ensures that all neurons in a junction contribute equally and none of them get disconnected, since that would lead to a loss of information. The connection density in junction $i$ is given as $W_i/(N_{i-1}N_i)$ and the overall connection density of the network is defined as $\left( \sum_{i=1}^{L}{W_i} \right) /\ \left( \sum_{i=1}^{L}{N_{i-1}N_i} \right )$. Previous works \cite{Dey2017_ICANN,Dey2018_ITA} have shown that overall density levels of $<10\%$ incur negligible performance degradation -- which motivates us to implement such low density networks on hardware in the present work. \begin{comment} Some major highlights of pre-defined sparsity \cite{Dey2018_ITA} are: \begin{itemize} \item FCLs in NNs can be aggressively sparsified by removing a significant fraction of connections (weights) prior to starting training. 
\item There exist patterns for distributing the remaining weights in ways which maximize performance. \item Later junctions (closer to the output layer) need to be denser than earlier junctions (closer to the input layer). \end{itemize} \end{comment} \subsection{Hardware Architecture}\label{hw_desc} This subsection describes the mathematical algorithm and the subsequent hardware architecture for a NN using pre-defined sparsity. The input layer, i.e. the leftmost, is fed \emph{activations} ($a_0$) from the input data. For an image classification problem, these are image pixel values. Then the \emph{feedforward (FF)} operation proceeds as described in eq. \eqref{eq-ff}: \begin{IEEEeqnarray}{c}\label{eq-ff} a_i^{(j)} = \sigma \left( \sum _{f=1}^{d^{\mathrm{in}}_i} { w_{i}^{(j,k_f)}a_{i-1}^{(k_f)} + b_i^{(j)} } \right) \IEEEyesnumber \IEEEyessubnumber \label{eq-ff_a} \\ {\dot {a}}_i^{(j)} = {\sigma}^{'} \left( \sum _{f=1}^{d^{\mathrm{in}}_i} { w_{i}^{(j,k_f)}a_{i-1}^{(k_f)} + b_i^{(j)} } \right) \IEEEyessubnumber \label{eq-ff_b} \end{IEEEeqnarray} Both eqs. \eqref{eq-ff_a} and \eqref{eq-ff_b} are $\forall j \in \{1,\cdots,N_i\}, \forall i \in \{1,\cdots,L\}$. Here, $a$ is activation, ${\dot {a}}$ is its derivative (a-dot), $b$ is bias, $w$ is weight, and $\sigma$ and ${\sigma}^{'}$ are respectively the activation function and its derivative (with respect to its input), which are described further in Section \ref{fpga}. For $a$, ${\dot {a}}$ and $b$, subscript denotes layer number and superscript denotes a particular neuron in a layer. For the weights, ${w}_{i}^{(j,k_f)}$ denotes the weight in junction $i$ which connects neuron $k_f$ in layer $i-1$ to neuron $j$ in layer $i$. The summation for a particular right neuron $j$ is carried out over all $d^{\mathrm{in}}_i$ weights and left neuron activations which connect to it, i.e. $k_f \in \{1,\cdots,N_{i-1}\}$. These left indexes are arbitrary because the weights in a junction are \emph{interleaved}, or permuted. 
This is done to ensure good \emph{scatter}, which has been shown to enhance performance \cite{Dey2018_ITA}. The output layer activations $a_L$ are compared with the ground truth labels $y$ which are typically one-hot encoded, i.e. $y^{(j)}$, $\forall j \in \{1,\cdots,N_L\}$, is 1 if the class represented by output neuron $j$ is the true class of the input sample, otherwise 0. We use the cross-entropy cost function for optimization, the derivative of which with respect to the activations is $a_L-y$. We also experimented with quadratic cost, but its performance was inferior to that of cross-entropy. The \emph{backpropagation (BP)} operation proceeds as described in eq. \eqref{eq-bp}: \begin{IEEEeqnarray}{c}\label{eq-bp} \delta_L^{(j)} = a_L^{(j)}-y^{(j)} \IEEEyesnumber \IEEEyessubnumber \label{eq-bp_a} \\ \delta_i^{(j)} = {\dot {a}}_{i}^{(j)} \left( \sum _{f=1}^{d^{\mathrm{out}}_{i+1}} { w_{i+1}^{(k_f,j)}{\delta}_{i+1}^{(k_f)} } \right) \IEEEyessubnumber \label{eq-bp_b} \end{IEEEeqnarray} where $\delta$ denotes the delta value. Eq. \eqref{eq-bp_a} is $\forall j \in \{1,\cdots,N_L\}$, and eq. \eqref{eq-bp_b} is $\forall j \in \{1,\cdots,N_i\}, \forall i \in \{1,\cdots,L-1\}$. The summation for a particular left neuron $j$ is carried out over all $d^{\mathrm{out}}_{i+1}$ weights and right neuron deltas which connect to it, i.e. $k_f \in \{1,\cdots,{N}_{i+1}\}$. The right indexes are arbitrary due to interleaving. Based on the $\delta$ values, the trainable weights and biases have their values updated and the network learns. We used the gradient descent algorithm, so the \emph{update (UP)} operation proceeds as described in eq. \eqref{eq-up}: \begin{IEEEeqnarray}{c}\label{eq-up} b_i^{(j)} \leftarrow b_i^{(j)} - \eta {\delta}_i^{(j)} \IEEEyesnumber \IEEEyessubnumber \label{eq-up_a} \\ w_{i}^{(j,k)} \leftarrow w_{i}^{(j,k)} - \eta a_{i-1}^{(k)}{\delta}_i^{(j)} \IEEEyessubnumber \label{eq-up_b} \end{IEEEeqnarray} where $\eta$ is the learning rate hyperparameter. Both eqs.
\eqref{eq-up_a} and \eqref{eq-up_b} are $\forall i \in \{1,\cdots,L\}$. While eq. \eqref{eq-up_a} is $\forall j \in \{1,\cdots,N_i\}$, eq. \eqref{eq-up_b} is only for those $j \in \{1,\cdots,N_i\}$ and $k \in \{1,\cdots,N_{i-1}\}$ which are connected by a weight $w_{i}^{(j,k)}$. The architecture uses a) \emph{operational parallelization} to make FF, BP and UP occur simultaneously in each junction, and b) \emph{junction pipelining} wherein all the junctions execute all 3 operations simultaneously on different inputs. Thus, there is a factor of $3L$ speedup as compared to doing 1 operation at a time, albeit at the cost of increased hardware resources. Fig. \ref{fig-jnpipelining} shows the architecture in action. As an example, consider $L=2$, i.e. the network has an input layer, a single hidden layer, and an output layer. When the second junction is doing FF and computing cost on input $n+1$, it is also doing BP on the previous input $n$ which just finished FF, as well as updating (UP) its parameters from the finished cost computation results of input $n$. Simultaneously, the first junction is doing FF on the latest input $n+L = n+2$, and UP using the finished BP results of input $n-(L-1) = n-1$. BP does not occur in the first junction because there are no $\delta_0$ values to be computed. \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/jnpipelining.png} \caption{Junction pipelining and operational parallelization in the architecture.} \label{fig-jnpipelining} \end{figure} The architecture uses \emph{edge processing} by making every junction have a \emph{degree of parallelism} $z_i$, which is the number of weights processed in parallel in 1 clock cycle (or simply cycle) by all 3 operations. So the total number of cycles to process a junction is $W_i/z_i$ plus some additional cycles for memory accesses. This comprises a \emph{block cycle}, the reciprocal of which is ideal throughput (inputs processed per second). 
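The per-junction FF, BP and UP operations of eqs. \eqref{eq-ff}--\eqref{eq-up} can be sketched behaviorally as follows. This is a NumPy sketch, not the RTL: a random permutation merely stands in for the clash-free interleaver, the sigmoid is one possible choice of $\sigma$, and all layer sizes are illustrative.

```python
# Behavioral NumPy sketch of one sparsely connected junction performing
# FF, BP and UP per the equations above. A random permutation stands in
# for the clash-free interleaver; all sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_prev, N_cur, d_out, d_in = 8, 4, 2, 4   # N_{i-1}, N_i, out/in-degrees
assert N_prev * d_out == N_cur * d_in     # = W_i, total weights

# conn[j] lists the d_in left neurons feeding right neuron j; every left
# neuron appears exactly d_out times, so none is disconnected.
conn = rng.permutation(np.repeat(np.arange(N_prev), d_out)).reshape(N_cur, d_in)
w = rng.normal(0.0, 0.5, size=(N_cur, d_in))   # one weight per edge
b = np.zeros(N_cur)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(a_prev):            # FF: a and a-dot of the right layer
    a = sigmoid((w * a_prev[conn]).sum(axis=1) + b)
    return a, a * (1.0 - a)

def backprop(delta, adot_prev):     # BP: deltas of the left layer
    delta_prev = np.zeros(N_prev)
    np.add.at(delta_prev, conn, w * delta[:, None])
    return adot_prev * delta_prev

def update(delta, a_prev, eta=0.5): # UP: gradient-descent step
    global w, b
    w -= eta * delta[:, None] * a_prev[conn]
    b -= eta * delta
```

In hardware, these three functions run concurrently on different inputs (junction pipelining), each processing $z_i$ of the $W_i$ edges per cycle rather than all of them at once.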
All parameters and computed values in a junction are stored in banks of $z_i$ memories. The $z_i$ weights in the $k$th cells of all $z_i$ weight memories are read out in the $k$th cycle. Additionally, up to $z_i$ activations, a-dots, deltas and biases are accessed in a cycle. The order of accessing them can be natural (row-by-row like the weights), or permuted (due to interleaving). All accesses need to be \emph{clash-free}, i.e. the different values to be accessed in a cycle must all be stored in different memories so as to avoid memory stalls, as shown in Fig. \ref{fig-interleaver_clashfreedom}. Optimum clash-free interleaver designs are discussed in \cite{Dey2017_Asilomar}. Fig. \ref{fig-pnp} shows simultaneous FF, BP and UP, along with memory accesses, in more detail inside a single junction. \begin{figure}[!t] \centering \includegraphics[width=0.7\linewidth]{figs/interleaver_clashfreedom.png} \caption{Example of clash-freedom in some junction with $z=6$. In each cycle, $z$ weights are read corresponding to 2 right neurons (shown in the same color). When traced back through the interleaver $\pi_W$, this requires accessing $z$ left activations in permuted order. There are $z$ activation memories $M0$--$M5$; only 1 element from each is read in a cycle in order to preserve clash-freedom. This is shown by the checkerboards, where only 1 cell in each column is shaded. Picture taken from \cite{Dey2017_Asilomar} with permission.} \label{fig-interleaver_clashfreedom} \end{figure} \begin{figure}[!t] \centering \includegraphics[width = 0.9\linewidth]{figs/pnp.png} \caption{Operational parallelization in junction $i$ ($i \ne 1$), showing natural and permuted order accesses as solid and dashed lines, respectively.} \label{fig-pnp} \end{figure} \begin{comment} Note that the activation and its derivative memories need to store the FF results of a particular input until it comes back to the same layer during BP.
This needs to be done without stalling processing of other inputs, so these memories are organized in queues of banks. Moreover, the delta memories are organized as a pair of banks so that 1 bank can be written into by BP in the right junction while the other bank is read from for BP in the left junction. While queues and pairs increase overall storage space, their fraction is insignificant compared to the memory required for weights. This issue is dealt with by having only 1 weight memory bank per junction, which is used for all 3 operations. Moreover, sparsity reduces hardware complexity by reducing weight memory sizes. \end{comment} This architecture is ideal for implementation on reconfigurable hardware due to a) its parallel and pipelined nature, b) its low memory footprint due to sparsity, and particularly c) the degree-of-parallelism parameters $z_i$, which can be tuned to efficiently utilize available hardware resources, as described in Sections \ref{impl} and \ref{effects_z}. \section{FPGA Implementation}\label{fpga} \subsection{Device and Dataset}\label{board} We implemented the architecture described in Section \ref{hw_desc} on an Artix-7 FPGA. This is a relatively small FPGA, which allowed us to explore efficient design styles and optimize our RTL to make it more robust and scalable. We experimented on the MNIST dataset, where each input is an image consisting of 784 8-bit grayscale pixels. Each ground truth output is one-hot encoded over the digits 0--9. Our implementation uses powers of 2 for network parameters to simplify the hardware realization. Accordingly, we padded each input with 0s to 1024 pixels. The outputs were padded with 0s to obtain a 32-bit one-hot encoding. Prior to hardware implementation, software experiments showed that having extra always-0 I/O did not detract from network performance. \subsection{Network Configuration and Training Setup}\label{config} The network has 1 hidden layer of 64 neurons, i.e.
2 junctions overall. Other parameters were chosen on the basis of hardware constraints and experimental results, which are described in Sections \ref{bitwidth} and \ref{impl}. The final network configuration is given in Table \ref{table-config}. \begin{table}[!t] \begin{minipage}{\columnwidth} \renewcommand{\arraystretch}{1.2} \caption{Implemented Network Configuration} \label{table-config} \centering \begin{tabular}{|c|c|c|} \hline Junction Number ($i$) & 1 & 2\\ \hline Left Neurons ($N_{i-1}$) & 1024 & 64\\ \hline Right Neurons ($N_i$) & 64 & 32\\ \hline Fan-out ($d^{\mathrm{out}}_i$) & 4 & 16\\ \hline Weights ($W_i=N_{i-1}\times d^{\mathrm{out}}_i$) & 4096 & 1024\\ \hline Fan-in ($d^{\mathrm{in}}_i=W_i/N_i$) & 64 & 32\\ \hline $z_i$ & 128 & 32\\ \hline Block cycle ($W_i/z_i$) \footnote{In terms of number of clock cycles. Not considering the additional clock cycles needed for memory accesses.} & 32 & 32\\ \hline Density ($W_i/(N_{i-1}N_i)$) & 6.25\% & 50\%\\ \hline Overall Density & \multicolumn{2}{c|}{7.576\%}\\ \hline \end{tabular} \end{minipage} \end{table} We selected $12544$ MNIST inputs to comprise 1 epoch of training. The learning rate ($\eta$) was initially $2^{-3}$; it was halved after the first 2 epochs, and thereafter after every 4 epochs, until it reached $2^{-7}$. Dynamic adjustment of $\eta$ leads to better convergence, while keeping it a power of 2 reduces the $\eta$ multiplications in eq. \eqref{eq-up} to bit shifts. Pre-defined sparsity leads to a total number of trainable parameters $= \left(w_1=4096\right)+\left(w_2=1024\right)+\left(b_1=N_1=64\right)+\left(b_2=N_2=32\right) = 5216$, which is much less than $12544$, so we theorized that overfitting was not an issue. We verified this using software simulations, and hence did not apply weight regularization. \subsection{Bit Width Considerations}\label{bitwidth} \subsubsection{Parameter Initialization} We initialized weights using the Glorot Normal technique, i.e.
their values are taken from Gaussian distributions with mean $=0$ and variance $=2/\left(d^{\mathrm{out}}_i+d^{\mathrm{in}}_i\right)$. This translates to a three-standard-deviation range of $\pm0.51$ for junction 1 and $\pm0.61$ for junction 2 in our network configuration described in Table \ref{table-config}. The biases in our architecture are stored along with the weights as an augmentation to the weight memory banks. So we initialized biases in the same manner as weights. Software simulations showed that this led to no degradation in performance from the conventional method of initializing biases with 0s. This makes sense since the maximum absolute values at initialization are much closer to 0 than the final values when the network converges, as shown in Fig. \ref{fig-valueranges}. To simplify the RTL, we used the same set of $W_i/z_i$ unique values to initialize all weights and biases in junction $i$. Again, software simulations showed that this led to no degradation in performance as compared to initializing all of them randomly. This is not surprising since an appropriately high initial learning rate will drive each weight and bias towards its own optimum value, regardless of similar values at the start. \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/valueranges.png} \caption{Maximum absolute values (left y-axis) for $w$, $b$ and $\delta$, and percentage classification accuracy (right y-axis), as the network is trained.} \label{fig-valueranges} \end{figure} \subsubsection{Fixed Point Configuration} We recreated the aforementioned initial conditions in software and trained our configuration to study the range of values for network variables until convergence. The results for $w$, $b$ and $\delta$ are in Fig. \ref{fig-valueranges}. The $a$ values are generated using the \emph{sigmoid} activation function, which has range $= [0,1]$. \begin{comment} in the output layer in order to make the cost calculation work as intended.
An output activation close to 1 implies a high degree of network confidence that a particular input sample belongs to that particular output class. We used the \emph{sigmoid} ($\sigma(\cdot)$) function, described in eq. (\ref{eq-sigmoid}): \begin{IEEEeqnarray}{c}\label{eq-sigmoid} \sigma(x) = \frac{1}{1+e^{-x}} \IEEEyesnumber \IEEEyessubnumber \\ {\sigma}^{'}(x) = \sigma(x) \left(1-\sigma(x)\right) \IEEEyessubnumber \end{IEEEeqnarray} The sigmoid function approaches 0 as its input argument becomes more negative and approaches 1 as its input argument becomes more positive. Its derivative approaches 0 when its input argument has a high absolute value. \end{comment} To keep the hardware optimal, we decided on the same \emph{fixed point} bit configuration for all computed values and trainable parameters --- $a$, $\dot{a}$, $\delta$, $w$ and $b$. Our configuration is characterized by the bit triplet $\left(b_w,b_n,b_f\right)$, which are respectively the total number of bits, integer bits, and fractional bits, with the constraint $b_w = b_n+b_f+1$, where the 1 is for the sign bit. This gives a numerical range of $[-{2}^{b_n},2^{b_n}-2^{-b_f}]$ and precision of $2^{-b_f}$. Fig. \ref{fig-valueranges} shows that the maximum absolute values of various network parameters during training stay within 8. Accordingly, we set $b_n=3$. We then experimented with different values for the bit triplet and obtained the results shown in Table \ref{table-bitwidth}. Accuracy is measured on the last 1000 training samples. Noting the diminishing returns and impractical utilization of hardware resources for high bit widths, we chose the bit triplet $\left(12,3,8\right)$ as the optimal choice.
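As a sanity check of this range and precision, here is a minimal software model of a clipping fixed-point quantizer for the chosen triplet (our own illustrative sketch; the RTL does not literally contain this function):

```python
def to_fixed(x, b_n=3, b_f=8):
    """Round x to precision 2**-b_f, then clip to the signed range
    [-2**b_n, 2**b_n - 2**-b_f] of the (b_w, b_n, b_f) triplet."""
    lo, hi = -2.0**b_n, 2.0**b_n - 2.0**-b_f
    q = round(x * 2**b_f) / 2.0**b_f   # quantize to the nearest grid point
    return min(hi, max(lo, q))
```

With the chosen $(12,3,8)$ triplet, an input of 10 clips to the positive maximum $7.996$ and $-10$ clips to $-8$, matching the clipping behavior of the arithmetic units described below.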
\begin{table}[!t] \renewcommand{\arraystretch}{1.0} \caption{Effect of Bit Width on Performance} \label{table-bitwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $b_w$ & $b_n$ & $b_f$ & FPGA LUT & Accuracy after & Accuracy after\\ & & & Utilization \% & 1 epoch & 15 epochs\\ \hline 8 & 2 & 5 & 37.89 & 78 & 81\\ \hline 10 & 2 & 7 & 72.82 & 90.1 & 94.9\\ \hline 10 & 3 & 6 & 63.79 & 88 & 93.8\\ \hline 12 & 3 & 8 & 83.38 & 90.3 & 96.5\\ \hline 16 & 4 & 11 & \textcolor{red}{112} & 91.9 & 96.5\\ \hline \end{tabular} \end{table} \subsubsection{Dynamic Range Reduction due to Sparsity} We found that sparsity leads to reduction in the dynamic range of network variables, since the summations in eqs. \eqref{eq-ff} and \eqref{eq-bp} are over smaller ranges. This motivated us to use a special form of adder and multiplier which preserves the bit triplet between inputs and outputs by clipping large absolute values of output to either the positive or negative maximum allowed by the range. For example, 10 would become 7.996 and $-10$ would become $-8$. Fig. \ref{fig-a1distribution} analyzes the worst clipping errors by comparing the absolute values of the argument of the sigmoid function in the hidden layer, i.e. $\sum{w_1a_0}+b_1$ from eq. \eqref{eq-ff}, for our sparse case vs. the corresponding FC case ($d^{\mathrm{out}}_1=64$, $d^{\mathrm{out}}_2=32$). Notice that the sparse case only has 17\% of its values clipped due to being outside the dynamic range afforded by $b_n=3$, while the FC case has 57\%. The sparse case also has a smaller variance. This implies that the hardware errors introduced due to finite bit-width effects are less pronounced for our pre-defined sparse configuration as compared to FC. \begin{figure}[!t] \centering \includegraphics[width = 0.6\linewidth]{figs/a1distribution.png} \caption{Histograms of absolute value of eq. \eqref{eq-ff}'s $\sum{w_1a_0}+b_1$ with respect to dynamic range for (a) sparse vs. 
(b) FC cases, as obtained from ideal floating point simulations on software. Values right of the pink line are clipped.} \label{fig-a1distribution} \end{figure} \subsubsection{Experiments with ReLU} As demonstrated in the literature \cite{Krizhevsky2012,Szegedy2015,He2016}, the native (ideal) ReLU activation function is more widely used than sigmoid due to its better performance, absence of the vanishing gradient problem, and tendency to generate sparse outputs. However, ideal ReLU is not practical for hardware due to its unbounded range. We experimented with a modified form of the \emph{ReLU} activation function where the outputs were clipped to a) 8, which is the maximum supported by $b_n=3$, and b) 1, to preserve bit width consistency in the multipliers and adders and ensure compatibility with sigmoid activations. Fig. \ref{fig-act} shows software simulations comparing sigmoid with these cases. Note that ReLU clipped at 8 converges similarly to sigmoid, but sigmoid has better initial performance. Moreover, there is no need to promote extra sparsity by using ReLU because our configuration is already sparse, and sigmoid does not suffer from vanishing gradient problems because of the small range of our inputs. We therefore concluded that sigmoid activation for all layers is the best choice.
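For reference, the clipped-at-1 variant and its derivative can be sketched as follows (consistent with the description above; the function names are our own):

```python
def relu_clip1(x):
    """ReLU with output clipped at 1, matching sigmoid's [0, 1] output range."""
    return 0.0 if x <= 0.0 else (x if x < 1.0 else 1.0)

def relu_clip1_grad(x):
    """Derivative: 1 on the linear region (0, 1); 0 where the output is flat."""
    return 1.0 if 0.0 < x < 1.0 else 0.0
```

The variant clipped at 8 is identical except that the upper bound 1 is replaced by 8, the maximum representable with $b_n=3$.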
\begin{comment} \begin{IEEEeqnarray}{c}\label{eq-relu} ReLU(x) = \begin{cases} 0 & \text{if } x\leq0 \\ x & \text{if } 0<x<1 \\ 1 & \text{if } x\geq1 \end{cases} \IEEEyesnumber \IEEEyessubnumber \\ {ReLU}^{'}(x) = \begin{cases} 0 & \text{if } x\leq0 \text{ or } x\geq1 \\ 1 & \text{if } 0<x<1 \end{cases} \IEEEyessubnumber \end{IEEEeqnarray} \end{comment} \begin{figure}[!t] \centering \includegraphics[width = 0.65\linewidth]{figs/act.png} \caption{Comparison of activation functions for $a_1$.} \label{fig-act} \end{figure} \subsection{Implementation Details}\label{impl} \subsubsection{Sigmoid Activation} The sigmoid function uses exponentials, which are computationally infeasible to obtain in hardware. So we pre-computed the values of $\sigma(\cdot)$ and ${\sigma}^{'}(\cdot)$ and stored them in look-up tables (LUTs). Interpolation was not used; instead, we computed the sigmoid for all 4096 possible 12-bit arguments up to the full 8 fractional bits of accuracy. On the other hand, its derivative values were computed to $6$ fractional bits of accuracy since they have a range of $[0,2^{-2}]$. Note that clipped ReLU activation uses only comparators and needs no LUTs. However, the number of sigmoid LUTs required is $\sum_{i=1}^{L}{z_i/d^{\mathrm{in}}_i}=3$, which incurs negligible hardware cost. This reinforces our decision to use sigmoid instead of ReLU. \subsubsection{Interleaver} We used clash-free interleavers of the \emph{SV+SS} variation, as described in \cite{Dey2017_Asilomar}. Starting vectors for all sweeps were pre-calculated and hard-coded into FPGA logic. \subsubsection{Arithmetic Units} We numbered the weights sequentially on the right side of every junction, which leads to permuted numbering on the left side due to interleaving. We chose $z_i\geq d^{\mathrm{in}}_i, \forall i \in \{1,\cdots,L\}$. This means that the $z_i$ weights accessed in a cycle correspond to an integral ($z_i/d^{\mathrm{in}}_i$) number of right neurons, so the FF summations in eq.
\eqref{eq-ff} can occur in a single cycle. This eliminates the need for storing FF partial sums. The total number of multipliers required for FF is $\sum _{i=1}^{L}{z_i}$. The summations also use a tree adder of depth $={\text{log}}_{2}\left(d^{\mathrm{in}}_i\right)$ for every neuron processed in a cycle. BP does not occur in the first junction since the input layer has no $\delta$ values. The BP summation in eq. \eqref{eq-bp_b} will need several cycles to complete for a single left neuron since weight numbering is permuted. This necessitates storing $\sum _{i=2}^{L}{z_i}$ partial sums; however, tree adders are no longer required. Eq. \eqref{eq-bp_b} for BP has 2 multiplications, so the total number of multipliers required is $2\sum _{i=2}^{L}{z_i}$. The UP operation in each junction $i$ requires $z_i$ adders for the weights and $z_i/d^{\mathrm{in}}_i$ adders for the biases, since that many right neurons are processed every cycle. Only the weight update requires multipliers, so their total number is $\sum _{i=1}^{L}{z_i}$. Our FPGA device has 240 DSP blocks. Accordingly, we implemented the 224 FF and BP multipliers using 1 DSP for each, while the other 160 UP multipliers and all adders were implemented using logic. \subsubsection{Memories and Data} All memories were implemented using block RAM (BRAM). The memories for $a$ and $\dot{a}$ never need to be read from and written into in the same cycle, so they are single-port. $\delta$ memories are true dual-port, i.e. both ports support reads and writes. This is required due to the read-modify-write nature of the $\delta$ memories since they accumulate partial sums. The `weight+bias' memories are simple dual-port, with 1 port used exclusively for reading the $k$th cell in cycle $k$, and the other for simultaneously writing the $(k-1)$th cell. These were initialized using Glorot normal values while all other memories were initialized with 0s.
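The multiplier counts quoted above follow directly from the $z_i$ values; this short tally (our own bookkeeping, not part of the design files) reproduces them for the implemented configuration:

```python
# Multiplier tally for the implemented configuration (z1 = 128, z2 = 32).
z = [128, 32]                 # degree of parallelism per junction
ff_mults = sum(z)             # FF: 1 multiply per weight processed per cycle
bp_mults = 2 * sum(z[1:])     # BP: 2 multiplies per weight; junction 1 has no BP
up_mults = sum(z)             # UP: only the weight update needs multipliers
assert ff_mults + bp_mults == 224   # mapped to DSP blocks (240 available)
assert up_mults == 160              # implemented in logic fabric
```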
\begin{comment} Note that lower values of $z$ result in a smaller number of deep memories per bank, which is ideal for a fully BRAM implementation. Given a more powerful device, a larger value of $z$ will lead to many shallow memories in each bank, and thus make it easier to meet clash-freedom constraints, but may need logic-based distributed RAMs to share the burden. \end{comment} The ground truth one-hot encodings for all $12544$ inputs were stored in a single-port BRAM, and initialized with word size $=10$ to represent the 10 MNIST outputs. After reading, the word was padded with 0s to make it 32 bits long. On the other hand, the input data was too big to store on-chip. Since the native MNIST images are $28\times28=784$ pixels, the total input data size is $12544\times784\times8 = 78.68$ Mb, while the total device BRAM capacity is only $4.86$ Mb. So the input data was fed from a PC over a UART interface. \subsubsection{Network Configuration} Here we explain the choice of network configuration in Table \ref{table-config}. We initially picked $N_2=16$, which is the minimum power of 2 above 10. Since later junctions need to be denser than earlier ones to optimize performance \cite{Dey2018_ITA}, we experimented with junction 2 density and show its effects on network performance in Fig. \ref{fig-jn2density}. We concluded that 50\% density is optimum for junction 2. Note that individual $z_i$ values should be adjusted to have the same block cycle length for all junctions. This ensures an always full pipeline and no stalls, thereby achieving the ideal throughput of 1 input per block cycle. This, along with the constraint $z_i\geq d^{\mathrm{in}}_i, \forall i \in \{1,\cdots,L\}$, led to $z_1=256$, which was beyond the capacity of our FPGA. So we increased $N_2$ to 32 and set $z_2$ to the minimum value of 32, leading to $z_1=128$.
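These choices can be reproduced numerically; the following sketch (our own derivation script) recovers the values in Table \ref{table-config} from the balancing constraints:

```python
# Derive junction parameters for N = (1024, 64, 32), d_out = (4, 16).
N = [1024, 64, 32]
d_out = [4, 16]
W = [N[i] * d_out[i] for i in range(2)]        # weights per junction
d_in = [W[i] // N[i + 1] for i in range(2)]    # fan-ins
z2 = d_in[1]                                   # minimum legal z for junction 2
block = W[1] // z2                             # common block cycle length
z1 = W[0] // block                             # balanced so W1/z1 == W2/z2
assert W == [4096, 1024] and d_in == [64, 32]
assert (z1, z2, block) == (128, 32, 32)
assert z1 >= d_in[0]                           # z_i >= d_in_i also holds
```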
We experimented with $d^{\mathrm{out}}_1=8$, but the resulting accuracy was within 1 percentage point of our final choice of $d^{\mathrm{out}}_1=4$. \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/jn2density.png} \caption{Performance for different junction 2 densities, keeping junction 1 density fixed at 6.25\%.} \label{fig-jn2density} \end{figure} \subsubsection{Timing and Results} A block cycle in our design is $\left(W_i/z_i+2\right)$ clock cycles since each set of $z_i$ weights in a junction needs a total of 3 clock cycles for each operation. The first and third are used to compute memory addresses, while the second performs arithmetic computations and determines our clock frequency, which is 15 MHz. We stored the results of several training inputs and fed them out to 10 LEDs on the board, each representing an output from 0--9. The FPGA implementation performed in agreement with RTL simulations and within $1.5$ percentage points of the ideal floating point software simulations, giving 96.5\% accuracy in 14 epochs of training. \begin{comment} \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/fpgaworking.jpg} \caption{Our design working on the Xilinx XC7A100T-1CSG324C FPGA.} \label{fig-fpgaworking} \end{figure} Fig. \ref{fig-fpgaworking} shows our FPGA in action.
\begin{figure}[!t] \centering \includegraphics[width = 0.65\linewidth]{figs/cyclebreakup.png} \caption{Breaking up each operation into 3 clock cycles.} \label{fig-cyclebreakup} \end{figure} \end{comment} \subsection{Effects of $z$}\label{effects_z} \begin{figure}[!t] \centering \includegraphics[width = 0.95\linewidth]{figs/effects_z.png} \caption{Dependency of various design and performance parameters on the total $z$, keeping the network architecture and sparsity level fixed.} \label{fig-effects_z} \end{figure} A key highlight of our architecture is the total degree of parallelism $\sum_{i=1}^{L}{z_i}$, which can be reconfigured to trade off training time and hardware resources while keeping the network architecture the same. This is shown in Fig. \ref{fig-effects_z}. The present work uses total $z=160$, which leads to a block cycle time of $2.27\mu s$, but economically uses arithmetic resources and has a small number of deep memories, making it ideal for a fully BRAM implementation. Given more powerful FPGAs, the same architecture can be reconfigured to achieve higher GOPS count and process inputs in $0.4\mu s$, albeit at the cost of more FPGA resources and a greater number of shallower memories. Moreover, this reconfigurability also allows a complete change in network structure and hyperparameters to process a new dataset on the same device if desired. \begin{comment} The final FPGA utilization after implementation is shown in Table \ref{table-impl12b}. The power consumption was 395 mW dynamic and 101 mW static. 
\begin{table}[!t] \renewcommand{\arraystretch}{1.1} \caption{FPGA Utilization after Implementation} \label{table-impl12b} \centering \begin{tabular}{|c|c|c|c|} \hline Resource & Used & Available & \% Used\\ \hline LUT & $52862$ & $63400$ & 83.38\\ \hline LUTRAM & $8771$ & $19000$ & 46.16\\ \hline Flipflop & $15754$ & $126800$ & 12.42\\ \hline DSP & 224 & 240 & \textbf{93.33}\\ \hline BRAM & 40 & 135 & 29.63\\ \hline \end{tabular} \end{table} \end{comment} \section{Conclusion}\label{conc} This paper demonstrates an FPGA implementation of both training and inference of a neural network pre-defined to be sparse. The architecture is optimized for FPGA implementation and uses parallel and pipelined processing to increase throughput. The major highlights are the degrees of parallelism $z_i$, which can be quickly reconfigured to re-allocate FPGA resources, thereby adapting any problem to any device. While the present work uses a modest FPGA board as a proof of concept, this reconfigurability allows us to explore various types of networks on bigger boards as future work. Our RTL is fully parametrized and the code is available on request. \bibliographystyle{IEEEtran} \section{Introduction}\label{intro} Neural networks (NNs) in machine learning systems are critical drivers of new technologies such as image processing and speech recognition. Modern NNs are built as graphs with millions of trainable parameters \cite{Krizhevsky2012,Szegedy2015,He2016}, which are tuned until the network converges. This parameter explosion demands large amounts of memory for storage and logic blocks for operation, which make the process of training difficult to perform \emph{on-chip}. As a result, most hardware architectures for NNs perform training off-chip on power-hungry CPUs/GPUs or the cloud, and only support inference capabilities on the final FPGA or ASIC device \cite{Chen2014DN,Chen2015,Han2016EIE,Zhou2016,Yufei2017,Han2017ESE,Wang2018}.
Unfortunately, off-chip training results in a non-reconfigurable network being implemented on-chip, which cannot support training time optimizations over model architecture and hyperparameters. This severely hinders the development of \emph{independent NN devices} which a) dynamically adapt themselves to new models and data, and b) do not outsource their training to costly cloud computation resources or data centers which exacerbate problems of large energy consumption \cite{Shehabi2016}. Training a network with too many parameters makes it likely to overfit \cite{Denil2013}, and memorize undesirable noise patterns \cite{Zhang2016_2}. Recent works \cite{Dey2017_ICANN,Dey2018_ITA,Aghasi2017,Ullrich2017} have shown that the number of parameters in NNs can be significantly reduced without degradation in performance. This motivates our present work, which is to train NNs with reduced complexity and easy reconfigurability on FPGAs. This is achieved by using \emph{pre-defined sparsity} \cite{Dey2017_ICANN,Dey2017_Asilomar,Dey2018_ITA}. Compared to other methods of parameter reduction such as \cite{Chen2015,Srivastava2014,Han2016DC,Gong2014,Wang2018}, pre-defined sparsity does not require additional computations or processing to decide which parameters to remove. Instead, most of the weights are always absent, i.e. sparsity is enforced \emph{prior to training}. This results in a sparse network of lower complexity than a conventional fully connected (FC) network. Therefore, the memory and computational burdens posed on hardware resources are reduced, which enables us to accomplish training on-chip. Section \ref{arch} describes pre-defined sparsity in more detail, along with a hardware architecture introduced in \cite{Dey2017_ICANN} which exploits it. A key factor in NN hardware implementation is the effect of finite bit widths.
A previous FPGA implementation \cite{KaanK2017} used fixed point adders, but more resource-intensive floating point multipliers and floating-to-fixed-point converters. Another previous implementation \cite{Suyog2017} used probabilistic fixed point rounding techniques, which incurred additional DSP resources. Keeping hardware simplicity in mind, our implementation uses only fixed point arithmetic with clipping of large values. The major contributions of the present work are summarized here and described in detail in Section \ref{fpga}: \begin{itemize} \item The first implementation of NNs which can perform both training and inference on FPGAs by exploiting parallel edge processing. The design is parametrized and can be easily reconfigured to fit on FPGAs of varying capacity. \item A low complexity design which uses pre-defined sparsity while maintaining good network performance. To the best of our knowledge, this is the first NN implementation on FPGA exploiting pre-defined sparsity. \item Theoretical analysis and simulation results which show that sparsity leads to reduced dynamic range and is more tolerant to finite bit width effects in hardware. \end{itemize} \section{Sparse Hardware Architecture}\label{arch} \subsection{Pre-defined Sparsity}\label{pds} Our notation treats the input of a NN as layer 0 and the output as layer $L$. The number of neurons in the layers are $\{N_0,N_1,\cdots,N_L\}$. The NN has $L$ \emph{junctions} in between the layers, with $N_{i-1}$ and $N_i$ respectively being the number of neurons in the earlier (left) and later (right) layers of junction $i$. Every left neuron has a fixed number of edges (or weights) going from it to the right, and every right neuron has a fixed number of edges coming into it from the left. These numbers are defined as out-degree ($d^{\mathrm{out}}_i$) and in-degree ($d^{\mathrm{in}}_i$), respectively. For FC layers, $d^{\mathrm{out}}_i = N_i$ and $d^{\mathrm{in}}_i = N_{i-1}$. 
In contrast, pre-defined sparsity leads to sparsely connected (SC) layers, where $d^{\mathrm{out}}_i < N_i$ and $d^{\mathrm{in}}_i < N_{i-1}$, such that $N_{i-1}\times d^{\mathrm{out}}_i = N_i\times d^{\mathrm{in}}_i = W_i$, which is the total number of weights in junction $i$. Having a fixed $d^{\mathrm{out}}_i$ and $d^{\mathrm{in}}_i$ ensures that all neurons in a junction contribute equally and none of them get disconnected, since that would lead to a loss of information. The connection density in junction $i$ is given as $W_i/(N_{i-1}N_i)$ and the overall connection density of the network is defined as $\left( \sum_{i=1}^{L}{W_i} \right) /\ \left( \sum_{i=1}^{L}{N_{i-1}N_i} \right )$. Previous works \cite{Dey2017_ICANN,Dey2018_ITA} have shown that overall density levels of $<10\%$ incur negligible performance degradation -- which motivates us to implement such low density networks on hardware in the present work. \begin{comment} Some major highlights of pre-defined sparsity \cite{Dey2018_ITA} are: \begin{itemize} \item FCLs in NNs can be aggressively sparsified by removing a significant fraction of connections (weights) prior to starting training. \item There exist patterns for distributing the remaining weights in ways which maximize performance. \item Later junctions (closer to the output layer) need to be denser than earlier junctions (closer to the input layer). \end{itemize} \end{comment} \subsection{Hardware Architecture}\label{hw_desc} This subsection describes the mathematical algorithm and the subsequent hardware architecture for a NN using pre-defined sparsity. The input layer, i.e. the leftmost, is fed \emph{activations} ($a_0$) from the input data. For an image classification problem, these are image pixel values. Then the \emph{feedforward (FF)} operation proceeds as described in eq. 
\eqref{eq-ff}: \begin{IEEEeqnarray}{c}\label{eq-ff} a_i^{(j)} = \sigma \left( \sum _{f=1}^{d^{\mathrm{in}}_i} { w_{i}^{(j,k_f)}a_{i-1}^{(k_f)} + b_i^{(j)} } \right) \IEEEyesnumber \IEEEyessubnumber \label{eq-ff_a} \\ {\dot {a}}_i^{(j)} = {\sigma}^{'} \left( \sum _{f=1}^{d^{\mathrm{in}}_i} { w_{i}^{(j,k_f)}a_{i-1}^{(k_f)} + b_i^{(j)} } \right) \IEEEyessubnumber \label{eq-ff_b} \end{IEEEeqnarray} Both eqs. \eqref{eq-ff_a} and \eqref{eq-ff_b} are $\forall j \in \{1,\cdots,N_i\}, \forall i \in \{1,\cdots,L\}$. Here, $a$ is activation, ${\dot {a}}$ is its derivative (a-dot), $b$ is bias, $w$ is weight, and $\sigma$ and ${\sigma}^{'}$ are respectively the activation function and its derivative (with respect to its input), which are described further in Section \ref{fpga}. For $a$, ${\dot {a}}$ and $b$, subscript denotes layer number and superscript denotes a particular neuron in a layer. For the weights, ${w}_{i}^{(j,k_f)}$ denotes the weight in junction $i$ which connects neuron $k_f$ in layer $i-1$ to neuron $j$ in layer $i$. The summation for a particular right neuron $j$ is carried out over all $d^{\mathrm{in}}_i$ weights and left neuron activations which connect to it, i.e. $k_f \in \{1,\cdots,N_{i-1}\}$. These left indexes are arbitrary because the weights in a junction are \emph{interleaved}, or permuted.
This issue is dealt with by having only 1 weight memory bank per junction, which is used for all 3 operations. Moreover, sparsity reduces hardware complexity by reducing weight memory sizes. \end{comment} This architecture is ideal for implementation on reconfigurable hardware due to a) its parallel and pipelined nature, b) its low memory footprint due to sparsity, and particularly c) the degree of parallelism $z_i$ parameters, which can be tuned to efficiently utilize available hardware resources, as described in Sections \ref{impl} and \ref{effects_z}. \section{FPGA Implementation}\label{fpga} \subsection{Device and Dataset}\label{board} We implemented the architecture described in Section \ref{hw_desc} on an Artix-7 FPGA. This is a relatively small FPGA and therefore allowed us to explore efficient design styles and optimize our RTL to make it more robust and scalable. We experimented on the MNIST dataset, where each input is an image consisting of 784 8-bit grayscale pixels. Each ground truth output is one-hot encoded over the 10 classes 0--9. Our implementation uses powers of 2 for network parameters to simplify the hardware realization. Accordingly, we padded each input with 0s to make it have 1024 pixels. The outputs were padded with 0s to get a 32-bit one-hot encoding. Prior to hardware implementation, software experiments showed that having extra always-0 I/O did not detract from network performance. \subsection{Network Configuration and Training Setup}\label{config} The network has 1 hidden layer of 64 neurons, i.e. 2 junctions overall. Other parameters were chosen on the basis of hardware constraints and experimental results, which are described in Sections \ref{bitwidth} and \ref{impl}. The final network configuration is given in Table \ref{table-config}.
\begin{table}[!t] \begin{minipage}{\columnwidth} \renewcommand{\arraystretch}{1.2} \caption{Implemented Network Configuration} \label{table-config} \centering \begin{tabular}{|c|c|c|} \hline Junction Number ($i$) & 1 & 2\\ \hline Left Neurons ($N_{i-1}$) & 1024 & 64\\ \hline Right Neurons ($N_i$) & 64 & 32\\ \hline Fan-out ($d^{\mathrm{out}}_i$) & 4 & 16\\ \hline Weights ($W_i=N_{i-1}\times d^{\mathrm{out}}_i$) & 4096 & 1024\\ \hline Fan-in ($d^{\mathrm{in}}_i=W_i/N_i$) & 64 & 32\\ \hline $z_i$ & 128 & 32\\ \hline Block cycle ($W_i/z_i$) \footnote{In terms of number of clock cycles. Not considering the additional clock cycles needed for memory accesses.} & 32 & 32\\ \hline Density ($W_i/(N_{i-1}N_i)$) & 6.25\% & 50\%\\ \hline Overall Density & \multicolumn{2}{c|}{7.576\%}\\ \hline \end{tabular} \end{minipage} \end{table} We selected $12544$ MNIST inputs to comprise 1 epoch of training. The learning rate ($\eta$) was initially $2^{-3}$, halved after the first 2 epochs, and then halved every 4 epochs until it reached $2^{-7}$. Dynamic adjustment of $\eta$ leads to better convergence, while keeping it a power of 2 reduces the $\eta$ multiplications in eq. \eqref{eq-up} to bit shifts. Pre-defined sparsity leads to a total number of trainable parameters $= \left(w_1=4096\right)+\left(w_2=1024\right)+\left(b_1=N_1=64\right)+\left(b_2=N_2=32\right) = 5216$, which is much less than $12544$, so we theorized that overfitting was not an issue. We verified this using software simulations, and hence did not apply weight regularization. \subsection{Bit Width Considerations}\label{bitwidth} \subsubsection{Parameter Initialization} We initialized weights using the Glorot Normal technique, i.e. their values are taken from Gaussian distributions with mean $=0$ and variance $=2/\left(d^{\mathrm{out}}_i+d^{\mathrm{in}}_i\right)$.
This translates to a three standard deviation range of $\pm0.51$ for junction 1 and $\pm0.61$ for junction 2 in our network configuration described in Table \ref{table-config}. The biases in our architecture are stored along with the weights as an augmentation to the weight memory banks. So we initialized biases in the same manner as weights. Software simulations showed that this led to no degradation in performance from the conventional method of initializing biases with 0s. This makes sense since the maximum absolute value from initialization is much closer to 0 than their final values when the network converges, as shown in Fig. \ref{fig-valueranges}. To simplify the RTL, we used the same set of $W_i/z_i$ unique values to initialize all weights and biases in junction $i$. Again, software simulations showed that this led to no degradation in performance as compared to initializing all of them randomly. This is not surprising since an appropriately high value of initial learning rate will drive each weight and bias towards its own optimum value, regardless of similar values at the start. \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/valueranges.png} \caption{Maximum absolute values (left y-axis) for $w$, $b$ and $\delta$, and percentage classification accuracy (right y-axis), as the network is trained.} \label{fig-valueranges} \end{figure} \subsubsection{Fixed Point Configuration} We recreated the aforementioned initial conditions in software and trained our configuration to study the range of values for network variables until convergence. The results for $w$, $b$ and $\delta$ are in Fig. \ref{fig-valueranges}. The $a$ values are generated using the \emph{sigmoid} activation function, which has range $= [0,1]$. \begin{comment} in the output layer in order to make the cost calculation work as intended. 
An output activation close to 1 implies a high degree of network confidence that a particular input sample belongs to that particular output class. We used the \emph{sigmoid} ($\sigma(\cdot)$) function, described in eq. (\ref{eq-sigmoid}): \begin{IEEEeqnarray}{c}\label{eq-sigmoid} \sigma(x) = \frac{1}{1+e^{-x}} \IEEEyesnumber \IEEEyessubnumber \\ {\sigma}^{'}(x) = \sigma(x) \left(1-\sigma(x)\right) \IEEEyessubnumber \end{IEEEeqnarray} The sigmoid function approaches 0 as its input argument becomes more negative and approaches 1 as its input argument becomes more positive. Its derivative approaches 0 when its input argument has a high absolute value. \end{comment} To keep the hardware optimal, we decided on the same \emph{fixed point} bit configuration for all computed values and trainable parameters --- $a$, $\dot{a}$, $\delta$, $w$ and $b$. Our configuration is characterized by the bit triplet $\left(b_w,b_n,b_f\right)$, which are respectively the total number of bits, integer bits, and fractional bits, with the constraint $b_w = b_n+b_f+1$, where the 1 is for the sign bit. This gives a numerical range of $[-{2}^{b_n},2^{b_n}-2^{-b_f}]$ and a precision of $2^{-b_f}$. Fig. \ref{fig-valueranges} shows that the maximum absolute values of various network parameters during training stay within 8. Accordingly, we set $b_n=3$. We then experimented with different values for the bit triplet and obtained the results shown in Table \ref{table-bitwidth}. Accuracy is measured on the last 1000 training samples. Noting the diminishing returns and impractical utilization of hardware resources for high bit widths, we chose the bit triplet $\left(12,3,8\right)$ as being optimal.
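A short sketch of quantization and saturation in the chosen $\left(12,3,8\right)$ format, together with the bit-shift realization of multiplying by $\eta=2^{-s}$ mentioned in the training setup (both illustrative; the RTL uses dedicated saturating arithmetic units):

```python
def to_fixed(x, bn=3, bf=8):
    """Quantize x to the (bw, bn, bf) = (12, 3, 8) format: snap to the
    2^-bf grid, then saturate to [-2^bn, 2^bn - 2^-bf] instead of wrapping."""
    lo, hi = -2.0 ** bn, 2.0 ** bn - 2.0 ** -bf
    q = round(x * 2 ** bf) / 2 ** bf
    return min(max(q, lo), hi)

def scale_by_eta(x_int, s):
    """With eta = 2^-s, the eta multiplication in the update step reduces
    to an arithmetic right shift of the fixed-point integer by s bits."""
    return x_int >> s

print(to_fixed(10.0))     # saturates to 7.99609375
print(to_fixed(-10.0))    # saturates to -8.0
```

The saturating behavior reproduces the clipping described in the text: an out-of-range 10 becomes $2^3-2^{-8}\approx7.996$ and $-10$ becomes $-8$.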
\begin{table}[!t] \renewcommand{\arraystretch}{1.0} \caption{Effect of Bit Width on Performance} \label{table-bitwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $b_w$ & $b_n$ & $b_f$ & FPGA LUT & Accuracy after & Accuracy after\\ & & & Utilization \% & 1 epoch & 15 epochs\\ \hline 8 & 2 & 5 & 37.89 & 78 & 81\\ \hline 10 & 2 & 7 & 72.82 & 90.1 & 94.9\\ \hline 10 & 3 & 6 & 63.79 & 88 & 93.8\\ \hline 12 & 3 & 8 & 83.38 & 90.3 & 96.5\\ \hline 16 & 4 & 11 & \textcolor{red}{112} & 91.9 & 96.5\\ \hline \end{tabular} \end{table} \subsubsection{Dynamic Range Reduction due to Sparsity} We found that sparsity leads to reduction in the dynamic range of network variables, since the summations in eqs. \eqref{eq-ff} and \eqref{eq-bp} are over smaller ranges. This motivated us to use a special form of adder and multiplier which preserves the bit triplet between inputs and outputs by clipping large absolute values of output to either the positive or negative maximum allowed by the range. For example, 10 would become 7.996 and $-10$ would become $-8$. Fig. \ref{fig-a1distribution} analyzes the worst clipping errors by comparing the absolute values of the argument of the sigmoid function in the hidden layer, i.e. $\sum{w_1a_0}+b_1$ from eq. \eqref{eq-ff}, for our sparse case vs. the corresponding FC case ($d^{\mathrm{out}}_1=64$, $d^{\mathrm{out}}_2=32$). Notice that the sparse case only has 17\% of its values clipped due to being outside the dynamic range afforded by $b_n=3$, while the FC case has 57\%. The sparse case also has a smaller variance. This implies that the hardware errors introduced due to finite bit-width effects are less pronounced for our pre-defined sparse configuration as compared to FC. \begin{figure}[!t] \centering \includegraphics[width = 0.6\linewidth]{figs/a1distribution.png} \caption{Histograms of absolute value of eq. \eqref{eq-ff}'s $\sum{w_1a_0}+b_1$ with respect to dynamic range for (a) sparse vs. 
(b) FC cases, as obtained from ideal floating point simulations on software. Values right of the pink line are clipped.} \label{fig-a1distribution} \end{figure} \subsubsection{Experiments with ReLU} As demonstrated in the literature \cite{Krizhevsky2012,Szegedy2015,He2016}, the native (ideal) ReLU activation function is more widely used than sigmoid due to the former's better performance, absence of the vanishing gradient problem, and tendency to generate sparse outputs. However, ideal ReLU is not practical for hardware due to its unbounded range. We experimented with a modified form of the \emph{ReLU} activation function where the outputs were clipped to a) 8, which is the maximum supported by $b_n=3$, and b) 1, to preserve bit width consistency in the multipliers and adders and ensure compatibility with sigmoid activations. Fig. \ref{fig-act} shows software simulations comparing sigmoid with these cases. Note that ReLU clipped at 8 converges similarly to sigmoid, but sigmoid has better initial performance. Moreover, there is no need to promote extra sparsity by using ReLU because our configuration is already sparse, and sigmoid does not suffer from vanishing gradient problems because of the small range of our inputs. We therefore concluded that sigmoid activation for all layers is the best choice.
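The clipped-ReLU variants we experimented with can be sketched as follows (illustrative; the cap is 8 or 1 as in the experiments above):

```python
def relu_clipped(x, cap):
    """ReLU with output clipped to [0, cap]; cap = 8 (maximum supported
    by b_n = 3) or cap = 1 (sigmoid-compatible range) in our experiments."""
    return min(max(x, 0.0), cap)

def relu_clipped_deriv(x, cap):
    """Derivative is 1 strictly inside (0, cap) and 0 outside, matching
    the piecewise definition used in software simulations."""
    return 1.0 if 0.0 < x < cap else 0.0
```

Unlike the sigmoid, this needs only comparators in hardware, but its gradient vanishes entirely once an input saturates at either bound.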
\begin{comment} \begin{IEEEeqnarray}{c}\label{eq-relu} ReLU(x) = \begin{cases} 0 & \text{if } x\leq0 \\ x & \text{if } 0<x<1 \\ 1 & \text{if } x\geq1 \end{cases} \IEEEyesnumber \IEEEyessubnumber \\ {ReLU}^{'}(x) = \begin{cases} 0 & \text{if } x\leq0 \text{ or } x\geq1 \\ 1 & \text{if } 0<x<1 \end{cases} \IEEEyessubnumber \end{IEEEeqnarray} \end{comment} \begin{figure}[!t] \centering \includegraphics[width = 0.65\linewidth]{figs/act.png} \caption{Comparison of activation functions for $a_1$.} \label{fig-act} \end{figure} \subsection{Implementation Details}\label{impl} \subsubsection{Sigmoid Activation} The sigmoid function uses exponentials, which are computationally expensive to evaluate in hardware. So we pre-computed the values of $\sigma(\cdot)$ and ${\sigma}^{'}(\cdot)$ and stored them in look-up tables (LUTs). Interpolation was not used; instead, we computed sigmoid for all 4096 possible 12-bit arguments up to the full 8 fractional bits of accuracy. On the other hand, its derivative values were computed to $6$ fractional bits of accuracy since they have a range of $[0,2^{-2}]$. Note that clipped ReLU activation uses only comparators and needs no LUTs. However, the number of sigmoid LUTs required is $\sum_{i=1}^{L}{z_i/d^{\mathrm{in}}_i}=3$, which incurs negligible hardware cost. This reinforces our decision to use sigmoid instead of ReLU. \subsubsection{Interleaver} We used clash-free interleavers of the \emph{SV+SS} variation, as described in \cite{Dey2017_Asilomar}. Starting vectors for all sweeps were pre-calculated and hard-coded into FPGA logic. \subsubsection{Arithmetic Units} We numbered the weights sequentially on the right side of every junction, which leads to permuted numbering on the left side due to interleaving. We chose $z_i\geq d^{\mathrm{in}}_i, \forall i \in \{1,\cdots,L\}$. This means that the $z_i$ weights accessed in a cycle correspond to an integral ($z_i/d^{\mathrm{in}}_i$) number of right neurons, so the FF summations in eq.
\eqref{eq-ff} can occur in a single cycle. This eliminates the need for storing FF partial sums. The total number of multipliers required for FF is $\sum _{i=1}^{L}{z_i}$. The summations also use a tree adder of depth $={\text{log}}_{2}\left(d^{\mathrm{in}}_i\right)$ for every neuron processed in a cycle. BP does not occur in the first junction since the input layer has no $\delta$ values. The BP summation in eq. \eqref{eq-bp_b} will need several cycles to complete for a single left neuron since weight numbering is permuted. This necessitates storing $\sum _{i=2}^{L}{z_i}$ partial sums, however, tree adders are no longer required. Eq. \eqref{eq-bp_b} for BP has 2 multiplications, so the total number of multipliers required is $2\sum _{i=2}^{L}{z_i}$. The UP operation in each junction $i$ requires $z_i$ adders for the weights and $z_i/d^{\mathrm{in}}_i$ adders for the biases, since that many right neurons are processed every cycle. Only the weight update requires multipliers, so their total number is $\sum _{i=1}^{L}{z_i}$. Our FPGA device has 240 DSP blocks. Accordingly, we implemented the 224 FF and BP multipliers using 1 DSP for each, while the other 160 UP multipliers and all adders were implemented using logic. \subsubsection{Memories and Data} All memories were implemented using block RAM (BRAM). The memories for $a$ and $\dot{a}$ never need to be read from and written into in the same cycle, so they are single-port. $\delta$ memories are true dual-port, i.e. both ports support reads and writes. This is required due to the read-modify-write nature of the $\delta$ memories since they accumulate partial sums. The `weight+bias' memories are simple dual-port, with 1 port used exclusively for reading the $k$th cell in cycle $k$, and the other for simultaneously writing the $(k-1)$th cell. These were initialized using Glorot normal values while all other memories were initialized with 0s. 
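The sigmoid LUT precomputation described earlier (all 4096 possible 12-bit arguments, outputs to 8 fractional bits) can be sketched in software; the signed-code interpretation below is an assumption about the fixed-point encoding, made for illustration:

```python
import math

def build_sigmoid_lut(bw=12, bn=3, bf=8):
    """Precompute sigma(x) for every bw-bit fixed-point argument,
    quantizing outputs to bf fractional bits (a software sketch of the
    LUT scheme; the RTL keeps these values in on-chip memory)."""
    lut = []
    for code in range(2 ** bw):
        # interpret the code as a signed (bw, bn, bf) fixed-point number
        signed = code - 2 ** bw if code >= 2 ** (bw - 1) else code
        x = signed / 2 ** bf
        lut.append(round(1.0 / (1.0 + math.exp(-x)) * 2 ** bf))
    return lut

lut = build_sigmoid_lut()   # sigma(0) = 0.5 is stored at address 0 as 128
```

One such table (and one for the derivative) serves each group of $z_i/d^{\mathrm{in}}_i$ right neurons processed per cycle, hence the total of 3 LUTs.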
\begin{comment} Note that lower values of $z$ result in a smaller number of deep memories per bank, which is ideal for a fully BRAM implementation. Given a more powerful device, a larger value of $z$ will lead to many shallow memories in each bank, and thus make it easier to meet clash-freedom constraints, but may need logic-based distributed RAMs to share the burden. \end{comment} The ground truth one-hot encodings for all $12544$ inputs were stored in a single-port BRAM, initialized with word size $=10$ to represent the 10 MNIST outputs. After reading, each word was padded with 0s to make it 32 bits long. On the other hand, the input data was too big to store on-chip. Since the native MNIST images are $28\times28=784$ pixels, the total input data size is $12544\times784\times8 = 78.68$ Mb, while the total device BRAM capacity is only $4.86$ Mb. So the input data was fed from a PC over a UART interface. \subsubsection{Network Configuration} Here we explain the choice of network configuration in Table \ref{table-config}. We initially picked $N_2=16$, which is the minimum power of 2 above 10. Since later junctions need to be denser than earlier ones to optimize performance \cite{Dey2018_ITA}, we experimented with junction 2 density and show its effects on network performance in Fig. \ref{fig-jn2density}. We concluded that 50\% density is optimum for junction 2. Note that the individual $z_i$ values should be adjusted to have the same block cycle length for all junctions. This ensures an always full pipeline with no stalls, which achieves the ideal throughput of 1 input per block cycle. This, along with the constraint $z_i\geq d^{\mathrm{in}}_i, \forall i \in \{1,\cdots,L\}$, led to $z_1=256$, which was beyond the capacity of our FPGA. So we increased $N_2$ to 32 and set $z_2$ to the minimum value of 32, leading to $z_1=128$.
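The stall-free condition, equal block cycle lengths in all junctions, can be sanity-checked in a couple of lines using the Table \ref{table-config} values:

```python
def block_cycles(W_i, z_i):
    """Clock cycles needed to stream all W_i junction weights at z_i per
    cycle (ignoring the extra memory-access cycles the text notes)."""
    assert W_i % z_i == 0, "z_i must divide W_i"
    return W_i // z_i

# equal block cycles in both junctions -> an always-full pipeline, no stalls
assert block_cycles(4096, 128) == block_cycles(1024, 32) == 32
```

With the rejected $z_2=16$ alternative, junction 2 would take 64 cycles against junction 1's 32, stalling the pipeline every other input.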
We experimented with $d^{\mathrm{out}}_1=8$, but the resulting accuracy was within 1 percentage point of our final choice of $d^{\mathrm{out}}_1=4$. \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/jn2density.png} \caption{Performance for different junction 2 densities, keeping junction 1 density fixed at 6.25\%.} \label{fig-jn2density} \end{figure} \subsubsection{Timing and Results} A block cycle in our design is $\left(W_i/z_i+2\right)$ clock cycles since each set of $z_i$ weights in a junction needs a total of 3 clock cycles for each operation. The first and third are used to compute memory addresses, while the second performs arithmetic computations and determines our clock frequency, which is 15 MHz. We stored the results of several training inputs and fed them out to 10 LEDs on the board, each representing an output from 0-9. The FPGA implementation performed according to RTL simulations and within $1.5$ percentage points of the ideal floating point software simulations, giving 96.5\% accuracy in 14 epochs of training. \begin{comment} \begin{figure}[!t] \centering \includegraphics[width = 0.7\linewidth]{figs/fpgaworking.jpg} \caption{Our design working on the Xilinx XC7A100T-1CSG324C FPGA.} \label{fig-fpgaworking} \end{figure} Fig. \ref{fig-fpgaworking} shows our FPGA in action.
\begin{figure}[!t] \centering \includegraphics[width = 0.65\linewidth]{figs/cyclebreakup.png} \caption{Breaking up each operation into 3 clock cycles.} \label{fig-cyclebreakup} \end{figure} \end{comment} \subsection{Effects of $z$}\label{effects_z} \begin{figure}[!t] \centering \includegraphics[width = 0.95\linewidth]{figs/effects_z.png} \caption{Dependency of various design and performance parameters on the total $z$, keeping the network architecture and sparsity level fixed.} \label{fig-effects_z} \end{figure} A key highlight of our architecture is the total degree of parallelism $\sum_{i=1}^{L}{z_i}$, which can be reconfigured to trade off training time and hardware resources while keeping the network architecture the same. This is shown in Fig. \ref{fig-effects_z}. The present work uses total $z=160$, which leads to a block cycle time of $2.27\mu s$, but economically uses arithmetic resources and has a small number of deep memories, making it ideal for a fully BRAM implementation. Given more powerful FPGAs, the same architecture can be reconfigured to achieve higher GOPS count and process inputs in $0.4\mu s$, albeit at the cost of more FPGA resources and a greater number of shallower memories. Moreover, this reconfigurability also allows a complete change in network structure and hyperparameters to process a new dataset on the same device if desired. \begin{comment} The final FPGA utilization after implementation is shown in Table \ref{table-impl12b}. The power consumption was 395 mW dynamic and 101 mW static. 
\begin{table}[!t] \renewcommand{\arraystretch}{1.1} \caption{FPGA Utilization after Implementation} \label{table-impl12b} \centering \begin{tabular}{|c|c|c|c|} \hline Resource & Used & Available & \% Used\\ \hline LUT & $52862$ & $63400$ & 83.38\\ \hline LUTRAM & $8771$ & $19000$ & 46.16\\ \hline Flipflop & $15754$ & $126800$ & 12.42\\ \hline DSP & 224 & 240 & \textbf{93.33}\\ \hline BRAM & 40 & 135 & 29.63\\ \hline \end{tabular} \end{table} \end{comment} \section{Conclusion}\label{conc} This paper demonstrates an FPGA implementation of both training and inference of a neural network pre-defined to be sparse. The architecture is optimized for FPGA implementation and uses parallel and pipelined processing to increase throughput. The major highlights are the degrees of parallelism $z_i$, which can be quickly reconfigured to re-allocate FPGA resources, thereby adapting any problem to any device. While the present work uses a modest FPGA board as proof-of-concept, this reconfigurability allows us to explore various types of networks on bigger boards as future work. Our RTL is fully parametrized, and the code is available on request. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}} \IEEEPARstart {S}{tream} processing plays an important role in solving many real-world problems. From fraud detection (e.g., real-time financial activity~\cite{parikh2008scalable}) to real-time recommendations (e.g., analytics over microblogs~\cite{wang2016efficient,sharma2016graphjet} and live streaming~\cite{liao}), applications that generate stream data are ubiquitous. Unlike structured stream data, in which hot keys are relatively evenly distributed over the whole lifetime~\cite{Wikipedia}, real-world stream datasets often exhibit the unique feature that their inherent hot keys evolve over time. A key that is hot in one interval may be non-hot in the next. A typical example is the {\sf twitter} dataset, whose catchwords may change frequently across different instants of time. It has therefore become necessary and important to process these {\em time-evolving} stream datasets efficiently. Reasonably distributing time-evolving stream datasets over a cluster of machines can provide businesses with cost-effective services. To exploit the maximum benefit, time-evolving stream processing systems need to excel in at least two aspects. First, the load imposed by time-evolving stream datasets must be balanced to the maximum extent. This determines whether each worker is fully utilized, and it directly affects the overall latency and throughput of stream processing. Second, considering that the state of the stream data is backed up on multiple workers, the combined memory overhead on all machines should be controlled with a minimum of duplicates. This indicates how much memory is stored redundantly, which directly influences the scalability of stream processing systems. Unfortunately, few existing solutions can meet both of these hard requirements. Fields Grouping utilizes key-based routing, which is prone to load imbalance across multiple workers~\cite{Storm}.
Shuffle Grouping~\cite{Storm} uses a round-robin manner to assign the load. However, it potentially replicates the state associated with keys on every worker, with memory overhead growing linearly with the number of workers. Other solutions attempt to balance the load by leveraging operator migration~\cite{shah2003flux,xing2005dynamic,gedik2014partitioning,gedik2014elastic,castro2013integrating,chen2016bufferbank,basanta2017patterns}. A part of the keys is allowed to be rebalanced when load imbalance is detected. A number of studies~\cite{nasir2015power, nasir2016two} also aim to reduce the rebalancing overhead by identifying hot keys and assigning them more workers. These earlier efforts on structured stream processing make significant progress toward a reasonable tradeoff, which, however, remains far from satisfactory for practical time-evolving stream processing. This is particularly true when the number of machines is scaled (as discussed in Section 2.3). In this paper, we address whether and how we can build such an efficient and scalable time-evolving stream processing system. Nevertheless, it remains tremendously challenging to build a time-evolving stream processing system with all the desired properties satisfied. First, since time-evolving stream processing often involves a large number of recent hot key identification operations, this identification should be not only accurate but also efficient, which is notoriously difficult. In order to track the most recently occurring keys, a system necessarily has to preserve a large amount of key-related information. Although existing approaches make great progress on accuracy, it comes at the expense of substantial computation~\cite{mabroukeh2010taxonomy,shan2009frequent,lim2014fast} or memory~\cite{golab2003identifying,chang2003finding,arasu2004approximate,wang2005tfp,deng2012new} overhead.
Second, handling time-evolving stream datasets may also require timely adjustment for load balance at every moment, which is also difficult. Even worse, heterogeneous resources may further exacerbate this problem. To assign the appropriate workers for load balance, the servers have to frequently collect state information from workers, with considerable communication overhead~\cite{pietzuch2006network,buddhika2017online}. It remains challenging to make efficient worker-assignment decisions that preserve real-time load balance. In this paper, we propose an efficient grouping approach (named FISH) to process time-evolving streaming data at scale. Interestingly, we observe that, no matter how large a time interval is, the keys of a time-evolving stream dataset within this bounded scope follow a skewed power-law distribution in which a small fraction of keys dominate most of the load. This therefore makes it possible to achieve real-time load balance within a bounded time interval by using hierarchical treatment~\cite{todtling2005one}. We present an epoch-based approach to accurately identify recent hot keys. Each {\em epoch} can be a custom-sized key sequence. Intra-epoch identification counts key occurrences, storing counts only for the most frequent keys~\cite{manku2002approximate,karger2004simple} to preserve low memory overhead. Inter-epoch identification uses a time-aware approach~\cite{mabroukeh2010taxonomy,shan2009frequent,lim2014fast}, which adopts epoch-level (rather than tuple-level) updates to reduce superfluous computation. To ensure the efficiency of worker assignment, we also exploit the simplicity of operations and the similarity of keys. We further propose a heuristic approach to infer (rather than prohibitively communicate) the information of remote workers in a more efficient manner.
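Bounded-memory intra-epoch counting of this kind can be realized, for instance, with a Misra--Gries style summary (a sketch of one plausible realization, not necessarily the exact algorithm FISH uses):

```python
def epoch_hot_keys(stream, k):
    """Track at most k candidate hot keys over one epoch's key sequence,
    so memory stays bounded regardless of stream length. Any key whose
    frequency exceeds n/(k+1) is guaranteed to survive in the summary."""
    counts = {}
    for key in stream:
        if key in counts:
            counts[key] += 1
        elif len(counts) < k:
            counts[key] = 1
        else:
            # no free slot: decrement all candidates, evicting zeros
            for c in list(counts):
                counts[c] -= 1
                if counts[c] == 0:
                    del counts[c]
    return counts

# a dominant key survives despite the bounded k-entry summary
hot = epoch_hot_keys(["a"] * 50 + ["b", "c", "d"] * 5, k=2)
```

Because each epoch is processed independently, a key that dominates one epoch but disappears in the next naturally ages out of the summary.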
This paper makes the following contributions: \begin{itemize} \item We make a comprehensive study of the deficiencies of state-of-the-art grouping schemes for time-evolving stream datasets in terms of load balance and scalability. \item We present an efficient and scalable grouping scheme with epoch-based hot key identification and heuristic worker assignment, which can provide low-latency and high-throughput time-evolving stream processing. \item We evaluate our approach on both synthetic and real-world stream datasets. Experimental results show that our approach significantly outperforms the state-of-the-art, reducing the average and 99th percentile latency by 87.12\% and 76.34\% (vs. W-Choices), and memory consumption by 96.66\% (vs. Shuffle Grouping). \end{itemize} The rest of this paper is organized as follows. We first give the background and motivation in Section~\ref{sec:Background and Motivation}. Section~\ref{sec:Overview} provides an overview of our approach. Section~\ref{sec:Mechanism} elaborates the design of FISH. Section~\ref{sec:Extension} describes the extension for handling dynamic scenarios with worker variation. Section~\ref{sec:Evaluation} discusses the results. We survey the related work in Section~\ref{sec:Related work} and conclude this work in Section~\ref{sec:Conclusion}. \section{Background and Motivation}\label{sec:Background and Motivation} In this section, we first briefly review the background of distributed stream processing and existing stream partitioning schemes. We then investigate the potential inefficiency of existing solutions on time-evolving stream datasets through a comprehensive motivating study, followed by several challenges in coping with the problem. \subsection{Distributed Stream Processing} Distributed stream processing engines (DSPEs)~\cite{Storm,Flink,zaharia2013discretized,Samza,neumeyer2010s4} often run on a cluster of machines that can communicate with each other via messages.
The target stream applications are processed by these DSPEs in the form of a directed acyclic graph (DAG). Figure~\ref{fig:DAG} depicts a typical DSPE workflow for the top-$k$ {\sf word count} stream application\footnote{Word count is a simple program that counts the number of occurrences of each word in a given input stream.} based on a DAG, where each vertex represents an operator of the stream engine that is applied to an incoming data stream for data transformation. Each directed edge represents a data channel that points from an upstream operator (also called a {\em source} for short) to a downstream operator (called a {\em worker} for short). The data flow along these edges as a series of tuples, each associated with a key. In order to achieve high performance, a DSPE usually exploits data parallelism by running many instances of these operators. Each operator is responsible for handling a set of partitioned input sub-stream data, which relies on the choice of a particular grouping scheme (as will be discussed in Section 2.2). A well-known problem for DSPEs in this setting is load imbalance. For the example in Figure~\ref{fig:DAG}, the key for each tuple is the word itself. Sources distribute tuples to workers based on a specific grouping scheme. Workers count the occurrence number of each word. The hot key $F$ in this time-evolving stream data may potentially be identified as non-hot, leading to imbalanced tuple assignment. Also note that keys are often duplicated across different workers, with memory overhead proportional to the number of word types. The inefficiency of these aspects will be extensively investigated in Section 2.3. \begin{figure}[t] \centering \includegraphics[width=3.5in]{images/DAG} \caption{The typical workflow of distributed stream processing. Each colored tuple indicates a unique key. The tuples of the time-evolving stream dataset flow into the sources from the upper-right corner.
} \label{fig:DAG} \end{figure} \subsection{Existing Stream Grouping Schemes} \indent The input stream is composed of a sequence of tuples, each of which is associated with a key. As shown in Figure~\ref{fig:DAG}, different colored tuples correspond to different keys. A grouping scheme assigns each tuple to a worker by its key. Different grouping schemes make different decisions for this key assignment, summarized as follows: \begin{itemize}[leftmargin=*] \item \bfseries Shuffle Grouping (SG)\mdseries~\cite{Storm}: This scheme sends each tuple from the source to a worker selected in round-robin order, ensuring that tuples are evenly distributed across workers. \item \bfseries Field Grouping (FG)\mdseries~\cite{Storm}: This scheme ensures that the same key is always sent to the same worker via hashing. \item \bfseries Partial Key Grouping (PKG)\mdseries~\cite{nasir2015power}: This scheme can be treated as a bounded FG: a given key is allowed to be processed by at most two workers. \item \bfseries D-Choices (D-C)\mdseries~\cite{nasir2016two}: This scheme improves PKG by allowing frequent keys to be processed by at most $d$ workers, where $d$ is determined by the key distribution. Other keys continue using PKG. \item \bfseries W-Choices (W-C)\mdseries~\cite{nasir2016two}: This scheme is similar to D-C; the only difference is that it allows frequent keys to be processed by the entire set of workers instead of $d$ of them. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=7.0in]{images/moti_latency} \vspace{-1em} \caption{Latency of FG, PKG, SG, D-C and W-C on the Amazon Movie Review dataset with different numbers of workers. D-C100 and D-C1000 indicate maximum key-set sizes of 100 and 1000, respectively.
W-C uses a similar notation.} \vspace{-1em} \label{fig:MO_LA} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=3.33in]{images/moti_memory} \caption{Normalized memory overhead of FG, PKG, SG, D-C and W-C with different numbers of workers. All results are normalized to FG.} \label{fig:MO_MEM} \end{figure} \subsection{Issues with Existing Grouping Schemes on Time-evolving Stream Datasets: A Motivating Study}\label{sec:Motivation Study} Previous efforts~\cite{nasir2015power,nasir2016two} have made significant advances on the load balance problem of DSPEs, particularly for skewed stream data. By considering the hotness of keys over the entire processing lifetime, however, their key identification and assignment are essentially unaware of the frequency variation of hot keys within a bounded time interval. As a result, existing grouping schemes can suffer either load imbalance or prohibitive memory overhead on time-evolving stream processing, which becomes more serious at scale (i.e., with a large number of workers). To investigate this problem, we have conducted a set of experiments on the real-world time-evolving Amazon Movie Review stream dataset\footnote{The Amazon Movie Review dataset reflects movie popularity, which can vary significantly across time periods.} with different machine scales (16, 32, 64 and 128 workers) for the {\sf word count} application under the grouping schemes discussed in Section 2.2. Note that we test the D-C and W-C schemes by considering the top-$100$ and top-$1000$ keys in this motivating study. \textbf{Load Imbalance Issue}\quad Figure~\ref{fig:MO_LA} depicts the latency results; latency is widely used to characterize the load balance of DSPEs~\cite{xing2005dynamic,nasir2015power,nasir2016two}. The lower the latency, the more balanced the system. The 99th percentile latency of FG and PKG reaches up to 3,945 and 2,808 milliseconds, respectively.
Both FG and PKG have high latency because they assign only one or two workers to each key; the skewed key distribution then results in extreme load imbalance across workers. The latency of W-C and D-C is related to the number of tracked keys. With 1000 keys, the latency of both W-C1000 and D-C1000 is almost the same as that of PKG, and it increases significantly as the number of workers grows. This is due to inaccurate identification, in the sense that some hot keys are detected as non-hot. With 100 keys, D-C100 and W-C100 improve latency somewhat, but the scalability issue below arises. \textbf{Scalability Issue}\quad Figure~\ref{fig:MO_MEM} depicts the memory overhead results. FG assigns only one worker per key and hence has little memory overhead, as shown in Figure~\ref{fig:MO_MEM}. In contrast, SG has the highest memory overhead, up to 23.16x in the case of 128 workers, since many states are replicated. D-C100 and W-C100 behave similarly to SG: as the number of workers increases, their memory overhead increases significantly. This is due to inaccurate identification, in the sense that some non-hot keys are detected as hot. Therefore, SG, D-C100 and W-C100 may suffer from scalability problems. To ensure system scalability, we set the maximum key-set size to $1000$ in the following experiments. {\bf Summary}\quad None of the existing grouping schemes performs well in both load balance and scalability. Although the state-of-the-art D-C and W-C schemes achieve a relatively good tradeoff, they remain far from the ideal situations (where SG is optimal for the latency criterion and FG is optimal for the memory overhead criterion). More importantly, their tradeoff gradually deteriorates as the number of workers increases.
An effective grouping scheme for efficiently processing time-evolving stream data at scale is still lacking. \subsection{Challenges of Balancing Time-evolving Stream Processing at Scale}\label{sec:Background and Motivation:challenge} Time-evolving stream data is characterized by significant frequency variation of keys across different time intervals. Time-evolving stream processing therefore needs to consider not only the {\em global} load balance of the final state over the entire lifetime, but also the {\em local} real-time load balance within each time interval, which raises several unique challenges. First, by considering the time-evolving factor, the identification scope for hot keys changes from the entire processing lifetime to a large number of short time intervals. The problem of identifying recent hot keys within a time interval has been extensively studied in the data mining field, and solutions fall into two broad categories. Sliding-window approaches~\cite{golab2003identifying,chang2003finding,arasu2004approximate,wang2005tfp,deng2012new} use a window threshold to bound recent key counting. To obtain accurate results, they have to use a large window size at the cost of potentially prohibitive memory overhead. Time-aware approaches~\cite{mabroukeh2010taxonomy,shan2009frequent,lim2014fast} give recent items more weight, so that a stale item is more likely to be pruned than a recent one. These approaches use a replacement strategy to reduce memory overhead, but every update requires a time-weight modification for all items, leading to a large amount of computation. Moreover, time-evolving stream processing often involves a large number of recent hot key identification operations, which can easily exceed millions for real-world stream datasets.
Each of these operations must therefore be efficient and lightweight so that the whole DSPE system can realize its potential in load balance and scalability. An effective technique that accurately identifies recent hot keys while preserving low overhead in both computation and memory consumption is still lacking. Second, after hot key identification, we have to assign an appropriate worker to each identified recent hot key. As discussed previously, traditional stream processing only considers the global load balance of the final state. It thus simplifies the worker assignment problem by evenly assigning all tuples to the given workers. Nevertheless, in reality the processing capability of workers often differs for many reasons, e.g., heterogeneous devices or network delays. As a consequence, existing approaches are likely to assign keys to a busy worker in some time interval, leading to local imbalance. An ideal worker assignment method would select the optimal candidate worker according to the number of unprocessed tuples and the processing capacity of each worker. Nevertheless, it is extremely difficult, if not impossible, to make such an assignment efficiently. The unprocessed-tuple information of workers is usually located remotely with respect to the source. Frequently requesting the queue states from workers may incur a large amount of communication overhead between sources and workers. Worse, this requested information may quickly become out of date, since the state of workers often changes dramatically. Developing an efficient worker assignment for time-evolving stream processing thus remains tremendously challenging. \section{Overview}\label{sec:Overview} To cope with the aforementioned challenges, we design our grouping approach in accordance with the following observations about time-evolving stream processing.
\underline{\bf\em Observation 1}: {\em The occurrence frequencies of recent hot keys and non-hot keys in time-evolving stream data still differ greatly, following a skewed distribution.} A typical example supporting this observation is the {\sf twitter} dataset. Although its catchwords change over time, their occurrence frequency remains significantly higher than that of non-hot keys (within a short interval). This finding has at least two implications for recent hot key identification. First, in spite of the frequent variation of hot keys, a small fraction of keys still dominates most of the load during the whole stream processing. This allows us to continue using the ``eighty-twenty'' golden rule: handling these few critical keys balances most of the load. Since only these few keys are replicated across multiple workers, a large amount of memory overhead can be saved. Second, hot keys change over time. Potentially hot keys may be inaccurately identified as non-hot ones from the global perspective of prior work~\cite{nasir2016two}. Given the skewed distribution of hot keys within a short interval, recent hot keys should be identified accurately in a locally-bounded (instead of global) manner. \underline{\bf\em Observation 2}: {\em Since the operations of a stream application are usually uniform, the processing time for the same batch of tuples on the same worker can be considered the same, with a negligible performance difference.} \begin{figure}[t] \centering \includegraphics[width=3.5in]{images/ob2} \vspace{-1.5em} \caption{The processing time for 10 workers.
Each worker processes 50,000 tuples of the Amazon Movie Review dataset 12 times.} \label{fig:ob2} \vspace{-1.5em} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.5in]{images/Overview} \vspace{-1.5em} \caption{FISH infrastructure relative to the stream processing engine and its internal organization ({\em our work is shaded})} \vspace{-1.5em} \label{fig:overview} \end{figure} Figure~\ref{fig:ob2} illustrates the performance of processing 50,000 tuples 12 times on each of 10 randomly-selected workers. The performance fluctuation is on average as small as 4.37\%, which is often considered reasonable and negligible in practice~\cite{kapoor2004capprobe}. This finding has an important implication for selecting an appropriate worker: we need to know which worker has the fewest unprocessed tasks, information that is generally unavailable at the source end. The intuitive method obtains this information via considerable communication with the workers. In contrast, this observation allows us to {\em infer} (rather than communicate) the amount of unprocessed computation on all workers in a more efficient manner. Based on these implications, we propose a custom-made grouping approach with specially-designed key identification and worker assignment. Figure~\ref{fig:overview} illustrates the overview of our approach (named FISH), which consists of two major components. {\bf Accurate Recent Hot Key Identification} (Section 4.1): This component accurately identifies recent hot keys in time-evolving stream data. Although there is a vast body of previous work on recent hot key identification, these approaches were originally designed for mining accurate results in data-mining applications and do not satisfy the efficiency requirement, i.e., low overhead in computation and memory consumption, of stream processing applications.
We present a specialized recent hot key identification approach that accurately identifies hot keys over a recent time interval with low computational and memory overhead. {\bf Heuristic Worker Assignment} (Section 4.2): Given a set of workers, this component assigns the identified hot keys to appropriate workers for load balance. Unlike previous studies that only consider the global load balance of the final state (as discussed in Section 2.4), we additionally consider the local load balance within every time interval, which is particularly important for time-evolving stream processing. In contrast to communication-based worker assignment approaches with heavy communication overhead, we propose a heuristic worker assignment that precisely infers the processing state of each worker from history information in a more efficient manner. Note that this work focuses on the common case where each tuple is associated with a single key. For the scenario where each tuple carries multiple keys, FISH can still be extended for specific applications by prioritizing or synthesizing the multiple keys; this is out of scope here and can be interesting future work. \begin{comment} \section{Problem Definition}\label{sec:Problem Definition} \indent We consider the scenario where the scalability problem is exacerbated for extremely skewed workloads in DSPEs. In order to pursue high performance in DSPEs, inspired us to explore a scalable key grouping scheme which balances the load with low memory overhead. This section gives the definitions of our problem. \noindent \textbf{Preliminary} Low latency and high throughput are common goals for DSPEs. In order to achieve low latency while ensuring high throughput, we need to carefully balance load in DSPEs. That means that we need to fully mobilize every worker.
The latency ($T_{l}$) contains the time of transmission ($T_{t}$), queuing ($T_{q}$), and processing ($T_{p}$).Their relationship can be described as follows:$$T_{l} = T_{t} + T_{q} +T_{p} \eqno{(1)}$$ \indent For each tuple, since stream processing usually takes the same kind of operation for processing each tuple, $T_{t}$ and $T_{p}$ are generally fixed in the same worker, the latency primarily depends on the time each tuple waits in the queue ($Q_{w}$) of the worker before being processed. We have ($i$ represents tuples that have not yet been processed in the queue):$$T_{q} = \sum_{i\in Q_{w}} T_{p}^{i} \eqno{(2)}$$ \indent The tuples cannot be processed until all tuples in the queue have been processed. Since the operation for each tuple is basically similar to the streaming application, there is little difference in the calculation for each tuple~\cite{Storm,nasir2015power,nasir2016two}. Perhaps different workers have different processing capacity such as heterogeneous and so on, but there is the slight difference in the time for the same worker to process each tuple. Hence, we can simplify the formula as follows ($\overline{T_{p}^{w}}$ represents the average processing time of each tuple in worker $w$, $W$ means a set of workers): $$T_{q} =\left | Q_{w} \right | \times \overline{T_{p}^{w}}, \quad w \in W \eqno{(3)}$$ \indent This suggests that we need to keep the number of tuples ($\left |Q_{w} \right|$) in each worker queue as small as possible if we want to keep the low latency. A straightforward way is to use Shuffle Grouping. However, it has a huge memory overhead when dealing with state operators. Owing to each worker has the potential to handle each key, the key-related state has to be replicated on each worker. Memory overhead is directly related to the number of workers, thus this will directly hinder the system scalability. 
\noindent \textbf{Design Goal}\quad In other words, our target is to design a key grouping scheme to achieve load balancing under the premise of ensuring system scalability. In this paper, at time $t$, we define $L_{w}(t)$ ($L_{w}(t)$ represents the unprocessed \emph{load} of worker $w$ at time $t$) is the fraction of the queue length$Q_{w}$ and the average processing time of the worker$\overline{T_{p}^{w}}$ which directly reflect the status of each worker. $$L_{w}(t) = {\left | Q_{w} \right |\times \overline{T_{p}^{w}}} \eqno{(1)}$$ \indent We define imbalance such as used by Flux~\cite{shah2003flux}, PKG~\cite{nasir2015power}, and W-C\&D-C~\cite{nasir2016two} which is the difference between the maximum and the average load of the workers: $$I\left ( t \right ) = \mbox{max} L_{w}\left ( t \right ) - \mbox{avg}L_{w}\left ( t \right ) ,\quad w\in W \eqno{(5)}$$ Our target is to minimize $\mbox{avg}L_{w}$ and $I\left ( t \right )$ with low memory overhead by designing grouping scheme, which means that all worker assignments are balanced so as to achieve the goal of low latency and high throughput. 
\end{comment} \begin{table}[t] \begin{center} \caption{Notations used in this work} \label{sample-table} \vskip 0.0in \begin{tabular}{ll} \hline Symbol & Description \\ \hline \emph{$\alpha$} & time decaying factor \\ \emph{$\theta$} & threshold for the hot key identification\\ \emph{$c_{k}$} & counter of a key ${k}$\\ \emph{$d_{min}$} & minimal number of workers for hot key\\ \emph{$f_{top}$} & the highest frequency\\ \emph{$f_{k}$} & frequency of a key ${k}$\\ \emph{k,v} & key identifier\\ \emph{$t_{pri},t_{cur}$} & prior and current timestamp \\ \emph{w} & worker identifier\\ \emph{A} & set of candidate workers \\ \emph{$C_{w}$} & unprocessed tuples for worker $w$\\ \emph{D} & set of input stream data\\ \emph{K} & set of top frequent keys\\ \emph{$K_{max}$} & the maximum capacity of the set $K$\\ \emph{M} & set of assignable workers for each key\\ \emph{$N_{epoch}$} & the number of sequential tuples in an epoch \\ \emph{$N_{w}$} & the number of assigned tuples to worker $w$ \\ \emph{$P_{w}$} & the processing capacity for worker $w$\\ \emph{$T_{w}$} & the estimate waiting time for worker $w$\\ \emph{$W_{num}$} & the number of workers\\ \hline \end{tabular} \end{center} \end{table} \section{FISH}\label{sec:Mechanism} This section elaborates on the design of the recent hot key identification and the heuristic worker assignment. To facilitate the description, Table~\ref{sample-table} lists the notations used in this work. \subsection{Epoch-based Recent Hot-key Identification}\label{sec:Solution:Finding Recent Frequent Keys} Hot key identification is often performed over the whole lifetime of stream processing.
Existing approaches either use a time-aware factor to compute the frequency of all keys~\cite{mabroukeh2010taxonomy,shan2009frequent,lim2014fast}, at the cost of a large amount of computation, or use additional storage to memorize the historical frequency of all keys~\cite{golab2003identifying,chang2003finding,arasu2004approximate,wang2005tfp,deng2012new}, at the cost of memory overhead. Motivated by Observation 1, the core of our recent hot key identification is an epoch-based approach that divides the entire lifetime of stream processing into many epochs, where an {\em epoch} is a collection of sequential tuples. Intra-epoch counting records the occurrence number of each key, storing only the most frequent keys~\cite{manku2002approximate,karger2004simple} to avoid prohibitive memory overhead. Inter-epoch frequency counting uses a time-aware~\cite{mabroukeh2010taxonomy,shan2009frequent,lim2014fast} approach that adopts epoch-level (rather than tuple-level) updates to avoid superfluous computation. Based on the resulting frequencies, keys are finally classified into hot and non-hot ones.
\begin{figure}[t] \centering \includegraphics[width=3.5in]{images/Box} \vspace{-2em} \caption{Procedure of epoch-level recent hot-key identification} \vspace{-0.5em} \label{fig:epoch} \end{figure} \begin{algorithm}[t] \caption{Epoch-based Key Frequency Statistics} \DontPrintSemicolon \KwIn{$\alpha$ -- time decaying factor\\ \hspace{9.4mm} $D$ -- input stream data\\ \hspace{9.2mm} $N_{epoch}$ -- the size of epoch\\ \hspace{9.2mm} $K_{max}$ -- maximum capacity of the set K} $K \leftarrow \emptyset$\; $counter \leftarrow 0$\; \ForEach {$k \in D$}{ \tcc{Inter-epoch decaying} \If{$counter = N_{epoch}$}{ TimeDecayingUpdate$(K)$\; $counter \leftarrow 0$\; } \tcc{Intra-epoch counter} \eIf{$k \in K$}{ $c_{k} \leftarrow c_{k} +1$\; }{ \tcc{Insert or replace the key} \eIf{$\left | K \right | < K_{max} $}{ $K \leftarrow K\cup \left \{ k \right \}$\; $c_{k} \leftarrow 1$\; }{ ReplaceMin$(K,k)$\; } } $counter \leftarrow counter+1$\; } \textbf{Subroutine} {ReplaceMin$(K,k)$}\; \quad$k_{min}$ $\leftarrow$ $\min_{v \in K}c_{v}$\; \quad$K \leftarrow K\setminus \left \{ k_{min} \right \}\cup \left \{ k \right \}$\; \quad $c_{k} \leftarrow c_{k_{min}}+1$\; \textbf{Subroutine} {TimeDecayingUpdate$(K)$}\; \tcc{Update counters according to the $\alpha$} \ForEach {$v\in K$ }{ $c_{v} \leftarrow c_{v} \times \alpha $\; } \end{algorithm} \subsubsection{Key Frequency Statistics} We next introduce how the key frequencies are obtained with this epoch-driven approach. \textbf{Intra-epoch Frequency Counting}\quad Intra-epoch counting records the occurrence number of each key within an individual epoch. To reduce memory overhead, we only store the most frequent $K_{max}$ keys~\cite{karger2004simple} (Lines 8-17 in Algorithm 1). When a new key arrives and the number of keys stored in $K$ is less than the maximum capacity, the key is inserted into $K$ with its counter initialized to 1.
If $K$ is full, we use a replacement strategy that evicts the least-counted key from $K$. Note that the counter of the new key is set to that of the replaced key plus 1, rather than to 1 (as shown in \textsf{ReplaceMin}). The main reason is to avoid unreasonable replacement of new keys~\cite{karger2004simple}: if the counter were set to 1, every newly arriving key would keep being replaced until its occurrence number exceeded the others, so the previously accumulated information of replaced keys would never be reused despite the memory saving. \textbf{Inter-epoch Hotness Decaying}\quad Instead of performing a time-decaying update on every tuple arrival, we adopt a time-aware decaying approach at epoch granularity (Lines 5-7 in Algorithm 1). After the tuple statistics of an epoch are completed, we multiply the counters of all stored keys by $\alpha$ ($0<\alpha<1$) so that the time-decaying effect takes hold. Hence, a counter depends not only on the occurrence number of the key but also on the time decaying factor. It is worth noting that the epoch size directly determines the computational overhead of recent hot-key identification: the larger the epoch, the lower the overhead, and vice versa. Nevertheless, a large epoch may also hurt the accuracy of hot key identification. We conduct our experiments with an empirical epoch size of $1000$ by default. This setting covers almost all datasets (as will be discussed in Section 6) without compromising identification accuracy, and reduces the computational complexity of decaying updates by three orders of magnitude.
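To make the intra- and inter-epoch bookkeeping concrete, the following Python sketch mirrors Algorithm 1 (an illustrative reimplementation under our own naming, not code from a released FISH implementation):

```python
class EpochFrequencyCounter:
    """Illustrative sketch of Algorithm 1: epoch-based key frequency
    statistics with a bounded key set and epoch-level time decay."""

    def __init__(self, k_max, n_epoch, alpha):
        self.k_max = k_max      # maximum capacity of the key set K
        self.n_epoch = n_epoch  # number of tuples per epoch
        self.alpha = alpha      # time decaying factor, 0 < alpha < 1
        self.counts = {}        # key -> decayed counter c_k
        self.seen = 0           # tuples seen in the current epoch

    def observe(self, key):
        # Inter-epoch decaying: once per epoch, scale all counters by alpha.
        if self.seen == self.n_epoch:
            for k in self.counts:
                self.counts[k] *= self.alpha
            self.seen = 0
        # Intra-epoch counting with bounded memory.
        if key in self.counts:
            self.counts[key] += 1
        elif len(self.counts) < self.k_max:
            self.counts[key] = 1
        else:
            # ReplaceMin: evict the least-counted key; the newcomer
            # inherits that count plus one to avoid thrashing new keys.
            k_min = min(self.counts, key=self.counts.get)
            c_min = self.counts.pop(k_min)
            self.counts[key] = c_min + 1
        self.seen += 1
```

Note that the decay loop runs once per $N_{epoch}$ tuples rather than per tuple, which is exactly where the claimed three-orders-of-magnitude reduction in decaying updates comes from when $N_{epoch}=1000$.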
\begin{algorithm}[t] \caption{Classification of Hot Key (CHK)} \DontPrintSemicolon \KwIn{$d_{min}$ -- minimal number of workers for hot key\\ \hspace{9.3mm} $f_{top}$ -- the highest frequency\\ \hspace{9.3mm} $f_{k}$ -- the frequency of the key $k$} \KwOut{$d$ -- number of candidate workers} \eIf{$f_{k} > \theta$}{ \tcc{Assign the number of candidate workers to the key} $index \leftarrow \lfloor\log_{2} (f_{top} / f_{k}) \rfloor$\; $ d \leftarrow W_{num} / 2^{index} $\; \If {$d < d_{min}$}{ $d \leftarrow d_{min}$\; } \eIf {$M_{k} < d$}{ $M_{k} \leftarrow d$\; }{ $d \leftarrow M_{k}$\; } }{ $d \leftarrow 2$\; } \Return $d$.\; \end{algorithm} \subsubsection{Hot Key Classification}\label{sec:Solution:Classify Hot Keys} We next classify the recent hot keys based on the frequency results. Algorithm 2 describes the procedure of hot key classification (denoted CHK). To determine the number of workers to which each key can be assigned, we use the set $M$ to hold the number of candidate workers for each hot key. The underlying idea is that the higher the frequency, the more workers are assigned. First, we compute the number of workers for a hot key with the formula in Lines 1-4 of Algorithm 2. Second, if the obtained value of $d$ is less than the minimum $d_{min}$, we set $d$ to $d_{min}$; $d_{min}$ is related to the sum of the frequencies of all hot keys. Then, since key frequencies change over time, $M_{k}$ records the number of workers previously assigned to key $k$. If $d$ is greater than $M_{k}$, $M_{k}$ is updated to $d$ and $d$ workers are assigned to the key; otherwise, we assign $M_{k}$ workers to the hot key. It is worth noting that we map each key to its workers through consistent hashing, so that we can deal with dynamic workers; the details are introduced in Section 5.
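The worker-count rule of CHK can be sketched in Python as follows (illustrative only; the function signature and the dictionary standing in for the set $M$ are our own, following the notation of Table~\ref{sample-table}):

```python
import math

def chk(key, f_k, f_top, theta, w_num, d_min, m):
    """Illustrative sketch of Algorithm 2 (CHK).
    `m` maps a key to the number of workers previously assigned to it
    (the set M of Table 1); non-hot keys fall back to 2 workers."""
    if f_k <= theta:
        return 2
    # Halve the candidate pool once per power-of-two frequency gap
    # between this key and the hottest key f_top.
    index = int(math.floor(math.log2(f_top / f_k)))
    d = max(w_num // (2 ** index), d_min)
    # Never shrink a key's worker set when its frequency dips (keep M_k).
    if m.get(key, 0) < d:
        m[key] = d
    else:
        d = m[key]
    return d
```

For instance, with 16 workers the hottest key would receive all 16, a key a quarter as frequent would receive 4, and a key that later cools down keeps its previously granted worker count via $M$.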
\begin{algorithm}[t] \caption{Heuristic Worker Assignment} \DontPrintSemicolon \KwIn{$A$ -- set of candidate workers\\ \hspace{9.4mm} $T$ -- time interval } \KwOut{$appro$ -- the selected worker.} $appro \leftarrow -1$\; \tcc{Estimate the current status of each worker} $t_{cur} \leftarrow GetNowTime( )$ \; \If{$t_{cur} - t_{pri} > T$}{ \ForEach{$w \in W$}{ \eIf{$(C_{w}+N_{w}) \times P_{w} > T$}{ $C_{w} \leftarrow ((C_{w}+N_{w}) \times P_{w} - T) / P_{w}$\; }{ $C_{w} \leftarrow 0$\; } } $t_{pri} \leftarrow t_{cur}$\; } \tcc{Select the appropriate load worker} \ForEach{$w \in A$}{ \eIf{$appro = -1$}{ $appro \leftarrow w$\; }{ \If{$C_{appro} \times P_{appro} > C_{w} \times P_{w}$}{ $appro \leftarrow w$\; } } } $C_{appro} \leftarrow C_{appro} + 1$\; \Return $appro$\; \end{algorithm} \subsection{Heuristic Worker Assignment}\label{sec:Solution:Choosing a Light Load Worker} This section introduces how the hot keys identified by CHK are assigned among their $d$ (or $2$) candidate workers. The next question to address is choosing a lightly loaded worker from these candidates. We present a heuristic method that efficiently estimates (rather than communicates, as in prior efforts) the runtime states of workers at a fine-grained time interval. \subsubsection{Worker State Estimation} \indent To fully mobilize each worker, each tuple should be processed as soon as possible. Selecting a lightly loaded worker usually depends on two aspects of worker state: the number of unprocessed tuples and the processing capacity. Unfortunately, obtaining this information from all workers can cause prohibitive communication overhead. We observe that stream processing usually applies the same kind of operation to every tuple. Therefore, we obtain the processing capacity ({\em the average processing time of a tuple}) of each worker by periodic sampling.
Since the number of tuples assigned to each worker can be directly obtained at the source end, we estimate the number of unprocessed tuples of a worker as follows:$$C_{w} = \left ( (C_{w} + N_{w})\times P_{w} - T \right )/ P_{w} \eqno{(1)}$$ where $N_{w}$ is the number of tuples assigned by the sources, $P_{w}$ is the processing capacity (average per-tuple processing time) of worker $w$, and $T$ is a fixed time interval, set to 10 seconds by default. As shown in Figure~\ref{fig:ob2}, the processing time for the same batch of tuples on the same worker varies little, so we can estimate the number of unprocessed tuples $C_{w}$ for each worker $w$. \subsubsection{Candidate Worker Selection} Having estimated the number of unprocessed tuples in this heuristic fashion, we want each tuple to be processed as quickly as possible so as to fully utilize every worker. Considering the potentially different processing capabilities of workers, we select the worker with the shortest waiting time, as shown in Lines 12-18 of Algorithm 3. The estimated waiting time is:$$T_{w} =C_{w} \times P_{w} \eqno{(2)}$$ where $T_{w}$ is the estimated waiting time of worker $w$. Given the similarity of stream processing operations, we capture the states of workers using a sampling technique~\cite{warwick1975sample}. \subsubsection{Example Illustration} \begin{figure}[t] \centering \includegraphics[width=3.5in]{images/assign} \vspace{-2em} \caption{Example of worker assignment. The bar indicates the processing status of tasks on different workers as the execution time goes by. Each bar is associated with $a \times b$, where $a$ denotes the number of tuples and $b$ indicates the processing capability (PC) of the worker.} \label{fig:assign} \end{figure} Figure~\ref{fig:assign} shows an example of how an appropriate worker is selected. In this example, there are 4 workers in total.
Suppose the processing capacities are normalized to those of workers $W1$ and $W2$, while workers $W3$ and $W4$ have twice the processing capacity (PC) of $W1$ or $W2$. Suppose the current time is 500, and we need to assign a tuple to one of $W1$, $W2$, $W3$ and $W4$. In Figure~\ref{fig:assign}, the blue bar indicates the time spent processing tuples, and the red bar represents the time required for the unprocessed tuples. $W1$, $W2$, $W3$ and $W4$ have been assigned 400, 440, 280 and 180 tuples, respectively. Based simply on the number of assigned tuples, as done in previous studies~\cite{nasir2015power,nasir2016two}, worker $W4$ would be selected. In contrast, our work considers both the unprocessed tuples and the processing capacity of each worker. The estimated waiting times for $W1$, $W2$, $W3$ and $W4$ are 50 (\circled{1}), 40 (\circled{2}), 100 (\circled{3}) and 60 (\circled{4}), respectively. We hence select $W2$ for the subsequent tuple, because its pending time is the shortest. \begin{figure}[t] \centering \includegraphics[width=3.0in]{images/CH} \vspace{-1em} \caption{An example of maintaining hashing consistency. {\bf(a)} A consistent hashing example with 3 workers; {\bf(b)} A case of removing a worker; {\bf(c)} A case of adding a worker; {\bf(d)} Small-scale worker deployment.} \vspace{-1.5em} \label{fig:CH} \end{figure} \section{Extension: Dynamic Change of Workers}\label{sec:Extension} In a practical deployment, the number of workers may change dynamically: a worker might be shut down or fail, or a new worker might be put into operation. A typical approach for adapting to this dynamic scenario is to use a hashing algorithm~\cite{eastlake2001us}: with a hash function $F =$ HASH$(k)$ mod $n$, where $k$ is the key and $n$ is the number of workers, keys can be mapped to different workers.
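The churn of this modulo mapping under worker changes can be quantified with a short sketch (illustrative; plain integer keys stand in for already-hashed keys):

```python
# Illustrative sketch: integer keys stand in for already-hashed keys,
# so k % n plays the role of HASH(k) mod n.
KEYS = range(10_000)

def moved_fraction(n_old, n_new):
    """Fraction of keys whose worker changes when the pool
    resizes from n_old to n_new workers under modulo mapping."""
    return sum(k % n_old != k % n_new for k in KEYS) / len(KEYS)

# Growing from 4 to 5 workers remaps 80% of all keys: a key keeps
# its worker only when k mod 4 == k mod 5, i.e. when k mod 20 < 4.
```

Consistent hashing, introduced below, instead remaps only the keys adjacent to the changed worker on the ring, roughly a $1/n$ fraction.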
Nevertheless, this simple mapping is tightly coupled to the number of workers: when a worker is removed or added, nearly all keys have to be remapped across workers, resulting in considerable memory overhead. An alternative approach is to create a virtual ID mapping table for the workers based on a maximum number of supported workers, and make assignments based on virtual IDs~\cite{plaxton1996fast}. However, this approach suffers from at least two defects. First, modifications to the virtual ID mapping table may introduce a large amount of synchronization overhead to keep consistency across all sub-streams. Second, the assignment of workers is not random, so keys are not evenly distributed to workers, which directly undermines the balanced distribution of the load. Let us reconsider this problem, which can be abstracted as mapping a batch of keys to $n$ workers subject to two requirements. First, all keys should be randomly and evenly mapped to workers. Second, the addition or removal of workers should not cause a large number of key-to-worker re-mappings (monotonicity). We therefore propose to use consistent hashing~\cite{karger1997consistent,karger1999web} to reduce unnecessary key-to-worker re-mappings. Figure~\ref{fig:CH} shows a case of the consistent hashing algorithm. Each key can be hashed to a space with $2^{32}$ buckets\footnote{The size of the bucket space is determined by the hash algorithm. The hashing algorithm~\cite{eastlake2001us} used in our method returns 32-bit integer data. Since the maximum value of unsigned integer data is $2^{32}-1$, we use $2^{32}$ for the bucket space.}. We connect these numbers to form a hash ring, and data is mapped onto the ring through the hash algorithm. Now we hash \emph{key1}, \emph{key2}, \emph{key3}, and \emph{key4} onto the hash ring through a specific hash function. Each worker is also mapped onto the ring using the same hash algorithm as the keys.
Moving in a clockwise direction, each key is stored in its nearest worker. In Figure~\ref{fig:CH}(a), the current state is that \emph{key1} is stored in \emph{worker1}, \emph{key3} in \emph{worker2}, and \emph{key2} and \emph{key4} in \emph{worker3}. \textbf{Worker Removal and Addition}\quad Suppose \emph{worker2} crashes. As shown in Figure~\ref{fig:CH}(b), we remove it from the hash ring. According to the clockwise rule, \emph{key3} is then mapped to \emph{worker3}; no other keys are affected. Alternatively, suppose a new worker is added. Figure~\ref{fig:CH}(c) illustrates the case where \emph{worker4} is added. By the clockwise rule, \emph{key2}, originally mapped to \emph{worker3}, is now remapped to \emph{worker4}, since \emph{worker4} is closer to \emph{key2} than \emph{worker3} on the ring. All other keys keep their original mappings; only the mapping of \emph{key2} changes. In summary, the addition or removal of workers only affects the keys between the changed worker and its adjacent worker on the hash ring. Correspondingly, only a small portion of the keys in the ring space need to be remapped. {\bf Small-scale Worker Deployment}\quad Note that when the number of workers is small, the consistent hashing algorithm is prone to an uneven distribution of keys across workers. As shown in Figure~\ref{fig:CH}(b), when \emph{worker2} is removed and only two workers remain, \emph{key2}, \emph{key3}, and \emph{key4} are all mapped to \emph{worker3}, while only \emph{key1} is mapped to \emph{worker1}. We therefore complement our scheme with a virtual node mechanism~\cite{karger1999web,stoica2003chord}, which computes multiple hash values for each worker. By this means, each worker has multiple virtual nodes, which are in turn mapped onto the hash ring. Figure~\ref{fig:CH}(d) shows an example with two virtual nodes for each worker.
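The ring lookup and the virtual node mechanism just described can be sketched as follows. This is a minimal illustration: MD5 truncated to 32 bits stands in for the hashing algorithm of~\cite{eastlake2001us}, and the number of virtual nodes per worker is a tunable assumption.

```python
import bisect
import hashlib

def h32(s):
    """Hash a string into the 2^32-bucket ring space (MD5 as a stand-in)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    def __init__(self, workers, vnodes=2):
        # Each worker contributes `vnodes` positions on the ring.
        self.ring = sorted(
            (h32(f"{w}-{i}"), w) for w in workers for i in range(vnodes))

    def lookup(self, key):
        # Clockwise rule: the first (virtual) node at or after the key's
        # position; wrap around the ring if necessary.
        pos = bisect.bisect(self.ring, (h32(key),))
        return self.ring[pos % len(self.ring)][1]

ring = HashRing(["worker1", "worker3"], vnodes=2)
print(ring.lookup("key1"))  # the clockwise-nearest worker for key1
```

Removing a worker deletes only its virtual nodes from the ring, so every key whose nearest node belonged to another worker keeps its mapping, which is exactly the monotonicity property discussed above.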
There are thus four virtual nodes, denoted as \emph{worker1-1}, \emph{worker1-2}, \emph{worker3-1}, and \emph{worker3-2}, respectively. The new key-to-worker mapping in Figure~\ref{fig:CH}(d) (i.e., \emph{key1} and \emph{key2} are mapped to \emph{worker1}; \emph{key3} and \emph{key4} are mapped to \emph{worker3}) demonstrates that the distribution of keys is more balanced than it would otherwise be. \section{Evaluation}\label{sec:Evaluation} \indent In this section, we evaluate the efficiency and effectiveness of FISH by answering five research questions: \begin{table}[t] \begin{center} \caption{Time-evolving stream datasets} \label{dataset-table} \vspace{-1em} \begin{tabular}{llll} \hline Dataset & Abbr. & Tuples & Keys\\ \hline MemeTracker & MT & 49.21M & 0.39M \\ Amazon Movie & AM & 7.91M & 0.25M \\ \hline Zipf & ZF & 50M & $10^{5}$ \\ \hline \end{tabular} \vspace{-2em} \end{center} \end{table} \begin{itemize}[leftmargin=*] \item {\bf\em RQ1}: How efficient is FISH compared to existing state-of-the-art grouping schemes? (Section~6.2) \item {\bf\em RQ2}: How should the internal parameters of FISH be set for load balance? (Section~6.3) \item {\bf\em RQ3}: How effective is each part of FISH? (Section~6.4) \item {\bf\em RQ4}: How effective is the consistent hashing algorithm for handling dynamic worker changes? (Section~6.5) \item {\bf\em RQ5}: What is the overall effect of FISH in a practical deployment on Apache Storm? (Section~6.6) \end{itemize} \subsection{Experimental Setup} \textbf{Simulation Settings}\quad We process the stream datasets by simulating the basic DAG in Figure~\ref{fig:DAG}. Sources extract the data and the workers perform the data aggregation. The input stream data is received by the sources through shuffle grouping. Each tuple consists of a timestamp and a corresponding key. We assign each tuple to a specific worker based on the grouping scheme we wish to evaluate.
\textbf{Datasets}\quad We evaluate FISH on both real-world and synthetic stream datasets, as shown in Table~\ref{dataset-table}. We use two real-world datasets: MemeTracker (MT)~\cite{leskovec2009meme} and Amazon Movie Review (AM)~\cite{mcauley2013amateurs}. MT provides quotes and phrases from blogs and news media. We consider a keyword stream consisting of the words in the quotes and phrases, with the 571 stopwords provided in~\cite{lewis2004rcv1} excluded. AM provides user reviews with product identification, which is used as the key for the tuples. For the synthetic Zipf (ZF) dataset, we generate 50M tuples with $10^5$ unique keys. To capture the skewness of stream data, the generated time-evolving ZF dataset follows the distribution below, with the exponent in the range $z\in \left \{ 1.0,1.1,\dots,2.0 \right \}$: 1) for the first $0.8\times N$ tuples, the occurrence probability of a given key $i$ obeys $Pr\left [i \right ]\propto i^{-z}$; 2) for the last $(1-0.8)\times N$ tuples, the occurrence probability of a given key $i$ obeys $Pr\left [i \right ]\propto (k-i+1)^{-z}$, where $k$ is $10^4$ and $N$ is 5M. To simulate the time-evolving nature of the data, the algorithms are run 10 times with different seeds for the pseudo-random number generator. \indent {\bf Measurement Metrics}: To evaluate scalability, we use the total memory overhead consumed by all workers as the metric. The less total memory overhead incurred, the fewer memory duplicates created, and the better the scalability. We use the processing time of the loads to evaluate load balance in the simulation environment. Generally, the more balanced the loads are, the more fully the workers can be utilized. The execution time of the different grouping schemes essentially depends on the utilization of the workers, so it serves as an effective metric for the effect of load balance.
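The time-evolving Zipf generation described above can be sketched as follows. This is a minimal illustration with small parameters; the function name and the use of Python's built-in weighted sampling are our own choices.

```python
import random

# Sketch of the time-evolving ZF generator: the first 80% of tuples favor
# low key ids (Pr[i] ~ i^-z) and the last 20% favor high ids
# (Pr[i] ~ (k - i + 1)^-z), so the hot key set flips over time.

def time_evolving_zipf(n_tuples, k, z, seed=0):
    rng = random.Random(seed)
    weights_head = [i ** -z for i in range(1, k + 1)]
    weights_tail = list(reversed(weights_head))  # (k - i + 1)^-z
    split = int(0.8 * n_tuples)
    head = rng.choices(range(1, k + 1), weights_head, k=split)
    tail = rng.choices(range(1, k + 1), weights_tail, k=n_tuples - split)
    return head + tail

# Small example: 10K tuples over 100 keys with skew z = 1.5.
stream = time_evolving_zipf(10_000, k=100, z=1.5)
```

Running the generator with different seeds, as done in the experiments, produces different realizations of the same time-evolving distribution.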
\begin{figure}[t] \centering \includegraphics[width=3.3in]{images/performance_r} \vspace{-1em} \caption{Execution time of PKG, D-C, W-C, and FISH with different numbers of workers on the real-world datasets. (a) is for AM, and (b) is for MT. All results are normalized to SG.} \vspace{-1em} \label{fig:PR} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=7.0in]{images/performance_z} \vspace{-1em} \caption{Execution time of PKG, D-C, W-C, and FISH with different numbers of workers on the ZF datasets. All results are normalized to SG.} \vspace{-1em} \label{fig:PZ} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=7.0in]{images/memory_z} \vspace{-1em} \caption{Memory overhead of PKG, D-C, W-C, and FISH with different numbers of workers on the ZF datasets. All results are normalized to FG.} \vspace{-1em} \label{fig:MZ} \end{figure*} \subsection{RQ1: Overall Evaluation} We investigate the overall load balance and memory overhead of FISH against the state-of-the-art PKG, D-C, and W-C grouping schemes on both synthetic and real-world datasets. For load balance, we use SG as the baseline, a well-known grouping scheme with an ideal load balancing effect. For memory overhead, we use FG as the baseline since it does not generate any extra memory overhead. \textbf{Load Imbalance}\quad Figure~\ref{fig:PR} illustrates the results on the real-world {AM} and {MT} datasets, normalized to SG. The lower the execution time, the better the load balancing effect. Among the four tested grouping schemes, FISH has the best load balancing effect on both {MT} and {AM}. The execution time of FISH is almost the same as that of SG, with a worst case of 1.07x. Compared to PKG, the advantage of FISH grows as the number of workers increases (from 1.19x to 8.32x for MT and from 1.12x to 7.31x for AM). This is because PKG assigns only two workers for every key.
The skewed distribution of keys causes the tuples to be unevenly distributed among workers, resulting in load imbalance. Although W-C and D-C take the skewed distribution of keys into account, their effect remains limited as the number of workers increases. Overall, FISH achieves up to 7.44x and 6.95x improvement over D-C and W-C, respectively. This is because they do not take the time-evolving nature of the stream data into consideration, resulting in inaccurate hot key identification and inappropriate assignment. Figure~\ref{fig:PZ} further investigates the load balance of FISH on the synthetic ZF dataset with different skew factors. Overall, the gap between the four grouping schemes widens as the number of workers increases. PKG is the worst among the four grouping schemes, and its effect deteriorates as the skew increases because it assigns only two workers for each key without considering skewed data. The execution times of D-C and W-C also grow with the skew, even though they consider the skewed-data feature, and the effect worsens further as the number of workers increases. FISH achieves up to 13.57x and 12.05x improvement over D-C and W-C, respectively. This is because the time-evolving feature is not considered in D-C and W-C; as a result, they may fail to accurately identify and assign hot keys. We also note that as the number of workers scales, FISH always achieves a load balancing effect comparable to SG, with a worst case of 1.32x. \textbf{Memory Overhead}\quad Figure~\ref{fig:MZ} shows the memory overhead of FISH compared to the existing grouping schemes FG, SG, PKG, D-C, and W-C. For system scalability, not only load balancing but also memory overhead must be taken into consideration. We use the memory overhead of FG as the baseline to normalize the results of the other grouping schemes; FG assigns only one worker per key without any extra memory overhead.
This is thanks to the special treatment of the small fraction of keys that dominate most of the load in stream data. Even with an extended number of workers, the memory overhead of FISH remains comparable to FG (from 1.11x to 2.61x with 128 workers). Although SG balances the load well as the number of workers increases, its memory overhead grows significantly (from 15.52x to 88.32x). The memory overhead of the PKG, D-C, and W-C schemes is close to that of FG; yet, they suffer from load imbalance, as depicted in Figure~\ref{fig:PR} and Figure~\ref{fig:PZ}. In summary, compared to all existing grouping schemes, FISH achieves the best results in load balance and memory overhead for time-evolving stream data. \subsection{RQ2: Internal Parameter Decision} We next investigate how to set the internal parameters of FISH for the best effect. The two major parameters are the decaying factor $\alpha$ in Algorithm 1 and the hot key threshold $\theta$ in Algorithm 2. \begin{figure*}[t] \centering \includegraphics[width=7.0in]{images/alpha} \vspace{-1em} \caption{Execution time and memory overhead as a function of skew $z$ with different time decaying factors $\alpha$. The results are collected on different numbers (16/32/64/128) of workers.} \vspace{-1.5em} \label{fig:alpha} \end{figure*} \textbf{Setting Decaying Factor $\alpha$}\quad Our goal is to find an appropriate $\alpha$ so that more stream data can be processed. Figure~\ref{fig:alpha} shows the impact of the $\alpha$ value, ranging from $0$ to $1$, with different numbers of workers and skews. Overall, a large $\alpha$ value leads to a long execution time. Note that when $\alpha$ is $1$, we have the special case that does not consider the time-evolving feature; the execution time then grows significantly (up to 12.14x compared to $\alpha$ of $0.2$) as the skew increases.
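The inter-epoch hotness decaying governed by $\alpha$ can be sketched as follows. This is our reading of the update rule behind Algorithm 1 (an exponentially decayed count), with illustrative names and numbers; the two extreme settings match the cases discussed in the text.

```python
# Sketch of inter-epoch hotness decaying with factor alpha:
# alpha = 1 keeps the entire history (no time-evolving awareness),
# alpha = 0 abandons all previous data and keeps only the last epoch.

def decay_counts(counts, epoch_counts, alpha):
    """Blend the previous hotness with the current epoch's frequencies."""
    keys = set(counts) | set(epoch_counts)
    return {k: alpha * counts.get(k, 0) + epoch_counts.get(k, 0)
            for k in keys}

hotness = {}
for epoch in [{"a": 100, "b": 5}, {"b": 90, "c": 10}]:
    hotness = decay_counts(hotness, epoch, alpha=0.2)
# After two epochs: a = 0.2*100 = 20, b = 0.2*5 + 90 = 91, c = 10,
# so the recently hot key "b" now dominates the stale key "a".
```

With $\alpha=0.2$, a key that was hot only in old epochs decays quickly, while a key hot in the current epoch is promoted immediately.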
When $\alpha$ is $0$, all previous data is abandoned at each update. Although the execution time stays relatively low, a substantial amount of memory overhead is incurred, especially for low-skew stream data (2.65x compared to $\alpha$ of $0.2$). The reason is that abandoning previous data may misclassify as non-hot many keys that are in fact hot. Among all possible values, $\alpha = 0.2$ has the best effect on load balance and memory overhead in most cases across different numbers of workers and skews. \textbf{Setting Hot Key Threshold $\theta$}\quad As discussed in the previous study~\cite{nasir2016two}, if $\theta$ is greater than $2/n$, where $n$ is the number of workers, the DSPE will definitely suffer from load imbalance. If it is less than $1/5n$, the probability of load imbalance generated by PKG is bounded by at least $1$ - $1/n$. An appropriate threshold $\theta$ therefore often lies in the range from $2/n$ down to $1/8n$. Figure~\ref{fig:theta} shows the potential impact of different $\theta$ thresholds. \begin{figure*}[ht] \centering \includegraphics[width=7.0in]{images/theta} \vspace{-1em} \caption{Execution time and memory overhead as a function of skew with different hot key thresholds $\theta$. The results are collected with different numbers (16/32/64/128) of workers.} \vspace{-1.5em} \label{fig:theta} \end{figure*} In theory, a smaller threshold often yields better load balance, while a larger threshold often yields lower memory overhead. In practice, however, Figure~\ref{fig:theta} shows that significant load imbalance occurs only in the case of $\theta = 2/n$. For the other thresholds, the results show almost no difference, especially for large numbers of workers. As for the memory overhead, we find that it changes little as $\theta$ varies. We conservatively choose the threshold $1/4n$ for two reasons.
First, its execution time is similar to that of the $1/8n$ threshold, reflecting a similar load balancing effect. Second, its memory overhead is almost the same as that of the $2/n$ threshold, whereas the $1/8n$ threshold produces more memory overhead for large numbers of workers and low-skew data. The compromise threshold $1/4n$ thus provides reasonable results on both load balance and memory overhead. \begin{figure}[t] \centering \includegraphics[width=3.3in]{images/epoch_e} \vspace{-1em} \caption{Execution time of FISH with and without our epoch-based hot key identification, denoted as w/(o) epoch. The results are collected on different numbers (16/32/64/128) of workers.} \vspace{-1em} \label{fig:BOX_I} \end{figure} \subsection{RQ3: Breakdown} We next break down the effectiveness of FISH, covering recent hot key identification, hot key classification, and heuristic worker assignment. \textbf{Effectiveness of Epoch-based Hot Key Identification}\quad Figure~\ref{fig:BOX_I} shows the effectiveness of our epoch-based hot key identification compared to the entire-lifetime counting approach of D-C and W-C. The execution time is greatly improved, and the effect becomes more pronounced as the number of workers and the skew increase (up to 11.91x). The main reason is that hot key identification in D-C and W-C can be inaccurate: they monitor the entire lifetime of all keys, so the most recent hot keys are difficult to capture. This leads to load imbalance among workers, and more workers and larger skew further aggravate the problem. \begin{figure}[t] \centering \includegraphics[width=3.3in]{images/memory_reduce} \vspace{-1em} \caption{Memory overhead and execution time of using different strategies in D-C and W-C against our CHK. The memory overhead results are normalized to CHK.
The results are collected on different numbers (64/128) of workers.} \vspace{-1em} \label{fig:MR} \end{figure} \textbf{Effectiveness of Hot Key Classification}\quad Figure~\ref{fig:MR} illustrates the memory overhead of FISH with and without CHK. FISH without CHK covers the two hot-key processing approaches used in W-C (written as w/ W-C) and in D-C (w/ D-C), respectively. As shown in Figure~\ref{fig:MR}, CHK greatly reduces the memory overhead compared to the approach of W-C, and this benefit becomes more significant as the number of workers increases. Compared to the method used in W-C, FISH saves up to $25.23\%$ and $45.34\%$ of the memory costs for $64$ and $128$ workers, respectively. Although the method used in D-C has less memory overhead than CHK in some cases, it suffers from longer execution times and more serious load imbalance than CHK. Due to the skewed distribution of keys, the frequency of hot keys usually varies dramatically; simply treating all hot keys equally often results in load imbalance (for D-C) or unnecessary memory overhead (for W-C). \begin{figure}[t] \centering \includegraphics[width=3.1in]{images/Hen} \vspace{-1em} \caption{Execution time of FISH with and without heuristic worker assignment. We collect results with different numbers (16/32/64/128) of workers.} \vspace{-1em} \label{fig:Hen} \end{figure} \textbf{Effectiveness of Heuristic Worker Assignment}\quad To verify the effectiveness of heuristic worker assignment, we assume that half of the workers have twice the processing capability of the others. Figure~\ref{fig:Hen} plots the results. FISH provides up to 2.61x improvement in execution time compared to the traditional worker assignment of previous studies~\cite{nasir2015power,nasir2016two}, which assigns keys according to the amount of each worker's load.
The main reason is that simply ensuring each worker ends up with the same number of tuples may still assign a tuple to a busy worker in some time interval, particularly when workers have different processing capacities. In contrast, our approach copes with scenarios where workers are heterogeneous and dynamically changing by inferring the status of the workers. \begin{figure}[t] \centering \includegraphics[width=3.3in]{images/CH_E} \vspace{-1em} \caption{Memory overhead of FISH with/without consistent hashing (CH) for dynamic change of workers. (a) Adding a worker halfway through task execution; (b) Removing a worker halfway through task execution. } \vspace{-1em} \label{fig:CH_J} \end{figure} \subsection{RQ4: Effectiveness of Consistent Hashing} To investigate the effectiveness of consistent hashing (CH), we create a dynamic scenario by randomly adding or removing a worker instance during processing. Figure~\ref{fig:CH_J} illustrates the memory overhead of FISH with and without CH on stream data with different skews. As we can see, for stream data with low skew, FISH without CH incurs almost twice the memory overhead of FISH with CH, whether workers are added or removed. This is because the previous key-to-worker mappings rely heavily on the number of workers: a change in the number of workers means that almost all mappings need to change, doubling the memory overhead. Stream processing on highly-skewed datasets shows a smaller increase in memory overhead. The reason is that the hot keys of a highly-skewed stream dataset need to be re-mapped to new workers, and some of the new workers have already reserved the corresponding data for these hot keys; as a result, not much remapping overhead is incurred when the number of workers changes.
\subsection{RQ5: Practical Deployment on Apache Storm} To quantify the impact of FISH, we have integrated it into Apache Storm and deployed it on a cluster with 8 compute nodes, each of which has 20 available ports. We build a DAG topology configured with 32 sources and 128 workers, and compare FISH with the state-of-the-art FG, SG, PKG, D-C, and W-C grouping schemes. \begin{figure}[t] \centering \includegraphics[width=3.3in]{images/latency} \vspace{-1em} \caption{The average and percentile latency by deploying FG, PKG, D-C, W-C, SG, and FISH on Apache Storm with the MT and AM datasets.} \vspace{-1em} \label{fig:latency} \end{figure} \textbf{Latency}\quad Figure~\ref{fig:latency} shows the results for end-to-end latency. The plot reports the average latency together with the 50th, 95th, and 99th percentiles across all workers. Thanks to accurate hot key identification and heuristic worker assignment, the 50th (median) and 99th percentile latencies of FISH have geometric means of only 7 and 562 milliseconds (for MT), and 9 and 640 milliseconds (for AM), respectively. These results are almost the ideal latency provided by SG. In summary, FISH significantly outperforms FG, W-C, D-C, and PKG; it reduces the average and 99th percentile latency of the state-of-the-art W-C by 87.12\% and 76.34\%, respectively. \begin{figure}[t] \centering \includegraphics[width=3.3in]{images/throughput} \caption{Throughput comparison by deploying FG, PKG, D-C, W-C, SG, and FISH on Apache Storm with the MT and AM datasets.} \vspace{-1em} \label{fig:throughput} \end{figure} \textbf{Throughput}\quad Figure~\ref{fig:throughput} shows the results for throughput. Overall, FG has the lowest throughput (30K tuples/sec for MT and 23K tuples/sec for AM). Compared to FG, PKG offers a considerable improvement. Further, D-C and W-C perform better than PKG, but still fall short of the throughput of SG.
In comparison, FISH provides a throughput 1.32 times higher than W-C and 1.48 times higher than D-C. On the whole, FISH achieves almost the ideal throughput of SG. \begin{figure}[t] \centering \includegraphics[width=2.8in]{images/mem} \vspace{-1em} \caption{Relative memory overhead of FISH against SG} \vspace{-1.5em} \label{fig:mem} \end{figure} \textbf{Memory Overhead}\quad As discussed above, SG provides the best load balance in terms of latency and throughput. We next compare the memory overhead of FISH with that of SG. Figure~\ref{fig:mem} plots the results with different skews, normalized to SG. For a skew of 1.0, the memory overhead of FISH can be as low as $3.34\%$ of that of SG. Overall, FISH has significantly less ($<16\%$) memory overhead than SG. {\bf Summary}\quad According to the aforementioned results, FISH provides the compelling latency and throughput of the SG grouping scheme for time-evolving stream data at a very small fraction of the memory overhead. \section{Related work} \label{sec:Related work} A large number of previous studies~\cite{gedik2014partitioning,rivetti2015efficient,shah2003flux,xing2005dynamic,castro2013integrating,gedik2014elastic,kumbhare2015fault} leverage operator migration for load balance in DSPEs. Once load imbalance is detected, the system activates a rebalancing routine that moves some keys and their associated states away from an overloaded server. Flux~\cite{shah2003flux} encapsulates adaptive state partitioning and dataflow routing, and migrates operators from the most loaded to the least loaded server. Xing et al.~\cite{xing2005dynamic} present a correlation-based load distribution algorithm for dynamic load migration to adapt to changing loads.
Fernandez et al.~\cite{castro2013integrating} propose an integrated approach to scale-out and failure recovery through checkpointing and migration. Gedik~\cite{gedik2014partitioning} proposes partitioning functions for stream processing systems that employ stateful data parallelism to improve application throughput and control migration cost. These rebalance-based approaches usually require setting a number of parameters, such as how often to check for imbalance. These parameters are typically application-specific, with different tradeoffs between imbalance and rebalance cost. Further, each sub-stream needs to maintain a routing table that maps keys to PEIs, incurring prohibitive memory overhead. Moreover, modifying the routing table introduces additional consistency checking across all sub-streams~\cite{nasir2015power}. In contrast, we consider operator replication, which allows a key to be processed by multiple workers, and show that it is sufficient to balance the load without active monitoring of load imbalance. A wide spectrum of studies consider operator replication to prevent load imbalance~\cite{nasir2015power,nasir2016two,nasir2017load}, allowing each key to be processed by multiple workers. POTC~\cite{nasir2015power} is based on the ``power of two choices''~\cite{azar1999balanced}: it associates each key with two possible operator instances and selects the less loaded of the two whenever a tuple for the given key must be processed. Nasir et al.~\cite{nasir2016two} propose a lightweight streaming grouping scheme based on the SpaceSaving algorithm~\cite{karger2004simple} that requires no training or monitoring to detect heavy hitters. CG~\cite{nasir2017load} studies the load balancing problem for streaming engines running in a heterogeneous cluster.
Our specialized approach differs from these replication-based approaches in two significant ways: 1) we are the first to consider the time-evolving nature of stream data and to investigate real-time load balance within a time interval; 2) we present a novel heuristic method to assess the state information of remote workers for efficient worker assignment. Much effort has also been put into operator placement, which ensures load balance by exploiting computational resources. Xing et al.~\cite{xing2006providing} propose a correlation-based algorithm that strives to minimize operator movement overhead and support more resilient operator placement. The work in~\cite{aniello2013adaptive} deploys a topology using both online and offline analysis methods while minimizing network communication. Eidenbenz et al.~\cite{eidenbenz2016task} analyze the task allocation problem and propose an approximation algorithm to approach the optimal solution. In contrast to these resource-partitioning studies, our approach partitions the workload for load balance. Note that our approach is compatible with this type of approach; a hybrid partitioning that integrates both could be interesting future work for achieving load balance with minimal computational resources. \section{Conclusion}\label{sec:Conclusion} In this work, we investigate the load balance problem for time-evolving stream processing in a large-scale deployment. Our key innovation comes from two major technical advances. First, we present an epoch-based approach to identify recent hot keys efficiently through intra-epoch frequency counting and inter-epoch hotness decaying. Second, based on the similarity of operations in stream processing, we propose a heuristic approach that infers the state information of remote workers to make efficient worker assignments. We evaluate our approach on a cluster of 128 nodes with both synthetic and real-world datasets.
Our practical deployment on Apache Storm demonstrates that FISH significantly outperforms the state of the art, reducing the average and 99th percentile latency by up to 87.12\% and 76.34\% (vs. W-Choices) and memory consumption by up to 96.66\% (vs. Shuffle Grouping). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} This is the second of two articles intended to prove the following classification theorem for vacuum static black holes. \begin{Theorem}[The classification Theorem]\label{TCTHM3} Any static black hole data set is either, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=a, align=left] \item\label{FTII} a Schwarzschild black hole, or, \item\label{FTI} a Boost, or, \item\label{FTIII} of Myers/Korotkin-Nicolai type. \end{enumerate} \end{Theorem} For a contextual discussion of Theorem \ref{TCTHM3}, including the notion of a static black hole data set and a detailed description of each family, see the introduction of Part I~\cite{PartI}. In Part I it was proved that static black hole data sets have only one end and that the horizons are weakly outermost. This accomplished the first two of the three steps required for the proof of Theorem \ref{TCTHM3}, as explained in subsection \ref{TSOTP} of Part I. In this Part II we prove the third step, namely that the end is either asymptotically flat or asymptotically Kasner. The results of this article are found in sections \ref{S1S} and \ref{VWAE}. Section \ref{S1S}, which is independent of section \ref{VWAE}, has interest in itself and gives a thorough discussion of free $\Sa$-symmetric data sets. This section is used in section \ref{VWAE}, where it is proved that black hole ends are either asymptotically flat or asymptotically Kasner. The techniques introduced for the asymptotic study are new and are based upon a careful analysis of static solutions on metrically collapsed annuli. Many of the conclusions are hitherto unknown and have interest in their own right. \vspace{0.2cm} Before discussing the structure of the article and the different proofs in subsection \ref{TCSTA}, let us give a succinct summary of the background material, which we hope will aid the presentation and the reading.
Complementary information is found in the background section \ref{BACKGROUNDMATERIAL2} which contains in particular the background material of Part I. \vspace{0.2cm} Formally, a (vacuum) static black hole data set $(\Sigma;g,N)$ consists of a non-compact orientable three-manifold $\Sigma$ with compact and non-empty boundary $\partial \Sigma$, a three-metric $g$ such that $(\Sigma;g)$ is metrically complete, and a non-negative lapse function $N$ that is zero on $\partial \Sigma$ (the horizons) and positive in the interior $\Sigma^{\circ}=\Sigma\setminus \partial \Sigma$ of $\Sigma$, satisfying the static vacuum Einstein equations \begin{equation}\label{STATSTAT} NRic=\nabla\nabla N,\quad \Delta N=0. \end{equation} A static black hole data set $(\Sigma;g,N)$ gives rise to a vacuum static black hole spacetime\footnote{The exterior communication region of it.} (${\bf Ric}=0$), \begin{equation}\label{SPTE2} {\bf \Sigma}=\mathbb{R}\times \Sigma,\quad {\bf g}=N^{2}dt^{2}+g, \end{equation} where $\partial_{t}$ is the static Killing field. Conversely, a static black hole spacetime of the form (\ref{SPTE2}), gives rise to a static black hole data set $(\Sigma;g,N)$. Throughout this article we will work with data sets rather than spacetimes. In other words, we work at the `initial data level'. The data sets of the Schwarzschild black holes are, \begin{equation} \Sigma=\mathbb{R}^{3}\setminus B(0,2m),\quad g=\frac{1}{1-2m/r}dr^{2}+r^{2}d\Omega^{2}\quad {\rm and}\quad N=\sqrt{1-2m/r} \end{equation} where $m>0$ is the ADM-mass and $B(0,2m)$ is the open ball of radius $2m$. The Boost data sets are, \begin{equation} \Sigma=[0,\infty)\times {\rm T}^{2},\quad g=dx^{2}+h,\quad N=x \end{equation} where $h$ is any flat metric on the two-torus ${\rm T}^{2}=\Sa\times \Sa$. Finally a data set $(\Sigma;g,N)$ is of Myers/Korotkin-Nicolai type if $\Sigma$ has the topology of an open solid three-torus minus a finite number of open three-balls, and if the asymptotic is Kasner. 
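As a quick consistency check (not part of the original argument), one can verify directly that the Boost data satisfies the static equations (\ref{STATSTAT}):

```latex
% Boost data: \Sigma=[0,\infty)\times{\rm T}^{2}, g=dx^{2}+h with h flat, N=x.
% Since g is a metric product of flat factors, Ric=0. In coordinates (x,y,z)
% with h constant, all Christoffel symbols vanish, hence
\[
  (\nabla\nabla N)_{ij}=\partial_{i}\partial_{j}\,x=0,
  \qquad
  \Delta N=g^{ij}\partial_{i}\partial_{j}\,x=0,
\]
% so N\,Ric=0=\nabla\nabla N and \Delta N=0, and (\ref{STATSTAT}) holds.
```

The Schwarzschild data satisfy (\ref{STATSTAT}) as well, though the computation there requires the non-trivial Christoffel symbols of the spherically symmetric metric.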
The Kasner spaces (which define the asymptotic) are defined as any $\mathbb{Z}\times \mathbb{Z}$-quotient of the data, \begin{equation} \tilde{\Sigma}=(0,\infty)\times \mathbb{R}^{2};\quad \tilde{g}= dx^{2}+x^{2\alpha}dy^{2}+x^{2\beta}dz^{2},\quad \tilde{N}=x^{\gamma}, \end{equation} where $y$ and $z$ are coordinates on each of the factors $\mathbb{R}$ of $\mathbb{R}^{2}$, and $\alpha, \beta$ and $\gamma$ are any numbers satisfying, \begin{equation}\label{CIRCLE1} \alpha+\beta+\gamma=1,\qquad \alpha^{2}+\beta^{2}+\gamma^{2}=1 \end{equation} (see Figure \ref{Figure21}). \begin{figure}[h] \centering \includegraphics[width=6cm, height=6cm]{KVar.pdf} \caption{The circle that defines the range of the Kasner parameters $\alpha$, $\beta$, $\gamma$.} \label{Figure21} \end{figure} The group $\mathbb{Z}\times \mathbb{Z}$ acts freely on the factor $\mathbb{R}^{2}$ by translations and therefore the quotient manifold is diffeomorphic to $(0,\infty)\times {\rm T}^{2}$. Observe that the diameters of the `transversal tori' $\{x\}\times {\rm T}^{2}$ are determined by the $\mathbb{Z}\times \mathbb{Z}$-action and can therefore be arbitrarily small. The Kasner spaces with $(\alpha,\beta,\gamma)=(0,0,1)$ are the Boosts and are the Kasner data with the fastest growth of the lapse (linear). They are the only Kasner spaces that are static black hole data sets\footnote{One must include $\{0\}\times {\rm T}^{2}$ in the manifold.}, the others being singular as $x\rightarrow 0$. We denote the Boosts by the letter $B$. The Kasner spaces with $(\alpha,\beta,\gamma)=(1,0,0)$ and $(\alpha,\beta,\gamma)=(0,1,0)$, which have constant lapse and are therefore flat, are denoted respectively by the letters $A$ and $C$. In simple terms, a data set is asymptotically Kasner if it approaches a particular Kasner data, at any order of differentiability, faster than any inverse power of the distance (see Definition \ref{KADEF}).
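The constraints (\ref{CIRCLE1}) are straightforward to check for the distinguished exponents just described. The following numerical sketch is a side computation, not part of the text; the one-parameter sweep of the circle by $\gamma$ is our own elementary algebra.

```python
import math

def on_kasner_circle(alpha, beta, gamma, tol=1e-12):
    """Check the Kasner constraints alpha+beta+gamma = 1 and
    alpha^2+beta^2+gamma^2 = 1."""
    return (abs(alpha + beta + gamma - 1) < tol
            and abs(alpha**2 + beta**2 + gamma**2 - 1) < tol)

# The distinguished points from the text: the Boost B and the flat Kasner A, C.
assert on_kasner_circle(0, 0, 1)   # B: Boost, N = x
assert on_kasner_circle(1, 0, 0)   # A: constant lapse, flat
assert on_kasner_circle(0, 1, 0)   # C: constant lapse, flat

# One way to sweep the whole circle: fix gamma = t in [-1/3, 1] and solve
#   alpha + beta = 1 - t,  alpha*beta = t**2 - t,
# a quadratic with discriminant (1 - t)*(1 + 3*t) >= 0 on that range.
def kasner_point(t):
    disc = (1 - t) * (1 + 3*t)
    alpha = ((1 - t) + math.sqrt(disc)) / 2
    beta = ((1 - t) - math.sqrt(disc)) / 2
    return alpha, beta, t

for t in [-1/3, 0.0, 0.25, 0.5, 1.0]:
    assert on_kasner_circle(*kasner_point(t))
```

Note that $t=1$ recovers $B$ and $t=0$ recovers $A$; the admissible range of $\gamma$ on the circle is $[-1/3,1]$.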
In this article we will use mainly the harmonic presentation of data sets, namely we will use $(\Sigma;\hg,U)$ instead of $(\Sigma; g,N)$ where, \begin{equation} \hg=N^{2}g,\quad U=\ln N. \end{equation} The static equations (\ref{STATSTAT}) now are, \begin{equation} Ric_{\hg}=2\nabla U\nabla U,\quad \Delta_{\hg} U=0, \end{equation} and have several geometric advantages. In particular, the Ricci curvature of $\hg$ is non-negative, and is zero iff $U$ is constant. The Kasner spaces in the harmonic presentation are now $\mathbb{Z}\times \mathbb{Z}$-quotients of, \begin{align} \tilde{\Sigma}=(0,\infty)\times \mathbb{R}^{2},\quad \tilde{\hg}=dx^{2}+x^{2a}dy^{2}+x^{2b}d z^{2},\quad \tilde{U}=c\ln x, \end{align} where $a, b$ and $c$ satisfy, \begin{equation} 2c^{2}+(a-\frac{1}{2})^{2}+(b-\frac{1}{2})^{2}=\frac{1}{2}\quad \text{and}\quad a+b=1. \end{equation} Thus, the circle (\ref{CIRCLE1}) (see Figure \ref{Figure21}) is seen as an ellipse in the plane $a+b=1$ (see Figure \ref{Figure31}). The $g$-flat solutions $A, C$ and $B$ are now $(a,b,c)=(1,0,0), (0,1,0)$, and $(1/2,1/2,1/2)$, respectively. \begin{figure}[h] \centering \includegraphics[width=6cm, height=6cm]{KVarH.pdf} \caption{The ellipse that defines the range of the parameters $a, b$ and $c$.} \label{Figure31} \end{figure} \vspace{0.2cm} The study of the asymptotic will be done by looking at rescaled annuli. Let $k\geq 1$, $r>0$ and let $\mathcal{A}_{\hg}(2^{-k}r,2^{k}r)$ be the annulus, \begin{equation} \mathcal{A}_{\hg}(2^{-k}r,2^{k}r):=\{p\in \Sigma: 2^{-k}r<\dist_{\hg}(p,\partial \Sigma)<2^{k}r\}, \end{equation} where $\dist_{\hg}(p,\partial \Sigma)$ is the $\hg$-distance from $p$ to the boundary $\partial \Sigma$. With $k$ fixed, we will let $r$ increase and, over the annulus $\mathcal{A}_{\hg}(2^{-k}r,2^{k}r)$, look at the rescaled metrics $\hg_{r}:=\hg/r^{2}$. Anderson's estimates (see Theorem \ref{LACD2} in Part I) show that $Ric_{\hg_{r}}$ and $\nabla U$ are uniformly bounded (i.e.
their $\hg_{r}$-norms have bounds that do not depend on $r$), and so are all their derivatives. This fundamental property will permit first the analysis of the geometry of rescaled annuli, and then, by suitable concatenation, the analysis of the asymptotic. We will discuss all this in the next subsection. As a result of the different proofs it will be clear that, if the asymptotic is Kasner, then the parameter $c$ is positive, $c>0$. Hence the Kasner spaces with $c>0$ will be the ones most relevant to us. Still, the Kasner $A$ and $C$, which have $c=0$, will also come into play often, though for technical reasons. There is a significant difference between $A$ and $C$ on one side, and the Kasner with $c>0$ on the other side: the diameter of the transversal tori $\{x\}\times {\rm T}^{2}$ grows linearly with the distance $x$ in the first case but sub-linearly in the second case. Thus, if $c>0$, the diameters of the tori $\{x\}\times {\rm T}^{2}$ with respect to $\hg_{x}=\hg/x^{2}$ tend to zero as $x\rightarrow \infty$. Another way to say this is: for fixed $k\geq 1$, the Riemannian annuli $(\mathcal{A}_{\hg_{x}}(2^{-k},2^{k});\hg_{x})$ metrically collapse, as $x$ tends to infinity, to a segment of length $2^{k}-2^{-k}$; there is no such type of collapse if the Kasner is $A$ or $C$, where instead they metrically collapse to a two-dimensional flat annulus (for metric collapse see subsection \ref{SFCCRM}). As said, these global differences will cause technical difficulties while studying the Kasner asymptotic. We will return to this point in subsection \ref{TCSTA}. \vspace{0.2cm} We move now to explain the structure of the article and the route behind the series of results and their proofs. In particular the claims of section \ref{VWAE} are somewhat interrelated and therefore it is useful to have a clear overview.
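The passage from the circle (\ref{CIRCLE1}) to the ellipse of Figure \ref{Figure31} can be made concrete. Writing the Kasner data in the harmonic presentation and passing to $\hg$-arclength suggests the map $a=(\alpha+\gamma)/(1+\gamma)$, $b=(\beta+\gamma)/(1+\gamma)$, $c=\gamma/(1+\gamma)$; this explicit formula is our own elementary computation, not taken from the text, which only records the images of $A$, $C$ and $B$. The following sketch checks numerically that circle points land on the ellipse and that $A$, $C$, $B$ map to $(1,0,0)$, $(0,1,0)$ and $(1/2,1/2,1/2)$ as stated above.

```python
import math

def circle_to_ellipse(alpha, beta, gamma):
    """Conjectured map from Kasner exponents (alpha, beta, gamma) to the
    harmonic-presentation parameters (a, b, c); see the caveat above."""
    s = 1 + gamma          # s > 0 on the admissible range gamma in [-1/3, 1]
    return (alpha + gamma) / s, (beta + gamma) / s, gamma / s

def on_ellipse(a, b, c, tol=1e-12):
    """Check 2c^2 + (a - 1/2)^2 + (b - 1/2)^2 = 1/2 and a + b = 1."""
    return (abs(2*c*c + (a - 0.5)**2 + (b - 0.5)**2 - 0.5) < tol
            and abs(a + b - 1) < tol)

# The named solutions: A and C keep (1,0,0), (0,1,0); the Boost B goes to
# (1/2,1/2,1/2), matching the values recorded in the text.
assert circle_to_ellipse(1, 0, 0) == (1.0, 0.0, 0.0)        # A
assert circle_to_ellipse(0, 1, 0) == (0.0, 1.0, 0.0)        # C
assert circle_to_ellipse(0, 0, 1) == (0.5, 0.5, 0.5)        # B

# A generic point of the circle also lands on the ellipse.
t = 0.25
disc = (1 - t) * (1 + 3*t)
alpha = ((1 - t) + math.sqrt(disc)) / 2
beta = ((1 - t) - math.sqrt(disc)) / 2
assert on_ellipse(*circle_to_ellipse(alpha, beta, t))
```

The identity $(\alpha-\beta)^{2}+4\gamma^{2}=(1+\gamma)^{2}$, which holds on the circle, is what makes the ellipse relation come out exactly.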
\subsection{The content and the structure of this article (Part II)}\label{TCSTA} Section \ref{BACKGROUNDMATERIAL2} contains the background material, including notation and terminology. Subsection \ref{SDSMT2} contains the main definitions, such as those of static black hole data set and Kasner asymptotic, and states again the classification theorem as Theorem \ref{TCTHM3}. Subsection \ref{SAP2} defines annuli and partition cuts, which are technically useful for studying asymptotic properties. All that is the background material that was already introduced in Part I. The rest of the background material is specific to this Part II and is the following. Also inside subsection \ref{SAP2}, we pay special attention to `scaling', and to notation related to it that will be used extensively when studying ends in section \ref{VWAE} (it is important to keep track of it). Scaling techniques are useful due to the scale invariance of Anderson's decay estimates for the curvature and for the gradient of the logarithm of the lapse, see Theorems \ref{LACD1} and \ref{LACD2} in Part I. Furthermore, the study of ends through scaling requires a minimum of material on the Cheeger-Gromov-Fukaya theory of convergence and collapse of Riemannian manifolds under curvature bounds, which is briefly introduced in subsection \ref{SFCCRM}. Subsection \ref{SSKK} contains a careful account of Kasner spaces and a suitable proof of their (well known) uniqueness, which will be used throughout section \ref{VWAE} when we discuss the asymptotic. This ends the background section. \vspace{0.2cm} Section \ref{S1S} marks the beginning of the results of Part II, whose final goal is to describe, in section \ref{VWAE}, the asymptotic of ends of black hole data sets. Section \ref{S1S} studies various aspects of data sets which are free $\Sa$-symmetric. It is of interest in itself and proves a number of novel results on these types of spaces. The contents are as follows.
Subsection \ref{RDRE} presents the reduced equations, that is, the fields and the equations that are obtained after taking the quotient of a $\Sa$-symmetric static data set by $\Sa$, Proposition \ref{PRED}. The reduced data $(\qM;q,U,V)$ of a static data set $(\Sigma; \hg,U)$, with a $\Sa$-symmetry generated by a Killing field $\xi$, consists of a two-manifold $\qM$, a Riemannian metric $q$ on $\qM$, and two fields, the usual field $U=\ln N$, and $V=\ln \Lambda$ with $\Lambda=|\xi|_{\hg}$. Relevant examples of reduced data sets are discussed in subsections \ref{KSOL} and \ref{CIGARSOL}. Subsection \ref{KSOL} discusses, as a natural example, the reduced Kasner spaces (this subsection can be skipped). Subsection \ref{CIGARSOL} gives a thorough description of another particular reduced data that we call the `cigars' (due to their geometric shape). Subsection \ref{CIGUNIQU} proves a uniqueness statement for the cigars and subsection \ref{CIGNHCP} characterises the cigars as the data that model high-curvature regions. These properties of the cigars play an essential role in subsection \ref{DFIAT}, where it is proved that $|\nabla U|^{2}$, $|\nabla V|^{2}$ and $\kappa$ (the Gaussian curvature of $q$) have quadratic decay at infinity on $(\qM; q)$ (provided $(S;q)$ is metrically complete and $\partial S$ is compact). A few comments are in order here. The discussion of such decay depends on whether the twist $\Omega$ of $\xi$, which is a constant, is zero or not. When it is zero, the quadratic decay can be obtained using the same techniques \`a la Bakry-\'Emery used in Part I to prove a generalised Anderson decay for the gradient of the logarithm of the lapse, Proposition \ref{REDCUR}. However, it turns out that such techniques do not entirely apply when the twist $\Omega$ is not zero. For this reason, quadratic decay in that case is proved by arguing by contradiction, which explains why we study high-curvature regions in subsection \ref{CIGNHCP}.
In the same subsection \ref{DFIAT} it is shown, using the decay previously proved, that $S$ has only a finite number of simple ends, each diffeomorphic to $[0,\infty)\times \Sa$. Furthermore, it is proved in Proposition \ref{LPRO} that $U$ has a limit $U_{\infty}$ at infinity, $-\infty\leq U_{\infty}\leq \infty$. These are the most important results of section \ref{S1S}. \vspace{0.2cm} Altogether, section \ref{VWAE} proves that black hole data sets are either asymptotically flat or asymptotically Kasner. The two types of asymptotic, which are discussed separately in subsections \ref{ENDSAF} and \ref{ENDSAK}, are distinguished by the type of volume growth of the end $(\Sigma;\hg)$, which is at most cubic by the Bishop-Gromov volume comparison. In subsection \ref{ENDSAF} it is shown that if the volume growth of the static black hole end is cubic then the end is asymptotically flat, whereas in subsection \ref{ENDSAK}, which has four subsections, the fundamental Theorem \ref{KAFR} is proved, stating that if the volume growth is sub-cubic then the asymptotic is Kasner. It is important to remark that, as a byproduct of the proofs, it will be shown that the Kasner asymptotic is indeed different from the flat Kasner $A$ or $C$, and of course from any Kasner with parameter $\gamma$ less than zero (or $c<0$ if we work in the harmonic presentation), which are ruled out by the maximum principle (as then $N\rightarrow 0$ at infinity). This behaviour is compatible with the asymptotic of Myers/Korotkin-Nicolai black holes, which can be that of any Kasner different from $A$, $B$, $C$ and different from those with $\gamma<0$. We leave it as an open problem to prove that the only static black hole data sets asymptotic to a Boost are in fact the Boosts. A more detailed discussion of this point can be found in the introduction of Part I. Let us now describe in more detail the four subsections \ref{SPTWOP0}, \ref{SPTWOP1}, \ref{FTKASS}, \ref{POKA}.
The preliminary subsection \ref{SPTWOP0} discusses metrics on two-tori under conditions on the curvature and the diameter, and is used in the next subsection to study the geometry of the level sets of the lapse on (almost) one-collapsed annuli (that is, annuli whose geometry is `near' one-dimensional in the Gromov-Hausdorff metric, see subsection \ref{RDSACL}). Subsection \ref{SPTWOP1} proves a sufficient condition for a static end (not necessarily a static black hole end) to have Kasner asymptotic different from $A$ or $C$, Theorem \ref{KASYMPTOTIC}. Roughly speaking, the sufficient criterion says that if the rescaled geometry of a sufficiently one-collapsed annulus has $|\nabla U|_{\hg_{r}}\neq 0$, then the end is asymptotic to a Kasner different from $A$ or $C$. The proof of Theorem \ref{KASYMPTOTIC} relies on two propositions, which we now describe in broad terms. First, Proposition \ref{PORSI} shows, roughly (see the hypothesis), that any rescaled annulus that is sufficiently one-collapsed and has $|\nabla U|_{\hg_{r}}\neq 0$ is `$C^{k}$-close' to a Kasner space. More importantly, it estimates the `$C^{k}$-distance' to the Kasner space, to any order of differentiability $k$, in terms of any power of the diameter of the transversal tori (i.e. the level sets of the lapse). The proof requires a very detailed study of one-collapsed annuli, done (in part) by carefully inspecting the geometry of the transversal tori, whose second fundamental forms are fully controlled as an easy consequence of Anderson's estimates. Second, Proposition \ref{PORE} proves a type of `geometric bootstrap' procedure, basically stating (see the hypothesis) that if the rescaled geometry of an annulus is `close in $C^{k}$' to a Kasner different from $A$ and $C$, then the rescaled geometry of the next annulus (following the one before) is even `closer in $C^{k}$' to an annulus of a Kasner different from $A$ and $C$.
Thus, once we are in the hypothesis of Theorem \ref{KASYMPTOTIC}, a proof can be reached by (roughly) using Proposition \ref{PORSI} first, and then using Proposition \ref{PORE} repeatedly. The sufficient criterion for Kasner asymptotic of Theorem \ref{KASYMPTOTIC} is the one ultimately leading to the proof of Theorem \ref{KAFR} showing, as said, the Kasner asymptotic of black hole data sets with sub-cubic volume growth. But to apply the theorem one must first guarantee that the geometry of at least one rescaled annulus satisfies its hypothesis, and this is not easy. The proof of Theorem \ref{KAFR} is involved and requires subsections \ref{FTKASS} and \ref{POKA}. Subsection \ref{FTKASS} returns to the study of free $\Sa$-symmetric data sets $(\Sigma;\hg,U)$ by analysing the asymptotic of their ends under the natural condition $U\leq U_{\infty}$. It is shown in Theorem \ref{SSKAA} that, for such data, either the asymptotic is Kasner different from $A$ or $C$, or the whole data is flat and $U$ is constant. The proof of this proceeds in several steps. First we use the results of section \ref{S1S} to prove that such ends, when non-flat, are $\star$-static (Definition \ref{DEFSSS}), meaning in this case that the level sets of $U$ near $U_{\infty}$ are connected, compact and of genus greater than zero. It is then proved that either the asymptotic is Kasner different from $A$ and $C$ or it has sub-quadratic curvature decay. Finally, sub-quadratic curvature decay is ruled out by making use of the monotonic quantity (\ref{GMONOTONIC}) along the level sets of $U$ (which we know are not spheres because the data is $\star$-static). Theorem \ref{SSKAA}, on free $\Sa$-symmetric data sets, is needed in several parts of subsection \ref{POKA} to show the aforementioned non-trivial Theorem \ref{KAFR}. As we explain in more detail below, free $\Sa$-symmetric reduced data sets show up often as the `collapsed limit' of rescaled annuli.
Thus, as we study ends precisely via rescaled annuli, it is natural to expect Theorem \ref{SSKAA} to enter the scene at some point in the analysis. Let us elaborate on this point a bit more. If an end has sub-cubic volume growth, then (sub)sequences of rescaled annuli either metrically collapse to a one-dimensional space and unwrap (i.e. after considering covers) to a ${\rm T}^{2}$-symmetric (Kasner) static data, or metrically collapse to a two-dimensional orbifold $(\qM;q)$ and locally unwrap to a free $\Sa$-symmetric static data (this comes from a general fact about convergence and collapse of Riemannian manifolds that we discuss in subsection \ref{SFCCRM}). Two-dimensional reduced ends arising as such scaled limits are described in subsection \ref{RDSACL} (the last subsection of section \ref{S1S}, discussed earlier), and can have only a finite number of orbifold points. Thus, by the results of subsection \ref{FTKASS}, the asymptotic is Kasner. This crucial information is used suitably to prove Theorem \ref{DFNKA} in subsection \ref{POKA}, stating that either the asymptotic of static ends with sub-cubic volume growth is Kasner different from $A$ and $C$, or the curvature decays sub-quadratically along a suitable set (precisely, the union of a ray and a simple cut). Thus, to prove Theorem \ref{KAFR} once Theorem \ref{DFNKA} is available, one must rule out the sub-quadratic decay. This is done by showing, through various propositions, that static black hole ends with sub-cubic volume growth are $\star$-static, and then using Proposition \ref{CORONA}, which forbids this type of decay for $\star$-static data by appealing to the monotonic quantity (\ref{GMONOTONIC}). The proof of the classification theorem is done in section \ref{TCTH} by carefully putting together all the previous results, as was explained in subsection \ref{TSOTP} of Part I.
\vspace{.3cm} {\bf Acknowledgments} I would like to thank Herman Nicolai, Marc Mars, Marcus Khuri, Gilbert Weinstein, Michael Anderson, Greg Galloway, Miguel Sanchez, Carla Cederbaum, Lorenzo Mazzieri, Virginia Agostiniani and John Hicks for discussions and support. I am also grateful to Carla Cederbaum for inviting me to the conference `Static Solutions of the Einstein Equations' (T\"ubingen, 2016), to Piotr Chrusciel for inviting me to the meeting `Geometry and Relativity' (Vienna, 2017) and to Helmut Friedrich for the very kind invitation to visit the Albert Einstein Institute (Max Planck Institute, Potsdam, 2017). Much of this work was discussed at these meetings. Finally, I am grateful for the support received from the Mathematical Center at the Universidad de la Rep\'ublica, Uruguay. \section{Background material}\label{BACKGROUNDMATERIAL2} \subsection{Static data sets and the main Theorem}\label{SDSMT2} Manifolds will always be smooth ($C^{\infty}$). Riemannian metrics as well as tensors will also be smooth. If $g$ is a Riemannian metric on a manifold $\Sigma$, then \begin{equation} \dist_{g}(p,q)= \inf\big\{\length_{g}(\gamma_{pq}):\gamma_{pq}\ \text{smooth curve joining $p$ to $q$}\big\}, \end{equation} is a metric, where $\length_{g}$ (also written $L_{g}$) is the notation we will use for length (when it is clear from the context we will remove the sub-index $g$ and write simply $\dist$ and $L$). A Riemannian manifold $(\Sigma;g)$ is {\it metrically complete} if the metric space $(\Sigma; \dist)$ is complete. If $(\Sigma;g)$ is metrically complete and $\partial \Sigma=\emptyset$, then the manifold is geodesically complete and we simply say, as usual, that $(\Sigma;g)$ is complete.
\begin{Definition}[Static data sets]\label{SDS} A static (vacuum) data set $(\Sigma;\sg,N)$ consists of an orientable three-manifold $\Sigma$, possibly with boundary, a Riemannian metric $\sg$, and a function $N$, such that, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\roman*)}, widest=a, align=left] \item $N$ is strictly positive in the interior $\Sigma^{\circ}(=\Sigma\setminus \partial\Sigma)$ of $\Sigma$, \item $(\sg,N)$ satisfy the vacuum static Einstein equations, \begin{equation} \label{SEQ} N Ric = \nabla\nabla N,\qquad \Delta N=0. \end{equation} \end{enumerate} \end{Definition} The definition is quite general. Observe in particular that $\Sigma$ and $\partial \Sigma$ could be compact or non-compact. To give an example, a data set $(\Sigma;\sg,N)$ can simply be the data inherited on any region of the Schwarzschild data. This flexibility in the definition of static data set allows us to write statements with great generality. A horizon is defined as usual. \begin{Definition}[Horizons] Let $(\Sigma;\sg,N)$ be a static data set. A horizon is a connected component of $\partial \Sigma$ where $N$ is identically zero. \end{Definition} Note that Definition \ref{SDS} does not require $\partial \Sigma$ to be a horizon, though the data sets that we classify in this article are those with $\partial \Sigma$ consisting of a finite set of compact horizons ($\Sigma$ is a posteriori non-compact). It is known that the norm $|\nabla N|$ is constant on any horizon and different from zero. It is called the surface gravity. It is convenient to give a name to those spaces that are the final object of study of this article. Naturally, we will call them {\it static black hole} data sets. \begin{Definition}[Static black hole data set]\label{DWO} A static data set $(\Sigma;\sg,N)$ with $\partial \Sigma=\{N=0\}$ and $\partial \Sigma$ compact, is called a static black hole data set.
\end{Definition} In order to study the asymptotic of ends of black hole data sets it will be more convenient to work with `static data ends', which are simply data sets with one end and compact boundary. \begin{Definition}[Static data end]\label{SDE} A metrically complete static data set $(\Sigma;g,N)$ with $\partial \Sigma$ compact and $\Sigma$ containing only one end will be called a static data end. \end{Definition} As will be shown, black hole data sets have only one end and so they are static data ends themselves. On the other hand, static data ends do not necessarily arise from them. Hence, several of the theorems in this article, which are proved for static data ends, have a wide range of applicability. The following definition, taken from \cite{4b6cb19bc94d4cf485e58571e3062f77}, recalls the notion of {\it weakly outermost} horizon. \begin{Definition}[Galloway, \cite{4b6cb19bc94d4cf485e58571e3062f77}] Let $(\Sigma; \sg, N)$ be a static black hole data set. Then, a horizon $H$ is said to be weakly outermost if there are no embedded surfaces $S$ homologous to $H$ having negative outward mean curvature. \end{Definition} The following is the definition of Kasner asymptotic. It requires decay toward a background Kasner space faster than any inverse power of the distance. The definition follows the intuitive notion and is written in the coordinates of the background Kasner, very much in the way asymptotic flatness is written in Schwarzschildian coordinates.
\begin{Definition}[Kasner asymptotic]\label{KADEF} A data set $(\Sigma; g,N)$ is asymptotic to a Kasner data $(\Sigma^{\mathbb{K}};g^{\mathbb{K}},N^{\mathbb{K}})$, $\Sigma^{\mathbb{K}}=(0,\infty)\times {\rm T}^{2}$, if for any $m\geq 1$ and $n\geq 0$ there are $c>0$, bounded closed sets $K\subset \Sigma$, $K^{\mathbb{K}}\subset \Sigma^{\mathbb{K}}$ and a diffeomorphism $\phi:\Sigma\setminus K\rightarrow \Sigma^{\mathbb{K}}\setminus K^{\mathbb{K}}$ such that, \begin{gather} |\partial_{I}(\phi_{*}g)_{ij}-\partial_{I}g^{\mathbb{K}}_{ij}|\leq \frac{c}{x^{m}},\\ |\partial_{I}(\phi_{*}N)-\partial_{I}N^{\mathbb{K}}|\leq \frac{c}{x^{m}}, \end{gather} for any multi-index $I=(i_{1},i_{2},i_{3})$ with $|I|=i_{1}+i_{2}+i_{3}\leq n$, where, if $x, y$ and $z$ are the coordinates in the Kasner space, then $\partial_{I}=\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\partial_{z}^{i_{3}}$. \end{Definition} Next is the definition of data sets of Myers/Korotkin-Nicolai type that we use. \begin{Definition}[Black holes of M/KN type]\label{KNTDEF} A static data set $(\Sigma;\sg,N)$ is of Myers/Korotkin-Nicolai type if \begin{enumerate} \item $\partial \Sigma$ consists of $h\geq 1$ weakly outermost (topologically) spherical horizons, \item $\Sigma$ is diffeomorphic to a solid three-torus minus $h$ open three-balls, \item the asymptotic is Kasner. \end{enumerate} \end{Definition} It is worth restating now the main classification theorem that we shall prove. \begin{Theorem}[The classification Theorem]\label{TCTHM4} Any static black hole data set is either, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=a, align=left] \item\label{FTII} a Schwarzschild black hole, or, \item\label{FTI} a Boost, or, \item\label{FTIII} is of Myers/Korotkin-Nicolai type.
\end{enumerate} \end{Theorem} As an outcome of the proof it will be shown that the Kasner asymptotic of the static black holes of type \ref{FTIII}, that is of M/KN type, is different from the Kasner $A$ and $C$ (of course it cannot be asymptotic to a Kasner with $\gamma<0$ by the maximum principle). We leave it as an open problem to prove that the only static black hole data sets asymptotic to a Boost are the Boosts. \begin{Problem} Prove that the Boosts are the only static black hole data sets asymptotic to a Boost. \end{Problem} We do not know if the only solutions of type \ref{FTIII} are the Myers/Korotkin-Nicolai solutions. We state this as an open problem. \begin{Problem}\label{OPENPRO2} Prove or disprove that the only static solutions of type \ref{FTIII} are the Myers/Korotkin-Nicolai solutions. \end{Problem} In a large part of the article we will use the variables $(\hg,U)$ with $\hg=N^{2}g$ and $U=\ln N$, instead of the natural variables $(g,N)$. The data $(\Sigma;\hg,U)$ is the {\it harmonic presentation} of the data $(\Sigma;g,N)$. The static equations in these variables are, \begin{equation} Ric_{\hg}=2\nabla U\nabla U,\quad \Delta_{\hg} U=0, \end{equation} and therefore the map $U:(\Sigma;\hg)\rightarrow \mathbb{R}$ is harmonic (hence the name). \subsection{Scaling, annuli and partitions}\label{SAP2} \begin{enumerate}[leftmargin=*, label={\rm \arabic*}, widest=a, align=left] \item {\sc Metric balls}. If $C$ is a set and $p$ a point then $\dist_{g}(C,p)=\inf\{\dist_{g}(q,p):q\in C\}$. Very often we take $C=\partial \Sigma$. If $C$ is a set and $r>0$, then define the open ball of `center' $C$ and radius $r$ as, \begin{equation} B_{g}(C,r)=\{p\in \Sigma:\dist_{g}(C,p)<r\} \end{equation} \item {\sc Scaling}. Very often we will work with scaled metrics. To avoid cumbersome notation we will often use the sub-index $r$ (the scale) on scaled metrics, tensors and other geometric objects.
Precisely, let $r>0$; then for the scaled metric $g/r^{2}$ we use the notation $g_{r}$, namely, \begin{equation} g_{r}:=\frac{1}{r^{2}}g \end{equation} Similarly, $d_{r}(p,q)=d_{g_{r}}(p,q)$, $\langle X,Y\rangle_{r}=\langle X,Y\rangle_{g_{r}}$, $|X|_{r}=|X|_{g_{r}}$, and similarly for curvatures and related tensors; for instance, if $R$ is the scalar curvature of $g$, then $R_{r}$ is the scalar curvature of $g_{r}$. This notation will be used very often and it is important to keep track of it. \item {\sc Annuli}. Let $(\Sigma;g)$ be a metrically complete and non-compact Riemannian manifold with non-empty boundary $\partial \Sigma$. - Let $0<a<b$; then we define the open annulus $\mathcal{A}_{g}(a,b)$ as \begin{equation} \mathcal{A}_{g}(a,b)=\{p\in \Sigma: a<\dist_{g}(p,\partial \Sigma)<b\} \end{equation} We write just $\mathcal{A}(a,b)$ when the Riemannian metric $g$ is clear from the context. - When working with scaled metrics $g_{r}$, we will often alternate between the following notations \begin{equation} \mathcal{A}_{r}(a,b),\quad \mathcal{A}_{g_{r}}(a,b),\quad \mathcal{A}_{g}(ra,rb), \end{equation} (to denote the same set), depending on which is simpler to write or to read. For instance we could write $\mathcal{A}_{2^{j}}(1,2)$ instead of $\mathcal{A}_{g_{2^{j}}}(1,2)$ or $\mathcal{A}_{g}(2^{j},2^{1+j})$. If the metric is clear from the context we will use the first notation $\mathcal{A}_{r}(a,b)$. - If $C$ is a connected set included in $\mathcal{A}_{g}(a,b)$, then we define, \begin{equation} \mathcal{A}^{c}_{g}(C;a,b) \end{equation} to denote the connected component of $\mathcal{A}_{g}(a,b)$ containing $C$. The set $C$ could be for instance a point $p$, in which case we write $\mathcal{A}^{c}_{g}(p;a,b)$. \item {\sc Partition cuts and end cuts}. To understand the asymptotic geometry of data sets, we will study the geometry of scaled annuli.
Sometimes, however, it will be more convenient and transparent to use certain sub-manifolds instead of annuli, which can have rough boundaries. For this purpose we define partitions, partition cuts, end cuts, and simple end cuts. {\it Assumption}: Below (inside this part) we assume that $(\Sigma;g)$ is a metrically complete and non-compact Riemannian manifold with non-empty and compact boundary $\partial \Sigma$. \begin{Definition}[Partitions] A set of connected compact three-submanifolds of $\Sigma$ with non-empty boundary \begin{equation} \{\mathcal{P}^{m}_{j,j+1},\ j=j_{0},j_{0}+1,\ldots;\ m=1,2,\ldots,m_{j}\geq 1\}, \end{equation} ($j_{0}\geq 0$), is a {\it partition} if, \begin{enumerate} \item $\mathcal{P}^{m}_{j,j+1}\subset \mathcal{A}(2^{1+2j},2^{4+2j})$ for every $j$ and $m$. \item $\partial \mathcal{P}^{m}_{j,j+1}\subset (\mathcal{A}(2^{1+2j},2^{2+2j})\cup \mathcal{A}(2^{3+2j},2^{4+2j}))$ for every $j$ and $m$. \item The union $\cup_{j,m}\mathcal{P}^{m}_{j,j+1}$ covers $\Sigma\setminus B(\partial \Sigma,2^{2+2j_{0}})$. \end{enumerate} \end{Definition} \begin{figure}[h] \centering \includegraphics[width=7cm, height=9cm]{Partition.pdf} \caption{The figure shows the annuli $\mathcal{A}(2^{1+2j},2^{2+2j})$, $\mathcal{A}(2^{3+2j},2^{4+2j})$ and the two components, for $m=1,2$, of $\mathcal{P}^{m}_{j,j+1}$.} \label{PARTITIONF} \end{figure} Figure \ref{PARTITIONF} shows schematically a partition. The construction of partitions is (succinctly) as follows. Let $j_{0}\geq 0$ and let $j\geq j_{0}$.
Let $f:\Sigma\rightarrow [0,\infty)$ be any smooth function such that $f\equiv 1$ on $\{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}$ and $f\equiv 0$ on $\{p: \dist(p,\partial \Sigma)\geq 2^{2+2j}\}$.\footnote{Consider a partition of unity $\{\chi_{i}\}$ subordinate to a cover $\{\mathcal{B}_{i}\}$ where the neighbourhoods $\mathcal{B}_{i}$ are small enough that if $\mathcal{B}_{i}\cap \{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}\neq \emptyset$ then $\mathcal{B}_{i}\cap \{p: \dist(p,\partial \Sigma)\geq 2^{2+2j}\}=\emptyset$. Then define $f=\sum_{i\in I}\chi_{i}$, where $i\in I$ iff $\mathcal{B}_{i}\cap \{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}\neq \emptyset$.} Let $x$ be any regular value of $f$ in $(0,1)$. For each $j$ let $\mathcal{Q}_{j}$ be the compact manifold obtained as the union of the closures of the connected components of $\Sigma\setminus \{f=x\}$ containing at least one component of $\partial \Sigma$. Then the manifolds $\mathcal{P}^{m}_{j,j+1}$, $m=1,\ldots,m_{j}$, are defined as the connected components of $\mathcal{Q}_{j+1}\setminus \mathcal{Q}_{j}^{\circ}$. We let $\partial^{-}\mathcal{P}^{m}_{j,j+1}$ be the union of the connected components of $\partial \mathcal{P}^{m}_{j,j+1}$ contained in $\mathcal{A}(2^{1+2j},2^{2+2j})$. Similarly, we let $\partial^{+}\mathcal{P}^{m}_{j,j+1}$ be the union of the connected components of $\partial \mathcal{P}^{m}_{j,j+1}$ contained in $\mathcal{A}(2^{3+2j},2^{4+2j})$. \begin{Definition}[Partition cuts] If $\mathcal{P}$ is a partition, then for each $j$ we let \begin{equation} \{\mathcal{S}_{jk},k=1,\ldots,k_{j}\} \end{equation} be the set of connected components of the manifolds $\partial^{-}\mathcal{P}^{m}_{j,j+1}$ for $m=1,\ldots,m_{j}$. The set of surfaces $\{\mathcal{S}_{jk},\ j\geq j_{0},\ k=1,\ldots,k_{j}\}$ is called a {\it partition cut}. \end{Definition} \begin{Definition}[End cuts] Suppose $\Sigma$ has only one end.
Then, a subset $\{\mathcal{S}_{jk_{l}},\ l=1,\ldots,l_{j}\}$ of a partition cut $\{\mathcal{S}_{jk},k=1,\ldots,k_{j}\}$ is called an end cut if, when we remove all the surfaces $\mathcal{S}_{jk_{l}}$, $l=1,\ldots,l_{j}$, from $\Sigma$, every connected component of $\partial \Sigma$ belongs to a bounded component of the resulting manifold, whereas if we remove all but one of the surfaces $\mathcal{S}_{jk_{l}}$, then at least one connected component of $\partial \Sigma$ belongs to an unbounded component of the resulting manifold. \end{Definition} If $\Sigma$ has only one end, then one can always remove, if necessary, surfaces from a partition cut $\{\mathcal{S}_{jk},k=1,\ldots,k_{j}\}$ to obtain an end cut. End cuts always exist. \begin{Definition}[Simple end cuts] Suppose $\Sigma$ has only one end. If an end cut $\{\mathcal{S}_{jk_{l}},j\geq j_{0},l=1,\ldots,l_{j}\}$ has $l_{j}=1$ for each $j\geq j_{0}$, then we say that the end cut is a {\it simple end cut} and write simply $\{\mathcal{S}_{j}\}$. \end{Definition} Simple end cuts do not always exist. If $\{\mathcal{S}_{j}\}$ is a simple end cut and $j_{0}\leq j<j'$, we let $\mathcal{U}_{j,j'}$ be the compact manifold enclosed by $\mathcal{S}_{j}$ and $\mathcal{S}_{j'}$. This notation will be used very often. \end{enumerate} \subsection{The ball-covering property and a Harnack-type estimate for the lapse}\label{TBCPHTE} Let $(\Sigma;\sg,N)$ be a static data set with $\partial \Sigma$ compact. In \cite{MR1809792}, Anderson observed that, as the four-metric $N^{2}dt^{2}+\sg$ is Ricci-flat, Liu's ball-covering property holds, \cite{MR1216638} (the compactness of $\partial \Sigma$ is necessary here because Liu's theorem is for manifolds with non-negative Ricci curvature outside a compact set).
Namely, for any $b>a>\delta>0$ there are $n$ and $r_{0}$ such that for any $r\geq r_{0}$ the annulus $\mathcal{A}(ra,rb)$ can be covered by at most $n$ balls of $g$-radius $r\delta$ centred in the same annulus (equivalently, $\mathcal{A}_{r}(a,b)$ can be covered by at most $n$ balls of $g_{r}$-radius $\delta$ centred in the same annulus). Hence any two points $p$ and $q$ in a connected component of $\mathcal{A}_{r}(a,b)$ can be joined through a chain, say $\alpha_{pq}$, of at most $n+2$ radial geodesic segments of the balls of radius $\delta$ covering $\mathcal{A}_{r}(a,b)$. On the other hand, Anderson's estimate implies that the $g_{r}$-gradient $|\nabla \ln N|_{r}$ is uniformly bounded (i.e. independently of $r$) on $\mathcal{A}_{r}(a-\delta,b+\delta)$ and therefore uniformly bounded over any curve $\alpha_{pq}$. Integrating $|\nabla \ln N|_{r}$ along the curves $\alpha_{pq}$ and using the bound we arrive at a relevant Harnack estimate controlling uniformly (i.e. independently of $r$) the quotients $N(p)/N(q)$. The estimate is due to Anderson and is summarised in the next Proposition (for further details see \cite{0264-9381-32-19-195001}). \begin{Proposition}{\rm (Anderson, \cite{MR1809792})}\label{MAXMINU1} Let $(\Sigma;\sg,N)$ be a metrically complete static data set with $\partial \Sigma$ compact, and let $0<a<b$.
Then, \begin{enumerate} \item\label{anterior} There are $r_{0}$ and $\eta>0$, such that for any $r>r_{0}$ and for any set $Z$ included in a connected component of $\mathcal{A}_{r}(a,b)$ we have, \begin{equation}\label{EQHARN} \max\{N(p):p\in Z\}\leq \eta \min\{N(p):p\in Z\} \end{equation} \item\label{posterior} Furthermore, if $r_{i}\rightarrow \infty$ and if $Z_{i}$ is a sequence of sets included, for each $i$, in a connected component $\mathcal{A}^{c}_{r_{i}}(a,b)$ of $\mathcal{A}_{r_{i}}(a,b)$, and we have, \begin{equation} \max\{|\nabla \ln N|_{r_{i}}(p):p\in \mathcal{A}^{c}_{r_{i}}(a/2,2b)\}\rightarrow 0 \end{equation} then, \begin{equation} \frac{\max\{N(p):p\in Z_{i}\}}{\min\{N(p):p\in Z_{i}\}}\rightarrow 1 \end{equation} as $i\rightarrow \infty$. \end{enumerate} \end{Proposition} Let $(\Sigma;\hg,U)$ be a static data set in the harmonic presentation (assume $N>0$ and $\partial \Sigma$ compact). We have shown in Part I that $(\Sigma;\hg)$ is metrically complete and that $|\nabla U|_{\hg}^{2}$ decays quadratically. But as $Ric_{\hg}\geq 0$, Liu's ball-covering property \cite{MR1216638} also holds on $(\Sigma;\hg)$, since it is metrically complete and $\partial \Sigma$ is compact. Repeating Anderson's argument we then arrive at the following Harnack estimate, now in the harmonic presentation. \begin{Proposition}[Anderson, \cite{MR1809792}]\label{MAXMINU} Let $(\Sigma;\hg,U)$ be a metrically complete static data set with $\partial \Sigma$ compact and let $0<a<b$.
Then, \begin{enumerate} \item\label{anterior} There are $r_{0}>0$ and $\eta>0$, such that for any $r>r_{0}$ and any set $Z$ included in a connected component of $\mathcal{A}_{r}(a,b)$ we have, \begin{equation} \max\{U(q):q\in Z\}\leq \eta+\min\{U(q):q\in Z\}, \end{equation} \item\label{posterior} Furthermore, if $r_{i}\rightarrow \infty$ and if $Z_{i}$ is a sequence of sets included, for each $i$, in a connected component $\mathcal{A}^{c}_{r_{i}}(a,b)$ of $\mathcal{A}_{r_{i}}(a,b)$, and we have, \begin{equation} \max\{|\nabla U|_{r_{i}}(q):q\in \mathcal{A}^{c}_{r_{i}}(a/2,2b)\}\rightarrow 0 \end{equation} then, \begin{equation} \max\{U(q):q\in Z_{i}\}-\min\{U(q):q\in Z_{i}\}\rightarrow 0 \end{equation} as $i\rightarrow \infty$. \end{enumerate} \end{Proposition} Both propositions will be used later. \subsection{Facts about convergence and collapse of Riemannian manifolds}\label{SFCCRM} In some parts of this article we will use well-known techniques in convergence and collapse of Riemannian manifolds. We recall here the concepts and the results that we will use. We first recall the basic definition of $C^{\infty}$-convergence (the presentation is in the category of tensors, adjusted to our needs). We refer the reader to \cite{MR2243772} for more general definitions. A sequence of smooth compact Riemannian manifolds with smooth boundary $(M_{i};g_{i})$ converges in $C^{\infty}$ to a smooth compact Riemannian manifold with smooth boundary $(M_{\infty};g_{\infty})$, if there are smooth diffeomorphisms $\phi_{i}:M_{\infty}\rightarrow M_{i}$ such that $\phi_{i}^{*}g_{i}$ converges to $g_{\infty}$ in $C^{k}_{g_{\infty}}$ for all $k\geq 0$.
That is, \begin{equation} \|\phi_{i}^{*}g_{i}-g_{\infty}\|_{C^{k}_{g_{\infty}}(M_{\infty})}\rightarrow 0 \end{equation} where the $C^{k}_{g_{\infty}}(M)$-norm of a smooth tensor field $W$ on a manifold $M$ is defined as usual as, \begin{equation} \|W\|^{2}_{C^{k}_{g}(M)}:=\sup_{x\in M}\bigg\{\sum_{i=0}^{i=k}|\nabla^{(i)}W|^{2}_{g}(x)\bigg\} \quad \text{where}\quad \nabla^{(i)}W=\underbrace{\nabla\ldots\nabla}_{\text{i-times}} W \end{equation} To fix ideas, the sequence of Riemannian manifolds, \begin{equation} M_{i}=[1/2,3/4]\times \Sa\times \Sa,\quad g_{i}=(1+x^{i})dx^{2}+d\theta_{1}^{2}+d\theta_{2}^{2} \end{equation} converges in $C^{\infty}$ to, \begin{equation} M_{\infty}=[1/2,3/4]\times \Sa\times \Sa,\quad g_{\infty}=dx^{2}+d\theta_{1}^{2}+d\theta_{2}^{2} \end{equation} If a sequence of manifolds $(M_{i};g_{i})$ grows in diameter, then there is no convergence in the previous sense but there can be convergence in the pointed sense to a pointed non-compact manifold $(M_{\infty}, p_{\infty};g_{\infty})$. This means that there is a sequence of points $p_{i}\in M_{i}$ and, for each compact sub-manifold $N\subset M_{\infty}$ containing $p_{\infty}$, there are diffeomorphisms onto their images $\phi_{i}:N\rightarrow M_{i}$ such that $\phi_{i}(p_{\infty})=p_{i}$ and such that $(N;\phi^{*}_{i}g_{i})$ converges in $C^{\infty}$ to $(N;g_{\infty})$. For instance, the sequence of manifolds, \begin{equation} M_{i}=[0,i]\times \Sa\times \Sa,\quad g_{i}=(1+\frac{1}{(1+x)^{i}})dx^{2}+d\theta_{1}^{2}+d\theta_{2}^{2} \end{equation} converges in $C^{\infty}$ and in the pointed sense to, \begin{equation} M_{\infty}=[0,\infty)\times \Sa\times \Sa,\quad g_{\infty}=dx^{2}+d\theta_{1}^{2}+d\theta_{2}^{2} \end{equation} It can happen that a sequence of manifolds metrically collapses into a manifold of lower dimension.
For example, consider the sequence of Riemannian manifolds, \begin{equation} M_{i}=[0,1/2]\times \Sa\times \Sa,\quad g_{i}=dx^{2}+x^{i}d\theta_{1}^{2}+d\theta_{2}^{2} \end{equation} where the coefficient $x^{i}$ of the first $\Sa$ factor tends uniformly to zero as $i\rightarrow \infty$. This sequence of manifolds metrically collapses to the two-dimensional Riemannian manifold, \begin{equation} M_{\infty}=[0,1/2]\times \Sa,\quad g_{\infty}=dx^{2}+d\theta_{2}^{2}. \end{equation} Similarly, the sequence of Riemannian manifolds, \begin{equation} M_{i}=[0,1/2]\times \Sa\times \Sa,\quad g_{i}=dx^{2}+x^{i}d\theta_{1}^{2}+x^{i}d\theta_{2}^{2} \end{equation} metrically collapses to the one-dimensional Riemannian manifold, \begin{equation} M_{\infty}=[0,1/2],\quad g_{\infty}=dx^{2} \end{equation} that is, to the interval $[0,1/2]$ with the usual metric. Metric collapse means that the Gromov-Hausdorff distance (GH-distance, denoted by $d_{GH}$) between them, as metric spaces, tends to zero (see \cite{MR2243772}). It is a general fact that collapse with bounded curvature is always into a one-dimensional manifold or a two-dimensional orbifold. We discuss below the only results that we will use in this respect. The context will always be that of metrically complete static data sets $(\Sigma;\hg,U)$ with $\Sigma$ non-compact and $\partial \Sigma$ compact. Let $\gamma$ be a ray emanating from $\partial \Sigma$, that is, an infinite geodesic $\gamma(s)$ such that $\gamma(0)\in \partial \Sigma$ and $\dist(\gamma(s),\partial \Sigma)=s$ (when the data set is a static black hole data set then we assume, because $\hg$ is singular on $\partial \Sigma$, that $\gamma$ is a ray from the boundary of a compact neighbourhood of $\partial \Sigma$). The first result we will use is the following. Suppose that for a divergent sequence of points $p_{i}\in \gamma$, the rescaled annuli $(\mathcal{A}^{c}_{r_{i}}(p_{i};a,b);\hg_{r_{i}})$ metrically collapse to $([a,b];dx^{2})$.
Note that by Anderson's estimates (see Part I), the collapse is with bounded curvature (and bounded derivatives of the curvature). Then, there is a sequence $\mathcal{B}_{i}$ of neighbourhoods of $\mathcal{A}^{c}_{r_{i}}(p_{i};a,b)$ and finite covers $\tilde{\mathcal{B}}_{i}$ such that $(\tilde{\mathcal{B}}_{i}; \tilde{\hg}_{i})$ converges in $C^{\infty}$ to a ${\rm T}^{2}$-symmetric Riemannian space $([a,b]\times {\rm T}^{2};\tilde{\hg})$.\footnote{Another way to state this is the following. Given $\epsilon>0$ there are $\delta>0$ and $r_{0}>0$ such that for any $p\in \gamma$ with $r=r(p)\geq r_{0}$ for which the annulus $(\mathcal{A}^{c}_{r}(p;a,b);\hg_{r})$ is $\delta$-close in the GH-distance to the segment $[a,b]$, there is a neighbourhood $\mathcal{B}$ of $\mathcal{A}^{c}_{r}(p;a,b)$ and a finite cover $\tilde{\mathcal{B}}$ such that $(\tilde{\mathcal{B}};\tilde{\hg}_{r})$ is $\epsilon$-close in $C^{k}$ to a ${\rm T}^{2}$-symmetric flat space $([a,b]\times {\rm T}^{2};\tilde{\hg})$.} Here it is important that the points $p_{i}$ belong to $\gamma$, otherwise the existence of such coverings may not be true (this is well known; see for instance the examples in \cite{MR3302042}). The second result we will use is the following. Suppose that for a divergent sequence of points $p_{i}\in \gamma$, the rescaled annuli $(\mathcal{A}^{c}_{r_{i}}(p_{i};a,b); \hg_{r_{i}})$ metrically collapse, but not into a segment. By Anderson's estimates again, the collapse is with bounded curvature (and bounded derivatives of the curvature).
Then, there is a sequence $\mathcal{B}_{i}$ of neighbourhoods of $\mathcal{A}^{c}_{r_{i}}(p_{i};a,b)$ collapsing into a two-dimensional Riemannian orbifold with orbifold points of angles $2\pi/2, 2\pi/3, 2\pi/4,\ldots$ Furthermore, if a sequence of points $q_{i}$ converges to a non-orbifold point $q$ then there are neighbourhoods $\mathcal{U}_{i}$ of $q_{i}$ and finite covers $(\tilde{\mathcal{U}}_{i};\tilde{\hg}_{i})$ converging in $C^{\infty}$ to an $\Sa$-symmetric Riemannian manifold, whose quotient by $\Sa$ is isometric to a neighbourhood of the limit point $q$ in the limit Riemannian manifold. For collapse of two-dimensional manifolds the situation is similar but simpler. We will use the following. Let $(S;q)$ be a non-compact Riemannian manifold with non-empty boundary and let $\gamma$ be a ray from $\partial S$. Let $p_{i}\in \gamma$ be a divergent sequence of points. Suppose that $(\mathcal{A}^{c}_{r_{i}}(p_{i};a,b);q_{i})$ metrically collapses with bounded curvature. Then it does so into an interval $[a,b]$ and there is a sequence of neighbourhoods $\mathcal{B}_{i}$ of $\mathcal{A}^{c}_{r_{i}}(p_{i};a,b)$ and finite covers $\tilde{\mathcal{B}}_{i}$, such that $(\tilde{\mathcal{B}}_{i};q_{i})$ converges in $C^{\infty}$ to an $\Sa$-symmetric Riemannian manifold, whose quotient by $\Sa$ is $[a,b]$. The existence of the coverings for each case follows from Theorem 12.1 in \cite{MR1145256}. The orbifold structure when there is two-dimensional metric collapse follows from Proposition 11.5 in \cite{MR1145256}.
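A simple instance of the two-dimensional picture is the following. The flat cylinders $([a,b]\times \Sa;\ dx^{2}+\epsilon_{i}^{2}d\theta^{2})$ with $\epsilon_{i}\rightarrow 0$ metrically collapse to $([a,b];dx^{2})$, and taking the $k_{i}$-fold covers with $k_{i}=\lfloor 1/\epsilon_{i}\rfloor$, so that $k_{i}\epsilon_{i}\rightarrow 1$, we obtain
\begin{equation}
(\tilde{\mathcal{B}}_{i};\tilde{q}_{i})=([a,b]\times \Sa;\ dx^{2}+(k_{i}\epsilon_{i})^{2}d\theta^{2})\longrightarrow ([a,b]\times \Sa;\ dx^{2}+d\theta^{2}),
\end{equation}
an $\Sa$-symmetric limit whose quotient by $\Sa$ is $[a,b]$, as in the statement above.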
\subsection{The Kasner solutions}\label{SSKK} \subsubsection{Explicit form and parameters} The Kasner data, denoted by $\mathbb{K}$, are $\mathbb{R}^{2}$-symmetric solutions explicitly given by \begin{equation} \label{KASNERO} \sg=dx^{2} +x^{2\alpha}d y^{2} +x^{2\beta}d z^{2},\quad N=x^{\gamma} \end{equation} with $(x,y,z)$ varying in the manifold $\mathbb{R}^{+}\times \mathbb{R}\times \mathbb{R}$, and where $(\alpha,\beta,\gamma)$ satisfy \begin{equation}\label{CIRCLE} \alpha+\beta+\gamma=1\quad \text{and}\quad \alpha^{2}+\beta^{2}+\gamma^{2}=1 \end{equation} but are otherwise arbitrary (see Figure \ref{Figure21}). The solutions corresponding to two different triples $(\alpha,\beta,\gamma)$ and $(\alpha',\beta',\gamma')$ are equivalent (i.e. isometric) iff $\alpha=\beta'$, $\beta=\alpha'$ and $\gamma=\gamma'$. The metrics (\ref{KASNERO}) are flat only when $(\alpha,\beta,\gamma)=(1,0,0), (0,1,0)$ or $(0,0,1)$. We will give them the following names, \begin{equation} A:\ (\alpha,\beta,\gamma)=(1,0,0),\quad C:\ (\alpha,\beta,\gamma)=(0,1,0),\quad B:\ (\alpha,\beta,\gamma)=(0,0,1) \end{equation} The solution $B$ is the Boost. $\mathbb{Z}$-actions, $\mathbb{Z}\times (0,\infty)\times \mathbb{R}^{2}\rightarrow (0,\infty)\times \mathbb{R}^{2}$, are given by fixing a (non-zero) vector field $X$, a combination of $\partial_{y}$ and $\partial_{z}$, and letting $(n,p)\rightarrow p+nX$. The quotients are ${\rm S}^{1}$-symmetric static solutions. Similarly, $\mathbb{Z}^{2}$-quotients give ${\rm S}^{1}\times {\rm S}^{1}$-symmetric static solutions. The $\mathbb{Z}^{2}$-quotients of the Kasner space will also be called Kasner spaces and will also be denoted by $\mathbb{K}$. These are the spaces defining the Kasner asymptotic.
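As a quick check, each of the three flat triples satisfies (\ref{CIRCLE}), and the flatness can be seen directly. For the solution $A$ the data are
\begin{equation}
\sg=dx^{2}+x^{2}dy^{2}+dz^{2},\quad N=1,
\end{equation}
that is, polar-type coordinates $(x,y)$ on a flat half-plane times the $z$-line, while for the Boost $B$ one has $N=x$ and the four-metric $N^{2}dt^{2}+\sg=x^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}$ is again flat, now with $(x,t)$ playing the polar role.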
\subsubsection{The harmonic presentation} The Kasner spaces in the harmonic presentation are \begin{equation} \label{KHARMAP1} \hg=dx^{2}+x^{2a}dy^{2}+x^{2b}d z^{2},\quad U=c\ln x \end{equation} where $a, b$ and $c$ satisfy \begin{equation} 2c^{2}+(a-\frac{1}{2})^{2}+(b-\frac{1}{2})^{2}=\frac{1}{2}\quad \text{and}\quad a+b=1 \end{equation} Thus, the circle (\ref{CIRCLE}) (see Figure \ref{Figure21}) is now seen as an ellipse in the plane $a+b=1$ (see Figure \ref{Figure31}). The $g$-flat solutions $A, B$ and $C$ are, \begin{equation} A:\ (a,b,c)=(1,0,0),\quad C:\ (a,b,c)=(0,1,0),\quad B:\ (a,b,c)=(1/2,1/2,1/2) \end{equation} The Kasner solutions (\ref{KHARMAP1}) are scale invariant. Namely, for any $\lambda>0$, $(\mathbb{R}^{+}\times \mathbb{R}^{2};\lambda^{2}\hg)$ represents the same Kasner space as $(\mathbb{R}^{+}\times \mathbb{R}^{2};\hg)$ does. This can be seen by making the change \begin{equation} {\tt x}=\lambda x,\quad {\tt y}=\lambda^{1-a}y,\quad {\tt z}= \lambda^{1-b}z \end{equation} that transforms (\ref{KHARMAP1}) into \begin{equation}\label{KASNER} \hg=d{\tt x}^{2} +{\tt x}^{2a}d {\tt y}^{2} +{\tt x}^{2b}d {\tt z}^{2},\quad U=c\ln {\tt x} -c\ln \lambda \end{equation} Another way to say this is that $(1-2c)t\partial_{t}+x\partial_{x}+(1-a)y\partial_{y}+(1-b)z\partial_{z}$ is a homothetic Killing field of the spacetime. The scale invariance can of course also be seen in the original space $(\mathbb{R}^{+}\times \mathbb{R}^{2};g,N)$. Note that in general, the isometry that exists between $(\mathbb{R}^{+}\times \mathbb{R}^{2};\hg)$ and $(\mathbb{R}^{+}\times \mathbb{R}^{2};\lambda^{2}\hg)$ does not pass to the quotient by a $\mathbb{Z}\times \mathbb{Z}$-action. \subsubsection{Uniqueness}\label{UNIQ} The Kasner data are the only data with a free $\mathbb{R}\times \mathbb{R}$-symmetry other than the Minkowski data \begin{equation} \Sigma=\mathbb{R}^{3},\quad g=dx^{2}+dy^{2}+dz^{2},\quad N=1.
\end{equation} We now give a proof of this fact in a way that will be useful when we study the Kasner asymptotic later in Section \ref{ENDSAK}. The proof is as follows. We work in the harmonic presentation $(\Sigma;\hg,U)$; geometric tensors are therefore defined with respect to $\hg$. If the data set $(\Sigma;\hg,U)$ has a free $\mathbb{R}^{2}$-symmetry, and is not the Minkowski solution, then $U$ can be taken as a harmonic coordinate with range in an interval $I$. Then, on $\mathbb{R}^{2}\times I$ we can write \begin{equation}\label{UDECESTIM} \hg=\lambda^{2}dU^{2}+h \end{equation} where $\lambda=\lambda(U)$, and where $h=h(U)$ is a family of flat metrics on $\mathbb{R}^{2}$. Without loss of generality assume that $U=0$ at the left end of $I$. Let $(z_{1},z_{2})$ be a (flat) coordinate system on $\mathbb{R}^{2}\times \{0\}$. In the coordinate system $(z_{1},z_{2},U)$ the static equation $Ric_{\hg}=2\nabla U\nabla U$ reduces to \begin{align} \label{RTE1} & \partial_{U}h_{AB}=2\lambda \Theta_{AB},\\ \label{RTE2} & \partial_{U}\Theta_{AB}=\lambda(-\theta\Theta_{AB}+2\Theta_{AC}\Theta^{C}_{\ B}),\\ \label{RTE3} & \Theta_{AB}\Theta^{AB}-\theta^{2}=-\frac{2}{\lambda^{2}}, \end{align} where $\Theta$ is the second fundamental form of the leaves $\mathbb{R}^{2}\times \{U\}$ and $\theta=\Theta_{A}^{\ A}$ is the mean curvature. The static equation $\Delta_{\hg} U=0$ reduces to \begin{equation}\label{RTE4} \partial_{U}\bigg(\frac{\sqrt{|h|}}{\lambda}\bigg)=0 \end{equation} where $|h|$ is the determinant of $h_{AB}$. Hence \begin{equation}\label{LEQO} \Gamma \sqrt{|h|}= \lambda \end{equation} for a constant $\Gamma>0$.
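For completeness, note that (\ref{RTE4}) is just the coordinate expression of the harmonicity of $U$: for the metric (\ref{UDECESTIM}) one has $\sqrt{|\hg|}=\lambda\sqrt{|h|}$ and $\hg^{UU}=1/\lambda^{2}$, so
\begin{equation}
\Delta_{\hg}U=\frac{1}{\lambda\sqrt{|h|}}\,\partial_{U}\Big(\lambda\sqrt{|h|}\cdot\frac{1}{\lambda^{2}}\Big)=\frac{1}{\lambda\sqrt{|h|}}\,\partial_{U}\bigg(\frac{\sqrt{|h|}}{\lambda}\bigg).
\end{equation}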
This can be inserted in (\ref{RTE1})-(\ref{RTE2}) to get the autonomous system of ODEs \begin{align} \label{RTE21} & \partial_{U}h_{AB}=2\Gamma \sqrt{|h|} \Theta_{AB},\\ \label{RTE22} & \partial_{U}\Theta_{AB}=\Gamma \sqrt{|h|}(-\theta\Theta_{AB}+2\Theta_{AC}\Theta^{C}_{\ B}), \end{align} The equation (\ref{RTE3}) transforms into \begin{equation}\label{RTE23} \Theta_{AB}\Theta^{AB}-\theta^{2}=-\frac{2}{\Gamma^{2}|h|}, \end{equation} and it is direct to see that it holds for all $U$ provided it holds for $U=0$ and (\ref{RTE21}) and (\ref{RTE22}) hold for all $U$. Equation (\ref{RTE23}) is thus only a ``constraint'' equation. Therefore the system (\ref{RTE1})-(\ref{RTE3}) is solved by giving $h_{AB}(0), \Theta_{AB}(0)$ and $\Gamma>0$ satisfying (\ref{RTE23}), then running (\ref{RTE21})-(\ref{RTE22}) and finally obtaining $\lambda$ from (\ref{LEQO}). To solve (\ref{RTE21})-(\ref{RTE22}) first change variables from $U$ to $s$, where $ds=\Gamma\sqrt{|h|}dU$. The system (\ref{RTE21})-(\ref{RTE22}) now reads \begin{align} \label{RTE31} & \partial_{s}h_{AB}=2 \Theta_{AB},\\ \label{RTE32} & \partial_{s}\Theta_{AB}=-\theta\Theta_{AB}+2\Theta_{AC}\Theta^{C}_{\ B}, \end{align} Use these equations to check that \begin{align} \label{THE13} & \partial_{s}\theta=-\theta^{2},\\ \label{THE12} & \partial_{s}\Theta_{12}=(\Theta_{11}h^{11}+\Theta_{22}h^{22}-2\Theta_{12}h^{12})\Theta_{12} \end{align} Thus, $\theta$ has its own evolution equation, which gives $\theta(s)=1/(s+1/\theta(0))$. Moreover if we choose $(z_{1},z_{2})$ on $\{U=0\}$ to diagonalise $h(0)$ and $\Theta(0)$ simultaneously (i.e. $h_{11}(0)=1, h_{22}(0)=1, h_{12}(0)=0$ and $\Theta_{12}(0)=0$), then (\ref{THE12}) shows that $\Theta_{12}=0$ and $h_{12}=0$ for all $s$ and therefore that the evolutions of $h_{11}$ and $h_{22}$ decouple into independent ODEs.
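To make the last step explicit, write $h_{11}=e^{2u}$, so that (\ref{RTE31}) gives $\Theta_{11}=u'e^{2u}$; substituting into (\ref{RTE32}) and using (\ref{THE13}) yields
\begin{equation}
u''=-\theta u'=-\frac{u'}{s+1/\theta(0)},\qquad\text{hence}\qquad u'=\frac{C_{1}}{s+1/\theta(0)}\quad\text{and}\quad h_{11}=C_{2}\,\big(s+1/\theta(0)\big)^{2C_{1}},
\end{equation}
for constants $C_{1}, C_{2}$, and similarly for $h_{22}$. The components of $h$ are thus power laws in $s+1/\theta(0)$, as in the Kasner solutions.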
With this information it is straightforward to see that the solutions to (\ref{RTE31})-(\ref{RTE32}) which also satisfy (\ref{RTE23}) at the initial time are only the Kasner solutions. We will use all the previous discussion later in Section \ref{ENDSAK}. \section{Free $\Sa$-symmetric solutions}\label{S1S} This section studies various aspects of data sets with a free $\Sa$-symmetry. The contents are as follows. Subsection \ref{RDRE} presents the reduced equations, Proposition \ref{PRED}. Subsection \ref{KSOL} discusses the reduced Kasner spaces and subsection \ref{CIGARSOL} describes thoroughly a reduced data set that we call the `cigars' (due to their geometric shape). Subsection \ref{CIGUNIQU} proves the cigars' uniqueness and subsection \ref{CIGNHCP} characterises the cigars as the data that model high-curvature regions. These properties of the cigars play an essential role in subsection \ref{DFIAT}, where it is proved that $|\nabla U|^{2}$, $|\nabla V|^{2}$ and $\kappa$ (the Gaussian curvature of $q$) have quadratic decay at infinity on $(\qM; q)$, provided $(\qM;q)$ is metrically complete and $\partial S$ is compact. The discussion of such decay depends on whether the twist $\Omega$ of $\xi$, which is a constant, is zero or not. In the same subsection \ref{DFIAT} it is shown, using the decay previously proved, that $S$ has only a finite number of simple ends, each diffeomorphic to $[0,\infty)\times \Sa$. Furthermore it is proved in Proposition \ref{LPRO} that $U$ has a limit $U_{\infty}$ at infinity, $-\infty\leq U_{\infty}\leq \infty$. Finally, subsection \ref{RDSACL} describes the global structure of reduced data sets arising as collapsed limits, which will be relevant to the study of the asymptotic of static ends through scalings. \subsection{The reduced data and the reduced equations}\label{RDRE} Let $(\sM;g,N)$ be a static data set invariant under a free $\Sa$-action. The action induces a foliation of $\Sigma$ by $\Sa$-invariant circles.
Let $(\Sigma;\hg,U)$ be the harmonic presentation. We will quotient the data $(\Sigma;\hg,U)$ by the Killing field and study the reduced system. The complete list of reduced variables, and other necessary notation, is the following. \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm -}, widest=a, align=left] \item As usual let $\hg=N^{2}g$, \item let $\xi$ be the Killing field generating the $\Sa$-action, \item let $\Lambda=|\xi|_{\hg}$ be the $\hg$-norm of $\xi$, \item let $\Omega=\epsilon^{\hg}_{abc}\xi^{a}\nabla^{b}\xi^{c}$ be the $\hg$-twist of $\xi$ ($\epsilon^{\hg}$ is the $\hg$-volume form and $\nabla$ any covariant derivative), \item let $U=\ln N$, \item let $V=\ln \Lambda$, \item let $\qM$ be the quotient manifold of $\sM$ by the $\Sa$-action, \item let $\hqg$ be the quotient two-metric of $\hg$, \item let $\gcur$ be the Gaussian curvature of $\hqg$. \end{enumerate} With all this at hand, the following is the definition of a reduced static data set. \begin{Definition}[Reduced static data set] A data set $(S;q,U,V)$ arising from reducing a $\Sa$-invariant static data set is a reduced static data set. \end{Definition} The next proposition presents the reduced equations of a reduced data set\footnote{We have not found a reference for these equations, though most likely they are given somewhere in the literature.}. The equations involve only $q$, $U$, $V$ and the constant $\Omega$; the tensor $Ric$ and the operators $\Delta$, $\nabla$ and $\langle\ ,\ \rangle$ are therefore with respect to $q$. \begin{Proposition}\label{PRED} The (reduced) static equations of a reduced data set $(\qM;q,U,V)$ are, \begin{align} \label{ES1} & Ric=\nabla\nabla V +\nabla V\nabla V +\frac{1}{2}\Omega^{2}e^{-4V}q+2\nabla U\nabla U,\\ \label{ES2} & \Delta V +\langle \nabla V,\nabla V\rangle=\frac{1}{2}\Omega^{2}e^{-4V},\\ \label{ES3} & \Delta U +\langle \nabla U,\nabla V\rangle=0. \end{align} where $\Omega$ (introduced earlier) is constant. Moreover $\Omega$ is zero iff $\xi$ is hypersurface orthogonal inside $\sM$.
\end{Proposition} Before passing to the proof let us make some comments on the reduced equations. \vspace{0.2cm} - When $\Omega=0$ the system (\ref{ES1})-(\ref{ES2}) is locally equivalent to the Weyl equations around any point where $\nabla \Lambda\neq 0$. We won't use this information however in the rest of the article. \vspace{0.2cm} - The solutions to (\ref{ES1})-(\ref{ES3}) are invariant under the simultaneous transformations \begin{equation}\label{TRANSSCA} q\rightarrow \lambda^{2}q,\quad V\rightarrow V+\frac{1}{2}\ln \nu,\quad U\rightarrow U+\mu, \quad \Omega\rightarrow \frac{\nu}{\lambda} \Omega \end{equation} for any $\lambda>0, \nu>0$ and $\mu$. Namely, if we replace $(q,V,U)$ and $\Omega$ in (\ref{ES1})-(\ref{ES3}) by $(\lambda^{2}q, V+\frac{1}{2}\ln \nu, U+\mu)$ and $\nu\Omega/\lambda$ respectively, then the equations are still verified. We will call them simply ``scalings'' and denote them by $(\lambda,\nu,\mu)$. \vspace{0.2cm} - Given a solution to (\ref{ES1})-(\ref{ES2}), the metric $\hg$ can be recovered using the expression \begin{equation}\label{OVERED} \hg=q_{ab}dx^{a}dx^{b}+\Lambda^{2}(d\varphi+\theta_{i}dx^{i})^{2} \end{equation} where $(x^{1},x^{2})$ are coordinates on $S$ and where the one-form $\theta$ is found by solving \begin{equation}\label{MFORM} d(\theta_{i}dx^{i}) = \frac{\Omega}{\Lambda^{3}}\sqrt{|q|}dx^{1}\wedge dx^{2} \end{equation} where $|q|$ is the determinant of $q_{ij}$ and where $\partial_{\varphi}=\xi$ is the original Killing field. As $\xi$ is the generator of a $\Sa$-action, the range of $\varphi$ is $[0,2\pi)$. Without this information the range of $\varphi$ is undetermined. This is related to the fact that, locally, the reduction procedure requires only that $\xi$ is a non-zero Killing field. If the orbits of $\xi$ do not close up in parametric time $2\pi$, the reduced equations (\ref{ES1})-(\ref{ES3}) still hold, and to recover $\hg$ using (\ref{OVERED}) and (\ref{MFORM}) the right range of $\varphi$ needs to be provided.
This indeterminacy gives rise to two globally inequivalent ways of scaling the data $(\Sigma;\hg,U;\xi)$ that give rise to the same reduced variables and equations. We assume that $\xi\neq 0$ and has closed orbits. The first is the scaling, \begin{equation}\label{FIRSCA} \hg\rightarrow \lambda^{2}\hg,\qquad \xi\rightarrow \frac{\sqrt{\nu}}{\lambda}\xi \end{equation} the second is (recall $\hg=q_{ij}dx^{i}dx^{j}+\Lambda^{2}(d\varphi+\theta_{i}dx^{i})^{2}$), \begin{align}\label{SECSCA} \hg\rightarrow \lambda^{2}q_{ij}dx^{i}dx^{j}+\nu\Lambda^{2}(d\varphi +\frac{\lambda}{\nu^{1/2}}\theta_{i}dx^{i})^{2},\quad \xi\rightarrow \xi \end{align} In either case, the reduced variables $(q,U,V)$ scale in the same way (\ref{TRANSSCA}). The two new three-metrics are locally isometric but the new lengths of the orbits of the Killing field $\xi$ do not necessarily coincide. The length of the orbits is scaled by $\lambda$ in the first case, and by $\sqrt{\nu}$ in the second case. \vspace{0.2cm} - As in dimension two we have $Ric=\gcur q$, the equations (\ref{ES1})-(\ref{ES2}) imply that the Gaussian curvature acquires the expression \begin{equation}\label{KAPPAF} \gcur =\frac{3}{4}\Omega^{2}e^{-4V}+|\nabla U|^{2}. \end{equation} In particular $\gcur$ is non-negative. This will be an important property when analysing the geometry of the reduced data. \vspace{0.2cm} The proof of Proposition \ref{PRED} is just computational and relies on formulae in \cite{Dain:2008xr}. We include it for the sake of completeness, but it can be skipped otherwise. \begin{proof}[Proof of Proposition \ref{PRED}] We use calculations from \cite{Dain:2008xr}, but the notation is different. Precisely, we use the following notation: $\mathcal{N}$ is the quotient of the spacetime manifold $\stM$ by the $\Sa$-action, $\omega_{a}$ is the twist one-form of the Killing field $\xi$ in the spacetime and $\lambda$ its norm.
Naturally, we have the commutative diagram \vspace{0.2cm}\vs \begin{center} \setlength{\unitlength}{.75mm} \begin{picture}(50,25) \put(12,5){\vector(0,1){15}} \put(52,5){\vector(0,1){15}} \put(10,0){$\sM$} \put(50,0){$\stM$} \put(17,2){\vector(1,0){30}} \put(30,5){$i_{\sM}$} \put(10,22){$\qM$} \put(50,22){$\mathcal{N}$} \put(17,24){\vector(1,0){30}} \put(30,27){$i_{\qM}$} \put(7,12){$\pi$} \put(54,12){$\pi$} \end{picture} \end{center} where the $\pi$'s are the projections into the quotient spaces and the inclusions $i_{\sM}$ and $i_{\qM}$ are totally geodesic, namely the second fundamental form $K$ of $\sM$ in $\stM$ and the second fundamental form $\chi$ of $\qM$ in $\mathcal{N}$ are both zero. Let $\boldsymbol{n}$ be the normal to $\qM$ in $\mathcal{N}$. Equation (45) from \cite{Dain:2008xr} implies $\boldsymbol{n}(\lambda)=0$ and $i^{*}_{S}\omega_{a}=0$. Using this information inside (18) of \cite{Dain:2008xr} we obtain, \begin{equation} \boldsymbol{\tilde{\nabla}}_{a}\boldsymbol{\tilde{\nabla}}^{a}\lambda=\frac{\omega(\boldsymbol{n})^{2}}{2\lambda^{3}} \end{equation} where $\boldsymbol{\tilde{\nabla}}_{a}$ is the covariant derivative of the quotient metric on $\mathcal{N}$. We compute \begin{equation} \boldsymbol{\tilde{\nabla}}_{a}\boldsymbol{\tilde{\nabla}}^{a}\lambda=-\boldsymbol{n}^{a}\boldsymbol{n}^{b}\boldsymbol{\tilde{\nabla}}_{a}\boldsymbol{\tilde{\nabla}}_{b} \lambda +\Delta\lambda=\langle \frac{\nabla N}{N},\nabla\lambda\rangle +\Delta\lambda \end{equation} where now $\Delta$ and $\langle\ ,\ \rangle$ are defined with respect to the quotient two-metric over $\qM$, which we denote by $h$.
Thus \begin{equation}\label{LEQQ} \Delta\lambda+\langle \frac{\nabla N}{N},\nabla\lambda\rangle=\frac{\omega(\boldsymbol{n})^{2}}{2\lambda^{3}} \end{equation} On the other hand, as $N$ is harmonic in $(\sM,g)$ we have \begin{equation}\label{NEQ} \Delta N + \langle \nabla N, \frac{\nabla \lambda}{\lambda}\rangle=0 \end{equation} where the operators are again with respect to $h$. Finally, the equations (26) and (30) in \cite{Dain:2008xr} give \begin{equation} \gcur_{h}=\frac{\Delta\lambda}{\lambda}+\frac{1}{4}\frac{\omega(\boldsymbol{n})^{2}}{\lambda^{4}} \end{equation} where $\gcur_{h}$ is the Gaussian curvature of $h$. Now, $q=N^{2}h$, hence \begin{equation}\label{GCUREQ} N^{2}\gcur=\gcur_{h}-\Delta \ln N=\gcur_{h}-\frac{\Delta N}{N}+\frac{|\nabla N|^{2}}{N^{2}} \end{equation} where again $\Delta$ and $|\ \ |$ are with respect to $h$. Combining (\ref{LEQQ}), (\ref{NEQ}) and (\ref{GCUREQ}) we obtain \begin{equation}\label{GCUREQII} \gcur=\frac{3}{4}\frac{\omega(\boldsymbol{n})^{2}}{N^{2}\lambda^{4}}+\frac{|\nabla N|^{2}}{N^{4}} \end{equation} Now, the spacetime expression \begin{equation} \partial_{t}^{a}\boldsymbol{\epsilon}_{abcd}\xi^{b}\boldsymbol{\nabla}^{c}\xi^{d}=N\omega(\boldsymbol{n}) \end{equation} is well known to be constant, where $\boldsymbol{\nabla}$ is the spacetime covariant derivative and $\boldsymbol{\epsilon}$ the spacetime volume form (see \cite{MR757180} Theorem 7.1.1). On the other hand \begin{equation}\label{OEQ} \Omega=N\epsilon^{\sg}_{abc}\xi^{a}\nabla^{b}\xi^{c}=\partial_{t}^{a}\boldsymbol{\epsilon}_{abcd}\xi^{b}\boldsymbol{\nabla}^{c}\xi^{d} \end{equation} where $\epsilon^{\sg}_{abc}$ is the $\sg$-volume form. Expressing (\ref{LEQQ}), (\ref{NEQ}), (\ref{GCUREQII}) and (\ref{OEQ}) in terms of $U,V$, and expressing the Laplacians and norms in terms of $q$, we obtain (\ref{ES2})-(\ref{ES3}).
To obtain (\ref{ES1}) use \begin{equation} \kappa_{h}h_{ab}=\frac{\nabla_{a}\nabla_{b}\lambda}{\lambda}+\frac{\omega(\boldsymbol{n})^{2}}{2\lambda^{4}}h_{ab}+\frac{\nabla_{a}\nabla_{b}N}{N} \end{equation} taken from eqs. (20) and (25) in \cite{Dain:2008xr}, and re-express it in terms of $q_{ab}$ and its covariant derivative. \end{proof} \subsection{Example: the reduced Kasner}\label{KSOL} The simplest examples of reduced static data sets come from reducing the Kasner solutions through suitable Killing fields. Below we describe the reduced Kasner in detail. Recall that the Kasner data sets (in the harmonic presentation) are \begin{align} \hg=dx^{2}+x^{2a}dy^{2}+x^{2b}d z^{2},\qquad U=U_{1}+c\ln x \end{align} where $c,a$ and $b$ satisfy $c^{2}+(a-\frac{1}{2})^{2}=\frac{1}{4}$ and $a+b=1$. If we reduce these metrics through the Killing field $\xi=\lambda \partial_{z}$ we obtain the reduced data $(q,U,V)$, \begin{align} & q=dx^{2}+x^{2a}d\varphi^{2},\\ & U=U_{1}+c\ln x,\\ & V=V_{1}+b\ln x \end{align} where of course \begin{equation} c^{2}+(a-\frac{1}{2})^{2}=\frac{1}{4},\qquad a+b=1, \end{equation} and also \begin{equation} \Omega=0 \end{equation} Above we set $V_{1}=\ln \lambda$ (note that $V_{1}=V(1)$ and that $U_{1}=U(1)$). If we make this solution periodic in $\varphi$ and vary $a$ (hence $b$ and $c$) and $\lambda$, we obtain all the possible reduced solutions with $\Omega=0$ and with a $\Sa$-symmetry (in $\varphi$). More generally, we can quotient the Kasner solutions by the Killing field \begin{equation} \xi=\lambda(\cos\omega\ \partial_{y}+\sin\omega\ \partial_{z}) \end{equation} for any fixed $\lambda>0$ and $\omega\in [0,2\pi)$.
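The $\hg$-norm of this Killing field can be read off directly from (\ref{KHARMAP1}): since $\partial_{y}$ and $\partial_{z}$ are $\hg$-orthogonal with $|\partial_{y}|_{\hg}=x^{a}$ and $|\partial_{z}|_{\hg}=x^{b}$, we get
\begin{equation}
\Lambda^{2}=|\xi|^{2}_{\hg}=\lambda^{2}\big(x^{2a}\cos^{2}\omega+x^{2b}\sin^{2}\omega\big),
\end{equation}
which already accounts for the form of $V=\ln\Lambda$ in the reduced data.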
A direct calculation shows that the reduced data set $(q,U,V)$ is \begin{align} & q=dx^{2}+\bigg[\frac{x^{2}}{x^{2a}\cos^{2}\omega+x^{2b}\sin^{2}\omega}\bigg]d\varphi^{2},\\ & U=U_{1}+c\ln x,\\ & V=V_{1}+\frac{1}{2}\ln (x^{2a}\cos^{2}\omega+x^{2b}\sin^{2}\omega), \end{align} where of course \begin{equation} c^{2}+(a-\frac{1}{2})^{2}=\frac{1}{4},\qquad a+b=1, \end{equation} and furthermore \begin{equation} \Omega^{2}=4e^{4V_{1}}(a-b)^{2}\cos^{2}\omega\sin^{2}\omega \end{equation} Above we set $e^{V_{1}}=\lambda$ (note that $V_{1}=V(1)$ and that $U_{1}=U(1)$). If we make this solution periodic in $\varphi$ and vary $a$ (hence $b$ and $c$), $\lambda$ and $\omega$, we obtain all the possible reduced solutions with $\Omega\neq 0$ and with a $\Sa$-symmetry (in $\varphi$). A simple computation shows that, as long as $\Omega\neq 0$, the norm $\Lambda$ of the Killing field $\xi$ grows at least as fast as the square root of the distance to the boundary of the data set. More precisely we have \begin{equation} \Lambda^{2}\geq \eta |\Omega| x \end{equation} where $\eta$ does not depend on the data set. As we will see later this is indeed a general property of the asymptotic of any reduced data set. \subsection{A subclass of the reduced Kasner: the cigars}\label{CIGARSOL} When either $(a,b)=(1,0)$ or $(a,b)=(0,1)$ and $\omega\notin \{0,\pi/2,\pi,3\pi/2\}$ we obtain an important class of solutions that we will call the {\it cigars} (motivated by their shape, see Figure \ref{Figure1}). Their metrics are complete on $\mathbb{R}^{2}$. After a convenient change of variables, the cigars are given by \begin{equation}\label{CIGAR} U=U_{0},\quad V=V_{0}+\frac{1}{2}\ln (1+r^{2})\quad \text{and}\quad q=4\Omega^{-2}e^{4V_{0}}\big(dr^{2}+\frac{r^{2}}{1+r^{2}}d\varphi^{2}\big) \end{equation} where $U_{0}$ and $V_{0}$ are arbitrary constants and where $r$ is the radial coordinate from the origin and $\varphi$ is the polar angle ranging in $[0,2\pi)$ (note that $V_{0}=V(r=0)$).
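One can check directly that (\ref{CIGAR}) solves the reduced equations. For instance, for (\ref{ES2}), writing $\Lambda=e^{V}=e^{V_{0}}\sqrt{1+r^{2}}$ and $c^{2}=4\Omega^{-2}e^{4V_{0}}$ for the conformal factor of $q$, one computes
\begin{equation}
\Delta V+\langle\nabla V,\nabla V\rangle=\frac{\Delta\Lambda}{\Lambda}=\frac{1}{\Lambda}\,\frac{\sqrt{1+r^{2}}}{c^{2}r}\,\partial_{r}\Big(e^{V_{0}}\frac{r^{2}}{1+r^{2}}\Big)=\frac{2}{c^{2}(1+r^{2})^{2}}=\frac{1}{2}\Omega^{2}e^{-4V},
\end{equation}
as required.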
The asymptotic metric is $q=4\Omega^{-2}e^{4V_{0}}(dr^{2}+d\varphi^{2})$, hence cylindrical, with circular sections of length $4\pi \Omega^{-1}e^{2V_{0}}$. \vspace{0.2cm}\vs \begin{figure}[h] \centering \includegraphics[width=10cm,height=.8cm]{CIGAR.pdf} \caption{Representation of the cigar.} \label{Figure1} \end{figure} \vspace{0.2cm}\vs As $U$ is constant, the lapse $N$ is also constant and the original static solution (from which the data (\ref{CIGAR}) comes) is flat. Let us now explain which quotient of $\mathbb{R}^{3}$ gives rise to the cigars. For any positive $\delta$ we let $T_{\delta}$ be the translation in $\mathbb{R}^{3}$ of magnitude $\delta$ along the $z$-axis and for any $\varphi$ we let $R_{\varphi}$ be the rotation in $\mathbb{R}^{3}$ of angle $\varphi$ around the $z$-axis. Consider the isometric $\mathbb{R}$-action $I$ on $\mathbb{R}^{3}$ given by \begin{equation} I: (t)\times (x,y,z)\longrightarrow T_{te^{V_{0}}}\big( R_{t\Omega (e^{-V_{0}})/2}(x,y,z)\big). \end{equation} Now, we quotient $\mathbb{R}^{3}$ as follows: two points $(x,y,z)$ and $(x',y',z')$ are identified iff $(x',y',z')=I(2\pi n,(x,y,z))$ for some $n\in \mathbb{Z}$. The quotient carries a free $\Sa$-symmetry, the action being the restriction of $I$ to $[0,2\pi)$. A straightforward calculation shows that the quotient data $(q,U,V)$ is the cigar solution. \subsubsection{The cigars' uniqueness}\label{CIGUNIQU} The cigars (\ref{CIGAR}) are the only complete non-compact boundary-less solutions to (\ref{ES1})-(\ref{ES3}) with $\Omega\neq 0$. To see this observe that any complete non-compact solution must have $U$ constant because $U$ satisfies \begin{equation}\label{UEST} |\nabla U|(p)\leq \frac{\eta}{\dist(p,\partial \qM)} \end{equation} and if $\qM$ is complete and non-compact then $\dist(p,\partial \qM)=\infty$ and $U$ is constant (this decay follows directly from Anderson's estimate; we will give another proof of it in Proposition \ref{REDCUR}).
Thus, as before, the original static $(\sM; g, N)$ solution is flat (and a $\Sa$-bundle). It is not difficult to see that the only possibility must be a quotient of $\mathbb{R}^{3}$ as described above. However, in Proposition \ref{LLPP} we give an alternative proof whose technique will be useful later when we present the cigar as the singularity model. Before that, and for the sake of completeness, we prove that the only complete (reduced) data set with $\Omega=0$ is flat with $U$ constant. \begin{Proposition} The only complete boundary-less (reduced) static data with $\Omega = 0$ is flat with $U$ constant. \end{Proposition} \begin{proof} As noted above, $U$ must be constant, say $U=U_{0}$; as $\Omega=0$ we then have $\nabla\nabla \Lambda=0$ (eq. (\ref{ES1})). This implies that $\Lambda$ is linear along geodesics. Thus, as the space is complete and $\Lambda>0$, $\Lambda$ must be constant and $q$ flat. The result follows. \end{proof} \begin{Proposition}\label{LLPP} The only complete boundary-less (reduced) static data with $\Omega\neq 0$ are the cigars. \end{Proposition} \begin{proof} The estimate (\ref{UEST}) shows that $U$ must be constant, i.e. $U=U_{0}$. Hence, making $\overline{\Lambda}=\sqrt{2/\Omega}\ \Lambda$ we have \begin{align} \nabla\nabla \overline{\Lambda}=\frac{1}{\overline{\Lambda}^{3}}q,\qquad \kappa=\frac{3}{\overline{\Lambda}^{4}} \end{align} The first is an equation of Killing type and can be integrated easily along geodesics. If $\gamma(s)$ is a geodesic parametrised by arc-length then we have $\overline{\Lambda}''=\overline{\Lambda}^{-3}$ (writing $\overline{\Lambda}(s)=\overline{\Lambda}(\gamma(s))$), which has the solutions \begin{equation}\label{SOLLAM} \overline{\Lambda}^{2}(s)=\frac{1}{(\overline{\Lambda}{'}_{0}^{2}+1/\overline{\Lambda}_{0}^{2})}\big(1+(\overline{\Lambda}_{0}\overline{\Lambda}'_{0}+(\overline{\Lambda}_{0}'^{2}+1/\overline{\Lambda}_{0}^{2})s)^{2}\big) \end{equation} where $\overline{\Lambda}_{0}=\overline{\Lambda}(0)$ and $\overline{\Lambda}_{0}'=\overline{\Lambda}'(0)$.
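To check (\ref{SOLLAM}), note that restricted to a unit-speed geodesic the equation $\nabla\nabla\overline{\Lambda}=\overline{\Lambda}^{-3}q$ gives $\overline{\Lambda}''=\overline{\Lambda}^{-3}$, for which $E:=\overline{\Lambda}'^{2}+1/\overline{\Lambda}^{2}$ is a first integral, as $E'=2\overline{\Lambda}'(\overline{\Lambda}''-\overline{\Lambda}^{-3})=0$. Hence \begin{equation} (\overline{\Lambda}^{2})''=2\overline{\Lambda}'^{2}+2\overline{\Lambda}\,\overline{\Lambda}''=2E, \end{equation} so $\overline{\Lambda}^{2}(s)=\overline{\Lambda}_{0}^{2}+2\overline{\Lambda}_{0}\overline{\Lambda}'_{0}\,s+Es^{2}$ with $E=\overline{\Lambda}_{0}'^{2}+1/\overline{\Lambda}_{0}^{2}$, which is precisely (\ref{SOLLAM}) after completing the square.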
We have thus the bound \begin{equation} \overline{\Lambda}^{2}(s)\geq\frac{1}{(|\nabla \overline{\Lambda}_{0}|^{2}+1/\overline{\Lambda}_{0}^{2})} \end{equation} where $|\nabla \overline{\Lambda}_{0}|=|\nabla\overline{\Lambda}|(0)$. This lower bound is achieved only at $s=\overline{\Lambda}_{0}|\nabla \overline{\Lambda}_{0}|/(|\nabla \overline{\Lambda}_{0}|^{2}+1/\overline{\Lambda}_{0}^{2})$ on the geodesic for which $\overline{\Lambda}_{0}'$ is least, i.e. when $\overline{\Lambda}_{0}'=-|\nabla \overline{\Lambda}_{0}|$. Therefore at the point $p$ where the minimum is achieved we have $\nabla\overline{\Lambda}(p)=0$. Hence, along any geodesic $\gamma(s)$ emanating from $p$, (i.e. $\gamma(0)=p$), we have \begin{equation} \overline{\Lambda}^{2}=\overline{\Lambda}^{2}_{0}\bigg(1+\frac{s^{2}}{\overline{\Lambda}_{0}^{4}}\bigg) \end{equation} Thus, near $p$ we can write \begin{equation} q=ds^{2}+\ell^{2}d\varphi^{2} \end{equation} with $\ell=\ell(s)$ satisfying \begin{equation} \ell''=-\kappa \ell=-\frac{3}{\overline{\Lambda}^{4}}\ell \end{equation} and with $\ell(0)=0$ and $\ell'(0)=1$. The solution is \begin{equation} \ell^{2}=\frac{s^{2}}{\big(1+s^{2}/\overline{\Lambda}_{0}^{4}\big)} \end{equation} recovering (\ref{CIGAR}) at least near $p$. It is simple to see that this $q$ indeed represents the metric all over $S$, which in turn must be diffeomorphic to $\mathbb{R}^{2}$. \end{proof} \subsubsection{The cigars as models near high-curvature points}\label{CIGNHCP} \begin{Lemma}\label{LOCMOD} Let $(S_{i};p_{i};q_{i},V_{i},U_{i})$ be a pointed sequence of metrically complete (reduced) static data sets all having the same $\Omega\neq 0$.
Suppose that \begin{equation} \dist_{q_{i}}(p_{i},\partial S_{i})\geq d_{0}>0 \end{equation} and that either \begin{equation} \kappa_{q_{i}}(p_{i})\rightarrow \infty,\quad{\rm or}\quad |\nabla V_{i}|_{q_{i}}(p_{i})\rightarrow \infty \end{equation} Then, there are scalings $(\hat{\lambda}_{i},\hat{\nu}_{i},\hat{\mu}_{i})$ such that the scaled sequence $(S_{i};p_{i};\hat{q}_{i},\hat{V}_{i},\hat{U}_{i})$ converges in $C^{\infty}$ and in the pointed sense to either a flat cylinder or a cigar with the same $\Omega$. \end{Lemma} {\it Notation}: To simplify notation inside the proof, we will write $\kappa_{i}$ for $\kappa_{q_{i}}$ and $|\nabla V_{i}|$ for $|\nabla V_{i}|_{q_{i}}$, (the index `$i$' is from the sequence and of course does not represent a scaling). \begin{proof} The proof is divided into several cases. {\it Case I}. Suppose that $|\nabla V_{i}|(p_{i})$ diverges but that $\kappa_{i}(p_{i})$ remains uniformly bounded. To start, we perform scalings $(\overline{\lambda}_{i},\overline{\nu}_{i},\overline{\mu}_{i})$ where \begin{equation} \overline{\lambda}_{i}=|\nabla V_{i}|(p_{i}),\quad \overline{\nu}_{i}=e^{-2V_{i}(p_{i})},\quad \overline{\mu}_{i}=-U_{i}(p_{i}). \end{equation} Let $(\overline{q}_{i},\overline{V}_{i},\overline{U}_{i})$ be the scaled variables. Observe that $\Omega$ scales to $\overline{\Omega}_{i}=\overline{\nu}_{i}\Omega/\overline{\lambda}_{i}$. We have \begin{equation}\label{LEQ} \overline{\Lambda}_{i}(p_{i})=1,\quad |\nabla \overline{\Lambda}_{i}|(p_{i})=1, \end{equation} where, recall, $\overline{\Lambda}_{i}=e^{\overline{V}_{i}}$. Consider now the three-dimensional static pointed data sets $(\Sigma_{i};o_{i};\overline{\hg}_{i},\overline{U}_{i})$ whose reductions are the $(S_{i};p_{i};\overline{q}_{i},\overline{V}_{i},\overline{U}_{i})$. The $o_{i}$ are points in $\Sigma_{i}$ projecting to the $p_{i}$. Let $\overline{\xi}_{i}$ be the scaling of $\xi_{i}$.
In this context the relations (\ref{LEQ}) read \begin{equation} |\overline{\xi}_{i}|(o_{i})=1,\quad |\nabla |\overline{\xi}_{i}||(o_{i})=1, \end{equation} where the norms are with respect to $\overline{\hg}_{i}$. Moreover, $\overline{\Omega}_{i}=\overline{\nu}_{i}\Omega/\overline{\lambda}_{i}\rightarrow 0$ because the $\overline{\nu}_{i}$ are bounded and the $\overline{\lambda}_{i}$ tend to infinity. Let us now study the convergence of the derivatives $(\overline{\nabla}\, \overline{\xi}_{i})(o_{i})$ of the Killings $\overline{\xi}_{i}$ at the points $o_{i}$. For notational simplicity we will remove for a moment the subindices `$i$' (but we keep them in mind). For the calculation we consider an $\overline{\hg}$-orthonormal basis $\{e_{1},e_{2},e_{3}\}$ around the point $o$, with $e_{3}(o)=\overline{\xi}(o)/|\overline{\xi}|(o)$ and $(\overline{\nabla}_{e_{i}}e_{j})(o)=0$. Then, using the relation $\overline{\Omega}=\overline{\epsilon}_{abc}\overline{\xi}^{a}\overline{\nabla}^{b}\overline{\xi}^{c}$ and the Killing condition $\overline{\nabla}_{a}\overline{\xi}_{b}+\overline{\nabla}_{b}\overline{\xi}_{a}=0$, the components of $\overline{\nabla}\, \overline{\xi}$ are computed as, \begin{align} & \langle \overline{\nabla}_{e_{j}}\overline{\xi}, e_{j}\rangle=0,\\ & \langle \overline{\nabla}_{e_{1}}\overline{\xi}, e_{2}\rangle=-\langle \overline{\nabla}_{e_{2}}\overline{\xi},e_{1}\rangle=\frac{\overline{\Omega}}{|\overline{\xi}|},\\ & \langle \overline{\nabla}_{e_{3}}\overline{\xi}, e_{j}\rangle=-\langle \overline{\nabla}_{e_{j}}\overline{\xi}, e_{3}\rangle=-\overline{\nabla}_{e_{j}}|\overline{\xi}|.
\end{align} If furthermore $e_{1}(o)$ and $e_{2}(o)$ are chosen such that $\overline{\nabla}_{e_{1}(o)}|\overline{\xi}|=1$ and $\overline{\nabla}_{e_{2}(o)}|\overline{\xi}|=0$ then, (restoring now the indexing `$i$'), the components $\langle\overline{\nabla}_{e_{j}}\overline{\xi}_{i},e_{k}\rangle(o_{i})$ are either zero or tend to zero as $i$ goes to infinity except for $\langle\overline{\nabla}_{e_{1}}\overline{\xi}_{i},e_{3}\rangle(o_{i})$ and $\langle \overline{\nabla}_{e_{3}}\overline{\xi}_{i},e_{1}\rangle(o_{i})$ that are constant and equal to one and minus one respectively. Now we observe that \begin{equation} \dist_{\overline{\hg}_{i}}(o_{i},\partial \Sigma_{i})=\overline{\lambda}_{i}\dist_{\hg_{i}}(o_{i},\partial \Sigma_{i})=\overline{\lambda}_{i}\dist_{q_{i}}(p_{i},\partial S_{i})\geq \overline{\lambda}_{i}d_{0}\rightarrow \infty. \end{equation} Therefore by Anderson's estimates, the curvature of the $\overline{\hg}_{i}$ over balls of centers $o_{i}$ and any fixed radius tends to zero. Hence, there are neighbourhoods $\mathcal{B}_{i}$ of $o_{i}$ and covers $\tilde{\mathcal{B}}_{i}$ such that the pointed sequence $(\tilde{\mathcal{B}}_{i};\tilde{o}_{i};\overline{\hg}_{i})$ converges in $C^{\infty}$ and in the pointed sense to the Euclidean three-space (we denote the cover metrics also by $\overline{\hg}_{i}$). We claim that the lifts of the Killing fields $\overline{\xi}_{i}$ to the $\tilde{\mathcal{B}}_{i}$, (which we will also denote by $\overline{\xi}_{i}$), converge in $C^{\infty}$ to the generator of a (non-trivial) rotation of $\mathbb{R}^{3}$. To see this recall first that any Killing field $\chi$ satisfies $\nabla_{a} \nabla_{b} \chi_{c}=-Rm_{bca}^{\ \ \ d}\chi_{d}$. Thus, at any point $x$ we can find $\overline{\xi}_{i}(x)$ by integrating a second order linear ODE along a geodesic that extends from $\gamma(0)=\tilde{o}_{i}$ to $x$, given the initial data $\overline{\xi}_{i}(\gamma(0))$ and $(\overline{\nabla}\,\overline{\xi}_{i})(\gamma(0))$.
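Spelled out, the ODE in question is the first-order linear system for the pair $(\overline{\xi}_{c},L_{bc}:=\overline{\nabla}_{b}\overline{\xi}_{c})$ along $\gamma$, \begin{equation} \frac{D}{ds}\overline{\xi}_{c}=\gamma'^{b}L_{bc},\qquad \frac{D}{ds}L_{bc}=-Rm_{bca}^{\ \ \ d}\gamma'^{a}\overline{\xi}_{d}, \end{equation} so $\overline{\xi}_{i}$ is determined on $\tilde{\mathcal{B}}_{i}$ by its $1$-jet at $\tilde{o}_{i}$, and the solutions depend continuously on the curvature and on the initial data.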
As shown earlier, the data $\overline{\xi}_{i}(\tilde{o}_{i})$ and $(\overline{\nabla}\overline{\xi}_{i})(\tilde{o}_{i})$ converge; hence so does $\overline{\xi}_{i}$, and the perpendicular distribution of the limit Killing field $\overline{\xi}_{\infty}$ is integrable because $\lim \overline{\Omega}_{i}=0$. Thus, $\overline{\xi}_{\infty}$ generates a rotation in $\mathbb{R}^{3}$. As $|\overline{\xi}_{\infty}|(\tilde{o}_{\infty})=1$ and $|\nabla |\overline{\xi}_{\infty}||(\tilde{o}_{\infty})=1$ it must be that $\tilde{o}_{\infty}$ is at a distance one from the rotational axis. In coordinates $(x,y,z)$ of $\mathbb{R}^{3}$ the limit vector field would be, (for instance), $x\partial_{y}-y\partial_{x}$ and the limit point would be, (for instance), $(1,0,0)$. This convergence of $\overline{\xi}_{i}$ to the generator of a rotation will be used in the following to extract two relevant pieces of information. First we show that inside the surfaces $S_{i}$ there are geodesic loops $\ell_{i}$, based at the points $p_{i}$, whose $\overline{q}_{i}$-lengths tend to zero. Let us see this. For $i$ large enough, the orbit of the Killing $\overline{\xi}_{i}$ inside $\tilde{\mathcal{B}}_{i}$ that starts at the point $\tilde{o}_{i}$ twists around an `axis' and comes very close to closing up into a circle when it again approaches the point $\tilde{o}_{i}$ (see Figure \ref{S11}). Hence, a small two-dimensional disc formed by short geodesic segments emanating perpendicularly to $\overline{\xi}_{i}(\tilde{o}_{i})$ at $\tilde{o}_{i}$ must intersect the orbit at a nearby point $\tilde{o}'_{i}$. Moreover the geodesic segment joining $\tilde{o}_{i}$ and $\tilde{o}'_{i}$ projects into a geodesic loop $\ell_{i}$ on $S_{i}$ based at $p_{i}$. The lengths of the loops $\ell_{i}$ clearly tend to zero as $i$ goes to infinity.
Second, for $i\geq i_{0}$ large enough, the norm of the Killings $\overline{\xi}_{i}$ over the balls $B_{\overline{\hg}_{i}}(o_{i},1/2)$ $\subset \tilde{\mathcal{B}}_{i}$ is bounded below by $1/4$. Hence, $\overline{\Lambda}_{i}$ is bounded below by $1/4$ over the balls $B_{\overline{q}_{i}}(p_{i},1/2)$ in $S_{i}$. More importantly, the Gaussian curvature $\overline{\kappa}_{i}$ is bounded above by $100\overline{\Omega}_{i}^{2}$ also on $B_{\overline{q}_{i}}(p_{i},1/2)$. \begin{figure}[h] \centering \includegraphics[width=9cm, height=5.5cm]{S11.pdf} \caption{} \label{S11} \end{figure} From these two facts we conclude that the geometry near the points $p_{i}$ is collapsing with bounded curvature. This implies that if we scale up $\overline{q}_{i}$ to make the injectivity radius at $p_{i}$ equal to one, then the new scaled spaces converge in the pointed sense to a flat cylinder. The composition of this last scaling and the one we performed first is the scaling $(\hat{\lambda}_{i},\hat{\nu}_{i},\hat{\mu}_{i})$ we were looking for. {\it Case II}. Suppose now that both $|\nabla V_{i}|(p_{i})$ and $\kappa_{i}(p_{i})$ are diverging. If the quotient $\kappa_{i}(p_{i})/|\nabla V_{i}|^{2}(p_{i})$ tends to zero, then we can perform a scaling $(\overline{\lambda}_{i},\overline{\nu}_{i},\overline{\mu}_{i})$ that leaves $\Omega$ invariant and that makes $\overline{\kappa}_{i}(p_{i})$ bounded and $|\nabla \overline{V}_{i}|(p_{i})$ diverging. We can then repeat the steps in {\it Case I} with $(\overline{q}_{i},\overline{V}_{i},\overline{U}_{i})$ instead of $(q_{i},V_{i},U_{i})$ to prove the Lemma in this case too. Assume therefore that the quotient $\kappa_{i}(p_{i})/|\nabla V_{i}|^{2}(p_{i})$ remains bounded.
Perform again a scaling $(\overline{\lambda}_{i}, \overline{\nu}_{i},\overline{\mu}_{i})$ that leaves $\Omega$ invariant and makes $\overline{\kappa}_{i}(p_{i})=1$, and therefore makes $|\nabla \overline{V}_{i}|(p_{i})$ bounded because $\kappa_{i}(p_{i})/|\nabla V_{i}|^{2}(p_{i})$ is invariant. Note that as $\dist_{\overline{q}_{i}}(p_{i},\partial S_{i})\rightarrow \infty$, the estimate (\ref{UDECESTIM}) imposes that $|\nabla \overline{U}_{i}|$ must tend uniformly to zero over balls of centers $p_{i}$ and fixed but arbitrary radius. We claim that the curvature $\overline{\kappa}_{i}$ remains uniformly bounded on balls of centers $p_{i}$ and fixed radius. Let $L>0$, let $x$ be a point in $B_{\overline{q}_{i}}(p_{i},L)$ and let $\gamma(s)$ be a length-minimising geodesic joining $p_{i}$ to $x$. Let $\overline{\Lambda}_{i}(s)=\overline{\Lambda}_{i}(\gamma(s))$. Then, the value of $\overline{\Lambda}_{i}$ at $x$ is found by solving the second order ODE \begin{equation} \overline{\Lambda}''_{i}=\frac{\Omega^{2}}{4\overline{\Lambda}_{i}^{3}}+(|\nabla \overline{U}_{i}|^{2}-2\overline{U}_{i}'^{2})\overline{\Lambda}_{i} \end{equation} subject to the initial data $\overline{\Lambda}_{i}(0)=\overline{\Lambda}_{i}(\gamma(0))$ and $\overline{\Lambda}'_{i}(0)=\nabla_{\gamma'(0)}\overline{\Lambda}_{i}$, and evaluating at $s=\dist_{\overline{q}_{i}}(x,p_{i})$. If $\nabla \overline{U}_{i}$ were identically zero then the solutions would be exactly (\ref{SOLLAM}) and we would have the bound \begin{equation} \overline{\Lambda}_{i}^{2}(s)\geq \frac{1}{(\overline{\Lambda}{'}_{i}(0))^{2}+1/(\overline{\Lambda}_{i}(0))^{2}} \end{equation} for all $s\geq 0$. In particular, if $\overline{\Lambda}_{i}(0)$ is bounded below by $A$ and $|\overline{\Lambda}_{i}'(0)|$ is bounded above by $B$ then $\overline{\Lambda}_{i}(s)$ is bounded below by $\sqrt{1/(B^{2}+1/A^{2})}$.
But as $|\nabla \overline{U}_{i}|$ tends to zero uniformly over balls of radius $L$, the solutions to the ODE tend to (\ref{SOLLAM}) with initial data $\overline{\Lambda}_{i}(0)$ and $\overline{\Lambda}'_{i}(0)$. Now, as $\overline{\kappa}_{i}(p_{i})=1$ and $|\nabla \overline{V}_{i}|(p_{i})$ is bounded, there are constants $A$ and $B$ such that \begin{equation} \overline{\Lambda}_{i}(0)\geq A,\quad {\rm and}\quad |\overline{\Lambda}'_{i}(0)|\leq B \end{equation} regardless of the geodesic $\gamma$. Therefore if $i\geq i_{0}(L)$ is big enough then $\overline{\Lambda}_{i}(x)\geq \frac{1}{2}\sqrt{1/(B^{2}+1/A^{2})}$. Hence, $\overline{\kappa}_{i}\leq 32\,\overline{\Omega}_{i}^{2}(B^{2}+1/A^{2})^{2}$ everywhere on $B_{\overline{q}_{i}}(p_{i},L)$. The bound we proved for the curvature implies that if for a certain subsequence the injectivity radius at the points $p_{i}$ tends to zero then there are finite covers that converge to a cigar. But this is impossible because the cigars do not admit any non-trivial quotient. Hence the injectivity radius remains bounded away from zero and the pointed sequence $(S_{i};p_{i};\overline{q}_{i},\overline{V}_{i},\overline{U}_{i})$ must sub-converge in the pointed sense to a solution with $U$ constant. By uniqueness it must be a cigar, and we are done. \end{proof} Let us make an extra observation about a construction made inside the proof. Recall that the spaces $(\tilde{\mathcal{B}}_{i},\overline{\hg}_{i})$ converge to $\mathbb{R}^{3}$ and the Killings $\overline{\xi}_{i}$ converge to the generator of a rotation. Let $z_{i}$ be points where $(\nabla |\overline{\xi}_{i}|)(z_{i})=0$. One can think of these points as lying on the `axis' of rotation. Naturally, if we quotient the balls of centers $z_{i}$ and radius two, we obtain a two-disc. This disc projects into a `cup' on $S_{i}$ containing $p_{i}$ (see Figure \ref{S11}). In the metric $q_{i}$, the `radius' of this cup (i.e. the maximum distance from a point to the boundary) goes to zero.
Lemma \ref{LOCMOD} provides models for the scaled geometry near points of high curvature or high $V$-gradient, but it does not say how such points affect the unscaled geometry nearby. This is important information that we will need later. In rough terms, what occurs is that at any finite distance from such a point the (unscaled) geometry becomes one-dimensional, much like a highly scaled-down cigar. The next Lemma \ref{66} explains the phenomenon. In a few words, it describes what the geometry looks like near geodesics that join points of high curvature or high $V$-gradient to the boundary of the surfaces $S_{i}$. This basic information will be sufficient to extract conclusions later. The scaled geometry around points on such geodesics will be modelled essentially on regions of the cigar whose curvature at the origin is conventionally $\kappa_{0}=3(2\pi)^{2}$ and whose metric is therefore \begin{equation} q_{0}=\frac{1}{(2\pi)^{2}}\big(dr^{2}+\frac{r^{2}}{1+r^{2}}d\varphi^{2}\big) \end{equation} where $r\geq 0$. Let us describe the models more explicitly. A pointed space $(\{0\leq r\leq 40\};x;q_{0})$, where $x$ is a point in this cigar with $r(x)\leq 25$, is a model of type ${\rm Ci}$ (from `cigar'). A pointed space $(\{r(x)-10\leq r\leq r(x)+10\};x;q_{0})$, where $x$ is a point with $r(x)>25$, is a model of type ${\rm Cy}$ (from `cylinder'). Figure \ref{S12} sketches these two types of models. \begin{figure}[h] \centering \includegraphics[width=9cm, height=5cm]{S12.pdf} \caption{} \label{S12} \end{figure} \begin{Lemma}\label{66} Let $(S_{i};p_{i};q_{i},V_{i},U_{i})$ be a pointed sequence of metrically complete (reduced) static data sets all having the same $\Omega\neq 0$ and suppose that \begin{equation} \dist_{q_{i}}(p_{i},\partial S_{i})\geq d_{0}>0 \end{equation} and that either \begin{equation} \kappa_{q_{i}}(p_{i})\rightarrow \infty,\quad{\rm or}\quad |\nabla V_{i}|_{q_{i}}(p_{i})\rightarrow \infty.
\end{equation} For every $i$ let $\gamma_{i}$ be a geodesic segment joining $p_{i}$ to $\partial S_{i}$ and minimising the distance between them (if $\partial S_{i}=\emptyset$ let $\gamma_{i}$ be an infinite ray). Fix a positive $d_{1}$ less than $d_{0}$. Then, for every $k\geq 1$, $\epsilon>0$ there exists $i_{0}$ such that for any $i\geq i_{0}$ and for any $x_{i}\in \gamma_{i}$ with $\dist_{q_{i}}(x_{i}, p_{i})\leq d_{1}$ there exists a neighbourhood $\mathcal{B}_{i}$ of $x_{i}$ and a scaled metric $\overline{q}_{i}=\overline{\lambda}_{i}^{2}q_{i}$ such that $(\mathcal{B}_{i};x_{i}; \overline{q}_{i})$ is $\epsilon$-close in $C^{k}$ to either a model space ${\rm Ci}$ or a model space ${\rm Cy}$. \end{Lemma} {\it Notation}: Again to simplify notation inside the proof, we will use the notation $\kappa_{i}$ for $\kappa_{q_{i}}$ and $|\nabla V_{i}|$ for $|\nabla V_{i}|_{q_{i}}$. \begin{proof} Essentially half of the work has already been done in Lemma \ref{LOCMOD}, because the geometry near points of high curvature or high $V$-gradient is modelled locally (at the right scale) by a space Ci or a space Cy. We say this formally as follows: given $\epsilon>0$ and $k\geq 1$ there are $K_{0}>0$ and $i_{1}>0$ such that for any $i\geq i_{1}$ and $x_{i}\in \gamma_{i}$ with $\dist_{i}(x_{i},p_{i})\leq d_{1}$ and either $\kappa_{i}(x_{i})\geq K_{0}$ or $|\nabla V_{i}|(x_{i})\geq K_{0}$, the conclusions of the Lemma hold. Thus, it is left to show that the conclusions hold too for points on $\gamma_{i}$ that do not have `high' curvature or high gradient, that is, those for which $\kappa_{i}(x_{i})\leq K_{0}$ and $|\nabla V_{i}|(x_{i})\leq K_{0}$. We prove that in what follows. We will show that there is $i_{2}\geq i_{1}$ such that for any $i\geq i_{2}$ and for any $x_{i}\in \gamma_{i}$ with $\dist_{i}(x_{i},p_{i})\leq d_{1}$, $\kappa_{i}(x_{i})\leq K_{0}$ and $|\nabla V_{i}|(x_{i})\leq K_{0}$, the conclusion of the Lemma also holds and the local model is of type Cy.
Given $i$, let $x_{i}\in \gamma_{i}$ be a point such that $\dist_{i}(x_{i},p_{i})\leq d_{1}$, $\kappa_{i}(x_{i})\leq K_{0}$ and $|\nabla V_{i}|(x_{i})\leq K_{0}$. We begin by claiming that there are $r_{0}<(d_{0}-d_{1})/2$ and $K_{1}>0$ independent of $i$ such that $\kappa_{i}(x)\leq K_{1}$ for all $x\in B_{q_{i}}(x_{i},r_{0})$. Let $r_{0}$ be any number less than $(d_{0}-d_{1})/2$ and let $x$ be a point such that $\dist(x,x_{i})\leq r_{0}$. Let $\alpha_{i}(s)$ be a length minimising geodesic joining $x_{i}$ to $x$ ($\alpha_{i}(0)=x_{i}$). Denote $V_{i}(s):=V_{i}(\alpha_{i}(s))$. Let, \begin{equation} \hat{V}_{i}(s)=V_{i}(s)-V_{i}(0) \end{equation} Then we have, \begin{equation}\label{INITIALDATAC} \hat{V}_{i}(0)=0,\quad {\rm and}\quad |\hat{V}_{i}'(0)|\leq K_{0} \end{equation} where the first equation is by the definition of $\hat{V}_{i}$ and the second follows by assumption. On the other hand $\hat{V}_{i}(s)$ satisfies the differential equation (\ref{ES1}), namely, \begin{equation}\label{ODEFORV} \hat{V}_{i}''+\hat{V}_{i}{'}^{2}=\big(\frac{1}{4}\Omega^{2}e^{-4V_{i}(0)}\big)e^{-4\hat{V}_{i}}+(|\nabla U|^{2}-2U'^{2}) \end{equation} where the last expression in parenthesis is evaluated of course on $\alpha_{i}(s)$. Let us make two comments on this equation. First, the coefficient $\Omega^{2}e^{-4V_{i}(0)}/4$ is less than or equal to $\kappa_{i}(x_{i})$ and thus less than or equal to $K_{0}$ by assumption. Second, the summand $(|\nabla U|^{2}-2U'^{2})(s)$ is uniformly bounded, say by $K_{2}>0$, independently of $s$, $x$, $x_{i}$ and $i$.
This follows from the estimate (\ref{UEST}) and $\dist_{q_{i}}(\alpha_{i}(s),\partial S_{i})\geq (d_{0}-d_{1})/2$; this last inequality is due to, \begin{equation} \dist_{q_{i}}(\alpha_{i}(s),\partial S_{i})\geq \dist_{q_{i}}(x_{i},\partial S_{i})-\dist_{q_{i}}(\alpha_{i}(s),x_{i}) \end{equation} and the inequalities $\dist_{q_{i}}(x_{i},\partial S_{i})\geq (d_{0}-d_{1})$ and $\dist_{q_{i}}(\alpha_{i}(s),x_{i})\leq \dist_{q_{i}}(x,x_{i})\leq (d_{0}-d_{1})/2$. Until now we have shown control on the ODE (\ref{ODEFORV}) and the initial data (\ref{INITIALDATAC}). Therefore by standard ODE analysis, it follows that one can choose $r_{0}$ small enough such that $|\hat{V}_{i}(s)|\leq K_{3}$, (i.e. preventing blow-up), for a $K_{3}$ independent of $s$, $x$, $x_{i}$ and $i$. This bound on $V_{i}(x)$ (we removed the hat now) and the bound on $|\nabla U|^{2}(x)$ give the desired bound on $\kappa_{i}(x)$. We have proved a curvature bound $\kappa_{i}(x)\leq K_{1}$ for all $x\in B_{q_{i}}(x_{i},r_{0})$. Using this bound we are going to show that the injectivity radius at $x_{i}$, namely ${\rm inj}_{q_{i}}(x_{i})$, tends to zero as $i$ tends to infinity. Indeed, if on the contrary ${\rm inj}_{q_{i}}(x_{i})\geq r_{1}>0$ for some $r_{1}>0$, then because the curvature is bounded on $B_{q_{i}}(x_{i}, r_{0})$, there is $v>0$ and $r_{2}\leq \min\{r_{0},r_{1}\}/2$ such that the area of the ball $B_{q_{i}}(x_{i},r_{2})$ is at least $v$. As $B_{q_{i}}(x_{i},r_{2})\subset B_{q_{i}}(p_{i},d_{0})$ we have \begin{equation} \frac{A_{i}(B_{q_{i}}(p_{i},d_{0}))}{d_{0}^{2}}\geq \frac{v}{d_{0}^{2}} \end{equation} On the other hand observe that by Lemma \ref{LOCMOD} the geometry near the points $p_{i}$ is locally collapsing (at the right scale) to a line or to half a line.
Thus, there are $i_{3}$ and a sequence $\delta_{i}\rightarrow 0$ such that for $i\geq i_{3}$ the quotient \begin{equation} \frac{A_{i}(B_{q_{i}}(p_{i},\delta_{i}))}{\delta_{i}^{2}} \end{equation} is less than or equal to $v/(2d_{0}^{2})$ (in fact the quotient tends to zero). But by Bishop-Gromov the function \begin{equation} s\rightarrow \frac{A_{i}(B_{q_{i}}(p_{i},s))}{s^{2}} \end{equation} is monotonically decreasing and therefore we should have \begin{equation} \frac{v}{2d_{0}^{2}}\geq \frac{A_{i}(B_{q_{i}}(p_{i},d_{0}))}{d_{0}^{2}}\geq \frac{v}{d_{0}^{2}} \end{equation} which is impossible. Thus the injectivity radius at $x_{i}$ tends to zero. Therefore the balls $B_{q_{i}}(x_{i},r_{0})$ collapse with bounded curvature and the existence of a scaling whose limit is a cylinder (Cy) is now direct. \end{proof} Lemma \ref{66} gives a local model for the collapsed geometry around points on the geodesics $\gamma_{i}$. The concatenation of the local models provides a global picture that is summarised in the next corollary (whose proof is now direct), see Figure \ref{S13}. \begin{figure}[h] \centering \includegraphics[width=7cm, height=5cm]{S13.pdf} \caption{} \label{S13} \end{figure} \begin{Corollary}\label{SINGMODELUNDER} Let $(S_{i};p_{i};q_{i},V_{i},U_{i})$ be a pointed sequence of metrically complete (reduced) static data sets all having the same $\Omega\neq 0$ and suppose that \begin{equation} \dist_{q_{i}}(p_{i},\partial S_{i})\geq d_{0}>0 \end{equation} and that either \begin{equation} \kappa_{q_{i}}(p_{i})\rightarrow \infty,\quad{\rm or}\quad |\nabla V_{i}|_{q_{i}}(p_{i})\rightarrow \infty. \end{equation} For every $i$ let $\gamma_{i}$ be a geodesic segment joining $p_{i}$ to $\partial S_{i}$ and minimising the distance between them (if $\partial S_{i}=\emptyset$ let $\gamma_{i}$ be an infinite ray). Fix a positive $d_{1}$ less than $d_{0}$.
Then there is $i_{0}$ such that for any $i\geq i_{0}$ there is a neighbourhood $\mathcal{B}_{i}$ of the ball $B_{q_{i}}(p_{i},d_{1})$, diffeomorphic to a disc and metrically collapsing to a segment of length $d_{1}$ as $i$ goes to infinity. \end{Corollary} \subsection{Decay of the fields at infinity and asymptotic topology}\label{DFIAT} We know already that the gradient of $U$ decays quadratically at infinity. In this section we show that the gradient of $V$ and the Gaussian curvature $\kappa$ also decay quadratically. The proof depends on whether $\Omega$ is zero or not. The case $\Omega=0$ is simple and relies only on the techniques \`a la Bakry-\'Emery used earlier. As a by-product we re-prove the quadratic decay of the gradient of $U$, valid whether $\Omega=0$ or not. When $\Omega\neq 0$, the proof requires the use of Corollary \ref{SINGMODELUNDER}. \subsubsection{Case $\Omega=0$}\label{DAIFS} \begin{Proposition}\label{REDCUR} There is a constant $\eta>0$ such that for every metrically complete (reduced) static data set we have \begin{equation}\label{DEC1} |\nabla U|^{2}(p)\leq\frac{\eta}{\dist^{2}(p,\partial \qM)}. \end{equation} Moreover when $\Omega=0$ we have \begin{equation}\label{DEC2} |\nabla V|^{2}(p)\leq\frac{\eta}{\dist^{2}(p,\partial \qM)}, \end{equation} hence also \begin{equation} \gcur(p) \leq\frac{\eta}{\dist^{2}(p,\partial \qM)}. \end{equation} \end{Proposition} \vspace{0.2cm} \begin{proof} Write (\ref{ES1}) as \begin{equation} Ric_{f}^{\alpha}=\frac{1}{2}\Omega^{2}e^{-4V}q+2\nabla U\nabla U\geq 0 \end{equation} with $f=-V$, $\alpha=1$, and recall from (\ref{ES3}) that $\Delta_{f}U=0$. Then, using (\ref{BOCHNERF}) with $\psi=U$ we obtain \begin{equation} \Delta_{f} |\nabla U|^{2}\geq 4|\nabla U|^{4} \end{equation} and hence (\ref{DEC1}) by Lemma \ref{LEMMAME}.
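For the reader's convenience we indicate the computation, assuming that (\ref{BOCHNERF}) is the weighted Bochner identity \begin{equation} \Delta_{f}|\nabla \psi|^{2}=2|\nabla\nabla \psi|^{2}+2\langle \nabla \psi,\nabla \Delta_{f}\psi\rangle+2(Ric+\nabla\nabla f)(\nabla \psi,\nabla \psi), \end{equation} with the convention $Ric_{f}^{\alpha}=Ric+\nabla\nabla f-\alpha\, df\otimes df$. With $\psi=U$ the middle term vanishes because $\Delta_{f}U=0$, while \begin{equation} (Ric+\nabla\nabla f)(\nabla U,\nabla U)=Ric_{f}^{1}(\nabla U,\nabla U)+\langle \nabla f,\nabla U\rangle^{2}\geq 2|\nabla U|^{4}, \end{equation} so the inequality follows after discarding the (non-negative) Hessian term.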
Similarly, if $\Omega=0$ we have $\Delta_{f}V=0$ and using (\ref{BOCHNERF}) again but with $\psi=V$ we obtain \begin{equation} \Delta_{f} |\nabla V|^{2}\geq 2|\nabla V|^{4} \end{equation} and hence (\ref{DEC2}) by Lemma \ref{LEMMAME}. \end{proof} The next proposition describes in simple form the asymptotic topology of data sets $(\qM;q,U,V)$ when $\Omega=0$. Observe, however, that we require $\partial \qM$ compact and of course $(\qM;q)$ metrically complete. \begin{Proposition}\label{SUPO} Let $(\qM;q,U,V)$ be a metrically complete (reduced) static data set with $\Omega=0$, $\qM$ non-compact and $\partial \qM$ compact. Then there is a set $K$ with compact closure, such that \begin{equation} \qM=K\cup\big(\cup_{i=1}^{i=n} E_{i}\big) \end{equation} where every $E_{i}$ is diffeomorphic to $[0,\infty)\times \Sa$. \end{Proposition} \begin{proof} First we observe that as $\kappa\geq 0$, the ball covering property holds (indeed regardless of whether $\Omega=0$ or not). Hence, $\mathcal{S}$ has a finite number of ends. In particular we can write $\mathcal{S}$ as the union of a set with compact closure and a finite number of surfaces $E_{i}$, $i=1,\ldots,n$, each with compact boundary and containing only one end. It is sufficient to work with the surfaces $E_{i}$, which we denote generically by $E$. By Bishop-Gromov we have $\frac{A(B(\partial E,r))}{r^{2}}\searrow\ \mu$. The analysis depends on whether $\mu=0$ or $\mu>0$. {\it Case $\mu=0$.} Let $\gamma$ be a ray from $\partial E$ and let $p_{i}\in \gamma$ with $r(p_{i})=r_{i}=2^{i}$, for $i=0,1,2,\ldots$. If $\mu=0$, then the sequence of annuli $(\mathcal{A}^{c}_{r_i}(p_{i},1/4,4);q_{r_i})$ collapses in volume (in area) with bounded curvature. As we have explained earlier, this type of collapse is only through thin (finite) cylinders. Thus, (outside a compact set) $E$ is formed by an infinite concatenation of finite cylinders, (i.e. each diffeomorphic to $[0,1]\times {\rm S}^{1}$).
{\it Case $\mu>0$.} As $\gcur\geq 0$ and $\gcur$ has quadratic decay, if $\mu>0$ then $(E;q)$ is asymptotic to a flat cone $(\mathcal{C};q_{\mu})$ where \begin{equation}\label{FLATANN} \mathcal{C}:=\mathbb{R}^{2}\setminus \{(0,0)\},\qquad q_{\mu}=dr^{2}+4\mu^{2}r^{2}d\varphi^{2} \end{equation} ($r$ is the radius and $\varphi$ is the polar angle in $\mathbb{R}^{2}$). It then follows that, outside a set with compact closure, $E$ is diffeomorphic to $[0,\infty)\times \Sa$ as wished. \end{proof} \subsubsection{Case $\Omega\neq 0$} The following lemma is the analogue of Proposition \ref{REDCUR} for the case $\Omega\neq 0$. Note however that, contrary to the case $\Omega=0$, we assume that $\partial \qM$ is compact. We do not know whether this condition can be removed. \begin{Lemma}\label{SUPO2} Let $(\qM;q,U,V)$ be a metrically complete (reduced) static data set with $\Omega\neq 0$, $\qM$ non-compact and $\partial \qM$ compact. Then, \begin{align} \label{ONZD3} |\nabla U|^{2}(p)\leq \frac{\eta}{\dist^{2}(p,\partial S)}, \qquad |\nabla V|^{2}(p)\leq \frac{\eta}{\dist^{2}(p,\partial S)}, \end{align} and, \begin{equation} \kappa(p)\leq \frac{\eta}{\dist^{2}(p,\partial S)} \end{equation} where $\eta>0$ is independent of the data. In particular \begin{equation} \Lambda^{2}(p)\geq \eta'|\Omega|\, \dist(p,\partial S) \end{equation} where $\eta'>0$ is also independent of the data. \end{Lemma} \begin{proof} The proof requires using Corollary \ref{SINGMODELUNDER}. Without loss of generality assume that $S$ is an end. Let $\gamma$ be a ray from $\partial S$. For every $j\geq 0$ let $r_{j}=2^{2j}$ and let $p_{j}\in \gamma$ be such that $\dist(p_{j},\partial S)=r_{j}$. The first goal will be to prove that $\kappa$ and $|\nabla V|^{2}$ decay quadratically along the union of annuli $\cup_{j\geq 0} \mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8)$. We will prove later that this union covers $\gamma$ except for a finite segment of it (a priori that may not be the case).
Let $x_{j_{i}}$ be any sequence of points such that $x_{j_{i}}\in \mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/8,8)$ for every $i\geq 0$. Each $x_{j_{i}}$ can be joined to $p_{j_{i}}$ through a continuous curve $\alpha_{i}$ lying entirely inside the annulus $\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/8,8)$. Concatenating $\alpha_{i}$ with the part of $\gamma$ extending from $p_{j_{i}}$ to infinity, we obtain a curve, say $\hat{\alpha}_{i}$, extending from $x_{j_{i}}$ to infinity and never entering the ball $B(\partial S,1/8)$, namely keeping at a $q_{r_{j_{i}}}$-distance of at least $1/8$ from $\partial S$. We will use the existence of this curve below to reach a contradiction. Suppose now that either, \begin{equation}\label{DIVERGENCEP} \kappa(x_{j_{i}})\dist^{2}(x_{j_{i}},\partial S)\rightarrow \infty,\quad {\rm or}\quad |\nabla V|^{2}(x_{j_{i}})\dist^{2}(x_{j_{i}},\partial S)\rightarrow \infty. \end{equation} We perform a sequence of scalings $(\lambda_{i},\nu_{i},\mu_{i})=(r_{j_{i}},r_{j_{i}},0)$ leading to the new fields, \begin{equation} q\rightarrow q_{i}=\frac{1}{r_{j_{i}}^{2}}q,\quad V\rightarrow V_{i}=V+\frac{1}{2}\ln r_{j_{i}},\quad U\rightarrow U_{i}=U \end{equation} With this scaling we then obtain a sequence of reduced data $(S;q_{i},V_{i},U_{i})$ all having the same $\Omega$ (recall $\Omega\rightarrow \Omega_{i}=(\nu_{i}/\lambda_{i})\Omega=\Omega$). At the same time we have $1/8\leq \dist_{i}(x_{j_{i}},\partial S)\leq 8$. Because of this, we can rewrite (\ref{DIVERGENCEP}) as, \begin{equation} \kappa_{i}(x_{j_{i}})\rightarrow \infty,\quad {\rm or}\quad |\nabla V_{i}|_{i}^{2}(x_{j_{i}})\rightarrow \infty, \end{equation} (where $\kappa_{i}=\kappa_{q_{i}}$ and $|\nabla V_{i}|_{i}=|\nabla V_{i}|_{q_{i}}$). Taking a subsequence if necessary we can assume that $\dist_{i}(x_{j_{i}},\partial S)\rightarrow d_{*}$ (where $\dist_{i}=\dist_{q_{i}}$). We are clearly in the hypotheses of Corollary \ref{SINGMODELUNDER}.
Choosing $d_{1}$ (see the hypotheses of Corollary \ref{SINGMODELUNDER}) as $d_{1}=d_{*}+(d_{*}-1/8)/2$, we conclude that there is a sequence of neighbourhoods $\mathcal{B}_{i}$ containing $B_{i}(x_{j_{i}},d_{1})$ such that $(\mathcal{B}_{i};q_{i})$ metrically collapses to a segment of length $d_{1}$ (where $B_{i}=B_{q_{i}}$). The neighbourhood $\mathcal{B}_{i}$ essentially wraps around a geodesic $\beta_{i}$ joining $x_{j_{i}}$ and $\partial S$ and minimising the distance between them, `covering' the part of it at a distance less than or equal to $d_{1}$ from $x_{j_{i}}$. Hence, for $i$ large enough, the boundary of $\mathcal{B}_{i}$ is inside the ball $B_{i}(\partial S,1/8)$. Therefore for $i$ large enough, the curve $\hat{\alpha}_{i}$ must enter $B_{i}(\partial S,1/8)$ before going to infinity. We thus reach a contradiction. We have then that for each $j$, the scaled curvature $\kappa_{r_{j}}$ is bounded on each of the annuli $\mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8)$. Consider the areas $A_{r_{j}}$ of the annuli $\mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8)$ with respect to $q_{r_{j}}$. If the $A_{r_{j}}$ tend to zero then the annuli $(\mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8),q_{r_{j}})$ collapse with bounded curvature and thus become thinner and thinner finite cylinders. The end $S$ is then (except for a set of compact closure) a concatenation of the annuli $\mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8)$, and the quadratic curvature decay on the whole end follows, as does the quadratic decay of $|\nabla V|^{2}$. If instead a sequence $A_{r_{j_{i}}}$ of the areas is bounded below away from zero then, due to the Bishop-Gromov monotonicity $A(B(\partial S,r))/r^{2}\searrow$ and the curvature bound, the geometry of the annuli $(\mathcal{A}^{c}_{r_{j}}(p_{j};1/8,8); q_{r_{j}})$ becomes more and more that of a flat annulus.
Once a piece sufficiently close to a flat annulus forms, the whole end must be asymptotic to a flat annulus (for a detailed proof in dimension three see \cite{MR3233267}). Again, the quadratic decay of $\kappa$ and $|\nabla V|^{2}$ on the whole end follows. \end{proof} The following version of Proposition \ref{SUPO}, now for $\Omega\neq 0$, is straightforward after Proposition \ref{SUPO2} and the proof of Proposition \ref{SUPO} itself. \begin{Proposition}\label{SUPO3} Let $(\qM;q,U,V)$ be a metrically complete (reduced) static data set with $\Omega\neq0$, $\qM$ non-compact and $\partial \qM$ compact. Then there is a set $K$ with compact closure, such that \begin{equation} \qM=K\cup\big(\cup_{i=1}^{i=n} E_{i}\big) \end{equation} where every end $E_{i}$ is diffeomorphic to $[0,\infty)\times \Sa$. \end{Proposition} Taking into account the previous proposition, we say that $(E;q,U,V)$ is a (reduced) static end if $E\sim [0,\infty)\times \Sa$ and $(E;q)$ is metrically complete. \vspace{0.2cm} From the description of the asymptotic geometry of (reduced) static ends $(E;q,U,V)$, ($E\sim [0,\infty)\times \Sa$), we can easily find a simple end cut $\{\ell_{j};j=1,2,\ldots\}$. Each $\ell_{j}$ is of course isotopic to $\partial E$ and embedded in $\mathcal{A}(2^{1+2j},2^{2+2j})$. Let us be a bit more precise. Let $r_{j}=2^{1+2j}$ and as usual let $q_{r_{j}}=q/r_{j}^{2}$. If $\mu=0$ then the annuli $(\mathcal{A}_{r_{j}}(1,2);q_{r_{j}})$ metrically collapse to the segment $[1,2]$ and therefore the loops $\ell_{j}$ can be chosen to have $q_{r_{j}}$-length tending to zero. If instead $\mu>0$ then the loops can be chosen to converge to the circle $\{x=3/2\}$, as the annuli $(\mathcal{A}_{r_{j}}(1,2);q_{r_{j}})$ converge to the annulus $([1,2]\times \Sa;dx^{2}+4\mu^{2}x^{2}d\varphi^{2})$ as explained earlier. Let $\Sigma$ be the three-manifold whose quotient by the $\Sa$-Killing field is $E$. Let $\pi:\Sigma\rightarrow E$ be the projection.
The tori $S_{j}:=\pi^{-1}(\ell_{j})$ obviously form a simple cut of $(\Sigma;\hg)$. Let us state this in a proposition that will be recalled later. \begin{Proposition}\label{SIMPLECUTUS} Let $(\Sigma;\hg,U)$ be a free $\Sa$-symmetric metrically complete static data set such that the reduced data set $(E;q,U,V)$ is a reduced end. Then, $\Sigma$ and $E$ admit simple cuts. \end{Proposition} The next proposition shows that $U$ tends uniformly to a constant $U_{\infty}$ on any (reduced) static end $(E;q,U,V)$. The constant $U_{\infty}$ satisfies $-\infty\leq U_{\infty}\leq \infty$. The proposition will be used in Section \ref{FTKASS}. \begin{Proposition}\label{LPRO} Let $(E; \hqg,U,V)$ be a metrically complete reduced static end. Then, $U\rightarrow U_{\infty}$, where the arrow signifies uniform convergence and the constant $U_{\infty}$ satisfies $-\infty\leq U_{\infty}\leq \infty$. \end{Proposition} \begin{proof} Note that the maximum principle is also applicable to $U$ because (\ref{ES3}) can be written as ${\rm div}(e^{V}\nabla U)=0$. We will use this several times below. Let $\{\ell_{j},j=0,1,2,\ldots\}$ be a simple cut of $E$ as described above. Let $r_{j}=2^{1+2j}$. Assume that $\mu=0$. Then, as noted above, the $q_{r_{j}}$-length of the loops $\ell_{j}$ tends to zero. At the same time the norm $|\nabla U|_{r_{j}}$ restricted to the loops $\ell_{j}$ remains uniformly bounded. Therefore, by a simple integration along the $\ell_{j}$, we deduce that \begin{equation}\label{ONCEMORE} (\max\{U(q):q\in \ell_{j}\}-\min\{U(q):q\in \ell_{j}\})\rightarrow 0 \end{equation} If instead $\mu>0$ then the $q_{r_{j}}$-length of the loops $\ell_{j}$ remains uniformly bounded while the norm $|\nabla U|_{r_{j}}$, over the loops $\ell_{j}$, tends to zero. So by a simple integration along the loops $\ell_{j}$ we deduce again (\ref{ONCEMORE}). Now suppose that for a certain sequence $p_{i}\in \ell_{j_{i}}$, $U(p_{i})$ tends to a constant $-\infty\leq U_{\infty}\leq \infty$.
Then by (\ref{ONCEMORE}), the maximum and the minimum of $U$ over $\ell_{j_{i}}$ also tend to $U_{\infty}$. We now use the maximum principle to write, for any $i<i'$,
\begin{align}
\max\{U(q):q\in \ell_{j_{i}}\cup \ell_{j_{i'}}\} &\geq \max\{U(q):q\in \mathcal{L}_{j_{i},j_{i'}}\}\\
&\geq \min\{U(q):q\in \mathcal{L}_{j_{i},j_{i'}}\}\\
&\geq \min\{U(q):q\in \ell_{j_{i}}\cup \ell_{j_{i'}}\}
\end{align}
where $\mathcal{L}_{j_{i},j_{i'}}$ is the compact region enclosed by $\ell_{j_{i}}$ and $\ell_{j_{i'}}$. Letting $i'$ tend to infinity we deduce,
\begin{align}
\label{COSAP1}\max\{\max\{U(q):q\in \ell_{j_{i}}\},U_{\infty}\} &\geq \max\{U(q):q\in \mathcal{L}_{j_{i},\infty}\}\\
\label{COSAP2}&\geq \min\{U(q):q\in \mathcal{L}_{j_{i},\infty}\}\geq \min\{\min\{U(q):q\in \ell_{j_{i}}\},U_{\infty}\}
\end{align}
where $\mathcal{L}_{j_{i},\infty}$ is the region enclosed by $\ell_{j_{i}}$ and infinity. As the left hand side of (\ref{COSAP1}) and the right hand side of (\ref{COSAP2}) tend to $U_{\infty}$, $U$ must also tend uniformly to $U_{\infty}$. \end{proof} \subsection{Reduced data sets arising as collapsed limits}\label{RDSACL} In this last subsection on $\Sa$-symmetric data sets it is worth discussing the geometry of reduced data arising as scaled limits of data sets. This discussion will be recalled later in Section \ref{POKA}, where we prove that the asymptotic of static black hole data sets with sub-cubic volume growth is Kasner. Let $(\Sigma;\hg,U)$ be a data set, and let $\gamma$ be a ray from $\partial \Sigma$. Let $p_{n}\in \gamma$ be a divergent sequence of points.
Suppose there are neighbourhoods $\mathcal{B}_{n}$ of $\mathcal{A}^{c}_{r_{n}}(p_{n};1/2,2)$ such that $(\mathcal{B}_{n};\hg_{r_{n}})$ collapses to a two-dimensional orbifold. Given this, by a diagonal argument one can find a subsequence (also indexed by $n$) and neighbourhoods $\mathcal{B}_{n}$ of $\mathcal{A}^{c}_{r_{n}}(p_{n};1/2,2^{k_{n}})$, with $k_{n}\rightarrow \infty$, collapsing to a two-dimensional orbifold $(S_{\infty};q_{\infty})$. As the collapse is along $\Sa$-fibers (hence defining asymptotically a symmetry), we obtain, in the limit, a well defined reduced data set $(S;q,\overline{U},V)$, where $\overline{U}$ is obtained as the limit of $U_{n}:=U-U(p_{n})$. On smooth points the scalar curvature $\kappa$ is non-negative. Orbifold points are conical with total angles an integer fraction of $2\pi$ ($2\pi/2$, $2\pi/3$, $2\pi/4$, etc.), hence they can also be thought of as having non-negative curvature (they can be rounded off to have a smooth metric with $\kappa\geq 0$). Therefore $(S;q)$ has only a finite number of ends. Note that it has at least one end containing a limit, say $\overline{\gamma}$, of the ray $\gamma$. Let us denote that end by $S_{\overline{\gamma}}$. We claim that every end has only a finite number of orbifold points. This is the result of a simple application of Gauss-Bonnet. Indeed, let $S$ be an end. Let $\ell_{j}$, $j=1,2,3,\ldots$, be one-manifolds, embedded for each $j$ in $\mathcal{A}(2^{2j},2^{2j+3})$, such that $\ell_{1}$ and $\ell_{j}$ enclose a connected manifold $\Omega_{1j}$. Let $\mathcal{O}$ be the set of orbifold points in $S$. By Gauss-Bonnet we have \begin{equation} -\int_{\ell_{1}}kdl-\int_{\ell_{j}}kdl=\int_{\Omega_{1j}\setminus \mathcal{O}}\kappa dA+\sum_{p\in \Omega_{1j}\cap \mathcal{O}}2\pi\bigg(\frac{i(p)-1}{i(p)}\bigg) \end{equation} where $k$ is the mean curvature (i.e. the first variation of the logarithm of length) on the one-manifolds $\ell_{j}$, and the angle at each orbifold point $p\in \mathcal{O}$ is $2\pi/i(p)$.
The right hand side is greater than or equal to the number of orbifold points in $\Omega_{1j}$, that is, $\sharp\{\Omega_{1j}\cap \mathcal{O}\}$. Thus, if the left hand side remains bounded as $j\rightarrow \infty$ then the number of orbifold points must be finite. To see that there exist such one-manifolds $\ell_{j}$ for which the left hand side remains bounded, argue as follows. First note that the left hand side is scale invariant. Second, observe that as for each $j$ the scaled annuli $(\mathcal{A}(2^{2j},2^{2j+3});q_{2^{2j}})$ in $S$ are scaled limits of annuli in $(\Sigma;\hg)$ (which has quadratic curvature decay), one can always choose a suitable subsequence $j_{i}$ such that, as $i\rightarrow \infty$, the annuli $(\mathcal{A}(2^{2j_{i}},2^{2j_{i}+3});q_{2^{2j_{i}}})$ either converge or collapse to a segment. The selection of the $\ell_{j_{i}}$ is then evident. \section{Volume growth and the asymptotic of ends}\label{VWAE} The asymptotic of ends is markedly divided by the volume growth. We first discuss cubic volume growth, which is the simplest case and implies AF. Then we discuss sub-cubic volume growth, which (under certain hypotheses) implies AK. This last case requires an elaborate and long analysis. \subsection{Cubic volume growth and asymptotic flatness}\label{ENDSAF} Suppose $(\Sigma;\hg,U)$ is a static end with cubic volume growth. Cubic volume growth, non-negative Ricci curvature and quadratic curvature decay imply that the end is asymptotically conical (i.e. the metric is asymptotic to a metric of the form $dr^{2}+a^{2}r^{2}d\Omega^{2}$ on $\mathbb{R}^{3}$). Hence, outside an open set of compact closure, $\Sigma$ is diffeomorphic to $\mathbb{R}^{3}$ minus a ball. It was proved in \cite{MR3233266,MR3233267} that the data is then asymptotically flat (indeed asymptotically Schwarzschild).
\subsection{Sub-cubic volume growth and Kasner asymptotic}\label{ENDSAK} The goal of this section is to prove that the asymptotic of any static black hole data set with sub-cubic volume growth is Kasner, different from a Kasner $A$ or $C$. Observe that the claim is for the asymptotic of black hole data sets, and not just that of any end with sub-cubic volume growth. We aim therefore to prove the following theorem. \begin{Theorem}\label{KAFR} Let $(\Sigma;\hg,U)$ be a static black hole data set with sub-cubic volume growth. Then the data is asymptotically Kasner, different from a Kasner $A$ or $C$. \end{Theorem} To achieve this we first provide a necessary and sufficient condition for Kasner asymptotic different from $A$ or $C$. This is the content of Proposition \ref{KASYMPTOTIC}, to which we dedicate the whole of subsection \ref{SPTWOP1}. Second, we analyse the asymptotic of free $\Sa$-symmetric static ends $(\Sigma;\hg,U)$ under the natural condition that $U(p)\leq U_{\infty}$ (recall that $U_{\infty}$, the limit of $U$ at $\infty$, exists by Proposition \ref{LPRO}). We dedicate subsection \ref{FTKASS} to proving Theorem \ref{SSKAA}, which claims that, for such data, either the asymptotic is Kasner different from $A$ or $C$, or the whole data is flat and $U$ is constant. The proof requires the results we have obtained for reduced data sets in Section \ref{S1S}, as well as the development of an interesting monotonic quantity along the leaves of the level sets of $U$, which in turn will be used again in the proof of Theorem \ref{KAFR}. Finally, subsection \ref{POKA} uses the results of the previous two subsections to prove the desired Theorem \ref{KAFR}. \subsubsection{Preliminaries, $C^{k}$-norms on two-tori} \label{SPTWOP0} We prove here a series of results on the $C^{k}$-norms of tensor fields on two-tori that will be used in subsection \ref{SPTWOP1}. We begin by recalling the definition of the $C^{k}$-norms of a tensor with respect to a background metric.
Let $(M;g)$ be a smooth Riemannian manifold. Let $W$ be a smooth tensor of any valence. We denote by $|W|_{g}(x)$ the $g$-norm of $W$ at $x\in M$. Given $k\geq 0$, the $C^{k}$-norm of $W$ with respect to $g$ is defined as \begin{equation} \|W\|^{2}_{C^{k}_{g}}:=\sup_{x\in M}\bigg\{\sum_{i=0}^{i=k}|\nabla^{(i)}W|^{2}_{g}(x)\bigg\} \quad \text{where}\quad \nabla^{(i)}W=\underbrace{\nabla\ldots\nabla}_{\text{i-times}} W \end{equation} \begin{Proposition}\label{ENDK1} Let $(T;h_{F})$ be a flat two-torus. Let $W$ be a smooth tensor field (of any valence), equal to zero at some point. Then for any $0\leq j\leq k$ we have \begin{equation}\label{TOP} \|W\|_{C^{j}_{h_F}}\leq c(k)\ \diam_{h_F}^{k-j}(T)\ \|W\|_{C^{k}_{h_F}} \end{equation} \end{Proposition} \begin{proof} We will prove the inequality for functions. To prove it for tensors use the expansion $W=\sum f_{I}\omega_{I}$, where $\{\omega_{I}\}$ is an orthonormal and parallel basis (i.e. $\delta_{II'}=\langle\omega_{I},\omega_{I'}\rangle_{g}$ and $\nabla \omega_{I}=0$), and then use the result obtained for functions. We will work in $(\mathbb{R}^{2}; g_{\mathbb{R}^{2}})$, thought of as the universal cover of $(T;h_F)$. In particular $\pi^{*}h_{F}=g_{\mathbb{R}^{2}}$, where $\pi:\mathbb{R}^{2}\rightarrow T$ is the projection. On a Cartesian coordinate system $(x_1,x_2)$ we have \begin{equation}\label{EUCMETR} g_{\mathbb{R}^{2}}=dx_{1}^{2}+dx_{2}^{2} \end{equation} and \begin{equation}\label{NOSI} \|f\|^{2}_{C^{j}_{h_{F}}}=\|f\|^{2}_{C^{j}_{g_{\mathbb{R}^{2}}}}=\sup_{x\in \mathbb{R}^{2}}\bigg\{\sum_{|I|=0}^{|I|=j} |\partial_{I} f|^{2}(x)\bigg\} \end{equation} where for any multi-index $I=(i_{1},\ldots,i_{|I|})$, $i_{l}\in \{1,2\}$, we denote $\partial_{I}=\partial_{x_{i_1}}\ldots\partial_{x_{i_{|I|}}}$.
We will need to rely on the existence of a coordinate system $(\overline{x}_{1},\overline{x}_{2})$ on $\mathbb{R}^{2}$ on which the metric $g_{\mathbb{R}^{2}}$ is written as \begin{equation}\label{METRIC} g_{\mathbb{R}^{2}}=d\overline{x}_{1}^{2}+\alpha (d\overline{x}_{1}d\overline{x}_{2}+d\overline{x}_{2}d\overline{x}_{1})+d\overline{x}_{2}^{2}, \end{equation} where $\alpha$ is a constant such that $|\alpha|\leq 1/2$, and where the directions $\partial_{\overline{x}_{1}}$ and $\partial_{\overline{x}_{2}}$ are periodic of period less than $6\diam_{h_F}(T)$, that is, any line in the direction of either $\partial_{\overline{x}_{1}}$ or $\partial_{\overline{x}_{2}}$ projects into a circle in $T$ of length less than $6\diam_{h_F}(T)$. For the calculations that follow we assume that the coordinates $(\overline{x}_{1},\overline{x}_{2})$ are given. We will prove their existence at the end. Observe that the norm (\ref{NOSI}), which is defined with respect to the metric (\ref{EUCMETR}), and the norm \begin{equation}\label{NOSII} \|f\|^{2}_{C^{j}_{\overline{g}_{\mathbb{R}^{2}}}}=\sup\bigg\{\sum_{|I|=0}^{|I|=j} |\overline{\partial}_{I} f|^{2}\bigg\},\qquad \overline{\partial}_{I}=\partial_{\overline{x}_{i_{1}}}\ldots\partial_{\overline{x}_{i_{|I|}}}, \end{equation} which is defined with respect to the metric \begin{equation} \overline{g}_{\mathbb{R}^{2}}=d\overline{x}^{2}_{1}+d\overline{x}^{2}_{2}, \end{equation} are equivalent; namely, $c_{1}(j) \|f\|_{C^{j}_{g_{\mathbb{R}^{2}}}} \leq \|f\|_{C^{j}_{\overline{g}_{\mathbb{R}^{2}}}}\leq c_{2}(j) \|f\|_{C^{j}_{g_{\mathbb{R}^{2}}}}$. This is proved by noting that the family of metrics (\ref{METRIC}) with $|\alpha|\leq 1/2$ is compact. Thus, to prove (\ref{TOP}) it is enough to prove \begin{equation}\label{TOPP} \|W\|_{C^{j}_{\overline{g}_{\mathbb{R}^{2}}}}\leq c(k)\ \diam_{h_F}^{k-j}(T)\ \|W\|_{C^{k}_{\overline{g}_{\mathbb{R}^{2}}}} \end{equation} Again, we prove this for functions, in what follows.
We claim first that for any function $\psi$ which is zero at some point, say $(\overline{x}_{1}^{0},\overline{x}_{2}^{0})$, we have \begin{equation}\label{ITER} \sup\big\{|\psi|\big\}\leq 12\diam_{h_F}(T)\sup\big\{|\partial_{\overline{x}_{1}} \psi|, |\partial_{\overline{x}_{2}} \psi|\big\} \end{equation} This is seen by just writing \begin{equation} \psi(\overline{x}_{1},\overline{x}_{2})=\int_{0}^{\overline{x}_{1}-\overline{x}_{1}^{0}}\partial_{\overline{x}_{1}} \psi\bigg|_{(\overline{x}_{1}^{0}+s,\overline{x}_{2}^{0})}ds + \int_{0}^{\overline{x}_{2}-\overline{x}_{2}^{0}} \partial_{\overline{x}_{2}} \psi\bigg|_{(\overline{x}_{1},\overline{x}_{2}^{0}+s)}ds \end{equation} and using that $|\overline{x}_{1}-\overline{x}^{0}_{1}|$ and $|\overline{x}_{2}-\overline{x}_{2}^{0}|$ can be taken less than or equal to $6\diam_{h_F}(T)$. We use (\ref{ITER}) to prove (\ref{TOPP}). We observe first that, for any $\psi$ and any multi-index $I$ ($|I|\geq 1$), the function $\overline{\partial}_{I}\psi$ also has a zero. To see this just fix $\overline{x}_{i}$ for all $i\neq i_{1}$ (at any values), and observe that $\overline{\partial}_{I}\psi$, as a function of $\overline{x}_{i_{1}}$, is the $\overline{x}_{i_{1}}$-derivative of a periodic function, and hence must vanish at some point. Having this observation at hand, start with $\psi=f$ and use (\ref{ITER}) repeatedly, $k-j$ times, to obtain (\ref{TOPP}). It remains to show the existence of the coordinates $(\overline{x}_{1},\overline{x}_{2})$. In the Cartesian system $(x_{1},x_{2})$, the balls $B((4\diam_{h_F}(T),0),\diam_{h_F}(T))$ and $B((0,4\diam_{h_F}(T)),\diam_{h_F}(T))$ possess points $q_{1}$ and $q_{2}$ projecting (in $T$) to the same point as the point $q_{0}=(0,0)$ does. Define the directions $\partial_{\overline{x}_{1}}$ and $\partial_{\overline{x}_{2}}$ as, respectively, those defined by $q_{0}, q_{1}$ and $q_{0}, q_{2}$, and finally define the origin of the coordinates $(\overline{x}_{1},\overline{x}_{2})$ to be $(x_{1},x_{2})=(0,0)$.
It is direct to check that the coordinates $(\overline{x}_{1},\overline{x}_{2})$ thus constructed enjoy the required properties. \end{proof} \begin{Proposition}\label{FLATTGA} Let $(T;h)$ be a Riemannian two-torus and let $p\in T$. Then there is a unique flat metric $h_{F}$, conformally related to $h$ and equal to $h$ at $p$. Moreover, for any integer $k\geq 1$ and reals $K_{1}>0$ and $K_{k}>0$, there are $D(K_{1})>0$ (small enough) and $C(k,K_{k})>0$ such that if \begin{equation} \|\gcur\|_{C^{1}_{h}}\leq K_{1},\quad \|\gcur\|_{C^{k}_{h}}\leq K_{k},\quad \text{and}\quad \diam_{h}(T)\leq D \end{equation} where $\gcur$ is the Gaussian curvature, then, \begin{equation}\label{FLATTGAEQQ} e^{-C}h_{F}\leq h\leq e^{C}h_{F} \end{equation} and \begin{equation}\label{FLATTGAEQ} \|h\|_{C^{k}_{h_F}}\leq C. \end{equation} \end{Proposition} \begin{proof} We will use that there is $D(K_{1})$ (small enough) such that if $\diam_{h}(T)\leq D(K_{1})$ then there is a finite cover $\pi:(\tilde{T};\tilde{h})\rightarrow (T;h)$ (i.e. $\pi:\tilde{T}\rightarrow T$ and $\tilde{h}=\pi^{*}h$) such that (i) $\diam_{\tilde{h}}(\tilde{T})\leq 1$, and (ii) ${\rm inj}_{\tilde{h}}(p)\geq i_{0}(K_{1})$ for all $p\in \tilde{T}$. Because $(\tilde{T};\tilde{h})$ is a cover of $(T;h)$ we also have (iii) $\|\tilde{\gcur}\|_{C^{k}_{\tilde{h}}}\leq K_{k}$. The claims (i) and (ii) are well known from the standard theory of diameter-collapse with bounded curvature. In simple terms, they follow easily from the fact that the exponential map $\exp:\mathcal{T}_{p}T\rightarrow T$ restricted to a small ball in $\mathcal{T}_{p}T$ is an immersion, and then finding an appropriate fundamental domain in $\mathcal{T}_{p}T$ around $p$ that will define $\tilde{T}$. We will not discuss this further; rather, we will use it from now on.
The properties (i) and (ii) imply that the geometry of $(\tilde{T};\tilde{h})$ is controlled\footnote{To be precise: A geometry is controlled in $C^{k}$ by $K$ if there is a cover of $\tilde{T}$ by $n(K)$ harmonic charts, with Lebesgue number $\delta(K)$, such that, on each chart $(x_{1},x_{2})$, we have (i) $e^{-K'(K)}\delta_{ij}\leq \tilde{h}_{ij}\leq e^{K'(K)}\delta_{ij}$ and (ii) $\|\tilde{h}_{ij}\|_{C^{k}_{\delta_{ij}}}\leq K'(K)$. See \cite{MR2243772}.} in $C^{2}$ by $K_{1}$. Moreover if the geometry of $(\tilde{T};\tilde{h})$ is controlled in $C^{2}$ by $K_{1}$ and (iii) above holds, then the geometry of $(\tilde{T};\tilde{h})$ is controlled in $C^{k+1}$ by $K_{k}$. This allows us to carry out standard elliptic analysis in $(\tilde{T};\tilde{h})$ as if working in a fixed manifold. Let $\tilde{\phi}$ be the solution to \begin{equation}\label{LAPPHI} \Delta_{\tilde{h}}\tilde{\phi}=\tilde{\gcur},\quad \text{with}\quad \int_{\tilde{T}}\tilde{\phi}\, dA_{\tilde{h}}=0 \end{equation} With such $\tilde{\phi}$, the conformal metric $\tilde{h}_{F}=e^{2\tilde{\phi}}\tilde{h}$ is flat (recall that, in two dimensions, the Gaussian curvature of $e^{2\tilde{\phi}}\tilde{h}$ is $e^{-2\tilde{\phi}}(\tilde{\gcur}-\Delta_{\tilde{h}}\tilde{\phi})$). Multiply (\ref{LAPPHI}) by $\tilde{\phi}$, integrate by parts and use Cauchy-Schwarz to obtain \begin{equation}\label{RRRR} \int_{\tilde{T}}|\nabla \tilde{\phi}|^{2}_{\tilde{h}}\, dA_{\tilde{h}}\leq \big(\int_{\tilde{T}}\tilde{\gcur}^{2}\, dA_{\tilde{h}}\big)^{\frac{1}{2}}\big(\int_{\tilde{T}}\tilde{\phi}^{2}\, dA_{\tilde{h}}\big)^{\frac{1}{2}} \end{equation} Now, we can use the Poincar\'e inequality \begin{equation}\label{POINCA} \int_{\tilde{T}}\tilde{\phi}^{2}\, dA_{\tilde{h}}\leq I(K_{1})\int_{\tilde{T}}|\nabla \tilde{\phi}|^{2}\, dA_{\tilde{h}} \end{equation} in the right hand side of (\ref{RRRR}) to obtain an upper bound on $\|\nabla \tilde{\phi}\|_{L^{2}_{\tilde{h}}}$ (that $I=I(K_{1})$ is justified because the geometry of $\tilde{T}$ is controlled in $C^{2}$). This bound can in turn be used again in (\ref{POINCA}) to obtain $\|\tilde{\phi}\|_{L^{2}_{\tilde{h}}}\leq B_{1}(K_{1})$.
Using this $L^{2}$-bound together with standard elliptic estimates on (\ref{LAPPHI}) we obtain \begin{equation}\label{BNEC} \|\tilde{\phi}\|_{C^{k}_{\tilde{h}}}\leq B_{2}(k,K_{k}). \end{equation} As $k\geq 1$, we deduce \begin{equation}\label{BOUNNO} |\tilde{\phi}|\leq B_{2}(k,K_{k}). \end{equation} This implies that for a $C_{1}(k,K_{k})>0$ we have \begin{equation}\label{BNECI} e^{-C_{1}}\tilde{h}\leq \tilde{h}_{F}\leq e^{C_{1}}\tilde{h} \end{equation} Moreover the covariant derivative $\partial$ of $\tilde{h}_{F}$ is related to the covariant derivative $\nabla$ of $\tilde{h}$ by \begin{equation}\label{BNECII} \partial_{A}X^{C}=\nabla_{A}X^{C}+\big((\nabla_{A}\tilde{\phi})\,\delta^{C}_{B}+(\nabla_{B}\tilde{\phi})\,\delta^{C}_{A}-(\nabla^{C}\tilde{\phi})\,\tilde{h}_{AB}\big)X^{B} \end{equation} Now (\ref{BNEC}), (\ref{BNECI}) and (\ref{BNECII}) (to compute $\partial^{(j)}$) imply the bound \begin{equation}\label{BOUNN} \|\tilde{\phi}\|_{C^{k}_{\tilde{h}_{F}}}\leq B_{3}(k,K_{k}). \end{equation} By the uniqueness of solutions to (\ref{LAPPHI}), $\tilde{\phi}$ has to coincide with its average under the Deck transformations. Hence, $\tilde{\phi}$ and $\tilde{h}_{F}$ descend respectively to a function $\overline{\phi}$ and a flat metric $h_{F}$. As the bound (\ref{BOUNN}) is local we also have \begin{equation}\label{BOUNN22} \|\overline{\phi}\|_{C^{k}_{h_{F}}}\leq B_{3}(k,K_{k}). \end{equation} Finally define $\phi=\overline{\phi}-\overline{\phi}(p)$. With this definition we have $h(p)=h_{F}(p)$. From the bound $|\overline{\phi}|\leq B_{2}(k,K_{k})$ we obtain the bound $|\phi|\leq 2B_{2}(k,K_{k})$, hence (\ref{FLATTGAEQQ}). Also, from $|\phi|\leq 2B_{2}(k,K_{k})$ and (\ref{BOUNN22}) we deduce \begin{equation}\label{BOUNN3} \|\phi\|_{C^{k}_{h_{F}}}\leq B_{4}(k,K_{k}), \end{equation} hence (\ref{FLATTGAEQ}).
\end{proof} \subsubsection{Necessary and sufficient condition for KA different from $A$ or $C$.}\label{SPTWOP1} Before passing to the next crucial propositions we make a pair of geometric observations about the Kasner solutions and introduce some terminology. On $\mathbb{R}^{+}\times \mathbb{R}^{2}$ consider a Kasner solution \begin{align} & \hg=dx^{2}+x^{2a}dy^{2}+x^{2b}dz^{2},\\ & U=c\ln x \end{align} and assume that $c\in (0,1/2]$. Quotient the space by a $\mathbb{Z}\times \mathbb{Z}$ action to obtain a Kasner solution on $\mathbb{R}^{+}\times {\rm T}^{2}$. For every $x$ let $T_{x}$ be the two-torus $\{x\}\times {\rm T}^{2}$. For fixed $c$, there are two possibilities for $(a,b)$, namely $(a_{-},b_{-})$ and $(a_{+},b_{+})=(b_{-},a_{-})$. In either case, and because $c\in (0,1/2]$, we have $0<a<1$, $0<b<1$. Let \begin{equation}\label{ASDEF} a_{*}=\max\big\{e^{2/(1-a)},e^{2/(1-b)}\big\} \end{equation} Observe that, as $a+b=1$, we have $a_{*}\geq 4$. Note that if $\overline{a}\geq a_{*}$ then \begin{equation}\label{DIAMHAL} \diam_{\hg_{\overline{a}}}(T_{\overline{a}})\leq \frac{1}{e^{2}}\diam_{\hg}(T_{1}) \end{equation} where, recalling earlier notation, $\hg_{\overline{a}}=\hg/\overline{a}^{2}$. To see this simply note that \begin{equation}\label{ASTRA} \frac{1}{\overline{a}^{2}}(\overline{a}^{2a}dy^{2}+\overline{a}^{2b}dz^{2})=(\overline{a}^{2a-2}dy^{2}+\overline{a}^{2b-2}dz^{2})\leq \frac{1}{e^{4}}(dz^{2}+dy^{2}) \end{equation} so (\ref{DIAMHAL}) holds no matter how we quotient $\mathbb{R}^{2}$. Thus the diameter of $T_{\overline{a}}$ with respect to $\hg_{\overline{a}}$ is at most $1/e^{2}$ times the diameter of $T_{1}$ with respect to $\hg$. \vspace{0.2cm} In the following propositions we will use the notation $\rho=|\nabla U|$, $\rho_{r}=|\nabla U|_{r}$ and $\lambda=1/\rho$, $\lambda_{r}=1/\rho_{r}$. Also, given $0<\rho^{*}\leq 1/2$ we let \begin{equation} a^{*}=\max\{a_{*}(c): c \in [\rho^{*}/4, 1/2]\}. \end{equation} The reader must keep this notation in mind.
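For completeness, the bound $a_{*}\geq 4$ can be checked directly from (\ref{ASDEF}): since $a+b=1$ with $0<a,b<1$, we have $1-a=b<1$ and $1-b=a<1$, so both exponents in (\ref{ASDEF}) exceed $2$ and therefore
\begin{equation*}
a_{*}=\max\big\{e^{2/(1-a)},e^{2/(1-b)}\big\}\geq e^{2}>4.
\end{equation*}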
We will also use the following definition. Let $W$ be a tensor of any valence defined at just one point $x$ of a flat torus $(T;h_{F})$. Then the {\it $h_{F}$-extension} of $W$ is the tensor field defined by translating $W$ to all of $T$ by its isometry group. \begin{Proposition}\label{PORSI} Let $(\Sigma; \hg, U)$ be a static end, and let $\gamma$ be a ray emanating from $\partial \Sigma$. Let $0<\rho^{*}\leq 1/2$ and let $j^{*}\geq 0$ and $m^{*}\geq 1$ be integers. Then, there exist positive constants $\epsilon^{*}$, $\mu^{*}\leq \rho^{*}/2$, $r^*$ and $C^*$, such that if at a point $p\in \gamma$ with $r=r(p)\geq r^*$ we have, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\alph*)}, widest=a, align=left] \item\label{PORSI-a} $\dist_{GH}\big(\big(\mathcal{A}^{c}_{r}(p;1/2,2); d_{r}\big),\big([1/2,2];|\ldots|\big)\big)\leq \epsilon^*$, and, \item\label{PORSI-b} $|\rho_{r}(p)-\rho^*|\leq \mu^{*}$, \end{enumerate} then, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=II, align=left] \item\label{PORSI-I} there is a neighbourhood $\mathcal{U}_{p}$ of $\mathcal{A}_{r}^{c}(p;1/(2a^{*}),2a^{*})$ foliated by level sets of $U$, each of which is a two-torus, and, \item\label{PORSI-II} there is a Kasner space $(\mathcal{U}^{\mathbb{K}};\hg^{\mathbb{K}},U^{\mathbb{K}})$, $\mathbb{Z}^{2}$-quotient of the Kasner space \begin{equation} \tilde{\mathcal{U}}^{\mathbb{K}}=I\times \mathbb{R}^{2},\quad \tilde{\hg}^{\mathbb{K}}=dx^{2}+x^{2a}dy^{2}+x^{2b}dz^{2},\quad \tilde{U}^{\mathbb{K}}=d+ c\ln x \end{equation} ($I$ is some interval) and a smooth diffeomorphism $\phi:\mathcal{U}_{p}\rightarrow \mathcal{U}^{\mathbb{K}}=I\times {\rm T}^{2}$ such that \begin{align}\label{ESTIMATE} & \phi_{*}U=U^{\mathbb{K}},\\ & \|\phi_{*} \hg_r - \hg^{\mathbb{K}}\|_{C^{j^{*}}_{\hg^{\mathbb{K}}}} \leq C^{*} \diam_{\hg^{\mathbb{K}}}^{m^*}\big(\phi(T_{p})\big) \end{align} where $T_p$ is the level set of $U$ containing $p$.
\end{enumerate} \end{Proposition} \begin{proof}\ \ref{PORSI-I} Proceeding by contradiction, assume that for every $\epsilon^{*}_{i}=1/i$, $\mu^{*}_{i}=1/i$ and $r^{*}_{i}=i$, with $i\geq i_{0}$, there is $p_{i}\in\gamma$ with $r_{i}=r(p_{i})\geq r^{*}_{i}$ for which \ref{PORSI-a} and \ref{PORSI-b} hold but for which the neighbourhood $\mathcal{U}_{p_{i}}$ with the desired properties does not exist. But if \ref{PORSI-a} holds and $p_{i}$ belongs to a ray then the space $(\mathcal{A}_{r_{i}}^{c}(p_{i};1/(2a^{*}),2a^{*});d_{r_{i}})$ necessarily metrically collapses to a segment of length $2a^{*}-1/(2a^{*})$. Thus there are neighbourhoods $\mathcal{B}_{i}$ of $\mathcal{A}_{r_{i}}^{c}(p_{i};1/(2a^{*}),2a^{*})$ and covers $\pi_{i}:\tilde{\mathcal{B}}_{i}\rightarrow \mathcal{B}_{i}$ such that $(\tilde{\mathcal{B}}_{i};\tilde{g}_{r_{i}},\tilde{U}_{i})$ converges to a $\Sa\times \Sa$-symmetric data set. The limit data set has non-constant $\rho$ because by \ref{PORSI-b} we must have $\tilde{\rho}_{r_{i}}(p_{i})\rightarrow \rho^{*}$ and $0<\rho^{*}\leq 1/2$. Hence the limit space is a Kasner space different from $A$ and $C$. Therefore for $i$ large enough the level sets of $\tilde{U}_{i}$ foliate $\tilde{\mathcal{B}}_{i}$, and hence those of $U$ foliate $\mathcal{B}_{i}$. Thus the neighbourhoods $\mathcal{U}_{p_{i}}$ with the desired properties exist for $i$ large enough, which is a contradiction. \vspace{0.2cm} It follows directly from the argument above that, after choosing $\epsilon^{*}$ smaller if necessary and $r^{*}$ bigger if necessary, $\rho_{r}$ is uniformly bounded above and below away from zero; that is, for some $0<\underline{\rho}<\overline{\rho}$, the bound $0<\underline{\rho}\leq \rho_{r}\leq \overline{\rho}$ holds on $\mathcal{U}_{p}$ for any $p\in \gamma$ with $r(p)\geq r^{*}$ for which \ref{PORSI-a} and \ref{PORSI-b} hold. In the proof of part \ref{PORSI-II} we will assume that $\epsilon^{*}$ and $r^{*}$ were chosen accordingly.
As the proof progresses the values of $\epsilon^{*}$ and $r^{*}$ will be adjusted a few times. Note that the estimates (\ref{ESTCHEC}) of Part I and the uniform bound for $\rho_{r}$ show that for any $i\geq 0$, $|\nabla^{(i)}\rho_{r}|_{r}$ is uniformly bounded (without the need to adjust $\epsilon^{*}$ or $r^{*}$ for each $i$). Similarly, for any $i\geq 0$, $|\nabla^{(i)}\lambda_{r}|_{r}$ is uniformly bounded. {\it Terminology}: It is natural then to introduce the following terminology, which will be used throughout the proof of \ref{PORSI-II} below. Let $\mathcal{G}$ be a geometric quantity defined on each of the neighbourhoods $\mathcal{U}_{p}$ (for instance $\mathcal{G}=\lambda_{r}$). Then $\mathcal{G}$ is {\it uniformly bounded} if one can find a constant $C>0$ such that $\mathcal{G}\leq C$ holds on $\mathcal{U}_{p}$, for any $p$ with $r(p)\geq r^{*}$ for which \ref{PORSI-a} and \ref{PORSI-b} hold. \vspace{0.2cm} \ref{PORSI-II} The construction of the Kasner space and the map $\phi$ is done in the three progressive steps \ref{PORSI-II}-A, \ref{PORSI-II}-B and \ref{PORSI-II}-C below. In \ref{PORSI-II}-A we define a map $\phi$ from $\mathcal{U}_{p}$ into a product space $I\times {\rm T}^{2}$. Then, also in \ref{PORSI-II}-A, we define a product metric $\hg_{F}$ on $I\times {\rm T}^{2}$ that will be used as a support metric, and prove its main properties. In \ref{PORSI-II}-B, we use $\hg_{F}$ to define a good $\Sa\times \Sa$-symmetric approximation $\breve{\hg}$ to $\phi_{*}\hg$. Finally, in \ref{PORSI-II}-C we show that $(\breve{\lambda},\breve{\hg})$ `almost' satisfy the ODEs defining Kasner metrics that were discussed in subsection \ref{UNIQ}, and we make the error explicit. We show that the Kasner solution defined out of such ODEs, with initial data equal to that of $(\breve{\lambda},\breve{\hg})$ at an initial slice, gives the desired Kasner metric $\hg^{\mathbb{K}}$ and potential $U^{\mathbb{K}}$.
\vspace{0.2cm} {\it Notation}: Throughout this part \ref{PORSI-II} we will be working on the neighbourhoods $\mathcal{U}_{p}$ and at the scaled geometry, namely dealing with $\hg_{r}$ rather than $\hg$. However, to prevent cumbersome notation we will omit the subindex $r$ everywhere; the reader should keep this in mind. \ref{PORSI-II}-A. {\sc The trivialisation $\phi$ and the flat metric $\hg_{F}$.} Given $q\in \mathcal{U}_{p}$ let $\zeta_{q}(U)$ be the integral curve of the vector field $\nabla^{a} U$ passing through $q$, extended throughout $\mathcal{U}_{p}$ and parametrised by $U$. Then define $\phi:\mathcal{U}_{p}\rightarrow T_{p}\times I$ by $\phi(q)=(T_{p}\cap \zeta_{q},U(q))$. We will be identifying $\mathcal{U}_{p}$ with $T_{p}\times I$ via the diffeomorphism $\phi$. On $T_{p}\times I$ the metric $\hg$ is written as \begin{equation} \hg=\lambda^{2} dU^{2}+h \end{equation} where $\lambda=1/\rho$ and $h$ is the induced metric on the tori $T_{U}:=T_{p}\times \{U\}$. Denote by $D$ the intrinsic covariant derivative on the $T_U$'s. As $T_{U(p)}$ will appear often we will use the simpler notation $T_{p}$. Let $h_{F}$ be the flat metric on $T_{p}$ that is conformally related to $h|_{T_{p}}$ and that is equal to $h$ at $p$ (Proposition \ref{FLATTGA}). On $T_{p}\times I$ define \begin{equation} \hg_{F}=dU^{2}+h_{F}. \end{equation} Around any point $q\in T_{p}$ we can consider coordinates $(z_{1},z_{2})$ such that $h_{F}=dz_{1}^{2}+dz_{2}^{2}$. On every patch $(z_{1},z_{2},U)$ we have $\hg_{F}=dU^{2}+dz_{1}^{2}+dz_{2}^{2}$. For this reason the $h_{F}$-covariant derivative on the tori $T_{U}$ will be denoted by $\partial_{A}$ or simply $\partial$.
We claim that, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label=(\roman*), widest=IV, align=left] \item\label{PORSI-i} $e^{-C_{0}}h_{F}\leq h\leq e^{C_{0}}h_{F}$, where $C_{0}>0$ is uniform, \item\label{PORSI-ii} for any $i\geq 0$ and $l\geq 0$, $|\partial^{l}_{U}\partial^{(i)} h|_{h_{F}}$ and $|\partial^{l}_{U}\partial^{(i)}\lambda|_{h_{F}}$ are uniformly bounded. \end{enumerate} Of course these uniform bounds should be understood to hold at every point of every $T_{U}$ in $\mathcal{U}_{p}$. We prove first \ref{PORSI-i}. We begin by showing that for every $i\geq 0$, $|D^{(i)}\lambda|_{h}$, $|D^{(i)}\Theta|_{h}$ and $|D^{(i)}\theta|_{h}$ are uniformly bounded. Let $v$ and $w$ be two unit vectors tangent to a $T_{U}$ at one point. A unit normal vector to $T_{U}$ is $n^{a}=\lambda\nabla^{a}U$. Then we compute, \begin{equation}\label{SECFUNDBC} \Theta(v,w)=\langle \nabla_{v} (\lambda\nabla U),w\rangle = \big(\lambda\nabla_{a}\nabla_{b}U+(\nabla_{a}\lambda) \nabla_{b}U\big)v^{a}w^{b} \end{equation} By the estimates (\ref{ESTCHEC}) of Part I, $|\nabla_{a}U|_{\hg}$ and $|\nabla_{a}\nabla_{b}U|_{\hg}$ are uniformly bounded. Similarly, as mentioned in \ref{PORSI-I}, $\lambda$ and $|\nabla \lambda|_{\hg}$ are uniformly bounded. Hence $|\Theta|_{h}$ is uniformly bounded. For the same reason the $\nabla$-derivatives of $\Theta$ are uniformly bounded, and therefore so are the $D$-derivatives, because $\nabla$ and $D$ differ by terms involving $\Theta$. These bounds imply the uniform bounds also for $|D^{(i)}\lambda|_{h}$ and $|D^{(i)}\theta|_{h}$. Recall that the Gaussian curvature $\kappa$ of the metric $h$ on a slice $T_{U}$ is given by \begin{equation} 2\kappa=\theta^{2}-|\Theta|^{2}-\frac{2}{\lambda^{2}}. \end{equation} The previous estimates then show that for every $i\geq 0$, $|D^{(i)}\kappa|_{h}$ is also uniformly bounded.
So far these uniform bounds hold without the need to adjust $\epsilon^{*}$ or $r^{*}$, because they are due essentially to the bounds (\ref{ESTCHEC}) of Part I and the uniform bounds for $\rho$. In the sequel, however, further adjustments may be needed. Choose then $\epsilon^{*}$ sufficiently small such that $\diam_{h}(T_{p})$ is small enough that we can use Proposition \ref{FLATTGA} on $T_{p}$ to conclude, first, that \begin{equation}\label{INCLUDE} e^{-K_{0}}h_{F}\leq h\big|_{T_{p}}\leq e^{K_{0}}h_{F} \end{equation} where $K_{0}>0$ is uniform and, second, that for any $i\geq 1$, $|\partial^{(i)} h|_{T_{p}}|_{h_{F}}$ is uniformly bounded. Now we explain how \ref{PORSI-i} is a simple consequence of the boundedness of the second fundamental forms. Recall that \begin{equation}\label{GGQQ} \partial_{U}h=2\lambda\Theta \end{equation} As $\lambda$ is uniformly bounded and as $-e^{K_{1}}h\leq \Theta\leq e^{K_{1}}h$ at every $T_{U}$ for some uniform $K_{1}>0$, we deduce that $-e^{K_{2}}h\leq \partial_{U} h\leq e^{K_{2}}h$ for some uniform $K_{2}>0$. After integration in $U$ we obtain $e^{-K_{3}}h|_{T_{p}}\leq h \leq e^{K_{3}}h|_{T_{p}}$ for some uniform $K_{3}>0$, which is equivalent to $e^{-C_{0}}h_{F}\leq h \leq e^{C_{0}}h_{F}$ for a uniform $C_{0}>0$ because of (\ref{INCLUDE}). We turn to prove \ref{PORSI-ii}. We have mentioned already that $|\nabla \lambda|_{\hg}$ is uniformly bounded. Thus, $|\partial_{U} \lambda|\,(=|\lambda^{2}\langle\nabla U,\nabla \lambda\rangle|)$ is uniformly bounded and so is $|\partial \lambda|_{h_{F}}$ by \ref{PORSI-i}. We then prove that $|\partial_{U}h|_{h_{F}}$ and $|\partial h|_{h_{F}}$ are uniformly bounded. The uniform bound for $|\partial_{U}h|_{h_{F}}$ follows directly from the formula (\ref{GGQQ}), the uniform bounds of $\lambda$ and of $|\Theta|_{h}$, and \ref{PORSI-i}. Let us now turn to the uniform bound for $|\partial h|_{h_{F}}$. We work in coordinates.
We compute \begin{equation} \partial_{U}\partial_{C}h_{AB}=2(\partial_{C}\lambda)\Theta_{AB}+2\lambda \partial_{C}\Theta_{AB} \end{equation} where we can write \begin{equation} \partial_{C}\Theta_{AB}=D_{C}\Theta_{AB}+\Gamma_{CA}^{M}\Theta_{MB}+\Gamma_{CB}^{M}\Theta_{AM} \end{equation} with the Christoffel symbols $\Gamma_{AB}^{C}$ of $h$ in the coordinates $(z_{1},z_{2})$ given by \begin{equation} \Gamma_{AB}^{C}=\frac{1}{2}\{\partial_{A}h_{MB}+\partial_{B}h_{AM}-\partial_{M}h_{AB}\}h^{MC} \end{equation} Hence, relying on the estimates previously obtained, we can write \begin{equation} \partial_{U}(\partial_{C}h_{AB})=X_{CAB}^{\ \ \ C'A'B'}(\partial_{C'}h_{A'B'})+Y_{CAB} \end{equation} where $|X_{CAB}^{\ \ \ C'A'B'}|$ and $|Y_{CAB}|$ are uniformly bounded. Using this system of first order ODEs and the uniform bound for $|\partial h|_{T_{p}}|_{h_{F}}$ at the initial slice $T_{p}$, we directly get the desired uniform boundedness of $|\partial_{C}h_{AB}|$. That for every $i\geq 0$ and $l\geq 0$, $|\partial^{l}_{U}\partial^{(i)}\lambda|_{h_{F}}$ and $|\partial_{U}^{l}\partial^{(i)} h|_{h_{F}}$ are uniformly bounded follows by iterating the same arguments. \vspace{0.2cm} \ref{PORSI-II}-B. {\sc A `good' $\Sa\times \Sa$-symmetric approximation $\breve{\hg}$ of $\hg$.} We explain first how to define $\breve{\hg}$ and then how well it approximates $\hg$. Let $p_{0}$ be a point in $T_{p}$ where the Gaussian curvature is zero. The choice of $p_{0}$ will play a role that we explain later. Then define \begin{equation} \breve{\hg}=\breve{\lambda}^{2}dU^{2}+\breve{h} \end{equation} where $\breve{\lambda}$ and $\breve{h}$ are, at every leaf $T_U$, simply the $h_{F}$-extensions of $\lambda(\zeta_{p_{0}}(U))$ and $h|_{\zeta_{p_{0}}(U)}$ respectively (recall the notion of $h_{F}$-extension right before the statement of the proposition). Note, in particular, that $h-\breve{h}$ and $\lambda-\breve{\lambda}$ are zero all over $\zeta_{p_{0}}(U)$.
We prove now that for every $i\geq 0$ and $l\geq 0$ there is a uniform $C>0$ such that \begin{gather} \label{FOR1} |\partial_{U}^{l}\partial^{(i)}(h-\breve{h})|_{h_{F}}\leq C\diam^{m^{*}}_{h_{F}}(T_{p}),\\ \label{FOR11} |\partial_{U}^{l}\partial^{(i)}(\lambda -\breve{\lambda})|_{h_{F}}\leq C\diam^{m^{*}}_{h_{F}}(T_{p}) \end{gather} Fix $i$ and $l$. In a coordinate patch $(z_{1},z_{2},U)$ around $\zeta_{p_{0}}=p_{0}\times I$, ($p_{0}=(0,0)$), we have \begin{equation} \breve{h}_{AB}(z_{1},z_{2},U)=h_{AB}(0,0,U),\quad \breve{\lambda}(z_{1},z_{2},U)=\lambda(0,0,U) \end{equation} for all $(z_{1},z_{2},U)$. Taking $\partial_{U}$-derivatives we deduce that for every $l'\geq 0$, also $(\partial_{U}^{l'}\breve{h})|_{T_{U}}$ and $(\partial_{U}^{l'}\breve{\lambda})|_{T_{U}}$ are the $h_{F}$-extensions of $(\partial_{U}^{l'}h)|_{\zeta_{p_{0}}(U)}$ and $(\partial_{U}^{l'}\lambda)|_{\zeta_{p_{0}}(U)}$ respectively. Therefore $\partial^{l'}_{U}(h-\breve{h})$ and $\partial^{l'}_{U}(\lambda-\breve{\lambda})$ are zero at every point on $\zeta_{p_{0}}(U)$. If we prove that in addition, for every $i'\geq 0$ and $l'\geq 0$, $|\partial^{(i')}\partial_{U}^{l'} (h-\breve{h})|_{h_{F}}$ and $|\partial^{(i')}\partial_{U}^{l'}(\lambda-\breve{\lambda})|_{h_{F}}$ are uniformly bounded, then the $C^{i+m^{*}}_{h_{F}}$-norms of $\partial_{U}^{l}(h-\breve{h})$ and $\partial_{U}^{l}(\lambda-\breve{\lambda})$ on every $T_{U}$ would be uniformly bounded. We could then use Proposition \ref{ENDK1} at every torus $T_{U}$ (in Proposition \ref{ENDK1} use $W=\partial^{l}_{U}(h-\breve{h})$ or $W=\partial^{l}_{U}(\lambda-\breve{\lambda})$, $k=i+m^{*}$ and $j=i$) to conclude (\ref{FOR1}) and (\ref{FOR11}). Let us now prove these bounds. First, as $(\partial_{U}^{l'}\breve{h})|_{T_{U}}$ is the $h_{F}$-extension of $(\partial_{U}^{l'}h)|_{\zeta_{p_{0}}(U)}$, at every point $q$ in a torus $T_{U}$ we have $|\partial^{l'}_{U}\breve{h}|_{h_{F}}(q)=|\partial^{l'}_{U}\breve{h}|_{h_{F}}(\zeta_{p_{0}}(U))=|\partial^{l'}_{U}h|_{h_{F}}(\zeta_{p_{0}}(U))$.
But by \ref{PORSI-ii}, for every $l'\geq 0$, $|\partial^{l'}_{U}h|_{h_{F}}$ is uniformly bounded, hence $|\partial^{l'}_{U}(h-\breve{h})|_{h_{F}}\,(\leq |\partial^{l'}_{U}h|_{h_{F}}+|\partial^{l'}_{U}\breve{h}|_{h_{F}})$ is uniformly bounded. Second, as $(\partial_{U}^{l'}\breve{h})|_{T_{U}}$ and $(\partial_{U}^{l'}\breve{\lambda})|_{T_{U}}$ are the $h_{F}$-extensions of $(\partial_{U}^{l'}h)|_{\zeta_{p_{0}}(U)}$ and $(\partial_{U}^{l'}\lambda)|_{\zeta_{p_{0}}(U)}$ respectively, for any $i'\geq 1$ we have $\partial^{(i')}\partial^{l'}_{U}\breve{h}=0$ and $\partial^{(i')}\partial^{l'}_{U}\breve{\lambda}=0$. Therefore, \begin{equation} \partial^{l}_{U}\partial^{(i')}(h-\breve{h})=\partial^{l}_{U}\partial^{(i')}h\quad {\rm and} \quad\partial^{l}_{U}\partial^{(i')}(\lambda-\breve{\lambda})=\partial^{l}_{U}\partial^{(i')}\lambda \end{equation} By the estimates \ref{PORSI-i} and \ref{PORSI-ii} above, the $h_{F}$-norm of the right hand side of each of these expressions is uniformly bounded. This concludes the proof of the bounds that we claimed above. These estimates now imply that, for any $i\geq 0$ and $l\geq 0$, we have \begin{gather} \label{FOR2} |\partial^{l}_{U}\partial^{(i)}DD(\lambda-\breve{\lambda})|_{h_{F}}\leq C_{li}\diam_{h_{F}}^{m^{*}}(T_{p}), \end{gather} where the $C_{li}$ are uniform (use that $D=\partial +\Gamma$). This is the estimate that will be used in \ref{PORSI-II}-C. \vspace{0.2cm} \ref{PORSI-II}-C.
{\sc The Kasner approximation $\hg^{\mathbb{K}}$ of $\hg$.} In coordinates $(z_{1},z_{2},U)$ the static equations are \begin{align} \label{SSA1} & \partial_{U}h_{AB}=2\lambda \Theta_{AB},\\ \label{SSA2} & \partial_{U}\Theta_{AB}=-D_{A}D_{B}\lambda +\lambda(2\kappa h_{AB}-\theta \Theta_{AB}+2\Theta_{AC}\Theta^{C}_{\ B}),\\ \label{SSA5} & \partial_{U}\bigg(\frac{\sqrt{|h|}}{\lambda}\bigg)=0,\\ \label{SSA3} & \Theta_{AB}\Theta^{AB}-\theta^{2}=-\frac{2}{\lambda^{2}}-2\kappa,\\ \label{SSA4} & D^{A}\Theta_{AB}=D_{B}\theta, \end{align} where, as earlier, $\theta=\Theta_{A}^{\ A}$. The equation (\ref{SSA5}) is the same as $\Delta U=0$ and is equivalent to \begin{equation}\label{SSA6} \partial_{U}\lambda=\lambda^{2}\theta \end{equation} We will use this equation instead of (\ref{SSA5}). Evaluating (\ref{SSA1}), (\ref{SSA2}), (\ref{SSA6}) and (\ref{SSA4}) at $\zeta_{p_{0}}(U)$, and (\ref{SSA3}) at $p_{0}$, we get, \begin{align} \label{SSA1B} & \partial_{U}\breve{h}_{AB}=2\breve{\lambda} \breve{\Theta}_{AB},\\ \label{SSA2B} & \partial_{U}\breve{\Theta}_{AB}=\breve{\lambda}(2\overline{\kappa}\breve{h}_{AB}-\breve{\theta} \breve{\Theta}_{AB}+2\breve{\Theta}_{AC}\breve{\Theta}^{C}_{\ B})+O^{\infty}_{AB}(\diam_{h_{F}}^{m^{*}}(T_{p})),\\ \label{SSA5B} &\partial_{U}\breve{\lambda}=\breve{\lambda}^{2}\breve{\theta},\\ \label{SSA3B} & \big(\breve{\Theta}_{AB}\breve{\Theta}^{AB}-\breve{\theta}^{2}\big)\bigg|_{p_{0}}=-\frac{2}{\breve{\lambda}^{2}}\bigg|_{p_{0}},\\ \label{SSA4B} &\breve{\partial}^{A}\breve{\Theta}_{AB}=\partial_{B}\breve{\theta}, \end{align} where $\overline{\kappa}$ is defined as \begin{equation} \overline{\kappa}=\bigg[-\frac{1}{\breve{\lambda}^{2}}-\frac{1}{2}\big(\breve{\Theta}_{AB}\breve{\Theta}^{AB}-\breve{\theta}^{2}\big) \bigg]\bigg|_{p_{0}} \end{equation} (and is not the Gaussian curvature of $\breve{h}$, which is zero) and where $O^{\infty}_{AB}$ is \begin{equation}\label{ODDI} O^{\infty}_{AB}=-D_{A}D_{B}\lambda.
\end{equation} This notation is to stress that, as was shown in (\ref{FOR2}), for any $l\geq 0$ we have \begin{equation} |\partial^{l}_{U}O^{\infty}_{AB}|_{h_{F}}\leq C_{l}\diam^{m^{*}}_{h_{F}}(T_{p}) \end{equation} where $C_{l}$ is uniform. Consider now the metric \begin{equation}\label{GKMETR} \hg^{\mathbb{K}}=(\lambda^{\mathbb{K}})^{2}dU^{2}+h^{\mathbb{K}}, \end{equation} where $\lambda^{\mathbb{K}}=\lambda^{\mathbb{K}}(U)$ and $h^{\mathbb{K}}=h^{\mathbb{K}}(U)$ solve \begin{align} \label{RTE41} & \partial_{U}h^{\mathbb{K}}_{AB}=2\lambda^{\mathbb{K}} \Theta^{\mathbb{K}}_{AB},\\ \label{RTE42} & \partial_{U}\Theta^{\mathbb{K}}_{AB}=\lambda^{\mathbb{K}}(-\theta^{\mathbb{K}}\Theta^{\mathbb{K}}_{AB}+2\Theta^{\mathbb{K}}_{AC}\Theta^{\mathbb{K}C}_{\ B}),\\ \label{RTE43} & \partial_{U}\lambda^{\mathbb{K}}=(\lambda^{\mathbb{K}})^{2}\theta^{\mathbb{K}} \end{align} subject to the initial data \begin{equation} h^{\mathbb{K}}_{AB}(0)=\breve{h}_{AB}(0),\quad \Theta_{AB}^{\mathbb{K}}(0)=\breve{\Theta}_{AB}(0)\quad \text{and}\quad \lambda^{\mathbb{K}}(0)=\breve{\lambda}(0). \end{equation} Following the discussion in subsection \ref{UNIQ}, we see that $(\lambda^{\mathbb{K}}(U),h^{\mathbb{K}}(U))$ satisfies (\ref{RTE1}), (\ref{RTE2}) and (\ref{RTE43}) for all $U$, and (\ref{RTE3}) at the initial time, and hence is a Kasner solution. Therefore \begin{equation}\label{ZERO} 0=-\frac{1}{(\lambda^{\mathbb{K}})^{2}}-\frac{1}{2}\big(\Theta^{\mathbb{K}}_{AB}\Theta^{\mathbb{K}AB}-(\theta^{\mathbb{K}})^{2}\big) \end{equation} on each $T_{U}$. Thus, (\ref{RTE42}) is equivalent to \begin{equation} \partial_{U}\Theta^{\mathbb{K}}_{AB}=\lambda^{\mathbb{K}}(2\overline{\kappa}^{\mathbb{K}}h^{\mathbb{K}}_{AB}-\theta^{\mathbb{K}}\Theta^{\mathbb{K}}_{AB}+2\Theta^{\mathbb{K}}_{AC}\Theta^{\mathbb{K}C}_{\ B}), \end{equation} where $\overline{\kappa}^{\mathbb{K}}$ is the right hand side of (\ref{ZERO}) and is zero.
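The comparison of the two systems rests on a standard Gr\"onwall-type ODE estimate. As a schematic sketch (the grouping $Y$, the Lipschitz constant $L$ and the interval $I$ are shorthands introduced only for this remark): if $Y$ and $Y^{\mathbb{K}}$ collect the unknowns of the two systems, written as $\partial_{U}Y=F(Y)+E$ and $\partial_{U}Y^{\mathbb{K}}=F(Y^{\mathbb{K}})$, with $F$ Lipschitz with constant $L$ on the relevant compact set and $Y(0)=Y^{\mathbb{K}}(0)$, then on the bounded interval $I$,
\begin{equation*}
|Y-Y^{\mathbb{K}}|(U)\leq \int_{0}^{U}\big(L\,|Y-Y^{\mathbb{K}}|+|E|\big)\, dU'\quad \Longrightarrow\quad \sup_{I}|Y-Y^{\mathbb{K}}|\leq |I|\, e^{L|I|}\sup_{I}|E|,
\end{equation*}
so an error $E$ of size $O(\diam^{m^{*}}_{h_{F}}(T_{p}))$ produces a deviation of the same size.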
Therefore, thought of as ODEs, the system (\ref{SSA1B}), (\ref{SSA2B}), (\ref{SSA5B}) is a perturbation of the system (\ref{RTE41}), (\ref{RTE42}), (\ref{RTE43}), where the `perturbation' is $O^{\infty}_{AB}$ and should be thought of as depending only on $U$. Both systems also have the same initial data. Therefore, using (\ref{ODDI}) and standard ODE analysis, we obtain \begin{gather} \label{FCASE1} |\partial^{l}_{U}(\breve{h}-h^{\mathbb{K}})|_{h_{F}}\leq C_{l}^{*}\diam^{m^{*}}_{h_{F}}(T_{p}),\\ \label{FCASE2}|\partial^{l}_{U}(\breve{\lambda}-\lambda^{\mathbb{K}})|\leq C^{*}_{l}\diam^{m^{*}}_{h_{F}}(T_{p}) \end{gather} for any $l\geq 0$, where the $C^{*}_{l}$ are uniform. Now note that because $\partial^{(i)}h^{\mathbb{K}}=\partial^{(i)}\breve{h}=0$, for every $i\geq 1$ we have \begin{gather} \partial_{U}^{l}\partial^{(i)}(h-h^{\mathbb{K}})=\partial_{U}^{l}\partial^{(i)}(h-\breve{h}),\\ \partial_{U}^{l}\partial^{(i)}(\lambda-\lambda^{\mathbb{K}})=\partial_{U}^{l}\partial^{(i)}(\lambda-\breve{\lambda}) \end{gather} Thus, from (\ref{FOR1}) and (\ref{FOR11}) we obtain \begin{gather} \label{FCASE3} |\partial_{U}^{l}\partial^{(i)}(h-h^{\mathbb{K}})|_{h_{F}}\leq C_{li}^{*}\diam^{m^{*}}_{h_{F}}(T_{p}),\\ \label{FCASE4} |\partial_{U}^{l}\partial^{(i)}(\lambda-\lambda^{\mathbb{K}})|_{h_{F}}\leq C_{li}^{*}\diam^{m^{*}}_{h_{F}}(T_{p}) \end{gather} where the $C^{*}_{li}$ are uniform. The estimates (\ref{ESTIMATE}) claimed in \ref{PORSI-II} follow from (\ref{FCASE1})-(\ref{FCASE4}). This finishes the proof of the Proposition. \end{proof} \begin{Proposition}\label{PORE} Let $(\Sigma; \hg, U)$ be a static end, and let $\gamma$ be a ray. Let $0<\rho^{*}\leq 1/2$ and let $j^{*}\geq 1$ and $m^{*}\geq 2$ be integers. Let $\epsilon^{*}$, $\mu^{*}$, $r^*$ and $C^*$ be as in Proposition \ref{PORSI}.
Then, there exist positive $\delta^*$, $\ell^*$ and $B^{*}$ such that if $p$ is a point in $\gamma$ with $r=r(p)\geq r^*$ satisfying, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\alph*)}, widest=a, align=left] \item\label{PORE1} $\dist_{GH}\big(\big(\mathcal{A}^{c}_{r}(p;1/2,2); d_{r}\big),\big([1/2,2];|\ldots|\big)\big)\leq \epsilon^*$, \item\label{PORE2} $|\rho_{r}(p)-\rho^*|\leq \mu^{*}$, \item\label{PORE3} $|\theta_{r}(p)-1|\leq \delta^*$, \item\label{PORE4} $\diam_{\hg^{\mathbb{K}}}(\phi(T_p))\leq \ell^*$, \end{enumerate} (where $\mathcal{U}_{p}$, $(\mathcal{U}^{\mathbb{K}};\hg^{\mathbb{K}},U^{\mathbb{K}})$ and $\phi:\mathcal{U}_{p}\rightarrow \mathcal{U}^{\mathbb{K}}$ are respectively the neighbourhood, the Kasner data and the diffeomorphism from \ref{PORSI-I} and \ref{PORSI-II} of Proposition \ref{PORSI}), and $p'$ is a point in $\gamma$ with $r':=r(p')=a^* r$, then the following holds, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=a, align=left] \item\label{POREI} $\dist_{GH}\big(\big(\mathcal{A}^{c}_{a^* r}(p';1/2,2); d_{a^* r}\big),\big([1/2,2];|\ldots|\big)\big)\leq \epsilon^*/2$, \item\label{POREII} $\diam_{\hg_{a^{*}}^{\mathbb{K}}}(T_{p'})\leq \diam_{\hg^{\mathbb{K}}}(T_{p})/2$, \item\label{POREIII} $|\theta_{r'}(p')-1|\leq B^{*}\diam^{2}_{\hg^{\mathbb{K}}}(T_{p})+|\theta_{r}(p)-1|/2$, \item\label{POREIV} $|\rho_{r'}(p')-\rho_{r}(p)|\leq B^{*}\diam^{2}_{\hg^{\mathbb{K}}}(T_{p})+|\theta_{r}(p)-1|/2$. \end{enumerate} \end{Proposition} \begin{proof} Proceeding by contradiction, we assume that for each $\delta^{*}_{i}=1/i$, $\ell^{*}_{i}=1/i$ and $B^{*}_{i}=i$, there is $p_{i}\in \gamma$ with $r(p_{i})\geq r^{*}$ satisfying \ref{PORE1}-\ref{PORE4}, and there is $p'_{i}\in \gamma$ with $r'_{i}=r(p'_{i})=a^{*}r(p_{i})$ such that either \ref{POREI}, \ref{POREII}, \ref{POREIII} or \ref{POREIV} does not hold.
We prove now that for $i\geq i_{0}$ with $i_{0}$ large enough, all of \ref{POREI}, \ref{POREII}, \ref{POREIII} and \ref{POREIV} must indeed hold. \ref{POREI} As $\diam_{\hg^{\mathbb{K}_{i}}}(\phi(T_{p_{i}}))\rightarrow 0$, the Gromov-Hausdorff distance between $(\mathcal{U}_{p_{i}};\hg_{r_{i}})$ and $(T_{p_{i}}\times I_{i}; \hg^{\mathbb{K}_{i}})$ tends to zero as $i\rightarrow \infty$ and at the same time the spaces $(T_{p_{i}}\times I_{i}; \hg^{\mathbb{K}_{i}})$ collapse metrically to a segment of length $(2a^{*}-1/(2a^{*}))$. Hence so does $(\mathcal{U}_{p_{i}};\hg_{r_{i}})$. As $\mathcal{U}_{p_{i}}$ contains $\mathcal{A}^{c}_{r_{i}}(p_{i};1/(2a^{*}),2a^{*})$ and therefore $\mathcal{A}^{c}_{r'_{i}}(p'_{i};1/2,2)$, these last annuli metrically collapse to a segment of length $3/2$. Hence \ref{POREI} must hold for $i$ sufficiently large. \ref{POREII} Let $c_{i}$ be the Kasner parameter of the Kasner space $\mathbb{K}_{i}$. Then by \ref{PORE2}, for sufficiently large $i$ we have $c_{i}> \rho^{*}/4$, hence \ref{POREII} must hold by the definition of $a^{*}$ (see (\ref{ASDEF})). \ref{POREIII} We write \begin{equation}\label{LOLA} |\theta_{r'_{i}}(p'_{i})-1|\leq |\theta_{r'_{i}}(p'_{i})-\theta^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})|+|\theta^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})-1| \end{equation} where $\theta^{\mathbb{K}_{i}}_{a^{*}}(T_{p'_{i}})$ is the mean curvature of the slice $T_{p'_{i}}$ with respect to the Kasner metric $(1/a^{*})^{2} \hg^{\mathbb{K}_{i}}$, namely $\theta^{\mathbb{K}_{i}}_{a^{*}}(T_{p'_{i}})=a^{*}\theta^{\mathbb{K}_{i}}(T_{p'_{i}})$. Similarly, as $r'_{i}=a^{*}r_{i}$ we have $\theta_{r'_{i}}(p'_{i})=a^{*}\theta_{r_{i}}(p'_{i})$.
Therefore for the first term on the right hand side of (\ref{LOLA}) we can write \begin{equation}\label{COMB22} |\theta_{r'_{i}}(p'_{i})-\theta^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})| = a^{*}|\theta_{r_{i}}(p'_{i})-\theta^{\mathbb{K}_{i}}(p'_{i})|\leq C^{*}_{1}\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}) \end{equation} where the last inequality follows from \ref{PORSI-II} in Proposition \ref{PORSI} with $m^{*}\geq 2$, $j^{*}\geq 1$. Write the Kasner metric $\hg^{\mathbb{K}_{i}}$ as \begin{equation} \hg^{\mathbb{K}_{i}}=dx^{2}+x^{2a_{i}}d\varphi_{1}^{2}+x^{2b_{i}}d\varphi_{2}^{2}=(\lambda^{\mathbb{K}_{i}})^{2}dU^{2}+h^{\mathbb{K}_{i}} \end{equation} and let $x(p_{i})=x_{i}$ and $x(p'_{i})=x'_{i}$. Then, \begin{equation} \theta^{\mathbb{K}_{i}}(p_{i})=\frac{1}{x_{i}},\quad {\rm and}\quad \theta^{\mathbb{K}_{i}}(p'_{i})=\frac{1}{x'_{i}} \end{equation} and, \begin{equation}\label{TOSUB1} x_{i}'-x_{i}=\int \lambda^{\mathbb{K}_{i}}dU \end{equation} where the integral is along any integral line of $\nabla^{a} U$. On the other hand the $\hg_{r_{i}}$-length of the segment of $\gamma$ between $p_{i}$ and $p'_{i}$ is equal to $a^{*}-1$. This length is equal, up to an $O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))$ term, to the $\hg_{r_{i}}$-length of any integral line of $\nabla^{a} U$ between $T_{p_{i}}$ and $T_{p'_{i}}$. So, \begin{equation}\label{TOSUB2} a^{*}-1=\int \lambda_{r_{i}}dU+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}})) \end{equation} But by Proposition \ref{PORSI} we have $|\lambda_{r_{i}}-\lambda^{\mathbb{K}_{i}}|\leq O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))$.
Subtracting (\ref{TOSUB2}) from (\ref{TOSUB1}) we get \begin{equation}\label{EQFX} x_{i}'=x_{i}+(a^{*}-1)+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}})) \end{equation} Thus \begin{equation} \theta^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})=\frac{a^{*}}{x_{i}+a^{*}-1+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))} \end{equation} Then we calculate \begin{align} \label{COMB3} |\theta^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})-1| & =\bigg|\frac{x_{i}-1+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))}{x_{i}+a^{*}-1+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))}\bigg|\\ \label{COMB4} & \leq \frac{1}{2}\bigg|\frac{1}{x_{i}}-1\bigg| + C^{*}_{3}\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}) \end{align} where to obtain the bound we used that $x_{i}\rightarrow 1$ and that $a^{*}\geq 4$ (see the definition of $a^{*}$). But \begin{equation} \frac{1}{x_{i}}=\theta^{\mathbb{K}_{i}}(p_{i})=\theta^{\mathbb{K}_{i}}(p_{0i})=\theta_{r_{i}}(p_{0i}) \end{equation} where $p_{0i}$ is the point on $T_{p_{i}}$ that is used in the construction of $\hg^{\mathbb{K}_{i}}$ in \ref{PORSI-II}-C of Proposition \ref{PORSI}. But again by Proposition \ref{PORSI} we have, \begin{equation} |\theta_{r_{i}}(p_{i})-\theta_{r_{i}}(p_{0i})|\leq C_{4}^{*}\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}) \end{equation} and thus \begin{equation}\label{COMB5} \bigg|\frac{1}{x_{i}}-1\bigg|=|\theta_{r_{i}}(p_{0i})-1|\leq |\theta_{r_{i}}(p_{i})-1|+C_{4}^{*}\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}) \end{equation} Combining now (\ref{LOLA}), (\ref{COMB22}), (\ref{COMB3})-(\ref{COMB4}) and (\ref{COMB5}) we deduce that \ref{POREIII} also holds for $i$ sufficiently large. \ref{POREIV} This follows the same arguments as in \ref{POREIII}.
Write, \begin{align} \label{POREIVE1} |\rho_{r'_{i}}(p'_{i})-\rho_{r_{i}}(p_{i})| \leq &\ |\rho_{r'_{i}}(p'_{i})-\rho^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})|+|\rho^{\mathbb{K}_{i}}(p_{i})-\rho_{r_{i}}(p_{i})|\\ \label{POREIVE2} &+|\rho^{\mathbb{K}_{i}}_{a^{*}}(p'_{i})-\rho^{\mathbb{K}_{i}}(p_{i})| \end{align} The two terms on the right hand side of (\ref{POREIVE1}) are bounded by $O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}}))$ by Proposition \ref{PORSI} with $m^{*}\geq 2$. On the other hand, following the notation of \ref{POREIII}, write $U^{\mathbb{K}_{i}}=c_{i}\ln x$ with $c_{i}\rightarrow \rho^{*}$. Then the term in (\ref{POREIVE2}) is equal to \begin{equation} \bigg|a^{*}\frac{c_{i}}{x'_{i}}-\frac{c_{i}}{x_{i}}\bigg| \end{equation} and using (\ref{EQFX}) we can easily manipulate this expression to obtain the bound \begin{equation} |x_{i}-1|/2+O(\diam^{2}_{h^{\mathbb{K}_{i}}}(T_{p_{i}})) \end{equation} because $a^{*}\geq 4$ and $c_{i}\leq 1/2$. Finally use (\ref{COMB5}) to bound this expression once more and obtain \ref{POREIV}. \end{proof} \begin{comment} To prove a characterisation of KA it is more convenient to use an equivalent statement of KA, more adapted to what we have been proving until now on scaled annuli. The following proposition explains this.
\begin{Proposition} A data set $(\Sigma; \hg, U)$ is asymptotic to a Kasner space $(\Sigma_{\mathbb{K}};\hg^{\mathbb{K}},U^{\mathbb{K}})$ if for any given $m\geq 1$ and $j\geq 0$ there is $C>0$ and $k_{0}>0$ such that for any $k\geq k_{0}$ there is a diffeomorphism, \begin{equation} \phi^{k}:\mathcal{U}_{k}\rightarrow \mathcal{U}^{\mathbb{K}}_{k} \end{equation} from a neighbourhood $\mathcal{U}_{k}$ of the annulus $\mathcal{A}_{2^{k}}(1/2,2)$ in $\Sigma$ into a neighbourhood $\mathcal{U}^{\mathbb{K}}_{k}$ of the annulus $\mathcal{A}^{\mathbb{K}}_{2^{k}}(1/2,2)$ in $\Sigma^{\mathbb{K}}$, preserving the level sets of $U$ and $U^{\mathbb{K}}$, such that, \begin{equation} \|\phi_{*}^{k}\hg - \hg^{\mathbb{K}}\|_{C^{j}_{\hg^{\mathbb{K}}}}\leq \frac{C}{2^{km}},\quad \|\phi^{k}_{*}U-U^{\mathbb{K}}\|_{C^{j}_{\hg^{\mathbb{K}}}}\leq \frac{C}{2^{km}} \end{equation} \end{Proposition} The main difference with respect to Definition \ref{KADEF} is that the $\phi^{k}$ are defined on annuli and not globally. However, a global $\phi$ can easily be constructed by interpolating $\phi^{k}$ and $\phi^{k+1}$ on the transition regions. We omit the details. \end{comment} \begin{Theorem}\label{KASYMPTOTIC} {\rm (A characterisation of KA $\neq A, C$)} Let $(\Sigma;\hg,U)$ be a static end. Let $\gamma$ be a ray and suppose that there is a sequence $p_{i}\in \gamma$ such that $\rho_{r_{i}}(p_{i})\rightarrow \rho^{*}$, with $0<\rho^{*}\leq 1/2$, and that $(\mathcal{A}^{c}_{r_{i}}(p_{i};1/2,2); \hg_{r_{i}})$ metrically collapses to a segment $([1/2,2];|\ldots|)$. Then the end is asymptotically Kasner, different from $A$ and $C$. \end{Theorem} \begin{proof} For the $\rho^{*}$ given in the hypothesis and for any integers $j^{*}\geq 1$ and $m^{*}\geq 2$ let $\epsilon^{*}$, $\mu^{*}$, $r^{*}$ and $C^{*}$ be as in Proposition \ref{PORSI}, and let $\delta^{*}$, $\ell^{*}$ and $B^{*}$ be as in Proposition \ref{PORE}. 
We begin by proving that there are $\mu^{**}\leq \mu^{*}$, $\delta^{**}\leq \delta^{*}$ and $\ell^{**}\leq \ell^{*}$ such that if, for $i$ big enough, the point $p^{0}:=p_{i}$ satisfies, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\alph*')}, widest=a, align=left] \item\label{PORE1b} $\dist_{GH}\big(\big(\mathcal{A}^{c}_{r^{0}}(p^{0};1/2,2); d_{r^{0}}\big),\big([1/2,2];|\ldots|\big)\big)\leq \epsilon^*$, \item\label{PORE2b} $|\rho_{r^{0}}(p^{0})-\rho^*|\leq \mu^{**}$, \item\label{PORE3b} $|\theta_{r^{0}}(p^{0})-1|\leq \delta^{**}$, \item\label{PORE4b} $\diam_{h_{\mathbb{K}^{0}}}(\phi(T_{p^{0}}))\leq \ell^{**}$, \end{enumerate} then for all $p^{n}\in \gamma$ such that $\mathfrak{r}_{n}:=r(p^{n})=(a^{*})^{n}r(p^{0})$ we have \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\alph*)}, widest=a, align=left] \item\label{PORE1b2} $\dist_{GH}\big(\big(\mathcal{A}^{c}_{\mathfrak{r}_{n}}(p^{n};1/2,2); d_{\mathfrak{r}_{n}}\big),\big([1/2,2];|\ldots|\big)\big)\leq \epsilon^*$, \item\label{PORE2b2} $|\rho_{\mathfrak{r}_{n}}(p^{n})-\rho^*|\leq \mu^{*}$, \item\label{PORE3b2} $|\theta_{\mathfrak{r}_{n}}(p^{n})-1|\leq \delta^{*}$, \item\label{PORE4b2} $\diam_{h_{\mathbb{K}^{n}}}(\phi(T_{p^{n}}))\leq \ell^{*}2^{-n}$. \end{enumerate} To choose $\mu^{**}$, $\delta^{**}$ and $\ell^{**}$ we make the following observation. Suppose that for some $\mu^{**}\leq \mu^{*}$, $\delta^{**}\leq \delta^{*}$ and $\ell^{**}\leq \ell^{*}$, \ref{PORE1},\ref{PORE2},\ref{PORE3} and \ref{PORE4} hold for $p^{n}$ for $n=0,1,2,\ldots,m$ with $m\geq 1$. 
Then, after using the conclusions \ref{POREI},\ref{POREII} and \ref{POREIII} in Proposition \ref{PORE} $m$-times (each time using Proposition \ref{PORE} with $p=p^{n}$, $p'=p^{n+1}$) one obtains without difficulty the bounds, \begin{align} \label{RHSI} & \diam_{h_{\mathbb{K}^{m}}}(\phi(T_{p^{m}}))\leq \frac{\ell^{**}}{2^{m-1}},\\ \label{RHSII} & |\theta_{r^{m}}(p^{m})-1|\leq \frac{mB^{*}\ell^{**}}{2^{m-1}}+\frac{\delta^{**}}{2^{m}},\\ \label{RHSIII} & |\rho_{\mathfrak{r}_{m}}(p^{m})-\rho_{r^{0}}(p^{0})|\leq \sum_{n=1}^{n=m}\bigg(\frac{B^{*}(\ell^{**})^{2}}{2^{2(n-1)}}+\frac{nB^{*}\ell^{**}}{2^{n}}+\frac{\delta^{**}}{2^{n+1}}\bigg) \end{align} With this information at hand, choose $\mu^{**}=\mu^{*}/4$, and $\delta^{**}\leq \delta^{*}$ and $\ell^{**}\leq \ell^{*}$ such that the right hand side of (\ref{RHSII}) is less than or equal to $\delta^{*}/2$ for all $m\geq 1$ and such that, when in (\ref{RHSIII}) we consider $m=\infty$ (i.e. the infinite sum), the sum is less than or equal to $\mu^{*}/4$. With this choice it is then straightforward to check that \ref{PORE1},\ref{PORE2},\ref{PORE3} and \ref{PORE4} in this theorem indeed hold for all $p^{n}$, $n=0,1,2,3,\ldots$ 
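For concreteness, the infinite sum in (\ref{RHSIII}) can be evaluated explicitly by means of the elementary identities $\sum_{n=1}^{\infty}2^{-2(n-1)}=4/3$, $\sum_{n=1}^{\infty}n2^{-n}=2$ and $\sum_{n=1}^{\infty}2^{-(n+1)}=1/2$, namely, \begin{equation*} \sum_{n=1}^{\infty}\bigg(\frac{B^{*}(\ell^{**})^{2}}{2^{2(n-1)}}+\frac{nB^{*}\ell^{**}}{2^{n}}+\frac{\delta^{**}}{2^{n+1}}\bigg)=\frac{4}{3}B^{*}(\ell^{**})^{2}+2B^{*}\ell^{**}+\frac{\delta^{**}}{2} \end{equation*} so it suffices to choose $\delta^{**}$ and $\ell^{**}$ small enough that this quantity is less than or equal to $\mu^{*}/4$. 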
Having now \ref{PORE1b2} and \ref{PORE2b2} for all $p^{n}$, we can use Proposition \ref{PORSI} to conclude that, for each $n$, \begin{enumerate} \item there are neighbourhoods $\mathcal{U}_{n}$, each covering $\mathcal{A}_{r_{n}}(p^{n};1/(2a^{*}),2a^{*})$ and their union covering the end of $\Sigma$, and, \item there are Kasner spaces $(\mathcal{U}^{\mathbb{K}_{n}};\hg^{\mathbb{K}_{n}},U^{\mathbb{K}_{n}})$, ${\rm T}^{2}$-quotients of, \begin{equation}\label{KCOSAS} I_{n}\times \mathbb{R}^{2},\quad \tilde{\hg}^{\mathbb{K}_{n}}=dx^{2}+x^{2a_{n}}dy^{2}+x^{2b_{n}}dz^{2},\quad \tilde{U}^{\mathbb{K}_{n}}=d_{n}+c_{n}\ln x \end{equation} and, \item there are diffeomorphisms $\phi_{n}:\mathcal{U}_{n}\rightarrow \mathcal{U}^{\mathbb{K}_{n}}=I_{n}\times T^{2}_{n}$ ($T^{2}_{n}$ is the quotient of $\mathbb{R}^{2}$) such that, \begin{align} & \label{IINI} \phi_{n*}U=U^{\mathbb{K}_{n}},\\ & \label{IINII} \|\phi_{n*}\hg_{\mathfrak{r}_{n}}-\hg^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hg^{\mathbb{K}_{n}}}(\mathcal{U}^{\mathbb{K}_{n}})}\leq C^{*}\diam^{m^{*}}_{h_{\mathbb{K}_{n}}}(\phi_{n}(T_{p^{n}})) \end{align} \end{enumerate} What we have so far is close to Definition \ref{KADEF} of Kasner asymptotic, except that we still need one single Kasner space and one global map $\phi$. Its construction is what we do next. \vspace{0.2cm} We will work with the `un-scaled' metrics defined by, \begin{equation}\label{USCAL} \hat{\hg}^{\mathbb{K}_{n}}:=(\mathfrak{r}_{n})^{2}\hg^{\mathbb{K}_{n}}=\hg^{\mathbb{K}_{n}}_{1/\mathfrak{r}_{n}}, \end{equation} and leave $U^{\mathbb{K}_{n}}$ unchanged. Thus, we will work with the data sets, $(\mathcal{U}^{\mathbb{K}_{n}}; \hat{\hg}^{\mathbb{K}_{n}},U^{\mathbb{K}_{n}})$. 
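Let us note in passing why the un-scaled metrics (\ref{USCAL}) are again Kasner metrics with the same exponents. In the coordinates of (\ref{KCOSAS}) (writing the third coordinate as $z$), setting $\hat{x}=\mathfrak{r}_{n}x$, $\hat{y}=\mathfrak{r}_{n}^{1-a_{n}}y$ and $\hat{z}=\mathfrak{r}_{n}^{1-b_{n}}z$ one checks directly that \begin{equation*} \mathfrak{r}_{n}^{2}\big(dx^{2}+x^{2a_{n}}dy^{2}+x^{2b_{n}}dz^{2}\big)=d\hat{x}^{2}+\hat{x}^{2a_{n}}d\hat{y}^{2}+\hat{x}^{2b_{n}}d\hat{z}^{2} \end{equation*} that is, the rescaling only changes the interval $I_{n}$ and the lattice defining the ${\rm T}^{2}$-quotient. 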
From the construction of the trivialisations $\phi$ in step \ref{PORSI-II}-A of Proposition \ref{PORSI}, we obtain that the transition functions, \begin{equation} \phi_{n-1}\circ \phi_{n}^{-1}:\phi_{n}(\mathcal{U}_{n-1}\cap \mathcal{U}_{n})(\subset \mathcal{U}^{\mathbb{K}_{n}})\rightarrow \phi_{n-1}(\mathcal{U}_{n-1}\cap \mathcal{U}_{n})(\subset \mathcal{U}^{\mathbb{K}_{n-1}}) \end{equation} are defined by just one map $\psi_{n-1,n}:T^{2}_{n}\rightarrow T^{2}_{n-1}$, namely, there is $\psi_{n-1,n}$ such that if \begin{equation} \phi_{n-1}\circ \phi_{n}^{-1}((x_{n},t_{n}))=(x_{n-1},t_{n-1}) \end{equation} where $x_{n-1}\in I_{n-1}$, $t_{n-1}\in T^{2}_{n-1}$, $x_{n}\in I_{n}$ and $t_{n}\in T^{2}_{n}$, then \begin{equation} t_{n-1}=\psi_{n-1,n}(t_{n}). \end{equation} We can use this fact to extend the Kasner data $(\mathcal{U}^{\mathbb{K}_{n}}; \hat{\hg}^{\mathbb{K}_{n}},U^{\mathbb{K}_{n}})$ to a Kasner data on $\mathcal{U}^{\mathbb{K}_{n}}\cup_{\#} \mathcal{U}^{\mathbb{K}_{n-1}}$, where $\#$ means we use the identification $\phi_{n-1}\circ \phi_{n}^{-1}$. The extension is performed as follows. Instead of the coordinate $x$ we use $U$. So, on $\mathcal{U}^{\mathbb{K}_{n-1}}$, $U$ ranges between $U_{1}^{n-1}$ and $U_{2}^{n-1}$, and on $\mathcal{U}^{\mathbb{K}_{n}}$, $U$ ranges between $U_{1}^{n}$ and $U_{2}^{n}$. Now, extend $\hat{\hg}^{\mathbb{K}_{n}}$ given by (\ref{USCAL}) from $[U^{n}_{1},U^{n}_{2}]\times T^{2}_{n}$ to $[U^{n-1}_{1},U^{n}_{2}]\times T^{2}_{n}$ in the obvious way (by using $U$ instead of $x$ in (\ref{KCOSAS})), and then identify $[U^{n-1}_{1},U^{n-1}_{2}]\times T^{2}_{n}$ with $[U^{n-1}_{1},U^{n-1}_{2}]\times T^{2}_{n-1}$ by $(U,t_{n})\rightarrow (U,\psi_{n-1,n}(t_{n}))$. In this way we can uniquely extend $\hat{\hg}^{\mathbb{K}_{n}}$ from $\mathcal{U}^{\mathbb{K}_{n}}$ to $\mathcal{U}^{\mathbb{K}_{n-1}}$, then to $\mathcal{U}^{\mathbb{K}_{n-2}}$ and so on until, say, $\mathcal{U}^{\mathbb{K}_{n_{0}}}$. 
Thus we have a Kasner data, \begin{equation} (\mathcal{U}^{\mathbb{K}_{n_{0}}}\cup_{\#}\ldots\cup_{\#} \mathcal{U}^{\mathbb{K}_{n}}; \hat{\hg}^{\mathbb{K}_{n}},U) \end{equation} and we can use the map, \begin{equation} \phi:\mathcal{U}_{n_{0}}\cup\ldots\cup \mathcal{U}_{n}\rightarrow \mathcal{U}^{\mathbb{K}_{n_{0}}}\cup_{\#}\ldots\cup_{\#} \mathcal{U}^{\mathbb{K}_{n}} \end{equation} defined by, \begin{equation} \phi(p)=\phi_{j}(p),\quad {\rm if\ } p\in \mathcal{U}_{j} \end{equation} to translate it back to a Kasner data on $\mathcal{U}_{n_{0}}\cup\ldots \cup\mathcal{U}_{n}$. It is important to keep in mind in what follows that, for each $n$, the metrics $\phi^{*}\hat{\hg}^{\mathbb{K}_{n}}$ are indeed defined on $\mathcal{U}_{n_{0}}\cup\ldots \cup\mathcal{U}_{n}$. The point now is that, as was mentioned earlier, one can take a convergent subsequence of the metrics $\hat{\hg}^{\mathbb{K}_{n}}$, and that will define the Kasner metric we were looking for. We now explain the calculations justifying the convergence. 
We observe the following inequalities for any $n_{0}<i<n$, \begin{equation}\label{OBSI1} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})} \leq \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{i+1}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})} +\|\hat{\hg}^{\mathbb{K}_{i+1}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})} \end{equation} \begin{equation}\label{OBSI2} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{i+1}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})}\leq c_{0}\|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{i+1}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}}\cap_{\#} \mathcal{U}^{\mathbb{K}_{i+1}})} \end{equation} \begin{equation}\label{OBSI3} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{i+1}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}}\cap_{\#} \mathcal{U}^{\mathbb{K}_{i+1}})}\leq \frac{c_{1}}{2^{im^{*}}} \end{equation} \begin{equation}\label{OBSI4} \|\hat{\hg}^{\mathbb{K}_{i+1}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})}\leq c_{2}\|\hat{\hg}^{\mathbb{K}_{i+1}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i+1}}}(\mathcal{U}^{\mathbb{K}_{i+1}})} \end{equation} Briefly: the inequality (\ref{OBSI1}) is the triangle inequality. The inequality (\ref{OBSI3}) follows by first adding and subtracting $\phi_{i*}\hg$ inside the norm, then using the triangle inequality and finally using (\ref{IINII}), after noting that scaling the metric by $(\mathfrak{r}_{n})^{2}$ decreases the norm (observe that (\ref{IINII}) involves scaled metrics). 
The inequalities (\ref{OBSI2}) and (\ref{OBSI4}) follow by noting that, because Kasner metrics are determined by an ODE, the norms on the right or the left hand sides are controlled by the $C^{j}$-norms at just one level set of $U$, i.e. just one torus. The constants $c_{0}$ and $c_{2}$ are independent of $n$ and $m^{*}$, and $c_{1}$ is independent of $n$ but may depend on $j^{*}$ and $m^{*}$. Putting all this together we have the following recursive inequality, \begin{equation} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})} \leq \frac{c_{4}}{2^{im^{*}}} +c_{2}\|\hat{\hg}^{\mathbb{K}_{i+1}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i+1}}}(\mathcal{U}^{\mathbb{K}_{i+1}})} \end{equation} from which we deduce, \begin{align} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{n}}\|_{C^{j^{*}}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})} & \leq \frac{c_{4}}{2^{im^{*}}}+\frac{c_{4}c_{2}}{2^{(i+1)m^{*}}}+\frac{c_{4}c_{2}^{2}}{2^{(i+2)m^{*}}}+\ldots+\frac{c_{4}c_{2}^{n-i}}{2^{nm^{*}}}\\ & = \frac{c_{4}}{2^{im^{*}}}\sum_{l=0}^{l=n-i}\bigg(\frac{c_{2}}{2^{m^{*}}}\bigg)^{l} \end{align} Using the fact that $c_{2}$ does not depend on $m^{*}$, we take $m^{*}$ such that $2^{m^{*}}>c_{2}$, thus making the series $\sum_{l=0}^{l=\infty}\big(\frac{c_{2}}{2^{m^{*}}}\big)^{l}$ convergent. The work is essentially done. Using this bound we let $n\rightarrow \infty$ and take a subsequence of the metrics $\hat{\hg}^{\mathbb{K}_{n}}$ convergent in $C^{j^{*}-1}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})$ for every $i>n_{0}$. Say the limit is $\hat{\hg}^{\mathbb{K}_{\infty}}$. Then we have the bounds, \begin{equation} \|\hat{\hg}^{\mathbb{K}_{i}}-\hat{\hg}^{\mathbb{K}_{\infty}}\|_{C^{j^{*}-1}_{\hat{\hg}^{\mathbb{K}_{i}}}(\mathcal{U}^{\mathbb{K}_{i}})}\leq \frac{c_{5}}{2^{im^{*}}} \end{equation} for every $i>n_{0}$. 
Writing $2=(a^{*})^{\ln 2/\ln a^{*}}$ we get $2^{im^{*}}=(a^{*})^{i(m^{*}\ln 2/\ln a^{*})}$, and recalling that $\mathfrak{r}_{i}=r(p^{i})=(a^{*})^{i}r(p^{0})$ we get without difficulty, \begin{equation} \|\hg-\phi^{*}\hat{\hg}^{\mathbb{K}_{\infty}}\|_{C^{j^{*}}_{\hg}}(p)\leq \frac{c_{m^{*},j^{*}}}{(\dist_{\hg}(p,\partial \Sigma))^{m^{*}(\ln 2/\ln a^{*})}} \end{equation} Playing with the freedom in $j^{*}$ and $m^{*}$ and passing back to the variables $(g,N)$, KA is obtained as desired. \end{proof} \subsubsection{The asymptotic of free $\Sa$-symmetric data sets}\label{FTKASS} Free $\Sa$-symmetric ends have a well-defined limit of $U$ at infinity that we denoted by $U_{\infty}$ (Proposition \ref{LPRO}). In this section we study free $\Sa$-symmetric ends with the property that, \begin{equation}\label{MAXU} U(p)\leq U_{\infty} \end{equation} for all $p$. We aim to prove the following theorem. \begin{Theorem}\label{SSKAA} Let $(\Sigma;\hg,U)$ be a free $\Sa$-symmetric static end such that $U(p)\leq U_{\infty}$ for all $p\in \Sigma$. Then, either the data set is flat and $U$ is constant, or it is asymptotic to a Kasner different from $A$ and $C$. \end{Theorem} Suppose $(\Sigma;\hg,U)$ is a data set as in the last theorem. If $U(p)=U_{\infty}$ at some $p\in \Sigma^{\circ}$ then $U$ is constant by the maximum principle and the data set is flat. Due to this, from now on we are concerned with the case when $U<U_{\infty}$. A large part of the proof of Theorem \ref{SSKAA} is in fact quite general and is also valid for a class of data sets that will show up again crucially in the next section. These are the $\star$-static ends that we define below (the `$\star$' is just a notation). The level sets of $U$ will be denoted as follows, \begin{equation} U^{-1}_{*}=\{p\in \Sigma: U(p)=U_{*}\} \end{equation} For instance $U_{1}^{-1}=\{p\in \Sigma:U(p)=U_{1}\}$ and so forth. 
As for the critical and regular values of $U$, it follows from Theorem 1 in \cite{MR0308345} that the set of critical values of $U$ is discrete. We will use this information below. Besides this, the critical set $\{|\nabla U|=0\}$ is well understood, but this won't be necessary here (see \cite{2015arXiv150404563A}). \begin{Definition}[$\star$-static end] \label{DEFSSS} Let $(\Sigma;\hg,U)$ be a (not necessarily free $\Sa$-symmetric) static end. Then, we say that $(\Sigma;\hg,U)$ is a $\star$-static end iff \begin{enumerate} \item\label{URVA1} the limit of $U$ at infinity exists (denote it by $U_{\infty}\leq \infty$), \item\label{URVA2} $U<U_{\infty}$ everywhere, \item\label{URVA3} there is a regular value $U_{0}$ of $U$, with $U_{0}> \sup \{U(p):p\in \partial \Sigma\}$, such that for any regular value $U_{1}\geq U_{0}$, $U^{-1}_{1}$ is a compact and connected surface of genus greater than zero. \end{enumerate} \end{Definition} Note that condition \ref{URVA2} implies that $\star$-ends are non-flat. It is also easy to see that any two regular values $U_{2}>U_{1}$, greater than or equal to $U_{0}$, enclose a compact region $\Omega_{12}$, that is, $\partial \Omega_{12}=U_{1}^{-1}\cup U_{2}^{-1}$. The proof of Theorem \ref{SSKAA} follows from the next three propositions. \begin{Proposition}\label{BATAT1} Let $(\Sigma;\hg,U)$ be a free $\Sa$-symmetric static end such that $U(p)< U_{\infty}$ for all $p$. Then $(\Sigma;\hg,U)$ is a $\star$-static end and has a simple cut $\{\mathcal{S}_{j}\}$. \end{Proposition} \begin{Proposition}\label{BATAT2} Let $(\Sigma;\hg,U)$ be a static free $\Sa$-symmetric end such that $U(p)< U_{\infty}$ for all $p$. Then the end is asymptotic to a Kasner different from $A$ and $C$, or has sub-quadratic curvature decay. \end{Proposition} \begin{Proposition}\label{CORONA} Let $(\Sigma; \hg,U)$ be a $\star$-static end and let $\gamma$ be a ray. Suppose that the data set has a simple cut $\{\mathcal{S}_{i}\}$. 
Then the curvature does not decay sub-quadratically along $\gamma\cup (\cup_{j}\mathcal{S}_{j})$. \end{Proposition} \begin{proof}[Proof of Theorem \ref{SSKAA}] Direct from Propositions \ref{BATAT1}, \ref{BATAT2} and \ref{CORONA}. \end{proof} Propositions \ref{BATAT1} and \ref{BATAT2} concern only free $\Sa$-symmetric ends and are simple to prove. \begin{proof}[Proof of Proposition \ref{BATAT1}] We need to show only \ref{URVA3} of Definition \ref{DEFSSS}; items \ref{URVA1} and \ref{URVA2} hold by Proposition \ref{LPRO} and by hypothesis, respectively. Without loss of generality we can assume that the quotient manifold $S$ is diffeomorphic to $\Sa\times [0,\infty)$ (Propositions \ref{SUPO}, \ref{SUPO3}). We work on $(S;q,U,V)$; in particular we think of $U$ as a function from $S$ into $\mathbb{R}$. Clearly there is a regular value $U_{0}$ such that for any regular value $U_{1}\geq U_{0}$, $U_{1}^{-1}$ is compact, that is, a collection of circles. No such circle can be contractible, otherwise we would violate the maximum principle. But if there are two such circles, then they enclose a compact manifold (a finite cylinder), hence the maximum principle would also be violated. Therefore $U_{1}^{-1}$ is diffeomorphic to $\Sa$. Now, thinking of $U$ as a function from $\Sigma$ to $\mathbb{R}$, we have that $U_{1}^{-1}$ is diffeomorphic to a torus, hence of genus greater than zero. The existence of a simple cut $\{\mathcal{S}_{i}\}$ was shown in Proposition \ref{SIMPLECUTUS}. \end{proof} \begin{proof}[Proof of Proposition \ref{BATAT2}] We work on $(S;q,U,V)$. Let $\mu:=\lim A(B(\partial S,r))/r^{2}$. If $\mu>0$ then $(S;q)$ is asymptotic to a two-dimensional cone. Hence $\kappa$ decays sub-quadratically and therefore so does $|\nabla U|^{2}$ by (\ref{KAPPAF}). Suppose now that $\mu=0$ and let $\gamma$ be a ray from $\partial S$. Then any sequence of annuli $(\mathcal{A}^{c}_{r_{i}}(p_{i};1/2,2);q_{r_{i}})$, with $p_{i}\in \gamma$, metrically collapses to the segment $[1/2,2]$. 
For this reason, if $|\nabla U|^{2}$ decays sub-quadratically along any sequence $p_{i}\in \gamma$ then indeed $|\nabla U|^{2}$ decays sub-quadratically along the end. On the other hand, if for a certain sequence $p_{i}$ we have $|\nabla U|_{r_{i}}^{2}(p_{i})\geq \rho_{*}>0$ ($\rho_{*}$ a given constant), then the end $(\Sigma;\hg,U)$ is indeed asymptotic to a Kasner different from $A$ and $C$ by Theorem \ref{KASYMPTOTIC}. (There is a caveat here. Theorem \ref{KASYMPTOTIC} requires that, for $i$ large enough, the annulus $(\mathcal{A}_{r_{i}}(p_{i};1/2,2);\hg_{r_{i}})$ (annulus in $\Sigma$) be metrically close to the segment $[1/2,2]$. For $i$ large enough the annulus $(\mathcal{A}_{r_{i}}(p_{i};1/2,2);q_{r_{i}})$ (annulus in $S$) is close to the segment $[1/2,2]$; then, if necessary, just make a scaling as in (\ref{SECSCA}), with $\lambda_{i}=1$, $\mu_{i}=0$ and with $\nu_{i}$ small enough that the annulus $(\mathcal{A}_{r_{i}}(p_{i};1/2,2);\hg_{r_{i}})$ is also close to $[1/2,2]$. Note that such a scaling only changes the $\hg$-length of the $\Sa$-fibers in $\Sigma$ and so doesn't affect the norm $|\nabla U|^{2}$.) \end{proof} The proof of Proposition \ref{CORONA} will be carried out through several steps (Propositions \ref{ANTES}, \ref{GPI} and \ref{P70}, Corollary \ref{P75}, and Proposition \ref{FCOK}). \begin{Proposition}\label{ANTES} Let $(\Sigma; \hg, U)$ be a $\star$-static end. Let $U_{0}$ be a regular value as in Definition \ref{DEFSSS} and consider another regular value $U_{1}\geq U_{0}$. Then, the set of points in $U_{0}^{-1}$ reaching $U_{1}^{-1}$ in time $U_{1}-U_{0}$ under the flow of $\partial_{U}=\nabla^{i}U/|\nabla U|^{2}$ is a set of total measure on $U_{0}^{-1}$, and its image under the flow is a set of total measure in $U_{1}^{-1}$. \end{Proposition} \begin{proof} Denote by $\Omega_{01}$ the manifold enclosed by $U_{0}^{-1}$ and $U_{1}^{-1}$. Let $\mathcal{C}=\{p:\nabla U(p)=0\}\cap \Omega_{01}$ be the set of critical points in $\Omega_{01}^{\circ}$. 
The points of the closed set $C$ (note the font) in $U_{0}^{-1}$ that do not reach $U_{1}^{-1}$ in time $U_{1}-U_{0}$ under the flow of $\partial_{U}=\nabla^{i} U/|\nabla U|^{2}$ end, in a smaller time, at a point in $\mathcal{C}$. Let $\phi:C\times [0,\infty)\rightarrow \Omega_{01}$, $(x,t)\mapsto \phi(x,t)$, be the map generated by the flow of the vector field $\nabla^{i} U$ (not the collinear field $\partial_{U}$), that is, the map that takes a point $x$ in $C$ and moves it a time $t$ by the flow of $\nabla^{i} U$ (note that indeed if $x\in C$, then the orbit under the flow of $\nabla^{i} U$ remains in $\Omega_{01}$ and is defined for all time). Suppose that the area of $C$ is positive. Then the set \begin{equation} C_{1}=\{\phi(x,t):x\in C,0\leq t\leq 1\} \end{equation} has positive volume $V(C_{1})$. But as $U$ is harmonic the flow of $\nabla^{i} U$ preserves volume, and so we have $V(\phi(C_{1},t))=V(\phi(C_{1},0))$ for all $t\geq 0$. Let $\epsilon>0$ be small enough that \begin{equation} V(B(\mathcal{C},\epsilon)\setminus \mathcal{C})<V(C_{1})/2 \end{equation} where $B(\mathcal{C},\epsilon)$ is the set of points at a distance less than $\epsilon$ from $\mathcal{C}$. Then a contradiction is reached by choosing $t$ large enough that $\phi(C_{1},t)\subset B(\mathcal{C},\epsilon)\setminus \mathcal{C}$, because then we would have \begin{equation} V(C_{1})=V(\phi(C_{1},t))\leq V(B(\mathcal{C},\epsilon)\setminus \mathcal{C})<V(C_{1})/2 \end{equation} To show that the image of $U^{-1}_{0}\setminus C$ under the flow of $\partial_{U}$ is a set of total measure in $U^{-1}_{1}$, just reverse the argument using the flow of $-\partial_{U}$ from $U^{-1}_{1}$ to $U^{-1}_{0}$. \end{proof} The following function of the level sets of $U$ ($U\geq U_{0}$) will be central in the analysis later, \begin{equation}\label{GMONOTONIC} G(U):=\int_{U^{-1}}|\nabla U|^{2}dA \end{equation} The function $G(U)$ is well defined at least for regular values of $U$. 
It is also well defined at the critical values, but this won't be needed. As mentioned before Definition \ref{DEFSSS}, critical values of $U$ are discrete and, as we will show next, the lateral limits of $G(U)$ at any critical value $U_{c}$ coincide (and are finite). Let us see this property. Let $U_{2}>U_{1}$ be any two regular values with $U_{2}>U_{c}>U_{1}\geq U_{0}$ and let $\Omega_{12}$ be the region enclosed by them. As in Proposition \ref{ANTES} let $C$ be the closed set of points in $U_{1}^{-1}$ that do not reach $U_{2}^{-1}$ in time $U_{2}-U_{1}$ under the flow of $\partial_{U}$. For any $\epsilon>0$ small enough let $R(\epsilon)$ be an open region in $U_{1}^{-1}$, with smooth boundary, containing $C$, and inside the ball $B(C,\epsilon)$. Let $C_{1}(\epsilon)=U_{1}^{-1}\setminus R(\epsilon)$. Let $\Omega_{12}(\epsilon)$ be the union of the set of integral curves (inside $\Omega_{12}$) of $\partial_{U}$ starting from points in $C_{1}(\epsilon)$ and ending in $U_{2}^{-1}$, and let $C_{2}(\epsilon)$ be the union of the end-points in $U_{2}^{-1}$ of these integral curves. Then the divergence theorem gives \begin{equation} \int_{C_{2}(\epsilon)}|\nabla U|^{2}dA-\int_{C_{1}(\epsilon)}|\nabla U|^{2}dA=\int_{\Omega_{12}(\epsilon)}\langle\nabla\nabla U,\frac{\nabla U}{|\nabla U|} \nabla U\rangle dV \end{equation} Take the limit $\epsilon\rightarrow 0$ and use Proposition \ref{ANTES} to deduce, \begin{equation} G(U_{2})-G(U_{1})=\int_{\Omega'_{12}}\langle\nabla\nabla U,\frac{\nabla U}{|\nabla U|} \nabla U\rangle dV \end{equation} where $\Omega'_{12}$ is the union of the set of integral curves of $\partial_{U}$ starting from points in $U_{1}^{-1}\setminus C$ and ending in $U_{2}^{-1}$, and is equal to $\Omega_{12}$ minus a set of measure zero. Observe that the integrand is bounded. 
Finally, take the limit $U_{1}\uparrow U_{c}$ and $U_{2}\downarrow U_{c}$ and note that the volume of $\Omega_{12}$ tends to zero (this is easy to see) to get \begin{equation} \lim_{U\uparrow U_{c}}G(U)=\lim_{U\downarrow U_{c}}G(U) \end{equation} as claimed. The function $G(U)$ will be thought of as defined for all $U\geq U_{0}$, continuous everywhere and differentiable except perhaps on a discrete set (the critical values of $U$). The continuity will be used implicitly several times in what follows. \begin{Proposition}\label{GPI} Let $(\Sigma; \hg, U)$ be a $\star$-static end. Let $U_{0}$ be a regular value as in Definition \ref{DEFSSS}. Then for any two regular values $U_{2}>U_{1}\geq U_{0}$ we have, \begin{equation} G'(U_{2})\geq G'(U_{1}) \end{equation} where $G'=dG/dU$. \end{Proposition} \begin{proof} Let $U_{*}$ be a regular value. Identify nearby level sets $U^{-1}$ with $U^{-1}_{*}$ through the flow of $\partial_{U}:=\nabla^{i} U/|\nabla U|^{2}=n/|\nabla U|$ where $n$ is the unit normal to $U^{-1}$. As $U$ is harmonic, the form $|\nabla U|dA$ is preserved. Abusing notation we write $|\nabla U|dA=|\nabla U_{*}|dA_{*}$. Thus, \begin{equation} G(U)=\int_{U^{-1}} |\nabla U||\nabla U_{*}|dA_{*} \end{equation} Therefore \begin{equation} G'(U)=\int_{U^{-1}} (\nabla_{n}|\nabla U|)\frac{|\nabla U_{*}|}{|\nabla U|}dA_{*}=\int_{U^{-1}} \nabla_{n}|\nabla U| dA \end{equation} Let $\Omega_{12}$ be the region enclosed by $U^{-1}_{1}$ and $U^{-1}_{2}$. Now let $\epsilon^{2}>0$ be a regular value of $|\nabla U|^{2}$ smaller than the minimum of $|\nabla U|^{2}$ over $U^{-1}_{1}$ and $U^{-1}_{2}$. Let $E=\{p\in \Omega_{12}:|\nabla U|(p)\leq \epsilon\}$. 
The divergence theorem gives us \begin{equation} \int_{U^{-1}_{2}}\nabla_{n}|\nabla U|dA=\int_{U^{-1}_{1}}\nabla_{n}|\nabla U|dA+\int_{\Omega_{12}\setminus E^{\circ}}\Delta |\nabla U| dV+\int_{\partial E}\nabla_{n_{out}} |\nabla U|dA \end{equation} The last term on the right hand side is positive, and the second from last is non-negative because $\Delta |\nabla U|\geq 0$ (use Bochner or just see \cite{MR1809792} Lemma 3.5). The proposition follows. \end{proof} \begin{Proposition}\label{P70} Let $(\Sigma; \hg,U)$ be a $\star$-static end. Let $U_{0}$ be a regular value as in Definition \ref{DEFSSS}. Then, for any two regular values $U_{2}\geq U_{1}\geq U_{0}$, we have \begin{equation}\label{LNGP} \bigg(\frac{G'}{G}\bigg)(U_{2})\geq \bigg(\frac{G'}{G}\bigg)(U_{1}) \end{equation} where $G'=dG/dU$. \end{Proposition} \begin{proof} First, recall that the set of critical values of $U$ is discrete. We start proving that for any two regular values $U_{2}>U_{1}$ with no critical value in between, the inequality (\ref{LNGP}) holds. We write \begin{equation} \hg=\frac{1}{|\nabla U|^{2}}dU^{2}+h \end{equation} where $h$ is a two-metric over the leaves $U^{-1}$ between $U_{1}^{-1}$ and $U^{-1}_{2}$. Denote with a prime ($'$) the derivative with respect to $\partial_{U}=\nabla^{i} U/|\nabla U|^{2}$. We will use again the notation $\lambda:=1/|\nabla U|$. Let $\Theta$ and $\theta$ be the second fundamental form and mean curvature respectively of the leaves $U^{-1}$. Fix a leaf $U_{*}^{-1}$. Identify the leaves $U^{-1}$ to $U^{-1}_{*}$ through the flow of $\partial_{U}$. As $U$ is harmonic we have $|\nabla U|dA=|\nabla U_{*}|dA_{*}$. Hence \begin{equation} G=\int_{U^{-1}}|\nabla U|^{2}dA=\int_{U^{-1}}\frac{1}{\lambda}|\nabla U_{*}|dA_{*}. \end{equation} As $dA=\lambda |\nabla U_{*}|dA_{*}$ and $\theta =(\partial_{n} dA)/dA$ we deduce $\theta=-(1/\lambda)'$. 
Thus, \begin{equation} G'=-\int_{U^{-1}}\theta |\nabla U_{*}|dA_{*} \end{equation} \begin{equation} G''=-\int_{U^{-1}}\theta '|\nabla U_{*}|dA_{*}=-\int_{U^{-1}}\frac{\theta'}{\lambda}dA \end{equation} We use now that in dimension three $\theta'$ has the standard expression, \begin{equation} \theta'=-\Delta \lambda -(-2\kappa+tr_{h}Ric+\theta^{2})\lambda \end{equation} to deduce, \begin{equation} G''=-4\pi\chi+\int_{U^{-1}}\big(\frac{|\nabla \lambda|^{2}}{\lambda^{2}}+tr_{h}Ric\big)dA+\int_{U^{-1}}\theta^{2}dA \end{equation} where $\chi$ is the Euler characteristic of the leaves $U^{-1}$. On the right hand side of this expression the first two terms are non-negative. For the last term, the Cauchy--Schwarz inequality gives \begin{equation} \int_{U^{-1}}\theta^{2}dA=\int_{U^{-1}}\theta^{2}\lambda |\nabla U_{*}|dA_{*}\geq \frac{\bigg(\int_{U^{-1}}\theta |\nabla U_{*}|dA_{*}\bigg)^{2}}{\int_{U^{-1}} \frac{1}{\lambda}|\nabla U_{*}|dA_{*}}=\frac{G'^{2}}{G} \end{equation} Therefore, \begin{equation} G''\geq \frac{G'^{2}}{G} \end{equation} which is equivalent to $(G'/G)'\geq 0$, from which (\ref{LNGP}) follows. We prove now that (\ref{LNGP}) also holds when $U_{2}>U_{1}$ are two regular values with only one critical value $U_{c}$ between them. This will complete the proof of the proposition. To see this we just compute, \begin{align} \label{LNGP1} \bigg(\frac{G'}{G}\bigg)(U_{2}) & \geq \lim_{U\rightarrow U^{+}_{c}}\bigg(\frac{G'}{G}\bigg)(U)=\bigg(\frac{\lim_{U\rightarrow U^{+}_{c}} G'(U)}{G(U_{c})}\bigg)\\ \label{LNGP2} & \geq \bigg(\frac{\lim_{U\rightarrow U^{-}_{c}} G'(U)}{G(U_{c})}\bigg)=\lim_{U\rightarrow U^{-}_{c}}\bigg(\frac{G'}{G}\bigg)(U)\\ & \geq \bigg(\frac{G'}{G}\bigg)(U_{1}) \end{align} where to pass from (\ref{LNGP1}) to (\ref{LNGP2}) we use Proposition \ref{GPI} (note that $G(U)>0$ for all $U$). \end{proof} \begin{Corollary}\label{P75} Let $(\Sigma; \hg,U)$ be a $\star$-static end. 
Then, there is a divergent sequence of points $p_{i}$, and constants $C>0$ and $D>0$, such that \begin{equation}\label{TOTOT} |\nabla e^{CU}|(p_{i})\geq D \end{equation} \end{Corollary} \begin{proof} From Proposition \ref{P70} we get \begin{equation} G(U)\geq G(U_{0})e^{-C(U-U_{0})} \end{equation} where $C=-G'(U_{0})/G(U_{0})$. If $C\leq 0$ then $G(U)\geq G(U_{0})$. But \begin{equation} G(U)=\int_{U^{-1}}|\nabla U||\nabla U_{0}|dA_{0} \end{equation} which has a fixed integration measure $|\nabla U_{0}|dA_{0}$. It would follow that there is a divergent sequence of points $p_{i}$ for which $|\nabla U|(p_{i})$ is bounded away from zero, which is not the case. Thus $C>0$. In this case we have \begin{equation} G(U)e^{CU}\geq G(U_{0})e^{CU_{0}}>0. \end{equation} But as \begin{equation} G(U)e^{CU}=\int_{U^{-1}}\frac{1}{C}|\nabla e^{CU}||\nabla U_{0}| dA_{0} \end{equation} again we conclude that there must be a divergent sequence of points $p_{i}$ and a constant $D>0$ for which (\ref{TOTOT}) holds. \end{proof} \begin{Proposition}\label{FCOK} Let $(\Sigma; \hg,U)$ be a $\star$-static end and let $\gamma$ be a ray. Suppose that the data set has a simple cut $\{\mathcal{S}_{i}\}$ and that the curvature decays sub-quadratically along $\gamma\cup (\cup_{j}\mathcal{S}_{j})$. Then, for any constant $C>0$, $|\nabla e^{CU}|$ tends to zero at infinity. \end{Proposition} \begin{proof} Let $\gamma(s)$ be a ray from $\partial \Sigma$ parametrised by arc-length $s$ (i.e. $\dist(\gamma(s),\partial \Sigma)=s$). As we have done before, we will use the notation $r(p)=\dist(p,\partial \Sigma)$ for $p\in \Sigma$. Thus $r(\gamma(s))=s$. As $|\nabla U|^{2}$ decays faster than quadratically along $\gamma$ we have, \begin{equation} r|\nabla U|(r)\rightarrow 0\quad {\rm as}\quad r\rightarrow \infty, \end{equation} where we have denoted $|\nabla U|(\gamma(r))$ by $|\nabla U|(r)$. Let $r_{0}$ be such that for all $r\geq r_{0}$ we have $|\nabla U|(r)\leq 1/(2Cr)$. 
Integrating, we obtain \begin{equation} |U(r)-U(r_{0})|\leq \frac{1}{2C}\ln \frac{r}{r_{0}} \end{equation} where to simplify notation we set $U(r):=U(\gamma(r))$. Thus, \begin{equation}\label{LATERTOU} e^{CU(r)}\leq c_{1}r^{1/2}. \end{equation} We will use this inequality below. The ray $\gamma$ intersects $\mathcal{S}_{j}$ and $\mathcal{S}_{j+1}$. So let $\alpha_{j,j+1}$ be the segment of $\gamma$ intersecting $\mathcal{S}_{j}$ and $\mathcal{S}_{j+1}$ only at its end points. Let $r_{j}$ be the number such that $\gamma(r_{j})$ is the end point of $\alpha_{j,j+1}$ in $\mathcal{S}_{j}$. The connected set \begin{equation} Z_{j}=\mathcal{S}_{j}\cup \alpha_{j,j+1}\cup \mathcal{S}_{j+1} \end{equation} is included inside $\mathcal{A}(2^{1+2j},2^{4+2j})$. So by Proposition \ref{MAXMINU} (with $Z=Z_{j}$) we deduce, \begin{equation}\label{ANTERIOR} U(q)\leq \eta+U(\gamma(r_{j})) \end{equation} for any $q$ in $\mathcal{S}_{j}\cup \mathcal{S}_{j+1}$, where $\eta$ does not depend on $j$. Let $\mathcal{U}_{j,j+1}$ be the compact manifold enclosed by $\mathcal{S}_{j}$ and $\mathcal{S}_{j+1}$. By the maximum principle, the maximum of $U$ on $\mathcal{U}_{j,j+1}$ is attained at a point, say $x_{j}$, in $\mathcal{S}_{j}\cup \mathcal{S}_{j+1}$. So, \begin{equation} U(x)\leq U(x_{j}) \end{equation} for any $x\in \mathcal{U}_{j,j+1}$. Combining this with (\ref{ANTERIOR}), with $q=x_{j}$, we obtain, \begin{equation}\label{BEUS11} e^{CU(x)}\leq c_{2}e^{CU(\gamma(r_{j}))} \end{equation} for any $x\in \mathcal{U}_{j,j+1}$, where the constant $c_{2}$ does not depend on $j$.
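For the reader's convenience, we note that one can take $c_{2}=e^{C\eta}$, which indeed does not depend on $j$: combining the two previous bounds, for any $x\in \mathcal{U}_{j,j+1}$ we have \begin{equation*} e^{CU(x)}\leq e^{CU(x_{j})}\leq e^{C(\eta+U(\gamma(r_{j})))}=e^{C\eta}\,e^{CU(\gamma(r_{j}))}. \end{equation*}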
Now, $\mathcal{S}_{j}$ is included in $\mathcal{A}(2^{1+2j},2^{2+2j})$ and so we have, \begin{equation}\label{BOUNDSTT} r_{j}\leq 2^{2+2j} \end{equation} which plugged into (\ref{LATERTOU}) gives \begin{equation}\label{BOUNDSTOU} e^{CU(\gamma(r_{j}))}\leq c_{1}2^{1+j}. \end{equation} Combining this bound and (\ref{BEUS11}) we deduce \begin{equation}\label{BEUS1} e^{CU(x)}\leq c_{4}2^{j} \end{equation} for any $x\in \mathcal{U}_{j,j+1}$, where $c_{4}$ does not depend on $j$. On the other hand we also have $\Delta |\nabla U|^{2}\geq 0$, and thus the maximum of $|\nabla U|^{2}$ over $\mathcal{U}_{j,j+1}$ is again reached at $\mathcal{S}_{j}\cup \mathcal{S}_{j+1}$. From this fact we conclude that for every point $x\in \mathcal{U}_{j,j+1}$ we must have, \begin{equation}\label{BEUS2} |\nabla U|(x)\leq \max\{|\nabla U|(q):q\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\}\leq \frac{c_{5}}{2^{2j}} \end{equation} where the constant $c_{5}$ does not depend on $j$; to obtain the last inequality we used that $|\nabla U|(q)\leq K/r(q)$ (Anderson's estimate) and the bound $r(q)\geq 2^{1+2j}$ for any $q\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}$, which holds because $\mathcal{S}_{j}\cup \mathcal{S}_{j+1}$ is included in $\mathcal{A}(2^{1+2j},2^{4+2j})$. Let $p_{j}$ be any divergent sequence such that $p_{j}\in \mathcal{U}_{j,j+1}$ for each $j$. Then, using (\ref{BEUS1}) and (\ref{BEUS2}) we reach, \begin{equation} |\nabla e^{CU}|(p_{j})=Ce^{CU(p_{j})}|\nabla U|(p_{j})\leq \frac{c_{6}}{2^{j}} \end{equation} where $c_{6}$ does not depend on $j$. Thus $|\nabla e^{CU}|(p_{j})$ tends to zero as $j$ goes to infinity. As the sequence $p_{j}$ is arbitrary, we have proved the proposition. \end{proof} \subsubsection{Proof of the KA of static black hole ends}\label{POKA} In this section we finally prove Theorem \ref{KAFR}, stating that a static black hole data set with sub-cubic volume growth is indeed AK. {\it Terminology}. Let $\Sigma$ be the manifold of a static black hole data set.
An embedded connected surface $\mathcal{S}$ is {\it disconnecting} if $\Sigma\setminus \mathcal{S}$ has two connected components, one of which contains $\partial \Sigma$ and the other infinity. The closure of the component of $\Sigma\setminus \mathcal{S}$ containing $\partial \Sigma$ is denoted by $\Omega(\partial \Sigma,\mathcal{S})$. For instance, the surfaces $\mathcal{S}_{j}$ of a simple cut are disconnecting. For any disconnecting surface $S$ on a static black hole data set we have, \begin{equation}\label{MAXEQ} \max\{U(p):p\in \Omega(\partial \Sigma,S)\}=\max\{U(p):p\in S\} \end{equation} by the maximum principle. We will use this simple fact in the proof of the next proposition. \begin{Proposition}\label{DFNKA} Let $(\Sigma; \hg, U)$ be a static black hole end with sub-cubic volume growth. Let $\gamma$ be a ray and let $\{\mathcal{S}_{j}\}$ be a simple cut. Then the end is either asymptotically Kasner different from $A$ and $C$, or the curvature decays sub-quadratically along the set $\gamma\cup (\cup_{j} \mathcal{S}_{j})$. \end{Proposition} \begin{proof} Suppose that there is a sequence of points $p_{n}\in \gamma\cup (\cup_{j} \mathcal{S}_{j})$ such that for some $\rho_{*}>0$, \begin{equation}\label{UCONDITI} |\nabla U|_{r_{n}}(p_{n})\geq \rho_{*}. \end{equation} If a subsequence of the annuli $(\mathcal{A}^{c}_{r_{n}}(p_{n}, 1/2,2);\hg_{r_{n}})$ collapses to a segment, then $\gamma$ must pass through the annuli $\mathcal{A}^{c}_{r_{n}}(p_{n}, 1/2,2)$ and the end must be asymptotically Kasner by Theorem \ref{KASYMPTOTIC}. If no subsequence of these annuli metrically collapses to a segment, then one can find a subsequence (also indexed by $n$) and neighbourhoods $\mathcal{B}_{n}$ of $\mathcal{A}^{c}_{r_{n}}(p_{n}, 1/2,2)$ such that $(\mathcal{B}_{n};\hg_{r_{n}})$ collapses to a two-dimensional orbifold.
Having this, by a diagonal argument, one can find a subsequence of it (also indexed by $n$) and neighbourhoods $\mathcal{B}_{k_{n}}$ of $\mathcal{A}^{c}_{r_{n}}(p_{n};1/2,2^{k_{n}})$, with $k_{n}\rightarrow \infty$, collapsing to a two-dimensional orbifold $(S_{\infty};q_{\infty})$. As the collapse is along $\Sa$-fibers (hence defining asymptotically a symmetry), we obtain, in the limit, a well defined reduced data $(S;q,\overline{U},V)$ where $\overline{U}$ is obtained as the limit of $U_{n}:=U-U(p_{n})$. This data has $|\nabla \overline{U}|_{q}\neq 0$ by (\ref{UCONDITI}) and therefore is non-flat. Moreover it has at least one end containing a limit, say $\overline{\gamma}$, of the ray $\gamma$. Let us denote that end by $S_{\overline{\gamma}}$. As observed in subsection \ref{RDSACL}, the limit orbifold has only a finite number of conic points, therefore the basic structure of the asymptotic of the reduced data on the end $S_{\overline{\gamma}}$ is described by Propositions \ref{REDCUR}, \ref{SUPO}, \ref{SUPO2} and \ref{SUPO3}. Furthermore $\overline{U}$ has a limit value $\overline{U}_{\infty}\leq \infty$ at infinity by Proposition \ref{LPRO}. We claim that we must have $\overline{U}\leq \overline{U}_{\infty}$. Let us see this. Assume $\overline{U}_{\infty}<\infty$, otherwise there is nothing to prove. Let $j_{n}$ be an integer such that $j_{n}\leq r_{n}=r(p_{n})\leq j_{n}+1$. As $\gamma$ intersects all the surfaces $\mathcal{S}_{j}$, then, for a fixed integer $k\geq 1$, the surfaces $\mathcal{S}_{j_{n}+k}$ `collapse into sets' in $S_{\overline{\gamma}}$ as $n\rightarrow \infty$. The bigger $k$ is, the farther away the sets `collapse'. As $\overline{U}\rightarrow \overline{U}_{\infty}$ over the end $S_{\overline{\gamma}}$, one can find a sequence $k_{n}\rightarrow \infty$ such that $U_{n}$ converges to $\overline{U}_{\infty}$ (as $n\rightarrow \infty$) when restricted to the surfaces $\mathcal{S}_{j_{n}+k_{n}}$.
Then, by (\ref{MAXEQ}), we have \begin{equation}\label{MAXEQ2} \max\{U_{n}(p):p\in \Omega(\partial \Sigma,\mathcal{S}_{j_{n}+k_{n}})\}=\max\{U_{n}(p):p\in \mathcal{S}_{j_{n}+k_{n}}\}\rightarrow \overline{U}_{\infty} \end{equation} and the claim follows: if $\overline{U}(q)\geq \overline{U}_{\infty}+\epsilon$ for some $\epsilon>0$ and some $q\in S_{\overline{\gamma}}$, then there is a sequence of points $q_{n}\in \Omega(\partial \Sigma,\mathcal{S}_{j_{n}+k_{n}})$ with $U_{n}(q_{n})>\overline{U}_{\infty}+\epsilon/2$ for $n\geq n_{0}$, which would eventually violate (\ref{MAXEQ2}). As $(S_{\overline{\gamma}}; q,\overline{U},V)$ is non-flat, it has to be AK different from the Kasner $A$ and $C$ by Proposition \ref{SSKAA}. Therefore one can find a sequence $k_{n}$ such that the annuli \begin{equation} (\mathcal{A}^{c}(\gamma(r_{n}2^{k_{n}});r_{n}2^{k_{n}-1},r_{n}2^{k_{n}+1});\hg_{r_{n}2^{k_{n}}}), \end{equation} neighbouring the points $\gamma(r_{n}2^{k_{n}})$, collapse to a segment $[1/2,2]$ while having \begin{equation} |\nabla U|_{\hg_{r_{n}2^{k_{n}}}}(\gamma(r_{n}2^{k_{n}}))\geq \rho^{**} \end{equation} for some $\rho^{**}>0$. Then the end must be asymptotically Kasner by Theorem \ref{KASYMPTOTIC}. We thus reach a contradiction. Hence, the curvature decays sub-quadratically along the set $\gamma\cup (\cup_{j} \mathcal{S}_{j})$. \end{proof} \begin{Corollary}\label{PARAFINA} Let $(\Sigma; \hg, U)$ be a static black hole data set with sub-cubic volume growth that is not AK. Then \begin{equation}\label{QPARAFINA} (\max\{U(p):p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\}-\min\{U(p):p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\})\rightarrow 0 \end{equation} where $\{\mathcal{S}_{j}\}$ is a simple cut. \end{Corollary} \begin{proof} If the data is not AK, then we deduce by Proposition \ref{DFNKA} that for any sequence of points $p_{j}\in \mathcal{S}_{j}$ we have \begin{equation}\label{777} |\nabla U|_{r_{j}}(p_{j})\rightarrow 0, \end{equation} where $r_{j}=r(p_{j})$ as usual.
Now, if $p_{j}\in \mathcal{S}_{j}$ then $2^{1+2j}\leq r_{j}\leq 2^{4+2j}$, thus (\ref{777}) implies right away that, \begin{equation} \max\{|\nabla U|_{\hat{r}_{j}}(q):q\in \mathcal{S}_{j-1}\cup \mathcal{S}_{j+2}\}\rightarrow 0, \end{equation} as $j\rightarrow \infty$, where we set $\hat{r}_{j}=2^{2j}$. Now, as the maximum of $|\nabla U|_{\hat{r}_{j}}$ on $\mathcal{U}_{j-1,j+2}$ is reached at $\mathcal{S}_{j-1}\cup \mathcal{S}_{j+2}$, we conclude that, \begin{equation}\label{CAPICCI} \max\{|\nabla U|_{\hat{r}_{j}}(q):q\in \mathcal{U}_{j-1,j+2}\}\rightarrow 0 \end{equation} as $j\rightarrow \infty$. Observe that, because $\mathcal{S}_{j}$ and $\mathcal{S}_{j+1}$ are intersected by any ray $\gamma$ ($\{\mathcal{S}_{j}\}$ is a simple cut), they belong to the same connected component of $\mathcal{A}(2^{1+2j},2^{4+2j}) =\mathcal{A}_{\hat{r}_{j}}(2,4)$. Denote that component by $\mathcal{A}_{\hat{r}_{j}}^{c}(2,4)$. We have, \begin{equation} \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\subset \mathcal{A}_{\hat{r}_{j}}^{c}(2,4)\subset \mathcal{A}_{\hat{r}_{j}}^{c}(1/2,2^{6})\subset \mathcal{U}_{j-1,j+2} \end{equation} and recall that by (\ref{CAPICCI}) the maximum of $|\nabla U|_{\hat{r}_{j}}$ over $\mathcal{A}_{\hat{r}_{j}}^{c}(1/2,2^{6})$ tends to zero. So (\ref{QPARAFINA}) is exactly item \ref{posterior} in Proposition \ref{MAXMINU} with $a=2$, $b=4$ and $Z_{j}=\mathcal{S}_{j}\cup \mathcal{S}_{j+1}$. \end{proof} \begin{Proposition}\label{EXISTLIM} Let $(\Sigma; \hg, U)$ be a static black hole data set with sub-cubic volume growth. Then $U$ tends uniformly to a constant $U_{\infty}\leq \infty$ at infinity. \end{Proposition} \begin{proof} The claim is obviously true if the end is AK. Let us assume then that the end is not AK. Let $\{\mathcal{S}_{j}\}$ be a simple cut and $\gamma$ a ray.
By Corollary \ref{PARAFINA} we have, \begin{equation} (\max\{U(p):p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\}-\min\{U(p):p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\})\rightarrow 0 \end{equation} and by the maximum principle, \begin{align} \max\{U(p):&\ p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\} \geq \max\{U(p):p\in \mathcal{U}_{j,j+1}\}\\ & \geq \min\{U(p):p\in \mathcal{U}_{j,j+1}\}\geq \min\{U(p):p\in \mathcal{S}_{j}\cup \mathcal{S}_{j+1}\} \end{align} Therefore the function $U$ becomes more and more constant over the manifolds $\mathcal{U}_{j,j+1}$ enclosed by $\mathcal{S}_{j}$ and $\mathcal{S}_{j+1}$. A simple application of this fact is that if there is a sequence of manifolds $\mathcal{U}_{j_{i},j_{i}+1}$ over which $U$ tends to infinity, then $U$ must tend to infinity over any other sequence $\mathcal{U}_{j'_{i},j'_{i}+1}$. Indeed, if not, then for some $i_{1}<i_{2}$ the minimum of $U$ over the manifold $\mathcal{U}_{j_{i_{1}},j_{i_{2}}}$ enclosed by $\mathcal{S}_{j_{i_{1}}}$ and $\mathcal{S}_{j_{i_{2}}}$ would not be reached at a point on either $\mathcal{S}_{j_{i_{1}}}$ or $\mathcal{S}_{j_{i_{2}}}$, but rather at a point on a manifold $\mathcal{U}_{j,j+1}$ with $j_{i_{1}}<j$ and $j+1<j_{i_{2}}$. This would violate the maximum principle. For the same reason, if $U$ tends to a finite constant over a sequence of manifolds $\mathcal{U}_{j,j+1}$ then it must tend to the same constant over any other sequence. \end{proof} {\it Notation}: Let $\Sigma$ and ${\rm H}$ be two manifolds with diffeomorphic compact boundaries and let $\#:\partial {\rm H}\rightarrow \partial \Sigma$ be a diffeomorphism. Then we denote by $\Sigma\# {\rm H}$ the manifold that is obtained by identifying $\partial \Sigma$ to $\partial {\rm H}$ through $\#$. We know from Part I that, if a static black hole data set $(\Sigma; g,N)$ is not the Boost, then every horizon component is a two-sphere.
In that case, to every connected component of $\partial \Sigma$ we can glue a three-ball (in a unique way). Following the notation above, we denote by ${\rm H}$ the set of balls and by $\Sigma\# {\rm H}$ the resulting boundary-less manifold. We will use this notation below. \begin{Proposition}\label{TOROROS} Let $(\Sigma; \hg, U)$ be a static black hole data set with sub-cubic volume growth. Then, either the data set is a Boost, is asymptotic to a Boost, or there is a divergent sequence of disconnecting tori embedded in $\Sigma$ and enclosing solid tori in $\Sigma\# {\rm H}$. \end{Proposition} \begin{proof} Let us assume that the data set is not a Boost, and in particular that the horizon components are spheres. Furthermore, let us assume that the data is not asymptotic to $B$, a Boost. By Galloway's theorem \cite{MR1201655} (see comments in footnote \ref{FN} in section \ref{TSOTP} of Part I), if there is a divergent sequence of disconnecting tori $T_{i}$ having positive outwards mean curvature in $(\Sigma;g)$ (not in the space $(\Sigma;\hg)$), then they bound solid tori in $\Sigma\# {\rm H}$ (see the definition of ${\rm H}$ above). Let us prove below the existence of such tori under the mentioned assumptions. If the data is asymptotic to a Kasner space, hence different from $B$ by assumption, then the existence of disconnecting tori with positive outwards mean curvature is direct (see further comments in subsection \ref{TSOTP} of Part I). On the other hand, if the asymptotic is not a Kasner space, then by Proposition \ref{DFNKA} the curvature of $\hg$ must decay sub-quadratically along $\gamma\cup (\cup_{j} \mathcal{S}_{j})$, where $\gamma$ is a ray and $\{\mathcal{S}_{j}\}$ is a simple cut. So, let us furthermore assume this decay, and hence that the asymptotic is not Kasner different from $A$ and $C$. Let $p_{j}$ be, for each $j$, a point in $\gamma\cap \mathcal{S}_{j}$.
If for some subsequence $p_{j_{i}}$ the annuli $(\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2);\hg_{r_{j_{i}}})$ collapse to a segment $[1/2,2]$, then there are neighbourhoods $\mathcal{B}_{i}$ of $\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2)$ and finite coverings $\tilde{\mathcal{B}}_{i}$ such that the sequence $(\tilde{\mathcal{B}}_{i};\hg_{r_{j_{i}}})$ converges to a $\Sa\times \Sa$-symmetric flat space $([1/2,2]\times T;g_{F})$. The limit is flat due to the sub-quadratic curvature decay along the ray $\gamma$ crossing $\mathcal{B}_{i}$. For the same reason the lifts of $U-U(p_{j_{i}})$ to $\tilde{\mathcal{B}}_{i}$ converge to the constant zero. Hence the lifts of $N/N(p_{j_{i}})$ converge to the constant one. Let $T_{i}$ be a sequence of embedded tori in $\mathcal{B}_{i}$ such that the coverings $\tilde{T}_{i}$ converge (in $C^{2}$) to the torus $\{1\}\times T$ on $[1/2,2]\times T$. Observe that, as the disconnecting surfaces $\mathcal{S}_{j_{i}}$ are embedded in $\mathcal{B}_{i}$, the tori $T_{i}$ are also disconnecting. If the outwards mean curvature of the torus $\{1\}\times T$ is negative, then so is the mean curvature of the tori $T_{i}$ for $i$ sufficiently large. But this is not possible because, as $Ric\geq 0$, any ray from $T_{i}$ would develop a focal point at a finite distance from $T_{i}$. On the other hand, if the outwards mean curvature is positive, then for $i\geq i_{0}$ with $i_{0}$ large enough the mean curvature of the tori $T_{i}$ calculated using $g$ is also positive, because the lifts of $N/N(p_{j_{i}})$ converge to one (so $g$ and $\hg$ differ essentially by a numeric factor). Thus, the $T_{i}$ are the tori we are looking for. So let us suppose that the mean curvature of the torus $\{1\}\times T$ is zero, and that this occurs for every subsequence $p_{j_{i}}$ for which the annuli $(\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2);\hg_{r_{j_{i}}})$ collapse to the segment $[1/2,2]$.
Note that in such case the $\Sa\times \Sa$-symmetric space $([1/2,2]\times T;g_{F})$ must be a flat metric product $g_{F}=dx^{2}+h_{F}$. Under this hypothesis we claim that, if there is one such sequence, then the end must be diffeomorphic to $[0,\infty)\times T^{2}$. This intuitive fact was proved essentially in \cite{MR3302042}, so let us postpone explaining it until later. Now, if the end is diffeomorphic to $[0,\infty)\times T^{2}$, then, as proved in Proposition \ref{SSEU} below, it must be a $\star$-static end. Then, by Proposition \ref{CORONA}, the curvature cannot decay sub-quadratically along $\gamma\cup(\cup \mathcal{S}_{j})$, which is against the hypothesis. We thus reach a contradiction. So let us assume now that there is no sequence of points $p_{j_{i}}$ with the property that the annuli $(\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2);\hg_{r_{j_{i}}})$ collapse to a segment. Then, exactly as was done inside the proof of Proposition \ref{DFNKA}, we can find a subsequence $p_{j_{i}}$ and a sequence of neighbourhoods $\mathcal{B}_{k_{i}}$ of $\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2^{k_{i}})$ over which the static data collapses to a reduced data $(S;q,\overline{U},V)$, having at least one end (without orbifold points), denoted by $E$, containing a limit of the ray $\gamma$. Furthermore, over that end we have $\overline{U}\leq \overline{U}_{\infty}\leq \infty$. Let $(\overline{\Sigma};\overline{\hg},\overline{U})$ be a static data reducing to $(E;q,\overline{U},V)$, found by taking a limit of covers (unwrappings) of (regions of) the manifolds $\mathcal{B}_{k_{i}}$. Then, by Proposition \ref{SSKAA}, either the end $(\overline{\Sigma};\overline{\hg},\overline{U})$ is asymptotic to a Kasner space different from $A$ and $C$, or it is flat and $\overline{U}$ is constant.
If it is asymptotic to a Kasner space different from $A$ and $C$, then one can easily find a sequence of points $p'_{i}$ in $\gamma\cap \mathcal{B}_{k_{i}}$ such that (setting $r'_{i}=r(p'_{i})$) the annuli $(\mathcal{A}^{c}_{r'_{i}}(p'_{i};1/2,2);\hg_{r'_{i}})$ metrically collapse to the interval $[1/2,2]$ while $\rho(p'_{i})\rightarrow \rho^{*}>0$. It follows then from Theorem \ref{KASYMPTOTIC} that the end is asymptotically Kasner different from $A$ and $C$, which is against the assumption made earlier. Suppose now that $(\overline{\Sigma};\overline{\hg})$ is flat. Again, we need to find a convex $\Sa$-symmetric torus on $\overline{\Sigma}$, from which we can obtain a sequence of convex disconnecting tori $T_{i}$ on the neighbourhoods $\mathcal{B}_{k_{i}}$. To prove this we will rely on the results obtained for reduced data. Indeed, we know that $E$ is diffeomorphic to $\Sa\times [r_{0},\infty)$ and that, if the area growth is quadratic, then $q$ has the asymptotic form $dr^{2}+\mu^{2}r^{2}d\theta^{2}$ with $\kappa$ and $|\nabla V|^{2}=|\nabla \ln \Lambda|^{2}$ decaying sub-quadratically. On the other hand $\overline{\hg}$ has the form, \begin{equation} \overline{\hg}=q_{ij}dx^{i}dx^{j}+\Lambda^{2}(d\varphi+ \theta_{i}dx^{i})^{2} \end{equation} where $(x_{1},x_{2})=(r,\theta)$. For $r_{0}$ sufficiently large, the mean curvature of the torus $\{r=r_{0}\}$ on $\overline{\Sigma}$ is approximated by $\partial_{r}\Lambda/\Lambda+1/r_{0}\sim 1/r_{0}$, hence positive. This provides the torus we are looking for. On the other hand, if the area growth of $(E;q)$ is less than quadratic, then, for any divergent sequence of points $t_{i}$, the annuli $(\mathcal{A}(t_{i};1/2,2);q_{r_{i}})$, with $r_{i}=r(t_{i})$, metrically collapse to $[1/2,2]$.
Using this, one can easily find a sequence of points $p_{j_{l}}$ on $\gamma\cup \{\mathcal{S}_{j}\}$ such that the annuli $(\mathcal{A}^{c}_{r_{j_{l}}}(p_{j_{l}};1/2,2);\hg_{r_{j_{l}}})$ collapse to the segment $[1/2,2]$, reaching a contradiction, as we are assuming such a sequence does not exist. \vspace{0.2cm} To conclude, let us explain the claim that was left to be proved. Assume then that for every subsequence $p_{j_{i}}$ (of the original sequence $p_{j}$) for which the annuli $(\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2);\hg_{r_{j_{i}}})$ collapse to the segment $[1/2,2]$, there are neighbourhoods $\mathcal{B}_{i}$ of $\mathcal{A}^{c}_{r_{j_{i}}}(p_{j_{i}};1/2,2)$ and finite coverings $\tilde{\mathcal{B}}_{i}$ such that the sequence $(\tilde{\mathcal{B}}_{i};\hg_{r_{j_{i}}})$ converges to a $\Sa\times \Sa$-symmetric flat space $([1/2,2]\times T;g_{F}=dx^{2}+h_{F})$, where $h_{F}$ is a flat metric on $T$. We first recall a fact from \cite{MR3302042}: for any $\delta>0$ sufficiently small and $a\leq 1/2$ and $b\geq 2$, there is $\epsilon(\delta)$ such that, if $(\mathcal{A}(p_{j};a,b);\hg_{r_{j}})$ is $\epsilon(\delta)$-close in the Gromov-Hausdorff metric to the segment $[a,b]$, then there is a neighbourhood $\mathcal{B}_{j}$ of the annulus $\mathcal{A}(p_{j};a,b)$, diffeomorphic to $[a,b]\times T^{2}$, on which $\hg_{r_{j}}$ has the form, \begin{equation} \hg_{r_{j}}=\alpha^{2}dx^{2}+h \end{equation} where the function $\alpha$ and the family of metrics $h(x)$ on $T^{2}$ satisfy, \begin{equation}\label{SUPERCONDITIONS} 1-\delta\leq \alpha\leq 1+\delta,\quad (1-\delta)h(x')\leq h(x)\leq (1+\delta)h(x'). \end{equation} Furthermore, the Gromov-Hausdorff distance is controlled by the $h(x)$-diameter of $T^{2}$ (for any $x$); namely, it is between $c_{1}\diam_{h(x)}(T^{2})$ and $c_{2}\diam_{h(x)}(T^{2})$, where $c_{1}$ and $c_{2}$ do not depend on $\epsilon$, $a$ or $b$.
Using this fact, we fix $\delta$ small, let $k\geq 1$, and find $\epsilon(k)\leq \epsilon(\delta)$ small enough that, if $(\mathcal{A}(p_{j};1/2,2);\hg_{r_{j}})$ is $\epsilon(k)$-close to $[1/2,2]$, then $(\mathcal{A}(p_{j};1/2,2^{k});\hg_{r_{j}})$ is $\epsilon(\delta)$-close to $[1/2,2^{k}]$. But then the relations (\ref{SUPERCONDITIONS}) hold for all $x\in [1/2,2^{k}]$, hence also for $x\in[1/2,2]$, and then the Gromov-Hausdorff distance from $(\mathcal{A}(p_{j};1/2,2^{k});\hg_{r_{j}})$ to $[1/2,2^{k}]$ is at most $c_{3}\epsilon(k)$, where $c_{3}$ does not depend on $k$. Thus, if $k$ is big enough, the Gromov-Hausdorff distance between $(\mathcal{A}(p_{j+k};1/2,2);\hg_{r_{j+k}})$ and $[1/2,2]$ is less than $\epsilon(k)/2$ (observe here that $r_{j+k}\geq 2^{2k}r_{j}$ and thus $\hg_{r_{j+k}}\leq 2^{-4k}\hg_{r_{j}}$). Therefore, if for some $j_{*}$ the GH-distance between $(\mathcal{A}(p_{j_{*}};1/2,2);\hg_{r_{j_{*}}})$ and the interval $[1/2,2]$ is at most $\epsilon(k)$, then the same occurs for the annuli $(\mathcal{A}(p_{j_{*}+mk};1/2,2);\hg_{r_{j_{*}+mk}})$ for any $m\geq 1$. Then the end must be diffeomorphic to $[0,\infty)\times T^{2}$. \end{proof} The following proposition completes the proof of the previous proposition. \begin{Proposition}\label{SSEU} Let $(\Sigma;\hg,U)$ be a static end with $\Sigma$ diffeomorphic to ${\rm T}^{2}\times [0,\infty)$. Suppose that the limit of $U$ at infinity exists and that $U<U_{\infty}(\leq \infty)$ everywhere. Then, $(\Sigma;\hg,U)$ is a $\star$-static end. \end{Proposition} \begin{proof} Let $U_{0}$ be a regular value of $U$ sufficiently close to $U_{\infty}$ such that, for any regular value $U_{1}>U_{0}$, $U^{-1}_{1}$ is a compact manifold without boundary embedded in $\Sigma^{\circ}$. Let ${\rm H}$ be a solid torus and consider $\Sigma\# {\rm H}$. Let us prove first that $U_{1}^{-1}$ is connected. Suppose $U_{1}^{-1}$ has components $S_{1},\ldots,S_{m}$, $m\geq 2$.
Then, each $S_{k}$ encloses a bounded region $\Omega_{k}$ in $\Sigma \# {\rm H}$ (think of $\Sigma \# {\rm H}$ as an open solid torus embedded in $\mathbb{R}^{3}$). If one of the $\Omega_{k}$ does not contain ${\rm H}$, then $U$ must be constant by the maximum principle, which is against the hypothesis. Therefore, if $S_{i}\neq S_{j}$, then either $\Omega_{i}\subset \Omega_{j}$ or $\Omega_{j}\subset \Omega_{i}$. Whatever the case, the surfaces $S_{i}$ and $S_{j}$ bound a compact region inside $\Sigma$, which again is impossible by the maximum principle. So $U_{1}^{-1}=S_{1}$ is connected. Finally, $U_{1}^{-1}$ cannot be a sphere because, if so, the region $\Omega$ enclosed by it would be a ball containing ${\rm H}$, but ${\rm H}$ is not contractible. \end{proof} \begin{Proposition}\label{LALA} Let $(\Sigma;\hg,U)$ be a static black hole data set, asymptotic to a Boost $B$ but that is not a Boost. Then, $\Sigma$ is diffeomorphic to an open solid three-torus minus a finite number of open three-balls, and thus there is a divergent sequence of disconnecting tori $T_{i}$ enclosing solid tori in $\Sigma\# {\rm H}$. \end{Proposition} \begin{proof} Recall that a Boost has data $\Sigma_{B}=[0,\infty)\times {\rm T}^{2}$, $g_{B}=dx^{2}+h$, $N_{B}=x$, where $h$ is a flat metric on ${\rm T}^{2}$. Following the Definition \ref{KADEF} of Kasner asymptotic, let $\phi:\Sigma\setminus K\rightarrow \Sigma_{B}\setminus K^{\mathbb{K}}$ be a diffeomorphism into the image such that the components $(\phi_{*}g)_{ij}$ (and their derivatives) converge to the components $g_{B,ij}$ (and their derivatives, i.e. zero) faster than any inverse power of $x$. Denote by $T_{x}$ the tori $\phi^{-1}(\{x\}\times {\rm T}^{2})$ (for $x\geq x_{0}$ such that $\{x\}\times {\rm T}^{2}\subset \Sigma_{B}\setminus K^{\mathbb{K}}$). Note that, by the fast decay, the Gaussian curvature and the second fundamental forms of the tori $T_{x}$ tend to zero faster than any inverse power of $x$ as $x\rightarrow \infty$.
Let $A(T_{x})$ be the area of $T_{x}$ and $A(T_{\infty})=\lim_{x\rightarrow \infty} A(T_{x})$. The key point in proving the theorem is to show that there is a torus, say $T_{*}$, isotopic to the tori $T_{x}$ and with area less than $A(T_{\infty})$. If this is the case, then one can essentially use Galloway's arguments in \cite{MR1201655} to conclude that indeed the tori $T_{x}$ enclose solid tori in $\Sigma\# {\rm H}$ (see footnote \ref{FN} in Part I). In short, the argument is as follows. Let $\Sigma_{x}$ be the closure of the connected component of $\Sigma\setminus T_{x}$ containing $\partial \Sigma$. Let $x_{1}$ be large enough that, for any $x\geq x_{1}$, the region near $T_{x}$ is so close to flat that one can extend $\Sigma_{x}$ by a small (Riemannian) ring (diffeomorphic to $[0,1]\times {\rm T}^{2}$), in such a way that the new boundary has positive outwards mean curvature and, furthermore, that if a stable minimal surface intersects the ring then it has area greater than $A(T_{*})$. Granted this, there is always a sequence of tori $S_{i}$ isotopic to $T_{x}$, disjoint from the ring and minimising area within the class of tori isotopic to $T_{x}$. One can repeat Galloway's argument directly. Let us now show the existence of $T_{*}$. It will follow from proving that there is a suitable integrable congruence of geodesics $\{\bar{\gamma}\}$ with respect to the optical metric $\bar{g}=N^{-2}g$ over the end of $\Sigma$. Integrable here means that the distribution of planes perpendicular to the geodesics integrates to surfaces, in this case two-tori. The congruence will cover $\Sigma$ outside a bounded closed set. Furthermore, if we let $\bar{T}_{t}$ be the family of `integral' tori, where $t$ is the $\bar{g}$-distance between $\bar{T}_{t}$ and $\bar{T}_{0}$, then the Gaussian curvature and second fundamental form of the $\bar{T}_{t}$ tend to zero as $t\rightarrow \infty$. Suppose that we have such a congruence.
Let $\theta$ be, at every point, the $g$-mean curvature of the torus $\bar{T}_{t}$ passing through that point, with respect to the normal $n=-\partial_{t}/|\partial_{t}|_{\bar{g}}$ (`inwards'). Then, it was proved in \cite{MR1201655} (see also \cite{MR3077927}) that the mean curvature $\theta$ evaluated on a geodesic $\bar{\gamma}$ is monotonically decreasing as $t$ decreases. As the mean curvature $\theta$ of the tori $\bar{T}_{t}$ tends to zero as $t\rightarrow \infty$, then $\theta\leq 0$ everywhere. As the areas of the tori $\bar{T}_{t}$ tend to $A(T_{\infty})$, then, at any $t$, $A(\bar{T}_{t})<A(T_{\infty})$ unless the mean curvature is identically zero on the whole region between $\bar{T}_{t}$ and infinity. If such is the case, it also follows from \cite{MR1201655} that in that region the metric is a flat product, which is not possible because by hypothesis the data is not a Boost. So $A(\bar{T}_{t})<A(T_{\infty})$ and we define $T_{*}=\bar{T}_{t}$. The construction of the congruence of $\bar{g}$-geodesics is as follows. Consider the congruence of geodesics with respect to $\bar{g}$, emanating from $T_{x}$ perpendicularly to it, and towards $\partial \Sigma$ (`inwards'). Due to the fast decay of $\phi_{*}g$ to $g_{B}$, and of $\phi_{*}N$ to the function $x$ (indeed the fast decay of $\phi_{*}N-x$ to zero), the congruence converges, as $x\rightarrow \infty$, to a (smooth) congruence covering $\Sigma$ outside a bounded closed set, as desired. \end{proof} The following proposition, which uses the previous ones, essentially proves that static black hole ends are $\star$-static ends. \begin{Proposition}\label{REGVALUO} Let $(\Sigma; \hg, U)$ be a static black hole data set with sub-cubic volume growth. Then, there is a regular value $U_{0}<U_{\infty}$ such that, for any regular value $U_{1}$ of $U$ with $U_{\infty}>U_{1}\geq U_{0}$, $U_{1}^{-1}$ is a compact connected surface of genus greater than zero.
\end{Proposition} \begin{proof} If the data is a Boost then we are done, so let us assume from now on that it is not. Let $\Omega$ be an open connected set whose closure $\overline{\Omega}$ contains $\partial \Sigma$. Let $U_{0}<U_{\infty}$ be a regular value such that the set $\{U<U_{0}\}$ contains $\overline{\Omega}$. Suppose that for some regular value $U_{1}>U_{0}$, $U_{1}^{-1}$ has the connected components $S_{1},\ldots,S_{m}$, $m\geq 2$. If for one of the components, say $S_{i}$, $\Sigma\setminus S_{i}$ is connected, then we can glue two copies of $\Sigma\setminus S_{i}$ along $S_{i}$ to make a static black hole data set with more than one end, which is not possible. So for every $S_{j}$, $\Sigma\setminus S_{j}$ has two connected components, and, because $U_{1}>U_{0}$, one of them must contain $\overline{\Omega}$. Call the closure of that component $\Sigma_{j}$. We have $\partial \Sigma_{j}=\partial \Sigma\cup S_{j}$. Observe now that $\Sigma\setminus \Sigma_{j}^{\circ}$ must be connected because, first, no component of it can be compact (that would violate the maximum principle) and, second, no two of the components can be non-compact (because there would be at least two ends). Hence, if $\Sigma\setminus \Sigma_{j}^{\circ}$ is non-compact, then $\Sigma_{j}$ must be compact (if not there would be two ends again). In sum, every $S_{j}$ is disconnecting and $\partial \Sigma_{j}=\partial \Sigma \cup S_{j}$. Therefore, if $m\geq 2$ then either $\Sigma_{1}\setminus \Sigma_{2}$ or $\Sigma_{2}\setminus \Sigma_{1}$ is a compact manifold with $U=U_{1}$ on its boundary, contradicting the maximum principle (here, following the notation above, $\Sigma_{1}$ is the closure of the connected component of $\Sigma\setminus S_{1}$ containing $\overline{\Omega}$, and similarly for $\Sigma_{2}$). So $U_{1}^{-1}$ is connected for every regular value $U_{1}>U_{0}$.
Now, by contradiction, suppose that there is a sequence of regular values $U_{i}>U_{1}$ tending to $U_{\infty}$ such that each $U^{-1}_{i}$ is a sphere. Clearly such a sequence of spheres is divergent (i.e. escapes any compact set). Also, by Propositions \ref{TOROROS} and \ref{LALA}, every sphere is embedded inside a solid torus in $\Sigma\# {\rm H}$. Hence, every $U^{-1}_{i}$ bounds a ball. Thus $\Sigma\# {\rm H}$ must be diffeomorphic to $\mathbb{R}^{3}$. Hence, the complement of an open set of $\Sigma$ is diffeomorphic to ${\rm S}^{2}\times [0,\infty)$ and the end must have cubic volume growth by \cite{MR3302042}, which is against the hypothesis. \end{proof} The next Corollary follows directly from Propositions \ref{EXISTLIM} and \ref{REGVALUO}. \begin{Corollary}\label{LERO} Let $(\Sigma; \hg,U)$ be a static black hole data set with sub-cubic volume growth. Then $(\Sigma; \hg,U)$ is a $\star$-static end. \end{Corollary} We are now ready to prove Theorem \ref{KAFR}. \begin{proof}[Proof of Theorem \ref{KAFR}] Suppose that the data is not AK. Let $\{\mathcal{S}_{i}\}$ be a simple cut and let $\gamma$ be a ray. Then, by Proposition \ref{DFNKA}, the curvature decays sub-quadratically along $\gamma\cup(\cup \mathcal{S}_{i})$. By Corollary \ref{LERO} the data is $\star$-static, and by Proposition \ref{CORONA} the curvature cannot decay sub-quadratically along $\gamma\cup(\cup \mathcal{S}_{i})$. We obtain a contradiction. Therefore the data is AK. \end{proof} \section{The proof of the classification theorem}\label{TCTH} \begin{proof}[Proof of the classification theorem \ref{TCTHM}] Let $(\Sigma;\sg,N)$ be a static black hole data set.
By Proposition \ref{SOFOR} we know that one of the following holds, \begin{enumerate} \item\label{CCC1} $\partial \Sigma = H$, where $H$ is a two-torus, or, \item\label{CCC2} $\partial \Sigma =H_{1}\cup\ldots\cup H_{h}$, $h\geq 1$, where each $H_{j}$ is a two-sphere, and $(\Sigma;\hg)$ has cubic volume growth, or, \item\label{CCC3} $\partial \Sigma =H_{1}\cup\ldots\cup H_{h}$, $h\geq 1$, where each $H_{j}$ is a two-sphere, and $(\Sigma;\hg)$ has sub-cubic volume growth. \end{enumerate} Then, depending on whether case 1, 2, or 3 holds, we can conclude the following, \begin{enumerate} \item If $\partial \Sigma = H$, then the data is a Boost as explained in Proposition \ref{SOFOR}. \item In this case the data is asymptotically flat (with Schwarzschildian fall off), as discussed in Section \ref{ENDSAF}. By Galloway's result \cite{MR1201655}, $\Sigma$ is diffeomorphic to $\mathbb{R}^{3}$ minus $h$-balls, and the uniqueness theorem of Israel, Robinson, Bunting, and Masood-ul-Alam shows that the solution is Schwarzschild. \item By Theorem \ref{KAFR} the data is asymptotically Kasner, different from a Kasner $A$ or $C$. If the asymptotic is a Boost, that is $B$, then $\Sigma$ is diffeomorphic to a solid three-torus minus a finite number of open three-balls, by Proposition \ref{LALA}. If the asymptotic is different from $B$ (and also from $A$ and $C$) then one can clearly find an embedded torus $T$ sufficiently far away that its outward mean curvature is positive and that separates $\Sigma$ into (i) a compact manifold $\Sigma_{\partial}$ containing the horizons (i.e. $\partial \Sigma$) and (ii) another manifold (the `end') diffeomorphic to $[0,\infty)\times {\rm T}^{2}$. It follows again by Galloway's result \cite{MR1201655} that $\Sigma_{\partial}$ is diffeomorphic to a solid torus minus a finite number of open balls. Thus, $\Sigma$ is diffeomorphic to a solid three-torus minus a finite number of open balls. Hence, according to Definition \ref{KNTDEF}, $(\Sigma;\sg,N)$ is of Myers/Korotkin-Nicolai type. 
\end{enumerate} \end{proof} \bibliographystyle{plain}
\section{Introduction} Models of physics that extend the standard model (SM) often require new particles that couple to quarks (\PQq) and/or gluons (\Pg) and decay to dijets. The natural width of resonances in the dijet mass (\ensuremath{m_{\mathrm{jj}}}\xspace) spectrum increases with the coupling, and may vary from narrow to broad compared to the experimental resolution. For example, in a model in which dark matter (DM) particles couple to quarks through a DM mediator, the mediator can decay to either a pair of DM particles or a pair of jets and therefore can be observed as a dijet resonance~\cite{Chala:2015ama,Abercrombie:2015wmb} that is either narrow or broad, depending on the strength of the coupling. When the resonance is broad, its observed line shape depends significantly on the resonance spin. Here we report a search for narrow dijet resonances and a complementary search for broad resonances that considers multiple values of the resonance spin and widths as large as 30\% of the resonance mass. Both approaches are sensitive to resonances with intrinsic widths that are small compared to the experimental resolution, but the broad resonance search is also sensitive to resonances with larger intrinsic widths. We explore the implications for multiple specific models of dijet resonances and for a range of quark coupling strengths for a DM mediator. \subsection{Searches} This paper presents the results of searches for dijet resonances that were performed with proton-proton ($\Pp\Pp$) collision data collected at $\sqrt{s}=13$\TeV. The data correspond to an integrated luminosity of up to 36\fbinv and were collected in 2016 with the CMS detector at the CERN LHC. 
Similar searches for narrow resonances have been published previously by the ATLAS and CMS Collaborations at $\sqrt{s}=13$\TeV~\cite{Aaboud:2018fzt,Aaboud:2017yvp,Sirunyan:2016iap,Khachatryan:2015dcf,ATLAS:2015nsi}, 8\TeV~\cite{Khachatryan:2016ecr,Khachatryan:2015sja,Aad:2014aqa,Chatrchyan:2013qhXX}, and 7\TeV~\cite{CMS:2012yf,Aad201237,ATLAS:2012pu,Chatrchyan2011123,Aad:2011aj,Khachatryan:2010jd,ATLAS2010} using strategies reviewed in Ref.~\cite{Harris:2011bh}. A search for broad resonances considering natural widths as large as 30\% of the resonance mass, directly applicable to spin-2 resonances only, has been published once before by CMS at $\sqrt{s}=8$\TeV~\cite{Khachatryan:2015sja}. Here we explicitly consider spin-1 and spin-2 resonances that are both broad. The narrow resonance search is conducted in two regions of the dijet mass. The first is a low-mass search for resonances with masses between 0.6 and 1.6\TeV. This search uses a dijet event sample corresponding to an integrated luminosity of 27\fbinv, less than the full data sample, as discussed in Section~\ref{sec:trigger}. The events are reconstructed, selected, and recorded in a compact form by the high-level trigger (HLT)~\cite{Khachatryan:2016bia} in a technique referred to as ``data scouting''~\cite{Mukherjee:2017wcl}, which is conceptually similar to the strategy that is reported in Ref.~\cite{Aaij:2016rxn}. Data scouting was previously used for low-mass searches published by CMS at $\sqrt{s}=13$\TeV~\cite{Sirunyan:2016iap} and at 8\TeV~\cite{Khachatryan:2016ecr}, and is similar to a trigger-level search at 13\TeV recently published by ATLAS~\cite{Aaboud:2018fzt}. 
The second search is a high-mass search~\cite{Aaboud:2017yvp,Sirunyan:2016iap,Khachatryan:2015dcf,ATLAS:2015nsi, Khachatryan:2015sja,Aad:2014aqa,Chatrchyan:2013qhXX,CMS:2012yf,Aad201237,ATLAS:2012pu,Chatrchyan2011123,Aad:2011aj,Khachatryan:2010jd, ATLAS2010} for resonances with masses above 1.6\TeV, based on dijet events that are reconstructed offline in the full data sample corresponding to an integrated luminosity of 36\fbinv. The search for broad resonances uses the same selected events as does the high-mass search for narrow resonances. \subsection{Models} We present model independent results for $s$-channel dijet resonances and apply the results to the following narrow dijet resonances predicted by eleven benchmark models: \begin{enumerate} \item[1.] String resonances~\cite{Anchordoqui:2008di,Cullen:2000ef}, which are the Regge excitations of the quarks and gluons in string theory. There are multiple mass-degenerate states with various spin and color multiplicities. The $\Pq\Pg$ states dominate the cross section for all masses considered. \item[2.] Scalar diquarks, which decay to $\cPq\cPq$ and \cPaq\hspace{1 pt}\cPaq, predicted by a grand unified theory based on the E$_6$ gauge symmetry group~\cite{ref_diquark}. The coupling constant is conventionally assumed to be of electromagnetic strength. \item[3.] Mass-degenerate excited quarks (\ensuremath{\PQq^*}\xspace), which decay to \cPq\cPg, predicted in quark compositeness models~\cite{ref_qstar,Baur:1989kv}; the compositeness scale is set to be equal to the mass of the excited quark. We consider production and decay of the first generation of excited quarks and antiquarks ($\PQu^*$, $\PQd^*$, $\PAQu^*$, and $\PAQd^*$) via quark-gluon fusion ($\cPq\cPg \rightarrow \ensuremath{\PQq^*}\xspace \rightarrow \cPq\cPg$). We do not include production or decay via contact interactions ($\cPq\cPq \rightarrow \cPq\ensuremath{\PQq^*}\xspace$)~\cite{Baur:1989kv}. \item[4--5.] 
Axigluons and colorons, axial-vector and vector particles, which are predicted in the chiral color~\cite{ref_axi} and the flavor-universal coloron~\cite{ref_coloron} models, respectively. These are massive color-octet particles, which decay to \cPq\cPaq. The coloron coupling parameter is set at its minimum value $\cot\theta=1$~\cite{ref_coloron}, which gives identical production cross section values for colorons and axigluons. \item[6.] Color-octet scalars~\cite{Han:2010rf}, which decay to \cPg\cPg, appear in dynamical electroweak symmetry breaking models such as technicolor. The value of the squared anomalous coupling of color-octet scalars to gluons is chosen to be $k_\mathrm{s}^2=1/2$~\cite{Chivukula:2014pma}. \item[7--8.] New gauge bosons (\PWpr and \PZpr), which decay to \cPq\cPaq, predicted by models that include new gauge symmetries~\cite{ref_gauge}; the \PWpr and \PZpr\ bosons are assumed to have standard-model-like couplings. \item[9.] Randall--Sundrum (RS) gravitons (G), which decay to \cPq\cPaq\ and \cPg\cPg, predicted in the RS model of extra dimensions~\cite{ref_rsg}. The value of the dimensionless coupling $k/\overline{M}_\text{Pl}$ is chosen to be 0.1, where $k$ is the curvature scale and $\overline{M}_\text{Pl}$ is the reduced Planck mass. \item[10.] Dark matter mediators, which decay to \cPq\cPaq\ and pairs of DM particles, are the mediators of an interaction between quarks and dark matter~\cite{Chala:2015ama,Abercrombie:2015wmb,Boveia:2016mrp,Abdallah:2015ter}. For the DM mediator we follow the recommendations of Ref.~\cite{Boveia:2016mrp} on the model choice and coupling values, using a simplified model~\cite{Abdallah:2015ter} of a spin-1 mediator decaying only to $\PQq\PAQq$ and pairs of DM particles, with an unknown mass \ensuremath{m_{\text{DM}}}\xspace, and with a universal quark coupling $\ensuremath{g_\PQq}\xspace = 0.25$ and a DM coupling $\ensuremath{g_{\text{DM}}}\xspace=1.0$. \item[11.] 
Leptophobic $\PZpr$ resonances~\cite{Dobrescu:2013coa}, which decay to \cPq\cPaq\ only, with a universal quark coupling $\ensuremath{g_\PQq}\xspace^{\prime}$ related to the coupling of Ref.~\cite{Dobrescu:2013coa} by $\ensuremath{g_\PQq}\xspace^{\prime}=g_B/6$. \end{enumerate} \section{Measurement} \label{sec:reco} \subsection{Detector} A detailed description of the CMS detector and its coordinate system, including definitions of the azimuthal angle $\phi$ (in radians) and pseudorapidity variable $\eta$, is given in Ref.~\cite{refCMS}. The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter providing an axial field of 3.8\unit{T}. Within the solenoid volume are located the silicon pixel and strip tracker ($\abs{\eta}<2.4$) and the barrel and endcap calorimeters ($\abs{\eta}<3.0$), consisting of a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter. An iron and quartz-fiber hadron forward calorimeter is located in the region $3.0<\abs{\eta}<5.0$, outside the solenoid volume. \subsection{Reconstruction} A particle-flow (PF) event algorithm is used to reconstruct and identify each individual particle with an optimized combination of information from the various elements of the CMS detector~\cite{CMS-PRF-14-001}. Particles are classified as muons, electrons, photons, and either charged or neutral hadrons. Jets are reconstructed either from particles identified by the PF algorithm, yielding ``PF-jets'', or from energy deposits in the calorimeters, yielding ``Calo-jets''. The PF-jets, reconstructed offline, are used for the high-mass search, while Calo-jets, reconstructed at the HLT, are used for the low-mass search. To reconstruct either type of jet, we use the anti-\kt algorithm~\cite{Cacciari:2005hq,Cacciari:2008gp} with a distance parameter of 0.4, as implemented in the \textsc{FastJet} package~\cite{Cacciari:2011ma}. 
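As an illustrative aside (not part of the CMS or \textsc{FastJet} software), the anti-\kt combination rule can be sketched in a few lines of Python. This is a schematic $O(n^3)$ toy with massless, E-scheme-combined inputs; \textsc{FastJet} implements the same rule far more efficiently.

```python
import math

def four_vec(pt, eta, phi):
    """Massless four-momentum (px, py, pz, E) from (pt, eta, phi)."""
    return (pt * math.cos(phi), pt * math.sin(phi),
            pt * math.sinh(eta), pt * math.cosh(eta))

def pt(p):
    return math.hypot(p[0], p[1])

def eta(p):
    return math.asinh(p[2] / pt(p))

def phi(p):
    return math.atan2(p[1], p[0])

def dr2(a, b):
    """Squared (eta, phi) distance with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi(a) - phi(b) + math.pi) % (2 * math.pi) - math.pi
    return (eta(a) - eta(b)) ** 2 + dphi ** 2

def antikt(particles, R=0.4):
    """Schematic anti-kt clustering: repeatedly take the smallest of
    d_ij = min(kt_i^-2, kt_j^-2) * dR_ij^2 / R^2 and d_iB = kt_i^-2."""
    objs = list(particles)
    jets = []
    while objs:
        # beam-distance candidates (i, None) and pair candidates (i, j)
        cands = [(pt(a) ** -2, i, None) for i, a in enumerate(objs)]
        cands += [(min(pt(objs[i]) ** -2, pt(objs[j]) ** -2)
                   * dr2(objs[i], objs[j]) / R ** 2, i, j)
                  for i in range(len(objs)) for j in range(i + 1, len(objs))]
        _, i, j = min(cands, key=lambda c: c[0])
        if j is None:
            jets.append(objs.pop(i))      # beam distance smallest: final jet
        else:
            merged = tuple(x + y for x, y in zip(objs[i], objs[j]))
            objs.pop(j); objs.pop(i)      # pop j first since j > i
            objs.append(merged)
    return jets
```

Because the distance measure is weighted by the inverse squared transverse momenta, hard particles accrete their soft neighbourhood first, which is what makes anti-\kt jets cone-like and well suited to the jet energy calibrations described below.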
For the high-mass search, at least one reconstructed vertex is required. The reconstructed vertex with the largest value of summed physics-object $\pt^2$ is taken to be the primary $\Pp\Pp$ interaction vertex. Here the physics objects are the jets made of tracks, clustered using the jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the \pt of those jets. For PF-jets, charged PF candidates not originating from the primary vertex are removed prior to the jet finding. For both PF-jets and Calo-jets, an event-by-event correction based on the jet area~\cite{jetarea_fastjet_pu,Khachatryan:2016kdb} is applied to the jet energy to remove the estimated contribution from additional collisions in the same or adjacent bunch crossings (pileup). \subsection{Trigger and minimum dijet mass} \label{sec:trigger} Events are selected using a two-tier trigger system~\cite{Khachatryan:2016bia}. Events satisfying loose jet requirements at the first level (L1) trigger are examined by the HLT. We use single-jet triggers that require a jet in the event to satisfy a predefined \pt threshold. We also use triggers that require \HT to exceed a predefined threshold, where \HT is the scalar sum of the \pt of all jets in the event with $\abs{\eta}<3.0$. Both PF-jets and Calo-jets are available at the HLT. For the high-mass search, the full event information is reconstructed if the event satisfies the HLT trigger. In the early part of the data taking period, the HLT trigger required $\HT>800$\GeV, with \HT calculated using PF-jets with $\pt>30$\GeV. For the remainder of the run, an HLT trigger requiring $\HT>900$\GeV with this same jet \pt threshold was used. The latter \HT trigger suffered from an efficiency loss, arising in the \HT trigger at L1, towards the end of the data taking period used in this analysis. 
To recover the lost efficiency we used single-jet triggers at the HLT that did not rely on the \HT trigger at L1 but instead used an efficient single-jet trigger at L1. There were three such triggers at the HLT: the first requiring a PF-jet with $\pt>500$\GeV, a second requiring a Calo-jet with $\pt>500$\GeV, and a third requiring a PF-jet with an increased distance parameter of 0.8 and $\pt>450$\GeV. The trigger used for the high-mass search was the logical OR of these three single-jet triggers and the two \HT triggers. We select events with $\ensuremath{m_{\mathrm{jj}}}\xspace>\RECOminMjjCut$, where the dijet mass is fully reconstructed offline using wide jets, defined later. For this selection, the combined L1 trigger and HLT was found to be fully efficient for the full 36\fbinv sample, as shown in Fig.~\ref{figTriggerEff}. Here the absolute trigger efficiency is measured using a sample acquired with an orthogonal trigger requiring muons with $\pt>45$\GeV at the HLT. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_001-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_001-b.pdf} \caption{ The efficiency of the trigger for the low-mass search (\cmsLeft) and the high-mass search (\cmsRight) as a function of dijet mass for wide jets, defined in Section~\ref{sec:wideJets}, after all jet calibrations and event selections discussed in Section~\ref{sec:reco}. The horizontal lines on the data points show the variable bin sizes.} \label{figTriggerEff} \end{figure} The data scouting technique is used for the low-mass search. When an event passes a data scouting trigger, the Calo-jets reconstructed at the HLT are saved along with the event energy density and the missing transverse momentum reconstructed from the calorimeter. The energy density is defined for each event as the median calorimeter energy per unit area calculated in a grid of $\eta-\phi$ cells~\cite{Khachatryan:2016kdb} covering the calorimeter acceptance. 
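The two ingredients just described, the median energy density $\rho$ and the area-based pileup subtraction $\pt \to \pt - \rho A$, can be sketched schematically as follows. The cell list below is a hypothetical toy event, not CMS data; the actual corrections involve additional residual terms.

```python
import statistics

def event_energy_density(cells):
    """rho: median transverse-energy density over (et, area) grid cells.
    The median is robust against the few cells that contain hard jets."""
    return statistics.median(et / area for et, area in cells)

def pileup_corrected_pt(raw_pt, jet_area, rho):
    """Area-based pileup subtraction, floored at zero."""
    return max(0.0, raw_pt - rho * jet_area)

# Hypothetical event: uniform pileup of about 2 GeV per unit area,
# plus one cell struck by a hard jet that the median ignores.
cells = [(2.1, 1.0), (1.9, 1.0), (2.0, 1.0), (2.05, 1.0), (35.0, 1.0)]
rho = event_energy_density(cells)                  # 2.05 GeV per unit area
corrected = pileup_corrected_pt(100.0, 0.5, rho)   # 100 - 2.05 * 0.5
```

Taking the median rather than the mean is the essential design choice: the handful of cells containing jet energy would otherwise bias the pileup estimate upward.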
The shorter time required for the reconstruction of the calorimetric quantities and the reduced size of the data recorded for these events allow a reduced \HT threshold compared to the high-mass search. For the low-mass search, Calo-jets with $\pt>40$\GeV are used to compute \HT. The trigger threshold is $\HT>250$\GeV, and we select events with $\ensuremath{m_{\mathrm{jj}}}\xspace>\CALOminMjjCut$ for which the trigger is fully efficient, as shown in Fig.~\ref{figTriggerEff}. Here the trigger efficiency is measured using a prescaled sample acquired with a data scouting trigger which required only that the event passed the jet trigger at L1 with $\HT>175$\GeV. This L1 trigger is also fully efficient for $\ensuremath{m_{\mathrm{jj}}}\xspace>\CALOminMjjCut$, measured using another prescaled sample acquired with an even looser trigger with effectively no requirements (zero-bias) at L1 and requiring at least one Calo-jet with $\pt>40$\GeV at the HLT. Unlike the high-mass search, there were no single-jet triggers at the HLT in data scouting that would allow for the recovery of the inefficiency in the L1 trigger in 9\fbinv of data at the end of the run, so only the first 27\fbinv of integrated luminosity was used for the low-mass search. The trigger efficiencies for the low-mass and high-mass regions are shown as functions of dijet mass in Fig.~\ref{figTriggerEff}. The binning choices are the same as those adopted for the dijet mass spectra: bins of width approximately equal to the dijet mass resolution determined from simulation. All dijet mass bin edges and widths throughout this paper are the same as those used by previous dijet resonance searches performed by the CMS collaboration~\cite{Sirunyan:2016iap,Khachatryan:2015dcf,Khachatryan:2016ecr, Khachatryan:2015sja,Chatrchyan:2013qhXX,CMS:2012yf,Chatrchyan2011123,Khachatryan:2010jd}. Fig.~\ref{figTriggerEff} illustrates that the searches are fully efficient for the chosen dijet mass thresholds. 
For the purpose of our search, full efficiency requires the measured trigger inefficiency in a bin to be less than the fractional statistical uncertainty in the number of events in the same bin in the dijet mass spectrum. For example, the measured trigger efficiency in the bin between 1246 and 1313\GeV in Fig.~\ref{figTriggerEff}~(right) is $99.95\pm0.02$\%, giving a trigger inefficiency of 0.05\% in that bin, which is less than the statistical uncertainty of 0.08\% arising from the 1.6 million events in that same bin of the dijet mass spectrum. This criterion for choosing the dijet mass thresholds, $\ensuremath{m_{\mathrm{jj}}}\xspace>\RECOminMjjCut$ for the high-mass search and $\ensuremath{m_{\mathrm{jj}}}\xspace>\CALOminMjjCut$ for the low-mass search, ensures that the search results are not biased by the trigger inefficiency. \subsection{Offline calibration and jet identification} The jet momenta and energies are corrected using calibration constants obtained from simulation, test beam results, and $\Pp\Pp$ collision data at $\sqrt{s}=13$\TeV. The methods described in Ref.~\cite{Khachatryan:2016kdb} are applied using all in situ calibrations obtained from the current data, and fit with analytic functions so the calibrations are forced to be smooth functions of \pt. All jets, the PF-jets in the high-mass search and Calo-jets in the low-mass search, are required to have $\pt>30$\GeV and $\abs{\eta}<2.5$. The two jets with largest \pt are defined as the leading jets. Jet identification (ID) criteria are applied to remove spurious jets associated with calorimeter noise as well as those associated with muon and electron candidates that are either mis-reconstructed or isolated~\cite{CMS:2017wyc}. For all PF-jets, the jet ID requires that the neutral hadron and photon energies are less than 90\% of the total jet energy. 
For PF-jets that satisfy $\abs{\eta}<2.4$, within the fiducial tracker coverage, the jet ID additionally requires that the jet has non-zero charged hadron energy, and muon and electron energies less than 80 and 90\% of the total jet energy, respectively. The jet ID for Calo-jets requires that the jet be detected by both the electromagnetic and hadron calorimeters with the fraction of jet energy deposited within the electromagnetic calorimeter between 5 and 95\% of the total jet energy. An event is rejected if either of the two leading jets fails the jet ID criteria. These requirements are sufficient to reduce background events from detector noise and other sources to a negligible level. \subsection{Wide jet reconstruction and event selection} \label{sec:wideJets} Spatially close jets are combined into ``wide jets'' and used to determine the dijet mass, as in the previous CMS searches~\cite{Sirunyan:2016iap,Khachatryan:2015dcf,Khachatryan:2016ecr,Chatrchyan2011123,CMS:2012yf,Chatrchyan:2013qhXX,Khachatryan:2015sja}. The wide-jet algorithm, designed for dijet resonance event reconstruction, reduces the analysis sensitivity to gluon radiation from the final-state partons. The two leading jets are used as seeds and the four-vectors of all other jets, if within $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^2 + (\Delta\phi)^2}}<1.1$, are added to the nearest leading jet to obtain two wide jets, which then form the dijet system. The dijet mass is the invariant mass of the two wide jets, i.e. the norm of the summed four-momentum of the dijet system. The wide jet algorithm thereby collects hard gluon radiation, satisfying the jet requirement $\pt>30$\GeV and found near the two leading final-state partons, in order to improve the dijet mass resolution. This is preferable to only increasing the distance parameter within the anti-\kt algorithm to 1.1, which would include in the leading jets the unwanted soft energy from pileup and initial state radiation. 
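The wide-jet prescription just described reduces to seeding with the two leading jets and a single pass over the remaining jets. The sketch below is a simplified illustration of that prescription (massless input jets given as hypothetical $(\pt, \eta, \phi)$ tuples), not the analysis code.

```python
import math

def p4(jet):
    """Massless four-momentum (E, px, py, pz) from a (pt, eta, phi) tuple."""
    pt, eta, phi = jet
    return [pt * math.cosh(eta), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(eta)]

def delta_r(a, b):
    dphi = (a[2] - b[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a[1] - b[1], dphi)

def wide_jet_dijet_mass(jets, max_dr=1.1):
    """Seed two wide jets with the leading-pt jets, add every other jet to the
    nearer seed when dR < max_dr, and return the invariant mass of the pair."""
    jets = sorted(jets, reverse=True)          # leading pt first
    seeds = jets[:2]
    wide = [p4(s) for s in seeds]
    for j in jets[2:]:
        d = [delta_r(j, s) for s in seeds]
        k = 0 if d[0] <= d[1] else 1
        if d[k] < max_dr:                      # absorb nearby radiation
            wide[k] = [a + b for a, b in zip(wide[k], p4(j))]
    e, px, py, pz = (a + b for a, b in zip(*wide))
    return math.sqrt(max(0.0, e * e - px * px - py * py - pz * pz))
```

For a hypothetical back-to-back pair of 1\TeV jets the reconstructed mass is 2\TeV; adding a nearby 50\GeV radiation jet raises it slightly, illustrating how absorbing final-state radiation moves the mass back toward the parton-level value.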
The wide jet algorithm is similar to first increasing the distance parameter and then applying jet trimming~\cite{Krohn:2009th} to remove unwanted soft energy. The angular distribution of background from $t$-channel dijet events is similar to that for Rutherford scattering, approximately proportional to $1/[1-\tanh(\ensuremath{\abs{\Delta\eta}}\xspace/2)]^2$, which peaks at large values of $\ensuremath{\abs{\Delta\eta}}\xspace$. This background is suppressed by requiring the pseudorapidity separation of the two wide jets to satisfy $\ensuremath{\abs{\Delta\eta}}\xspace<1.3$. This requirement also makes the trigger efficiency in Fig.~\ref{figTriggerEff} turn on quickly, reaching a plateau at 100\% for relatively low values of dijet mass. This is because the jet \pt threshold of the trigger at a fixed dijet mass is more easily satisfied at low $\ensuremath{\abs{\Delta\eta}}\xspace$, as seen by the approximate relation $\ensuremath{m_{\mathrm{jj}}}\xspace\approx 2\pt\cosh(\ensuremath{\abs{\Delta\eta}}\xspace/2)$. The above requirements maximize the search sensitivity for isotropic decays of dijet resonances in the presence of dijet background from quantum chromodynamics (QCD). \subsection{Calibration of wide jets in the low-mass search} The jet energy scale of the low-mass search has been calibrated to be the same as the jet energy scale of the high-mass search. For the low-mass search, after wide jet reconstruction and event selection, we calibrate the wide jets reconstructed from Calo-jets at the HLT to have the same average response as the wide jets reconstructed from PF-jets. We use a smaller \textit{monitoring} data set, which includes both Calo-jets at the HLT and the fully reconstructed PF-jets, to measure the \pt difference between the two types of wide jets, as shown in Fig.~\ref{fig:FinalJEC}. \begin{figure}[hbt] \begin{center} \includegraphics[width=0.5\textwidth]{Figure_002.pdf} \caption{The calibration of jets in the low-mass analysis. 
The percent difference in data (points), between the \pt of the wide jets reconstructed from Calo-jets at the HLT and the wide jets reconstructed from PF-jets, is fit to a smooth parameterization (curve), as a function of the HLT \pt.} \label{fig:FinalJEC} \end{center} \end{figure} A dijet balance ``tag-and-probe'' method similar to that discussed in Ref.~\cite{Khachatryan:2016kdb} is used. One of the two jets in the dijet system is designated as the tag jet, and the other is designated as the probe jet, and the \pt difference between Calo-jets at the HLT and fully reconstructed PF-jets is measured for the probe jet as a function of the \pt of the tag PF-jet. We avoid jet selection bias of the probe Calo-jet \pt, which would result from resolution effects on the steeply falling \pt spectrum, by measuring the \pt difference as a function of the \pt of the tag PF-jet instead of the \pt of the probe Calo-jet at the HLT. This calibration is then translated into a function of the average \pt of the probe Calo-jets measured within each bin of \pt of the tag PF-jets. Figure~\ref{fig:FinalJEC} shows this measurement of the \pt difference, as a function of jet \pt, from the monitoring data set. The measured points are fit with a parameterization and the resulting smooth curve is used to calibrate the wide jets in the low-mass search. \subsection{Dijet data and QCD background predictions} As the dominant background for this analysis is expected to be the QCD production of two or more jets, we begin by performing comparisons of the data to QCD background predictions for the dijet events. The predictions are based upon a sample of 56 million Monte Carlo events produced with the \PYTHIA8.205~\cite{Sjostrand:2007gs} program with the CUETP8M1 tune~\cite{Khachatryan:2015pea,Skands:2014pea} and including a \GEANTfour-based \cite{refGEANT} simulation of the CMS detector. 
The QCD background predictions are normalized to the data by multiplying them by a factor of 0.87 for the high-mass search and by a factor of 0.96 for the low-mass search, so that for each search the prediction for the total number of events agrees with the number observed. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_003-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_003-b.pdf} \caption{ The azimuthal angular separation between the two wide jets (in radians) from the low-mass search (\cmsLeft) and the high-mass search (\cmsRight). Data (points) are compared to QCD predictions from the \PYTHIA8 MC including detector simulation (histogram) normalized to the data.} \label{figDeltaPhi} \end{figure} In Fig.~\ref{figDeltaPhi}, we observe that the measured azimuthal separation of the two wide jets, $\Delta\phi$, displays the ``back-to-back'' distribution expected from QCD dijet production. The strong peak at $\Delta\phi=\pi$, with very few events in the region $\Delta\phi\sim 0$, shows that the data sample is dominated by genuine parton-parton scattering, with negligible backgrounds from detector noise or other nonphysical sources that would produce events more isotropic in $\Delta\phi$. In Fig.~\ref{figDeltaEta}, we observe that the dijet $\ensuremath{\abs{\Delta\eta}}\xspace$ distribution is dominated by $t$-channel parton exchange, as expected for the QCD production of two jets. Note that the production rate increases with increasing $\ensuremath{\abs{\Delta\eta}}\xspace$, whereas $s$-channel signals from most models of dijet resonances would decrease with increasing $\ensuremath{\abs{\Delta\eta}}\xspace$. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_004-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_004-b.pdf} \caption{ The pseudorapidity separation between the two wide jets from the low-mass search (\cmsLeft) and the high-mass search (\cmsRight). 
Data (points) are compared to QCD predictions from the \PYTHIA8 MC including detector simulation (histogram) normalized to the data.} \label{figDeltaEta} \end{figure} In Fig.~\ref{figDijetMass}, we observe that the number of dijets produced falls steeply and smoothly as a function of dijet mass. The observed dijet mass distributions are very similar to the QCD prediction from \PYTHIA, which includes a leading order QCD calculation and parton shower effects. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_005-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_005-b.pdf} \caption{ The dijet mass of the two wide jets from the low-mass search (\cmsLeft) and the high-mass search (\cmsRight). Data (points) are compared to QCD predictions from the \PYTHIA8 MC including detector simulation (histogram) normalized to the data. The horizontal lines on the data points show the variable bin sizes.} \label{figDijetMass} \end{figure} In Fig.~\ref{figDijetMassPowheg}, we also compare the dijet mass data to a next-to-leading order (NLO) QCD prediction from \POWHEG 2.0~\cite{Alioli:2010xd} normalized to the data. For this prediction, we used 10 million dijet events from an NLO calculation of two jet production~\cite{Alioli:2010xa} using NNPDF3.0 NLO parton distribution functions~\cite{Ball:2014uwa}, interfaced with the aforementioned \PYTHIA8 parton shower and simulation of the CMS detector. The \POWHEG prediction models the data better than the \PYTHIA prediction does. It is clear from these comparisons that the dijet mass data behave approximately as expected from QCD predictions. However, the intrinsic uncertainties associated with QCD calculations make them unreliable estimators of the backgrounds in dijet resonance searches. Instead we will use the dijet data to estimate the background. 
\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{Figure_006.pdf} \caption{ The dijet mass distribution of the two wide jets from the high-mass search. (Upper) Data (points) are compared to predictions from the \POWHEG MC in red (darker) and the \PYTHIA8 MC in green (lighter), including detector simulation, each normalized to the data. (Lower) The ratio of data to the \POWHEG prediction, compared to unity and compared to the ratio of the \PYTHIA8 MC to the \POWHEG prediction. The horizontal lines on the data points show the variable bin sizes.} \label{figDijetMassPowheg} \end{figure} \clearpage \section{Search for narrow dijet resonances} \subsection{Dijet mass spectra and background parameterizations} Figure~\ref{figDataAndFit} shows the dijet mass spectra, defined as the observed number of events in each bin divided by the integrated luminosity and the bin width. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_007-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_007-b.pdf} \caption{ Dijet mass spectra (points) compared to a fitted parameterization of the background (solid curve) for the low-mass search (\cmsLeft) and the high-mass search (\cmsRight). The horizontal lines on the data points show the variable bin sizes. The lower panel in each plot shows the difference between the data and the fitted parametrization, divided by the statistical uncertainty of the data. 
Examples of predicted signals from narrow gluon-gluon, quark-gluon, and quark-quark resonances are shown with cross sections equal to the observed upper limits at 95\% \CL.} \label{figDataAndFit} \end{figure} The dijet mass spectrum for the high-mass search is fit with the parameterization \begin{equation} \frac{{\rd}\sigma}{{\rd}\ensuremath{m_{\mathrm{jj}}}\xspace} = \frac{P_{0} (1 - x)^{P_{1}}}{x^{P_{2} + P_{3} \ln{(x)}}}, \label{eqBackgroundParam} \end{equation} where $x=\ensuremath{m_{\mathrm{jj}}}\xspace/\sqrt{s}$; and $P_0$, $P_1$, $P_2$, and $P_3$ are four free fit parameters. The chi-squared per number of degrees of freedom of the fit is $\chi^2/\mathrm{NDF}=38.9/39$. The functional form in Eq.~(\ref{eqBackgroundParam}) was also used in previous searches \cite{Sirunyan:2016iap,Khachatryan:2016ecr,Khachatryan:2015dcf,Khachatryan:2010jd,Chatrchyan2011123,CMS:2012yf,Chatrchyan:2013qhXX,Khachatryan:2015sja, ATLAS:2015nsi,ATLAS2010,Aad:2011aj,Aad201237,ATLAS:2012pu,Aad:2014aqa,refCDFrun2} to describe the data. For the low-mass search we used the following parameterization, which includes one additional parameter $P_4$, to fit the dijet mass spectrum: \begin{equation} \frac{{\rd}\sigma}{{\rd}\ensuremath{m_{\mathrm{jj}}}\xspace} = \frac{P_{0} (1 - x)^{P_{1}}}{x^{P_{2} + P_{3} \ln{(x)} + P_{4} \ln^2{(x)}}}. \label{eqBackgroundParam5} \end{equation} Equation~(\ref{eqBackgroundParam5}) with five parameters gives $\chi^2/\mathrm{NDF}=20.3/20$ when fit to the low-mass data, which is better than the $\chi^2/\mathrm{NDF}=27.9/21$ obtained using the four parameter functional form in Eq.~(\ref{eqBackgroundParam}). An F-test with a size $\alpha=0.05$~\cite{FisherTest} was used to confirm that no additional parameters are needed to model these distributions, i.e. 
in the low-mass search including an additional term $P_5\ln^3{(x)}$ in Eq.~(\ref{eqBackgroundParam5}) gave $\chi^2/\mathrm{NDF}=20.1/19$, which corresponds to a smaller $p$-value than the fit with five parameters, and this six-parameter functional form was found to be unnecessary by the Fisher F-test. The historical development of this family of parameterizations is discussed in Ref.~\cite{Harris:2011bh}. The functional forms of Eqs.~(\ref{eqBackgroundParam}) and (\ref{eqBackgroundParam5}) are motivated by QCD calculations, where the term in the numerator behaves like the parton distribution functions at an average fractional momentum $x$ of the two partons, and the term in the denominator gives a mass dependence similar to the QCD matrix elements. In Fig.~\ref{figDataAndFit}, we show the result of the binned maximum likelihood fits, performed independently for the low-mass and high-mass searches. The dijet mass spectra are well modeled by the background fits. The lower panels of Fig.~\ref{figDataAndFit} show the pulls of the fit, which are the bin-by-bin differences between the data and the background fit divided by the statistical uncertainty of the data. In the overlap region of the dijet mass between 1.2 and 2.0\TeV, the pulls of the fit are not identical in the two searches because the fluctuations in reconstructed dijet mass for Calo-jets and PF-jets are not fully correlated. \subsection{Signal shapes, injection tests, and significance} Examples of dijet mass distributions for narrow resonances generated with the \PYTHIA8.205 program with the CUETP8M1 tune and including a \GEANTfour-based simulation of the CMS detector are shown in Fig.~\ref{figDataAndFit}. The quark-quark ($\PQq\PQq$) resonances are modeled by $\PQq\PAQq\to \PXXG \to \PQq\PAQq$, the quark-gluon ($\PQq\Pg$) resonances are modeled by $\PQq\Pg\to \ensuremath{\PQq^*}\xspace \to \PQq\Pg$, and the gluon-gluon ($\Pg\Pg$) resonances are modeled by $\Pg\Pg\to \PXXG \to \Pg\Pg$.
The signal distributions shown in Fig.~\ref{figDataAndFit} are for $\PQq\PQq$, $\PQq\Pg$, and $\Pg\Pg$ resonances with signal cross sections corresponding to the limits at 95\% confidence level (\CL) obtained by this analysis, as described below. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_008-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_008-b.pdf} \caption{ Signal shapes of narrow resonances with masses of 0.5, 1, and 2\TeV in the low-mass search (\cmsLeft) and masses of 2, 4, 6, and 8\TeV in the high-mass search (\cmsRight). These reconstructed dijet mass spectra show wide jets from the {\PYTHIA8} MC event generator including simulation of the CMS detector. } \label{figSignalShapes} \end{figure} A more detailed view of the narrow-resonance signal shapes is provided in Fig.~\ref{figSignalShapes}. The predicted mass distributions have Gaussian cores from jet energy resolution, and tails towards lower mass values primarily from QCD radiation. The observed width depends on the parton content of the resonance ($\PQq\PQq$, $\PQq\Pg$, or $\Pg\Pg$). The dijet mass resolution within the Gaussian core of gluon-gluon (quark-quark) resonances in Fig.~\ref{figSignalShapes} varies from 15\,(11)\% at a resonance mass of 0.5\TeV to 7.5\,(6.3)\% at 2\TeV for wide jets reconstructed using Calo-Jets, and varies from 6.2\,(5.2)\% at 2\TeV to 4.8\,(4.0)\% at 8\TeV for wide jets reconstructed using PF-Jets. This total observed resolution for the parton-parton resonance includes theoretical contributions, arising from the parton shower and other sources, in addition to purely experimental contributions arising from uncertainties in measurements of the particles forming the jets. The contribution of the low mass tail to the line shape also depends on the parton content of the resonance. Resonances decaying to gluons, which emit more QCD radiation than quarks, are broader and have a more pronounced tail. 
For the high-mass resonances, there is also a significant contribution that depends both on the parton distribution functions and on the natural width of the Breit--Wigner distribution. The low-mass component of the Breit--Wigner distribution of the resonance is amplified by the rise of the parton distribution function at low fractional momentum, as discussed in Section~7.3 of Ref.~\cite{Sjostrand:2006za}. These effects cause a large tail at low mass values. Interference between the signal and the background processes is model dependent and not considered in this analysis. In some cases interference can modify the effective signal shape appreciably~\cite{Martin:2016bgw}. The signal shapes in the quark-quark channel come from quark-antiquark (\Pq\Paq) resonances, which likely have a longer tail caused by parton distribution effects than that for diquark (\Pq\Pq) resonances, tending to make the quoted limits in the quark-quark channel conservative when applied to diquark signals. Signal injection tests were performed to investigate the potential bias introduced through the choice of background parameterization. Two alternative parameterizations were found that model the dijet mass data using different functional forms: \begin{equation} \frac{{\rd}\sigma}{{\rd}\ensuremath{m_{\mathrm{jj}}}\xspace} = P_0\exp(P_1x^{P_2} + P_3(1-x)^{P_4}) \label{eqAltParam1} \end{equation} and \begin{equation} \frac{{\rd}\sigma}{{\rd}\ensuremath{m_{\mathrm{jj}}}\xspace} = \frac{P_0}{x^{P_1}}\exp(-P_2x - P_3x^2 - P_4x^3). \label{eqAltParam2} \end{equation} Pseudo-data were generated, assuming a signal and these alternative parameterizations of the background, and then were fit with the nominal parameterization given in Eq.~(\ref{eqBackgroundParam5}). The bias in the extracted signal was found to be negligible. There is no evidence for a narrow resonance in the data.
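The nominal and alternative background forms used in the injection tests can be sketched numerically. The following is a minimal illustration in Python, not the analysis fitting code; the parameter values used below are purely illustrative and do not reproduce the fitted $P_i$:

```python
import math

SQRT_S = 13000.0  # GeV, for sqrt(s) = 13 TeV

def nominal_bkg(mjj, p0, p1, p2, p3, p4=0.0):
    # Nominal form: P0 (1-x)^P1 / x^(P2 + P3 ln x + P4 ln^2 x).
    # With p4 = 0 this reduces to the four-parameter form of the
    # high-mass search.
    x = mjj / SQRT_S
    lnx = math.log(x)
    return p0 * (1.0 - x) ** p1 / x ** (p2 + p3 * lnx + p4 * lnx ** 2)

def alt_bkg_1(mjj, p0, p1, p2, p3, p4):
    # First alternative form: P0 exp(P1 x^P2 + P3 (1-x)^P4).
    x = mjj / SQRT_S
    return p0 * math.exp(p1 * x ** p2 + p3 * (1.0 - x) ** p4)

def alt_bkg_2(mjj, p0, p1, p2, p3, p4):
    # Second alternative form: (P0 / x^P1) exp(-P2 x - P3 x^2 - P4 x^3).
    x = mjj / SQRT_S
    return p0 / x ** p1 * math.exp(-p2 * x - p3 * x ** 2 - p4 * x ** 3)
```

In an injection test, pseudo-data would be drawn from one of the alternative forms plus an assumed signal shape and then refit with `nominal_bkg` to check that the extracted signal yield is unbiased.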
The $p$-values of the background fits are $0.47$ for the high-mass search and $0.44$ for the low-mass search, indicating that the background hypothesis is an adequate description of the data. Using the statistical methodology discussed in Section~\ref{sec:statistics}, the local significance for $\PQq\PQq$, $\PQq\Pg$, and $\Pg\Pg$ resonance signals was measured from 0.6 to 1.6\TeV in 50-\GeVns steps in the low-mass search, and from 1.6 to 8.1\TeV in 100-\GeVns steps in the high-mass search. The significance values obtained for $\PQq\PQq$ resonances are shown in Fig.~\ref{fig:pfsignif}. The most significant excess of the data relative to the background fit comes from the two consecutive bins between 0.79 and 0.89\TeV. Fitting these data to $\PQq\PQq$, $\PQq\Pg$, and $\Pg\Pg$ resonances with a mass of 0.85\TeV yields local significances, including systematic uncertainties, of 1.2, 1.6, and 1.9 standard deviations, respectively. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{Figure_009-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_009-b.pdf} \caption{Local significance for a narrow resonance from the low-mass search (\cmsLeft) and the high-mass search (\cmsRight).} \label{fig:pfsignif} \end{figure} \section{Limits on narrow resonances} We use the dijet mass spectrum from wide jets, the background parameterization, and the dijet resonance shapes to set limits on the production cross section of new particles decaying to the parton pairs $\PQq\PQq$ (or $\PQq\PAQq$), $\PQq\Pg$, and $\Pg\Pg$. A separate limit is determined for each final state because of the dependence of the dijet resonance shape on the types of the two final-state partons.
\subsection{Systematic uncertainty and statistical methodology} \label{sec:statistics} The dominant sources of systematic uncertainty are the jet energy scale and resolution, integrated luminosity, and the values of the parameters within the functional form modeling the background shape in the dijet mass distribution. The uncertainty in the jet energy scale in both the low-mass and the high-mass search is 2\%\xspace and is determined from $\sqrt{s}=13$\TeV data using the methods described in Ref.~\cite{Khachatryan:2016kdb}. This uncertainty is propagated to the limits by shifting the dijet mass shape for signal by $\pm$2\%\xspace. The uncertainty in the jet energy resolution translates into an uncertainty of 10\% in the resolution of the dijet mass~\cite{Khachatryan:2016kdb}, and is propagated to the limits by observing the effect of increasing and decreasing by 10\% the reconstructed width of the dijet mass shape for signal. The uncertainty in the integrated luminosity is 2.5\%\xspace~\cite{CMS-PAS-LUM-17-001}, and is propagated to the normalization of the signal. Changes in the values of the parameters describing the background introduce a change in the signal yield, which is accounted for as a systematic uncertainty as discussed in the next paragraph. The asymptotic approximation~\cite{Cowan:2010js} of the modified frequentist \CLs method~\cite{Junk1999,bib-cls} is utilized to set upper limits on signal cross sections, following the prescription described in Ref.~\cite{ATLAS:1379837}. We use a multi-bin counting experiment likelihood, which is a product of Poisson distributions corresponding to different bins. We evaluate the likelihood independently at each value of the resonance pole mass from 0.6 to 1.6\TeV in 50-\GeVns steps in the low-mass search, and from 1.6 to 8.1\TeV in 100-\GeVns steps in the high-mass search.
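The structure of such a multi-bin counting likelihood can be sketched as follows. This is a deliberately minimal illustration, not the actual limit-setting code; constant factorial terms are dropped from the negative log-likelihood, and nuisance parameters are omitted:

```python
import math

def nll(observed, background, signal_shape, sigma):
    # Negative log-likelihood for a product of Poisson bins.
    # The expected count in bin i is b_i + sigma * s_i, where s_i is the
    # per-bin signal shape normalized so that sigma plays the role of the
    # signal cross section (times branching fraction, acceptance, and
    # integrated luminosity).
    total = 0.0
    for n, b, s in zip(observed, background, signal_shape):
        mu = b + sigma * s
        total += mu - n * math.log(mu)  # Poisson term, n! dropped
    return total
```

Scanning `sigma` while reoptimizing the nuisance and background parameters at each point gives the profile-likelihood test statistic on which the asymptotic \CLs limits are based.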
The contribution from each hypothetical resonance signal is evaluated in every bin of dijet mass greater than the minimum dijet mass requirement in the search and less than 150\% of the resonance mass (e.g., the high-mass tail of a 1\TeV resonance is truncated, removing any contribution above a dijet mass of 1.5\TeV, but the low-mass tail is not truncated). The systematic uncertainties are implemented as nuisance parameters in the likelihood model, with Gaussian constraints for the jet energy scale and resolution, and log-normal constraints for the integrated luminosity. The systematic uncertainty in the background is automatically evaluated via profiling, effectively refitting for the optimal values of the background parameters for each value of the resonance cross section. This allows the background parameters to float freely to their most likely values for every value of the signal cross section. Since the observed data effectively constrain the sum of signal and background, the most likely value of the background decreases as the signal cross section increases. This methodology therefore yields a smaller background for larger signals than methodologies that hold the background parameters fixed within the likelihood, leading to larger probabilities for larger signals and hence higher upper limits on the signal cross section. The extent to which the background uncertainty affects the limit depends significantly on the signal shape and the resonance mass, with the largest effect occurring for the $\Pg\Pg$ resonances, because they are broader, and the smallest effect occurring for $\PQq\PQq$ resonances. The effect increases as the resonance mass decreases, and is most severe at the lowest resonance masses within each search, where the sideband at lower dijet mass that is used to constrain the background is smaller.
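The anticorrelation between the fitted background and the assumed signal can be seen in a single-bin caricature with an unconstrained background yield. This is a deliberately minimal sketch; the analysis profiles the shape parameters of the background function, not a single yield:

```python
def profiled_background(n, s):
    # For one Poisson bin with observed count n and fixed signal yield s,
    # an unconstrained background yield b maximizes the likelihood at
    # b_hat = max(n - s, 0): the fitted background absorbs whatever part
    # of the data the signal hypothesis does not account for.
    return max(float(n) - float(s), 0.0)
```

As `s` increases from 0 toward `n`, `b_hat` decreases linearly, so larger signal hypotheses retain higher likelihoods than they would with the background held fixed, which raises the resulting upper limit, as described above.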
The effect of the systematic uncertainties on the limit for $\PQq\PQq$ resonances is shown in Fig.~\ref{fig:systematics}. For almost all resonance mass values, the background systematic uncertainty produces the majority of the effect on the limit shown here. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{Figure_010-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_010-b.pdf} \caption{The observed (points) and expected (dashed) ratio between the 95\% \CL limit on the cross section, including systematic uncertainties, and the limit including statistical uncertainties only for dijet resonances decaying to quark-quark in the low-mass search (\cmsLeft) and in the high-mass search (\cmsRight).} \label{fig:systematics} \end{figure} \subsection{Limits on the resonance production cross section} Tables~\ref{tab:limits} and~\ref{tab:pflimits}, and Figs.~\ref{figLimitAll} and \ref{figLimitSummary}, show the model-independent observed upper limits at 95\% \CL on the product of the cross section ($\sigma$), the branching fraction to dijets ($B$), and the acceptance ($A$) for narrow resonances, with the kinematic requirements $\ensuremath{\abs{\Delta\eta}}\xspace<1.3$ for the dijet system and $\abs{\eta}<2.5$ for each of the jets. The acceptance of the minimum dijet mass requirement in each search has been evaluated separately for $\PQq\PQq$, $\PQq\Pg$, and $\Pg\Pg$ resonances, and has been taken into account by correcting the limits and therefore does not appear in the acceptance $A$. The resonance mass boundary of 1.6\TeV between the high- and low-mass searches was chosen to maintain a reasonable acceptance for the minimum dijet mass requirement imposed by the high-mass search. For a 1.6\TeV dijet resonance, the acceptance of the 1.25\TeV dijet mass requirement is 57\% for a gluon-gluon resonance, 76\% for a quark-gluon resonance, and 85\% for a quark-quark resonance.
At this resonance mass, the expected limits we find on $\sigma B A$ for a quark-quark resonance are the same in the high and low mass search. Figure \ref{figLimitAll} also shows the expected limits on $\sigma B A$ and their bands of uncertainty. The difference in the limits for $\PQq\PQq$, $\PQq\Pg$, and $\Pg\Pg$ resonances at the same resonance mass originates from the difference in their line shapes. For the RS graviton model, which decays to both $\PQq\PAQq$ and $\Pg\Pg$ final states, the upper limits on the cross section are derived using a weighted average of the $\PQq\PQq$ and $\Pg\Pg$ resonance shapes, where the weights correspond to the relative branching fractions for the two final states. \begin{table*}[hbtp] \topcaption{Limits from the low-mass search. The observed and expected upper limits at 95\% \CL on $\sigma B A$ for gluon-gluon, quark-gluon, and quark-quark resonances, and an RS graviton are given as a function of the resonance mass. \label{tab:limits}} \centering \cmsTable{ \begin{tabular}{cllllllll} \hline \multirow{3}{*}{Mass~[\TeVns{}]} & \multicolumn{8}{c}{95\% \CL upper limit [pb]} \\ &\multicolumn{2}{c}{$\cPg\cPg$} &\multicolumn{2}{c}{$\cPq\cPg$} &\multicolumn{2}{c}{$\cPq\cPq$} &\multicolumn{2}{c}{RS graviton} \\ & Observed & Expected & Observed & Expected & Observed & Expected & Observed & Expected \\\hline 0.60 & 3.93$\times 10^{+1}$ & 2.10$\times 10^{+1}$ & 3.37$\times 10^{+1}$ & 1.90$\times 10^{+1}$ & 1.38$\times 10^{+1}$ & 1.05$\times 10^{+1}$ & 2.59$\times 10^{+1}$ & 1.46$\times 10^{+1}$\\ 0.65 & 1.55$\times 10^{+1}$ & 1.77$\times 10^{+1}$ & 1.01$\times 10^{+1}$ & 1.14$\times 10^{+1}$ & 4.92$\times 10^0$ & 5.15$\times 10^0$ & 6.92$\times 10^0$ & 8.28$\times 10^0$\\ 0.70 & 6.14$\times 10^0$ & 1.12$\times 10^{+1}$ & 4.73$\times 10^0$ & 6.32$\times 10^0$ & 2.47$\times 10^0$ & 3.16$\times 10^0$ & 3.59$\times 10^0$ & 4.77$\times 10^0$\\ 0.75 & 5.50$\times 10^0$ & 8.13$\times 10^0$ & 3.82$\times 10^0$ & 4.68$\times 10^0$ & 2.64$\times 10^0$ 
& 2.49$\times 10^0$ & 3.57$\times 10^0$ & 3.67$\times 10^0$\\ 0.80 & 1.02$\times 10^{+1}$ & 7.15$\times 10^0$ & 5.73$\times 10^0$ & 4.06$\times 10^0$ & 3.14$\times 10^0$ & 2.14$\times 10^0$ & 4.39$\times 10^0$ & 3.11$\times 10^0$\\ 0.85 & 1.13$\times 10^{+1}$ & 5.93$\times 10^0$ & 5.45$\times 10^0$ & 3.33$\times 10^0$ & 2.46$\times 10^0$ & 1.79$\times 10^0$ & 4.35$\times 10^0$ & 2.55$\times 10^0$\\ 0.90 & 7.56$\times 10^0$ & 4.04$\times 10^0$ & 3.21$\times 10^0$ & 2.42$\times 10^0$ & 1.17$\times 10^0$ & 1.36$\times 10^0$ & 2.45$\times 10^0$ & 2.04$\times 10^0$\\ 0.95 & 3.23$\times 10^0$ & 3.32$\times 10^0$ & 1.40$\times 10^0$ & 1.86$\times 10^0$ & 7.30$\times 10^{-1}$ & 1.10$\times 10^0$ & 1.01$\times 10^0$ & 1.59$\times 10^0$\\ 1.00 & 1.66$\times 10^0$ & 2.60$\times 10^0$ & 9.67$\times 10^{-1}$ & 1.45$\times 10^0$ & 5.72$\times 10^{-1}$ & 9.06$\times 10^{-1}$ & 8.19$\times 10^{-1}$ & 1.23$\times 10^0$\\ 1.05 & 1.41$\times 10^0$ & 2.22$\times 10^0$ & 1.11$\times 10^0$ & 1.26$\times 10^0$ & 9.11$\times 10^{-1}$ & 7.90$\times 10^{-1}$ & 1.14$\times 10^0$ & 1.11$\times 10^0$\\ 1.10 & 2.06$\times 10^0$ & 1.96$\times 10^0$ & 1.90$\times 10^0$ & 1.13$\times 10^0$ & 1.51$\times 10^0$ & 7.11$\times 10^{-1}$ & 1.90$\times 10^0$ & 9.86$\times 10^{-1}$\\ 1.15 & 3.90$\times 10^0$ & 1.79$\times 10^0$ & 2.58$\times 10^0$ & 1.04$\times 10^0$ & 1.74$\times 10^0$ & 6.55$\times 10^{-1}$ & 1.87$\times 10^0$ & 9.12$\times 10^{-1}$\\ 1.20 & 4.49$\times 10^0$ & 1.63$\times 10^0$ & 2.74$\times 10^0$ & 9.49$\times 10^{-1}$ & 1.38$\times 10^0$ & 6.00$\times 10^{-1}$ & 1.91$\times 10^0$ & 8.39$\times 10^{-1}$\\ 1.25 & 3.48$\times 10^0$ & 1.45$\times 10^0$ & 2.04$\times 10^0$ & 8.64$\times 10^{-1}$ & 1.23$\times 10^0$ & 5.40$\times 10^{-1}$ & 1.96$\times 10^0$ & 7.72$\times 10^{-1}$\\ 1.30 & 3.58$\times 10^0$ & 1.26$\times 10^0$ & 2.00$\times 10^0$ & 7.60$\times 10^{-1}$ & 8.61$\times 10^{-1}$ & 4.85$\times 10^{-1}$ & 1.48$\times 10^0$ & 6.87$\times 10^{-1}$\\ 1.35 & 1.96$\times 10^0$ & 
1.11$\times 10^0$ & 1.01$\times 10^0$ & 6.62$\times 10^{-1}$ & 4.85$\times 10^{-1}$ & 4.24$\times 10^{-1}$ & 9.35$\times 10^{-1}$ & 6.01$\times 10^{-1}$\\ 1.40 & 1.14$\times 10^0$ & 9.55$\times 10^{-1}$ & 5.56$\times 10^{-1}$ & 5.71$\times 10^{-1}$ & 3.00$\times 10^{-1}$ & 3.69$\times 10^{-1}$ & 4.47$\times 10^{-1}$ & 5.16$\times 10^{-1}$\\ 1.45 & 6.32$\times 10^{-1}$ & 8.33$\times 10^{-1}$ & 3.52$\times 10^{-1}$ & 4.97$\times 10^{-1}$ & 1.86$\times 10^{-1}$ & 3.27$\times 10^{-1}$ & 2.75$\times 10^{-1}$ & 4.55$\times 10^{-1}$\\ 1.50 & 4.20$\times 10^{-1}$ & 7.23$\times 10^{-1}$ & 2.66$\times 10^{-1}$ & 4.30$\times 10^{-1}$ & 1.45$\times 10^{-1}$ & 2.84$\times 10^{-1}$ & 2.29$\times 10^{-1}$ & 4.00$\times 10^{-1}$\\ 1.55 & 3.57$\times 10^{-1}$ & 6.38$\times 10^{-1}$ & 1.93$\times 10^{-1}$ & 3.81$\times 10^{-1}$ & 1.44$\times 10^{-1}$ & 2.59$\times 10^{-1}$ & 1.97$\times 10^{-1}$ & 3.57$\times 10^{-1}$\\ 1.60 & 3.37$\times 10^{-1}$ & 5.58$\times 10^{-1}$ & 1.87$\times 10^{-1}$ & 3.45$\times 10^{-1}$ & 1.64$\times 10^{-1}$ & 2.35$\times 10^{-1}$ & 2.01$\times 10^{-1}$ & 3.20$\times 10^{-1}$\\ \hline \end{tabular}} \end{table*} \begin{table*}[hbtp] \topcaption{Limits from the high-mass search. The observed and expected upper limits at 95\% \CL on $\sigma B A$ for gluon-gluon, quark-gluon, and quark-quark resonances, and an RS graviton are shown as functions of the resonance mass. 
\label{tab:pflimits}} \centering \resizebox{0.80\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline \multirow{3}{*}{Mass~[\TeVns{}]} & \multicolumn{8}{c}{95\% \CL upper limit [pb]} \\ &\multicolumn{2}{c}{$\cPg\cPg$} &\multicolumn{2}{c}{$\cPq\cPg$} &\multicolumn{2}{c}{$\cPq\cPq$} &\multicolumn{2}{c}{RS graviton} \\ & Observed & Expected & Observed & Expected & Observed & Expected & Observed & Expected \\\hline 1.6 & 3.72$\times 10^{-1}$ & 6.72$\times 10^{-1}$ & 2.74$\times 10^{-1}$ & 4.08$\times 10^{-1}$ & 2.07$\times 10^{-1}$ & 2.38$\times 10^{-1}$ & 2.65$\times 10^{-1}$ & 3.46$\times 10^{-1}$\\ 1.7 & 6.50$\times 10^{-1}$ & 5.02$\times 10^{-1}$ & 4.33$\times 10^{-1}$ & 2.96$\times 10^{-1}$ & 2.99$\times 10^{-1}$ & 1.79$\times 10^{-1}$ & 4.06$\times 10^{-1}$ & 2.61$\times 10^{-1}$\\ 1.8 & 6.17$\times 10^{-1}$ & 3.55$\times 10^{-1}$ & 3.86$\times 10^{-1}$ & 2.10$\times 10^{-1}$ & 2.62$\times 10^{-1}$ & 1.34$\times 10^{-1}$ & 3.66$\times 10^{-1}$ & 1.92$\times 10^{-1}$\\ 1.9 & 4.71$\times 10^{-1}$ & 2.63$\times 10^{-1}$ & 2.69$\times 10^{-1}$ & 1.60$\times 10^{-1}$ & 1.61$\times 10^{-1}$ & 1.06$\times 10^{-1}$ & 2.46$\times 10^{-1}$ & 1.48$\times 10^{-1}$\\ 2.0 & 2.97$\times 10^{-1}$ & 2.07$\times 10^{-1}$ & 1.67$\times 10^{-1}$ & 1.29$\times 10^{-1}$ & 1.08$\times 10^{-1}$ & 8.71$\times 10^{-2}$& 1.59$\times 10^{-1}$ & 1.22$\times 10^{-1}$\\ 2.1 & 1.88$\times 10^{-1}$ & 1.74$\times 10^{-1}$ & 1.12$\times 10^{-1}$ & 1.10$\times 10^{-1}$ & 7.56$\times 10^{-2}$& 7.44$\times 10^{-2}$& 1.08$\times 10^{-1}$ & 1.03$\times 10^{-1}$\\ 2.2 & 1.34$\times 10^{-1}$ & 1.50$\times 10^{-1}$ & 7.53$\times 10^{-2}$& 9.49$\times 10^{-2}$& 4.90$\times 10^{-2}$& 6.43$\times 10^{-2}$& 7.18$\times 10^{-2}$& 8.95$\times 10^{-2}$\\ 2.3 & 8.15$\times 10^{-2}$& 1.30$\times 10^{-1}$ & 4.62$\times 10^{-2}$& 8.32$\times 10^{-2}$& 2.86$\times 10^{-2}$& 5.57$\times 10^{-2}$& 4.19$\times 10^{-2}$& 7.78$\times 10^{-2}$\\ 2.4 & 5.89$\times 10^{-2}$& 1.13$\times 10^{-1}$ & 3.84$\times 10^{-2}$& 
7.21$\times 10^{-2}$& 2.80$\times 10^{-2}$& 4.82$\times 10^{-2}$& 3.75$\times 10^{-2}$& 6.78$\times 10^{-2}$\\ 2.5 & 5.96$\times 10^{-2}$& 9.73$\times 10^{-2}$& 4.15$\times 10^{-2}$& 6.23$\times 10^{-2}$& 3.05$\times 10^{-2}$& 4.16$\times 10^{-2}$& 4.04$\times 10^{-2}$& 5.86$\times 10^{-2}$\\ 2.6 & 6.67$\times 10^{-2}$& 8.32$\times 10^{-2}$& 4.71$\times 10^{-2}$& 5.33$\times 10^{-2}$& 3.47$\times 10^{-2}$& 3.58$\times 10^{-2}$& 4.61$\times 10^{-2}$& 5.05$\times 10^{-2}$\\ 2.7 & 7.32$\times 10^{-2}$& 7.09$\times 10^{-2}$& 5.22$\times 10^{-2}$& 4.55$\times 10^{-2}$& 3.88$\times 10^{-2}$& 3.08$\times 10^{-2}$& 5.19$\times 10^{-2}$& 4.33$\times 10^{-2}$\\ 2.8 & 7.79$\times 10^{-2}$& 6.04$\times 10^{-2}$& 5.26$\times 10^{-2}$& 3.91$\times 10^{-2}$& 3.87$\times 10^{-2}$& 2.63$\times 10^{-2}$& 5.27$\times 10^{-2}$& 3.70$\times 10^{-2}$\\ 2.9 & 7.37$\times 10^{-2}$& 5.18$\times 10^{-2}$& 4.82$\times 10^{-2}$& 3.35$\times 10^{-2}$& 3.53$\times 10^{-2}$& 2.28$\times 10^{-2}$& 4.82$\times 10^{-2}$& 3.20$\times 10^{-2}$\\ 3.0 & 6.42$\times 10^{-2}$& 4.43$\times 10^{-2}$& 3.96$\times 10^{-2}$& 2.90$\times 10^{-2}$& 2.68$\times 10^{-2}$& 1.96$\times 10^{-2}$& 3.89$\times 10^{-2}$& 2.77$\times 10^{-2}$\\ 3.1 & 4.20$\times 10^{-2}$& 3.86$\times 10^{-2}$& 2.46$\times 10^{-2}$& 2.53$\times 10^{-2}$& 1.36$\times 10^{-2}$& 1.74$\times 10^{-2}$& 2.08$\times 10^{-2}$& 2.43$\times 10^{-2}$\\ 3.2 & 2.95$\times 10^{-2}$& 3.37$\times 10^{-2}$& 2.11$\times 10^{-2}$& 2.24$\times 10^{-2}$& 1.64$\times 10^{-2}$& 1.54$\times 10^{-2}$& 2.15$\times 10^{-2}$& 2.16$\times 10^{-2}$\\ 3.3 & 3.41$\times 10^{-2}$& 2.96$\times 10^{-2}$& 2.36$\times 10^{-2}$& 1.96$\times 10^{-2}$& 1.78$\times 10^{-2}$& 1.36$\times 10^{-2}$& 2.39$\times 10^{-2}$& 1.91$\times 10^{-2}$\\ 3.4 & 3.47$\times 10^{-2}$& 2.63$\times 10^{-2}$& 2.34$\times 10^{-2}$& 1.75$\times 10^{-2}$& 1.69$\times 10^{-2}$& 1.22$\times 10^{-2}$& 2.32$\times 10^{-2}$& 1.70$\times 10^{-2}$\\ 3.5 & 3.19$\times 10^{-2}$& 2.33$\times 10^{-2}$& 
2.14$\times 10^{-2}$& 1.58$\times 10^{-2}$& 1.48$\times 10^{-2}$& 1.10$\times 10^{-2}$& 2.06$\times 10^{-2}$& 1.53$\times 10^{-2}$\\ 3.6 & 2.74$\times 10^{-2}$& 2.08$\times 10^{-2}$& 1.82$\times 10^{-2}$& 1.41$\times 10^{-2}$& 1.19$\times 10^{-2}$& 9.81$\times 10^{-3}$ & 1.70$\times 10^{-2}$& 1.37$\times 10^{-2}$\\ 3.7 & 2.25$\times 10^{-2}$& 1.87$\times 10^{-2}$& 1.52$\times 10^{-2}$& 1.27$\times 10^{-2}$& 1.01$\times 10^{-2}$& 8.86$\times 10^{-3}$ & 1.44$\times 10^{-2}$& 1.24$\times 10^{-2}$\\ 3.8 & 1.96$\times 10^{-2}$& 1.68$\times 10^{-2}$& 1.31$\times 10^{-2}$& 1.16$\times 10^{-2}$& 9.02$\times 10^{-3}$ & 8.03$\times 10^{-3}$ & 1.27$\times 10^{-2}$& 1.12$\times 10^{-2}$\\ 3.9 & 1.72$\times 10^{-2}$& 1.53$\times 10^{-2}$& 1.13$\times 10^{-2}$& 1.05$\times 10^{-2}$& 7.72$\times 10^{-3}$ & 7.25$\times 10^{-3}$ & 1.09$\times 10^{-2}$& 1.01$\times 10^{-2}$\\ 4.0 & 1.47$\times 10^{-2}$& 1.37$\times 10^{-2}$& 9.57$\times 10^{-3}$ & 9.45$\times 10^{-3}$ & 6.29$\times 10^{-3}$ & 6.57$\times 10^{-3}$ & 9.04$\times 10^{-3}$ & 9.16$\times 10^{-3}$\\ 4.1 & 1.21$\times 10^{-2}$& 1.25$\times 10^{-2}$& 8.06$\times 10^{-3}$ & 8.67$\times 10^{-3}$ & 5.17$\times 10^{-3}$ & 5.98$\times 10^{-3}$ & 7.46$\times 10^{-3}$ & 8.33$\times 10^{-3}$\\ 4.2 & 1.02$\times 10^{-2}$& 1.14$\times 10^{-2}$& 6.93$\times 10^{-3}$ & 7.89$\times 10^{-3}$ & 4.52$\times 10^{-3}$ & 5.40$\times 10^{-3}$ & 6.45$\times 10^{-3}$ & 7.59$\times 10^{-3}$\\ 4.3 & 9.12$\times 10^{-3}$ & 1.03$\times 10^{-2}$& 6.55$\times 10^{-3}$ & 7.20$\times 10^{-3}$ & 4.61$\times 10^{-3}$ & 4.91$\times 10^{-3}$ & 6.29$\times 10^{-3}$ & 6.86$\times 10^{-3}$\\ 4.4 & 9.27$\times 10^{-3}$ & 9.35$\times 10^{-3}$ & 7.01$\times 10^{-3}$ & 6.57$\times 10^{-3}$ & 5.35$\times 10^{-3}$ & 4.46$\times 10^{-3}$ & 7.02$\times 10^{-3}$ & 6.23$\times 10^{-3}$\\ 4.5 & 1.02$\times 10^{-2}$& 8.47$\times 10^{-3}$ & 7.52$\times 10^{-3}$ & 5.98$\times 10^{-3}$ & 5.65$\times 10^{-3}$ & 4.04$\times 10^{-3}$ & 7.60$\times 10^{-3}$ & 5.64$\times 
10^{-3}$\\ 4.6 & 1.05$\times 10^{-2}$& 7.69$\times 10^{-3}$ & 7.51$\times 10^{-3}$ & 5.44$\times 10^{-3}$ & 5.55$\times 10^{-3}$ & 3.65$\times 10^{-3}$ & 7.54$\times 10^{-3}$ & 5.10$\times 10^{-3}$\\ 4.7 & 1.03$\times 10^{-2}$& 6.96$\times 10^{-3}$ & 7.27$\times 10^{-3}$ & 4.91$\times 10^{-3}$ & 5.26$\times 10^{-3}$ & 3.31$\times 10^{-3}$ & 7.16$\times 10^{-3}$ & 4.63$\times 10^{-3}$\\ 4.8 & 9.62$\times 10^{-3}$ & 6.27$\times 10^{-3}$ & 6.72$\times 10^{-3}$ & 4.46$\times 10^{-3}$ & 4.79$\times 10^{-3}$ & 2.99$\times 10^{-3}$ & 6.51$\times 10^{-3}$ & 4.19$\times 10^{-3}$\\ 4.9 & 8.56$\times 10^{-3}$ & 5.69$\times 10^{-3}$ & 5.86$\times 10^{-3}$ & 4.04$\times 10^{-3}$ & 3.88$\times 10^{-3}$ & 2.70$\times 10^{-3}$ & 5.44$\times 10^{-3}$ & 3.77$\times 10^{-3}$\\ 5.0 & 6.90$\times 10^{-3}$ & 5.10$\times 10^{-3}$ & 4.62$\times 10^{-3}$ & 3.67$\times 10^{-3}$ & 2.85$\times 10^{-3}$ & 2.44$\times 10^{-3}$ & 4.12$\times 10^{-3}$ & 3.41$\times 10^{-3}$\\ 5.1 & 5.34$\times 10^{-3}$ & 4.70$\times 10^{-3}$ & 3.53$\times 10^{-3}$ & 3.33$\times 10^{-3}$ & 2.14$\times 10^{-3}$ & 2.22$\times 10^{-3}$ & 3.12$\times 10^{-3}$ & 3.11$\times 10^{-3}$\\ 5.2 & 4.11$\times 10^{-3}$ & 4.28$\times 10^{-3}$ & 2.77$\times 10^{-3}$ & 3.04$\times 10^{-3}$ & 1.73$\times 10^{-3}$ & 2.01$\times 10^{-3}$ & 2.50$\times 10^{-3}$ & 2.82$\times 10^{-3}$\\ 5.3 & 3.35$\times 10^{-3}$ & 3.94$\times 10^{-3}$ & 2.28$\times 10^{-3}$ & 2.77$\times 10^{-3}$ & 1.45$\times 10^{-3}$ & 1.81$\times 10^{-3}$ & 2.09$\times 10^{-3}$ & 2.58$\times 10^{-3}$\\ 5.4 & 2.85$\times 10^{-3}$ & 3.60$\times 10^{-3}$ & 1.92$\times 10^{-3}$ & 2.50$\times 10^{-3}$ & 1.22$\times 10^{-3}$ & 1.64$\times 10^{-3}$ & 1.76$\times 10^{-3}$ & 2.34$\times 10^{-3}$\\ 5.5 & 2.43$\times 10^{-3}$ & 3.28$\times 10^{-3}$ & 1.62$\times 10^{-3}$ & 2.29$\times 10^{-3}$ & 1.01$\times 10^{-3}$ & 1.50$\times 10^{-3}$ & 1.47$\times 10^{-3}$ & 2.13$\times 10^{-3}$\\ 5.6 & 2.05$\times 10^{-3}$ & 3.02$\times 10^{-3}$ & 1.38$\times 10^{-3}$ & 2.08$\times 
10^{-3}$ & 8.54$\times 10^{-4}$ & 1.36$\times 10^{-3}$ & 1.25$\times 10^{-3}$ & 1.93$\times 10^{-3}$\\ 5.7 & 1.78$\times 10^{-3}$ & 2.77$\times 10^{-3}$ & 1.22$\times 10^{-3}$ & 1.90$\times 10^{-3}$ & 7.88$\times 10^{-4}$ & 1.23$\times 10^{-3}$ & 1.13$\times 10^{-3}$ & 1.76$\times 10^{-3}$\\ 5.8 & 1.65$\times 10^{-3}$ & 2.53$\times 10^{-3}$ & 1.15$\times 10^{-3}$ & 1.73$\times 10^{-3}$ & 8.00$\times 10^{-4}$ & 1.11$\times 10^{-3}$ & 1.12$\times 10^{-3}$ & 1.61$\times 10^{-3}$\\ 5.9 & 1.64$\times 10^{-3}$ & 2.33$\times 10^{-3}$ & 1.14$\times 10^{-3}$ & 1.58$\times 10^{-3}$ & 8.09$\times 10^{-4}$ & 1.02$\times 10^{-3}$ & 1.13$\times 10^{-3}$ & 1.47$\times 10^{-3}$\\ 6.0 & 1.64$\times 10^{-3}$ & 2.13$\times 10^{-3}$ & 1.13$\times 10^{-3}$ & 1.43$\times 10^{-3}$ & 7.91$\times 10^{-4}$ & 9.25$\times 10^{-4}$ & 1.11$\times 10^{-3}$ & 1.34$\times 10^{-3}$\\ 6.1 & 1.66$\times 10^{-3}$ & 2.01$\times 10^{-3}$ & 1.11$\times 10^{-3}$ & 1.34$\times 10^{-3}$ & 7.45$\times 10^{-4}$ & 8.39$\times 10^{-4}$ & 1.07$\times 10^{-3}$ & 1.24$\times 10^{-3}$\\ 6.2 & 1.63$\times 10^{-3}$ & 1.89$\times 10^{-3}$ & 1.06$\times 10^{-3}$ & 1.24$\times 10^{-3}$ & 6.84$\times 10^{-4}$ & 7.66$\times 10^{-4}$ & 1.01$\times 10^{-3}$ & 1.14$\times 10^{-3}$\\ 6.3 & 1.56$\times 10^{-3}$ & 1.79$\times 10^{-3}$ & 1.00$\times 10^{-3}$ & 1.16$\times 10^{-3}$ & 6.26$\times 10^{-4}$ & 6.99$\times 10^{-4}$ & 9.36$\times 10^{-4}$ & 1.05$\times 10^{-3}$\\ 6.4 & 1.49$\times 10^{-3}$ & 1.69$\times 10^{-3}$ & 9.41$\times 10^{-4}$ & 1.08$\times 10^{-3}$ & 5.75$\times 10^{-4}$ & 6.44$\times 10^{-4}$ & 8.66$\times 10^{-4}$ & 9.74$\times 10^{-4}$\\ 6.5 & 1.42$\times 10^{-3}$ & 1.61$\times 10^{-3}$ & 8.82$\times 10^{-4}$ & 1.00$\times 10^{-3}$ & 5.21$\times 10^{-4}$ & 5.89$\times 10^{-4}$ & 8.00$\times 10^{-4}$ & 9.00$\times 10^{-4}$\\ 6.6 & 1.36$\times 10^{-3}$ & 1.53$\times 10^{-3}$ & 8.26$\times 10^{-4}$ & 9.37$\times 10^{-4}$ & 4.72$\times 10^{-4}$ & 5.40$\times 10^{-4}$ & 7.33$\times 10^{-4}$ & 8.39$\times 
10^{-4}$\\ 6.7 & 1.29$\times 10^{-3}$ & 1.47$\times 10^{-3}$ & 7.79$\times 10^{-4}$ & 8.82$\times 10^{-4}$ & 4.30$\times 10^{-4}$ & 4.91$\times 10^{-4}$ & 6.81$\times 10^{-4}$ & 7.78$\times 10^{-4}$\\ 6.8 & 1.24$\times 10^{-3}$ & 1.41$\times 10^{-3}$ & 7.35$\times 10^{-4}$ & 8.27$\times 10^{-4}$ & 4.06$\times 10^{-4}$ & 4.55$\times 10^{-4}$ & 6.46$\times 10^{-4}$ & 7.23$\times 10^{-4}$\\ 6.9 & 1.21$\times 10^{-3}$ & 1.36$\times 10^{-3}$ & 7.11$\times 10^{-4}$ & 7.78$\times 10^{-4}$ & 4.00$\times 10^{-4}$ & 4.18$\times 10^{-4}$ & 6.38$\times 10^{-4}$ & 6.74$\times 10^{-4}$\\ 7.0 & 1.24$\times 10^{-3}$ & 1.32$\times 10^{-3}$ & 7.08$\times 10^{-4}$ & 7.29$\times 10^{-4}$ & 3.98$\times 10^{-4}$ & 3.81$\times 10^{-4}$ & 6.44$\times 10^{-4}$ & 6.32$\times 10^{-4}$\\ 7.1 & 1.31$\times 10^{-3}$ & 1.30$\times 10^{-3}$ & 7.27$\times 10^{-4}$ & 7.05$\times 10^{-4}$ & 3.94$\times 10^{-4}$ & 3.57$\times 10^{-4}$ & 6.52$\times 10^{-4}$ & 5.89$\times 10^{-4}$\\ 7.2 & 1.38$\times 10^{-3}$ & 1.30$\times 10^{-3}$ & 7.40$\times 10^{-4}$ & 6.81$\times 10^{-4}$ & 3.86$\times 10^{-4}$ & 3.27$\times 10^{-4}$ & 6.50$\times 10^{-4}$ & 5.58$\times 10^{-4}$\\ 7.3 & 1.46$\times 10^{-3}$ & 1.30$\times 10^{-3}$ & 7.53$\times 10^{-4}$ & 6.62$\times 10^{-4}$ & 3.74$\times 10^{-4}$ & 3.02$\times 10^{-4}$ & 6.39$\times 10^{-4}$ & 5.28$\times 10^{-4}$\\ 7.4 & 1.54$\times 10^{-3}$ & 1.32$\times 10^{-3}$ & 7.61$\times 10^{-4}$ & 6.50$\times 10^{-4}$ & 3.57$\times 10^{-4}$ & 2.84$\times 10^{-4}$ & 6.22$\times 10^{-4}$ & 4.97$\times 10^{-4}$\\ 7.5 & 1.62$\times 10^{-3}$ & 1.36$\times 10^{-3}$ & 7.62$\times 10^{-4}$ & 6.38$\times 10^{-4}$ & 3.33$\times 10^{-4}$ & 2.66$\times 10^{-4}$ & 5.91$\times 10^{-4}$ & 4.73$\times 10^{-4}$\\ 7.6 & 1.71$\times 10^{-3}$ & 1.42$\times 10^{-3}$ & 7.59$\times 10^{-4}$ & 6.38$\times 10^{-4}$ & 3.10$\times 10^{-4}$ & 2.47$\times 10^{-4}$ & 5.55$\times 10^{-4}$ & 4.55$\times 10^{-4}$\\ 7.7 & 1.81$\times 10^{-3}$ & 1.51$\times 10^{-3}$ & 7.53$\times 10^{-4}$ & 6.38$\times 
10^{-4}$ & 2.84$\times 10^{-4}$ & 2.29$\times 10^{-4}$ & 5.15$\times 10^{-4}$ & 4.36$\times 10^{-4}$\\ 7.8 & 1.93$\times 10^{-3}$ & 1.65$\times 10^{-3}$ & 7.43$\times 10^{-4}$ & 6.44$\times 10^{-4}$ & 2.50$\times 10^{-4}$ & 2.17$\times 10^{-4}$ & 4.65$\times 10^{-4}$ & 4.18$\times 10^{-4}$\\ 7.9 & 2.06$\times 10^{-3}$ & 1.87$\times 10^{-3}$ & 7.19$\times 10^{-4}$ & 6.56$\times 10^{-4}$ & 2.20$\times 10^{-4}$ & 2.11$\times 10^{-4}$ & 4.20$\times 10^{-4}$ & 4.18$\times 10^{-4}$\\ 8.0 & 2.25$\times 10^{-3}$ & 2.24$\times 10^{-3}$ & 7.03$\times 10^{-4}$ & 6.93$\times 10^{-4}$ & 1.99$\times 10^{-4}$ & 2.11$\times 10^{-4}$ & 3.98$\times 10^{-4}$ & 4.24$\times 10^{-4}$\\ 8.1 & 2.26$\times 10^{-3}$ & 2.41$\times 10^{-3}$ & 7.05$\times 10^{-4}$ & 7.35$\times 10^{-4}$ & 1.97$\times 10^{-4}$ & 2.17$\times 10^{-4}$ & 4.05$\times 10^{-4}$ & 4.55$\times 10^{-4}$\\ \hline \end{tabular}} \end{table*} \clearpage \begin{figure*}[hbtp!] \centering \includegraphics[width=0.48\textwidth]{Figure_011-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_011-b.pdf} \includegraphics[width=0.48\textwidth]{Figure_011-c.pdf} \includegraphics[width=0.48\textwidth]{Figure_011-d.pdf} \caption{The observed 95\% \CL upper limits on the product of the cross section, branching fraction, and acceptance for dijet resonances decaying to quark-quark (upper left), quark-gluon (upper right), gluon-gluon (lower left), and for RS gravitons (lower right). The corresponding expected limits (dashed) and their variations at the 1 and 2 standard deviation levels (shaded bands) are also shown. 
Limits are compared to predicted cross sections for string resonances~\cite{Anchordoqui:2008di,Cullen:2000ef}, excited quarks~\cite{ref_qstar,Baur:1989kv}, axigluons~\cite{ref_axi}, colorons~\cite{ref_coloron}, scalar diquarks~\cite{ref_diquark}, color-octet scalars~\cite{Han:2010rf}, new gauge bosons $\PWpr$ and $\PZpr$ with SM-like couplings~\cite{ref_gauge}, dark matter mediators for $m_{\mathrm{DM}}=1$\GeV~\cite{Boveia:2016mrp,Abdallah:2015ter}, and RS gravitons~\cite{ref_rsg}. } \label{figLimitAll} \end{figure*} \begin{figure*}[hbtp!] \centering \includegraphics[width=0.48\textwidth]{Figure_012.pdf} \caption{The observed 95\% \CL upper limits on the product of the cross section, branching fraction, and acceptance for quark-quark, quark-gluon, and gluon-gluon dijet resonances. Limits are compared to predicted cross sections for string resonances~\cite{Anchordoqui:2008di,Cullen:2000ef}, excited quarks~\cite{ref_qstar,Baur:1989kv}, axigluons~\cite{ref_axi}, colorons~\cite{ref_coloron}, scalar diquarks~\cite{ref_diquark}, color-octet scalars~\cite{Han:2010rf}, new gauge bosons $\PWpr$ and $\PZpr$ with SM-like couplings~\cite{ref_gauge}, dark matter mediators for $m_{\mathrm{DM}}=1$\GeV~\cite{Boveia:2016mrp,Abdallah:2015ter}, and RS gravitons~\cite{ref_rsg}. } \label{figLimitSummary} \end{figure*} \subsection{Limits on the resonance mass for benchmark models} All upper limits presented can be compared to the parton-level predictions of $\sigma B A$, without detector simulation, to determine mass limits on new particles. The model predictions shown in Figs.~\ref{figLimitAll} and \ref{figLimitSummary} are calculated in the narrow-width approximation~\cite{Harris:2011bh} using the CTEQ6L1~\cite{refCTEQ} parton distribution functions at leading order. 
A next-to-leading order correction factor of $K=1+ 8\pi\alpS/9\approx 1.3$ is applied to the leading order predictions for the $\PWpr$ model and $K=1+(4\alpS/6\pi)(1+4\pi^2/3)\approx 1.3$ for the $\PZpr$ model (see pages 248 and 233 of Ref.~\cite{Barger:1987nn}), where $\alpS$ is the strong coupling constant evaluated at a renormalization scale equal to the resonance mass. Similarly, for the axigluon/coloron models a correction factor is applied which varies between $K=1.08$ at a resonance mass of 0.6\TeV and $K=1.33$ at 8.1\TeV~\cite{Chivukula:2013xla}. The branching fraction includes the direct decays of the resonance into the five light quarks and gluons only, excluding top quarks from the decay, although top quarks are included in the calculation of the resonance width. The signal acceptance evaluated at the parton level for the resonance decay to two partons can be written as $A=A_{\Delta}A_{\eta}$, where $A_{\Delta}$ is the acceptance of requiring $\ensuremath{\abs{\Delta\eta}}\xspace<1.3$ alone, and $A_{\eta}$ is the acceptance of also requiring $\abs{\eta}<2.5$. The acceptance $A_{\Delta}$ is model dependent. In the case of isotropic decays, the dijet angular distribution as a function of $\tanh{(\ensuremath{\abs{\Delta\eta}}\xspace/2)}$ is approximately constant, and $A_{\Delta}\approx\tanh(1.3/2)=0.57$, independent of resonance mass. The acceptance $A_{\eta}$ is maximal for resonance masses above 1\TeV---greater than 0.99 for all models considered. The acceptance $A_{\eta}$ decreases as the resonance mass decreases below 1\TeV, and for a resonance mass of 0.6\TeV it is 0.92 for excited quarks, 0.98 for RS gravitons, and between those two values for the other models. For a given model, new particles are excluded at 95\% \CL in mass regions where the theoretical prediction lies at or above the observed upper limit for the appropriate final state of Figs.~\ref{figLimitAll} and \ref{figLimitSummary}. 
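The numerical factors quoted above can be reproduced directly. The following sketch (a minimal check, not part of the analysis code) evaluates the two K factors for an assumed illustrative value $\alpS \approx 0.1$ at a multi-TeV renormalization scale, and the acceptance $A_{\Delta}=\tanh(1.3/2)$ for isotropic decays:

```python
import math

def k_wprime(alpha_s):
    # NLO correction for the W' model: K = 1 + 8*pi*alpha_s/9
    return 1 + 8 * math.pi * alpha_s / 9

def k_zprime(alpha_s):
    # NLO correction for the Z' model: K = 1 + (4*alpha_s/(6*pi)) * (1 + 4*pi**2/3)
    return 1 + (4 * alpha_s / (6 * math.pi)) * (1 + 4 * math.pi ** 2 / 3)

alpha_s = 0.1  # assumed illustrative value at a multi-TeV renormalization scale
print(round(k_wprime(alpha_s), 2), round(k_zprime(alpha_s), 2))  # both close to 1.3

# Acceptance of the |Delta eta| < 1.3 requirement for isotropic decays
a_delta = math.tanh(1.3 / 2)
print(round(a_delta, 2))  # 0.57
```

Both K factors come out near 1.3 and the angular acceptance near 0.57, consistent with the values quoted in the text.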
Mass limits on all benchmark models are summarized in Table~\ref{tab:MassLimit}. \clearpage \begin{table}[hbtp] \topcaption{ Observed and expected mass limits at 95\% \CL from this analysis compared to previously published limits on narrow resonances from CMS with 12.9\fbinv~\cite{Sirunyan:2016iap}. The listed models are excluded between 0.6\TeV and the indicated mass limit by this analysis. In addition to the observed mass limits listed below, this analysis also excludes the RS graviton model within the mass interval between 1.9 and 2.5\TeV and the $\PZpr$ model within roughly a 50\GeV window around 3.1\TeV.} \centering \begin{tabular}{lccc} \hline Model & Final & \multicolumn{2}{c}{\ \ \ Observed (expected) mass limit [\TeVns{}]} \\ & State & \hspace{0.5in} 36\fbinv & Ref.~\cite{Sirunyan:2016iap}, 12.9\fbinv \\ \hline String resonance & $\PQq\Pg$ & \hspace{0.5in} 7.7\ (7.7) & 7.4\ (7.4) \\ Scalar diquark & $\PQq\PQq$ & \hspace{0.5in} 7.2\ (7.4) & 6.9\ (6.8) \\ Axigluon/coloron & $\PQq\PAQq$ & \hspace{0.5in} 6.1\ (6.0) & 5.5\ (5.6) \\ Excited quark & $\PQq\Pg$ & \hspace{0.5in} 6.0\ (5.8) & 5.4\ (5.4) \\ Color-octet scalar ($k_s^2=1/2$) & $\Pg\Pg$ & \hspace{0.5in} 3.4\ (3.6) & 3.0\ (3.3) \\ $\PWpr$ SM-like& $\PQq\PAQq$ & \hspace{0.5in} 3.3\ (3.6) & 2.7\ (3.1) \\ $\PZpr$ SM-like& $\PQq\PAQq$ & \hspace{0.5in} 2.7\ (2.9) & 2.1\ (2.3) \\ RS graviton ($k/\overline{M}_\text{Pl}=0.1$) & $\PQq\PAQq$, $\Pg\Pg$ & \hspace{0.5in} 1.8\ (2.3) & 1.9\ (1.8) \\ DM mediator ($m_{\text{DM}}=1$~GeV) & $\PQq\PAQq$ & \hspace{0.5in} 2.6\ (2.5) & 2.0\ (2.0) \\ \hline \end{tabular} \label{tab:MassLimit} \end{table} \subsection{Limits on the coupling to quarks of a leptophobic \texorpdfstring{$\PZpr$}{Z'}} Mass limits on new particles are sensitive to the assumptions about their coupling. Furthermore, at a fixed resonance mass, as the search sensitivity increases we can exclude models with smaller couplings. 
Figure~\ref{figCoupling} shows upper limits on the coupling as a function of mass for a leptophobic $\PZpr$ resonance which has a natural width \begin{equation} \Gamma = \frac{3(\ensuremath{g_\PQq}\xspace^{\prime})^2 M}{2\pi} \label{eqWidthZp} \end{equation} where $M$ is the resonance mass. Limits are only shown in Fig.~\ref{figCoupling} for coupling values $\ensuremath{g_\PQq}\xspace^{\prime}<0.45$, corresponding to a width less than 10\% of the resonance mass, for which our narrow resonance limits are approximately valid. Up to this width value, for resonance masses less than roughly 4\TeV, the Breit-Wigner natural line shape of the quark-quark resonance does not significantly change the observed line shape, and the dijet resonance can be considered effectively narrow. To constrain larger values of the coupling we will consider broad resonances in Section~\ref{sec:Wide}. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{Figure_013.pdf} \caption{ The 95\% \CL upper limits on the universal quark coupling $\ensuremath{g_\PQq}\xspace^{\prime}$ as a function of resonance mass for a leptophobic $\PZpr$ resonance that only couples to quarks. The observed limits (solid), expected limits (dashed) and their variation at the 1 and 2 standard deviation levels (shaded bands) are shown. The dotted horizontal lines show the coupling strength for which the cross section for dijet production in this model is the same as for a DM mediator (see text).} \label{figCoupling} \end{figure} \section{Limits on a dark matter mediator} We use our limits to constrain simplified models of DM, with leptophobic vector and axial-vector mediators that couple only to quarks and DM particles~\cite{Boveia:2016mrp,Abdallah:2015ter}. Figure~\ref{figDM} shows the excluded values of mediator mass as a function of \ensuremath{m_{\text{DM}}}\xspace, for both types of mediators. 
For \ensuremath{m_{\text{DM}}}\xspace = 1\GeV the observed excluded range of the mediator mass (\ensuremath{M_{\text{Med}}}\xspace) is between 0.6 and 2.6\TeV, as also shown in Fig.~\ref{figLimitAll} and listed in Table~\ref{tab:MassLimit}. The limits on a dark matter mediator are indistinguishable for $\ensuremath{m_{\text{DM}}}\xspace = 0$ and 1\GeV. In Fig.~\ref{figDM} the expected upper value of excluded \ensuremath{M_{\text{Med}}}\xspace increases with \ensuremath{m_{\text{DM}}}\xspace because the branching fraction to $\PQq\PAQq$ increases with \ensuremath{m_{\text{DM}}}\xspace. In Fig.~\ref{figDM} our exclusions are compared to constraints from the cosmological relic density of DM determined from astrophysical measurements~\cite{Spergel:2006hy,Ade:2013zuv} and from \textsc{MadDM} version 2.0.6~\cite{Backovic:2013dpa,Backovic:2015cra} as described in Ref.~\cite{Pree:2016hwc}. \begin{figure}[hbtp] \centering \includegraphics[width=0.75\textwidth]{Figure_014-a.pdf} \includegraphics[width=0.75\textwidth]{Figure_014-b.pdf} \caption{ The 95\% \CL observed (solid) and expected (dashed) excluded regions in the plane of dark matter mass vs. mediator mass, for an axial-vector mediator (upper) and a vector mediator (lower), compared to the excluded regions where the abundance of DM exceeds the cosmological relic density (light gray). Following the recommendation of the LHC DM working group~\cite{Boveia:2016mrp, Abdallah:2015ter}, the exclusions are computed for Dirac DM and for a universal quark coupling $\ensuremath{g_\PQq}\xspace = 0.25$ and for a DM coupling of $\ensuremath{g_{\text{DM}}}\xspace=1.0$. It should also be noted that the excluded region strongly depends on the chosen coupling and model scenario. 
Therefore, the excluded regions and relic density contours shown in this plot are not applicable to other choices of coupling values or models.} \label{figDM} \end{figure} \subsection{Relationship of the DM mediator model to the leptophobic \texorpdfstring{$\PZpr$}{Z'} model} If $\ensuremath{m_{\text{DM}}}\xspace>\ensuremath{M_{\text{Med}}}\xspace/2$, the mediator cannot decay to DM particles ``on-shell'', and the dijet cross section from the mediator models~\cite{Boveia:2016mrp} becomes identical to that in the leptophobic $\PZpr$ model~\cite{Dobrescu:2013coa} used in Fig.~\ref{figCoupling} with a coupling $\ensuremath{g_\PQq}\xspace^{\prime}=\ensuremath{g_\PQq}\xspace=0.25$. Therefore, for these values of \ensuremath{m_{\text{DM}}}\xspace the limits on the mediator mass in Fig.~\ref{figDM} are identical to the limits on the $\PZpr$ mass at $\ensuremath{g_\PQq}\xspace^{\prime}=0.25$ in Fig.~\ref{figCoupling}. Similarly, if $\ensuremath{m_{\text{DM}}}\xspace=0$, the limits on the mediator mass in Fig.~\ref{figDM} are identical to the limits on the $\PZpr$ mass at $\ensuremath{g_\PQq}\xspace^{\prime}=\ensuremath{g_\PQq}\xspace/\sqrt{\smash[b]{1+16/(3N_f)}}\approx 0.182$ in Fig.~\ref{figCoupling}. Here $N_f$ is the effective number of quark flavors contributing to the width of the resonance, $N_f=5+\sqrt{\smash[b]{1-4m_\cPqt^2/\ensuremath{M_{\text{Med}}}\xspace^2}}$, where $m_\cPqt$ is the top quark mass. \subsection{Limits on the coupling to quarks of a narrow DM mediator} In Fig.~\ref{fig:DMCouplingExclusion} limits are presented on the coupling \ensuremath{g_\PQq}\xspace as a function of \ensuremath{m_{\text{DM}}}\xspace and \ensuremath{M_{\text{Med}}}\xspace. The limits on \ensuremath{g_\PQq}\xspace decrease with increasing \ensuremath{m_{\text{DM}}}\xspace, again because the branching fraction to $\PQq\PAQq$ increases with \ensuremath{m_{\text{DM}}}\xspace.
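The value $\ensuremath{g_\PQq}\xspace^{\prime}\approx 0.182$ quoted in the relationship above can be reproduced numerically from the two relations for the equivalent coupling and for $N_f$. A minimal sketch, assuming a rounded top quark mass of 0.173\TeV and an illustrative mediator mass of 2.6\TeV (the result is essentially mass independent in the multi-TeV range):

```python
import math

M_TOP = 0.173  # top quark mass in TeV (assumed rounded value)

def n_f(m_med):
    # effective number of quark flavors contributing to the resonance width
    return 5 + math.sqrt(1 - 4 * M_TOP ** 2 / m_med ** 2)

def gq_prime_equivalent(g_q, m_med):
    # equivalent leptophobic-Z' coupling for m_DM = 0:
    # g_q' = g_q / sqrt(1 + 16 / (3 * N_f))
    return g_q / math.sqrt(1 + 16 / (3 * n_f(m_med)))

# for g_q = 0.25 and a multi-TeV mediator mass, N_f is close to 6
print(round(gq_prime_equivalent(0.25, 2.6), 3))  # 0.182
```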
The minimum value of excluded \ensuremath{g_\PQq}\xspace at a fixed value of \ensuremath{M_{\text{Med}}}\xspace is obtained for \ensuremath{m_{\text{DM}}}\xspace greater than \ensuremath{M_{\text{Med}}}\xspace/2. In Figs.~\ref{figCoupling} and \ref{fig:DMCouplingExclusion} we show exclusions from the narrow resonance search as a function of resonance mass and quark coupling up to a maximum coupling value of approximately 0.4, corresponding to a maximum resonance mass of 3.7\TeV. At larger values of the coupling, the natural width of the resonance significantly influences the observed width, and our narrow resonance limits become noticeably less accurate. In the next section we quantify the accuracy of our narrow-resonance limits more precisely, extend them to larger widths, and extend the limits on a dark matter mediator to higher masses and couplings. \begin{figure}[htb] \begin{center} \includegraphics[width=0.75\textwidth]{Figure_015-a.pdf} \includegraphics[width=0.75\textwidth]{Figure_015-b.pdf} \caption{The 95\% \CL observed upper limits on a universal quark coupling \ensuremath{g_\PQq}\xspace (color scale at right) in the plane of the dark matter particle mass versus mediator mass for an axial-vector mediator (upper) and a vector mediator (lower). } \label{fig:DMCouplingExclusion} \end{center} \end{figure} \clearpage \section{Limits on broad resonances} \label{sec:Wide} The search for narrow resonances described in the previous sections assumes the intrinsic resonance width $\Gamma$ is negligible compared to the experimental dijet mass resolution. Here we extend the search to cover broader resonances, with the width up to 30\% of the resonance mass $M$. This allows us to be sensitive to more models and larger couplings, and also quantifies the level of approximation within the narrow-resonance search by giving limits as an explicit function of $\Gamma/M$. We use the same dijet mass data and background parameterization as in the high-mass narrow resonance search.
The shapes of broad resonances are then used to derive limits on such states decaying to $\Pq\Pq$ and $\Pg\Pg$. \subsection{Breit--Wigner distributions} The shape of a broad resonance depends on the relationship between the width and the resonance mass, which in turn depends on the resonance spin and the decay channel. The sub-process cross section for a resonance with mass $M$ as a function of di-parton mass $m$ is described by a relativistic Breit--Wigner (e.g. Eq.~(7.47) in Ref.~\cite{Sjostrand:2006za}): \begin{equation} \hat{\sigma} \propto \frac{\pi}{m^2} \, \frac{[\Gamma^{(i)}M] \, [\Gamma^{(f)}M]} {(m^2 - M^2)^2 + [\Gamma M]^2}\ , \label{eq:Breit-Wigner} \end{equation} where $\Gamma$ is the total width and $\Gamma^{(i,f)}$ are the partial widths for the initial state $i$ and final state $f$. To obtain the correct expression when the di-parton mass is far from the resonance mass, important for broad resonances, generators like \PYTHIA~8 replace in Eq.~(\ref{eq:Breit-Wigner}) all $\Gamma M$ terms with $\Gamma(m) m$ terms, where $\Gamma(m)$ is the width the resonance would have if its mass were $m$. This general prescription for modifying the Breit--Wigner distribution is defined at Eq.~(47.58) in Ref.~\cite{Patrignani:2016xqp}. The replacement is done for the partial width terms in the numerator, as well as the full width term in the denominator, and the resulting di-parton mass dependence within the numerator significantly reduces the cross section at low values of $m$ far from the resonance pole. We consider explicitly the shapes of spin-1 resonances in the quark-quark channel and the shape of spin-2 resonances in the quark-quark and gluon-gluon channels. 
For a spin-1 $\PZpr$ resonance in the quark-quark channel, both for the CP-even vector and the CP-odd axial-vector cases, the partial width is proportional to the resonance mass ($\Gamma\propto M$)~\cite{Kim:2015vba} and generators make the well-known replacement \begin{equation} \Gamma M \to \left(\frac{m^2}{M^2}\right) \Gamma M \label{eq:vectorReplacement} \end{equation} for the terms $[\Gamma^{(i)} M]$, $[\Gamma^{(f)} M]$, and $[\Gamma M]$ in Eq.~(\ref{eq:Breit-Wigner}). The factor $(m^2/M^2)$ in Eq.~(\ref{eq:vectorReplacement}) converts the terms evaluated at the resonance mass to those evaluated at the di-parton mass for the case of widths proportional to mass, as discussed at Eq.~(7.43) in Ref.~\cite{Sjostrand:2006za}. For a spin-2 resonance, a CP-even tensor such as a graviton, the partial widths in both the gluon-gluon channel~\cite{Kim:2015vba,Bijnens:2001gh} and the quark-quark channel~\cite{Bijnens:2001gh} are proportional to the resonance mass cubed ($\Gamma\propto M^3$) and \PYTHIA~8 makes the following replacement for an RS graviton: \begin{equation} \Gamma M \to \left(\frac{m^4}{M^4}\right) \Gamma M \label{eq:tensorReplacement} \end{equation} for the above-mentioned terms. The factor $(m^4/M^4)$ in Eq.~(\ref{eq:tensorReplacement}) converts the terms evaluated at the resonance mass to those evaluated at the di-parton mass for the case of widths proportional to mass cubed. Applying the replacements in Eqs.~(\ref{eq:vectorReplacement}) and~(\ref{eq:tensorReplacement}) to the $[\Gamma^{(i)} M][\Gamma^{(f)} M]$ terms in the numerator of the Breit--Wigner distribution results in an extra factor of $(m^2/M^2)(m^2/M^2)=m^4/M^4$ for a spin-2 resonance compared to a spin-1 resonance decaying to dijets. At low di-parton mass, $m\ll M$, the replacement in the denominator does not matter, and the replacement in the numerator will suppress the tail at low $m$ for spin-2 resonances compared to spin-1 resonances, as can be seen in the figures in the next section.
At high di-parton mass, $m\gg M$, the replacement in the denominator will tend to cancel the replacement in the numerator, and the high-mass tail is not significantly affected by the replacement. This is true for the dijet decays of all spin-2 resonances calculated within effective field theory~\cite{Kim:2015vba,Han:1998sg}. We note that spin-2 resonances decaying to dijets are required to be CP-even, because the dijet decays of any spin-2 CP-odd resonances are suppressed~\cite{Kim:2015vba}. Spin-0 resonances coupling directly to pairs of gluons (e.g. color-octet scalars) or to pairs of gluons through fermion loops (e.g. Higgs-like bosons) will have a partial width proportional to the resonance mass cubed~\cite{Chivukula:2014pma,Kim:2015vba,Ellis:1975ap} and should have a shape similar to that of a spin-2 resonance in the gluon-gluon channel. Spin-0 resonances coupling to quark-quark (e.g. Higgs-like bosons or scalar diquarks) will have a partial width proportional to the resonance mass~\cite{Ellis:1975ap,Cakir:2005iw} and should have a shape similar to that of a spin-1 resonance in the quark-quark channel. Therefore, the three shapes we consider in Section~\ref{sec:wideShape}, for spin-2 resonances coupling to quark-quark and gluon-gluon and for spin-1 resonances coupling to quark-quark, are sufficient to determine the shapes of all broad resonances decaying to quark-quark or gluon-gluon. We do not consider broad resonances with non-integer spin decaying to quark-gluon in this paper. Further discussion of the model dependence of the shape of broad resonances can be found in the Appendix of Ref.~\cite{Khachatryan:2015sja}.
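The effect of the two replacements on the low-mass tail can be illustrated numerically. The sketch below implements Eq.~(\ref{eq:Breit-Wigner}) with every $\Gamma M$ term replaced as in Eqs.~(\ref{eq:vectorReplacement}) and (\ref{eq:tensorReplacement}); for simplicity the partial widths are set equal to the total width, and the mass and width values are illustrative only:

```python
import math

def bw_shape(m, M, gamma, n):
    # Relativistic Breit-Wigner with the running-width replacement
    # Gamma*M -> (m/M)**(2n) * Gamma*M applied in numerator and denominator:
    # n = 1 for spin-1 (width ~ M), n = 2 for spin-2 (width ~ M^3).
    # Partial widths are taken equal to the total width for illustration.
    gm = (m / M) ** (2 * n) * gamma * M
    return (math.pi / m ** 2) * gm * gm / ((m ** 2 - M ** 2) ** 2 + gm ** 2)

M, gamma = 4.0, 0.4   # a 4 TeV resonance with a 10% width (illustrative)
m_low = 2.0           # a point far below the pole, m << M

ratio = bw_shape(m_low, M, gamma, 2) / bw_shape(m_low, M, gamma, 1)
# the spin-2 numerator carries an extra (m^2/M^2)^2 = m^4/M^4 = 0.0625,
# so the low-mass tail is suppressed relative to the spin-1 case
print(round(ratio, 3))  # 0.063
```

At the pole, $m=M$, the two shapes coincide; the suppression acts only on the tail below the resonance mass, as described above.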
\subsection{Resonance signal shapes and limits} \label{sec:wideShape} \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{Figure_016-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_016-b.pdf} \caption{ The resonance signal shapes (\cmsLeft) and observed 95\% \CL upper limits on the product of the cross section, branching fraction, and acceptance (\cmsRight) for spin-2 resonances produced and decaying in the quark-quark channel are shown for various values of intrinsic width and resonance mass. The reconstructed dijet mass spectrum for these resonances is estimated from the {\PYTHIA8} MC event generator, followed by the simulation of the CMS detector response. } \label{fig:wide_qq} \end{center} \end{figure} In Figs.~\ref{fig:wide_qq} and~\ref{fig:wide_gg} we show resonance signal shapes and observed CMS limits for various widths of spin-2 resonances modeled by an RS graviton signal in the quark-quark and gluon-gluon channels, respectively. The limits become less stringent as the resonance intrinsic width increases. While the extra factor of $m^4/M^4$ in the Breit--Wigner distribution discussed in the previous section suppresses the tail at low dijet mass for $\Pq\Pq$ resonances, increased QCD radiation and a longer tail due to parton distributions partially compensate this effect for $\Pg\Pg$ resonances. As a consequence, and similarly to narrow resonances, the broad resonances decaying to $\Pg\Pg$ have a more pronounced tail at low mass, and hence the limits for these resonances are weaker than those for resonances decaying to $\Pq\Pq$. In Fig.~\ref{fig:wide_vector} we show the signal shapes and limits for spin-1 resonances in the quark-quark channel. The spin-1 resonances in Fig.~\ref{fig:wide_vector} do not contain the extra factor of $m^4/M^4$ in the Breit--Wigner distribution and are therefore significantly broader than the spin-2 $\Pq\Pq$ resonances in Fig.~\ref{fig:wide_qq}.
For the same reason, the limits in Fig.~\ref{fig:wide_vector} are weaker than those in Fig.~\ref{fig:wide_qq}. The difference in the angular distribution of spin-1 and spin-2 resonances has a negligible effect on the resonance shapes and the cross section upper limits. In Fig.~\ref{fig:wide_vector} we use a model of a vector DM mediator, and find the signal shapes and limits indistinguishable from an axial-vector model. \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{Figure_017-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_017-b.pdf} \caption{ The resonance signal shapes (\cmsLeft) and observed 95\% \CL upper limits on the product of the cross section, branching fraction, and acceptance (\cmsRight) for spin-2 resonances produced and decaying in the gluon-gluon channel are shown for various values of intrinsic width and resonance mass. The reconstructed dijet mass spectrum for these resonances is estimated from the {\PYTHIA8} MC event generator, followed by the simulation of the CMS detector response. } \label{fig:wide_gg} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{Figure_018-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_018-b.pdf} \caption{ The resonance signal shapes (\cmsLeft) and observed 95\% \CL upper limits on the product of the cross section, branching fraction, and acceptance (\cmsRight) for spin-1 resonances produced and decaying in the quark-quark channel are shown for various values of intrinsic width and resonance mass. The reconstructed dijet mass spectrum for these resonances is estimated from the {\PYTHIA8} MC event generator, followed by the simulation of the CMS detector response. 
} \label{fig:wide_vector} \end{center} \end{figure} \subsection{Validity tests of the limits} The limits are calculated up to a resonance mass of 8\TeV but are only quoted up to the maximum resonance mass for which the presence of the low-mass tails in the signal shape does not significantly affect the limit value. For these quoted values, the limits on the resonance cross section are well understood, increasing monotonically as a function of resonance width at each value of resonance mass. To obtain this behavior in the limit, we find it is sufficient to require that the expected limit derived for a truncated shape agrees with that derived for the full shape within 15\%. The truncated shape is cut off at a dijet mass equal to 70\% of the nominal resonance mass, while the full shape is cut off at a dijet mass of \RECOminMjjCut. For both the truncated and the full limits, the cross section limit of the resonance signal is corrected for the acceptance of this requirement on the dijet mass in order to obtain limits on the total signal cross section. The difference between the expected limits using the full shape and the truncated shape is negligible for most resonance masses and widths, because the signal tail at low mass is insignificant compared to the steeply falling background. For some resonance masses beyond our maximum, the low dijet mass tail causes the limit to behave in an unphysical manner as a function of increasing width. This condition does not affect the maximum resonance mass presented for a spin-2 $\Pq\Pq$ resonance in Fig.~\ref{fig:wide_qq}, but it does restrict the maximum masses presented for a spin-2 $\Pg\Pg$ resonance in Fig.~\ref{fig:wide_gg} and a vector resonance in Fig.~\ref{fig:wide_vector}. For example, for a vector resonance, we find that the highest resonance mass that satisfies this condition is 5\TeV for a resonance with 30\% width, 6\TeV for 20\% width, 7\TeV for 10\% width, and 8\TeV for a narrow resonance.
It is useful to define the signal pseudo-significance distribution $\text{S}/\sqrt{\text{B}}$, where S is the resonance signal and B is the QCD background. The signal pseudo-significance indicates sensitivity to the signal in the presence of background as a function of dijet mass, and has been used as an alternative method of evaluating the sensitivity of the search to the low-mass tail. The maximum resonance mass values we present correspond to a 70\% acceptance for the signal pseudo-significance, when the signal shape is truncated at 70\% of the nominal resonance mass. This demonstrates that, for resonance masses and widths that satisfy our resonance mass condition, the signals are being constrained mainly by data in the dijet mass region near the resonance pole. Signal injection tests analogous to those already described for the narrow resonance search were repeated for the broad resonance search, and the bias in the extracted signal was again found to be negligible. As discussed in the previous CMS search for broad dijet resonances~\cite{Khachatryan:2015sja}, our signal shapes consider only the $s$-channel process, which dominates the signal, and our results do not include the possible effects of the $t$-channel exchange of a new particle or the interference between the background and signal processes. \subsection{Limits on the coupling to quarks of a broad DM mediator} The cross section limits in Fig.~\ref{fig:wide_vector} have been used to derive constraints on a DM mediator. The cross section for mediator production for $\ensuremath{m_{\text{DM}}}\xspace=1\GeV$ and $\ensuremath{g_{\text{DM}}}\xspace=1$ is calculated at leading order using \MADGRAPH{5}\_a\MCATNLO version 2.3.2~\cite{Alwall:2014hca} for mediator masses within the range $1.6 < \ensuremath{M_{\text{Med}}}\xspace < 4.1$\TeV in 0.1\TeV steps and for quark couplings within the range $0.1<\ensuremath{g_\PQq}\xspace<1.0$ in 0.1 steps.
For these choices the relationship between the width and $\ensuremath{g_\PQq}\xspace$ given in Refs.~\cite{Boveia:2016mrp,Abdallah:2015ter} simplifies to \begin{equation} \Gamma_{\text{Med}} \approx \frac{(18\ensuremath{g_\PQq}\xspace^2 + 1)\ensuremath{M_{\text{Med}}}\xspace}{12\pi}, \label{eqWidth} \end{equation} for both vector and axial-vector mediators. For each mediator mass value, the predictions for the cross section for mediator production as a function of $\ensuremath{g_\PQq}\xspace$ are converted to a function of the width, using Eq.~(\ref{eqWidth}), and are then compared to our cross section limits from Fig.~\ref{fig:wide_vector} to find the excluded values of $\ensuremath{g_\PQq}\xspace$ as a function of mass for a spin-1 resonance, shown in Fig.~\ref{figCouplingWide}. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{Figure_019.pdf} \caption{ The 95\% \CL upper limits on the universal quark coupling $\ensuremath{g_\PQq}\xspace$ as a function of resonance mass for a vector mediator of interactions between quarks and DM particles. The right vertical axis shows the natural width of the mediator divided by its mass. The observed limits, taking into account the natural width of the resonance, are shown in red (upper solid curve); the expected limits (dashed) and their variation at the 1 and 2 standard deviation levels (shaded bands) are also shown. The observed limits from the narrow resonance search are in blue (lower solid curve), but are only valid for width values up to approximately 10\% of the resonance mass. The exclusions are computed for a spin-1 mediator and a Dirac DM particle with a mass $\ensuremath{m_{\text{DM}}}\xspace=1$\GeV and a coupling $\ensuremath{g_{\text{DM}}}\xspace=1.0$.} \label{figCouplingWide} \end{figure} Also shown in Fig.~\ref{figCouplingWide} is the limit on $\ensuremath{g_\PQq}\xspace$ from the quark-quark narrow resonance shape we used in the previous sections to set narrow-resonance limits.
These are equal to the limits on $\ensuremath{g_\PQq}\xspace$ in Fig.~\ref{fig:DMCouplingExclusion} and are derived from the limits on $\ensuremath{g_\PQq}\xspace^{\prime}$ in Fig.~\ref{figCoupling} using the formula \begin{equation} \ensuremath{g_\PQq}\xspace = \ensuremath{g_\PQq}\xspace^{\prime} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{4} + \frac{1}{18(\ensuremath{g_\PQq}\xspace^{\prime})^2}}}. \label{eqDMcoupling} \end{equation} Equation~(\ref{eqDMcoupling}) is applicable for a narrow mediator with $\ensuremath{g_{\text{DM}}}\xspace=1$ and mass much larger than the quark and DM particle masses. The quark-quark narrow-resonance limits are derived from a narrow spin-2 resonance shape, which is approximately the same as a spin-1 resonance shape for small values of $\ensuremath{g_\PQq}\xspace$, and therefore in Fig.~\ref{figCouplingWide} at small values of $\ensuremath{g_\PQq}\xspace$ the narrow-resonance limits are roughly the same as the limits which take into account the width of the resonance. For resonance masses smaller than about 2.5\TeV, the acceptance of the dijet mass requirement ${\ensuremath{m_{\mathrm{jj}}}\xspace>1.25}$\TeV is reduced by taking into account the resonance natural width, resulting in a small increase in the limits compared to the narrow-resonance limits, which can be seen in Fig.~\ref{figCouplingWide}. At 3.7\TeV, the largest value of the resonance mass considered approximately valid for the narrow-resonance limits on \ensuremath{g_\PQq}\xspace, the narrow-resonance limit is ${\ensuremath{g_\PQq}\xspace>0.42}$, while the more accurate limit taking into account the width for the spin-1 resonance is $\ensuremath{g_\PQq}\xspace>0.53$. The limits taking into account the natural width can be calculated up to a resonance mass of 4.1\TeV for a width up to 30\% of the resonance mass. 
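The correspondences between coupling and relative width used in this paper follow directly from Eqs.~(\ref{eqWidthZp}), (\ref{eqWidth}), and (\ref{eqDMcoupling}); a minimal numerical check (the helper names are illustrative):

```python
import math

def zprime_width_fraction(gq_prime):
    # Gamma/M for the leptophobic Z', Eq. (eqWidthZp): Gamma = 3 g'^2 M / (2 pi)
    return 3 * gq_prime ** 2 / (2 * math.pi)

def mediator_width_fraction(g_q):
    # Gamma_Med/M_Med for the DM mediator, Eq. (eqWidth), for m_DM = 1 GeV, g_DM = 1
    return (18 * g_q ** 2 + 1) / (12 * math.pi)

def gq_from_gq_prime(gq_prime):
    # Eq. (eqDMcoupling): mediator coupling equivalent to a leptophobic-Z' coupling
    return gq_prime * math.sqrt(0.5 + math.sqrt(0.25 + 1 / (18 * gq_prime ** 2)))

print(round(zprime_width_fraction(0.45), 2))    # 0.1  -> g' = 0.45 gives ~10% width
print(round(mediator_width_fraction(0.76), 2))  # 0.3  -> g_q = 0.76 gives ~30% width
print(round(gq_from_gq_prime(0.182), 2))        # 0.25 -> inverts the m_DM = 0 relation
```

The last line confirms that Eq.~(\ref{eqDMcoupling}) maps $\ensuremath{g_\PQq}\xspace^{\prime}\approx 0.182$ back to the benchmark coupling $\ensuremath{g_\PQq}\xspace=0.25$.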
The limits from the narrow resonance search are approximately valid up to coupling values of about 0.4, corresponding to a width of 10\%, while the limits taking into account the natural width of the resonance probe up to a coupling value of 0.76, corresponding to a natural width of 30\%. We conclude that these limits on a vector DM mediator, taking into account the natural width of the resonance, improve on the accuracy of the narrow-width limits and extend them to larger values of the resonance mass and coupling to quarks. \section{Summary} Searches have been presented for resonances decaying into pairs of jets using proton-proton collision data collected at $\sqrt{s} = 13$\TeV corresponding to an integrated luminosity of up to 36\fbinv. A low-mass search, for resonances with masses between 0.6 and 1.6\TeV, is performed based on events with dijets reconstructed at the trigger level from calorimeter information. A high-mass search, for resonances with masses above 1.6\TeV, is performed using dijets reconstructed offline with a particle-flow algorithm. The dijet mass spectra are observed to be smoothly falling distributions. In the analyzed data samples, there is no evidence for resonant particle production. Generic upper limits are presented on the product of the cross section, the branching fraction to dijets, and the acceptance for narrow quark-quark, quark-gluon, and gluon-gluon resonances that are applicable to any model of narrow dijet resonance production. String resonances with masses below 7.7\TeV are excluded at 95\% confidence level, as are scalar diquarks below 7.2\TeV, axigluons and colorons below 6.1\TeV, excited quarks below 6.0\TeV, color-octet scalars below 3.4\TeV, $\PWpr$ bosons with SM-like couplings below 3.3\TeV, $\PZpr$ bosons with SM-like couplings below 2.7\TeV, Randall--Sundrum gravitons below 1.8\TeV and in the range 1.9 to 2.5\TeV, and dark matter mediators below 2.6\TeV.
The limits on both vector and axial-vector mediators, in a simplified model of interactions between quarks and dark matter particles, are presented as functions of dark matter particle mass. Searches are also presented for broad resonances, including for the first time spin-1 resonances with intrinsic widths as large as 30\% of the resonance mass. The broad resonance search improves and extends the exclusions of a dark matter mediator to larger values of its mass and coupling to quarks. The narrow and broad resonance searches extend limits previously reported by CMS in the dijet channel, resulting in the most stringent constraints on many of the models considered. \begin{acknowledgments} \hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. 
Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Research, Development and Innovation Fund, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding 
Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on, Programa Consolider-Ingenio 2010, Plan Estatal de Investigaci\'on Cient\'{\i}fica y T\'ecnica y de Innovaci\'on 2013-2016, Plan de Ciencia, Tecnolog\'{i}a e Innovaci\'on 2013-2017 del Principado de Asturias and Fondo Europeo de Desarrollo Regional, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P.
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science - EOS'' - be.h project n. 30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum'') program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Scientific and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa de Excelencia Mar\'{i}a de Maeztu and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
\section{Introduction} A common economic problem is deciding where a public facility should be located to service a population of agents with heterogeneous preferences. For example, a government needs to decide the location of a public hospital or library. More abstractly, the `location' may represent a type or quality of a service. For example, a government may have a fixed hospital location but must decide on the type of service in which the hospital will specialize and, in particular, whether the service will be targeted at those suffering from acute, moderate, or mild severity of a certain illness. In such problems, participants may benefit by misreporting their preferences, and this can be problematic for a decision maker trying to find a socially optimal solution. This leads to the mechanism design problem of providing optimal, or approximately optimal, solutions while also being \emph{strategyproof}, i.e., no agent can profit from misreporting their preferences regardless of what others report.\footnote{We focus on the `mechanism design without money' problem where the use of money is assumed not to be permitted. This is a natural assumption for environments where the use of money is considered unlawful (e.g., organ donations) or unethical (e.g., political decision making, or locating a public good).} We call this the \emph{facility location problem}. A large literature has studied the facility location problem under the assumption that the facility does not face capacity constraints. When the facility is not capacity constrained, all agents can benefit from the facility and hence it is modeled as a \emph{public good}.\footnote{A public good is non-rivalrous and non-excludable.} Under this assumption, the mechanism design problem is explored in several classic papers~\cite{Blac48,Gibb73,Gibb77a,Satt75,Moul80,BoJo83}, and more recently in algorithmic mechanism design~\cite{PrTe13,NiRo01,FFG16}.
To the best of our knowledge, an unexplored setting for the mechanism design problem is where the public facility is capacity constrained.\footnote{There is a distinct setting, sometimes referred to as the `constrained facility location' problem~\cite{SuBo15}, where the feasible locations for the facility are constrained. The algorithmic problem of locating multiple capacity constrained facilities when agents are not strategic has also been studied~\cite{BrCh89,KPTW01,Vygen05:Approximation}.} Capacity constraints limit the number of agents who can benefit from the facility's services. Such constraints are ubiquitous in practice: a hospital is capacity constrained by the number of beds and doctors, and a library may have limited seating. When present, capacity constraints introduce a particular form of \emph{rivalry} to the facility, since once the facility reaches its capacity limit additional agents are prevented from using, and hence benefiting from, the facility. A number of new strategic challenges arise for the mechanism designer when the public facility is capacity constrained but still non-excludable. For example, when the mechanism designer chooses a location for the facility, they cannot stipulate which agents will be served; instead, these decisions are made by the participants through strategic interactions once the facility has been located. That is, the ex-post Nash equilibrium of a subgame induced by the facility location determines the agents who ultimately benefit from the facility and those who do not. This introduces a technical challenge, because it leads agents to have interdependent utilities: the utility for a particular location depends on who else will use the location (and in turn on their preferences). Furthermore, reports are made in anticipation of the extensive-form game and its ex-post Nash equilibrium, and the designer must consider mechanisms that are strategyproof in this broader game-theoretic context.
In this paper, we initiate the study of the capacity constrained facility location problem from the viewpoint of mechanism design. In our model, $n$ agents are located in the $[0,1]$ interval, and there is a single facility to be located; this facility is able to service at most $k$ agents, where $k$ is some positive integer. When $k\ge n$ the capacity constraint has no effect, and the capacity constrained facility location problem is equivalent to the classic problem. Agent locations are privately known, and, given a facility location, the ex-post Nash equilibrium of an induced subgame determines which agents are served. The mechanism designer's problem is to design mechanisms that are strategyproof and maximize social welfare. In our model, we take strategyproof to mean ex-post dominant-strategy incentive compatible (DIC\xspace) at the reporting stage. That is, conditional on the ex-post Nash equilibrium being attained in the induced subgame, an agent never benefits ex-post from misreporting their location to the mechanism, regardless of what other agents report and regardless of other agents' true locations. For ease of exposition, a mechanism that is DIC\xspace at the reporting stage will simply be said to be DIC\xspace. Unlike the classic facility location problem where the facility is not capacity constrained, the social welfare optimal mechanism is not DIC\xspace except when the capacity constraint is trivial, i.e., $k=1$ or $k=n$. As a result, we follow the approach of Procaccia and Tennenholtz~\cite{PrTe13} and consider the approximate mechanism design problem. We adopt the worst-case approximation measure for social welfare, and ask: what is the best approximation achievable with a DIC\xspace mechanism, and how does this vary as a function of the capacity constraint? The literature studying the facility location problem without capacity constraints, or simply with $k=n$, provides a number of important results.
Gibbard and Satterthwaite~\cite{Gibb73,Satt75} showed a powerful impossibility result: when agents can have unrestricted preferences there need not exist any strategyproof mechanism. As a result, more recent works typically restrict agents' preferences over the location of the facility to be single-peaked, and sometimes in addition symmetric.\footnote{A single-peaked preference is symmetric if equidistant locations on either side of the ideal, or `peak', location are always equally preferred.} We focus on the case where, conditional on the agent being served, the agent has preferences that are both single-peaked and symmetric. When the objective of the mechanism designer is to maximize social welfare, i.e., utilitarian welfare, the standard median mechanism is both strategyproof and social welfare optimal~\cite{Blac48}. More generally, a goal of the social choice literature has been to characterize the complete family of strategyproof mechanisms. Closest to our setting, Border and Jordan~\cite{BoJo83} provide a partial characterization of strategyproof mechanisms via the family of Generalized Median Mechanisms (GMMs). Border and Jordan show that a mechanism is strategyproof and unanimity respecting\footnote{Unanimity respecting simply means that if there is a unanimously most preferred facility location then the mechanism must locate the facility at this location.} if and only if it is a GMM, and that the family of GMMs is strictly smaller than the complete family of strategyproof mechanisms.\footnote{We note that in a slightly different setting, where the single-peaked preferences are possibly asymmetric, GMMs provide a complete characterization of strategyproof and `peak only' mechanisms (Proposition 3 of Moulin~\cite{Moul80}).
Example~\ref{Example: DIC hard} in the present paper provides an example of a mechanism that is strategyproof in the Border and Jordan~\cite{BoJo83} setting but not in the Moulin~\cite{Moul80} setting.} This has left a gap in the literature: characterizing the complete family of strategyproof mechanisms and understanding the difference between strategyproof mechanisms that are GMMs and those that are not. Figure~\ref{Figure: Summary of border jordan 1983} schematically illustrates this gap.\\ \textbf{Our Contributions:} We introduce a new mechanism design problem, the capacity constrained facility location problem. This problem is a natural variant of the classic facility location problem in which the facility is assumed to face capacity constraints. A conceptual contribution is to formalize the effect of capacity constraints when the facility is non-excludable but cannot service all agents. We do this by defining an extensive-form game involving the mechanism designer and agents. First, agents report their preferences to the designer, and then the facility is located by the mechanism. Once the facility is located, a subgame is induced where agents strategically choose whether or not to attempt to be served by the facility. The ex-post Nash equilibrium determines which agents are served by the facility and which are not. We seek mechanisms that are strategyproof in this broader game-theoretic context, i.e., ex-post dominant-strategy incentive compatible; that is, conditional on the ex-post Nash equilibrium being achieved in the subgame, no agent can benefit from misreporting their location, regardless of what other agents report and regardless of other agents' true locations. Our main theoretical contribution is a complete characterization of DIC\xspace mechanisms for the capacity constrained facility location problem.
We show that a mechanism is DIC\xspace if and only if it belongs to the established family of mechanisms called the Generalized Median Mechanisms (GMMs), which appear in Moulin~\cite{Moul80} and Border and Jordan~\cite{BoJo83}. Thus, the framework we introduce surprisingly provides a new characterization of GMMs. This result contributes a novel perspective on a ``major open question'' (Barber{\`a}, Mass{\'o}, and Serizawa~\citep{BMS98}) posed in Border and Jordan~\cite{BoJo83} (further discussion is provided in Section~\ref{Section: related lit}). We also provide algorithmic results and study the performance of DIC\xspace mechanisms in optimizing social welfare. We adopt the worst-case approximation measure, and provide a lower bound on the approximation ratio of any DIC\xspace mechanism. We show that at best the approximation ratio of a DIC\xspace mechanism is $\frac{2k}{k+1}$ when $k\le \ceil{(n-1)/2}$, and $\max\{\frac{n-1}{k+1}, 1\}$ otherwise. Interestingly, this lower bound is achieved by the standard median mechanism (which is also DIC\xspace) when $k\le \ceil{(n-1)/2}$ or $k=n$, and hence the median mechanism is optimal among all DIC\xspace mechanisms in those ranges. Figure~\ref{figure: illustration DIC lower boundxxy} illustrates these approximation results. Finally, we consider an extension of our framework where the mechanism designer can also restrict access to the facility, and hence dictate which agents are served. This extension is relevant to settings where the designer can issue permits, and prevent certain agents from accessing the facility. Under an anonymity assumption, we show that no mechanism that both locates the facility and stipulates which agents can be served is DIC\xspace.
\begin{figure}[H] \centering \begin{tikzpicture}[scale=1.05, declare function={ func(\x)= (\x <= 50) * (2*(\x)/(\x+1)) + (\x>50) * (99/(\x+1)) ; funcy(\x)= (\x <= 50) * (2*(\x)/(\x+1)) + (\x>50) *(2*(\x)/(\x+1)) ; funcz(\x)= (\x <= 81) * (2*(\x)/(\x+1)) + (\x>81) *( 1+ 2*(100-\x+1)/(3*\x-2*(100)-2)) ; } ] \begin{axis}[ axis x line=middle, axis y line=middle, ymin=0, ymax=3, ytick={0,1,2,3}, ylabel={$\alpha$-approximation}, xmin=0, xmax=100, xlabel=$k$, xtick={0,1,25,50,75,100}, xticklabels={$0$, 1,$n/4$,$n/2$, $3n/4$,$n$}, domain=0:100,samples=201, ] \addplot [blue,ultra thick, domain=1:100] {func(x)}; \addplot [red,ultra thick,dotted, domain=1:100] {funcz(x)}; \legend{DIC\xspace lower bound, median mech. upper bound} \end{axis} \end{tikzpicture} \captionsetup{justification=centering,margin=2cm} \caption{Worst-case approximation ratio as a function of the capacity constraint, $k$.} \label{figure: illustration DIC lower boundxxy} \end{figure} \textbf{Outline:} Section~\ref{Section: related lit} provides a brief literature review. Section~\ref{sec:model} presents our model and formalizes the objective of the mechanism designer. Section~\ref{Section: characterization} then presents our key characterization result for DIC\xspace mechanisms. Section~\ref{Section: approxim of dic} explores the performance, i.e., approximation results, of DIC\xspace mechanisms. Section~\ref{Section: Excludable} considers an extension of our framework where the mechanism designer is able to dictate which agents are served by the facility. Lastly, we conclude with a discussion in Section~\ref{section: discussion and conlc}. \section{Related literature}\label{Section: related lit} A number of papers have considered related mechanism design problems where the use of money is not permitted~\citep{AsRo11,AbSo03,PrTe13,Moul80,BoJo83,Gibb73,Satt75,SuBo15,MLY+16}.
Most closely related to our paper is~\cite{PrTe13}, where agents with single-peaked preferences are located along the real line and the problem of locating a (non-capacity constrained) public facility is studied with the goal of minimizing two distinct objective functions: the total social cost and the maximum social cost. This problem is often referred to as a single facility location problem, or single facility location game.\footnote{We do not review a large segment of the computer science and operations research literature on facility location problems that assumes complete information and hence does not require a mechanism design approach to overcome strategic tensions (for a survey see~\cite{BrCh89}). Furthermore, this literature, when incorporating capacity constraints, typically focuses on the problem of locating multiple capacity constrained facilities that have sufficient capacity to service all agents~\cite{Cygan12:LP,Charikar02:Constant,KPTW01,Vygen05:Approximation}. Instead, we review the subset of the literature that assumes strategic agents and takes a mechanism design approach.} In this paper, we focus on minimizing the first objective function in the new environment where the facility is capacity constrained. In contrast to the setting studied in~\cite{PrTe13}, agents in our model have interdependent utilities due to the capacity constraints of the facility and the induced subgame. Accordingly, the mechanism design problem requires consideration of a broader game-theoretic environment where agents face an extensive-form game when reporting preferences. Another large body of literature has been concerned with characterizing DIC\xspace mechanisms for the unconstrained facility location problem. The key pioneering works in this area are by Moulin~\cite{Moul80}, and Border and Jordan~\cite{BoJo83}.
In one-dimensional space and for symmetric and single-peaked preferences, Border and Jordan~\cite{BoJo83} characterize a general class of DIC\xspace mechanisms which have come to be known as \emph{generalized median mechanisms} (GMMs), and in addition show that when the property of unanimity is enforced every DIC\xspace mechanism is a GMM.\footnote{Border and Jordan~\cite{BoJo83} also consider the problem in higher dimensions.} These results differ slightly from the characterization results of Moulin~\cite{Moul80} since the setting studied in~\cite{Moul80} does not restrict the single-peaked preferences to be symmetric. Characterizing DIC\xspace but non-unanimity respecting mechanisms was posed as an open problem; as stated by Border and Jordan in~\cite{BoJo83}, ``\emph{[the characterization] leaves several open problems. The most obvious question is: what happens if the unanimity assumption is dropped?}'' Characterizations, however, have remained elusive, and it has become known as a ``\emph{major open question}''~\citep{BMS98} with only partial progress towards a resolution~\citep{Chin97,BMS98,PPS+97,Weym11}. In this paper we focus on the one-dimensional case where open questions still remain; in particular, the results of~\cite{BoJo83} in one-dimensional space leave two gaps: \begin{enumerate} \item there exist non-unanimity respecting DIC\xspace mechanisms that are not GMMs, and \item there exist DIC\xspace mechanisms that are GMMs but do not respect unanimity. \end{enumerate} Our characterization of DIC\xspace mechanisms via the family of GMMs, although considered in a different setting where the facility is capacity constrained, applies more generally to mechanisms that are not unanimity respecting. Hence, we contribute a novel perspective on these gaps in characterization, showing that a mechanism is DIC\xspace for every possible capacity constraint $k\le n$ if and only if it is a GMM.
This means that any mechanism in gap (1) is not DIC\xspace when the facility is capacity constrained with $k<n$. Furthermore, the unanimity property is sufficient to ensure that a mechanism that is DIC\xspace in the non-capacity constrained setting remains DIC\xspace when capacity constraints are present. \section{Model, Basic Properties, and Definitions} \label{sec:model} \textbf{Model:} Let $N=\{1,\ldots, n\}$ be a finite set of $n$ agents and let $X=[0,1]$ be the domain of agent locations. Each agent $i\in N$ has a location $x_i\in X$, which is privately known; the profile of agent locations is denoted by $\boldsymbol{x}=(x_1, \ x_2, \ \ldots \ , \ x_n)$. The profile of all agents except some agent $i\in N$ is denoted by $\boldsymbol{x}_{-i}=(x_1, \ x_2,\ \ldots \ , x_{i-1}, \ x_{i+1}, \ \ldots \ , \ x_n)$. There is a single facility to be located in $X$ by some mechanism. A \emph{mechanism} is a function $M: \prod_{i\in N} X\rightarrow X$ mapping a profile of locations to a single location.\footnote{We restrict our attention to deterministic mechanisms.} We denote the mechanism's output, or facility location, by $s\in X$. The facility faces a capacity constraint $k \ : \ k \le n$, which provides a limit on the number of agents that can be served. A \emph{served} agent attains utility $u_i=1-d(s, x_i)\ge 0$, where $d(\cdot, \ \cdot)$ denotes the Euclidean metric; an \emph{unserved} agent attains zero utility, $u_i=0$.\footnote{Our characterization results (Section~\ref{Section: characterization}) do not rely on this specific utility function -- we only require that agents weakly prefer being served to not being served, and that, conditional on being served, the agent's utility is symmetric and (strictly) single-peaked.
However, our approximation results do rely on the choice of utility function.} The set of agents served under the facility's limited capacity is not directly controlled by the mechanism, since the facility is assumed to be non-excludable.\footnote{In Section~\ref{Section: Excludable} we weaken this assumption and consider the problem when the facility can be made excludable.} Instead, this is determined by the equilibrium outcome of a subgame induced by the mechanism's choice of facility location. Given an instance $\langle \boldsymbol{x}, s, k\rangle$, we assume that the set of agents served by the facility is determined via the ex-post Nash equilibrium\footnote{That is, no agent has an incentive to unilaterally deviate, whatever the preferences of each agent.} of a subgame, $\Gamma_{\boldsymbol{x}}(s,k)$. The subgame $\Gamma_{\boldsymbol{x}}(s,k)$ is as follows. Each agent $i\in N$ chooses an action $a_i\in A= \{\emptyset, s\}$ of whether, or not, to travel from their location $x_i$ to the facility location $s$. Action $a_i=s$ denotes agent $i$'s choice to travel to the facility, and action $a_i=\emptyset$ denotes the agent's choice not to travel to the facility. We denote the profile of agent actions by $\boldsymbol{a}=(a_1, \ a_2, \ \ldots \ , a_n)$. An agent $i$ is \emph{served} by the facility if they travel to the facility, $a_i=s$, and strictly fewer than $k$ other agents travel to the facility, i.e., $|N(\boldsymbol{a},s)|\le k$ where $N(\boldsymbol{a},s):=\{i\in N\ : \ a_i=s\}$. If instead they travel to the facility and at least $k$ other agents also travel to the facility, i.e., $|N(\boldsymbol{a},s)|>k$, then a tie-breaking rule is used to determine which subset of $k$ agents in $N(\boldsymbol{a},s)$ is served.
We assume a distance-based tie-breaking rule ($\triangleright$) whereby agent $i$ has higher priority than agent $j$, denoted $i\triangleright j$, if agent $i$ is closer to the facility than agent $j$, i.e., $d(s, x_i)<d(s, x_j)$; if agents $i$ and $j$ are equidistant, i.e., $d(s, x_i)=d(s, x_j)$, then we apply some deterministic tie-breaking rule.\footnote{This ensures that the binary relation $\triangleright$ is complete.} This distance-based tie-breaking rule can be motivated by a `first-come-first-serve' protocol when the location $s$ is geographical and agents physically travel to the facility to be served. If the location $s$ corresponds to a type, or quality, of service, the `first-come-first-serve' protocol is analogous to a `best-fit' tie-breaking protocol that prioritizes agents according to how close the type of service being offered is to their true needs, i.e., $d(s,x_i)$. An agent $i$ with location $x_i$ attains utility $1-d(s, x_i)$ if $a_i=s$ and they are served; if $a_i=s$ and they are not served, they attain utility $-d(s,x_i)$; otherwise, $a_i=\emptyset$ and agent $i$ attains zero utility. Abusing terminology slightly, given a profile of locations $\boldsymbol{x}$ and facility location $s$, we will refer to the $k$ highest-priority agents with respect to the distance-based tie-breaking rule ($\triangleright$) as the \emph{$k$-closest} agents. We denote this set of agents by $N_k^*(\boldsymbol{x}, s)$. \\ \textbf{Basic properties of the model:} For any instance $\langle \boldsymbol{x}, s, k\rangle$, the subgame $\Gamma_{\boldsymbol{x}}(s, k)$ has an (essentially) unique equilibrium. There always exists an equilibrium where the $k$-closest agents, $N_k^*(\boldsymbol{x}, s)$, choose to travel to the facility and are served by the facility.
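For intuition, the equilibrium outcome described above admits a short computational sketch. This sketch is ours, not part of the paper's formalism; the function name is illustrative, and an index-based ordering stands in for the unspecified deterministic tie-break between equidistant agents.

```python
def equilibrium_utilities(x, s, k):
    """Ex-post equilibrium utilities of the subgame Gamma_x(s, k):
    the k-closest agents to s are served (utility 1 - d), others get 0."""
    n = len(x)
    # Distance-based priority; the agent index breaks ties between
    # equidistant agents, as one deterministic tie-breaking rule.
    priority = sorted(range(n), key=lambda i: (abs(x[i] - s), i))
    served = set(priority[:k])  # the k-closest agents N_k^*(x, s)
    return [1 - abs(x[i] - s) if i in served else 0.0 for i in range(n)]

# Two agents at 1/4 fill the capacity k = 2, so the agent at 3/8 is unserved.
print(equilibrium_utilities([0.25, 0.25, 0.375], s=0.25, k=2))  # [1.0, 1.0, 0.0]
```

Note that the served set depends only on distances to $s$, which is why moving $s$ toward an agent's location can only (weakly) improve that agent's priority, in line with the single-peakedness observation below.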
In instances where one or more of the $k$-closest agents are indifferent between being served and not traveling to the facility, i.e., whenever $d(s, x_i)=1$ for some $i\in N_k^*(\boldsymbol{x}, s)$, multiple equilibria arise. For the purposes of this paper these equilibria are all `equivalent' since every agent attains the same utility in each of the equilibria. Proposition~\ref{Proposition: Basic prop 1} states this basic property. The proof is straightforward and left to the appendix for the interested reader. \begin{proposition}\label{Proposition: Basic prop 1} For any instance $\langle \boldsymbol{x}, s, k\rangle$, there exists an equilibrium of the subgame $\Gamma_{\boldsymbol{x}}(s, k)$ and, furthermore, in every equilibrium agent $i\in N$ attains utility $1-d(s, x_i)$ if $i\in N_k^*(\boldsymbol{x}, s)$, and otherwise attains zero utility. \end{proposition} Given Proposition~\ref{Proposition: Basic prop 1}, we can denote agent $i$'s ex-post equilibrium utility from the facility location $s$ simply by $u_i^*(s, \boldsymbol{x}, k)$. A useful observation is that agents' ex-post utilities are (weakly) single-peaked; this result is stated in Proposition~\ref{Proposition: Basic prop 2}. The proof is straightforward and left to the appendix for the interested reader. Intuitively, the result holds because under the distance-based priority ($\triangleright$) an agent's priority only (weakly) improves when the facility moves from a location $s<x_i$ to a new location $s' \ : \ s<s'\le x_i$ (similarly for $s>x_i$). \begin{proposition}\label{Proposition: Basic prop 2} For any agent $i\in N$ and any pair of instances $\langle \boldsymbol{x}, s, k\rangle$\xspace and $\langle \boldsymbol{x}, s', k\rangle$, if $s<s'\le x_i$ or $x_i\le s'<s$ then $u_i^*(s, \boldsymbol{x}, k)\le u_i^*(s', \boldsymbol{x}, k)$. \end{proposition} In this paper we are interested in `strategyproof' mechanisms where agents do not have an incentive to misreport their location.
In particular, we use the ex-post \emph{Dominant-strategy Incentive Compatible} (DIC) concept of strategyproofness. That is, a mechanism $M$ is DIC\xspace if for every agent $i\in N$ $$u_i^*\Big(M(x_i, \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, k \Big)\ge u_i^*\Big(M(x_i', \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, k \Big)$$ for every $x_i'$, for every $\hat{\boldsymbol{x}}_{-i}$, and for every $\boldsymbol{x}_{-i}$. Note that DIC\xspace implies that, conditional on the ex-post Nash equilibrium being achieved in the subgame $\Gamma_{\boldsymbol{x}}(s, k)$, the mechanism is dominant-strategy incentive compatible at the reporting stage. Formally speaking, the DIC\xspace definition depends on the capacity constraint $k$; however, abusing notation slightly, we omit the $k$ dependence as this will be clear from the context.\\ \textbf{Objective of the mechanism designer:} In this paper we are interested in DIC mechanisms that perform well with respect to \emph{social welfare}, i.e., the sum of agents' equilibrium utilities. As is now standard in the algorithmic mechanism design literature, we measure the performance of a DIC mechanism by the worst-case approximation ratio. Given an instance $\langle \boldsymbol{x}, s, k\rangle$\xspace, denote the optimal social welfare by $\Pi^*(\boldsymbol{x}, k):=\max_{s\in X} \sum_{i=1}^n u_i^*(s, \boldsymbol{x}, k),$ and given a mechanism $M$ let $\Pi_M(\boldsymbol{x}, k)$ denote the social welfare attained by the mechanism, i.e., \begin{align*} \Pi_M(\boldsymbol{x}, k)&:=\sum_{i=1}^n u_i^*(s, \boldsymbol{x}, k) &&\text{where $s=M(\boldsymbol{x})$.} \end{align*} The mechanism $M$ is an $\alpha$-approximation if \begin{align}\label{Equation: approx 1} \max_{\boldsymbol{x}\in \prod_{i=1}^n X}\Bigg\{\frac{\Pi^*(\boldsymbol{x},k)}{\Pi_M(\boldsymbol{x}, k)}\Bigg\}\le \alpha, \end{align} where the LHS of (\ref{Equation: approx 1}) is referred to as the \emph{approximation ratio}.
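The welfare objective can be made concrete with a small brute-force sketch. This is our illustration, not the paper's analysis: $\Pi^*$ is approximated by a grid search over candidate locations, the instance is a hypothetical one, and all function names are our own.

```python
def welfare(x, s, k):
    """Equilibrium social welfare for facility at s: the k closest
    agents are served and each contributes 1 - d(s, x_i)."""
    dists = sorted(abs(xi - s) for xi in x)
    return sum(1 - d for d in dists[:k])

def optimal_welfare(x, k, grid=1001):
    """Brute-force stand-in for Pi^*(x, k), searching a grid on [0, 1]."""
    return max(welfare(x, i / (grid - 1), k) for i in range(grid))

def median_mechanism(x):
    return sorted(x)[(len(x) + 1) // 2 - 1]  # floor((n+1)/2)-th smallest report

# Hypothetical instance: clusters at 0 and 1 with a lone median agent.
x, k = [0.0, 0.0, 0.5, 1.0, 1.0], 2
ratio = optimal_welfare(x, k) / welfare(x, median_mechanism(x), k)
print(ratio)  # 4/3: placing the facility at either cluster serves two agents at distance 0
```

On this instance the median mechanism locates the facility at $0.5$ (welfare $1.5$) while either cluster yields welfare $2$, so the ratio is $4/3$; note this matches the $\frac{2k}{k+1}$ bound for $k=2$ quoted in the contributions.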
A mechanism (or family of mechanisms) is said to have a \emph{lower bound}, $\bar{\alpha}$, on the approximation ratio if \begin{align}\label{Equation: approx 2} \bar{\alpha}\le \max_{\boldsymbol{x}\in \prod_{i=1}^n X}\Bigg\{\frac{\Pi^*(\boldsymbol{x},k)}{\Pi_M(\boldsymbol{x}, k)}\Bigg\}.\end{align} We refer to a mechanism $M$ that attains the optimal social welfare for all instances $\langle \boldsymbol{x}, s, k\rangle$\xspace, and hence is an $\alpha=1$-approximation, as an \emph{optimal mechanism}. Again, the optimal mechanism definition depends on the capacity constraint $k$; however, abusing notation, we will omit the $k$ dependence as this will be clear from the context. Note that the optimal mechanism need not, and in general will not, be DIC\xspace for a given $k$.\\ \begin{remark}\label{Remark: special case} When $k=n$ our model reduces to the well-known facility location problem studied in~\cite{PrTe13,Moul80,Blac48}. Accordingly, this case ($k=n$) is fully resolved: the `median' mechanism, which always locates the facility at the median reported location, is both optimal and DIC\xspace. However, the case $k<n$ has not been studied before -- this is the focus of the present paper. \end{remark} To illustrate how the case where $k<n$ differs from the standard $k=n$ setting, we provide an example. The example considers a mechanism that is DIC\xspace when $k=n$ but is not DIC\xspace for any capacity constraint $k<n$. \begin{example}\label{Example: DIC hard} Let $M$ be the mechanism such that $M(\boldsymbol{x})=\text{arg}\min_{s\in \{ 1/4, \ 3/4\}} d(s, x_i)$ for some fixed $i\in N$, tie-breaking in favor of $s=1/4$ if necessary. That is, the mechanism locates the facility at either location $1/4$ or $3/4$ depending on which is closest to agent $i$'s report. First, notice that the mechanism $M$ is DIC\xspace when $k=n$. If $k=n$ then every agent $i$ is always served by the facility and hence attains utility $1-d(s, x_i)$ for any facility location $s$.
It is immediate that agent $i$ can never strictly benefit from misreporting their location. However, when $k<n$ the mechanism is not DIC\xspace. To see this, consider an instance where agent $i$ is located at $3/8$ and all other agents are located at $1/4$. When agent $i$ truthfully reports, the facility is located at $1/4$ and agent $i$ is not served -- leading to zero utility. On the other hand, misreporting to $x_i'\in (1/2,1]$ leads to facility location $3/4$, and agent $i$ is the closest agent to the facility. In this case agent $i$ attains strictly higher utility equal to $1-d(3/4, 3/8)>0$. Thus, the mechanism is not DIC\xspace for any $k<n$.\hfill $\diamond$ \end{example} \subsection{A complete characterization of DIC\xspace mechanisms}\label{Section: characterization} We begin by defining a family of mechanisms called \emph{Generalized Median Mechanisms (GMM)}. This family was introduced by Border and Jordan~\cite{BoJo83} for the $k=n$ setting, where it provides a partial characterization of DIC\xspace mechanisms. The main result of the present paper shows that GMMs provide a complete characterization of the mechanisms that are (1) DIC\xspace for all $k\le n$, and (2) DIC\xspace for some $k<n$. \begin{definition}[Generalized Median Mechanism (GMM)] A mechanism $M$ is said to be a \emph{Generalized Median Mechanism} (GMM) if for each $S\subseteq N$ there is a constant $a_S$ such that for all location profiles $\boldsymbol{x}$ \begin{align}\label{equation: GMM} M(\boldsymbol{x})=\min_{S\subseteq N} \max\Big\{\max_{i\in S}\{x_i\}, a_S\Big\}. \end{align} \end{definition} To build some intuition, we highlight some well-known GMM mechanisms: \begin{enumerate} \item The \emph{median mechanism}\footnote{A mechanism that always outputs the median of the reported location profile, i.e., the $\floor{(n+1)/2}$-th smallest report.} is attained from (\ref{equation: GMM}) by setting $a_S=1$ for all subsets $S\subseteq N$ with $|S|<\floor{(n+1)/2}$ and $a_S=0$ otherwise.
\item The \emph{$s$-constant mechanism}\footnote{A mechanism that always outputs the location $s$.} for some location $s\in X$, i.e., the mechanism that always outputs location $s$, is attained from (\ref{equation: GMM}) by setting $a_\emptyset=s$ and $a_S=1$ for all other (non-empty) subsets $S\subseteq N$. \item The \emph{agent $i$ dictatorship mechanism}\footnote{A mechanism that always outputs the location of agent $i$'s report.} is attained from (\ref{equation: GMM}) by setting $a_S=0$ for $S=\{i\}$ and $a_S=1$ for all other subsets $S\subseteq N$. \end{enumerate} An example of a mechanism that is not a GMM is the dictatorial-style mechanism considered in Example~\ref{Example: DIC hard}. The main result of the present paper is the following characterization: A mechanism $M$ is DIC\xspace for some $k< n$ if and only if $M$ is DIC\xspace for every $k\le n$ if and only if $M$ is a GMM. This result is stated in Theorem~\ref{Corollary: equivalence results}. \begin{theorem}\label{Corollary: equivalence results} Let $M$ be a mechanism. The following are equivalent: \begin{enumerate} \item $M$ is a GMM, \item $M$ is DIC\xspace for some $k<n$, \item $M$ is DIC\xspace for every $k\le n$. \end{enumerate} \end{theorem} We present the proof via a series of propositions, utilizing a characterization due to Border and Jordan~\cite{BoJo83}. Before presenting these propositions we illustrate the contribution of Theorem~\ref{Corollary: equivalence results}, benchmarked against the results of~\citet{BoJo83}, where GMMs are shown to be a strict subset of the DIC\xspace mechanisms when $k=n$. Below, in Figure~\ref{Figure: Summary of border jordan 1983}, we present the result of~\citet{BoJo83}. Figure~\ref{Figure: illustration of our results} illustrates our contribution: when considering the capacity constrained problem, with $k<n$, the family of DIC\xspace mechanisms coincides precisely with the GMM family.
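Eq.~(\ref{equation: GMM}) can be evaluated directly by brute force over subsets of $N$ (feasible only for small $n$). The sketch below (our own illustration, not from the paper) uses the convention $\max_{i\in\emptyset}\{x_i\}=-\infty$, so the empty set contributes $a_\emptyset$, and confirms that the median-mechanism constants from item 1 above recover the median report.

```python
from itertools import combinations

def gmm(x, a):
    """Evaluate Eq. (GMM): min over subsets S of max(max_{i in S} x_i, a_S).
    `a` maps each frozenset of agent indices to its constant a_S;
    the max over an empty S is treated as -infinity."""
    n = len(x)
    values = []
    for r in range(n + 1):
        for S in combinations(range(n), r):
            inner = max((x[i] for i in S), default=float("-inf"))
            values.append(max(inner, a[frozenset(S)]))
    return min(values)

def median_constants(n):
    """a_S = 1 if |S| < floor((n+1)/2), else a_S = 0 (the median mechanism)."""
    return {frozenset(S): (1.0 if r < (n + 1) // 2 else 0.0)
            for r in range(n + 1) for S in combinations(range(n), r)}

x = [0.1, 0.9, 0.4, 0.6, 0.3]
print(gmm(x, median_constants(5)))  # 0.4, the median report
```

Setting $a_{\{i\}}=0$ and $a_S=1$ otherwise similarly recovers agent $i$'s report, i.e., the dictatorship mechanism.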
\begin{figure}[H] \centering \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=0.8, dot/.style = {circle, inner sep=0pt, minimum size=1mm, fill, node contents={}} ] \begin{scope} \fill[gray] (0,-0.24) coordinate (b) circle (2.3cm); \end{scope} \draw (0,0) coordinate (a) circle (2.75cm); \draw (0,-0.24) coordinate (b) circle (2.3cm); \node[circle] at (0, 2.35) {DIC}; \node[circle] at (0, 0.8) {GMM}; \end{tikzpicture} \caption{Setting where $k=n$~\cite{BoJo83}.} \label{Figure: Summary of border jordan 1983} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=0.8, dot/.style = {circle, inner sep=0pt, minimum size=1mm, fill, node contents={}} ] \begin{scope} \fill[white] (0,0) coordinate (a) circle (2.75cm); \fill[gray] (0,-0.24) coordinate (b) circle (2.3cm); \end{scope} \draw (0,-0.24) coordinate (b) circle (2.3cm); \node[circle] at (0, 0.8) {DIC $\equiv$ GMM}; \end{tikzpicture} \caption{Setting where $k<n$.} \label{Figure: illustration of our results} \end{minipage} \end{figure} First, we present a result of~\citet{BoJo83} characterizing the family of GMMs via a property of the mechanism that they call `uncompromising'. Informally speaking, a mechanism is uncompromising if an agent cannot influence the mechanism's output in their favor by reporting extreme locations. The most obvious mechanism satisfying this property is the median mechanism.
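As a quick sanity check (our own sketch, not from the paper), the following randomized test verifies this property for the median mechanism: whenever an agent's report lies strictly above (below) the output, moving that report anywhere that stays weakly above (below) the output leaves the output unchanged.

```python
import random

def median_mech(x):
    """Facility at the floor((n+1)/2)-th smallest report."""
    return sorted(x)[(len(x) + 1) // 2 - 1]

random.seed(0)
for _ in range(1000):
    x = [random.random() for _ in range(5)]
    s = median_mech(x)
    i = random.randrange(5)
    if x[i] == s:
        continue  # the property only restricts agents strictly off the output
    # move agent i's report anywhere on the same (weak) side of s
    new_report = random.uniform(s, 1.0) if x[i] > s else random.uniform(0.0, s)
    y = list(x)
    y[i] = new_report
    assert median_mech(y) == s  # output unchanged, as the property requires
print("uncompromising property held on all sampled deviations")
```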
Formally, a mechanism $M$ is said to be \emph{uncompromising} if for every profile of locations $\boldsymbol{x}$, and each agent $i\in N$, if $M(\boldsymbol{x})=s$ then \begin{align}\label{eqaution: uncompromising 1} x_i>s &\implies M(x_i',\boldsymbol{x}_{-i})=s\qquad \text{ for all } x_i'\ge s &&\text{ and,}\\ x_i<s &\implies M(x_i' , \boldsymbol{x}_{-i})=s \qquad \text{ for all } x_i'\le s.\label{eqaution: uncompromising 2} \end{align} \begin{lemma}[Border and Jordan~\cite{BoJo83}]\label{theorem: generalized median} A mechanism $M$ is uncompromising if and only if it is a GMM. \end{lemma} Note that Lemma~\ref{theorem: generalized median}, although proved in the setting where $k=n$, does not rely on any strategic properties of the mechanism and so applies more generally to our setting of interest where $k\le n$. We now prove our first proposition towards the characterization result. Proposition~\ref{proposition: uncompromise connections} says that every GMM is DIC\xspace for any $k\le n$. \begin{proposition}\label{proposition: uncompromise connections} Every GMM is DIC\xspace for any $k\le n$. \end{proposition} \begin{proof} Fix $k\le n$ and let $M$ be a GMM. For the sake of a contradiction suppose that $M$ is not DIC\xspace. That is, for some agent $i$ with location $x_i$, there exists a profile of other agent locations $\boldsymbol{x}_{-i}$, and reports $\hat{\boldsymbol{x}}_{-i}$, such that for some $x_i'\neq x_i$ \begin{align}\label{equation: uncompromising truthful} u_i^*(M(x_i', \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, k)> u_i^*(M(x_i, \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, k). \end{align} Define $s'=M(x_i', \hat{\boldsymbol{x}}_{-i})$ and $s=M(x_i, \hat{\boldsymbol{x}}_{-i})$. It is immediate from (\ref{equation: uncompromising truthful}) that $s\neq x_i$ and $s\neq s'$. Without loss of generality we assume that $x_i>s$. By assumption, $M$ is a GMM and hence by Lemma~\ref{theorem: generalized median} satisfies the uncompromising property.
It follows that $x_i'<s$, since otherwise $x_i' \ge s$ and (\ref{eqaution: uncompromising 1}), applied at the profile $(x_i, \hat{\boldsymbol{x}}_{-i})$, would imply $s'=s$, contradicting (\ref{equation: uncompromising truthful}). Case 1: Suppose $s<s'$. Since $x_i'<s<s'$, the uncompromising property (\ref{eqaution: uncompromising 2}), applied at the profile $(x_i', \hat{\boldsymbol{x}}_{-i})$ with output $s'$, implies that $$M(x_i'', \hat{\boldsymbol{x}}_{-i})=s' \qquad \text{for all } x_i'' \le s'.$$ Now consider a new instance where agent $i$ has true location $y_i=\varepsilon\in (0, s)$, all other agents have true location $y_j=0$ but collectively report $\hat{\boldsymbol{x}}_{-i}$. Note that agent $i$ is the closest agent to any facility location in $[s,1]$ and is hence served. If agent $i$ reports truthfully then, since $y_i\le s'$, the facility location is $s'$ and $i$ attains utility $1-d(s', \varepsilon)$. If instead agent $i$ reports $y_i'=x_i$ then the facility location is $s<s'$ and $i$ attains strictly higher utility $1-d(s, \varepsilon)$. Thus, the mechanism is not DIC\xspace for $k$ -- a contradiction. Case 2: Suppose $s>s'$. Since $x_i>s>s'$, it follows from the single-peaked property (Proposition~\ref{Proposition: Basic prop 2}) that $u_i^*(s, \boldsymbol{x}, k)\ge u_i^*(s', \boldsymbol{x}, k)$. This contradicts (\ref{equation: uncompromising truthful}). \end{proof} We now prove our second proposition towards the characterization result. Proposition~\ref{lemma: SP translation} says that if a mechanism is DIC\xspace for some $k<n$ then it is DIC\xspace for $k=n$. Thus, the DIC\xspace requirement is more restrictive for $k<n$ than for $k=n$ -- meaning that the capacity constraints induce new strategic concerns for the mechanism designer. \begin{proposition}\label{lemma: SP translation} If a mechanism $M$ is DIC\xspace for some $k<n$, then it is DIC\xspace for $k=n$. The converse is not true. \end{proposition} \begin{proof} We prove the contrapositive.
Suppose that $M$ is not DIC\xspace for $k=n$. That is, for some agent $i$ with location $x_i$ there exists a report $x_i'$, a profile of other agent reports $\hat{\boldsymbol{x}}_{-i}$, and a profile of other agent locations $\boldsymbol{x}_{-i}$ such that \begin{align}\label{equation: sp translation} u_i^*(M(x_i', \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, n)>u_i^*( M(x_i, \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, n). \end{align} Let $s'=M(x_i', \hat{\boldsymbol{x}}_{-i})$ and $s=M(x_i, \hat{\boldsymbol{x}}_{-i})$. When $k=n$ all agents are served and so (\ref{equation: sp translation}) simplifies to \begin{align}\label{Equation: k<n implies n} 1-d(s', x_i)>1-d(s, x_i). \end{align} Now we consider the same profile of reports but for an arbitrary $k<n$. Furthermore, suppose all agents have location equal to $x_i$ and agent $i$ has highest priority under the tie-breaking rule ($\triangleright$). The mechanism output is independent of the agents' true locations and so we still attain $M(x_i', \hat{\boldsymbol{x}}_{-i})=s'$ and $M(x_i, \hat{\boldsymbol{x}}_{-i})=s$. Furthermore, since $i$ has highest priority (recall that the priority is distance-based, but in this instance all agents are equidistant from every facility location) they are always served, for every facility location. In particular, the utility from reporting truthfully is $1-d(s, x_i)$ and from misreporting is $1-d(s', x_i)$ -- the latter provides strictly higher utility, as per (\ref{Equation: k<n implies n}). We conclude that the mechanism is not DIC\xspace, and since $k<n$ was chosen arbitrarily this holds for all $k<n$. The final statement in the proposition was shown in Example~\ref{Example: DIC hard}. \end{proof} We now prove our third and final proposition, which completes the characterization result. Proposition~\ref{theorem: capacitated SP equivalence} says that if a mechanism is DIC\xspace for some $k<n$ then it is a GMM.
\begin{proposition}\label{theorem: capacitated SP equivalence} If a mechanism $M$ is DIC\xspace for some $k<n$, then it is a GMM. \end{proposition} \begin{proof} Let $M$ be a mechanism that is DIC\xspace for some $k<n$. First, consider an instance where an arbitrary agent $i$ has location $x_i$, and the other agents report $\hat{\boldsymbol{x}}_{-i}$. If $i$ reports truthfully the mechanism outputs some location that we denote by $s$, i.e., \begin{align}\label{Equation: truthful report DIC capacatited leads to GMM s} s:=M(x_i, \hat{\boldsymbol{x}}_{-i}). \end{align} If $s=x_i$ then consider an alternate location and profile of other agents' reports such that the equality does not hold. If no such location and report profile exists then the mechanism output always coincides with agent $i$'s report; that is, the mechanism is the agent $i$ dictatorship mechanism, which is a GMM. Now suppose $s\neq x_i$, and without loss of generality assume $s<x_i$. By assumption $M$ is DIC\xspace for some $k<n$, and so it must be that for all $x_i'$ \begin{align}\label{equation: sp translationx1} u_i^*( s, \boldsymbol{x}, k)\ge u_i^*(M(x_i', \hat{\boldsymbol{x}}_{-i}), \boldsymbol{x}, k), \end{align} where $\boldsymbol{x}$ denotes the location profile of all agents. We now show that deviations by agent $i$ satisfy the uncompromising property, i.e., that $M(x_i', \hat{\boldsymbol{x}}_{-i})=s$ for any $x_i'\ge s$. To do so, we analyze different cases, sequentially refining the possible values of $M(x_i', \hat{\boldsymbol{x}}_{-i})$ and deriving contradictions, to eventually conclude that $M(x_i', \hat{\boldsymbol{x}}_{-i})=s$.\\ \underline{Case 1:} Suppose all other agents have location $s$. When agent $i$ truthfully reports $x_i$ the facility location is $s$ and they attain zero utility, since the $k\le n-1$ agents served are all located at $s$.
Now consider some report $x_i' \ge s$, leading to facility location $$s_{x_i'}:=M(x_i', \hat{\boldsymbol{x}}_{-i}).$$ If $s_{x_i'}\in (\frac{s+x_i}{2}, 1]$ for some $x_i'\ge s$ we attain a contradiction: agent $i$ would be strictly closest to the facility, hence served, and would attain strictly positive utility -- more than the zero utility from truthful reporting. We conclude that $$s_{x_i'}\in [0,s)\cup \{s\}\cup (s, \tfrac{s+x_i}{2}] \qquad \text{ for all } x_i'\ge s.$$ \underline{Case 2:} Suppose all other agents have location $1$, noting that $s<x_i\le 1$. In the event that $x_i=1$ (in which case all agents are equidistant from every facility location), assume agent $i$ has the highest priority in the tie-breaking rule ($\triangleright$). When agent $i$ truthfully reports their location they are served and attain utility $1-d(s, x_i)$. To avoid a contradiction of (\ref{equation: sp translationx1}), it must be that $s_{x_i'}\le s$: any $s_{x_i'}\in (s, \frac{s+x_i}{2}]$ would be strictly closer to $x_i$ than $s$ is, with agent $i$ still served -- a profitable deviation. Thus, we conclude \begin{align*} s_{x_i'}&\in [0,s)\cup \{s\}&& \text{ for all } x_i'\ge s. \end{align*} For the sake of a contradiction suppose there exists some $x_i''\ge s$ such that \begin{align}\label{equation: sp translationx3} s_{x_i''}&\in [0,s). \end{align} Consider a new instance where agent $i$'s location is $y_i=x_i''$ (note that $x_i''\ge s$), all other agents have location $1$, and the other agents report $\hat{\boldsymbol{x}}_{-i}$ (the same profile of reports as per (\ref{equation: sp translationx1})). In the event that $y_i=x_i''=1$ (in which case all agents are equidistant from every facility location), assume agent $i$ has the highest priority in the tie-breaking rule ($\triangleright$). If agent $i$ reports their location $y_i$ the facility location is $s_{y_i}=s_{x_i''}<s$, as per (\ref{equation: sp translationx3}), and they attain utility $1-d(s_{y_i}, y_i)$. But if agent $i$ instead misreports to $y_i'=x_i$ then, as per (\ref{Equation: truthful report DIC capacatited leads to GMM s}), the facility location is $s$ where $$s_{y_i}< s \le y_i,$$ leading to utility $1-d(s, y_i)$.
This contradicts the mechanism being DIC\xspace, since $d(s, y_i)<d(s_{y_i}, y_i)$; that is, agent $i$, by reporting $y_i'$ instead of their true location $y_i$, attains strictly higher utility. We conclude that $s_{x_i'}=s$ for all $x_i'\ge s$. Thus, the mechanism is uncompromising and hence a GMM. \end{proof} \section{Approximation of DIC\xspace mechanisms}\label{Section: approxim of dic} Given the characterization result (Theorem~\ref{Corollary: equivalence results}) of the previous section, there is no distinction between the family of mechanisms that are DIC\xspace for some $k<n$ and the family of mechanisms that are DIC\xspace for all $k\le n$: both families are equal to the GMM family. Accordingly, we will now simply refer to a mechanism as being DIC\xspace. \subsection{Optimal mechanism is not DIC\xspace} We first show that, in general for $k<n$, the optimal mechanism is not DIC\xspace. Note that this result contrasts with the $k=n$ setting where the median mechanism is both optimal and DIC\xspace (Remark~\ref{Remark: special case}). \begin{theorem}\label{Theorem: optimal not dic} The optimal mechanism is DIC\xspace if and only if $k\in \{1,n\}$. \end{theorem} \begin{proof} The backward direction of the theorem statement is straightforward: If $k=1$ then, for any $i\in N$, the agent $i$ dictatorship mechanism, where the mechanism output always coincides with agent $i$'s report, is both optimal and DIC\xspace; we omit the details. If $k=n$ then the median mechanism is both optimal and DIC\xspace. This result has long been known and can be found in~\cite{Blac48,Moul80,PrTe13}. We now prove the forward direction using the contrapositive. Let $k\notin \{1,n\}$ and partition the agents into $\floor{n/k}$ groups of size $k$, denoted by $N_{t}$ for $t=1, 2, \ldots, \floor{n/k}$, and one group of size $n-k\floor{n/k}$, denoted by $N_{\floor{n/k}+1}$.
We now identify $\floor{n/k}+1$ locations in $[0,1]$: let $$y_t= \frac{t}{\floor{n/k}+1}\qquad \text{ for } t=1, 2, \ldots, \floor{n/k}+1.$$ Consider a scenario such that for each $t=1, 2, \ldots, \floor{n/k}+1$, all but one agent in $N_t$ are located at $y_t$ and a single agent is located at $y_t- t\, \varepsilon$ for some sufficiently small $\varepsilon>0$. Denote the single agent in $N_t$ located at $y_t- t\, \varepsilon$ by $i_t$. In this scenario it is immediate that the optimal welfare is attained by locating the facility at $y_1$, leading to a social welfare of $k-\varepsilon$, with agent $i_1$ attaining utility $1-\varepsilon$. Now consider a new scenario where agent $i_1$ is instead located at $y_1-3\varepsilon$; here the optimal mechanism must locate the facility at $y_2$, since serving group $N_1$ now yields welfare at most $k-3\varepsilon$ while serving group $N_2$ yields $k-2\varepsilon$. In this case agent $i_1$ attains utility zero. However, if agent $i_1$ misreports their location as $y_1-\varepsilon$ then (as shown above) the facility location will be $y_1$; agent $i_1$ is then served and attains strictly positive utility $1-3\varepsilon$. That is, the optimal mechanism is not DIC\xspace for $k\notin \{1,n\}$. \end{proof} Despite Theorem~\ref{Theorem: optimal not dic} stating a stark impossibility result, we note that, absent strategic manipulation by the agents, the optimal mechanism can be computed efficiently. Remark~\ref{Remark: Computational result} says that for any $k\le n$ the optimal mechanism's output and corresponding welfare can be computed in polynomial time. \begin{remark}\label{Remark: Computational result} The optimal facility location and welfare can be computed in polynomial time for any $k\le n$. \end{remark} We sketch an informal argument for Remark~\ref{Remark: Computational result}. Order the agents $i\in N$ such that $x_i\le x_{j}$ if and only if $i\le j$.
It is straightforward to show that an optimal solution has two features: (1) the facility serves a \emph{contiguous} set of $k$ agents, i.e., if agents $i$ and $i+2$ are served then so is agent $i+1$; and (2) the facility is located at the median of these $k$ served agents. Given these features, it is immediate that a polynomial-time procedure exists: simply compare the welfare produced by each of the at most $n$ sets of $k$ contiguous agents. \subsection{Lower bound on DIC\xspace approximation} Utilizing the characterization of DIC\xspace mechanisms via the family of GMMs, we provide a lower bound on the approximation ratio of every DIC\xspace mechanism. Theorem~\ref{theorem: uncompromising at best 2} shows that a DIC\xspace mechanism provides at best a $2\frac{k}{k+1}$-approximation when $k\le \ceil{(n-1)/2}$, and otherwise provides at best a $\max\{\frac{n-1}{k+1},1\}$-approximation. This lower bound on the approximation ratio is illustrated in Figure~\ref{figure: illustration DIC lower boundxxy}. \begin{theorem}\label{theorem: uncompromising at best 2} Let $n\ge 2$. A DIC\xspace mechanism is at best an $\alpha$-approximation with $\alpha=2\frac{k}{k+1}$ when $1\le k\le \ceil{(n-1)/2}$, and $\alpha=\max\{\frac{n-1}{k+1},1\}$ otherwise. \end{theorem} \begin{proof} Let $M$ be a DIC\xspace mechanism, and consider a scenario where all $n$ agents have distinct locations contained in the interval $I=(1/2-1/2\varepsilon, \ 1/2 +1/2\varepsilon)$ for some sufficiently small $\varepsilon>0$. Denote the profile of agent locations by $\boldsymbol{x}$, and the mechanism's corresponding output by $s=M(\boldsymbol{x})$. We consider two cases. \underline{Case 1:} Suppose $s\notin I$ and without loss of generality assume $s<1/2-1/2\varepsilon$. Now suppose that, sequentially, agents $i=1, 2, \ldots, n$ have their locations changed to $x_i=1$, and consider the sequence of facility locations $s_1, s_2, \ldots, s_n$ produced by the mechanism.
By the uncompromising property (satisfied by $M$, which is a GMM by Theorem~\ref{Corollary: equivalence results}) the location of the facility never changes from $s$. That is, $s_n=s$ despite every agent having location $1$. The optimal social welfare in this scenario is clearly $k$, however, the mechanism provides welfare of \begin{align*} k(1-d(s, 1))&=k\, s< k (1/2-1/2\varepsilon)\rightarrow k/2 &&\text{as $\varepsilon\rightarrow 0$.} \end{align*} Thus, the approximation ratio is at best $k/(k/2)=2$. \underline{Case 2:} Suppose $s\in I$ and without loss of generality assume $s\le 1/2$. Let $\lambda_1, \lambda_2$ be the number of agents with locations strictly below $s$ and strictly above $s$, respectively. Note that $\lambda_1+\lambda_2\in \{n-1, n\}$. Similar to Case 1, suppose the $\lambda_1$ agents instead had location $0$ and the $\lambda_2$ agents had location $1$ -- by the uncompromising property the facility location is unchanged. To attain the bound on the approximation ratio we consider two subcases, $k\le \ceil{(n-1)/2}$ and $k>\ceil{(n-1)/2}$. In the first subcase ($k\le \ceil{(n-1)/2}$): the optimal welfare is $k$, since either $\lambda_1\ge k$ or $\lambda_2\ge k$, meaning that $k$ agents can be served at either $0$ or $1$. The mechanism's welfare is at most \begin{align*} 1+(k-1) (1-d(s,0))&< 1+(k-1) (1/2+1/2\varepsilon)\rightarrow 1/2 + k/2&&\text{as $\varepsilon\rightarrow 0$.} \end{align*} Thus, the approximation ratio is at best $k/(1/2+k/2)=2 \, k/(k+1)$. In the second subcase ($k> \ceil{(n-1)/2}$): the optimal welfare is at least $\ceil{(n-1)/2}$, attained when the facility serves either the $\lambda_1$ or the $\lambda_2$ agents (whichever group is larger) from location $0$ or $1$.
The mechanism's welfare is at most \begin{align*} 1+ (k-1)(1-d(0, s))&=k-(k-1)\,s<k-(k-1)(1/2-1/2\varepsilon)\rightarrow k/2+1/2&&\text{as $\varepsilon\rightarrow 0$.} \end{align*} Thus, the approximation ratio is at best $\ceil{(n-1)/2}/(k/2+1/2)$, and \begin{align*} \ceil{(n-1)/2}/(k/2+1/2)&\ge \frac{(n-1)/2}{(k+1)/2}= \frac{n-1}{k+1}. \end{align*} Furthermore, since $k> (n-1)/2$ it follows that $ \frac{n-1}{k+1}<2$. Of course, this bound is only meaningful when $\frac{n-1}{k+1} >1$. We conclude that when $k\le \ceil{(n-1)/2}$ the approximation ratio is at best $2\frac{k}{k+1}$ and otherwise is at best $\max\{\frac{n-1}{k+1},1\}$. \end{proof} \subsection{Optimized approximation ratio for DIC\xspace mechanisms} We now analyze the performance of the median mechanism for general $k\le n$. In instances where $k\in\{1,n\}$, the median mechanism is both optimal and DIC\xspace (Theorem~\ref{Theorem: optimal not dic}). Furthermore, the median mechanism is DIC\xspace for all $k\le n$ since it is a GMM (Theorem~\ref{Corollary: equivalence results}). Theorem~\ref{Theorem: median performance} says that the median mechanism is an $\alpha$-approximation where $\alpha=2\frac{k}{k+1}$ when $k\le \floor{(n+1)/2}$, and $\alpha=\min\{2\frac{k}{k+1}, 1+2\frac{n-k+1}{3k-2n-2}\}$ otherwise. In particular, this means that the median mechanism is optimal among DIC\xspace mechanisms for $k\le \floor{(n-1)/2}$, since its approximation ratio matches the lower bound found in Theorem~\ref{theorem: uncompromising at best 2}. These approximation results are illustrated in Figure~\ref{figure: illustration DIC lower boundxxy}. \begin{theorem}\label{Theorem: median performance} Let $n\ge 5$. The median mechanism is an $\alpha$-approximation with $\alpha=2\frac{k}{k+1}$ for $k\le \floor{(n+1)/2}$, and $\alpha=\min\{2\frac{k}{k+1}, 1+2\frac{n-k+1}{3k-2n-2}\}$ otherwise.% \end{theorem} \begin{proof} Let $n\ge 5$.
Throughout the proof let $i_m$ denote the agent with the median location (chosen arbitrarily if multiple such agents exist), and let $s_m$ denote the median location. The median mechanism provides welfare \begin{align*} \Pi_M(\boldsymbol{x}, k)=\max_{N' \in N_{k}} \sum_{i\in N'} (1-d(s_m, x_i))=1+\max_{N' \in N_{k-1, i_m}} \sum_{i\in N'} (1-d(s_m, x_i)), \end{align*} where $N_k$ is the set of all $k$-sized subsets of $N$ and $N_{k-1, i_m}$ is the set of all $(k-1)$-sized subsets of $N\backslash \{i_m\}$. This follows since the subset of agents served is always the $k$ closest to the facility location; hence, given a facility location, the served subset is welfare maximizing. Furthermore, the median location coincides with at least one agent's location, namely that of agent $i_m$.\\ First, we provide an upper bound on the approximation ratio for all $k$. The median mechanism locates the facility at the $\floor{(n+1)/2}$-th smallest location, and hence there are $\floor{(n+1)/2}-1$ agents with locations (weakly) below and $\ceil{(n+1)/2}-1$ with locations (weakly) above. A lower bound on the median mechanism's welfare is attained when the agents below and above the median location are located at $0$ and $1$, respectively. Thus, $$\Pi_M(\boldsymbol{x}, k)\ge 1+ (k-1)\max\{1-d(s_m, 0),\ 1-d(s_m,1)\},$$ and since either $d(s_m,0)\le 1/2$ or $d(s_m, 1)\le 1/2$ it follows that $\Pi_M(\boldsymbol{x}, k)\ge (k+1)/2$. This leads to an upper bound on the approximation ratio of $k/((k+1)/2)=2\frac{k}{k+1}$ for all $k$, since the optimal welfare is always bounded above by $k$.\\ Now we attain a tighter upper bound for certain values of $k$. To do so, we bound the median mechanism's welfare using the optimal welfare. Let $s^*$ be the location of the facility under the optimal mechanism, let $N_m^*$ denote the set of $k$ agents served under the median mechanism, and let $N^*$ denote the set of $k$ agents served under the optimal mechanism.
We have \begin{align*} \Pi_M(\boldsymbol{x}, k)&\ge \sum_{i\in N^*} \big(1-d(s_m, x_i)\big)\\ & =\sum_{i\in N^*} \Big(1-d(s_m, x_i) -d(s^*, x_i) +d(s^*, x_i) \Big)\\ &=\Pi^*(\boldsymbol{x}, k)-\sum_{i\in N^*} \Big(d(s_m, x_i)-d(s^*, x_i)\Big). \end{align*} Clearly, the lower bound is smallest when $s_m\neq s^*$; without loss of generality assume that $s_m<s^*$. Let $N_1^*, N_2^*$ be a partition of $N^*$ such that $|N_1^*|, |N_2^*|\le \floor{(n+1)/2}$, all agents in $N_1^*$ have locations in $[0,s_m]$, and all agents in $N_2^*$ have locations in $[s_m, 1]$. Such a partition of $N^*$ exists since the location $s_m$ coincides with the $\floor{(n+1)/2}$-th smallest location. Using this partition we further bound the median mechanism's welfare: \begin{align*} \Pi_M(\boldsymbol{x}, k)&\ge \Pi^*(\boldsymbol{x}, k)-\sum_{i\in N_1^*} \big(d(s_m, x_i)-d(s^*, x_i)\big)-\sum_{i\in N_2^*} \big(d(s_m, x_i)-d(s^*, x_i)\big)\\ &\ge \Pi^*(\boldsymbol{x}, k)- |N_1^*| \max_{x\in [0, s_m]} \big(d(s_m, x)-d(s^*, x)\big)-|N_2^*| \max_{x\in [s_m,1]}\big(d(s_m, x)-d(s^*, x)\big)\\ &\ge \Pi^*(\boldsymbol{x}, k)-|N_1^*| (s_m-s^*)-|N_2^*| (s^*-s_m)\\ &= \Pi^*(\boldsymbol{x}, k)-(|N_2^*|-|N_1^*|) (s^*-s_m)\\ &\ge \Pi^*(\boldsymbol{x}, k)-\max\{|N_2^*|-|N_1^*|,\,0\}, \end{align*} where the third line uses that $d(s_m, x)-d(s^*, x)=s_m-s^*$ for all $x\in[0,s_m]$ and $d(s_m, x)-d(s^*, x)\le s^*-s_m$ for all $x\in[s_m,1]$, and the final line uses $0<s^*-s_m\le 1$. We now attain our lower bound by considering the maximum value of $|N_2^*|-|N_1^*|$. For $k\le \floor{(n+1)/2}$, this difference can only be guaranteed to be no larger than $k$ -- leading to a trivial lower bound of zero on $\Pi_M(\boldsymbol{x}, k)$.
However, for $k>\floor{(n+1)/2}$ we attain a more useful bound by noting that $$|N_2^*|-|N_1^*| \le \floor{(n+1)/2}-(k-\floor{(n+1)/2})=2\floor{(n+1)/2}-k\le n+1-k.$$ This leads to an approximation-ratio upper bound of $$\max_{\boldsymbol{x}\in \prod_{i=1}^n X}\Bigg\{\frac{ \Pi^*(\boldsymbol{x}, k)}{ \Pi^*(\boldsymbol{x}, k)-n-1+k}\Bigg\}=\max_{\boldsymbol{x}\in \prod_{i=1}^n X}\Bigg\{1+\frac{ n+1-k}{ \Pi^*(\boldsymbol{x}, k)-n-1+k}\Bigg\}.$$ Furthermore, for any instance $\Pi^*(\boldsymbol{x}, k)\ge k/2$, since at least this much welfare is attained by locating the facility at $s=1/2$. Thus, an upper bound on the approximation ratio is \begin{align*}1+\frac{ n+1-k}{ k/2-n-1+k}=1+2\frac{ n+1-k}{ 3k-2n-2}. \end{align*} \end{proof} \section{Extension: Location-Allocation Mechanisms}\label{Section: Excludable} In this section we consider an extension of our framework where the mechanism designer is able to dictate which agents are served by the facility. Note that this extension introduces an underlying assumption that the facility is excludable. In practice, a designer may be able to dictate which agents are served by issuing permits or, when costs are not prohibitive, checking the identities of agents attempting to benefit from the facility. Previously, a mechanism $M \ : \prod_{i\in N} X\rightarrow X$ was defined as a function mapping a profile of locations to a single facility location. In our extension, a mechanism not only locates the facility but also chooses a subset of at most $k$ agents who may be served by the facility. We denote these extended mechanisms by $$M_A \ : \prod_{i\in N} X\rightarrow X \times N_k,$$ where $N_k=\{A\subseteq N \ : 0<|A|\le k\}$. We call these mechanisms \emph{location-allocation} mechanisms, to distinguish them from the (location-only) mechanisms considered in earlier sections of the present paper.
The output of the mechanism is a pair $(s,A)\in X\times N_k$ where $s\in X$ denotes the facility location and $A\in N_k$ denotes the subset of agents allocated to the facility. Abusing notation slightly, we will denote the mechanism output from a location profile $\boldsymbol{x}$ by $s_{\boldsymbol{x}}$ and $A_{\boldsymbol{x}}$, where $M_A (\boldsymbol{x})=\big(s_{\boldsymbol{x}},\ A_{\boldsymbol{x}}\big)$. An agent $i\in A$ is guaranteed to be served by the facility if they so choose, whilst an agent $i\notin A$ is never served. We omit the details, but it is immediate that the modified subgame $\Gamma_{\boldsymbol{x}}(s, k, A)$ has an essentially unique ex-post Nash equilibrium in which all agents $i\in A$ are served by the facility and the remaining agents are not. Thus, we assume that agent $i$ reports their location to the mechanism designer with the understanding that they will be served by the facility if and only if $i\in A$, as per the ex-post Nash equilibrium. Note that the strategyproofness concept, DIC\xspace, used in this section still coincides with the concept used in the earlier sections, albeit with the modified subgame explained above. We first remark that the revelation principle~\cite{Gibb73} does not apply. A location-only mechanism, based on the profile of agent reports $\hat{\boldsymbol{x}}$, outputs a facility location $s$ -- which depends on $\hat{\boldsymbol{x}}$ -- and a subset of $k$ agents, $A\subseteq N$, is then allocated to the facility via the ex-post Nash equilibrium -- this subset depends on the agents' true locations $\boldsymbol{x}$ and not the reports $\hat{\boldsymbol{x}}$. In contrast, a location-allocation mechanism outputs both a facility location and an allocation of at most $k$ agents to the facility depending on the agent reports $\hat{\boldsymbol{x}}$, and not the true locations $\boldsymbol{x}$.
Thus, an agent misreporting their location -- in a way that does not affect the facility location -- will never affect whether or not they are served by the facility under a location-only mechanism. However, under a location-allocation mechanism the agent may potentially benefit from the misreport if they are now allocated to the facility by the mechanism. We now show that no `reasonable' location-allocation mechanism is DIC\xspace. In particular, we enforce only one criterion, which is a weak form of \emph{anonymity}. Informally speaking, we require that the location-allocation mechanism allocates agents to the facility independently of their labels whenever their reports are distinct from those of all other agents. The usual definition of anonymity is not directly applicable: with a deterministic mechanism, if all agents report identical locations, the mechanism must discriminate against at least $n-k$ agents, who will not be included in the allocation set $A$. To formally define our anonymity condition we first introduce the notion of an $i$-identifiable location profile. This is simply a profile where agent $i$ is uniquely identified by their report. \begin{definition}[$i$-identifiable location profile]Let $i\in N$. A location profile $\boldsymbol{x}$ is \emph{$i$-identifiable} if $$x_i\neq x_j \qquad \text{ for all } j\in N\backslash \{i\}.$$ \end{definition} We now define our anonymity condition, which we call {\em allocation-anonymous} since the condition applies only to the allocation set rather than the facility location. Informally speaking, the allocation-anonymous condition requires that, for every $i$-identifiable location profile, whether or not agent $i$ is allocated to the facility does not depend on $i$'s label. Given that allocation-anonymity only applies to $i$-identifiable location profiles, the condition is relatively weak.
\begin{definition}[Allocation-anonymous]\label{allocation anon definition} The mechanism $M_A$ is said to be \emph{allocation-anonymous} if for every distinct $i,j\in N$, every $i$-identifiable location profile $\boldsymbol{x}$, and the modified profile $\boldsymbol{x}'$ such that $x_\ell=x_\ell' $ for all $\ell\neq i,j$ and $$x_i'=x_j \qquad \text{ and }\qquad x_j'=x_i,$$ we have $$i\in A_{\boldsymbol{x}} \iff j\in A_{\boldsymbol{x}'},$$ where $A_{\boldsymbol{x}}$ is such that $M_A (\boldsymbol{x})=\big(s_{\boldsymbol{x}},\ A_{\boldsymbol{x}}\big)$. \end{definition} We now show that if we restrict our attention to allocation-anonymous mechanisms there is no DIC\xspace location-allocation mechanism. \begin{theorem}\label{theorem: impossibility of location-allocation} Let $k<n$. Then any location-allocation mechanism $M_A$ that is allocation-anonymous is not DIC\xspace. \end{theorem} \begin{proof} For the sake of contradiction, suppose that $M_A$ is a location-allocation mechanism that is both allocation-anonymous and DIC\xspace. First consider a location profile $\boldsymbol{x}$ where $x_i=3/4$ for all $i\in N$, and denote the output of the mechanism by $M_A(\boldsymbol{x})=(s,A)$. Let $i^*$ be some agent such that $i^*\in A$ and $j^*$ some agent such that $j^*\notin A$; the latter exists since $|A|\le k<n$. In this outcome agent $j^*$ attains utility zero, since $j^*\notin A$. Now consider another location profile $\boldsymbol{x}'$ such that $x_i'=3/4$ for all $i\in N\backslash \{j^*\}$ and $x_{j^*}'=1/2$. Note that the profile $\boldsymbol{x}'$ can be achieved via a unilateral deviation from the profile $\boldsymbol{x}$ by agent $j^*$. Denote the mechanism's output from this location profile by $M_A(\boldsymbol{x}')=(s', A')$. We consider two cases and derive a contradiction in each case.\\ Case 1: Suppose $j^*\in A'$, and suppose that $\boldsymbol{x}$ is the true location profile of all agents.
In this case, agent $j^*$ strictly profits by misreporting their location as $x_{j^*}'=1/2$: under the profile $\boldsymbol{x}'$ we have $j^*\in A'$, and their utility is now $1-d(s',\frac{3}{4})>0$ rather than zero. Thus, we have a contradiction. \\ Case 2: Suppose $j^*\notin A'$, and suppose that the agents have true locations $y_\ell=3/4$ for all $\ell \in N\backslash \{i^*\}$ and $y_{i^*}=1/2$. Denote the mechanism's output from location profile $\boldsymbol{y}$ by $M_A(\boldsymbol{y})=(s'', A'')$. Notice that $\boldsymbol{x}'$ is a $j^*$-identifiable location profile and the profile $\boldsymbol{y}$ satisfies the condition in Definition~\ref{allocation anon definition}, and so by the allocation-anonymous property we require that $$i^*\in A'' \iff j^*\in A'.$$ Thus, we infer that $i^*\notin A''$, and so $i^*$ attains zero utility under the location profile $\boldsymbol{y}$. Now suppose agent $i^*$ unilaterally deviates and reports the location $y_{i^*}'=3/4$. In this case, the location profile coincides with the profile $\boldsymbol{x}$ where $x_\ell=3/4$ for all $\ell\in N$. But recall that $M_A(\boldsymbol{x})=(s,A)$ and that $i^*$ was taken to be an agent with $i^*\in A$. Thus, with this unilateral misreport, agent $i^*$ is now served and attains a strictly positive utility of $1-d(s, y_{i^*})>0$. This is a profitable deviation and contradicts our assumption that the mechanism $M_A$ is DIC\xspace. We conclude that there is no location-allocation mechanism that is both allocation-anonymous and DIC\xspace. \end{proof} The above impossibility result means that the extensive-form approach taken in the main body of this paper is crucial for obtaining DIC\xspace mechanisms that are non-dictatorial. The use of an extensive-form game and the corresponding ex-post Nash equilibria to decide the allocation of agents to the facility reduces the incentive compatibility constraints faced by the mechanism designer.
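The two-case deviation argument can be checked numerically on a concrete mechanism. The sketch below instantiates a simple hypothetical location-allocation mechanism -- facility at the median report, serving the $k$ agents closest to the facility with ties broken by agent index -- and reproduces the profitable misreport of Case 2 for $n=3$, $k=2$. The mechanism and tie-breaking rule are illustrative assumptions, not the paper's construction; the utility form $1-d(s,x_i)$ follows the proof.

```python
# Illustrative check of the impossibility argument: a hypothetical
# location-allocation mechanism (median facility location; serve the k agents
# closest to the facility, ties broken by lower index) admits a profitable
# unilateral misreport, so it is not DIC.

def mechanism(reports, k):
    """Hypothetical mechanism: facility at the median report; allocate the
    k closest agents, breaking distance ties by agent index."""
    s = sorted(reports)[len(reports) // 2]  # median location
    order = sorted(range(len(reports)), key=lambda i: (abs(reports[i] - s), i))
    return s, set(order[:k])

def utility(i, true_loc, s, A):
    """Agents value being served at 1 minus distance, and 0 if unserved."""
    return 1 - abs(s - true_loc) if i in A else 0.0

n, k = 3, 2
# Case 2 of the proof: true locations y with y_{i*} = 1/2, all others 3/4.
y = [0.5, 0.75, 0.75]                 # agent 0 plays the role of i*
s2, A2 = mechanism(y, k)
truthful = utility(0, y[0], s2, A2)   # i* is not served -> utility 0

# i* misreports 3/4, recreating the all-3/4 profile x, in which i* is served.
s1, A1 = mechanism([0.75, 0.75, 0.75], k)
deviating = utility(0, y[0], s1, A1)  # served at s = 3/4 -> 1 - 1/4

print(truthful, deviating)            # 0.0 0.75 -> profitable deviation
```

The deviation yields utility $3/4$ instead of $0$, so this mechanism violates DIC, consistent with the theorem.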
Furthermore, the result suggests that the excludability of the facility presents a greater challenge for incentive compatibility than rivalry. \section{Discussion and Conclusion}\label{section: discussion and conlc} We now conclude the paper with a brief discussion of future research directions. \underline{Extensions to multiple facilities:} In the present paper we focused on the case of a single capacity constrained facility. Extending the capacity constraint to multiple facilities presents a number of challenges. Firstly, the subgame induced by a profile of facility locations will admit multiple equilibria that are neither welfare- nor utility-equivalent. Furthermore, even when ignoring the multiplicity-of-equilibria issues, the mechanism design problem is drastically more complicated -- as is the algorithmic problem of finding the optimal facility locations (see Brimberg et al.~\cite{BKE-C+01}). A recent contribution by Golowich, Narasimhan and Parkes~\cite{GNP18} explores the mechanism design problem for multiple facilities without capacity constraints. \underline{Weakening DIC\xspace:} A natural direction to consider is weakening the strategyproofness concept (DIC\xspace) that we use in the present paper. The DIC\xspace requirement is very strong: agents must attain maximal ex-post utility from reporting their location truthfully, no matter what the other agents report and regardless of their true locations. The weaker notion of \emph{ex-post Incentive Compatibility (IC)} may be interesting to explore for both characterization and performance results. This notion requires that agents attain maximal ex-post utility from reporting their location truthfully, no matter the other agents' true locations, conditional on the other agents reporting truthfully. It is straightforward to construct IC mechanisms that out-perform the median mechanism for certain parameter ranges.
\textbf{Conclusion:} In this paper we initiated the study of the capacity constrained facility location problem from a mechanism design perspective. We formalized a model that allows the subset of served agents to be endogenously derived from equilibrium outcomes. Our main contribution is a complete characterization of all DIC\xspace mechanisms via the family of GMM mechanisms. This characterization also provides a novel perspective on an open problem regarding GMM mechanisms, posed in~\cite{BoJo83}. Our second contribution is an analysis of the performance of DIC\xspace mechanisms with respect to social welfare -- where we also show that the well-known median mechanism is optimal among DIC\xspace mechanisms for certain parameter ranges. Finally, we show that extending the space of mechanisms to allow the mechanism to allocate agents to the facility leads to a stark impossibility result. Namely, there is no allocation-anonymous DIC\xspace mechanism which both locates the facility and stipulates the subset of agents to be served. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:introduction} Chip manufacturing has become a complex and costly process in which, more often than not, third-party facilities are involved. As a result, protecting intellectual property (IP) as well as ensuring trust in the chips becomes challenging. The IARPA agency proposed \emph{split manufacturing (SM)} as a protection technique to ward off threats like IP piracy, unauthorized overproduction, and insertion of hardware Trojans~\cite{mccants11}. In the simplest embodiment of SM, the front-end-of-line (FEOL) is handled by a high-end, competitive off-shore fab which is potentially \emph{untrusted}, while the back-end-of-line (BEOL) is manufactured subsequently at a low-end, \emph{trusted} facility (Fig.~\ref{fig:SM_concept}). Hill \emph{et al.\ }\cite{hill13} successfully demonstrated the viability of SM by fabricating a 1.3 million-transistor asynchronous FPGA. Further studies also bear testament to the applicability of SM~\cite{vaidyanathan14_2,bi2015beyond,vaidyanathan14}. However, the overall acceptance of SM remains behind expectations so far, mainly due to concerns about cost. \begin{figure}[tb] \centering \includegraphics[width=.95\columnwidth]{figures/concept_SM.pdf} \caption{Concept of split manufacturing, i.e., the separation of a layout into the FEOL and BEOL parts. Note the different pitches across the metal layers. As the FEOL part is outsourced, it may require additional protection (such as placement perturbation or lifting of wires) against fab-based adversaries. \label{fig:SM_concept} } \end{figure} The protection offered by SM is based on the fact that the FEOL fab does not have access to the complete design, and an attacker \emph{may} thus be hindered from malicious activities. The threat models for SM~\cite{garg2017split} are accordingly focused on FEOL-based adversaries which seek to either ($i$) retrieve the design and/or its IP, or ($ii$) insert hardware Trojans. Some studies also consider both at the same time~\cite{xiao15,glsvlsi2017}.
Here, we address ($i$). Prior art suggests splitting after M1, as such a scenario forces an attacker to tackle a ``vast sea of gates'' with only a few transistor-level interconnects provided along~\cite{vaidyanathan14_2}. Although splitting after M1 arguably provides the best protection, it also necessitates a high-end BEOL fab for trusted fabrication of all remaining metal layers, including the lower layers with very small pitches. Since this requirement may be considered too costly, some studies propose to split after M4~\cite{xiao15,magana16,magana17}.\footnote{We advocate the terminology ``to split after'' instead of the commonly used ``to split at.'' For example, ``to split at M2'' leaves it ambiguous whether M2 is still within the FEOL or already in the BEOL. Further, the same uncertainty applies to the vias of V12 and V23, i.e., those between M1/M2 and M2/M3, respectively. Our definition for ``to split after M2'' is that M2 and V12 are still in the FEOL, while the vias of V23 are already in the BEOL.} However, doing so can undermine security by revealing more structural connectivity information to an attacker~\cite{rajendran13_split,wang16_sm,sengupta17}. The key challenge for SM is thus: \emph{how to render split manufacturing practical regarding both security and cost?} Here, we address this challenge with a secure and effective approach for SM. Our key principle is to lift wires to the BEOL in a controlled and concerted manner, considering both cost and security. Our work can be summarized as follows: \begin{itemize*} \item Initially, we revisit the cost-security trade-offs for SM. We explore the prospects of wire lifting and find that naive lifting to higher metal layers can improve the security, albeit at high layout cost. Thus, we proclaim the need for cost- and security-aware, concerted lifting schemes. \item We put forward multiple strategies to select and lift nets.
The key ideas to achieve strong protection are ($i$)~to increase the number of protected/lifted nets and ($ii$)~to dissolve hints of physical proximity for those nets. \item Based on our strategies, we propose a method for the \emph{concerted lifting of wires} with controllable impact on power, performance, and area (PPA). Since we lift wires to higher metal layers (M6, without loss of generality), our method also helps to lower the commercial cost of SM. \item For the actual layout-level lifting, we design custom ``elevating cells.'' Unlike the prior art, our techniques allow lifting and routing wires in a controlled manner in the BEOL. \item We promote a new metric, \emph{Percentage of Netlist Recovery (PNR)}, which quantifies the resilience offered by any SM protection scheme against varyingly effective attacks. \item We conduct a thorough evaluation of layout cost and security on finalized layouts for various benchmarks, including the large-scale \emph{IBM superblue} suite. We contrast the superior resilience of our layouts with prior protection schemes, and make our layouts publicly available, along with the library definition for elevating cells~\cite{webinterface}. \end{itemize*} \section{Background and Motivation for Our Work} \label{sec:background_motivation} \subsection{On Prior Studies and Some Limitations} \textbf{Attack Schemes:} Naive SM (i.e., splitting a layout as is) likely fails to avert skillful attackers. That is because physical design tools place gates that are to be connected as close as possible, subject to available routing resources and other constraints. Rajendran \emph{et al.\ }\cite{rajendran13_split} introduced the concept of the \emph{proximity attack}, where this insight is exploited to infer undisclosed interconnects. More recently, Wang \emph{et al.\ }\cite{wang16_sm} proposed a network-flow-based attack which utilizes further hints such as the direction of dangling wires and constraints on both load capacitances and delays.
Maga\~{n}a \emph{et al.\ }\cite{magana16,magana17} utilized routing-based proximity in conjunction with placement-centric proximity. \textbf{Protection Schemes:} Various techniques have been put forward to protect SM-based designs from proximity attacks. Swapping of block pins was proposed by Rajendran \emph{et al.\ }\cite{rajendran13_split} to obtain an unbiased \emph{Hamming distance} of 50\% between the outputs of the original netlist and the outputs of the netlist restored by an attacker. Wang \emph{et al.\ }\cite{wang16_sm} proposed gate-level placement perturbation within an optimization framework, to maximize resilience and minimize wirelength overhead at the same time. Sengupta \emph{et al.\ }\cite{sengupta17} also pursued various placement perturbation techniques, along with a discussion on information leakage for SM. Wang \emph{et al.\ }\cite{wang17} proposed a routing-based protection scheme applying wire lifting, deliberate re-routing, and VLSI test principles, all to tailor the Hamming distance towards 50\%. Maga\~{n}a \emph{et al.\ }\cite{magana16,magana17} advocated inserting routing blockages to lift wires and, thus, to mitigate routing-centric attacks as those proposed in their study. Besides those studies addressing proximity attacks, Imeson \emph{et al.\ }\cite{imeson13}, Li \emph{et al.\ }\cite{li18}, and Chen \emph{et al.\ }\cite{chen2016secure} focus on hardware Trojans. Patnaik \emph{et al.\ }\cite{patnaik17} pursue BEOL-centric and large-scale layout camouflaging; the authors note that their scheme is also promising in the context of split manufacturing. \textbf{Limitations of Protection Schemes:} The approach of Rajendran \emph{et al.\ }\cite{rajendran13_split} is only applicable to hierarchical designs. More importantly, pin swapping is rather limited in practice; on average, 87\% correct connections were reported in~\cite{rajendran13_split}. 
Placement-centric schemes would ideally (re-)arrange gates randomly, thereby ``dissolving'' any hint of spatial proximity. As this likely induces excessive PPA overheads~\cite{sengupta17,imeson13}, placement perturbation is typically applied more carefully~\cite{wang16_sm,sengupta17}. However, as we reveal in Sec.~\ref{sec:experiments}, overly restricted perturbation can provide only limited protection, especially for splitting after higher layers. Similar to placement-centric schemes seeking to limit PPA overheads, some routing-based schemes such as~\cite{wang17} also protect only a small subset of the design (a few nets). These techniques are further subject to available routing resources, and re-routing may be restricted to short local detours which can be easy to resolve for an advanced attacker. Besides, implicit re-routing by insertion of blockages~\cite{magana16,magana17} falls short of explicitly protecting selected nets and controlling the routing of wires. \subsection{On the Trade-Offs for Cost Versus Security} \label{sec:motivation} It is challenging to determine the most appropriate split layer, as such a decision has direct and typically opposing impact on security and cost. Recall that some prior art promoted splitting after lower metal layers. However, this comes at a high commercial cost for the trusted BEOL fab. In contrast, splitting after higher layers allows for large-pitch and low-end processing setups at the BEOL fab, thus reducing cost (but possibly undermining security). For example, considering the pitches for the 45nm node (Table \ref{tab:pitch}), one may prefer to split after M3 (or M6, or even M8) over splitting after M1.\footnote{Splitting after other layers is also possible, but considering cost and applicability we suggest that any split should occur just below the next larger pitch. This way, the BEOL fab has to manufacture only those larger pitches.} Further aspects promoting higher split layers are also discussed in~\cite{xiao15}.
\begin{table}[tb] \centering \scriptsize \setlength{\tabcolsep}{1.6mm} \caption{Pitch dimensions for the metal layers in the \emph{45nm} node~\cite{rajendran13_split}.} \vspace{-2pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Layer} & M1 & M2 & M3 & M4 & M5 & M6 & M7 & M8 & M9 & M10 \\ \hline \hline Pitch (nm) & 130 & 140 & 140 & 280 & 280 & 280 & 800 & 800 & 1600 & 1600 \\ \hline \end{tabular} \label{tab:pitch} \end{table} \begin{figure}[tb] \vspace{1em} \centering \includegraphics[width=\columnwidth]{figures/OPPs.pdf} \vspace{-8pt} \caption{{(a)~Conceptual illustration of a regular, unprotected layout. The red dots represent open pins, which would induce dangling wires once the layout is split at each respective layer. Note that the majority of nets are completed in lower layers, hence fewer open pins are observed for higher layers. (b)~Conceptual illustration of a layout protected by wire lifting. Here the majority of nets are completed in M7 (without loss of generality). Hence, any split below M7 induces many open pins to be tackled by an attacker.} \label{fig:SM_OPP} } \end{figure} When a net is cut across FEOL and BEOL by SM, at least two \emph{dangling wires} arise in the topmost layer of the FEOL.\footnote{The reverse is not necessarily true, i.e., not all dangling wires represent a cut net---dangling wires may also be used for obfuscation. Such wires are routed in the FEOL but remain open in the BEOL; see also Sec.~\ref{sec:concept}. Besides, the number of dangling wires depends both on the net's pin count and how/where exactly it is cut. See also Fig.~\ref{fig:concept1} for an illustrative example.} Dangling wires remain unconnected at one end; these open ends indicate the locations where the vias linking the FEOL and BEOL are to be manufactured (by the BEOL fab). We refer to those via locations as open pins (Fig.~\ref{fig:SM_OPP}).
Further, we define \emph{open pin pairs (OPPs)} as pairs $(p_d, p_s)$ where $p_d$ is connected to a driver and $p_s$ to at least one sink. The related routing is observable in the FEOL, but the true mapping of drivers to sinks is comprehensible only with the help of the BEOL. For an attacker operating at the FEOL, observing fewer OPPs directly translates to a reduced solution space and, thus, may lower her/his efforts for recovery of the protected design. In Fig.~\ref{fig:motivation}, we plot an attacker's success rate versus the OPP count for various split layers. There is a strong inverse correlation between the two across the layers, confirming that layouts split after higher layers are much easier to attack. That is because more and more nets are routed completely within the FEOL once we split after higher layers. Naturally, these FEOL-routed nets yield no OPPs and, hence, impose no efforts for the attack. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/motivation_final.pdf} \caption{Percentage of netlist recovery (PNR, see also Sec.~\ref{sec:metrics}) versus open pin pairs (OPPs), plotted as a function of the split layer. Note that the split layers are ordered from M10 to M1. The unprotected layouts are naively split as is and the attack is based on~\cite{wang16_sm}.} \label{fig:motivation} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/wirelifting.pdf} \caption{PNR versus OPPs, plotted as a function of randomly selected nets lifted to M6. The set of benchmarks and the legend are the same as in Fig.~\ref{fig:motivation}. The protected layouts are split after M3 and the attack is based on~\cite{wang16_sm}. The OPP baselines (normalized OPP count of 1.0) are derived from each respectively unprotected layout, i.e., where 0\% of all nets are lifted.
} \label{fig:PNR_naive_lifting} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/PPA_naive_lifting-eps-converted-to.pdf} \caption{PPA overheads for naive lifting of randomly selected nets to M6. The plot is based on the same benchmarks as in Fig.~\ref{fig:motivation}. The boxes span from the \nth{5} to the \nth{95} percentile, the whiskers represent the minimal and maximal values, and the black bars represent the medians, respectively. } \label{fig:motivation_PPA} \end{figure} One way to enforce many OPPs while splitting only after higher layers is \emph{wire lifting}, i.e., the deliberate routing of nets towards and within the BEOL (Figs.~\ref{fig:SM_OPP} and~\ref{fig:PNR_naive_lifting}). There is a common concern of overly large PPA cost for large-scale wire lifting~\cite{glsvlsi2017,imeson13}. We confirm this in Fig.~\ref{fig:motivation_PPA}, where we plot PPA overheads for \emph{naive lifting} of randomly selected nets.\footnote{We implement naive lifting by placing one ``elevating cell'' next to the driver; see Secs.~\ref{sec:concept} and~\ref{sec:methodology} for details on those cells and their use. Such naive lifting enforces routing at least to some degree above the split layer, thereby inducing OPPs and hampering an attacker's recovery rate (Fig.~\ref{fig:PNR_naive_lifting}). However, naive lifting cannot handle OPPs in a controlled manner.} Note that the die area is particularly impacted. That is because lifting more and more wires in an uncontrolled manner induces notable routing congestion which, in order to obtain DRC-clean layouts, can only be managed by enlarging the die outlines. Overall, there is a need for an \emph{SM scheme ensuring a large number of OPPs, while splitting after higher layers (at low commercial cost), but without inducing excessive PPA cost}. We believe that such a scheme will expedite the acceptance of SM in the industry. 
In this work, we propose such a scheme through the notion of \emph{concerted wire lifting}. \section{Strategies for Concerted Wire Lifting} \label{sec:concept} As motivated in Sec.~\ref{sec:background_motivation}-\ref{sec:motivation}, the number of OPPs in the FEOL should be as large as possible, but not at high cost---pertaining to both commercial and PPA cost. We tackle this problem with the help of our custom \emph{elevating cells (ECs)}. The key idea of routing nets through ECs is to establish pins in the metal layer of choice (above the split layer), thereby inducing OPPs for those nets. (See Sec.~\ref{sec:methodology} and Fig.~\ref{fig:ECs} for implementation details.) Next, we introduce our strategies for concerted wire lifting. They are based on exploratory but comprehensive layout-level experiments. These strategies outperform naive lifting as well as recent prior art regarding security, while inducing only a small PPA overhead (see Sec.~\ref{sec:experiments}). \textbf{Strategy 1, Lifting High-Fanout Nets:} We begin by lifting high-fanout nets (HiFONs) for two reasons: ($i$) any wrong connection made by an attacker propagates the error to multiple locations, and ($ii$) lifting HiFONs helps introduce many OPPs. We define nets with two or more sinks as HiFONs.\footnote{Although large fanouts may be subject to timing-driven optimization such as buffering or cloning~\cite{KLMH11}, we found that on average 20--30\% of all the nets in the benchmarks we consider have a fanout of at least 2. In any case, our techniques are generic and can be readily applied for any degree of fanout.} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/concept1_final.pdf} \caption{The impact of lifting high-fanout nets on the number of open pin pairs.} \label{fig:concept1} \end{figure} Consider Fig.~\ref{fig:concept1} as an example. Here, a HiFON is originally connecting to four gates/sinks.
Depending on how and where the HiFON is lifted, the attacker has different scenarios to cope with. In (a), only one OPP arises, which is trivial to attack/resolve. In (b), two OPPs are to be tackled, (A,B) and (A,C). Assuming that an attacker cannot tell how many sinks \emph{exactly} to consider,\footnote{While a skillful attacker may understand the driving strengths of any gate, she/he cannot easily resolve their original use given only the FEOL. Any high-strength driver may have to be reconnected either to many sinks nearby or to few sinks far away. Given that wire lifting is conducted across the whole layout, drivers and sinks of various nets will be ``spatially intermixed,'' notably increasing an attacker's efforts to map drivers to sinks correctly.} either one of the two OPPs or both OPPs at the same time are equally likely to represent the original net. Thus, the attacker has three options to consider. In (c), even up to 15 options arise; there are four OPPs (A,B), (A,C), (A,D), and (A,E), as well as 11 possible combinations of those OPPs. Naturally, once other nets are lifted as well, the set of OPPs scales up even further, in fact in a combinatorial manner. We lift all wires of any HiFON (Fig.~\ref{fig:concept1}(c)), to induce as many OPPs as possible. We do so by inserting separate ECs for the driver as well as for all the sinks. \textbf{Strategy 2, Controlling the Distances for OPPs:} Besides increasing the number of OPPs, it is also necessary to control the distances between their pins. For example in Fig.~\ref{fig:concept2}(a), only a short open remains in M5 for the lifted wire/net, motivating an attacker to reconnect that particular OPP. Such a scenario may arise for implicit wire lifting, e.g., as proposed in~\cite{magana16,magana17}. There, only the FEOL metal layers to avoid are declared, but the actual routing paths in the BEOL layers are not.
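The counting behind these examples can be sketched in a few lines: with one driver-side open pin and $m$ sink-side open pins, any nonempty subset of the $m$ sinks may constitute the true net, giving $2^m-1$ candidate reconnections per lifted net. Assuming, for simplicity, that each lifted net must be resolved independently, these counts multiply across nets; the fanout list below is hypothetical.

```python
# Candidate reconnections for one lifted net: the driver-side open pin may
# connect to any nonempty subset of the m sink-side open pins.
def options_per_net(m: int) -> int:
    return 2**m - 1

print(options_per_net(2))   # 2 OPPs -> 2**2 - 1 = 3 options
print(options_per_net(4))   # 4 OPPs -> 2**4 - 1 = 15 options

# Assuming lifted nets are resolved independently, the attacker's overall
# solution space grows multiplicatively with each additional lifted net.
from math import prod
lifted_fanouts = [4, 2, 3]  # hypothetical fanouts of three lifted HiFONs
print(prod(options_per_net(m) for m in lifted_fanouts))  # 15 * 3 * 7 = 315
```

Even this toy instance illustrates the combinatorial scaling: three lifted nets already yield hundreds of candidate netlists.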
{\setlength\textfloatsep{8pt} \begin{figure}[tb] \centering \includegraphics[width=0.80\columnwidth]{figures/concept2_final.pdf} \caption{Distances for open pin pairs (OPPs). In (a), only a short distance arises due to implicit wire lifting, whereas in (b) we precisely control the distance using two elevating cells (ECs).} \label{fig:concept2} \end{figure} }% In our method, we can control the distances for OPPs at will, simply by controlling the placement of the ECs (Fig.~\ref{fig:concept2}(b)). We place ECs close to the driver and the sink(s), thereby enlarging the distances and increasing an attacker's efforts. To mitigate any advanced attack, e.g., based on learning the distance distribution for OPPs while reverse-engineering other available chips, one may also place the ECs randomly within (or even beyond) the bounding boxes of the nets. \textbf{Strategy 3, Obfuscation of Short Nets:} Above we assumed that enlarging the distances of OPPs is practical and effective, which is straightforward for HiFONs (as well as for relatively long nets). For short nets, however, enlarging those distances requires some routing detours out of the net's bounding box. Furthermore, short nets may be easy for an attacker to identify and localize, based on the typically low driver strength. To tackle both issues, we design another EC (Figs.~\ref{fig:concept4} and~\ref{fig:ECs}(b)). This EC places two pins close to each other: one ``true'' pin is connected to the short net's driver, and the other ``dummy'' pin is connected to a randomly but carefully selected gate, representing a dummy driver. An attacker cannot easily distinguish these two drivers: ($i$) the dummy driver is selected such that no combinatorial loops would arise were the driver connected to the short net's sink(s), and ($ii$) we adapt both drivers' strength, also accounting for the routing detours, via ECO optimizations.
Besides obfuscation, this EC induces a \emph{dummy OPP}, which naturally increases the overall number of OPPs. Note that we insert only one EC for short nets, specifically between their real and dummy driver. We refrain from inserting another EC near the sink of short nets, as we observed that doing so contributes little to security but hampers routability. \begin{figure}[tb] \centering \includegraphics[width=0.92\columnwidth]{figures/concept4_new.pdf} \caption{We obfuscate short nets by connecting the real and dummy driver to another, dedicated type of EC. The dummy driver's wire connected with the EC is left dangling beyond the split layer. Note that the dummy driver is a real driver for another net; the related wires are not illustrated here. Both drivers' strengths are adapted such that they might seemingly connect to the sink of the obfuscated short net. The EC is placed between the real and the dummy driver, thereby increasing the OPP distances. } \label{fig:concept4} \end{figure} \section{Methodology} \label{sec:methodology} Next, we discuss our methodology (Fig.~\ref{fig:flow}), which is integrated with \emph{Cadence Innovus} using custom in-house scripts. Given an HDL netlist, we first synthesize, place, and route the design. The resulting layout is protected as follows. For each net we wish to lift, \emph{elevating cells (ECs)} are temporarily inserted next to the net's driver (as well as next to all the net's sinks for HiFONs and long nets). It is important to note that ECs do not impact the FEOL device layer; they are designed solely to elevate/lift a given net. Next, we perform ECO optimization and legalization based on customized scripts. Then, we re-route the design, remove the ECs, extract the RC information, and report the PPA numbers. In case the PPA budget allows for additional wire lifting, we continue iteratively. Finally, a DEF file split into FEOL/BEOL is exported for security analysis against proximity attacks.
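The iterative part of this flow can be summarized as a sketch. The routine names, the uniform batching, and the stand-in PPA cost model below are illustrative placeholders, not the actual Innovus scripting interface; only the control structure mirrors the described flow.

```python
# Schematic sketch of the iterative protection flow: lift nets in small
# batches, re-evaluate the PPA impact after each batch, and stop once the
# given budget would be exceeded. The cost callback stands in for the actual
# EC insertion, ECO optimization, re-routing, and RC extraction steps.
def protect_layout(nets_to_lift, ppa_budget, ppa_cost, lift_step=0.10):
    lifted, used = [], 0.0
    batch_size = max(1, int(len(nets_to_lift) * lift_step))
    for start in range(0, len(nets_to_lift), batch_size):
        batch = nets_to_lift[start:start + batch_size]
        cost = ppa_cost(lifted + batch)  # modeled overhead if batch is lifted
        if cost > ppa_budget:            # budget exhausted: stop lifting
            break
        lifted, used = lifted + batch, cost
    return lifted, used                  # next: export the split FEOL/BEOL DEF

# Toy usage with a linear stand-in cost model (1 unit of overhead per net).
nets = [f"net_{i}" for i in range(10)]
lifted, used = protect_layout(nets, ppa_budget=5.0,
                              ppa_cost=lambda ns: 1.0 * len(ns))
print(len(lifted), used)   # 5 5.0
```

In the real flow the cost callback would be an expensive layout iteration, which is why lifting proceeds in coarse steps rather than net by net.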
\begin{figure}[tb] \centering \includegraphics[width=.85\columnwidth]{figures/methodology.pdf} \caption{The flow of our protection scheme. \label{fig:flow} } \end{figure} \textbf{Strategy for Selecting Nets to Lift:} In general, we lift nets according to the strategies discussed in Sec.~\ref{sec:concept}. More specifically, considering the iterative flow outlined above, we take the following steps to determine all nets to be lifted. \begin{enumerate*} \item Given a ratio of nets to lift, we initially lift HiFONs and then long nets using Strategies 1 and 2. Here we prioritize HiFONs based on their fanout degree; large-fanout HiFONs are lifted first. Furthermore, it is easy to see that the longer a net, the more freedom we have for controlling its OPP distance(s), and the less likely it is for an attacker to reconnect that net successfully. Therefore, we prioritize nets not already lifted as HiFONs by their wirelength. \item We then lift short nets using Strategy 3, until a given PPA budget is utilized. We prioritize nets based on their wirelength---the shortest nets are selected first. That is because the shorter a net, the easier it is for an attacker to successfully reconnect. Since the additional wires required to connect with dummy drivers consume notable routing resources, we lift short nets in small steps of 10\% and iteratively monitor the PPA impact. \end{enumerate*} \begin{figure}[tb] \centering \includegraphics[width=.92\columnwidth]{figures/EC.pdf} \caption{Post-processed snapshots of our two types of ECs. Wires in M6 are in orange, wires/vias in M1 are in blue/red. In (a), the EC (dark grey) is seen overlapping an inverter (light grey, dotted). In (b), the EC is seen alone; this EC has two inputs (C, D), as required for obfuscation of short nets. Recall that the OPPs arise after the split layer, i.e., below M6---OPPs are not visible here.
}\label{fig:ECs} \end{figure} \textbf{Design of Elevating Cells:} As with any custom cell, our ECs are embedded in a library of choice. We make our EC implementation for the \emph{Nangate 45nm} library publicly available in~\cite{webinterface}. Fig.~\ref{fig:ECs} illustrates the two different types of ECs. The key properties of our ECs are discussed next. \vspace{-2pt} \begin{enumerate*} \item All I/O pins are set up in one metal layer. Since the pins must reside above the split layer to effectuate wire lifting, we implement different ECs as needed for various layers. \item The pin dimensions and offsets are chosen such that the pins can be placed onto the respective metal layer's tracks. This helps minimize the routing congestion. \item ECs may overlap with any other standard cell (Fig.~\ref{fig:ECs}(a)). That is because the latter have their pins exclusively in lower metal layers, whereas ECs neither impact those layers nor the FEOL device layer. \item Custom legalization scripts have been set up to prevent the pins of different ECs from overlapping with each other. \item Timing and power characteristics of a \emph{BUFX2} cell (buffer with driving strength 2) are leveraged for the ECs. A detailed library characterization is not required since ECs only translate to some interconnects in the BEOL. \item To enable proper ECO optimization, the ECs are set up for load annotation at design time. That is required to capture the capacitive load of ($i$) the wire running from the EC to the sink and ($ii$) the sink itself. Note that this annotation is also essential for obfuscating the dummy drivers' strength as outlined in Strategy 3 (Sec.~\ref{sec:concept}). \end{enumerate*} \section{Metrics for Layout Protection} \label{sec:metrics} Here we discuss metrics to gauge the resilience of layouts when accounting for FEOL-based attacks. First we review established metrics and then we introduce a novel metric. 
The \textbf{Hamming Distance (HD)} quantifies the average bit-level mismatch between the outputs of the original and the attacker's reconstructed design~\cite{rajendran13_split}. Note that the HD reveals the degree of functional mismatch, but not necessarily structural mismatches. (That is because any Boolean function can be represented by different gate-level designs.) Hence, the HD cannot adequately quantify the potential for gate-level IP theft. The \textbf{Output Error Rate (OER)} indicates the probability of any output bit being wrong while applying a possibly large set of inputs to the attacker's netlist~\cite{wang16_sm,wang17}. As this metric tends to approach 100\% for any imperfect attack, it does not reflect well on the degree and type of errors made by an attacker, but rather whether any error was made at all. Like the HD, it should not be used to quantify the gate-level resilience. The \textbf{Correct Connection Rate (CCR)} is the ratio of connections correctly inferred by an attacker over the number of protected nets. For example, if 20 out of 100 protected nets are correctly reconnected, the CCR is 20\%. Note that Wang \emph{et al.\ }\cite{wang17} defined an \emph{Incorrect Connection Rate (ICR)}, which is simply the complement of this metric (i.e., CCR $=$ 100\% $-$ ICR). Unlike the HD or OER, this metric can quantify the gate-level protection (or its failure). Our metric \textbf{Percentage of Netlist Recovery (PNR)} captures the ratio of correctly inferred connections over the total number of nets. It quantifies the structural similarity between the original netlist and the attacker's netlist. Thus, the PNR is more generic and comprehensive than the CCR, as it accounts for the entire netlist, not only for protected nets. Vice versa, the CCR can be considered a special case of the PNR. For unprotected layouts, both metrics shall be equal by definition. For example, consider again that an attacker reconstructs 20 out of 100 protected nets, out of 10,000 nets in total. 
Now consider further that an attacker can readily identify all nets completely routed in the FEOL. Assuming that 2,000 nets are routed in the FEOL, the PNR would be 20.2\%. For 6,000 nets routed in the FEOL, however, the PNR would be already 60.2\%---while the CCR remains 20\% for both cases. In short, the PNR quantifies ($i$) the overall potential of IP theft and ($ii$) the resilience of any SM protection scheme against varyingly effective attacks, and for varying split layers. \section{Experimental Investigation} \label{sec:experiments} Recall that we propose an SM scheme enabling a large number of OPPs while splitting after higher layers, and with controllable PPA overheads. Hence, we evaluate our scheme thoroughly regarding security as well as layout cost. \textbf{Setup for Layout Assessment:} Our techniques are implemented as custom procedures for \emph{Cadence Innovus 16.15}. Our procedures impose negligible runtime overheads. We use the \emph{Nangate 45nm Open Cell Library}~\cite{nangate11}; we utilize all ten metal layers. The PPA analysis has been carried out at 0.95V and 125$^{\circ}$C for the slow process corner with a default switching activity of 0.2. Timing results are obtained by \emph{Innovus} as well. Our ECs lift wires to M6 unless stated otherwise. \textbf{Setup for Security Analysis:} We empower an attacker with the FEOL layout and with the technology libraries. We do not assume a working chip being available---it is yet to be manufactured. We utilize the network-flow-based attack provided by Wang \emph{et al.\ }\cite{wang16_sm}. Other attacks such as those in~\cite{magana16,magana17} have not been available to us at the time of writing.\footnote{Given the focus on academic tools in~\cite{magana16,magana17}, we further assume that these attacks are not readily compatible with our industrial design flow.} Functional equivalence was validated using \emph{Synopsys Formality}. 
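The CCR/PNR arithmetic from the worked example in the metrics section above (20 of 100 protected nets reconnected, out of 10,000 nets in total) is simple enough to capture in a short sketch:

```python
def ccr(correct_protected, protected_total):
    """Correct Connection Rate: correct guesses over the protected nets only."""
    return 100.0 * correct_protected / protected_total

def pnr(correct_protected, feol_nets, total_nets):
    """Percentage of Netlist Recovery: all correctly inferred connections
    (nets trivially recovered because they are fully routed in the FEOL,
    plus correctly reconnected protected nets) over the total net count."""
    return 100.0 * (feol_nets + correct_protected) / total_nets

# Worked example from the text: 20/100 protected nets reconnected,
# 10,000 nets in total.
assert ccr(20, 100) == 20.0
assert pnr(20, 2_000, 10_000) == 20.2   # 2,000 nets fully in the FEOL
assert pnr(20, 6_000, 10_000) == 60.2   # 6,000 nets fully in the FEOL
```

As the two PNR values show, the CCR stays at 20\% in both cases while the PNR grows with the number of FEOL-routed nets available to the attacker.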
The OER and HD are calculated using \emph{Synopsys VCS} by applying 100,000 random input vectors. \textbf{Benchmarks:} We conduct our comprehensive experiments using 28 benchmarks in total, selected not only from the ``traditional'' suites (i.e., \emph{ISCAS-85}, \emph{MCNC}, and \emph{ITC-99}), but also from the large-scale \emph{IBM superblue} suite~\cite{viswanathan2011ispd}. For the latter, we leverage scripts from~\cite{kahng14} to generate LEF/DEF files, but we also use the \emph{Nangate 45nm} library~\cite{nangate11} while doing so. \textbf{Setup for Comparisons:} The unsplit but protected, full layouts of~\cite{wang16_sm, wang17} have been provided to us as DEF files. However, we were not made aware of ($i$) the intended split layer, ($ii$) the selection of protected nets, or ($iii$) the library files. As for ($i$), there are indications in the layouts that they have been tailored for splitting either after M3, M4, or M5. Hence, we calculate any comparative PNR values as average over those layers. As a result of ($ii$), we cannot verify the other metrics but simply quote them from the respective publications. Because of ($iii$) we cannot contrast PPA numbers. \textbf{Public Release:} We provide our EC implementation in~\cite{webinterface}, enabling others to protect their layouts likewise. Moreover, we provide our final layouts as reference cases as well in~\cite{webinterface}. \subsection{Security Analysis} \label{sec:security_analysis} \textbf{Increase in OPPs:} Recall that more OPPs help make proximity attacks more challenging, as corroborated by a reduction in PNR (Figs.~\ref{fig:motivation} and~\ref{fig:PNR_naive_lifting}). From our exploratory comparison of lifting strategies in Table~\ref{tb:open-pin-pairs} it is apparent that our strategies successfully increase the number of OPPs over both original layouts and layouts where naive wire lifting is employed. 
As it depends on the benchmark whether the lifting of HiFONs and long nets (Strategies 1 and 2) or short nets (Strategy 3) induces more OPPs, we suggest applying our strategies in conjunction, as proposed in Sec.~\ref{sec:methodology}. Next, we confirm the superior resilience of our strategies while evaluating the PNR. \begin{table}[tb] \centering \scriptsize \setlength{\tabcolsep}{1mm} \caption{Total number of OPPs while splitting after M5. For a fair comparison, here we lift, without loss of generality, 30\% of the nets for all benchmarks and strategies. } \vspace{-2pt} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Benchmark}} & \multicolumn{4}{|c|}{\textbf{OPPs}} \\ \cline{2-5} & \textbf{Original} & \textbf{Naive Lifting} & \textbf{Strategies 1 and 2} & \textbf{Strategy 3} \\ \hline \hline c432 & 6 & 67 & 86 & 103\\ \hline c880 & 8 & 116 & 170 & 204\\ \hline c1355 & 6 & 120 & 147 & 164\\ \hline c1908 & 13 & 142 & 219 & 269\\ \hline c2670 & 112 & 228 & 356 & 315\\ \hline c3540 & 53 & 386 & 576 & 458\\ \hline c5315 & 196 & 582 & 845 & 780\\ \hline c6288 & 38 & 1,235 & 1,590 & 1,630\\ \hline c7552 & 127 & 533 & 900 & 795\\ \hline \end{tabular} \label{tb:open-pin-pairs} \end{table} \textbf{On the Effectiveness of Our Scheme:} Fig.~\ref{fig:compare1} compares the PNR for ($i$) naive lifting, ($ii$) lifting using our Strategies 1 and 2, and ($iii$) lifting using our Strategy 3. For a fair and comprehensive comparison, as with the exploratory comparison of induced OPPs above, here we lift the same percentage of nets (i.e., 30\%) for all benchmarks. We derive the average PNR while splitting the layouts after M3, M4, and M5. We achieve an improvement (i.e., reduction of PNR) of 10--11\% for our strategies over naive lifting on average. Also here we observe that it depends on the benchmark which lifting technique is more effective. 
Thus, we apply our techniques in conjunction for all remaining experiments---this helps to lower PNR values even further (see below and Table~\ref{tab:metrics_compare}). \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/figure3-eps-converted-to.pdf} \caption{Comparison of PNR values for naive lifting and our proposed strategies. The attack is based on~\cite{wang16_sm}. The data reflects the average recovery/piracy rate for splitting layouts after M3, M4, and M5. } \label{fig:compare1} \vspace{-2pt} \vspace{-2pt} \end{figure} \textbf{Comparison with Prior Art:} Initially, as a baseline comparison to the most recent work of Wang \emph{et al.\ }\cite{wang17}, we protect the same number of nets as they do, but the actual selection of nets to protect/lift is based on our strategies. We achieve an average improvement of $\approx$24\% for the PNR here (Table~\ref{tb:comparison_with_JV}). \begin{table}[tb] \centering \scriptsize \setlength{\tabcolsep}{1mm} \caption{Comparison with~\cite{wang17} on average PNRs (over M3, M4, and M5) for the same number of nets selected for protection. The attack is based on~\cite{wang16_sm}.} \vspace{-2pt} \begin{tabular}{|c|c|c|c|} \hline \textbf{Benchmark} & \textbf{Protected Nets} & \textbf{PNR for~\cite{wang17}} & \textbf{PNR for Proposed Scheme} \\ \hline \hline c432 & 34 & 87.5 & 70.7\\ \hline c1355 & 74 & 84.8 & 60.1\\ \hline c1908 & 56 & 91.2 & 58.9\\ \hline c7552 & 66 & 93.9 & 70.2\\ \hline \end{tabular} \label{tb:comparison_with_JV} \end{table} In Table~\ref{tab:metrics_compare}, we contrast the schemes of~\cite{wang16_sm, wang17} and our regular scheme, where the scope for protection/wire lifting depends on the allocated PPA budgets (see also Subsec.~\ref{sec:PPA_analysis}). Naturally, original layouts without any protection are most vulnerable, and an attacker recovers 96\% of the netlist on average. 
Constrained placement perturbation as proposed in~\cite{wang16_sm} provides only a marginal improvement, reducing the average PNR to 95\%. That is because routing eventually compensates for any gate-level perturbation, with small displacements typically being re-routed in lower metal layers (which may be readily available to an attacker). The routing-centric scheme of~\cite{wang17} can lower the PNR to 88.5\%. In contrast, our scheme offers significantly better protection---with 31\% PNR on average, the resilience improves by 57--64\% over the prior art of~\cite{wang16_sm, wang17}. Besides the comparison based on our PNR metric, we also contrast our scheme using established metrics (Sec.~\ref{sec:metrics}). As for the CCR, we note that the approach of~\cite{wang16_sm} again provides only a marginal improvement (2.4\%) over unprotected, original layouts. The scheme of~\cite{wang17}, however, achieves an improvement of 21.9\%, reducing the average CCR to 72.4\%. Here, too, our approach provides superior protection, achieving 0\% CCR. Our scheme further achieves an optimal OER of 100\% (as does~\cite{wang17}, but not~\cite{wang16_sm}). Finally, we observe an average HD of 40.3\%. This translates to improvements of 25\% and 11\% over~\cite{wang16_sm} and~\cite{wang17}, respectively, even though we do not specifically target the optimal HD (50\%) in our scheme. \begin{table*}[tb] \centering \scriptsize \setlength{\tabcolsep}{0.9mm} \caption{Comparison with original layouts and prior art. We calculate all PNR values as average over splitting layouts after M3, M4, and M5. CCR, OER, and HD values for original layouts, \cite{wang16_sm}, and \cite{wang17} are all quoted from \cite{wang17}; CCR is derived as (100\% - ICR).\newline We also report our PPA cost. All values are in percentage. The attack is based on~\cite{wang16_sm}. 
} \vspace{-2pt} \begin{tabular}{|*{24}{c|}} \hline {\textbf{Benchmark}} & \multicolumn{4}{|c|}{\textbf{Original Layout}} & \multicolumn{4}{|c|}{\textbf{Placement Perturbation \cite{wang16_sm}}} & \multicolumn{4}{|c|}{\textbf{Routing Perturbation \cite{wang17}}} & \multicolumn{7}{|c|}{\textbf{Proposed Scheme}} \\ \cline{2-20} & \textbf{\emph{PNR$^*$}} & \textbf{CCR$^*$} & \textbf{OER} & \textbf{HD} & \textbf{\emph{PNR}} & \textbf{CCR} & \textbf{OER} & \textbf{HD} & \textbf{\emph{PNR}} & \textbf{CCR} & \textbf{OER} & \textbf{HD} & \textbf{\emph{PNR}} & \textbf{CCR} & \textbf{OER} & \textbf{HD$^{**}$} & \textbf{Die-Area Cost} & \textbf{Power Cost} & \textbf{Delay Cost} \\ \hline \hline c432 & 97.9 & 92.4 & 75.4 & 23.4 & 95.1 & 90.7 & 98.8 & 41.8 & 87.5 & 78.8 & 99.4 & 46.1 & 32.3 & 0 & 100 & 45.9 & 7.7 & 13.1 & 11.6 \\ \hline c880 & 100 & 100 & 0 & 0 & 95.3 & 96.8 & 15.8 & 1.2 & 86.8 & 47.5 & 99.9 & 18.0 & 28.3 & 0 & 100 & 39.8 & 0 & 12.1 & 19.9 \\ \hline c1355 & 98.3 & 95.4 & 59.5 & 2.4 & 97.2 & 93.2 & 94.5 & 8.0 & 84.8 & 77.1 & 100 & 26.6 & 32.8 & 0& 100 & 46.1 &0 &12.2 & 21.3 \\ \hline c1908 & 98.9 & 97.5 & 52.3 & 4.3 & 96.9 & 91.0 & 97.8 & 17.7 & 91.2 & 83.8 & 100 & 38.8 & 29.5 & 0 & 100 & 48.1 &7.7 &14.6 &18.9 \\ \hline c2670 & 89.6 & 86.3 & 99.9 & 7.0 & 94.3 & 86.3 & 100 & 7.5 & 86.3 & 58.3 & 100 & 14.0 & 34.3 & 0 & 100 & 35.1 &7.7 &10.0 &12.0 \\ \hline c3540 & 93.8 & 88.2 & 95.4 & 18.2 & 88.5 & 82.6 & 98.8 & 27.9 & 86.2 & 77.0 & 100 & 36.1 & 30.8 & 0 & 100 & 46.4 &7.7 &5.0 &2.8 \\ \hline c5315 & 91.7 & 93.5 & 98.7 & 4.3 & 94.1 & 91.1 & 98.7 & 12.5 & 87.7 & 74.7 & 100 & 18.1 & 31.6 & 0 & 100 & 35.4 &7.7 &7.9 &16.9 \\ \hline c6288 & 97.0 & 97.8 & 36.8 & 3.0 & 95.9 & 97.6 & 74.2 & 16.5 & 92.1 & 80.9 & 100 & 42.1 & 35.6 & 0 & 100 & N/A$^{**}$ &27.3 &12.3 &15.7 \\ \hline c7552 & 93.8 & 97.8 & 69.5 & 1.6 & 97.1 & 97.9 & 81.7 & 3.1 & 93.9 & 73.9 & 100 & 20.3 & 26.9 & 0 & 100 & 25.7 &16.7 &9.3 &15.7 \\ \hline \hline \textbf{Average} & 95.7 & 94.3 & 65.3 & 7.1 & 94.9 & 
91.9 & 84.5 & 15.1 & 88.5 & 72.4 & 99.9 & 28.9 & 31.3 & 0 & 100 & 40.3 &9.2\% &10.7\% &15.0\% \\ \hline \end{tabular} \\ $^*$ By definition, PNR and CCR values for original, unprotected layouts shall be equal. For consistency, however, we report average PNR values here as well.\\ $^{**}$ The attack of~\cite{wang16_sm} tends to provide netlists with combinatorial loops, hindering their simulation. Those netlists have been post-processed using scripts of~\cite{kahng14}. For the benchmark \emph{c6288}, the post-processed netlist still fails simulation, due to ``UNKNOWN'' nets. \label{tab:metrics_compare} \end{table*} We also seek to compare with the work of Maga\~{n}a \emph{et al.\ }\cite{magana16,magana17}. However, having no access to their protected layouts of the \emph{IBM superblue} benchmarks, we can only compare on a qualitative level. In Table~\ref{tab:comparison_for_superblue}, we contrast their and our counts of \emph{additional vias} above their assumed split layer, i.e., M4, and up to M6, to which our scheme lifts wires. Note that only the total via counts across all layers before lifting and the layer-wise differences in via counts after lifting are given in~\cite{magana16}, but not the original via counts per layer. Considering the respective total via counts before lifting as independent baselines, our scheme increases the vias for V45 and V56 by 2.25--3.71\% (with respect to total vias), whereas Maga\~{n}a \emph{et al.\ }increase those via counts only by 0.67--2.03\%. In their recent study~\cite{magana17}, Maga\~{n}a \emph{et al.\ }also report on the relative via increases per layer; we contrast their increases with ours in Table~\ref{tab:comparison_for_superblue_TVLSI}. We observe on average 74\% and 101\% more vias for V45 and V56, respectively, while the respective increases reported in~\cite{magana17} are roughly only 16\% and 49\%. 
Note that we achieve the underlying wire lifting while keeping the die area fixed as in~\cite{magana17}, i.e., we induce zero area cost (and only marginal power and delay overheads, see also Subsec.~\ref{sec:PPA_analysis}). Any increase of vias above the split layer is a direct indication of more nets being routed in the BEOL, hence inducing more OPPs and a higher complexity for proximity attacks. Therefore, we believe that our scheme generally renders the \emph{IBM superblue} benchmarks more resilient. \begin{table*}[tb] \centering \scriptsize \setlength{\tabcolsep}{0.1em} \caption{Comparison with~\cite{magana16}. For fair comparison, we also allow no die-area overhead. We report PPA numbers on DRC-clean layouts. } \vspace{-2pt} \begin{tabular}{|*{12}{c|}} \hline \multicolumn{3}{|c|}{\textbf{Benchmarks}} & \multicolumn{3}{|c|}{\textbf{Implicit Wire Lifting~\cite{magana16}}} & \multicolumn{6}{|c|}{\textbf{Proposed Scheme}} \\ \hline {\textbf{Name}} & \textbf{ Nets$^*$} & \textbf{Placement} & \textbf{Total Vias$^*$} & \textbf{$\Delta_+$V45} & \textbf{$\Delta_+$V56} & \textbf{Total Vias$^*$} & \textbf{$\Delta_+$V45} & \textbf{$\Delta_+$V56} & \textbf{Power (mW)} & \textbf{Delay (ns)} & \textbf{Die Area $(\mu m^2)$} \\ & & \textbf{Util. 
(\%)} & \textbf{Before Lifting} & \textbf{After Lifting} & \textbf{After Lifting} & \textbf{Before Lifting} & \textbf{After Lifting} & \textbf{After Lifting} & \textbf{After / Before} & \textbf{After / Before} & \textbf{After = Before} \\ \hline \hline \emph{superblue1} & 879,168 & 69 & 4,597,616 & 40,051 (0.87\%) & 70,355 (1.53\%) & 6,679,733 & 247,739 (3.71\%) & 233,749 (3.50\%) & 82.7 / 81.9 & 29.8 / 29.4 & 1,520,868 \\ \hline \emph{superblue5} & 764,445 & 77 & 4,650,756 & 34,828 (0.75\%) & 62,704 (1.35\%) & 5,523,805 & 139,900 (2.53\%) & 133,052 (2.41\%) & 79.2 / 78.6 & 24.7 / 24.6 & 1,298,221 \\ \hline \emph{superblue10} & 1,158,282 & 75 & 6,304,110 & 42,210 (0.67\%) & 50,999 (0.81\%) & 8,875,439 & 228,454 (2.57\%) & 220,176 (2.48\%) & 116.3 / 115.5 & 29.8 / 29.5 & 2,176,080 \\ \hline \emph{superblue12} & 1,523,108 & 56 & 8,913,075 & 151,018 (1.69\%) & 175,614 (1.97\%) & 11,813,683 & 265,992 (2.25\%) & 274,908 (2.33\%) & 127.2 / 126.3 & 28.9 / 28.8 & 2,276,426 \\ \hline \emph{superblue18} & 672,084 & 67 & 3,582,687 & 45,417 (1.27\%) & 72,897 (2.03\%) & 4,852,381 & 164,971 (3.40\%) & 163,412 (3.37\%) & 81.3 / 80.4 & 19.7 / 19.5 & 1,158,182 \\ \hline \end{tabular} \\ $^*$ Values are different as we use \emph{Cadence Innovus} whereas Maga\~{n}a \emph{et al.\ }\cite{magana16} employ academic tools. Moreover, the metal layer corresponding to M10 in the \emph{Nangate 45nm} library~\cite{nangate11} is missing for~\cite{magana16}. As the contribution for overall routing tracks from M10 is only 1.41\%, the comparison can be considered fair nevertheless. \label{tab:comparison_for_superblue} \vspace{-2pt} \vspace{-2pt} \vspace{-2pt} \end{table*} \begin{table}[tb] \centering \scriptsize \setlength{\tabcolsep}{0.17em} \caption{Comparison with~\cite{magana17}. 
Note that our layouts are the same as for Table~\ref{tab:comparison_for_superblue}.$^*$ } \vspace{-2pt} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{3}{*}{\textbf{Benchmark}} & \multicolumn{2}{|c|}{\textbf{Implicit Wire Lifting~\cite{magana17}}} & \multicolumn{3}{|c|}{\textbf{Proposed Scheme}} \\ \cline{2-6} & \textbf{$\Delta_+$V45 (\%)} & \textbf{$\Delta_+$V56 (\%)} & \textbf{Wire Lifting} & \textbf{$\Delta_+$V45 (\%)} & \textbf{$\Delta_+$V56 (\%)} \\ & & & \textbf{(in \% of Nets)} & & \\ \hline \hline \emph{superblue1} & 19.18 & 80.10 & 7 & 101.20 & 133.55 \\ \hline \emph{superblue5} & 9.43 & 29.84 & 5 & 57.53 & 76.33 \\ \hline \emph{superblue10} & 26.05 & 54.18 & 5 & 63.97 & 81.93 \\ \hline \emph{superblue12} & 9.72 & 32.40 & 5 & 55.73 & 80.27 \\ \hline \emph{superblue18} & 14.32 & 47.41 & 8 & 91.84 & 135.27 \\ \hline \hline \textbf{Average} & 15.74 & 48.79 & 6 & 74.05 & 101.47 \\ \hline \end{tabular} \\ $^*$ This implies that the percentage of lifted wires is the same in Table~\ref{tab:comparison_for_superblue}. We simply report it here due to lack of space in Table~\ref{tab:comparison_for_superblue}. \label{tab:comparison_for_superblue_TVLSI} \vspace{-2pt} \vspace{-2pt} \vspace{-2pt} \end{table} \subsection{PPA Analysis} \label{sec:PPA_analysis} Recall that we cannot directly compare to the works of Wang \emph{et al.\ }\cite{wang16_sm, wang17} (and Maga\~{n}a \emph{et al.\ }\cite{magana16,magana17}). That is because we have no access to the library (and DEF) files, and PPA costs are not reported in the respective publications. As for our qualitative comparison with Maga\~{n}a \emph{et al.\ }\cite{magana16,magana17}, we also report our PPA numbers on the large-scale \emph{IBM superblue} benchmarks (Table~\ref{tab:comparison_for_superblue}). Notably, we observe only 0.85\%, 0.83\%, and 0\% overheads for power, delay, and die area, respectively. 
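As a sanity check on Table~\ref{tab:comparison_for_superblue}, the percentage figures there are simply the layer-wise via deltas normalized by each flow's own total via count before lifting. A minimal check, with the numbers copied from the \emph{superblue1} row:

```python
# Via-increase percentages as reported in the table: layer-wise via deltas
# over each flow's own total via count before lifting.
def via_increase_pct(delta_vias, total_vias_before):
    return round(100.0 * delta_vias / total_vias_before, 2)

# Our scheme, superblue1: 247,739 additional V45 vias on 6,679,733 total.
assert via_increase_pct(247_739, 6_679_733) == 3.71
# Implicit wire lifting [magana16], superblue1: 40,051 on 4,597,616 total.
assert via_increase_pct(40_051, 4_597_616) == 0.87
```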
We next discuss in detail the PPA cost as incurred for the comparative experiments (Subsec.~\ref{sec:security_analysis}, Table~\ref{tab:metrics_compare}). Empirically, we allow for different PPA budgets since large benchmarks such as {\em c6288} require more die area to maintain DRC-fixable layouts (and reasonably low PNR values). The average budgets for the experiments in Table~\ref{tab:metrics_compare} are 10\% for power and die area, and 15\% for delay, respectively. Using our flow and given these budgets, we can lift on average 50--60\% of all nets. This ratio of lifted nets over PPA budgets is reasonable---that is especially true in contrast to naive lifting (Fig.~\ref{fig:motivation_PPA}). \textbf{On Area:} Recall that our elevating cells do not impact the FEOL area. Besides, we initially set the utilization targets such that less than 1\% routing congestion can be obtained. Whenever required to enable lifting, we stepwise increase the die outlines, which is then reported as die-area cost. \textbf{On Power and Performance:} As we move selected nets to higher metal layers, an increase of wirelength is expected. As a result, we also observe average overheads of 10.7\% and 15.0\% for power and delays, respectively. One can attribute those reasonable overheads to the relatively low resistance of higher layers. Once more and more nets are lifted, however, that positive effect is offset by a steady increase of routing congestion. Typically, congestion is managed by re-routing, which lengthens nets further and aggravates the overheads to some degree. Besides, we conservatively estimate the impact of dummy OPPs. That is because we consider the annotated load of ECs, capturing the wires and sink, whereas only the capacitance of the dangling wire has to be driven in reality. \begin{table}[tb] \centering \scriptsize \setlength{\tabcolsep}{1mm} \caption{PPA cost and PNR when using two additional metal layers. 
The attack is based on~\cite{wang16_sm}.} \vspace{-2pt} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Benchmark} & \textbf{\em{PNR}} & \textbf{Die-Area Cost} & \textbf{Power Cost} & \textbf{Delay Cost} \\ \hline \hline c5315 & 28.1 & 0 & 2.9 & 3.3\\ \hline c6288 & 34.5 & 0 & 7.2 & 5.6\\ \hline c7552 & 24.6 & 0 & 3.5 & 4.3\\ \hline \hline \textbf{Average} & 29.1 & 0 & 4.5 & 4.4\\ \hline \end{tabular} \label{tb:extra-metal-layers} \vspace{-2pt} \vspace{-2pt} \vspace{-2pt} \end{table} \textbf{On the Use of Additional Metal Layers:} Finally, we observe that the PPA cost (and PNR) can be further improved once additional metal layers are employed (Table~\ref{tb:extra-metal-layers}). Here we duplicate M6 twice, resulting in 12 layers in total. Here we also focus on relatively large and challenging benchmarks. Additional metal layers can even provide a commercial benefit for SM and wire lifting, as long as higher layers are used. That is because the relatively low mask and manufacturing cost of large-pitch, higher layers may be more than compensated for by achieving zero die-area cost---this significantly reduces the overall impact of SM on commercial cost. \section{Conclusion} \label{sec:conclusion} We propose a BEOL-centric scheme towards concerted wire lifting, advancing the prospects of split manufacturing (SM). Besides, our novel PNR metric helps to properly quantify the resilience against gate-level theft of intellectual property (IP). The objectives we addressed here are ($i$)~to enable splits after higher metal layers, thereby reducing the commercial footprint of SM, ($ii$)~superior resilience, and ($iii$)~reasonable and controllable PPA cost. We believe that schemes like ours are essential to expedite the acceptance of SM in the industry. We demonstrated exhaustively that our concerted lifting scheme is more effective and efficient than naive lifting, both in terms of protection and PPA cost. 
In our comparative analysis, we also found that our scheme outperforms prior art. For example, we achieve 0\% CCR for commonly considered benchmarks (selected from the \emph{ISCAS-85}, \emph{MCNC}, and \emph{ITC-99} suites), whereas some prior art experiences CCR well above 70\%. Besides 0\% CCR, we enable PNR as low as 31\% on average. This directly translates to much better IP protection than prior art, which tends to experience 89\% PNR or even more. Note that we may further reduce the PNR by lifting more wires, at least once higher PPA budgets are considered acceptable. For future work, besides employing other upcoming proximity attacks, we will evaluate the resilience of our scheme within a formal security model. Also, we will further study the prospects of additional higher metal layers. \section*{Acknowledgements} The authors are grateful to Yujie Wang, Tri Cao and Jeyavijayan (JV) Rajendran (Texas A\&M University) for providing their network-flow attack and their protected layouts of~\cite{wang16_sm, wang17}. Further, the authors thank Jonathon Maga\~{n}a and Azadeh Davoodi (University of Wisconsin--Madison) for discussion. \footnotesize \newcommand{\setlength{\itemsep}{-0.14em}}{\setlength{\itemsep}{-0.14em}} \bibliographystyle{IEEEtran}
\section{Introduction} \IEEEPARstart{W}{e} all compose text messages in our daily lives. We send emails to our colleagues, share our movie reviews on social media platforms, and some of us write medical reports or publications like this one. Moreover, machines often produce logs which can be easily consumed by a human, but the unstructured or semi-structured nature of these messages poses a challenge for a computer system. Within all these messages lie pieces of information that scientists, doctors or marketers would like to extract and work with~\cite{Weikum2013}. The amount of data has reached an enormous volume---it continues to double each year~\cite{Manyika2011}---and generating value from it is a key competitive advantage. Information extraction is the task of extracting desired information from textual data and transforming it into a tabular data structure. A number of frameworks exist to perform this task, like the open-source applications GATE~\cite{Cunningham2002} and NLTK~\cite{Bird2006}. IBM's SystemT software~\cite{Krishnamurthy2009} couples a declarative rule language with a modular runtime based on relational algebra, augmented with special operators for information extraction primitives such as regular expressions and gazetteers. This approach improves the expressive power of the rule language, while enabling cost-based rule optimization that significantly improves extraction throughput. The desired information to be extracted can be formulated as a query written in an annotation rule language called AQL, which is similar to SQL but also includes text-specific operators. The AQL query gets compiled into an operator graph (AOG) which can be executed by the SystemT runtime on a given set of documents. A user typically creates and refines an AQL query in a development environment running on a set of sample documents before deploying the query on a compute cluster. 
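The execution model described above---an extraction primitive feeding relational operators in a compiled operator graph---can be illustrated with a minimal sketch. The operator names below are illustrative only; they are not actual AQL or AOG constructs.

```python
import re

# Minimal sketch of the operator model: a regex-extraction primitive
# feeding a relational selection. Operator names are illustrative, not
# actual AQL/AOG constructs.

def regex_extract(doc, pattern):
    """Extraction operator: emit one tuple per regex match."""
    return [{"span": m.span(), "text": m.group()}
            for m in re.finditer(pattern, doc)]

def select(tuples, predicate):
    """Relational selection over the extracted tuples."""
    return [t for t in tuples if predicate(t)]

doc = "Call 555-1234 or 555-9876 after 5pm."
phones = regex_extract(doc, r"\d{3}-\d{4}")          # extraction primitive
early = select(phones, lambda t: t["span"][0] < 10)  # relational operator
```

In the real system, such operator chains are compiled from AQL and optimized before execution; the sketch only shows how the two operator classes compose.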
The SystemT software uses a document-per-thread execution model, enabling each software thread to work on an independent document in parallel. A similar approach is taken also by the GATE software through the GATECloud.net service, which enables deployment of an annotation pipeline on a compute cloud. Measurements have shown an up to ten-fold speedup~\cite{Tablan2013} compared with a single-node server system. However, the experiments nearly doubled the CPU time, which makes the efficiency of such an approach questionable. The mismatch between the modern scale-out workloads and the existing server processor designs is significant~\cite{Ferdman2012}. These workloads often cannot make use of or do not profit from features, such as wide instruction windows, cache coherence, and out-of-order execution. As a result, modern server processor architectures use only a fraction of their available internal and external memory bandwidth when executing such tasks~\cite{dimitrov2013memory}. Although text analytics might not be the classic scale-out workload, it has similar symptoms when deployed. To overcome this inefficiency in modern server processors, two main trends have emerged in recent years. The first one is the use of many simple parallel processing cores either at the chip level~\cite{lotfi2012scale} or at the node level in the form of micro-servers~\cite{Intel}. A typical scale-out workload executes simple instructions profiting from small and efficient cores, while many cores operate independently as there is little or no data dependency. Another trend is the use of specialized and heterogeneous architectures~\cite{Shao2013, chung2010single}, such as system-on-chip processors in mobile devices or network processors in the telecommunication industry. These architectures either have custom instruction sets or include dedicated accelerators that are tailored to an application domain. 
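The document-per-thread execution model mentioned above maps naturally onto a thread pool: each worker annotates an independent document, with no data dependencies between documents. A minimal sketch (`annotate` is a hypothetical stand-in for running the compiled operator graph on one document):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the document-per-thread model: each thread processes an
# independent document. `annotate` stands in for executing the compiled
# operator graph; it is not a real SystemT API.

def annotate(doc):
    return {"doc": doc, "tokens": doc.split()}

def annotate_corpus(docs, threads=4):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(annotate, docs))

results = annotate_corpus(["one two", "three"])
```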
Dedicated hardware accelerators can yield high performance and efficiency gains, but often lack flexibility when different or new tasks need to be executed. On the one hand, a text analytics query remains unchanged for a long period of time, and operates on large volumes of data. On the other hand, the query is hand-crafted by a domain expert and can become very complex. A fixed architecture might not be flexible enough to execute new and complex queries. Thus, the text analytics system must provide the flexibility of processing arbitrary text analytics queries while identifying and accelerating bottleneck operations to improve the overall efficiency and the processing rates. In this work, we propose a reconfigurable accelerator to accelerate text analytics queries. The main contributions of this work are: \begin{enumerate} \item a deployment flow for a hardware-accelerated text analytics system that exploits the reconfigurability of field programmable gate arrays (FPGAs) to adapt to a wide range of text analytics queries; \item a multi-threaded hardware-software interface to support scale-out systems that operate on streams of text documents; \item implementation and evaluation of the proposed flow on real text analytics queries estimating an up to 16-fold speed-up with respect to the multi-threaded software implementation. \end{enumerate} \section{Related Work} The use of hardware accelerators for efficient query processing has been explored by several research groups. Such approaches have also been incorporated into commercial appliances. One of the earliest examples of such an approach is given in~\cite{Kung1980}, in which Kung and Lehman described systolic array-based accelerators for relational algebra operations. More recently, Muller et al. proposed a query compiler that produces FPGA bitstreams for complex event detection queries that consist mainly of relational algebra operations~\cite{Mueller2009}. Dennl et al. 
propose a system that enables on-the-fly composition of FPGA-based SQL query accelerators by combining a static stream-based communication interface with partially relocatable module libraries on the FPGA~\cite{Dennl2012}. Such an approach enables the creation of FPGA bitstreams for dynamically changing relational queries without going through time-consuming FPGA synthesis tools. Sukhwani et al.~\cite{SukhwaniPACT2012} describe an FPGA-based accelerator engine for database processing that offers a software-programmed interface to eliminate the need for FPGA reconfiguration. Chung et al.~\cite{chung2013linqits} present a query compiler for a domain-specific language called LINQ that can be mapped to accelerator templates. Wu et al.~\cite{wu2013navigating} describe a programmable hardware accelerator for range partitioning that is directly attached to a CPU core. The accelerator operates in a streaming fashion, but accelerates only the range-partitioning step of query processing. Our approach is inspired by IBM's PureData System~\cite{IBMPureData}, which attaches FPGAs directly to storage devices to deal with large volumes of data. Although our accelerator architecture uses a shared-memory setup, a direct I/O attachment can be beneficial for specific use cases, e.g., when documents are read from a database. To the best of our knowledge, our work is the first to produce FPGA-based accelerators that support a combination of information extraction operations (i.e., regular expressions) and relational algebra operations. \section{A Reconfigurable Accelerator} Our system improves information extraction throughput by executing selected operators on a reconfigurable device, such as an FPGA. One advantage of a reconfigurable device is that once it has been configured, it does not require any instructions to execute its tasks. 
The only data that must be transferred between the memories and the FPGA is the actual data to be processed, together with some negligible control information. In the case of text analytics applications, which are typically applied to large volumes of data, the same query runs for several hours or even days. Thus, fast reconfiguration capability is not needed. \begin{figure}[!t] \centering \includegraphics[width=3.0in]{aog} \caption{Example of partitioning an operator graph (a) into a supergraph that is executed by the runtime (b) and an accelerated subgraph that gets compiled into a hardware netlist (c).} \label{aog} \end{figure} Another advantage of FPGAs is their capability to compute in space. On the one hand, a reconfigurable device can implement a deep custom pipeline working on different data sets at different stages. On the other hand, multiple parallel instances can operate simultaneously on the same data set executing different tasks, as in our architectures for the extraction operators~\cite{Atasu2013, Polig2013}. This high degree of parallelism makes up for the comparatively low clock frequencies of FPGAs. By moving not only single operators to the FPGA but also larger subgraphs of the operator graph, the parallelism can be fully exploited and the amount of communication between the software-based operators and the hardware-accelerated operators can be minimized. Fig.~\ref{aog} illustrates how an operator graph (a) can be partitioned into a supergraph (b) and a hardware-accelerated subgraph (c). Operators that are moved to the accelerator are removed from the original operator graph and replaced with a new subgraph operator. It is also possible to extract multiple independent subgraphs that can be executed in parallel or in sequence on the FPGA for the same or a different set of text documents. 
In this way, most of the unnecessary data gets filtered out before reaching the software modules running on the server processors, which greatly improves the processing rates. In this work, we have used the concept of maximal convex subgraphs~\cite{ReddingtonTVLSI2012} to identify the subgraphs that are maximal in size and that can be atomically executed without processor intervention. To automate the generation of query-specific accelerators, we have extended the compilation flow of SystemT. Fig.~\ref{system} shows the acceleration flow added to the original SystemT text analytics system. The AQL query gets compiled into the operator graph, which is further processed by the original SystemT optimizer. Before deploying the operator graph, we perform a partitioning step that generates the software supergraph and the subgraphs that are run on the FPGA. We have also developed a query compiler~\cite{PoligTechRep2014} that uses a set of configurable operator modules which can be linked using an elastic interface to generate a streaming hardware design for a given subgraph. The document is processed on the FPGA as a sequence of ASCII characters and is the only variable-length data structure used. The main data structure used is a so-called \emph{span} that defines a segment within the document text. A span is composed of a start and an end offset, both of which are represented as 32-bit integers. Additional data types are integers, floats, and booleans. The same type of operator can have input schemas consisting of different numbers and types of fields. However, all of these schemas are known at compile time, and our compiler generates a custom operator for each node in the operator graph. The compiler leverages the fact that a large set of operators can be implemented in streaming fashion when the input data is sorted in a particular order. Sorting itself is a blocking operation, but many operators produce sorted or nearly sorted output data naturally. 
By adding simple sorting buffers or configuring preceding operators properly, the compiler ensures the streaming operation of the accelerator. After the compiler generates the hardware description, it is synthesized and the configuration is loaded onto the FPGA. The supergraph is executed by the SystemT runtime on the host CPU, whereas the subgraphs are run on the reconfigurable accelerator. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{system_architecture} \caption{Extending SystemT's compilation flow to support FPGA-based hardware-accelerators.} \label{system} \end{figure} The SystemT runtime uses multiple worker threads, all of which execute the same supergraph on different documents. When a worker thread reaches a subgraph operator, it signals this to a dedicated communication thread, which coordinates the data transfers between the runtime and the FPGA. Because of the document-per-thread execution model, we put the worker thread to sleep while the subgraph is being executed on the accelerator. To keep the CPU cores from idling, a high number of worker threads is run in parallel to hide the execution time of the FPGA. Ideally, the reconfigurable device would have a very low latency when accessing the data after receiving the instruction to execute its configured subgraph. However, traditionally, FPGA accelerators are attached via the system bus and access the data via DMA transfers, whose memory access latency is at least three to four times higher than that of the processor itself~\cite{LatencyHT}. Our accelerators use the load-store units of an early version of the coherent accelerator processor interface (CAPI)~\cite{Stuecheli2013}. A service layer implemented on the FPGA enables the accelerators to access the processor's main memory and operate in a common virtual address space with the applications running on the processor. 
The address translation is software-based in the system that is available to us, and occurs within our communication thread, resulting in an additional communication overhead. To minimize the impact of this overhead, larger data blocks ($>$ 1000 bytes) should be transferred at once to fully use the system bus bandwidth. Therefore, the communication thread collects the data submitted by some of the worker threads and generates a larger combined work package. It then sends the data to the accelerator's work queue and starts again to check for submissions from the worker threads. When the FPGA finishes working on a work package, it signals this via a status register to the communication thread, which wakes up the software threads that belong to this work package. \begin{figure}[!t] \centering \includegraphics[width=3.0in]{interface} \caption{Communication scheme using multiple SystemT software threads. The communication thread orchestrates the transfers between the SystemT runtime and the hardware accelerator.} \label{elasticif} \end{figure} \section{Experiments} \label{experiments} We carried out a number of performance and profiling experiments on an IBM POWER7 server running at 3.55 GHz with 64 logical threads and 64 GB of DDR3 memory. We synthesized the accelerator designs for an Altera Stratix IV FPGA running at 250 MHz. The FPGA was attached to a proprietary bus interface that is capable of 2.5 GB/s DMA transfers. \subsection{Software measurements} We evaluated five customer queries that we ran over the same set of input documents. The SystemT profiler captures the time spent at each operator and accumulates it over the total runtime. From these numbers, we derived a relative distribution to obtain comparable profiles of our testcases, as shown in Fig.~\ref{prof}. Queries T1 to T4 are dominated by the processing time spent on extraction operators (RegularExpression \& Dictionaries), whereas query T5 spends more than 80\% of its time at relational operators. 
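The batching behavior of the communication thread described above can be sketched in software. The following is a simplified, illustrative model only: a thread-safe queue stands in for worker-thread submissions, an event stands in for waking a sleeping worker, and the 1000-byte threshold follows the description above; none of these names come from the actual SystemT or CAPI code.

```python
import queue
import threading

MIN_PACKAGE = 1000  # transfer blocks larger than ~1000 bytes at once

# Worker threads submit (doc_id, data, done_event) tuples here.
submissions = queue.Queue()

def send_to_accelerator(package):
    """Placeholder for the DMA transfer to the FPGA work queue.
    Here we immediately signal completion; in reality the worker
    threads are woken only after the status register signals."""
    for _, _, done in package:
        done.set()

def communication_thread(stop):
    """Collect worker submissions into a combined work package and
    dispatch once it exceeds MIN_PACKAGE bytes (or the queue drains)."""
    package, size = [], 0
    while not stop.is_set():
        try:
            item = submissions.get(timeout=0.01)
        except queue.Empty:
            item = None
        if item is not None:
            package.append(item)
            size += len(item[1])
        if package and (size >= MIN_PACKAGE or item is None):
            send_to_accelerator(package)
            package, size = [], 0
```

Two 600-byte submissions would be combined into a single 1200-byte package before dispatch, whereas a lone small document is dispatched only when no further submissions arrive.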
\begin{figure}[!t] \centering \includegraphics[width=3.0in]{profiling} \caption{Relative time spent on executing different operators for five real-life text analytics queries.} \label{prof} \end{figure} Extraction primitives operate across the entire document, whereas relational operators usually work on the results produced by extractors. The extraction operators are typically the slowest operations in software. As a result, the throughput for testcase T5 is higher than for T1-T4. Fig.~\ref{throughput} shows the system throughput for all testcases running with different numbers of threads. Initially, the throughput scales nearly linearly with the number of threads before starting to roll off at eight threads. Surprisingly, the throughput increases again strongly between 32 and 40 worker threads. This behavior appears to be caused by the operating system scheduler, which uses all logical threads on one processor before placing threads on another one. \begin{figure}[!t] \centering \includegraphics[width=3.0in]{sw_throughput} \caption{Throughput of the original software vs. the number of threads for 256 byte documents.} \label{throughput} \end{figure} \subsection{Accelerator measurements} As the query profiles show, a significant amount of time is spent on extraction operators that operate on the entire document data. As a result, we have optimized our HW-SW interface for this type of input so that the extraction operators on the FPGA determine the maximum achievable throughput rate regardless of the subgraph configured on the accelerator. Significant backpressure from the relational operators was never observed in our test cases, and could be removed by using shallow buffers at critical stages. In our experiments, we measured the throughput rate for an accelerator with four parallel streams and a maximum peak bandwidth of 500 MB/s. 
Fig.~\ref{throughput_hw} shows the measured throughput rate for different document sizes, which are submitted by parallel SystemT worker threads to the interface. We observe that we achieve the peak bandwidth when using document sizes of 2 kB or larger. News entries typically contain a few kilobytes of text, and thus can be processed at the peak bandwidth of the accelerator. In contrast, when using 128-byte documents, the throughput diminishes by a factor of ten, and when using 256-byte documents, by a factor of five, even though the communication thread combines small documents into a larger work package. Although these numbers do not represent the size of a typical text document, they are representative of the typical size of Twitter messages and RSS feeds. \begin{figure}[!t] \centering \includegraphics[width=3.0in]{throughput_hw} \caption{Throughput of the FPGA executing all extraction operators of query T1 using four parallel text streams for different document sizes.} \label{throughput_hw} \end{figure} \section{Analysis} Our existing implementation of the SystemT runtime is not capable of executing the generated supergraph, indicated by the dashed line in Fig.~\ref{system}. Therefore, we estimate the achievable overall system bandwidth by analyzing the results from Section~\ref{experiments}. We observe that the runtime of most queries is dominated by extraction-type operators, which consume up to 82\% of the overall runtime. As all of these operators operate on the same document data source, they are an ideal candidate for acceleration on the reconfigurable device, where they can operate in parallel on a single document pass. Offloading additional hardware-supported relational operators raises this share to up to 97\% of the total runtime. The software throughput rate varies with the profile of the query, whereas the hardware throughput is determined by the input operator of the subgraph. 
We chose to always offload the extraction operators, which allowed us to focus on the document data transfers. The document size has a significant impact on the throughput of the accelerator, as Fig.~\ref{throughput_hw} shows. Although the peak bandwidth can only be reached using larger documents, the throughput rate for smaller documents is still much higher than that of the pure software. We estimate the overall system throughput using (\ref{throughput_estimate}), in which we add the remaining time spent on software processing, given by the runtime fraction $rt_{SW}$, to the time spent on the accelerator, using the measured throughput rates $tp_{SW}$ and $tp_{HW}$. The interface cost is included in our measurements of the accelerator throughput and does not need to be added as an extra penalty. We estimate the throughput achieved 1) by offloading only the extraction operations to the FPGA, 2) by offloading a single maximal convex subgraph that contains all extraction operations and as many hardware-supported operators as possible, and 3) by offloading all hardware-supported operators to the FPGA using multiple maximal convex subgraphs. In the first two cases, our estimations are pessimistic because we do not take into account potential processing overlaps between the FPGA and the CPU. In the third case, our estimations are optimistic because we do not take into account the communication overhead incurred by the additional subgraphs. Fig.~\ref{acceleration} summarizes our estimations when using 64 software threads, four hardware streams, and average document sizes of 256 or 2048 bytes. 
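As a quick sanity check, the estimate in (\ref{throughput_estimate}) can be evaluated with a short script. The numbers below are illustrative assumptions only, not the paper's measured values.

```python
def estimated_throughput(tp_hw, tp_sw, rt_sw):
    """tp_est = 1 / (1/tp_HW + rt_SW/tp_SW): the total time per unit of
    data is the accelerator time plus the remaining software time,
    where rt_sw is the fraction of the runtime left in software."""
    return 1.0 / (1.0 / tp_hw + rt_sw / tp_sw)

# Assumed example: accelerator peak 500 MB/s, software 10 MB/s,
# 20% of the runtime remaining in software.
print(estimated_throughput(tp_hw=500.0, tp_sw=10.0, rt_sw=0.2))
```

With these assumed numbers the estimate is dominated by the remaining software share (about 45 MB/s, a 4.5-fold gain), illustrating why offloading additional operators, as in cases 2) and 3), pays off.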
\begin{equation} \label{throughput_estimate} tp_{est} = \dfrac{1}{\dfrac{1}{tp_{HW}} + \dfrac{rt_{SW}}{tp_{SW}}} \end{equation} \begin{figure}[!t] \centering \includegraphics[width=3.0in]{acceleration} \caption{Throughput using 64 software threads and estimated throughput when executing the extraction operators, a single subgraph or multiple subgraphs on the accelerator for 256 and 2048 byte documents.} \label{acceleration} \end{figure} Although the throughput rates of queries T1-T4 increase up to 4.8-fold by offloading the extraction operators, query T5 sees a limited impact. Only by running multiple subgraphs on the accelerator does query T5 gain an up to three-fold improvement. Query T1 improves by a factor of ten by offloading multiple subgraphs to the accelerator for small documents, and by a factor of 16 for larger documents. \section{Conclusion} As we enter the Big Data era, deriving value from large amounts of data efficiently becomes a necessity. We believe that text analytics will be a key application of this new era, but that it is challenged by the growing complexity of queries and ever more data to process. We have presented a prototype system that includes an FPGA as a reconfigurable accelerator and a hardware compiler that enables offloading selected parts of a given text analytics query. Projections based on profiling results and actual measurements on the FPGA-attached system promise an up to 16-fold speed-up over purely software-based solutions. The speed-up results reported in this paper can be further improved by including support for additional relational operators in our hardware compiler. Further optimizations to the interface are also being investigated to minimize the latency penalty of small documents. Our future work will cover hardware/software partitioning algorithms that maximize the overall system's throughput rate under the resource constraints of the FPGA. 
We also plan to identify the most power-efficient design choices for a given query. \ifCLASSOPTIONcaptionsoff \newpage \fi \balance
\section{Introduction} The problem of program synthesis is an important one in AI. Synthesis has many settings, from the fully automated, where code is synthesized at the level of machine code, to the interactive, where synthesis assists a professional developer in an IDE. We focus on a novel synthesis setting in which synthesis may be used to facilitate computer science education. \begin{figure}[htb] \begin{center} \begin{tabular}{c c c} \includegraphics[width=0.1275\textwidth]{cs-drawing-eps-converted-to.pdf} & \includegraphics[width=0.1\textheight]{cs-prog-trajectory-eps-converted-to.pdf} & \includegraphics[width=0.1\textheight]{cs-solution-trajectory-eps-converted-to.pdf} \\ & \includegraphics[width=0.14\textheight]{cs-prog-eps-converted-to.pdf} & \includegraphics[width=0.14\textheight]{cs-solution-eps-converted-to.pdf} \\ (a) An intended trajectory. & (b) A partial program. & (c) The synthesized solution. \\ \end{tabular} \end{center} \caption{An example synthesis task.} \label{fig:star} \end{figure} Consider a student in an educational programming task, drawing an image with a turtle-style program. They may have an intention expressible as a trajectory they would like to draw, as shown in \textbf{Figure}~\ref{fig:star} (a). This trajectory can be considered a \emph{complete} but \emph{noisy} specification of the intended program. They may also have some program built up in their workspace, but this program may be ``incomplete'' or ``buggy''. For example, \textbf{Figure}~\ref{fig:star} (b) shows a program the user might have composed with the intention of drawing \textbf{Figure}~\ref{fig:star} (a), along with the trajectory it actually creates. The user might be uncertain how to proceed in order to correct this. 
It is in this setting that we seek to formulate a synthesis task that can provide the solution shown in \textbf{Figure}~\ref{fig:star} (c), where the arguments to both the \texttt{repeat} and \texttt{turn} blocks are corrected and a \texttt{move} block is added to yield the program nearest to the intended trajectory. In this paper, we describe how to synthesize code from such a user-provided visual specification. We ultimately formulate the synthesis problem as an optimization problem suitable for combinatorial search. \section{Setting} \subsection{Programming Language} We will consider the space $\ensuremath{\mathcal{\bar{P}}}$ of turtle programs generated by the following grammar: \begin{align*} \textit{Program} \rightarrow \ &\textit{Statement}* \\ \textit{Statement} \rightarrow \ &\textsc{Move} \\ &| \ \textsc{Turn}\,\textit{Angle} \\ &| \ \textsc{Repeat } \textit{Int Program} \end{align*} Here $\textit{Angle}$ takes on values at increments of 30 degrees and $\textit{Int}$ takes on values from 2 to 5. Throughout the paper, we will speak of elements of this space equivalently as either programs or blocks. As can be seen in \textbf{Figure}~\ref{fig:star} and elsewhere, blocks may be connected by being nested horizontally within \texttt{repeat} statements or stacked vertically. If a block is connected vertically beneath another, we refer to it as a \emph{child} block of its \emph{ancestor}. If a block has no ancestors, we refer to it as a \emph{root} block. \begin{figure}[htb] \begin{center} \begin{tabular}{c c} (a) Two programs. & (b) One program. \\ \includegraphics[width=0.2\textwidth,trim={0 5mm 0 5mm},clip]{disconnected-blocks-eps-converted-to.pdf} & \includegraphics[width=0.17\textwidth]{connected-blocks-eps-converted-to.pdf} \end{tabular} \end{center} \caption{Examples of workspaces.} \label{fig:workspace} \end{figure} We implement this turtle language using the Blockly\footnote{https://developers.google.com/blockly} visual block programming language and its editor. 
The semantics of this language are as follows: \begin{itemize} \item{A \textsc{move} statement translates the turtle in its current direction by some fixed magnitude.} \item{A \textsc{turn} statement rotates the turtle by the specified number of degrees.} \item{A \textsc{repeat} statement executes a subprogram some number of times.} \end{itemize} A user may position several such elements of $\ensuremath{\mathcal{\bar{P}}}$ on their \textbf{workspace}, as shown in \textbf{Figure}~\ref{fig:workspace} (a). Taking a more abstract perspective, we can define a workspace as a list of elements in $\ensuremath{\mathcal{\bar{P}}}$, and we denote the space of workspaces by $\ensuremath{\mathcal{P}}$. We call a set of points on the two-dimensional plane $t \in \ensuremath{{\mathcal T}}$ a \textbf{trajectory}, and we write $\ensuremath{I}(p) = t$ for the interpretation function which maps a workspace $p$ to its trajectory $t$. This interpretation function can be thought of as executing each block in the workspace on the canvas in the order in which it appears in the workspace list. \subsection{Editing Environment} \label{sec:editing} We can describe the user interface of an editor for our programming language through \textbf{editing commands}. These commands represent discrete mouse manipulations performable through Blockly. Note that for these commands to make sense, we must label each block on the workspace with an identifying number. The families of commands are: \begin{figure}[H] \begin{enumerate} \item Get a \emph{\{Type\}} block. \item Remove block \emph{\{BlockId\}}. \item Connect block \emph{\{BlockId\}} under block \emph{\{BlockId\}}. \item Connect block \emph{\{BlockId\}} inside block \emph{\{BlockId\}}. \item Disconnect block \emph{\{BlockId\}}. \item Change \emph{\{Val\}} in block \emph{\{BlockId\}} to \emph{\{Val\}}. 
\end{enumerate} \caption{Family of available editing commands.} \label{fig:commands} \end{figure} (1) adds a new block to the workspace, as a root block not connected to any other block on the workspace. The type parameter can be one of Move, Turn, or Repeat. (2) removes the block and all its child blocks. This command matches the Blockly semantics of dragging a block to the trash bin. (3) and (4) move a block and all its children to a new location. (3) moves a source block under a target block and connects them. If the target block has children, they are appended under the source block's children. (4) is distinguished from (3) in that the target must be a repeat block, and it places the source block at the top of the repeat body. (5) disconnects a block from its parent, making it into a root block on the workspace. (6) modifies the parameter values of blocks, such as the angle in the turn block or the integer in the repeat block. The user writes a program by applying a sequence of editing commands beginning from an empty workspace. That is, a command, when specialized by a choice of feasible values for its parameters, can be thought of as mapping a workspace $p$ to a successor workspace $p'$. For example, beginning at an empty workspace, the following sequence of commands produces the program in \textbf{Figure}~\ref{fig:star} (b): \begin{enumerate} \item Get a repeat block. \item Get a move block. \item Connect block 2 inside block 1. \item Get a turn block. \item Connect block 3 under block 2. \item Change 30 in block 3 to 120. \end{enumerate} This family of commands represents an abstraction of the editor's capabilities, as the user would typically be manipulating the editing environment with keyboard and mouse. \section{Problem Formulation} Having described the programming language and editing environment, we are now in a position to formulate our synthesis problem as search. 
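Before formalizing the search, the interpretation function $I$ from the Setting can be sketched concretely. This is an illustration under assumed simplifications, not the authors' implementation: the move magnitude is fixed arbitrarily at 10, programs are nested tuples mirroring the grammar, and a trajectory is approximated by the set of positions the turtle visits.

```python
import math

STEP = 10.0  # assumed fixed move magnitude

def interpret(workspace):
    """Sketch of I(p): execute each root program in the workspace in
    order, collecting the set of turtle positions as the trajectory.
    Statements are ("move",), ("turn", angle) or ("repeat", n, body)."""
    points = {(0.0, 0.0)}
    x, y, heading = 0.0, 0.0, 0.0

    def run(program):
        nonlocal x, y, heading
        for stmt in program:
            if stmt[0] == "move":
                x += STEP * math.cos(math.radians(heading))
                y += STEP * math.sin(math.radians(heading))
                points.add((round(x, 6), round(y, 6)))
            elif stmt[0] == "turn":
                heading += stmt[1]
            elif stmt[0] == "repeat":
                for _ in range(stmt[1]):
                    run(stmt[2])

    for program in workspace:
        run(program)
    return points
```

For example, the workspace `[[("repeat", 4, [("move",), ("turn", 90)])]]` traces out the four corners of a square.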
The user in our programming environment intends to produce a trajectory $t^* \in \ensuremath{{\mathcal T}}$ by means of a turtle program. Consider this as a search on a graph $\ensuremath{{\mathcal G}}\xspace$ whose vertices are the set of workspaces $\ensuremath{\mathcal{P}}$. There is an edge from $p$ to $p'$ in $\ensuremath{{\mathcal G}}\xspace$ if there is an editing command which produces workspace $p'$ when applied to workspace $p$. All edges have unit cost. We let $\textrm{cost}(p,p')$ designate the weight of the shortest path from $p$ to $p'$ in $\ensuremath{{\mathcal G}}\xspace$. We designate the initial state of the user's workspace by $p_0$. We measure the similarity of trajectories with the Hausdorff distance. The Hausdorff distance is a commonly used metric for tasks in object matching and image analysis \cite{jesorsky2001robust} and serves as a natural measure of the quality of fit of a candidate program to a trajectory. We denote the Hausdorff distance between sets of points $X$ and $Y$ by $$d_H(X,Y) := \max(\max_{x \in X} \min_{y \in Y} d(x,y), \max_{y \in Y} \min_{x \in X} d(x,y))$$ where $d(\cdot,\cdot)$ is the ordinary Euclidean distance. Synthesizing a good solution $\hat{p}$ from $p_0$ for trajectory $t^*$ involves a tradeoff between two types of distance or error. On the one hand, a candidate solution $p$ has some distance from the target trajectory, $d_H(I(p),t^*)$, reflecting the quality of its fit. On the other hand, it may depart significantly from $p_0$, reflected in a large value of $\textrm{cost}(p_0,p)$, the minimum number of editing steps required to create $p$ when beginning from $p_0$. We trade these off in a constraint formulation. Given intended trajectory $t^*$ and current program $p_0$, we define our synthesis problem as: \begin{equation*} \begin{aligned} & \underset{p \in \ensuremath{\mathcal{P}}}{\text{argmin}} & & d_H(\ensuremath{I}(p), t^*) \\ & \,\,\,\,\,\,\text{s.t.} & & \textrm{cost}(p_0,p) \le C. 
\end{aligned} \end{equation*} In all experiments below, we choose $C = 6$. This choice was made to ensure practical run times. \section{Algorithms} \subsection{IDPS} \begin{figure} \begin{center} \begin{tabular}{c c c} \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-1-eps-converted-to.pdf} & \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-2-eps-converted-to.pdf} & \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-3-eps-converted-to.pdf} \\ \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-4-eps-converted-to.pdf} & \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-5-eps-converted-to.pdf} & \includegraphics[width=0.11\textheight,height=0.11\textheight,trim={0 2cm 3cm 0},clip]{seq-plot-6-eps-converted-to.pdf} \end{tabular} \end{center} \caption{ A noisy trajectory (white) atop improving completions $\hat{p}_1, \hat{p}_2, \dots, \hat{p}_6$ (red) and the initial program $p_0$ (blue). } \label{fig:searchseq} \end{figure} Our first algorithm is a variant of iterative deepening, which we call iterative deepening program search (IDPS). We use a path-checking depth-limited search as a subroutine to minimize space requirements. Unlike traditional AI search procedures, IDPS returns not one but a sequence of programs $\hat{p}_1,\hat{p}_2,\dots$ whose Hausdorff distance from the target trajectory $t^*$ strictly decreases. At iteration $d$, we initialize $\epsilon$ to $d_H(I(p_0),t^*)$ and expand the search graph up to depth $d$. Then, we iterate through all depth-$d$ programs $p$, checking if $d_H(I(p), t^*) < \epsilon$. When such a program is found, we emit it to our output sequence, and update $\epsilon$ to $d_H(I(p), t^*)$ so that future goal states must be strictly better than $p$. 
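The Hausdorff acceptance test used above can be sketched directly from its definition; this is a plain quadratic implementation over finite point sets.

```python
import math

def hausdorff(X, Y):
    """Symmetric Hausdorff distance between two finite point sets:
    d_H(X, Y) = max(directed(X, Y), directed(Y, X))."""
    def directed(A, B):
        # For each point in A, distance to its nearest point in B;
        # the directed distance is the worst such case.
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(directed(X, Y), directed(Y, X))
```

For instance, `hausdorff([(0, 0), (1, 0)], [(0, 0)])` is `1.0`: every point of the second set is covered, but `(1, 0)` is one unit from its nearest neighbor.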
\textbf{Figure}~\ref{fig:searchseq} displays an example of such a sequence, where $t^*$ is a noisy square and the initial program draws only a line. By convention, we include the initial program $p_0$ in the sequence. \subsection{Sampling search} Our second algorithm is a sample-based search shown in \textbf{Algorithm}~\ref{alg:samplesearch}. Here, we use a corpus of user programs and trajectories to guide our search. \textbf{Algorithm}~\ref{alg:samplesearch} can be described as searching a graph rooted at $p_0$. The initial program $p_0$ is represented by a sequence of editing commands. We use the notation $\bar{p} = p:c$ to represent the program $\bar{p}$ resulting from appending command $c$ to the end of command list $p$. The algorithm samples commands $c$ from a distribution $\ensuremath{\mathbb{P}}_{c|p}$ parameterized by the command sequence $p$. A budget of $b$ candidates is sampled in total. Each of the $\frac{b}{C}$ sampling rounds begins at $p_0$ and samples $C$ commands, sequentially appending and evaluating them. The candidate minimizing $d_H(I(\cdot),t^*)$ among all sampled candidates is returned. \begin{algorithm} \begin{algorithmic} \caption{Sampling Search}\label{alg:samplesearch} \Require Budget $b$, cost $C$, initial program $p_0$, visual specification $t^*$ \Ensure Program $best$ \Procedure{SamplingSearch}{$b, C, p_0, t^*$} \State $best \gets p_0$ \For{each $i$ in $1 \dots \frac{b}{C}$} \For{each $j$ in $1 \dots C$} \State{$c_j \sim \ensuremath{\mathbb{P}}(c|p_{j-1})$} \State{$p_j \gets p_{j-1} : c_j$} \If{$d_H(I(p_j), t^*) < d_H(I(best), t^*)$} \State $best \gets p_j$ \EndIf \EndFor \EndFor \Return{$best$} \EndProcedure \end{algorithmic} \end{algorithm} What distinguishes variants of this algorithm is how the distribution $\ensuremath{\mathbb{P}}(c|p)$ is modeled. We factor $\ensuremath{\mathbb{P}}(c|p)$ into a bigram model over command types and a distribution over command arguments. 
That is, we map the command sequence $p = \{ c_i \}_{i=1}^{|p|}$ to a coarsened sequence $\{ \tilde{c}_i \}_{i=1}^{|p|}$ by discarding arguments so that $\tilde{c}_i \in$ \{\textrm{Get}, \textrm{Remove}, \textrm{Connect}, \textrm{Change}, \textrm{Separate}\}. We then draw $\tilde{c}_{|p|+1}$ from $\ensuremath{\mathbb{P}}(\cdot|\tilde{c}_{|p|})$, a Markov chain over the coarsened tag sequence. To sample $c_{|p|+1}$ given $\tilde{c}_{|p|+1}$, we have two models: a \textbf{uniform} model and a \textbf{non-uniform} model. Under the uniform model, we sample the $\{\textrm{Type}\}$, $\{BlockId\}$, and $\{\textrm{Val}\}$ arguments of the command language uniformly from all available types, values, and -- in the case of $\{BlockId\}$ -- all available positions in the current program. For the non-uniform model, we make the following observation about the process of editing programs: locations at which to modify the current program are chosen with a particular focus in mind. Specifically, when a user is connecting one block to another, the source block is more often the last block added to the workspace, while the destination block is often the next-to-last block added to the workspace. We construct a simple model to accommodate this observation. In choosing $\{BlockId\}$ arguments, we designate a probability $\lambda_{-1}$ that the source block is the last block added to the workspace. We assign the remaining probability mass $(1 - \lambda_{-1})$ uniformly over the other feasible blocks. We then sample a destination block as the next-to-last block with probability $\lambda_{-2}$, reserving probability $(1 - \lambda_{-2})$ to be uniformly distributed over all remaining feasible choices. All of these probabilities can be estimated from our corpus. We estimate the transition probabilities $\ensuremath{\mathbb{P}}(\tilde{c}_j|\tilde{c}_{j-1})$ over the coarsened command sequences, smoothing with pseudo-counts of 1. 
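The add-one-smoothed bigram estimate just described can be sketched as follows; the tag names follow the coarsened command set above, and the function name is illustrative.

```python
from collections import Counter

TAGS = ["Get", "Remove", "Connect", "Change", "Separate"]

def estimate_bigram(sequences, pseudo=1):
    """Estimate P(next_tag | prev_tag) from coarsened command sequences
    (lists of tags), smoothing each transition with `pseudo` counts."""
    counts = Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
    probs = {}
    for prev in TAGS:
        total = sum(counts[(prev, nxt)] + pseudo for nxt in TAGS)
        for nxt in TAGS:
            probs[(prev, nxt)] = (counts[(prev, nxt)] + pseudo) / total
    return probs
```

The pseudo-counts ensure that every transition, including ones never observed in the corpus, retains nonzero probability, so the sampler can still explore unseen command orders.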
We estimate $\lambda_{-1}$ and $\lambda_{-2}$ by taking the empirical proportion of such decisions over the corpus. The product of the distribution over command types and the distribution over command arguments defines the distribution $\ensuremath{\mathbb{P}}(c|p_{j-1})$ shown in \textbf{Algorithm}~\ref{alg:samplesearch}. \subsection{A Computational Speedup} While the Hausdorff distance $d_H$, used in both the IDPS and sampling search algorithms, is a natural choice for scoring the quality of a fit, computing $d_H(X,Y)$ becomes prohibitive, as it is quadratic in the number of points of the two point sets $X$ and $Y$ being compared. Suppose we have some threshold $\alpha > 0$ and we wish to determine whether $d_H(X,Y) < \alpha$. In many cases where $d_H(X,Y) \ge \alpha$, we may avoid some of this computation. If there exists a point $x \in X$ such that $\forall y \in Y, d(x,y) \ge \alpha$, then we may terminate the computation, as $d_H(X,Y) \ge \alpha$. Furthermore, if for any $x \in X$ we can find some $y \in Y$ such that $d(x,y) < \alpha$, we may omit the computation of any further $d(x,y')$ for $y' \in Y$, because $\min_{y \in Y} d(x,y) < \alpha$. These speedups, however, avoid computation only when $d_H \ge \alpha$; they do not compute the value of $d_H$ itself. We therefore employ them in the algorithms above by replacing each inequality condition involving $d_H$ with this thresholded test; if the test indicates $d_H < \alpha$, the full quadratic computation of $d_H$ is performed to obtain its value. \section{Evaluation} We solicit a corpus of $\ensuremath{n} = 23$ programs and their visual specifications from 11 volunteer study participants, who range in programming experience from novice to professional. \textbf{Figure}~\ref{fig:corpus} and \textbf{Figure}~\ref{fig:sample} give examples of a participant program and some participant specifications, respectively.
\begin{figure}[t] \begin{itemize} \setlength\itemsep{0.1em} \item Get a repeat block \item Get a turn block \item Connect block 2 inside block 1 \item Change 30 in block 2 to 270 \item Get a move block \item Connect block 3 under block 2 \item Get a repeat block \\ \dots \end{itemize} \caption{Fragment of a program from the corpus.} \label{fig:corpus} \end{figure} To construct this corpus, we employ the following data collection procedure: \textbf{Step 1} Each participant is educated in the capabilities of our turtle language and the editor environment by completing an introductory set of exercises. \textbf{Step 2} The participant is instructed to draw a trajectory on a standard canvas. Let $\ensuremath{t}^{(i)}$ represent this visual specification of the intended behaviour of the program, as drawn by participant $i$. \textbf{Step 3} The participant is instructed to compose a program in the turtle language which follows the drawn trajectory as closely as possible. Let $\ensuremath{p}^{(i)}$ represent the matching program from participant $i$. We record each step in the participant's programming process as a formal editing command, cf. Section~\ref{sec:editing}. We represent the complete program $\ensuremath{p}^{(i)}$ as the sequence of editing commands $\{\ensuremath{c}^{(i)}_1,\dots,\ensuremath{c}^{(i)}_{|\ensuremath{p}^{(i)}|}\}$ which produce $\ensuremath{p}^{(i)}$ if performed in the editor beginning at an empty workspace.
\begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.14\textwidth]{drawing1-eps-converted-to.pdf} & \includegraphics[width=0.14\textwidth]{drawing2-eps-converted-to.pdf} & \includegraphics[width=0.14\textwidth]{drawing3-eps-converted-to.pdf} \\ \includegraphics[width=0.14\textwidth]{drawing4-eps-converted-to.pdf} & \includegraphics[width=0.14\textwidth]{drawing5-eps-converted-to.pdf} & \includegraphics[width=0.14\textwidth]{drawing6-eps-converted-to.pdf} \end{tabular} \end{center} \caption{Sample trajectories drawn by participants.} \label{fig:sample} \end{figure} Now consider each of these programs with the final $k$ commands removed, that is $\ensuremath{p}_{-k}^{(i)} = \{\ensuremath{c}^{(i)}_1,\dots,\ensuremath{c}^{(i)}_{|\ensuremath{p}^{(i)}|-k}\}$. Letting $\mathcal{A}$ denote the search algorithm at hand, our interest is in evaluating $\hat{\ensuremath{p}}^{(i)} = \mathcal{A}(\ensuremath{p}_{-k}^{(i)},\ensuremath{t}^{(i)})$, the $k$-ahead completion produced by our search algorithm. \begin{table}[b] \begin{center} \def\arraystretch{2}% \begin{tabular}{p{22mm} | l | l} \hline \textbf{Metric} & \textbf{Notation} & \textbf{Equation} \\ \hline Accuracy & $Acc_k(p^{(i)},t^{(i)})$ & $ \ensuremath{\mathbb{I}}[\hat{\ensuremath{p}}^{(i)} \in SemEq(p^{(i)})]$ \\ Hausdorff Error & $Err_k(p^{(i)}, t^{(i)})$ & $d_H(t^{(i)},I(\hat{\ensuremath{p}}^{(i)}))$ \\ Relative Error Reduction & $\Delta_k(p^{(i)},t^{(i)})$ & $\frac{Err_k(p_{-k}^{(i)}, t^{(i)}) - Err_k(\hat{\ensuremath{p}}^{(i)}, t^{(i)})}{Err_k(p_{-k}^{(i)}, t^{(i)})}$ \\ \hline \end{tabular} \end{center} \caption{$k$-ahead metrics}\label{table:metrics} \end{table} As our search procedure does not discriminate between syntactically differing programs which produce the same trajectory (share the same semantics), we consider programs up to semantic equivalence.
We define the semantic equivalence class of $p \in \ensuremath{\mathcal{P}}$ as $SemEq(p) = \{ p' \in \ensuremath{\mathcal{P}} : I(p) = I(p') \}$. We then define our $k$-ahead accuracy in terms of our procedure achieving any semantically equivalent program to the target. Unfortunately, if we only consider $\hat{p}^{(i)}$ correct when $\hat{p}^{(i)} \in SemEq(p^{(i)})$, we overlook the important case where $I(\hat{\ensuremath{p}}^{(i)})$ is a better fit for $t^{(i)}$ than $I(\ensuremath{p}^{(i)})$. This may happen, for example, if the participant has little programming experience and writes $\ensuremath{p}^{(i)}$ incorrectly. A softer measure of accuracy is simply the Hausdorff distance between $I(\hat{p}^{(i)})$ and the user's trajectory, though this does not consider scaling. Still softer is the relative reduction in Hausdorff distance from $\ensuremath{p}_{-k}^{(i)}$ to $\hat{\ensuremath{p}}^{(i)}$. \textbf{Table}~\ref{table:metrics} summarizes these $k$-ahead metrics. We evaluate our method against all metrics. We also report runtime. \begin{figure*}[t] \begin{center} \begin{tabular}{ccc} \resizebox{0.30\textwidth}{!}{\input{fig-c.tex}} & \resizebox{0.30\textwidth}{!}{\input{fig-a.tex}} & \resizebox{0.30\textwidth}{!}{\input{fig-e.tex}} \\ (a) Mean $Acc_k$ vs. $k$ & (b) Mean $Err_k$ vs. $k$ & (c) Mean $\Delta_k$ vs. $k$ \\ \end{tabular} \end{center} \caption{Performance of each algorithm against $k$. (a) mean accuracy (b) mean Hausdorff distance (c) the relative error reduction.} \label{fig:three} \end{figure*} \section{Related Work} The problem of generating a computer program from some specification has been studied since the beginnings of AI. Relevant work here falls into the two broad camps of \emph{synthesis}, where an explicit program is generated, and \emph{induction}, where a latent representation may be used to generate input-output pairs \cite{devlin2017robustfill}.
In the inductive setting, there is a large literature on algorithms which correct a sketch. \cite{patidar2017correcting} uses recurrent neural networks to model conditional sketch generation, an image-to-image transformation problem. \cite{lake15science} uses probabilistic program induction to perform one-shot modeling. In the program synthesis community, there is a large body of work synthesizing programs from logical specifications \cite{srinivasan2015synthesis}. Synthesis settings can further be distinguished by whether a given specification is \emph{partial} or \emph{total}. Abstractly, we may think of synthesis as attempting to infer some $f \in \mathcal{F}$ where $\mathcal{F}$ is a family of programs taking inputs from $\mathcal{X}$ and producing outputs in $\mathcal{Y}$. A partial specification comes in the form of a set of input-output pairs $\{(x_1,y_1),\dots,(x_n,y_n)\}$ whose input items are a proper subset of $\mathcal{X}$. This induces a problem of inference, as the synthesis algorithm must settle on some choice of output for unobserved inputs. By way of contrast, a total specification fully specifies $f$ as a function. But even if we know the specification, we may have difficulty finding a matching program. The total specification synthesis problem is a problem of search rather than inference. A further distinction can be made between settings where the specification is \emph{noisy} or \emph{noiseless}. More recent work such as \cite{devlin2017robustfill} and \cite{murray2016probabilistic} naturally handles noisy specifications owing to its use of neural models. \cite{gaunt2016terpret} and \cite{riedel2016programming} allow the user to sketch a partial program in addition to a specification through input/output pairs. Our method is distinct in that the user provides not a sketch but a partial program which may contain errors. That is, our synthesis is not constrained by the given partial program.
\section{Results} We compute $k$-ahead completions for $k=1,\ldots,6$. To ensure practical run times, we allot a state budget of $b = 50,000$ programs for each algorithm and $k$. Likewise, we enforce a static horizon of $C = 6$ so that $\textrm{cost}(\ensuremath{p}^{(i)}, \hat{\ensuremath{p}}^{(i)}) \le 6$ for all completions. \textbf{Figure}~\ref{fig:three} (a) plots mean $Acc_k$ for each algorithm against $k$. For small $k$ such as $k=1$, IDPS will always recover $\ensuremath{p}^{(i)}$ unless a better fit is nearby, while sampling search manages a less reliable 63\% recovery rate. Under our state budget constraint, IDPS's performance decreases almost monotonically because it exhausts its budget before exploring deep states. By contrast, sampling search distributes its exploration equally across all depths, and consequently scales well to large $k$. Viewed another way, the best IDPS can do for small $k$ is recover the original trajectory. \textbf{Figure}~\ref{fig:three} (b) and \textbf{Figure}~\ref{fig:three} (c), which plot mean $Err_k$ and mean $\Delta_k$ respectively against $k$, reflect that sampling search has greater opportunity to explore deeper and more structured programs. The dashed line of \textbf{Figure}~\ref{fig:three} (b) represents the mean Hausdorff distances of the user's true completion $p^{(i)}$, that is, the mean of $d_H(I(p^{(i)}),t^{(i)})$ across our corpus. As we regard the trajectory $t^{(i)}$ as the true label, we can see that for smaller values of $k$ our algorithms improved upon the user's completed program, $p^{(i)}$. For small lookaheads, this suggests that users may benefit from viewing programs returned by our synthesis method. Nonuniform sampling search dominates the sampling search regime, outperforming uniform sampling in $Err_k$ and $\Delta_k$ for all $k$. 
By localizing block targets in a manner statistically more consistent with observed human behavior, the algorithm restricts its search to programs produced by high-level groupings of commands, such as adding a turn block, then immediately connecting it to the penultimate block, and then changing its angle parameter. This appears to produce more reliable completion candidates. It is worth noting that both uniform and nonuniform sampling search converge on better-fit trajectories when faced with idiosyncratic programming techniques. For example, one participant wrote a large loop body and only added the loop itself as a last step, while most other programmers added the loop first. Under this setup, the block being connected inside the loop body is not local (in fact it is as distant as possible), so the nonuniform algorithm is unlikely to sample an essential command. Nevertheless, the algorithm produced a better Hausdorff fit through an altogether different program, an indication of robustness in our method. \begin{figure}[t] \begin{center} \resizebox{0.25\textwidth}{!}{\input{fig-f.tex}} \end{center} \vspace*{-5mm} \caption{Mean running time (s) per algorithm} \label{fig:runtime} \end{figure} \textbf{Figure}~\ref{fig:runtime} shows that sampling search is consistently faster than IDPS, the nonuniform (resp. uniform) variant requiring an average of 5.97 minutes (resp. 4.82 minutes) for convergence as opposed to 9.83 minutes. That sampling search is nearly twice as fast as IDPS is not surprising. For both the uniform and nonuniform variant, sampling search is bottlenecked by the quadratic Hausdorff computation, while IDPS, on top of the Hausdorff computation, must expand many unproductive states. A qualitative look at some specific results can offer further insight. For example, consider item 18 from our corpus as presented in \textbf{Figure}~\ref{fig:eh15}.
Here we show in the top row the solution obtained by the uniform algorithm for lookaheads $k = 3$ and $k = 4$, and in the bottom row the user-drawn trajectory and the nonuniform solution, which was the same in both cases of $k$. As we can see, the uniform algorithm returns a solution which ``cheats'' with respect to our metric, constructing a trajectory which covers the region without really approximating it. The Hausdorff distances of the $k = 3$ and $k = 4$ solutions in the top row are 89 and 90, respectively. By way of contrast, the nonuniform algorithm found the user's solution, which had a distance to the specification trajectory of 9. We conjecture that two factors contributed to the poor performance of the uniform algorithm. First, intuitively, there are a small number of ways to correctly complete the program, but a vast number of ways to construct spirals of the form found by the uniform algorithm. Second, the user's partial program, the starting point of search, was characterized by a large number of \texttt{repeat} blocks, from which spiral solutions of this form were plentiful in the search space, since attachments that nest \texttt{repeat}s tend to draw such trajectories. The nonuniform solution, modeling as it does the attention or focus of the programmer, overcame these shortcomings. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.14\textwidth]{eh-15-uniform-n-gram-synth-traj-3} & \includegraphics[width=0.14\textwidth]{eh-15-uniform-n-gram-synth-traj-4} \\ $k=3$, uniform & $k=4$, uniform \\ \includegraphics[width=0.14\textwidth]{eh-15-nonuniform-n-gram-synth-traj-3} & \includegraphics[width=0.14\textwidth]{eh-15-drawing} \\ $k=3,4$, nonuniform & user trajectory \\ \end{tabular} \end{center} \caption{Trajectories drawn by participants and uniform, nonuniform solutions} \label{fig:eh15} \end{figure} Nevertheless, the \texttt{nonuniform} algorithm did not perform strictly better.
For example, item 3 in our corpus is presented in \textbf{Figure}~\ref{fig:av3}. Here, the uniform algorithm returned a reasonable solution, while the nonuniform algorithm seemed to lose its way. This example is notable for another reason: it demonstrates how the Hausdorff distance may not encode all features of the user's intention. We see here that the program returned by the \texttt{uniform} algorithm adds a bend to the line inside the square, while the user's trajectory suggests a straight line. This raises the following question: is the angle of the line within the square the more salient feature of the user's intention, or its straightness? The Hausdorff distance ``fudges'' by adding a curve, while it could be argued the trajectory suggests that the particular angle is less important than the fact that the line is straight. \begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.14\textwidth]{av-3-uniform-n-gram-synth-traj-6} & \includegraphics[width=0.14\textwidth]{av-3-nonuniform-n-gram-synth-traj-6} & \includegraphics[width=0.14\textwidth]{av-3-drawing} \\ $k=6$, uniform & $k=6$, nonuniform & user trajectory \end{tabular} \end{center} \caption{Trajectories drawn by participants and uniform, nonuniform solutions} \label{fig:av3} \end{figure} As the quantitative results show, however, the \texttt{nonuniform} algorithm fared significantly better overall. We wish to emphasize the conceptual significance of this better performance. From a statistical point of view, we would argue that our corpus is not drawn from the ``true distribution'' for our task, namely some kind of distribution arising from expert performance. So we may wonder whether such a distribution would even be helpful in guiding a search algorithm. This can be framed as a tradeoff: if we hew too closely to the observed behavior of users struggling to complete a task, we could overfit their idiosyncrasies and blind spots, and our algorithm could inherit their limitations.
If we ignore their focus entirely and do not guide our search by some knowledge of how programs are written, our search would be too uninformed to perform well. What our succession of models suggests is that there is still enough statistical signal even in the work of novice programmers to guide and improve search. \section{Conclusion} We have formulated program synthesis with visual specification in the frame of classical AI search and have proposed two algorithms. Sampling methods produce improved solutions and scale more readily to larger problem instances. A sampling method informed by the attention and distribution of behaviors observed from novice programmers leads to further improvement. We demonstrated that these algorithms can outperform humans at their own intended tasks for smaller lookahead values, suggesting a practical benefit for a synthesis method that can complete a user's programs. \section*{Acknowledgments} We would like to thank Scott Alfeld for insightful comments and discussion. This work is supported by NSF grant 1423237. \bibliographystyle{plain}
\begin{document} \begin{titlepage} \begin{center} {\Large \bf Dynamical off-equilibrium scaling across magnetic \\[.25cm] first-order phase transitions} \end{center} \vskip 2.0 cm \centerline{{\bf Stefano Scopa}$^{a,}$\footnote{email: stefano.scopa@univ-lorraine.fr, ORCid: 0000-0001-7638-8804} and {\bf Sascha Wald}$^{b,}$\footnote{email: swald@sissa.it, ORCid: 0000-0003-1013-2130} } \vskip 0.5 cm \begin{center} $^a$ Laboratoire de Physique et Chimie Th\'eoriques, UMR CNRS 7019, Universit\'e de Lorraine BP 70239, F-54506 Vandoeuvre-l\`es-Nancy Cedex, France \\ \vspace{0.5cm} $^b$ SISSA - International School for Advanced Studies and INFN,\\ via Bonomea 265, I--34136 Trieste, Italy \vskip 0.5 cm \today \end{center} \vskip 1.0 cm \begin{abstract} We investigate the off-equilibrium dynamics of a classical spin system with $O(n)$ symmetry in $2< D <4$ spatial dimensions and in the limit $n\to \infty$. The system is set up in an ordered equilibrium state and is subsequently driven out of equilibrium by slowly varying the external magnetic field $h$ across the transition line $h_c=0$ at fixed temperature $T\leq T_c$. We distinguish the cases $T = T_c$ where the magnetic transition is continuous and $T<T_c$ where the transition is discontinuous.
In the former case, we apply a standard Kibble-Zurek approach to describe the non-equilibrium scaling and formally compute the correlation functions and scaling relations. For the discontinuous transition we develop a scaling theory which builds on the coherence length rather than the correlation length since the latter remains finite for all times. Finally, we derive the off-equilibrium scaling relations for the hysteresis loop area during a round-trip protocol that takes the system across its phase transition and back. Remarkably, our results are valid beyond the large-$n$ limit. \end{abstract} \vfill \end{titlepage} \onecolumn\tableofcontents \setcounter{footnote}{0} \section{Introduction} The study of equilibrium statistical mechanics and especially of critical phenomena has led to a refined physical understanding of complex, interacting systems and their collective behaviour \cite{Amit84,Card96,Sach99,Nish11,Wipf13}. In the last decades, the study of equilibration processes and out-of-equilibrium dynamics has gained significant attention in order to complement the equilibrium studies, to understand how systems behave far from equilibrium and how thermalisation may occur \cite{Henk09,Henk10,Cugl03,Tauber}. A standard way to produce non-equilibrium situations is by studying {\it quench protocols} where either an external parameter (e.g. the temperature) or a Hamiltonian parameter (e.g. the interaction strength) is varied in time across a \textit{phase transition}. By means of such protocols, one is able to drive a system through different regions of the phase diagram and to investigate relaxation and thermalisation properties \cite{Cugl03,Henk09,Henk10,Stru78,Bray94,Cates00,Godr02,Paes03,Breu07,Scha14,Schmalian15,Mar15,Chio17,Schmalian14,Wald16,Wald18a,Wald18b,Gar04}. Most of the dedicated literature refers to quench protocols across continuous phase transitions, see e.g.
\cite{Godr02,Paes03,Wald16,Wald18a,Godr00b,Godr13} but this list is far from being exhaustive. If the driving across such a transition is performed slowly (in a sense that will be specified later on), these quench protocols are described by the {\it Kibble-Zurek} ({\sc kz}) {\it scaling theory} \cite{Kibble,Kibble2,Kibble3,Zurek-85,Zurek-96}, which explains the formation of topological defects occurring after the quench. The main idea of this approach is illustrated in figure~\ref{fig:KZM}. The {\sc kz} scaling theory has been tested in a variety of experiments \cite{Donatello-16,Lamporesi-13,Pyka-13,Corman-14,Cui-16} where it has been shown to describe the off-equilibrium dynamics near transitions well and especially to predict experimental data for the density of topological defects accurately. In fact, the {\sc kz} theory has proven itself to be of great use for the description of non-equilibrium properties, especially since, in general, more involved methods are required to analyse such scenarios \cite{Cala02,Godr00b,Wald18c,Cala05}. Therefore the {\sc kz} theory has been {\it inter alia} extended to quantum phase transitions \cite{Zurek-05,QKZ, QKZ2,qkz3,rev-off-eq1,rev-off-eq2}, where experiments with ultracold atoms in optical lattices provide an ideal platform for applications of the theory \cite{KZ-cold_atoms,KZ-cold_atoms2,DelCampo-11,Scopa, Land16,Hruby18,Landini18}. \begin{figure}[ht] \centering \includegraphics[width=.6\textwidth]{KZM1.pdf} \caption{\small Qualitative illustration of the {\sc kz} mechanism. We prepare the system initially in equilibrium at a certain temperature $T_i>T_c$ in the disordered phase. We then vary the temperature in time at a finite rate $f$ across the phase boundary up to a final value $T_f<T_c$. As long as the relaxation time satisfies $t_r < 1/f$, the system adapts to the temperature variation and adiabatically follows the quench protocol (yellow regions).
In the vicinity of the transition point, the divergence of $t_r$ leads to a time $\tau$ where $t_r = 1/f$, after which the system cannot adjust to the temperature variation anymore and falls collectively out of equilibrium (blue region). The system remains frozen in this region until $t_r$ decays again in the ordered phase. \textcolor{black}{This phenomenological picture can be better understood by means of \textit{finite time scaling} \cite{Zhong2014,Zhong2010,Zhong2016}. }} \label{fig:KZM} \end{figure} The aim of this work is instead to extend the {\sc kz} scaling theory to quench protocols across a {\it first-order transition} ({\sc fot}). To do so, we consider a generic spin system which is known to show a phase transition from a magnetic {\it down} to {\it up} order, driven by an external magnetic field $h$ at a fixed temperature $T\leq T_c$, see figure~\ref{fig:phase}. The nature of this transition depends on the temperature at which it is driven. \begin{itemize} \item At $T=T_c$ the transition is {\it continuous} and the standard {\sc kz} scaling theory applies. \item For $T<T_c$ the transition is {\it discontinuous} and therefore the system correlation length remains finite at all times. \end{itemize} In the latter case, a {\it new} description is needed since the {\sc kz} theory is built on the fact that the correlation length $\xi$ diverges at the critical point. Nevertheless, an equilibrium scaling theory has been developed for {\sc fot}s \cite{Nienhuis-75,Fisher-82,Privman-83,Binder} by replacing the correlation length with the {\it coherence length} $\xi_h$, which corresponds to the typical domain size of ordered clusters \textcolor{black}{in the minimal-energy configuration. As soon as the discontinuous transition is approached, the two ordered phases become energetically indistinguishable, leading to a divergence of $\xi_h$.
This divergence is physically reflected in magnetic systems by the long-range order arising in the spin-spin correlation function which asymptotically ($|\vec{x}-\vec{x}^{\prime}|\rightarrow\infty$) approaches the value of the squared magnetisation of the system \cite{Fisher-82}. Notice that the correlation length $\xi$ is instead defined by the connected part of the spin-spin correlator $G(\vec{x}, \vec{x}^{\prime}) \propto \exp\left(-|\vec{x}-\vec{x}^{\prime}|/\xi\right)$ and remains finite in the non-critical regime.} In this paper, we shall naturally extend this {\sc fot} scaling theory to the non-equilibrium case {\it \`a la} {\sc kz} and apply it to magnetic quench protocols in the ordered region as shown in figure~\ref{fig:phase} (green). \textcolor{black}{This treatment is complementary to previous renormalisation group studies \cite{Zhong1995,Zhong2005,Zhong2006} and recent numerical evidence of dynamical scaling across {\sc fot}s \cite{Zhong2018}.} \begin{figure}[ht] \centering \includegraphics[width=.6\textwidth]{phasediagram-new.pdf} \caption{\small Schematic representation of the phase diagram for a generic spin system. The red dot indicates a continuous phase transition while the green line indicates a discontinuous transition. The corresponding arrows show qualitatively the quench protocols we shall study.} \label{fig:phase} \end{figure} A theoretical framework for the description of generic spin systems is provided by the $O(n)$ model which is routinely cast as a field theory with the $n$-component vector field $\vec{\phi}(\vec{x})$ of unit norm and the action (see e.g. \cite{ZinnJustin}) \begin{equation}\label{eq:S4} S[\vec{\phi}] = \frac{n}{2}\; \int \D \vec{x}\left[ (\nabla \vec{\phi}(\vec{x}))^2 + r\vec{\phi}^2(\vec{x}) +u (\vec{\phi}^2(\vec{x}))^2 -2\vec{h}\, \vec{\phi}(\vec{x})\right] \end{equation} in $2<D<4$ spatial dimensions.
Here, $u>0$ is a coupling constant, $\vec{h}$ describes the \textit{external magnetic field} and $r=r(T)$ is the thermal coupling constant. This model includes the celebrated Ising model ($n=1$), the {\sc xy} model ($n=2$) and the Heisenberg model ($n=3$) as special cases \cite{Stanley-O(N)}. In the particular case where $n\to \infty$, the bulk critical behaviour reduces to the one of the spherical model \cite{Stanley} and allows analytic investigation \cite{Baxt82,Oliv06,Vojta96,Berl52,Lewi52,Wald15,Henk84a}. The analyticity in the large-$n$ limit arises due to the central limit theorem which implies that self-averaged $O(n)$ symmetric quantities are normally distributed for $n\to\infty$. This allows us to replace \cite{LargeN-rev} \begin{equation} (\vec{\phi}^2(\vec{x}))^2 \to \left<\vec{\phi}^2(\vec{x})\right>\vec{\phi}^2(\vec{x}) \end{equation} and cast the action in eq~(\ref{eq:S4}) into the quadratic form \begin{align} S[\vec{\phi}]= \frac{n}{2}\! \int\! \D \vec{x} \big[ (\nabla \vec{\phi}(\vec{x}))^2\! +\! m^2\vec{\phi}^2(\vec{x}) \! -\!2\vec{h} \, \vec{\phi}(\vec{x})\big] \label{eq:Slargen} \end{align} with the {\it effective mass} \begin{equation} m^2 = r +u\left<\vec{\phi}^2(\vec{x})\right>. \label{eq:meff} \end{equation} For this specific model, the $O(n)$ model at large $n$, we shall derive the structure of the off-equilibrium scaling functions of the magnetisation and of the transverse correlation functions. The latter are self-consistently related through an equation of state which describes, roughly speaking, the time-evolution of the magnetisation as a rotation generated by the dissipation of the initial equilibrium magnetisation into the transverse field modes (see section~\ref{ssec:t<tc} for further details). We shall also provide a scaling prediction for the dissipated magnetic work $W$ during a round-trip protocol that takes the system across the {\sc fot} and back.
If we call the quench time scale $t_s$, one obtains in $D=3$ spatial dimensions \begin{align} W \propto \begin{cases} t_s^{-2/3}, \hspace{.25cm} \text{for} \hspace{.25cm} T=T_c\\[.5cm] t_s^{-1/2}, \hspace{.35cm}\text{for} \hspace{.25cm} T<T_c \end{cases} \ . \end{align} Remarkably, these results apply beyond the large-$n$ limit, as can be seen by comparing the exponent $2/3\approx 0.66$ with the numerical results of {\it Pelissetto and Vicari} \cite{Vicari-off-16}. To complete our analysis, we shall numerically investigate the dynamics in the large-$n$ limit and explicitly test our scaling predictions. The paper is organised as follows: after setting up the model in this introduction, we turn to its general dynamical description in section \ref{sec:dyn}. Here, we specify the magnetic quench protocol that we shall study and we determine the set of dynamical equations that describes the $O(n)$ model self-consistently for $n\to\infty$. In section \ref{sec:scaling} we then derive the dynamical scaling theory for the magnetic quench. We explicitly distinguish between (i) $T=T_c$ where we verify the {\sc kz} scaling and (ii) $T<T_c$ where we develop a new out-of-equilibrium scaling theory. Finally, we apply these results in section \ref{sec:hys} to a round-trip protocol for which we then derive the scaling behaviour of the hysteresis area and of the magnetic work performed over the cycle. We then briefly summarise our results and conclude the paper. Several technical aspects are described in the appendix. \section{Dynamical description of the $O(n)$ model}\label{sec:dyn} We want to describe the dynamics of the system~(\ref{eq:Slargen}) at and below the critical temperature $T_c$ when the external magnetic field is varied in time across the value $h_c = 0$. We choose the {\it linear}\footnote{The extension to non-linear protocols is straightforward, see e.g.
\cite{Sondhi-12}.} ramp sketched in figure~\ref{fig:ramp} along a fixed direction $\vec{e}_1$ which takes the system from a {\it down} order at initial time $t_i<0$ to an {\it up} order at the final time $t_f>0$, i.e. \begin{equation}\label{eq:quench} \vec{h}(t)= \frac{t \ \vec{e}_{1}}{t_f-t_i} = \frac{t}{t_s}\, \vec{e}_{1} \ . \end{equation} Here, $t_s=t_f-t_i$ defines the {\it time-scale of the quench}. \begin{figure}[ht] \centering \includegraphics[width=.4\textwidth]{protocol.pdf} \caption{\small Schematic representation of the magnetic quench protocol~(\ref{eq:quench}). At initial time $t_i<0$ the system is in thermal equilibrium with the external magnetic field $h(t_i)<0$. This field is then linearly driven through the magnetic transition point $h_c = 0$ on a time scale $t_s$ until it reaches its final value $h(t_f) > 0$.} \label{fig:ramp} \end{figure} Within this convention, the critical value $h_c = 0$ is reached at time $t=0$, which is merely a convenient choice. The dynamics of the components of the vector field is given by a Langevin equation \begin{equation}\label{dynamics} \partial_t \phi_{a} (\vec{x},t) = -\frac{\delta}{\delta\phi_{a}} S[\vec{\phi}] + \zeta_a(\vec{x},t) \ , \end{equation} where $\zeta_a(\vec{x},t)$ is a Gaussian white noise with zero mean, i.e.\footnote{The damping rate is set to unity here. The variance is set to $2$ in order for the long-time limit of the two-point function to correctly reproduce the equilibrium value when $\vec{h}(t)$ is constant.} \begin{align}\label{noise} \big<\zeta_{a}(\vec{x},t)\big> &= 0\ ,\\ \big<\zeta_a(\vec{x},t)\zeta_b(\vec{y},t')\big> &= 2\delta_{a,b} \delta(\vec{x}-\vec{y}) \delta(t-t') \ . \end{align} The dynamics of the system is more involved than the one of a standard Gaussian theory due to eq~(\ref{eq:meff}), which has to be taken into account self-consistently.
To do so, we first introduce the time-dependent magnetisation \begin{equation} \big<\phi_{a}(\vec{x},t)\big>= \delta_{1,a} \, M(t) \end{equation} as order parameter of the transition. Moreover, we define the longitudinal and orthogonal (connected) correlation functions as \begin{align} G_{||}(x-y,t) &\equiv \left< \Big(\phi_{1}(\vec{x},t)-\braket{\phi_{1}(\vec{x},t)}\Big)\Big(\phi_{1}(\vec{y},t) -\braket{\phi_{1}(\vec{y},t)}\Big)\right>\ ,\\[.25cm] G_{\perp}(x-y,t)&\equiv \left< \Big(\phi_{a}(\vec{x},t)-\braket{\phi_{a}(\vec{x},t)}\Big)\Big(\phi_{a}(\vec{y},t) -\braket{\phi_{a}(\vec{y},t)}\Big)\right>, \quad a>1 \ . \end{align} Due to translational invariance, the dynamics is straightforwardly described in Fourier space and it is easy to show that it is governed by the following set of equations \cite{Mazenko-85} \begin{align}\label{dyn-obs} \frac{\D}{\D t}M(t)&=-m^2(t) M(t) + h(t)\ , \\[.25cm] \partial_t G_{\perp}(\vec{q},t)&=-2(m^2(t)+q^2) G_{\perp}(\vec{q},t) +2 \ , \label{dyn-obsG} \end{align} where the time-dependent effective mass $m(t)$ is defined through the {\it equation of state} \begin{equation}\label{time-eq-state} m^2(t)= r+u\bigg[M^2(t) +\int_q G_{\perp}({\bf q},t)\bigg]. \end{equation} Here, we used the shorthand $\int_q = \int^\Lambda \D {\bf q}/(2\pi)^D $ with the momentum cut-off $\Lambda$. The dynamical eqs~(\ref{dyn-obs},\ref{dyn-obsG}) can be formally solved as follows \begin{subequations} \begin{align}\label{eq:M-t} M(t)&=M_0\, \exp\bigg[-\int_{t_i}^t \D u\, m^2(u)\bigg] + \int_{t_i}^t \D u \, h(u)\, \exp\bigg[-\int_{u}^t \D s \, m^2(s)\bigg], \\[.25cm] \label{eq:GT-t} G_{\perp}(\vec{q},t)& =2 \int_{t_i}^t \D u \exp\bigg[-2\int_{u}^t \D s \, \big(\vec{q}^2+m^2(s)\big)\bigg] \ , \end{align} \end{subequations} with the initial equilibrium magnetisation $M_0$. In the following, we shall use these formal solutions (\ref{eq:M-t},\ref{eq:GT-t}) together with the equation of state (\ref{time-eq-state}) in order to describe the model out of equilibrium.
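To make the self-consistent structure of eqs~(\ref{dyn-obs},\ref{dyn-obsG}) and (\ref{time-eq-state}) concrete, the following Python sketch integrates them with an explicit Euler step on a radial momentum grid in $D=3$. This is only an illustration, not the paper's actual scheme (which is described in appendix \ref{numerics}): the zero initial condition for $G_\perp$, the grid sizes, the time step and the parameter values are our own illustrative choices.

```python
import numpy as np

# Illustrative Euler integration of the self-consistent large-n dynamics:
#   dM/dt   = -m^2(t) M + h(t),           with the ramp h(t) = t/t_s
#   dG_q/dt = -2 (m^2(t) + q^2) G_q + 2
#   m^2(t)  = r + u [ M^2 + \int_q G_q ]  (equation of state, D = 3)
# All numerical parameters below are guesses, not the paper's scheme.
def evolve(t_s=1.0, r=-1.0, u=1.0, Lam=1.0, Nq=200, dt=1e-3, t_i=-2.0, t_f=2.0):
    q = np.linspace(Lam / Nq, Lam, Nq)   # radial momentum grid up to the cut-off
    dq = q[1] - q[0]
    G = np.zeros_like(q)                 # crude initial condition; transients decay
    M = -1.0                             # "down"-ordered initial magnetisation
    t = t_i
    while t < t_f:
        S = np.sum(q**2 * G) * dq / (2 * np.pi**2)   # \int d^3q/(2pi)^3 G_perp
        m2 = r + u * (M**2 + S)                      # equation of state
        M += dt * (-m2 * M + t / t_s)                # eq (dyn-obs)
        G += dt * (-2.0 * (m2 + q**2) * G + 2.0)     # eq (dyn-obsG)
        t += dt
    return M

# For these illustrative parameters the ramp has crossed h = 0 and the
# magnetisation has flipped sign by the final time:
print(evolve() > 0)
```

The same loop, run for a sequence of quench times $t_s$ and with the observables recorded along the way, is all that is needed to produce collapse plots of the kind shown in the figures below.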
\section{Dynamical scaling theory across phase transitions} \label{sec:scaling} In this section we develop a scaling theory {\it \`a la} {\sc kz} for {\sc fot}s in order to describe the magnetic quench specified in eq~(\ref{eq:quench}). We shall, however, first start with the instructive case $T=T_c$ in section~\ref{ssec:tc} in order to illustrate the standard {\sc kz} theory for continuous transitions. We then turn to the case $T<T_c$ in section~\ref{ssec:t<tc} where we shall develop the non-equilibrium scaling theory for {\sc fot}s. Along with our scaling analysis, we shall provide numerical solutions of the dynamical equations of the $O(\infty)$ model $(\ref{dyn-obs},\ref{dyn-obsG},\ref{time-eq-state})$ and we shall use these results to check our scaling predictions. The numerical calculations will be carried out in $D=3$ spatial dimensions and with the normalisation $u=1$, which implies $r_c \simeq -0.051$ \cite{Mazenko-85,Sondhi-12}. For further details on the numerical method, see appendix \ref{numerics}. \subsection{Scaling theory for the continuous transition ($T=T_c$)} \label{ssec:tc} In this case the standard {\sc kz} scaling theory \cite{Kibble,Zurek-85} describes the universal scaling behaviour of the dynamics driven by the protocol~(\ref{eq:quench}). First, we have to express the correlation length $\xi$ close to the critical point $h_c=0$ as a power-law of the control parameter \cite{Sondhi-12} \begin{equation} \xi(t) \sim |h(t)|^{-\nu_h}, \; \; h\rightarrow 0 \label{eq:corrlength} \end{equation} with the equilibrium critical exponent\footnote{The {\sc rg} critical exponent for the $O(\infty)$ model is $\eta=0$, see e.g. \cite{ZinnJustin-VectorN3,Berl52}.} \begin{equation} \nu_h=\frac{1}{d_h}=\frac{2}{D+2} \ .
\end{equation} \noindent From $\xi$ in eq~(\ref{eq:corrlength}) we can define the typical time scale on which the system adapts to the variation of the magnetic field via $t_{\text{ad}}(t)= \xi/\dot{\xi}$ and compare it to the relaxation time associated with $\xi$ via $t_{\text{r}}(t) \sim \xi^z$, where $z=2$ is the dynamical critical exponent \cite{Cardy}. These time scales compete during the quench as the system tries to relax towards equilibrium {\it and} to follow the quench protocol simultaneously. The Ansatz that underlies the {\sc kz} approximation is that the system manages to equilibrate and to follow the quench adiabatically as long as $t_{\text{r}}< t_{\text{ad}}$, compare figure~\ref{fig:KZM}. In the opposite situation, $t_{\text{r}}> t_{\text{ad}}$, the system cannot adapt to external changes anymore and is assumed to freeze out. The crossover time $\tau$ at which the system falls collectively out of equilibrium is then given by \begin{equation}\label{KZ-t} t_{\text{ad}}(\tau)\stackrel{!}{=}t_{\text{r}}(\tau) \; \Rightarrow \; \tau=t_s^{\frac{z\nu_h}{1+z\nu_h}} \ . \end{equation} From this, we define, \textcolor{black}{in analogy to the equilibrium correlation length}, a \textit{characteristic length scale} $\ell$ via the dynamical exponent $z$, i.e. \begin{equation}\label{ell} \ell=\tau^{1/z}=t_s^{\frac{\nu_h}{1+z\nu_h}} \ . \end{equation} This length scale encodes the characteristic distance on which the system is correlated and thus allows us to study the non-equilibrium regime in a scaling limit. In this {\sc kz} scaling regime, the quench is assumed to be slow, $t_{s}\to \infty$, while $t/\tau$ and $q\, \ell$ are kept constant.
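The algebra behind eqs~(\ref{KZ-t}) and (\ref{ell}) is easily verified with exact rational arithmetic. The helper below is a sketch (the function name is ours) that reproduces $\tau \sim t_s^{4/9}$ and $\ell \sim t_s^{2/9}$ for $D=3$, $z=2$ and $\nu_h = 2/(D+2)$:

```python
from fractions import Fraction

def kz_exponents(D, z=Fraction(2)):
    """Exponents a, b in tau = t_s^a (eq KZ-t) and ell = t_s^b (eq ell)
    at T = T_c, using nu_h = 2/(D+2) for the O(infinity) model (eta = 0)."""
    nu_h = Fraction(2, D + 2)
    a = z * nu_h / (1 + z * nu_h)   # from the freeze-out condition t_ad = t_r
    b = a / z                       # ell = tau^(1/z)
    return a, b

print(kz_exponents(3))  # (Fraction(4, 9), Fraction(2, 9))
```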
It is well-known that the time-dependent correlation functions exhibit dynamical scaling behaviour for $h\rightarrow 0$ \cite{Sondhi-12} \begin{subequations}\label{KZscaling} \begin{align}\label{KZscaling-a} M(t) \sim \ell^{-d_{\phi}}\; \mathcal{M}(t/\tau)\ ,\\[.25cm] G_{\perp}(\vec{q},t) \sim \ell^2 \; \mathcal{G}_{\perp}(\vec{q}\, \ell, t/\tau) \ ,\label{KZscaling-b} \end{align} \end{subequations} where $d_{\phi}=(D-2)/2$ is the scaling dimension of the order parameter $M$ and $\mathcal{M}(\cdot)$, $\mathcal{G}_{\perp}(\cdot)$ are generic scaling functions \cite{Henk10}. \begin{figure}[ht] \centering \includegraphics[width=.485\textwidth]{magn_Tc.pdf} \ \includegraphics[width=.485\textwidth]{magnR_Tc_ins_col.pdf} \caption{\small Numerical analysis of the dynamical magnetisation in $D=3$ spatial dimensions and at the critical temperature $T=T_c$. \underline{Left panel}: magnetisation as a function of time for different quench times $t_s$, see eq~(\ref{KZ-t}). \underline{Right panel}: data collapse and dynamical scaling function for the magnetisation (compare eq~$\eqref{KZscaling-a}$). \textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times.}} \label{fig:num_M_rc} \end{figure} We briefly comment that finite-size scaling can be implemented in this theory as well. For a system of finite size $L$, we would consider the limit $L,t_s\rightarrow\infty$, $t\rightarrow 0$ such that $t/\tau$, $q\, \ell$ and $L/\ell$ are fixed. 
In this limit the time-dependent correlators present the scaling relations \cite{Henk10} \begin{subequations}\label{FSS} \begin{align} M(t,L) \sim L^{-d_{\phi}} \; \mathcal{M}(t/\tau, L/\ell)\ ,\\[.25cm] G_{\perp}(\vec{q},t,L) \sim L^2 \; \mathcal{G}_{\perp}(\vec{q}\, \ell, t/\tau, L/\ell)\ , \end{align} \end{subequations} such that the infinite-volume behaviour of $\eqref{KZscaling}$ is recovered for $L/\ell \rightarrow 0$ at fixed $q\, \ell$, $t/\tau$. Notice also that, by construction, these scaling relations match the equilibrium scaling behaviour for $|t|\rightarrow \tau$ (see appendix \ref{equilibrium}). From a dimensional analysis, it is clear that the effective mass term must scale as \cite{Sondhi-12} \begin{equation}\label{eq:mscalez2} m^2(t)\sim \ell^{-2} \, \mathfrak{m}^2(t/\tau) \end{equation} with a general scaling function $\mathfrak{m} (\cdot)$. From eq~(\ref{eq:M-t}), the magnetisation is then given as\footnote{The dependence on the initial condition is exponentially suppressed in the scaling limit, see appendix \ref{equilibrium}.} \begin{subequations} \begin{equation}\label{scal-M} \mathcal{M}(\bar{t})=\int_{-\infty}^{\bar{t}} \D u \, u \, \exp\bigg[\!\!-\!\int_{u}^{\bar{t}} \D s \, \mathfrak{m}^2(s)\bigg] \end{equation} and from eq~(\ref{eq:GT-t}) the transverse two-point function reads \begin{equation}\label{scal-G} \mathcal{G}_{\perp}(\bar{\vec{q}},\bar{t}) =2 \int_{-\infty}^{\bar{t}} \D u \, \exp\bigg[-2\int_{u}^{\bar{t}} \D s \, (\bar{\vec{q}}^2+\mathfrak{m}^2(s))\bigg] \end{equation} \end{subequations} with the rescaled time $\bar{t}=t/\tau$ and momentum $\bar{\vec{q}} = \vec{q}\, \ell$. The time evolution of the scaling functions is thus solely determined by the function $\mathfrak{m}^2$ which has to be found from eq~$\eqref{time-eq-state}$\footnote{ We use the shorthand notation $\int_{\bar{\vec{q}}}\equiv \int^{\infty} \frac{\D \bar{\vec{q}}}{(2\pi)^D}$. 
Notice that the scaling limit is {\it cut-off independent} since $\Lambda\, \ell \rightarrow \infty$ \cite{Sondhi-12}.} \begin{equation}\label{eq-state-critical} \mathcal{M}^2(\bar{t}, \mathfrak{m}) = \int_{\bar{\vec{q}}} (\mathcal{G}_{\perp}(\bar{\vec{q}},\bar{t},0)- \mathcal{G}_{\perp}(\bar{\vec{q}},\bar{t},\mathfrak{m})) \end{equation} where the critical thermal coupling constant has been expressed in terms of the critical two-point function \cite{LargeN-rev,Sondhi-12} \begin{equation}\label{r_c} r_c=-u \int_q \, G_{\perp}(\vec{q}, t, 0)\ . \end{equation} The equation of state $\eqref{eq-state-critical}$ shows that the time evolution of the magnetisation is generated by a dissipation into the transverse field components. In other words, the deviations from equilibrium of the magnetisation and of the transverse correlations compensate each other. In figure \ref{fig:num_M_rc} we show the numerical result for the time evolution of the magnetisation at the critical temperature. The numerical analysis confirms our scaling predictions: for increasing quench times $t_s$, and hence $\tau \sim t_s^{\frac{z\nu_h}{1+z \nu_h}}$, the data clearly collapse onto a master curve which represents the sought scaling function. Similar results are obtained for the zero mode correlation function and for the mass term, shown in figures~\ref{num_2_rc} and \ref{num_2_rc_m}, respectively. \begin{figure}[ht] \centering \includegraphics[width=.475\textwidth]{chi_Tc.pdf} \ \includegraphics[width=.49\textwidth]{chiR_Tc_ins_col.pdf} \caption{\small Numerical analysis of the zero mode correlation function at $T=T_c$ in $D=3$ spatial dimensions. We see the data in the left panel for different quench times $t_s$, see eq~(\ref{KZ-t}), and the data collapse \textcolor{black}{(in log scale)} for the dynamical scaling function in the right panel (compare eq~$\eqref{KZscaling-b}$).
\textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times} (the label ``peaks'' refers to the convergence of the maximum of the curves).}\label{num_2_rc} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{mass_Tc.pdf} \ \includegraphics[width=.475\textwidth]{massR_Tc_ins_col.pdf} \caption{\small Numerical analysis of the effective mass at $T=T_c$ in $D=3$ dimensions. We see the data in the left panel for different quench times $t_s$ and the data collapse for the dynamical scaling function in the right panel (compare eq~$\eqref{eq:mscalez2}$). \textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times}. }\label{num_2_rc_m} \end{figure} \subsection{Scaling theory for {\sc fot}s ($T<T_c$)} \label{ssec:t<tc} In this section we argue that a {\sc kz}-like theory can also be applied to the magnetic quench performed at $T<T_c$.\footnote{In what follows, the scaling theory does not depend on the specific value of the temperature $T<T_c$.} The main obstacle for transferring the {\sc kz} scaling theory to the {\sc fot} below $T_c$ is that the system correlation length remains {\it finite}. We shall therefore turn to another length scale, the so-called \textit{coherence length} or \textit{persistent length} $\xi_h$ \cite{Fisher-82} which may be defined as the typical size of domains of \textcolor{black}{(}aligned\textcolor{black}{)} spins \textcolor{black}{in the minimum energy configuration}. \begin{figure}[b] \centering \includegraphics[width=.95\textwidth]{xih_newnew.pdf} \caption{\small Visualisation of the coherence length $\xi_h$ and the associated Ginzburg-Landau functional $E$ as a function of the order parameter $M$ for a generic spin system. 
The different pictures show the variation of $\xi_h$ and $E$ during the protocol described in eq~$\eqref{eq:quench}$. At the {\sc fot} $h=0$, the degeneracy of the two vacua associated with the different realisations of the ordered phase leads to a divergence of $\xi_h$.} \label{fig:xih} \end{figure} For $h\rightarrow 0$, the system cannot energetically distinguish between the two ordered phases and long-range order arises, which leads to an increase of the coherence length. Eventually, this results in a macroscopic coherence length $\xi_h \propto L$, see figure~\ref{fig:xih}. In order to construct the scaling theory, we need to know the scaling behaviour of the coherence length as a function of the magnetic field, in analogy to eq~(\ref{eq:corrlength}), and the dynamical exponent $z$. It is well-known that the behaviour of $\xi_h$ close to the {\sc fot} can be expressed as a power-law \cite{Nienhuis-75,Fisher-82,Privman-83,Binder} \begin{equation} \xi_h(t) \sim |h(t)|^{-1/D} \ \ , \ \ h\rightarrow 0 \end{equation} from which we can identify the critical exponent \begin{equation} \nu_{h}=1/D \,. \end{equation} For the dynamical exponent $z$, a lengthy but straightforward calculation reveals \begin{equation} z = D \ , \end{equation} which can be qualitatively understood as follows. Initially, the equilibrium magnetisation is aligned with the initial magnetic field $h<0$. As this field is driven across the critical value $h_c=0$, the magnetisation has to flip in order to align with the final magnetic field $h>0$. Therefore, the vector of the magnetisation has to perform a rotation which needs a characteristic time of the order of the system volume $L^D$~\cite{Vicari-off-16}. For further details on how to determine $z$ in the low temperature regime, we refer to appendix~\ref{fot-dyn}. We are now able to draw the analogy to eq~(\ref{KZ-t}), i.e.
the freeze-out condition reveals \begin{equation}\label{scales-fot} \tau_{\text{\sc fot}} = \sqrt{t_s}, \qquad \ell_{\text{\sc fot}} =t_s^{1/(2D)}. \end{equation} We notice that the freeze-out time $\tau_{\text{\sc fot}}$ coincides with the \textit{coercive time} of the model \cite{Dhar-92}, i.e. the typical time scale after which a ferromagnet reacts to an inversion of the external magnetic field. \begin{figure}[ht] \centering \includegraphics[width=0.475\textwidth]{magn_low.pdf} \ \includegraphics[width=0.475\textwidth]{magnR_low_ins_col.pdf} \caption{\small Numerical analysis of the dynamical magnetisation below $T_c$ ($r=-1$) in $D=3$ spatial dimensions. \underline{Left panel}: dynamical magnetisation as a function of time for different quench time scales $t_s$. \underline{Right panel}: data collapse and dynamical scaling function for the magnetisation (compare eq~$\eqref{scaling-fot}$). \textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times}.} \label{num_1_low} \end{figure} Extending this analogy further, we assume that the time-dependent magnetisation of the system in the vicinity of the transition point $h\to0$ shows dynamical scaling behaviour in line with eq~(\ref{KZscaling}) in the limit $t_s\to\infty$ (with $t/\tau_{\text{\sc fot}}$ fixed) \begin{equation} \label{scaling-fot} M(t) \sim \ell_{\text{\sc fot}}^{-d_{\phi}}\; \mathcal{M}(t/\tau_{\text{\sc fot}})\ , \end{equation} where the scaling dimension of the order parameter is known to be $d_{\phi}=0$ \cite{Fisher-82}. The numerical result for the dynamical magnetisation below the critical temperature is shown in figure \ref{num_1_low} and explicitly verifies our scaling prediction (\ref{scaling-fot}).
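The scales in eq~(\ref{scales-fot}) follow from the same freeze-out algebra as in the critical case, now with $\nu_h = 1/D$ and $z = D$. A short sketch (names are ours) makes explicit that $\tau_{\text{\sc fot}} = \sqrt{t_s}$ independently of $D$, while $\ell_{\text{\sc fot}} = t_s^{1/(2D)}$:

```python
from fractions import Fraction

def fot_scales(D):
    """Freeze-out exponents at the FOT: nu_h = 1/D and z = D give
    tau_FOT = t_s^(1/2) and ell_FOT = t_s^(1/(2D)), eq (scales-fot)."""
    nu_h = Fraction(1, D)
    z = Fraction(D)
    a = z * nu_h / (1 + z * nu_h)   # tau exponent: z*nu_h = 1, so always 1/2
    b = a / z                       # ell exponent: 1/(2D)
    return a, b

print(fot_scales(3))  # (Fraction(1, 2), Fraction(1, 6))
```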
We describe our model in the \textit{spin-wave approximation} \cite{SpinWave1,SpinWave2} which states that at low temperatures $T<T_c$ it is sufficient to study long-range excitations, i.e. only the degrees of freedom with $|\vec{q}| < q^{\ast}$ turn out to be relevant for the off-equilibrium dynamics. The boundary value $q^{\ast}$ which separates the short-distance fluctuations $|\vec{q}| > q^{\ast}$ from the low-energy modes $|\vec{q}| < q^{\ast}$ can be estimated as $q^{\ast}\propto t_s^{-1/4}$, see appendix~\ref{spin-wave}. In the scaling limit $h\rightarrow 0$, $t_s\rightarrow \infty$ keeping $q\, \ell_{\text{\sc fot}}$ fixed we notice that \begin{equation} |\vec{q}|\, \ell_{\text{\sc fot}} < q^{\ast} \, \ell_{\text{\sc fot}} \propto t_s^{\frac{2-D}{4D}} \rightarrow 0 \ , \end{equation} which implies that the zero-momentum contribution alone provides a good description of the off-equilibrium behaviour arising during the quench in the asymptotic limit. We therefore introduce the transverse susceptibility $\chi_{\perp}$ that obeys the scaling relation \begin{equation}\label{chi-low} \chi_{\perp}(t)\equiv G_{\perp}(\vec{0},t) \sim \ell_{\text{\sc fot}}^D \, \mathcal{X}_{\perp}(\bar{t})\ , \end{equation} as shown in figure \ref{num_2_low}. Notice that, by construction, the off-equilibrium scaling behaviours (\ref{scaling-fot},\ref{chi-low}) match again the equilibrium scaling when $|t|\rightarrow \tau_{\text{\sc fot}}$ (see appendix \ref{equilibrium}). \begin{figure}[ht] \centering \includegraphics[width=.485\textwidth]{chi_low.pdf} \ \includegraphics[width=.485\textwidth]{chiR_low_ins_col.pdf} \caption{\small Numerical analysis of the transverse susceptibility below $T_c$ ($r=-1$) in $D=3$ spatial dimensions. In the left panel, the data for different quench times $t_s$ is shown \textcolor{black}{in log scale} and the right panel shows the data collapse for the dynamical scaling function (compare eq~$\eqref{scal-G-low}$).
\textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times}.}\label{num_2_low} \end{figure} For the effective mass we write the scaling behaviour analogous to eq~(\ref{eq:mscalez2}) \begin{equation} m^2(t) \sim \ell_{\text{\sc fot}}^{-D} \; \mathfrak{m}^2(t/\tau_{\text{\sc fot}}) \label{eq:mfot} \end{equation} due to the presence of a small magnetic field with scaling dimension $d_h=D$.\footnote{From eq~$\eqref{dyn-obs}$ we may write $m^2(t) =M(t)^{-1}\left(h(t) -\frac{\D}{\D t}M(t)\right)$ from which we conclude eq~(\ref{eq:mfot}).} The numerical result for the effective mass is shown in figure \ref{num_3_low}. The time-dependent magnetisation satisfies the (trivial) scaling relation in eq~$\eqref{scaling-fot}$ with the scaling function \begin{equation}\label{scal-M-low} M(t)\equiv \mathcal{M}(\bar{t})=\int_{-\infty}^{\bar{t}} \D s \, s \, \exp\bigg[-\int_{s}^{\bar{t}} \!\D u \, \mathfrak{m}^2(u)\bigg], \end{equation} while the scaling function of the transverse susceptibility reads \begin{equation}\label{scal-G-low} \mathcal{X}_{\perp}(\bar{t}) =2 \int_{-\infty}^{\bar{t}} \D s \, \exp\bigg[-2\int_{s}^{\bar{t}} \D u \, \mathfrak{m}^2(u)\bigg] \ , \end{equation} where $\bar{t}$ has now been redefined as $t/\tau_{\text{\sc fot}}$. In the spin-wave approximation we introduce the quantity \begin{equation} \mathscr{S}(t)=\int_q \, G_{\perp}(|\vec{q}|<q^{\ast}, t) \sim {\cal S}(t/\tau_{\text{\sc fot}}) \end{equation} with a trivial scaling relation that follows from eq~$\eqref{chi-low}$. \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{mass_low.pdf} \ \includegraphics[width=.45\textwidth]{massR_low_ins_col.pdf} \caption{\small Numerical analysis of the effective mass below $T_c$ ($r=-1$).
We see the data in the left panel for different quench times $t_s$ and the data collapse for the dynamical scaling function in the right panel (compare eq~$\eqref{eq:mfot}$). Notice that the effective mass term can also take negative values below $T_c$, see also \cite{Dhar-92}. This is not surprising in the presence of a broken symmetry and refers to the formation and the propagation of massless modes which connect degenerate vacua, see e.g. \cite{ZinnJustin}. \textcolor{black}{The inset shows the convergence of the scaling functions at finite $t_s$ towards the asymptotic regime ($t_s\rightarrow \infty$) for three different times}. }\label{num_3_low} \end{figure} Notice that once again the scaling functions above depend implicitly on $\mathfrak{m}^2$ via the equation of state $\eqref{time-eq-state}$ which in the scaling limit and for $T<T_c$ reads \begin{equation}\label{eq-state-below} M_0^2 = {\cal M}^2(\bar{t}, \mathfrak{m}) + \mathcal{S}(\bar{t},\mathfrak{m})-\mathcal{S}(\bar{t},0) \ , \end{equation} where we have expressed the thermal coupling $r<r_c$ as\footnote{This relation can be easily deduced from eq~$\eqref{time-eq-state}$ considering a system prepared in equilibrium without external fields $h=0$. In this case $m^2=0$ and the magnetisation is equal to $M_0=\sqrt{(r_c-r)/u}$.} \begin{equation} r= -u \left( M_0^2 + \mathscr{S}(t, 0) \right). \end{equation} Eq~$\eqref{eq-state-below}$ states that the magnetisation deviates from its equilibrium value $|M_0|= \sqrt{(r_c-r)/u}$ (see e.g. \cite{ZinnJustin}) by dissipating in the transverse modes. The magnetisation may be viewed as an $n$-vector $M_{a}(t)\equiv \braket{\phi_{a}(\vec{x},t)}$ whose longitudinal component $M(t)$ in eq~$\eqref{eq:M-t}$ is coupled to the other components $M_a(t)$, $a>1$, through the transverse correlation function.
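The structure of the scaling functions $\eqref{scal-M-low}$ and $\eqref{scal-G-low}$ can be checked for a constant trial mass $\mathfrak{m}^2(\bar t) \equiv c > 0$. This is purely an illustrative assumption (the true $\mathfrak{m}^2$ is fixed self-consistently by the equation of state), but in that case the integrals close: $\mathcal{M}(\bar t) = \bar t/c - 1/c^2$ and $\mathcal{X}_\perp(\bar t) = 1/c$. A simple quadrature sketch confirms the first of these:

```python
import math

def M_scaling(t_bar, c, lower=-40.0, n=200_000):
    """Midpoint-rule evaluation of eq (scal-M-low) for a constant trial
    mass m^2 = c > 0; 'lower' truncates the exponentially damped tail."""
    du = (t_bar - lower) / n
    total = 0.0
    for i in range(n):
        u = lower + (i + 0.5) * du
        total += u * math.exp(-c * (t_bar - u)) * du
    return total

# Closed form for constant mass: t_bar/c - 1/c**2
assert abs(M_scaling(0.0, 1.0) - (-1.0)) < 1e-3
assert abs(M_scaling(2.0, 1.0) - 1.0) < 1e-3
```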
For weak magnetic fields, we may interpret the magnetisation as an $n$-vector of fixed length $|M_0|$ whose longitudinal component $M$ is decreased in favour of the transverse modes. The dynamical behaviour across the transition point is then nothing but a rotation of this vector. Moreover, this kind of dynamics is compatible with the $O(n-1)$ symmetry, because the $n-1$ transverse planes are equally likely to contain the vector magnetisation at any time. One may also verify that the definition of the scales $\eqref{scales-fot}$ is the only one compatible with the dynamics $z=D$ that preserves the equilibrium limit at $|t|\rightarrow \tau_{\text{\sc fot}}$. \begin{figure}[ht] \centering \includegraphics[width=.475\textwidth]{magnR_low_temp.pdf} \ \includegraphics[width=.475\textwidth]{chiR_low_temp.pdf} \caption{\small Off-equilibrium scaling behaviour of the magnetisation (left panel) and of the transverse susceptibility (right panel) for different values of the temperature ($r=-1,\, -5,\,-10$) below the critical value in $D=3$ spatial dimensions. The data collapse is observed for each value of the temperature confirming that the scaling behaviour is not modified by the specific value of $T<T_c$ considered.}\label{num_low_temp} \end{figure} In figure \ref{num_low_temp} we numerically investigate the off-equilibrium scaling across the {\sc fot} for different values of the temperature $T<T_c$. As expected from phase-ordering kinetics arguments \cite{Bray94}, the scaling theory does not depend on the specific value of the temperature considered. \section{Hysteresis in the round-trip protocol} \label{sec:hys} In this section, we consider a {round-trip protocol} $\gamma(h)$ in which the magnetic field $\eqref{eq:quench}$ is varied from an initial value $h(t_i) < 0$ to $h(t_f) > 0$ across the transition point $h_c=0$ at $t=0$ and back in the reversed manner.
By integrating the curve described by the magnetisation in time $\eqref{eq:M-t}$ over $\gamma(h)$ we obtain the \textit{hysteresis loop area} $A$ \begin{align} A\equiv&\oint_{\gamma(h)} \D t \; M(t) =\ 2\int_{t_i}^{t_f} \D t \, \int_{t_i}^t \D u \, h(u) \cosh\bigg[ \int_{u}^t \D s \, m^2(s) \bigg] \end{align} which is a quantifier of the deviation from equilibrium during the process \cite{Vicari-off-16,KZ-hyst}: the larger the deviations from equilibrium are, the larger is the value of $A$, while for a system in equilibrium $A=0$ since the magnetisation only depends on the instantaneous value of the external field. The hysteresis loop area $A$ is related to the magnetic energy $W$ dissipated by the system during the round trip. For a linear quench $\eqref{eq:quench}$ we have\footnote{This picture is compatible with a quasi-adiabatic quench where $t_s\to\infty$ and therefore $W=0$ since the system will not fall out of equilibrium.} \begin{equation} W\equiv \oint_{\gamma(h)} \D h \; M(h)= \frac{A}{t_s}\; . \end{equation} For further considerations, we shall work in the scaling limit $t_s\rightarrow\infty$, $h\rightarrow 0$ at $\bar{t}$, $|\bar{\vec{q}}|$ fixed (where we refer to the definitions $\eqref{ell}$ for $T = T_c$ and $\eqref{scales-fot}$ for $T <T_c$, respectively). \begin{figure}[ht] \centering \includegraphics[width=.475\textwidth]{hyst_Tc.pdf} \ \includegraphics[width=.475\textwidth]{hystR_Tc.pdf} \caption{\small Numerical analysis of the dynamical magnetisation at $T_c$ during a round-trip protocol in $D=3$ spatial dimensions. \underline{Left panel}: hysteresis loop area for different quench time scales $t_s$.
\underline{Right panel}: data collapse and dynamical scaling of the hysteresis area (compare eq~$\eqref{scal-hyst-tc}$) }\label{hyst_rc} \end{figure} At the critical temperature $T=T_c$ the hysteresis loop area scales as \begin{equation}\label{scal-hyst-tc} A\sim \ell^{z-d_{\phi}}\, \mathcal{A} \sim t_s^{\frac{6-D}{6+D}}\, \mathcal{A} \ , \end{equation} where the constant $\mathcal{A}$ reads \begin{equation}\label{hyst-scal-funct} \mathcal{A}\! =2\!\int_{-\infty}^{+\infty}\!\!\! \D \bar{t} \int_{-\infty}^{\bar{t}}\!\!\! \D s \, s \cosh\!\bigg[\!\int_{s}^{\bar{t}}\!\D u\; \mathfrak{m}^2(u)\bigg]\ . \end{equation} Consequently, the dissipated energy (in form of magnetic work) $W$ in $D=3$ spatial dimensions obeys the scaling relation \begin{equation}\label{work-tc} W\sim t_s^{-2/3} \, \mathcal{A}\ , \end{equation} i.e. the slower the protocol is performed, the less energy is dissipated during the round-trip protocol. This can be intuitively understood since the system will stay longer in equilibrium for a slow quench. \begin{figure}[ht] \centering \includegraphics[width=.475\textwidth]{hyst_low.pdf} \ \includegraphics[width=.475\textwidth]{hystR_low.pdf} \caption{\small Numerical analysis of the dynamical magnetisation below $T_c$ ($r=-1$) during a round-trip protocol in $D=3$ spatial dimensions. \underline{Left panel}: hysteresis loop area for different quench time scales $t_s$. \underline{Right panel}: data collapse and dynamical scaling of the hysteresis area (compare eq~$\eqref{scal-hyst-low}$) }\label{hyst_low} \end{figure} Applying the same arguments to the magnetic {\sc fot} below the thermal critical point $T<T_c$ we obtain for the hysteresis loop \begin{equation}\label{scal-hyst-low} A \sim \ell^{D}_{\text{\sc fot}}\; \mathcal{A}\sim \sqrt{t_s}\, {\cal A} \end{equation} where the factor ${\cal A}$ has the same structure as $\eqref{hyst-scal-funct}$ in terms of the quantities at low temperature. 
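The exponents in eqs~$\eqref{scal-hyst-tc}$ and $\eqref{work-tc}$ follow from combining $\ell \sim t_s^{\nu_h/(1+z\nu_h)}$ with $A \sim \ell^{z-d_\phi}$ and $W = A/t_s$. A small helper (names are ours) verifies the general-$D$ result $(6-D)/(6+D)$ and the value $W \sim t_s^{-2/3}$ quoted for $D=3$:

```python
from fractions import Fraction

def hysteresis_exponents(D):
    """At T = T_c: A ~ ell^(z - d_phi) = t_s^alpha (eq scal-hyst-tc)
    and W = A/t_s ~ t_s^(alpha - 1) (eq work-tc), with eta = 0."""
    nu_h = Fraction(2, D + 2)
    z = Fraction(2)
    d_phi = Fraction(D - 2, 2)
    ell_exp = nu_h / (1 + z * nu_h)         # ell = t_s^ell_exp, eq (ell)
    alpha = ell_exp * (z - d_phi)
    assert alpha == Fraction(6 - D, 6 + D)  # matches eq (scal-hyst-tc)
    return alpha, alpha - 1                 # (A exponent, W exponent)

print(hysteresis_exponents(3))  # (Fraction(1, 3), Fraction(-2, 3))
```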
For the magnetic work, we find the scaling relation \begin{equation}\label{work-low} W\sim \ell_{\text{\sc fot}}^{-D} \; \mathcal{A}= \frac{ \mathcal{A}}{\sqrt{t_s}} \end{equation} independently of the spatial dimension $2<D<4$. Numerical results for the hysteresis loop area are given in the figures~\ref{hyst_rc} and \ref{hyst_low}, respectively for the cases $T=T_c$ and $T<T_c$. The scaling relations in eqs~(\ref{work-tc},\ref{work-low}) apply beyond the spherical limit $n\rightarrow \infty$ and are in agreement with the numerically obtained scaling behaviour for a $3D$ Heisenberg ferromagnet \cite{Vicari-off-16}. Indeed, in the case $T<T_c$, we showed that the dynamics is independent of the number of transverse components (see appendix \ref{fot-dyn}) while at criticality $T=T_c$ the critical exponents of the $O(n)$ universality classes depend only weakly on the {\sc rg} exponent $\eta$ (see table \ref{eta-exp}) so that the large-$n$ limit provides a reliable guideline. In this sense, we can conclude that {\it the scaling behaviour of the system is not modified by considering a finite number of components $n\geq 2$}. The off-equilibrium scaling relations presented in this work are briefly summarised in table \ref{scaling-general} for a generic $O(n\geq 2)$ model. \begin{table} \centering \caption{\small Numerical estimations of the critical exponent $\eta$ for different universality classes in three spatial dimensions \cite{Vicari-review}.} \vspace{.5cm} \begin{tabular}{|l|c|r|} \toprule universality class & $\eta$ & Ref.\\ \midrule XY & $0.0380(4)$ &\cite{exponents_O(2)}\\ Heisenberg & $0.0375(5)$ &\cite{exponents_O(3)}\\ $O(4)$ & $0.0365(10)$& \cite{exponents_O(4)}\\ $O(\infty)$ & $0$ & e.g.
\cite{Berl52}\\ \bottomrule \end{tabular} \label{eta-exp} \end{table} \begin{table} \centering \caption{\small Off-equilibrium scaling relations for the $O(n)$ universality class with $n\geq 2$ during a magnetic quench $\eqref{eq:quench}$: at the continuous transition $T=T_c$ with $d_{\phi}=\frac{1}{2}\left(D-2+\eta\right)$, $\nu_h=1/(D-d_{\phi})$ and $z=2-\eta$ and for $T<T_c$.} \vspace{.25cm} \begin{tabular}{|l|l|l|} \toprule observable &\hspace*{.5cm} scaling $\;T=T_c$ \hspace*{.5cm}&\hspace*{.5cm} scaling $T<T_c$ \hspace*{.5cm}\\ \midrule magnetisation $M(t)$& $\sim \ell^{-d_{\phi}} \, {\cal M}(t/\tau)$& $\sim t_s^{0} \, {\cal M}(t/\sqrt{t_s})$ \\[.75cm] trans. susceptibility $ \chi_{\perp}(t)$& $ \sim \ell^{2-\eta} \, \mathcal{X}_{\perp}(t/\tau)$ & $ \sim t_s^{1/2} \, \mathcal{X}_{\perp}(t/\sqrt{t_s})$\\[.75cm] hysteresis area $A$ & $ \sim \ell^{z-d_{\phi}} \,{\cal A}$ & $\sim t_s^{1/2} \,{\cal A}$ \\ \bottomrule \end{tabular} \label{scaling-general} \end{table} \section{Summary and conclusion} We investigated the off-equilibrium scaling arising in classical spin systems due to the presence of a time-dependent magnetic field $h(t)=t/t_s$ which drives the system from an initial equilibrium state across the transition point $h_c=0$ at constant temperature $T\leq T_c$. In particular, we considered a system with $O(n)$ symmetry in the large-$n$ limit and in $2<D<4$ spatial dimensions. We analysed the two distinct scenarios $T=T_c$ and $T<T_c$ which are qualitatively different since the magnetic transition is continuous at $T=T_c$ and discontinuous for $T<T_c$. After recalling the general features of the {\sc kz} scaling for a continuous transition, we focused on the protocol below the critical temperature. Here, in the absence of a diverging correlation length, an equilibrium scaling theory is routinely formulated by referring to the coherence length $\xi_h$ as the characteristic scale.
We extended this equilibrium theory to the non-equilibrium case by following the general ideas of the {\sc kz} approach. To do so, we deduced the exponents needed to formulate the off-equilibrium scaling theory, e.g. the dynamical exponent $z=D$ at the {\sc fot}. As a result, thermodynamic observables such as the magnetisation or the magnetic susceptibility present dynamical scaling relations in terms of appropriate off-equilibrium scales. The latter are functions of the quench time scale $t_s$ and depend on the set of static and dynamic {\sc fot} exponents. Quite remarkably, these scaling relations have the same structure as those at $T=T_c$ but with different exponents. We then applied this scaling theory to a round-trip protocol, where we proposed the hysteresis area as a quantifier of the deviation from equilibrium and we derived its scaling behaviour. Moreover, the hysteresis area can be easily connected to the magnetic energy dissipated by the system during the protocol and therefore to an energy cost. As mentioned, all results presented in this work are derived in the large-$n$ limit. However, we argued that the dynamics of the system is not affected by the number of transverse components so that our results apply for any finite $n\geq 2$, as confirmed by a comparison with numerical studies \cite{Vicari-off-16}. Although there are several works on the dynamical off-equilibrium scaling at {\sc fot}s, e.g. in \cite{Vicari-fot-15} where thermal quenches are analysed and in \cite{Vic-Dyn1,Vic-Dyn2,Vic-Dyn3} where finite-size scaling in quantum systems is discussed, non-equilibrium behaviour at {\sc fot}s remains much less investigated and understood than its continuous counterpart.
We therefore believe that the simple and clear physical picture provided by the {\sc kz} mechanism opens new perspectives for this field of research, which is becoming ever more relevant experimentally, especially in the light of recent experiments in the area of ultracold atoms, where {\sc fot}s can be generated and studied systematically \cite{Land16,Dog16,Nied16,Flo17,Wald18a}. A next step might be the extension of the present work to a system with inhomogeneities, for which the continuous counterpart has already been analysed in the literature, e.g.~\cite{Dragi-Collura, trap-scal1,trap-scal2}. The latter case is closely related to real experimental setups, where ultracold atomic gases typically do not have a flat density profile due to the effects of a trapping potential \cite{trap-coldatoms-exp1,trap-coldatoms-exp2,trap-coldatoms-exp3}.\\ \noindent {\bf Acknowledgements:} SW is grateful to the LPCT Nancy for their warm hospitality. The authors would like to thank Ettore Vicari for his support during the development of this work. We appreciate fruitful discussions with Dragi Karevski and Malte Henkel and are thankful for their critical remarks on the manuscript. \textcolor{black}{Furthermore, we would like to thank the referee for his or her careful reading and useful comments on the manuscript.}
\section{Introduction} \label{sec:introduction} The search for particles beyond the Standard Model (SM) is one of the main goals of the Large Hadron Collider (LHC). In the midst of Run II, a new range of energies is being explored, thus playing a crucial role in finding new phenomena or setting bounds on various aspects of New Physics (NP) models. The progress in the understanding of the Higgs sector via the Higgs coupling measurements at the LHC is also a major advance in the exploration of NP, as it allows one to test extensions of the SM in new channels at colliders, or to envisage new, complementary ways to probe the final states already being explored. Among the many NP states searched for at the LHC, vector-like quarks (VLQs) play a prominent role in terms of experimental effort. A large number of searches have been performed by both ATLAS and CMS, exploring pair and single production of VLQs in a wide range of possible final states and signatures. No evidence of their existence has been observed so far, giving rise to mass bounds in the TeV range. The precise values depend on assumptions about the allowed decay channels and the particular mixing with the SM quarks, and the bounds are overall robust if the mixing of the VLQs is mainly with third-generation quarks. The fact that VLQs are the object of such an extensive exploration did not happen by chance: in fact, they are predicted or suggested by a large number of extensions of the SM, especially in relation to the top quark.
As examples, VLQs appear as top partners in composite Higgs models~\cite{Agashe:2004rs,Contino:2006qr,Giudice:2007fh,Matsedonskyi:2012ym}, extra-dimensional models~\cite{Antoniadis:1990ew,Antoniadis:2001cv,Csaki:2003sh,Hosotani:2004wv,Cacciapaglia:2009pa}, gauge-Higgs models \cite{Hosotani:1983xw}, models with gauge coupling unification \cite{Choudhury:2001hs,Panico:2008bx}, little Higgs models \cite{ArkaniHamed:2002qx,ArkaniHamed:2002qy,Schmaltz:2005ky} and models with an extended custodial symmetry \cite{Agashe:2006at,Chivukula:2011jh}. Typically, the experimental searches have been based on simplifying assumptions guided by the expectations in specific models, like mixing with the third generation of SM quarks and decays into a $W$, $Z$ or Higgs plus a top or bottom quark \cite{delAguila:2000rc,AguilarSaavedra:2005pv,Anastasiou:2009rv,AguilarSaavedra:2009es,Cacciapaglia:2010vn,Marzocca:2012zn,DeSimone:2012fs,Falkowski:2013jya,Aguilar-Saavedra:2013wba,Ellis:2014dza}. In general, however, the mixing with the first and second SM generations needs to be considered \cite{Atre:2011ae,Cacciapaglia:2011fx,Okada:2012gy,Buchkremer:2013bha,Delaunay:2013pwa,Barducci:2014ila}, and a few LHC searches are also available \cite{Aad:2011yn,Sirunyan:2017lzl}. Furthermore, decays into a non-SM boson \cite{Serra:2015xfa,Brooijmans:2016vro,Aguilar-Saavedra:2017giu,Chala:2017xgc,Bizot:2018tds} or Dark Matter \cite{Anandakrishnan:2015yfa,Kraml:2016eti,Moretti:2017qby,Balkin:2017aep,Chala:2018qdf} have recently been receiving increasing attention. Apart from the specific set-up required by these models, it is interesting to study VLQs in a more general context, and we consider this possibility in the following. A common situation in NP models is the presence of extended global symmetries that require several VLQ multiplets, which remain close in mass. These multiplets mix with the SM quarks and among each other via Yukawa-type interactions of the Higgs field.
This in turn affects the tree-level and loop-level bounds on masses and coupling strengths, modifying the results and the expectations obtained in simplified analyses. In the present work we further generalise the analysis we performed in \cite{Cacciapaglia:2015ixa} by considering general structures and mixings of more than one VLQ multiplet with the three SM quark generations. We take into account updated bounds from direct searches, Higgs physics and Electroweak Precision Tests (EWPT). In particular we shall focus on the case of non-degenerate SU(2)$_L$ doublets, which is of particular interest for model building with extended custodial symmetry. Furthermore, these multiplets feature a cancellation at low energy that relaxes the typically very strong bounds coming from precision electroweak observables. Our main objective is to explore signatures that are characteristic of this specific bi-doublet configuration, and that can be used to distinguish models containing these multiplets from other generic VLQ models. We will identify configurations where the observation of the heavier VLQs is favoured with respect to the lighter one, and specific decay patterns for the charge $2/3$ VLQs. Finally, we point out the importance of pair production of two VLQs via electroweak interactions, which can dominate over the QCD pair production for large (allowed) mixing. This feature was, to the best of our knowledge, first noted in Ref.~\cite{Cacciapaglia:2009cu}. The paper is organised as follows: in Section 2 we recall the structures and properties of VLQ doublets, their relevance in well-known models and the typical cases in which they feature cancellations that reduce their impact on low-energy observables. In Section 3 we discuss indirect bounds from EWPT, tree-level and loop-level contributions to the $Z$ and Higgs couplings, and bounds from current direct searches.
In Section 4 we discuss the main new features that lead to novel signatures at the LHC, before presenting our conclusions in Section 5. \section{Vector-like multiplets: models with two doublets} \label{sec:model} The general description of the first few VLQ multiplets is given in \cite{Cacciapaglia:2015ixa}, where they are classified in terms of both their quantum numbers and their particle content (multiplets containing top partners, bottom partners, or both). In addition to partners of the standard quarks, these multiplets may contain other exotic charged VLQ particles. The VLQ multiplets that can mix with SM quarks and a SM (or SM-like) Higgs boson have been studied, rather extensively, in the literature \cite{delAguila:1982fs,delAguila:2000rc,Aguilar-Saavedra:2013wba,Cacciapaglia:2010vn,Cacciapaglia:2011fx,Okada:2012gy,Buchkremer:2013bha}. In the following we focus on the specific case of VLQ doublets, as it is of particular importance in various extensions of the SM with an extended custodial symmetry (see, {\it e.g.}, Refs~\cite{Agashe:2006at,Chivukula:2011jh}). The doublets we consider in the following are $\left(\begin{array}{c}U_1 \\ D_1 \end{array}\right)_{1/6}$ and $\left( \begin{array}{c} X_2^{5/3} \\ U_2 \end{array} \right)_{7/6}$\,, where the subscript number represents the hypercharge of the multiplet, and the exotic state $X^{5/3}$ has electromagnetic charge $+5/3\ e$. The presence of VLQ multiplets generically allows one to add new Yukawa interactions between the VLQ multiplets and the SM quarks, or among VLQ multiplets, mediated by scalar fields from the Higgs sector. Gauge invariance requires that new VLQ doublets couple with the SM right-handed singlets (if the Higgs sector is not modified). For VLQ multiplets with the same quantum numbers as the SM quarks, a direct mass mixing can be written down, but it is not physical, as it can be removed by redefining the fields corresponding to the SM and VLQs.
A description of the Lagrangian terms and mass matrices for scenarios with two doublets can be found in Appendix~\ref{app:multiplets}, where we also include the case of a doublet with hypercharge $-5/6$. The latter features an exotic charged bottom-partner, and we will consider its phenomenology in a follow-up work. A detailed account of the Yukawa structure and mixing patterns can be found in \cite{Cacciapaglia:2015ixa}. In the remainder of this section we will consider, in detail, the relation between the general formalism we use in this paper and composite (pseudo-Nambu-Goldstone) Higgs models. \subsection{Relation to composite top partners} \label{sec:composite} In models of composite top partners, where the elementary tops pick up a mass via mixing with composite operators~\cite{Kaplan:1991dc}, bi-doublets like the ones we consider in this paper arise naturally. This is due to the fact that the symmetries of the composite sector need to include the full custodial SO(4)$\sim$SU(2)$_L \times$SU(2)$_R$ symmetry of the Higgs sector~\cite{Georgi:1984af}, and top partners embedded in a bi-doublet are preferred by the absence of dangerous tree level corrections to the $Z$ couplings to the left-handed bottom quarks~\cite{Agashe:2006at}. The main difference between the composite case and the Lagrangian we adopted in Eq.~(\ref{eq:LV-SM}) (for the case relevant for the top mass generation) is twofold: on the one hand, in the effective Lagrangian for partially composite tops~\cite{Marzocca:2012zn}, the elementary fields corresponding to the SM tops do not couple directly to the Higgs boson but mix linearly with the composite operators via a mass term generated by the condensate; on the other hand, the Higgs field enters non-linearly in the couplings, thus higher order couplings are implicitly included.
To establish a bridge between our study and models with partially composite tops, we detail here the correspondence between our parameters and the ones of a model based on the symmetry breaking SO(5)/SO(4) (so-called minimal composite Higgs), where the top partners are allowed to transform as a ${\bf 4}$ of the unbroken symmetry SO(4)~\cite{Agashe:2006at,Contino:2006qr}. This discussion is actually valid for any symmetry breaking pattern, as long as an unbroken custodial SO(4) is contained in the unbroken subgroup. We will follow the notation of Ref.~\cite{Cacciapaglia:2015dsa}, where the mass mixing in the effective Lagrangian description reads: \begin{multline} \mathcal{L}_{CHM} \supset - M_4\ (\bar{T}_L T_R + \bar{B}_L B_R + \bar{X}_{5/3 L} X_{5/3 R} + \bar{X}_{2/3 L} X_{2/3 R}) \\ - y_{L4} f\ \left(\bar{b}_L B_R + \cos^2 \frac{\theta}{2} \ \bar{t}_L T_R + \sin^2 \frac{\theta}{2}\ \bar{t}_L X_{2/3 R} \right) - \frac{y_{R4} f \sin \theta}{\sqrt{2}} (\bar{T}_L t_R - \bar{X}_{2/3 L} t_R) + \mbox{h.c.} \end{multline} where $(T,B)$ and $(X_{5/3}, X_{2/3})$ are the two doublets that share a common mass $M_4$; $f$ is the decay constant of the pions in the composite sector (including the Higgs boson) and the angle $\theta$ parameterises in a non-linear way the Higgs vacuum expectation value (VEV), such that $v = f \sin \theta$. Note that the SM elementary doublet $(t,b)$ mixes with the composite doublet with strength $y_{L4} f$ not suppressed by the Higgs VEV, so that we can remove this term by redefining: \begin{equation} t_L = s_{\theta L} U_{1L} + c_{\theta L} u_L^3\,, \quad T_L = c_{\theta L} U_{1L} - s_{\theta L} u_L^3\,, \qquad s_{\theta L} = \sin \theta_L = \frac{y_{L4} f}{\sqrt{M_4^2 + y_{L4}^2 f^2}}\,, \end{equation} and analogously $b_L = s_{\theta L} D_{1L} + c_{\theta L} d_L^3$ and $B_L = c_{\theta L} D_{1L} - s_{\theta L} d_L^3$.
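As a quick consistency check of the rotation above, substituting it into the mass and mixing terms removes the unsuppressed $\bar{t}_L T_R$ coupling and identifies the heavier mass eigenvalue,
\begin{equation}
-M_4\, \bar{T}_L T_R - y_{L4} f\, \bar{t}_L T_R = -\left(c_{\theta L} M_4 + s_{\theta L}\, y_{L4} f\right) \bar{U}_{1L} T_R + \left(s_{\theta L} M_4 - c_{\theta L}\, y_{L4} f\right) \bar{u}_L^3 T_R = -\sqrt{M_4^2+y_{L4}^2 f^2}\; \bar{U}_{1L} T_R\,,
\end{equation}
since $s_{\theta L} M_4 = c_{\theta L}\, y_{L4} f$ by the definition of $\theta_L$.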
Upon identifying the fields $t_R \equiv u_R^3$, $T_R \equiv U_{1R}$, $X_{2/3} \equiv U_2$ and $X_{5/3} \equiv X^{5/3}$, at leading order in the Higgs VEV the parameters in our Lagrangian~(\ref{eq:LV-SM}) match the composite ones as follows: \begin{equation} M_1 = \sqrt{M_4^2 + y_{L4}^2 f^2}\,, \quad M_2 = M_4\; (< M_1)\,, \end{equation} and \begin{equation} \tilde{m}_{33}^{\rm up} = - \frac{y_{R4} f \sin \theta}{\sqrt{2}} s_{\theta L}\,, \quad y_{1u}^3 = \frac{y_{R4} f \sin \theta}{\sqrt{2}} c_{\theta L}\,, \quad y_2^3 = \frac{y_{R4} f \sin \theta}{\sqrt{2}}\; (> y_{1u}^3)\,. \end{equation} The above formulas show that composite models indeed prefer masses for the two doublets that are not equal (and in particular, the hierarchy $M_2 < M_1$ is an outcome) as well as unequal Yukawas, $y_2^3 > y_{1u}^3$. Another interesting possibility, which has received attention in the literature, is that the right-handed top component is itself a massless, fully composite state~\cite{Panico:2012uw,DeSimone:2012fs}. In this case, a direct coupling of the left-handed elementary tops is allowed: \begin{multline} \mathcal{L}_{CtR} \supset - M_4\ (\bar{T}_L T_R + \bar{B}_L B_R + \bar{X}_{5/3 L} X_{5/3 R} + \bar{X}_{2/3 L} X_{2/3 R}) \\ - y_{L4} f\ \left(\bar{b}_L B_R + \cos^2 \frac{\theta}{2} \ \bar{t}_L T_R + \sin^2 \frac{\theta}{2}\ \bar{t}_L X_{2/3 R} \right) - \frac{y_{Rt} f \sin \theta}{\sqrt{2}}\ \bar{t}_R t_L + \mbox{h.c.} \end{multline} where we see that the coupling between the right-handed top and the heavy doublets is replaced by a direct Yukawa with the light left-handed top. The same rotation among doublets can be done as before, now leading to the following identification of Yukawa couplings: \begin{equation} \tilde{m}_{33}^{\rm up} = \frac{y_{Rt} f \sin \theta}{\sqrt{2}} c_{\theta L}\,, \quad y_{1u}^3 = \frac{y_{Rt} f \sin \theta}{\sqrt{2}} s_{\theta L}\,, \quad y_2^3 = 0\,; \end{equation} while the masses of the heavy doublets are the same as above.
\section{Constraints on the parameter space} \label{sec:ewbounds} We examine, in the following, the scenario with two doublets of hypercharge $1/6$ and $7/6$ respectively, each containing a charge $2/3$ top partner, labeled as $U_{1,2}$ in the gauge eigenstate basis and $t^\prime_{1,2}$ in the mass-eigenstate basis, where $m_{t_1^\prime}<m_{t_2^\prime}$. The relation between the masses of $t_{1,2}^\prime$ and the Lagrangian parameters $M_{1,2}$ after the diagonalisation of the mass matrix is described in Appendix \ref{app:masses}. In the numerical study, we considered benchmark values for the mass parameters in the Lagrangian ({\it i.e.} the VLQ mass terms in the gauge eigenstates, before mixing) as follows: $M_1=1000$ GeV and $M_2=\{1100,1200,1400\}$ GeV. We will thus show selected results from those benchmarks. Note that in composite Higgs models one typically expects the opposite mass ordering for the multiplets; experimental bounds, however, push in the other direction, since the bounds on the $X^{5/3}$ exotic-charge member (belonging to the second multiplet) are strong. We take, therefore, benchmark points that take this fact into account and that allow us to explore, in a first step, an overall lower range of masses within immediate or close reach of the LHC. Indirect constraints on the spectrum and couplings of the VLQs arise both at tree level, via modifications to the couplings of the $Z$ and Higgs (and $W$) to the SM quarks \cite{delAguila:2000rc}, and at loop level via contributions to the observables in the EWPT \cite{Cacciapaglia:2010vn,Chen:2017hak} and loop-induced couplings of the Higgs \cite{Bizot:2015zaa}. These constraints give a first indication of the available parameter space that is still interesting to further explore in direct searches at the LHC. Note, however, that we are working under the assumption that the only light NP states are the new VLQs.
Thus, the effect of other states on EWPT is not taken into account, and they may affect the results even if the new particles are heavier than the VLQs. The reader should therefore be aware that the loop-level indirect bounds should not be considered as absolute bounds, but rather they should be taken as an indication in models that contain other particles contributing to these corrections. Tree level bounds, on the other hand, are more solid as they arise directly from the mixing. \begin{figure}[h] \begin{center} \epsfig{file=fig_precision/Firstgen_M1_1000_M2_1100,width=0.46\textwidth}\hfill \epsfig{file=fig_precision/Firstgen_M1_1000_M2_1400,width=0.46\textwidth}\\[8pt] \epsfig{file=fig_precision/Secondgen_M1_1000_M2_1100,width=0.49\textwidth}\hfill \epsfig{file=fig_precision/Secondgen_M1_1000_M2_1400,width=0.49\textwidth} \caption{\label{fig:ewptlight} Tree level (yellow area is excluded at 3$\sigma$), EWPT (blue continuous line corresponds to the 3$\sigma$ bound, green dashed to 2$\sigma$, red dotted to 1$\sigma$; the strip between the lines is allowed) and LHC single VLQ production bounds (vertical black line, excluded region on the right) in the case of mixing of two VLQ multiplets with the first (top panels) or second (bottom panels) SM quark generation. Plots on the left column correspond to benchmark masses $M_1=1000$ GeV and $M_2=1100$ GeV, while on the right to $M_1=1000$ GeV and $M_2=1400$ GeV.} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsfig{file=fig_precision/Thirdgen_M1_1000_M2_1100,width=0.49\textwidth}\hfill \epsfig{file=fig_precision/Thirdgen_M1_1000_M2_1400,width=0.49\textwidth} \caption{\label{fig:ewpt3rdgen}EWPT bounds (blue line is the 3$\sigma$ bound, green dashed 2$\sigma$, red dotted 1$\sigma$; the strip between the lines is allowed) in the case of mixing of the two VLQ multiplets with the third SM quark generation.
} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsfig{file=Higgs/Higgs_M1_1000_M2_1100,width=0.49\textwidth}\hfill \epsfig{file=Higgs/Higgs_M1_1000_M2_1400,width=0.49\textwidth} \caption{\label{fig:higgs}First-generation mixing bounds from Higgs coupling data: the blue dotted line is the 68\% CL bound and the red line corresponds to 95\% CL. Values of the Yukawa couplings below the corresponding curve are allowed.} \end{center} \end{figure} A combination of the numerical results we obtained is shown in Figs.~\ref{fig:ewptlight}, \ref{fig:ewpt3rdgen} and \ref{fig:higgs} for a selection of benchmarks. The details of the bounds we impose are described in the following sub-sections \ref{sec:bounds:3} and \ref{sec:EWPTHiggs}. The general trend is that, for VLQs that couple mainly to the first and second SM quark families, the bounds from EWPTs (curved lines) and tree level $Z$-couplings (excluded yellow area) tend to cover the same parameter space. This was also remarked in \cite{Cacciapaglia:2015ixa}, where the specific case of degenerate or quasi-degenerate multiplets was considered. For an earlier discussion of the degenerate case, we refer the reader to Refs~\cite{Atre:2011ae,Atre:2013ap}. In the cases we cover here, with less degenerate masses, we see that the allowed region shifts in the parameter space of the two Yukawa couplings, while the approximate overlap between tree and loop level bounds is preserved. The vertical black line gives a constraint on the Yukawa coupling coming from direct searches for a VLQ bottom partner at the LHC Run-II (more details in the following sub-section \ref{sec:direct}). We remark that bounds also arise from modifications to the Higgs couplings, mainly via VLQ loop contributions to the couplings to gluons and photons. However, such bounds (shown in Fig.~\ref{fig:higgs}) are much weaker and do not significantly affect the allowed parameter space.
The case of the third generation is quite different, as there are no tree level bounds due to our poor knowledge of the couplings of the $Z$ boson to the top quark. Furthermore, the loop contribution to the Higgs coupling features an interesting cancellation, thus leading to very weak constraints. The loop level EWPTs, however, give similar constraints to the ones from light families, as shown in Fig.~\ref{fig:ewpt3rdgen}, and also feature a characteristic shape due to a cancellation that allows large values of the couplings. \subsection{Tree level bounds} \label{sec:bounds:3} Among the long list of processes at tree level, we consider here only the most significant and effective one for obtaining bounds on the VLQ parameters. Specifically, we use bounds on the modifications to the $Z$ couplings induced by the mixing between VLQs and SM quarks. The couplings of VLQs to gauge bosons are given in Appendix B of \cite{Cacciapaglia:2015ixa}. In the models under consideration, only the mixing of top partners with up-type SM quarks will induce this type of effect. The diagonalisation of the mass matrix is obtained through two unitary matrices $V_L$ and $V_R$, defined by \begin{equation} M_{u} = V_L \cdot M_{u}^{diag}\cdot V_R^\dagger \,, \end{equation} and the mass eigenstates can be obtained by rotating the flavour eigenstates with the same matrices: \begin{equation} \left( \begin{array}{c} u \\ c \\ t \\ t'_1 \\ t'_2 \end{array} \right)_{L/R} = V^\dagger_{L/R} \cdot \left( \begin{array}{c} u^1 \\ u^2 \\ u^3 \\ U_1 \\ U_2 \end{array} \right)_{L/R}\,. \end{equation} The above rotations modify the couplings of SM and VLQs with the gauge bosons, affecting in turn well-measured processes, in particular observables involving the $Z$ boson. The expressions of couplings of VLQs, SM quarks and the gauge bosons of the SM are provided in Appendix~\ref{app:couplings}.
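In practice, the bi-unitary diagonalisation above is just a singular value decomposition of the mass matrix. As an illustration only, one can sketch it numerically for a one-generation truncation, the $3\times 3$ block $(u^3, U_1, U_2)$; the entries and texture below are hypothetical placeholders, not one of our benchmark fits:

```python
import numpy as np

# One-generation toy of the up-sector mass matrix in the flavour basis
# (u^3, U_1, U_2).  Entries in GeV, purely illustrative: the off-diagonal
# numbers stand in for the Yukawa-induced mixings.
Mu = np.array([
    [ 173.0,  100.0,   80.0],
    [   0.0, 1000.0,   60.0],
    [   0.0,   60.0, 1200.0],
])

# Bi-unitary diagonalisation M_u = V_L . diag(m) . V_R^dagger via SVD;
# sort the singular values (physical masses) in ascending order.
U, s, Vh = np.linalg.svd(Mu)
order = np.argsort(s)
masses = s[order]
VL = U[:, order]
VR = Vh.conj().T[:, order]

# Consistency check: the rotations reconstruct the original matrix.
assert np.allclose(VL @ np.diag(masses) @ VR.conj().T, Mu)
print(masses)  # lightest state is top-like; the heavy ones sit near M_1, M_2
```

The columns of `VL` and `VR` then play the role of the mixing-matrix elements $V_{L/R}^{4I}$, $V_{L/R}^{5I}$ that control the coupling modifications discussed next.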
The modifications to the couplings with respect to the SM values are proportional to the $V_{L/R}^{4I}$ and the $V_{L/R}^{5I}$ elements of the mixing matrices, and we recall that for doublets larger mixing angles are obtained in the right-handed sector, while the ones in the left-handed sector are suppressed by the ratio between the SM quark mass and the VLQ masses \cite{Buchkremer:2013bha}. Strong constraints on the $Z$ coupling with first generation SM quarks come from the weak charge measurement in atomic parity violation experiments~\cite{Deandrea:1997wk,Patrignani:2016xqp}. The couplings of the $Z$ to the second generation quarks were tested in detail at LEP~\cite{ALEPH:2005ab}: \begin{equation} g_{ZL}^c = 0.3453 \pm 0.0036\,, \quad g_{ZR}^c = - 0.1580 \pm 0.0051\,, \quad \mbox{corr.} = 0.30 \,. \end{equation} We remark that the bounds shown in Figs \ref{fig:ewptlight} and \ref{fig:ewpt3rdgen} are calculated at 3$\sigma$. For couplings to the third generation, the $W_{tb}$ couplings were measured both at the Tevatron and the LHC. The value of $V_{tb}$ is affected by the mixing of the top with the VLQs in the left-handed sector: \begin{equation} |V_{tb}|^2 = 1-\sum_{K=4,5} |V_L^{K3}|^2\,. \end{equation} A complete list of direct measurements and lower bounds on $V_{tb}$ can be found in~\cite{Chiarelli:2013psr}.\footnote{Note that the strong constraints from the unitarity of the CKM matrix cannot be used, as the mixing with VLQs destroys such unitarity.} Again, for more detailed formulas we refer to \cite{Cacciapaglia:2015ixa}. Numerically, the bounds from $V_{tb}$ are rather weak and do not significantly affect the parameter space for heavy VLQs. \subsection{Electroweak precision tests and Higgs bounds} \label{sec:EWPTHiggs} Electroweak precision measurements, or EWPT, are a standard tool to constrain physics beyond the SM.
They can be used to constrain the parameters of VLQs \cite{Cacciapaglia:2010vn,Chen:2017hak}, but only under the strong hypothesis that, except for the considered contributions, other heavy particles decouple or give negligible contributions. Given the level of precision of the measurements, this is a rather strong assumption and may strongly bias the applicability of the results to specific models. For this reason, in the following, we will consider the bounds from EWPTs as an indication and not as a general exclusion, contrary to the tree level bounds. The Higgs measurements are also entering a precision era and, already at present, give valuable information and limits on the possible extensions of the SM. Models of VLQs are no exception, and looking at the Higgs data gives useful constraints \cite{Bizot:2015zaa}. EWPT and Higgs coupling measurements give rather complementary bounds on the parameter space of VLQ models. Bounds from EWPTs are usually given in terms of the oblique parameters $S$ and $T$, as defined in Refs~\cite{Peskin:1990zt,Peskin:1991sw}. We have considered the following reference SM values: $m_{h, {\rm ref}}=125$ GeV, $m_{t,{\rm ref}}=173$ GeV and $m_{b,{\rm ref}} = 4.2$ GeV. Taking $U=0$, as is the case in the models under scrutiny, the experimental values for the $S$ and $T$ parameters are~\cite{Baak:2014ora}: \begin{equation} S= 0.06 \pm 0.09 \,, \qquad T= 0.10 \pm 0.07 \,, \label{eq:7:10} \end{equation} where the correlation between $S$ and $T$ in this fit is 0.91. For more details and the complete list of formulas we refer to \cite{Cacciapaglia:2015ixa}. The EWPTs, complemented by the tree-level bounds for the light generations, tend to favour situations in which the two Yukawa couplings of the VLQ doublets to the SM are of similar size (see Figures \ref{fig:ewptlight} and \ref{fig:ewpt3rdgen}), giving rise to a funnel region that extends to large values of the Yukawas along the diagonal.
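The way the quoted $S$, $T$ values and their correlation translate into the exclusion contours used in the figures can be sketched with the correlated two-parameter $\Delta\chi^2$; the test point below is arbitrary and purely illustrative:

```python
import numpy as np

# Fit values quoted in the text (with U = 0):
# S = 0.06 +- 0.09, T = 0.10 +- 0.07, correlation 0.91.
S0, sig_S = 0.06, 0.09
T0, sig_T = 0.10, 0.07
rho = 0.91
cov = np.array([[sig_S**2,            rho * sig_S * sig_T],
                [rho * sig_S * sig_T, sig_T**2           ]])

def delta_chi2(S, T):
    """Correlated Delta chi^2 of a model prediction (S, T)."""
    d = np.array([S - S0, T - T0])
    return float(d @ np.linalg.solve(cov, d))

# For two fitted parameters, a 3-sigma contour corresponds to
# Delta chi^2 = 11.83 (2 d.o.f. at p = 0.0027).
print(delta_chi2(0.10, 0.12))  # hypothetical model point, well inside 3 sigma
```

Because of the large positive correlation, shifts that move $S$ and $T$ together are penalised far less than anti-correlated ones, which is what produces the elongated allowed strips in the figures.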
In the non-degenerate VLQ mass case, the funnel is simply rotated away from the exact diagonal, shifting closer to the axis relative to the heavier multiplet. This, as expected, derives from stronger bounds on the Yukawa of the lighter multiplet. Concerning Higgs data, the direct measurement of the couplings to quarks is very challenging: only very recently has the observation of Higgs production in association with tops been reported by CMS~\cite{ttH}, which measured the signal strength with a $30\%$ accuracy, while the couplings to light quarks (with the exception of the bottom) are out of reach. Thus, the only bounds come, indirectly, from loop effects on the couplings to gluons and photons. Being generated at loop level, they also suffer from the possible presence of additional contributions that would thus affect the bounds in more complete models. The combined ATLAS-CMS constraints on $\kappa_\gamma$ and $\kappa_g$ are given in Ref.~\cite{exp_higgs}. The presence of new VLQs, which enter the loops allowing the Higgs boson to couple to photons and gluons, modifies these effective couplings, giving rise to bounds on the parameter space of VLQs. We therefore use those combined constraints in the following to establish bounds on the parameter space for VLQ bi-doublets, as shown in Figure \ref{fig:higgs}. These bounds put an upper limit on the funnel region, which was left unrestricted by tree-level and EWPT data. The results of second generation mixing with VLQs are similar to those for the first generation mixing. On the contrary, the third generation mixing case does not allow one to put any extra constraint using the Higgs results. \subsection{Bounds from direct searches at the LHC} \label{sec:direct} As we already pointed out, VLQs are widely searched for at the LHC. Most efforts, so far, have been directed towards VLQs that decay into third generation quarks and are pair produced via QCD interactions.
For a top partner, the considered final states are $W b$, $Z t$ and $H t$. In the case of doublets, the rate into the charged current is nearly negligible, thus leading to bounds ranging from $1270$~GeV to $1300$~GeV from the latest CMS results~\cite{Sirunyan:2018omb,Sirunyan:2017usq}, while ATLAS~\cite{Aaboud:2018xuw,Aaboud:2017qpr} gives $1170$~GeV to $1430$~GeV. Interestingly, for CMS the stronger bound corresponds to decays exclusively into $Z t$, while for ATLAS into $t H$. For completeness, similar bounds can be obtained for decays into $W b$ final states~\cite{Sirunyan:2017pks,Aaboud:2017zfn}. The bounds on the charge $-1/3$ $B$ and charge $5/3$ $X$, which decay uniquely into $W t$, range from $1100$~GeV (for same-sign lepton channels)~\cite{CMS-PAS-B2G-16-019} to $1300$~GeV (for single lepton channels)~\cite{CMS-PAS-B2G-17-008} for CMS. In the approximations considered in the searches, those bounds do not depend on the value of the mixing angles with the SM quarks. Searches targeting single production channels, which are proportional to the mixing angles, are also available within the latest dataset. CMS has published a search for $B$ in the final state $H t$~\cite{Sirunyan:2018fjh} and for $T$ in the final state $Zt$~\cite{Sirunyan:2017ynj}, while ATLAS has a search in $Wb$ for the 2015 dataset~\cite{ATLAS-CONF-2016-072}. Only the search in the $Zt$ channel can, in principle, be used to set bounds on the Yukawa-like couplings in our model. However, we have checked that the cross sections we obtain are always smaller than the observed bounds. \begin{figure}[h] \begin{center} \epsfig{file=LHC_new/CS_VBq_M11000,width=0.6\textwidth} \caption{\label{fig:direct}Cross sections for single production of a bottom partner $B$ in $p p \to B q$, as a function of $y_1^q$ (in GeV) for first and second generation mixing. The mass is fixed to $1000$~GeV, and the cross sections are compared to the 95\% CL bound from \cite{Sirunyan:2017lzl} at 8 TeV (black horizontal line).
} \end{center} \end{figure} Fewer searches cover the case of the couplings to light generations, and they are limited to Run I data. From QCD pair production~\cite{Sirunyan:2017lzl}, the bounds range from $430$~GeV for exclusive decays into $H t$ to $605$~GeV for $Z t$. Thus, our benchmark points are well above the current exclusion. In this case, however, single production can be very important thanks to the couplings to valence quarks~\cite{Atre:2011ae,Buchkremer:2013bha}. However, interpreting the bounds is more challenging, as they depend on the structure of the couplings to the light quarks that enter the single production. For the charge $2/3$ partners, in our case the dominant production is via the couplings to the $Z$, which is however not covered in the CMS analysis. Thus, the only bound we could directly apply to our scenario is for the single production of a bottom-type VLQ, $B$ in the SM-like multiplet, as cross sections are bounded by the limits for $p p \to B q$ from the CMS analysis~\cite{Sirunyan:2017lzl} at 8 TeV. In turn this provides an upper bound on the maximal value of the Yukawa couplings for the SM-like doublet, $y_1^{u/c}$. To extract the bound, we have calculated the production cross section at LO, using the model implementation described in more detail in Section~\ref{sec:cross-sections}, and compared it to the excluded value at 95\% CL. Note that the mass of the $B$ VLQ is equal to $M_1$, which is fixed to $1000$~GeV in our benchmarks. The result is shown in Figure~\ref{fig:direct}, where we compare the production cross section for couplings to up (in violet) and charm (in red) quarks to the exclusion limit at a cross section of $\sim 250$ fb. Note that we only consider the central value here, and that an increase of the cross section due to QCD NLO effects should be expected \cite{Fuks:2016ftf}. Theoretical errors from scale variation are strongly reduced at NLO.
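Since the single-production cross section scales as the square of the Yukawa-induced mixing at the production vertex, translating an excluded cross section into a coupling bound is a one-line rescaling. A minimal sketch, with a hypothetical reference point rather than our computed LO value:

```python
import math

# Single production proceeds through one Yukawa-induced mixing, so at
# fixed VLQ mass: sigma(y) = sigma_ref * (y / y_ref)**2.
def yukawa_bound(sigma_excl_fb, sigma_ref_fb, y_ref_gev):
    """Largest coupling compatible with an excluded cross section."""
    return y_ref_gev * math.sqrt(sigma_excl_fb / sigma_ref_fb)

# Hypothetical reference point: y = 100 GeV giving 150 fb, confronted
# with the ~250 fb exclusion quoted above.
print(round(yukawa_bound(250.0, 150.0, 100.0), 1))  # -> 129.1 (GeV)
```

The quadratic scaling also means the extracted bound is rather insensitive to moderate changes of the excluded cross section, e.g. from NLO corrections.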
The net bounds on $y_1^{u/c}$ amount to $y_1^u < 130$~GeV and $y_1^c < 485$~GeV, and they are shown as a black vertical line in Figure~\ref{fig:ewptlight}: the region on the left side of the line is allowed. \section{LHC phenomenology} \label{sec:LHCpheno} Having determined the allowed region in parameter space, we now perform a phenomenological analysis of the signatures expected at the LHC. Compared to the current search strategies, which are based on simplified scenarios with a single VLQ, we will consider here in detail the interplay between the two VLQ doublets with hypercharges $Y=1/6$ and $Y=7/6$. We will show that peculiar patterns in the decay rates can be observed, as well as new production channels. Among the key properties of this scenario is the presence of two top partners that mix and have different masses and decay patterns. One feature common to all top partners coming from doublets is that decays via charged currents, {\it i.e.} into a $W^\pm$ boson, are very suppressed, thus searches based on this decay channel (which give the strongest bounds) will be ineffective. As we will see, peculiar decay patterns may be used to effectively tag this kind of scenario. \subsection{Masses and branching ratios} The analytical expressions of the masses and branching ratios (BRs) are reported in Appendices \ref{app:masses} and \ref{app:BRs} respectively. We recall that the values of the masses for $t'_1$ and $t'_2$ are not constant but depend on the values of the two Yukawas, as shown in Fig.~\ref{fig:masses}. We show results for the light quarks and for the benchmark masses $M_1=1000$~GeV and $M_2=1200$~GeV. For mixing with the top, the results are qualitatively similar and quantitatively very close too, as the VLQ masses are already constrained to be much heavier than the top, as discussed in the previous section.
On the other hand, the bottom-partner $B$ and the exotic charged state $X^{5/3}$ have masses fixed, respectively, to $M_1$ and $M_2$, and BRs of 100\% into $B \to W^- u/c/t$ and $X^{5/3} \to W^+ u/c/t$. \begin{figure}[tb] \begin{center} \epsfig{file=fig_masses/masses_gen1_M1_1000_M2_1200,width=0.48\textwidth} \caption{\label{fig:masses} Masses in GeV of $t'_1$ (blue contours) and $t'_2$ (red contours) mixing with the light quarks, for the benchmark $M_1=1000$~GeV and $M_2=1200$~GeV. The contours are shown at intervals of 10 GeV unless specified. Results for mixing with the top quark are numerically almost identical.} \end{center} \end{figure} For the branching ratios, in this section we present sample numerical results for the intermediate benchmark scenario with $M_1=1000$ GeV and $M_2=1200$ GeV, as the results for the other two cases as well as for heavier masses are qualitatively similar. We start from the lighter top partner, $t_1^\prime$. In Fig.~\ref{fig:brtp1_M1012} we show contours of the BRs of a $t_1^\prime$ that mixes with the up quark. The contours are shown in the plane identified by the two Yukawa couplings. Results for mixing to the charm are nearly identical (differences only depend on the mass of the charm, which is much smaller than the VLQ masses), so we superimpose on the same plot the regions excluded by tree level constraints for the two cases: orange for the charm, with the pink area additionally excluded for the up. The orange line marks the additional portion of parameter space that would be excluded at 3$\sigma$ by the loop-level EWPTs, in the absence of additional contributions from New Physics and for mixing to the charm (for the up, the tree level bounds are always dominant). We notice that the charged current is absent, and that the decay rates are mostly sensitive to the value of the Yukawa for the second multiplet. For small values of $y_2^q$, the rates are almost equal between $Z$ and Higgs, while at large values the $Z$ tends to dominate.
The analogous BRs for mixing of $t^\prime_1$ to the top are numerically very similar, due to the smallness of the top mass compared to the VLQ masses; however, the excluded region is different (recall the absence of tree-level constraints). \begin{figure}[tb] \begin{center} \epsfig{file=fig_bratio/BRtp1uH_x1y1_M1012,width=0.47\textwidth}\hfill \epsfig{file=fig_bratio/BRtp1uZ_x1y1_M1012,width=0.47\textwidth} \caption{\label{fig:brtp1_M1012} Branching ratios of $t_1^\prime$ mixing with the up/charm quark. The contours are shown for values of the BR spaced by steps of $2.5\%$. For the light quarks, the orange region is excluded for the charm, while the orange plus pink areas are excluded for the up. The dashed line indicates the region excluded by EWPTs for mixing to the charm. } \end{center} \end{figure} For the heavier $t_2^\prime$, we show the BRs in Fig.~\ref{fig:brtp2_M1012_1stgen} for couplings to light generations. We note the same pattern in the balance between the $Z$ and Higgs final states, but with inverted roles: it is the BR into the Higgs that dominates, in this case, for large values of the Yukawa with the first doublet, $y_1^q$. In addition, decays into the lighter VLQ $t^\prime_1$ are also allowed, but with very small rates that only rise above a few percent for large Yukawa couplings. It is useful to remark that, for mixing to the up quark, the allowed parameter region is very small, thus the values of the BRs are constrained to almost fixed values. For both $t^\prime_1$ and $t^\prime_2$, the rates into $u Z$ and $u H$ are close to $50\%$, while decays $t^\prime_2 \to t^\prime_1 Z/H$ always remain below $1\%$.
\begin{figure}[tb] \begin{center} \epsfig{file=fig_bratio/BRtp2uH_x1y1_M1012,width=0.47\textwidth}\hfill \epsfig{file=fig_bratio/BRtp2uZ_x1y1_M1012,width=0.47\textwidth}\\[8pt] \epsfig{file=fig_bratio/BRtp2tp1H_x1y1_M1012,width=0.47\textwidth}\hfill \epsfig{file=fig_bratio/BRtp2tp1Z_x1y1_M1012,width=0.47\textwidth} \caption{\label{fig:brtp2_M1012_1stgen}Branching ratios of $t_2^\prime$ mixing with the up/charm quark. The contours are shown for values of the BR spaced by steps of $2.5\%$ (unless specified). The orange region is excluded for the charm, while the orange plus pink areas are excluded for the up. The dashed line indicates the region excluded by EWPTs for mixing to the charm.} \end{center} \end{figure} \clearpage \subsection{Cross-sections} \label{sec:cross-sections} The production cross-sections at the LHC also show distinctive patterns. For the calculation, we have used a modified version of the {\sc Feynrules}~\cite{Alloul:2013bka} VLQ model files provided in Ref.~\cite{Fuks:2016ftf}. A modification is necessary for including couplings between VLQs from different multiplets and SM gauge and Higgs bosons\footnote{The modified FeynRules file is available here: \url{http://deandrea.home.cern.ch/deandrea/VLQ_v4.fr}}. Such modifications allow an estimation of processes where VLQs of different multiplets are produced in association, such as $p p \to t_1^\prime t_2^\prime$. We have used MadGraph5 version 2.6.1 \cite{Alwall:2011uj} for the estimation of cross-sections at LO in QCD, using the NN23LO1 parton distribution functions for the proton. We computed the production cross-sections at 13 TeV for the production of the charge $2/3$ VLQs $t_{1,2}^\prime$ in the parameter space allowed by precision, low energy and LHC@8TeV constraints, determined in Section \ref{sec:ewbounds}. We focus on mixing to the up quark, which allows for sizeable single production rates thanks to the couplings to a valence quark in the proton.
We chose, as representative benchmark, the set of input parameters $M_1 = 1000$ GeV and $M_2 = 1200$ GeV and scanned over the allowed values of the Yukawa couplings in the $y_1^u$--$y_2^u$ plane. Specifically, we have considered processes of single production of $t_1^\prime$ and $t_2^\prime$ in association with SM objects, $p p \to t_{1,2}^\prime+\{h,Z,j\}$, and pair production of top partners of the same or different kind, $p p \to t_i^\prime t_j^\prime$ with $i,j=1,2$, therefore including both QCD- and EW-strength couplings. In all cases we have considered the production of both particle and anti-particle states. Our results are summarised in Fig.~\ref{fig:LHC_13TeV}. \begin{figure}[htb] \epsfig{file=LHC_new/CS_Th_M1_1000_M2_1200_13TeV.eps,width=0.5\textwidth} \epsfig{file=LHC_new/CS_TZ_M1_1000_M2_1200_13TeV.eps,width=0.5\textwidth} \\%[8pt]
\epsfig{file=LHC_new/CS_Tj_M1_1000_M2_1200_13TeV.eps,width=0.5\textwidth} ~~\epsfig{file=LHC_new/CS_Tp_M1_1000_M2_1200.eps,width=0.48\textwidth} \caption{\label{fig:LHC_13TeV}Scatter plot of production cross-sections at LHC@13TeV, scanning over the Yukawa coupling of $t_2^\prime$, for the production processes (from top left clockwise): $t_i^\prime+h$, $t_i^\prime+Z$, $t_i^\prime t_j^\prime$ and $t_i^\prime+\text{jet}$ with $(i,j = 1,2)$ and with $M_1=1000$ GeV and $M_2=1200$ GeV. The cross-sections also include the production of the anti-particle states $\overline t_{i,j}^\prime$.} \end{figure} A number of conclusions can be derived: \begin{itemize} \item In the allowed region of parameter space, it is always possible to obtain configurations in which the production cross-section of the heavier VLQ ($t_2^\prime$) is comparable to or even larger than the cross-section for the lighter VLQ ($t_1^\prime$).
For single production channels, this switch happens for values of $y^u_1$ smaller than $60\div 80$ GeV, while for the pair production channel the production of $t_2^\prime$ is comparable to $t_1^\prime$ for values of $y_1^u$ around the upper allowed limit. From a phenomenological point of view, this result can be very interesting because the decay patterns of the heavier top-partner are different from the ones usually considered in experimental searches. This includes the possibility of chain-decays to the lighter $t_1^\prime$, thus opening new channels for experimental exploration. \item The cross-section for production of a pair of VLQs of the same kind exhibits an interesting pattern that indicates the dominance of the EW production mechanism for large values of the Yukawa couplings. In fact, QCD production only depends on the mass, which depends only mildly on the Yukawas. From the bottom-right plot in Fig.~\ref{fig:LHC_13TeV}, however, we see a marked increase in the cross-section of both VLQs for $y_1^u > 60 \div 80$~GeV. In addition, the pure electroweak production of the two VLQs together, $t_1^\prime t_2^\prime$, becomes sizeable in the same parameter region, and it even dominates over pair production for the largest allowed values of the Yukawa couplings. This may be extremely relevant for phenomenological analyses, as the kinematics of the production of a pair of VLQs with different masses will differ from that usually considered in experimental searches, where the same VLQ is produced in pairs only through QCD-driven processes. A similar effect was noted in Ref.~\cite{Cacciapaglia:2009cu} for VLQs in the context of Little Higgs models with T-parity and, more recently, the same phenomenon was noted in \cite{Brooijmans:2018xbu,Fuks:2016ftf}. \end{itemize} The results we highlighted show novel channels that deserve a thorough investigation, as they may give rise to detectable characteristic signatures at the LHC.
Furthermore, an analysis at NLO in QCD~\cite{Fuks:2016ftf,Cacciapaglia:2017gzh} is needed to go beyond a simple cross-section calculation, together with the addition of detector and reconstruction effects. \section{Conclusions} \label{sec:concl} We have considered VLQs in a more general framework than the usual simplified models, namely, we study the presence of two doublets with a general mixing structure with the SM quark generations. This template, inspired by situations which are typically present in various NP models, shows that present bounds in the general case are weaker than those assuming a single VLQ multiplet and coupling only to the third SM quark generation. Moreover, we focused on the two ``top-partner-type'' heavy VLQs present in the case of the two studied multiplets. Due to their peculiar mixing patterns with the SM quarks, they feature production and decay channels that are usually not considered in experimental searches. In particular, we identify regions of parameter space with sizeable Yukawa couplings where the single production of the heavier partner dominates, thus leading to cascade decays. Furthermore, in the same parameter region, production of the two mass eigenstates in association can dominate over QCD and EW pair production. These new features deserve to be included within the exploration programs for NP at the LHC, thus allowing these situations to be tested in detail. \section*{Acknowledgments} AD is partially supported by the Institut Universitaire de France. AD and GC also acknowledge partial support from the Labex-LIO (Lyon Institute of Origins) under grant ANR-10-LABX-66, FRAMA (FR3127, F\'ed\'eration de Recherche ``Andr\'e Marie Amp\`ere''). AD, GC and NG would like to acknowledge the support of the CNRS LIA (Laboratoire International Associ\'e) THEP (Theoretical High Energy Physics) and the INFRE-HEPNET (IndoFrench Network on High Energy Physics) of CEFIPRA/IFCPAR (Indo-French Centre for the Promotion of Advanced Research).
The work of AD, GC and NG was also supported by CEFIPRA/IFCPAR grant number 5904-C. The research of YO is supported in part by JSPS KAKENHI Grant Number JP15K05066. This work is also supported in part by the TYL-FJPPL program. The work of DH is supported by grant number NSFC-11422544.
\section{Introduction} Remotely-sensed hyperspectral images (HSI) are images taken from airplanes or satellites that record a wide range of the electromagnetic spectrum, typically more than 100 spectral bands from visible to near-infrared wavelengths. Since different materials reflect different spectral signatures, one can identify the materials at each pixel of the image by examining its spectral signatures. HSI is used in many applications, including agriculture \cite{Patel2001,Datt2003}, disaster relief \cite{Trierscheid2008,Eismann2009}, food safety \cite{Lu1999,Gowen2007}, military \cite{Manolakis2002,Stein2002} and mineralogy \cite{Horig2001}. One of the most important problems in hyperspectral data exploitation is HSI classification. It has been an active research topic in the past decades \cite{Mountrakis2011,Fauvel2013}. The pixels in the hyperspectral image are labeled manually by experts based on careful review of the spectral signatures and investigation of the scene. Given these ground-truth labels (also called ``training pixels''), the objective of HSI classification is to assign labels to part or all of the remaining pixels (the ``testing pixels'') based on their spectral signatures and their locations. Numerous methods have been developed for HSI classification. Among these, machine learning is a well-studied approach. It includes multinomial logistic regression \cite{Li2010,Li2012,Li2013}, artificial neural networks \cite{Benediktsson2005,Yue2015,Makantasis2015,Morchhale2016,Pan2017}, and support vector machines (SVMs) \cite{Boser1992,Cortes1995,Scholkopf2000}. Since our method is partly based on SVMs, we discuss them in more detail here. The original SVM classification method \cite{Melgani2004,Camps2005} performs pixel-wise classification that utilizes spectral information but not spatial dependencies. Numerous spectral-spatial SVM classification methods have been introduced since then.
They show better performance than the pixel-wise SVM classifiers. Here we report some of them. SVMs with composite kernels \cite{Camps2006} use composite kernels that are weighted summations of spectral kernels and spatial kernels. The spatial information is extracted by taking the average of the spectra in a fixed window around each pixel. To further utilize the spatial information, the method in \cite{Fang2015G} first applies superpixel segmentation to break the hyperspectral image into small regions with flexible shapes and sizes. Then it extracts the spatial information based on the segmentation and finally performs the classification using SVMs with multiple kernels. In \cite{Tarabalka2009}, a pixel-wise SVM classification is first used to produce classification maps, then a partitional clustering is applied to obtain a segmentation of the hyperspectral image. Then a majority vote scheme is used in each cluster and finally a filter is applied to denoise the result. The method in \cite{Kang2014} first produces pixel-wise classification maps using SVMs and then applies edge-preserving filtering to the classification maps. In addition to these methods, techniques based on Markov random fields \cite{tarabalka2010}, segmentation \cite{Tarabalka2009,Ghamisi2014,Fang2015G,Liu2017} and morphological profiles \cite{Fauvel2008,Liu2017} have also been incorporated into SVMs to exploit the spatial information. Besides machine learning approaches, another powerful approach is sparse representation \cite{Bruckstein2009}. It is based on the observation that spectral signatures within the same class usually lie in a low-dimensional subspace; therefore test data can be represented by a few atoms in a training dictionary. A joint sparse representation method is introduced in \cite{Chen2011} to make use of the spatial homogeneity of neighboring pixels. In particular, each test pixel and its neighboring pixels inside a fixed window are jointly sparsely represented.
In \cite{Chen2013}, a kernel-based sparse algorithm is proposed which incorporates the kernel functions into the joint sparse representation method. It uses a fixed size local region to extract the spatial information. Approaches with more flexible local regions were proposed in \cite{Fang2014} and \cite{Fang2015}. They incorporate a multiscale scheme and superpixel segmentation into the joint sparse representation method respectively. Multiple-feature-based adaptive sparse representation was proposed in \cite{Fang2017}. It first extracts various spectral and spatial features and then the adaptive sparse representations of the features are computed. The method in \cite{Li2016} first estimates the pixel-wise class probabilities using SVMs, then applies sparse representation to obtain superpixel-wise class probabilities in which spatial information is utilized, and the final result is obtained by combining both probabilities. A pixel-wise classifier (such as SVM), which considers only spectral information, generates results with decent accuracy, but the results appear noisy as spatial information is not used, see \cite{Melgani2004} and also \Cref{example_noisy_pixelSVM}. The noise can be removed by image denoising techniques that incorporate the spatial information. Image denoising is a well-studied subject and numerous effective denoising methods have been introduced \cite{Chan2000,Nikolova2004,Chan2005,Hintermuller2006,Bredies2010}. In this paper, we propose a simple but effective two-stage classification method inspired by our two-stage method for impulse noise removal \cite{Chan2005}. In the first stage, we apply a pixel-wise SVM method that exploits the spectral information to estimate a pixel-wise probability map for each class. In the second stage, we apply a convex denoising model to exploit the spatial information so as to obtain a smooth classification result. In the second stage, the training pixels are kept fixed as their ground-truth labels are already given.
In this sense, this stage is exactly the same as the second stage in our impulse noise removal method in \cite{Chan2005}. \begin{figure}[h!] \centering \includegraphics[scale=1.3]{figures/z_indianpines_temp_label_map_SVM.pdf} \caption[]%
{{\small An example of classification result using pixel-wise SVM classifier\label{example_noisy_pixelSVM}}} \end{figure} Our method utilizes only spectral information in the first stage and spatial information in the second stage. Experiments show that our method generates very competitive accuracy compared to the state-of-the-art methods on real HSI data sets, especially when the inter-class spectra are similar or the percentage of training pixels is high. This is because our method can effectively exploit the spatial information even when the other methods cannot distinguish the spectra. Moreover, our method has a small number of parameters and a shorter computational time than the state-of-the-art methods. This paper is organized as follows. In \Cref{sec:review} the support vector machine and variational denoising methods are reviewed. In \Cref{sec:proposed_method} our proposed two-stage classification method is presented. In \Cref{sec:results} experimental results are presented to illustrate the effectiveness of our method. \Cref{sec:conclusion} concludes the paper. \section{Support Vector Machines and Denoising Methods}\label{sec:review} \subsection{Review of $\nu$-Support Vector Classifiers} Support vector machines (SVMs) have been used successfully in pattern recognition \cite{Pontil1998}, object detection \cite{El2002,Osuna1997}, and financial time series forecasting \cite{Tay2001,kim2003}, among others. They also have superior performance in hyperspectral classification, especially when the dimensionality of the data is high and the number of training data is limited \cite{Melgani2004,Camps2005}.
In this subsection, we review the $\nu$-support vector classifier ($\nu$-SVC) \cite{Scholkopf2000} which will be used in the first stage of our method. Consider for simplicity a supervised binary classification problem. We are given $m$ training data $\{\mathbf{x}_{i}\}_{i=1}^m$ in $\mathbb{R}^{d}$, each of which is associated with a binary label $y_i \in \{-1,+1\}$, $i=1,2,...,m$. In the training phase of SVM, one aims to find a hyperplane that separates the two classes of labels and maximizes the distance between the hyperplane and the closest training data, which are called the support vectors. In the kernel SVM, the data is mapped to a higher dimensional feature space by a feature map $\phi:\mathbb{R}^d\rightarrow\mathbb{R}^h$ in order to improve the separability between the two classes. The $\nu$-SVC is an advanced support vector classifier which enables the user to specify the maximum training error before the training phase. Its formulation is given as follows: \begin{equation}\label{nu_svm} \begin{cases} \hfil \underset{\mathbf{w},b,\xi,\rho}{\text{min }}\, \frac{1}{2}||\mathbf{w}||_2^2 - \nu \rho + \frac{1}{m} \sum\limits_{i=1}^{m} \xi_i \\ \text{subject to: } y_i(\mathbf{w}\cdot \phi(\mathbf{x}_{i})+b)\geq \rho-\xi_{i},\; i=1,2,\ldots,m, \\ \hfil \xi_i \geq 0, \; i=1,2,\ldots,m, \\ \hfil \rho \geq 0, \end{cases} \end{equation} where $\mathbf{w} \in \mathbb{R}^{h}$ and $b\in \mathbb{R}$ are the normal vector and the bias of the hyperplane respectively, the $\xi_i$'s are the slack variables which allow training errors, and $\rho/||\mathbf{w}||_2$ is the distance between the hyperplane and the support vectors. The parameter $\nu\in(0,1]$ can be shown to be an upper bound on the fraction of training errors \cite{Scholkopf2000}.
The optimization problem (\ref{nu_svm}) can be solved through its Lagrangian dual: \begin{equation}\label{dual_nu_svm} \begin{cases} \hfil \underset{\mathbf{\alpha}}{\text{max }}\, -\frac{1}{2}\sum\limits_{i,j=1}^{m} \alpha_{i}\alpha_{j}y_{i}y_{j}K(\mathbf{x}_i,\mathbf{x}_j) \\ \text{subject to: } 0\leq \alpha_i \leq \frac{1}{m},\; i=1,2,\ldots,m,\\ \hfil \sum\limits_{i=1}^{m}\alpha_i y_i=0, \\ \hfil \sum\limits_{i=1}^{m} \alpha_i \geq \nu. \end{cases} \end{equation} The optimal Lagrange multipliers can be calculated using quadratic programming methods \cite{Vapnik1998}. After obtaining them, the parameters of the optimal hyperplane can be represented by the Lagrange multipliers and the training data. The decision function for a test pixel $\mathbf{x}$ is given by: \begin{equation}\label{eq:SVM_decision} g(\mathbf{x})=\text{sgn}(f(\mathbf{x})),\;\text{where }f(\mathbf{x})= \sum\limits_{i=1}^{m} \alpha_i y_i K(\mathbf{x}_i,\mathbf{x})+b. \end{equation} Mercer's Theorem \cite[p.~423-424]{Vapnik1998} states that a symmetric function $K$ can be represented as an inner product of some feature map $\phi$, i.e. $K(\mathbf{x},\mathbf{y})=\phi(\mathbf{x})\cdot \phi(\mathbf{y})$ for all $\mathbf{x},\mathbf{y}$, if and only if $K$ is positive semi-definite. In that case, the feature map $\phi$ need not be known in order to perform the training and classification; only the kernel function $K$ is required. Examples of $K$ satisfying the condition in Mercer's Theorem include $K(\mathbf{x}_i,\mathbf{x}_j) = \text{exp}(-||\mathbf{x}_i-\mathbf{x}_j||^2/(2\sigma^2))$ and $K(\mathbf{x}_i,\mathbf{x}_j) =(\mathbf{x}_i \cdot \mathbf{x}_j)^p$. \subsection{Review of Denoising Methods} Let $\Omega=\{1,...,N_1\}\times\{1,...,N_2\}$ be the index set of pixel locations of an image, let ${\bf v}$ be the noisy image and let ${\bf u}$ be the restored image. One famous approach for image denoising is the total variation (TV) method.
It involves an optimization model with a TV regularization term which corresponds to the function $\|\nabla \cdot \|_1$. However, it is known to reproduce images with a staircase effect, i.e. with piecewise constant regions. Here, we introduce two approaches to improve it, both of which are related to our proposed method. The first approach is to add a higher-order term, see, \textit{e.g.}, \cite{Mumford1994,Chan2000,Shen2003,Hintermuller2006,Bredies2010}. In \cite{Hintermuller2006}, the authors considered minimizing \begin{equation}\label{convex_MF} H(\mathbf{u})=\frac{1}{2}||\mathbf{v}-\mathbf{u}||_2^2+\alpha_1||\nabla \mathbf{u}||_1+\frac{\alpha_2}{2}||\nabla \mathbf{u}||_2^2. \end{equation} Here the first term is the $\ell_2$ data-fitting term that caters for Gaussian noise. The second term is the TV term, while the third term is the extra higher-order term added to introduce smoothness to the restored image $\mathbf{u}$. By setting the parameters $\{\alpha_i\}_{i=1}^2$ appropriately, one can control the trade-off between a piecewise constant and a piecewise smooth $\mathbf{u}$. In \cite{Cai2013,Chan2014,Cai2017}, the authors derived the same minimization functional (\ref{convex_MF}) as a convex and smooth approximation of the Mumford--Shah model for segmentation. They applied it successfully for segmenting greyscale and color images corrupted by different noise (Gaussian, Poisson, Gamma), information loss and/or blur. The second approach is to smooth the TV function $\|\nabla \cdot\|_1$. In \cite{Chan2005}, a two-stage method is proposed to restore an image ${\bf v}$ corrupted by impulse noise. In the first stage, an impulse noise detector called the Adaptive Median Filter \cite{Hwang1995} is used to detect the locations of possible noisy pixels.
Then in the second stage, it restores the noisy pixels while keeping the non-noisy pixels unchanged by minimizing: \begin{align}\label{eq:l1_approx_restrict} \begin{split} F(\mathbf{u})& = ||\mathbf{v}-\mathbf{u}||_1+\frac{\beta}{2} \| \nabla \mathbf{u}\|^{\alpha},\\ \text{s.t. } & \mathbf{u}|_{\Upsilon}=\mathbf{v}|_{\Upsilon}, \end{split} \end{align} where $\Upsilon$ is the set of non-noisy pixels, $\mathbf{u}|_{\Upsilon}=(u_i)_{i\in \Upsilon}$, and $1 < \alpha \le 2$. This two-stage method is the first method that can successfully restore images corrupted by extremely high levels of impulse noise (e.g. 90\%). Our proposed method is inspired by this two-stage method. In the first stage we use the spectral classifier $\nu$-SVC to generate a pixel-wise probability map for each class. Then in the second stage, we use a combination of (\ref{convex_MF}) and (\ref{eq:l1_approx_restrict}) to restore the mis-classified pixels, subject to the constraint that the training pixels are kept unchanged since their ground-truth labels are already given. \section{Our Two-stage Classification Method}\label{sec:proposed_method} SVMs yield decent classification accuracy \cite{Melgani2004} but their results can be noisy (see Figure \ref{example_noisy_pixelSVM}) since only spectral information is used. We therefore propose to use a denoising scheme to incorporate the spatial information into the classification. Our method first estimates the pixel-wise probability map for each class using SVMs. Then the spatial positions of the training data are used in the denoising scheme to effectively remove the noise in the map. \subsection{First Stage: Pixel-wise Probability Map Estimation} \subsubsection{SVM Classifier} HSI classification is a multi-class classification problem but the SVM is a binary classifier. To extend SVM to the multi-class setting, we use the One-Against-One (OAO) strategy \cite{Hsu2002} where $c(c-1)/2$ SVMs are built to classify every possible pair of classes.
Here $c$ is the number of classes. In this paper, we choose the SVM method $\nu$-SVC \cite{Scholkopf2000} with the OAO strategy for the HSI multiclass classification in our first stage. We remark that one can use other SVMs or multiclass strategies, such as the One-Against-All strategy in \cite{Hsu2002}, instead. Moreover, the radial basis function (RBF) kernel is used as the kernel function in our SVM method. The RBF kernel is defined as: \begin{equation}\label{RBFkernel} K(\mathbf{x}_i,\mathbf{x}_j) = \text{exp}\Big(-\frac{||\mathbf{x}_i-\mathbf{x}_j||^2}{2\sigma^2}\Big). \end{equation} \subsubsection{Probability Estimation of SVM Outputs} Given a testing pixel ${\bf x}$ and an SVM classifier with decision function $f({\bf x})$ in (\ref{eq:SVM_decision}), we can label ${\bf x}$ with a class according to the sign of $f({\bf x})$, see \cite{Cortes1995}. Under the OAO strategy, there are $c(c-1)/2$ such pairwise functions $f_{i,j}$, $1\le i,j \le c$, $i\neq j$. We use them to estimate the probability ${p_i}$ that $\mathbf{x}$ is in the $i$-th class. The idea is given in \cite{SVM_bin_prob,Wu2004}. We first estimate the pairwise class probability ${\rm Prob}(y=i \ | \ y=i \text{ or } y=j)$ by computing \begin{equation} r_{i,j}=\frac{1}{1+e^{\rho f_{i,j}(\mathbf{x})+\tau}}, \end{equation} where $\rho$ and $\tau$ are computed by minimizing a negative log likelihood problem over all the training pixels \cite{SVM_bin_prob}. Then the probability vector $\mathbf{p}=[p_1,p_2,...,p_c]^T$ of the testing pixel ${\bf x}$ is estimated by solving: \begin{align} & \underset{\mathbf{p}}{\text{min}}\; \frac{1}{2}\sum_{i=1}^{c}\sum_{j\neq i}(r_{j,i}p_i-r_{i,j}p_j)^2, \nonumber \\ & \text{s.t. }p_i \geq 0, \forall i,\; \sum_{i=1}^{c}p_i=1.
\label{eq:ori_coupling} \end{align} Its optimal solution can be obtained by solving the following simple linear system, see \cite{Wu2004}: \begin{equation}\label{eq:lin_coupling} \begin{bmatrix} Q & \mathbf{e} \\ \mathbf{e}^T & {0} \end{bmatrix} \begin{bmatrix} \mathbf{p} \\ {b} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ {1} \end{bmatrix}, \end{equation} where \begin{align*} Q_{ij} = \begin{cases} \sum\limits_{s\neq i} r_{s,i}^2 \; &\text{if } i=j, \\ -r_{j,i}r_{i,j} & \text{if } i\neq j, \end{cases} \end{align*} ${b}$ is the Lagrange multiplier of the equality constraint in (\ref{eq:ori_coupling}), $\mathbf{e}$ is the $c$-vector of all ones, and $\mathbf{0}$ is the $c$-vector of all zeros. In our tests, the probability vectors $\mathbf{p}({\bf x})$ for all testing pixels ${\bf x}$ are computed by this method using the toolbox of the LIBSVM library \cite{libsvm}. We finish Stage 1 by forming the 3D tensor $\mathcal{V}$ where $\mathcal{V}_{i,j,k}$ gives the probability that pixel $(i,j)$ is in class $k$. More specifically, if pixel $(i,j)$ is a testing pixel, then $\mathcal{V}_{i,j,:}=\mathbf{p}({\bf x}_{i,j})$; if pixel $(i,j)$ is a training pixel belonging to the $k'$-th class, then $\mathcal{V}_{i,j,k'}=1$ and $\mathcal{V}_{i,j,k}=0$ for all $k\neq k'$. \subsection{Second Stage: Restoring the Pixel-wise Probability Map} Given the probability tensor $\mathcal{V}$ obtained in Stage 1, one can obtain an HSI classification by taking the maximum probability for each pixel \cite{Kang2014}. However, the result will appear noisy as no spatial information is taken into account. The goal of our second stage is to incorporate the spatial information into $\mathcal{V}$ by a variational denoising method that keeps the values of the training pixels unchanged during the optimization, as their ground-truth labels are given a priori. Let $\textbf{v}_{k}:=\mathcal{V}_{:,:,k}$ be the ``noisy'' probability map of the $k$-th class, where $k=1,...,c$.
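The coupling system (\ref{eq:lin_coupling}) of Stage 1 is only of size $c+1$ and can be solved directly. The following Python sketch is a stand-alone illustration (our own naming, not LIBSVM's implementation); the pairwise estimates $r_{i,j}$ are supplied here as a hypothetical input matrix rather than computed from SVM decision values:

```python
import numpy as np

def couple_probabilities(r):
    """Solve the coupling linear system for the class-probability vector p.

    r is a c-by-c matrix of pairwise estimates r[i, j] ~ Prob(y=i | y=i or j),
    with r[i, j] + r[j, i] = 1 for i != j (a hypothetical input in this sketch;
    in the paper it comes from the sigmoid fit to the SVM decision values).
    """
    c = r.shape[0]
    Q = np.empty((c, c))
    for i in range(c):
        for j in range(c):
            if i == j:
                Q[i, i] = sum(r[s, i] ** 2 for s in range(c) if s != i)
            else:
                Q[i, j] = -r[j, i] * r[i, j]
    # bordered system [[Q, e], [e^T, 0]] [p; b] = [0; 1]
    A = np.zeros((c + 1, c + 1))
    A[:c, :c] = Q
    A[:c, c] = 1.0  # the all-ones vector e
    A[c, :c] = 1.0
    rhs = np.zeros(c + 1)
    rhs[c] = 1.0
    return np.linalg.solve(A, rhs)[:c]  # drop the multiplier b
```

When $r_{i,j}=p_i/(p_i+p_j)$ for some probability vector $\mathbf{p}$, every residual $r_{j,i}p_i-r_{i,j}p_j$ vanishes, so the routine recovers $\mathbf{p}$ exactly.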
We restore each map $\mathbf{v}_k$ by minimizing: \begin{align} \label{eqn:l2l1anisoll2_eq} \begin{split} \underset{\mathbf{u}}{\text{min }}&\frac{1}{2}||\mathbf{u}-\mathbf{v}_k||_2^2+\beta_{1}||\nabla \mathbf{u}||_1+\frac{\beta_{2}}{2}||\nabla\mathbf{u}||_2^2,\\ \text{s.t. }& \mathbf{u}|_{\Upsilon}=\mathbf{v}_k|_{\Upsilon}, \end{split} \end{align} where $\beta_1$, $\beta_2$ are regularization parameters and ${\Upsilon}$ is the set of training pixels. We choose this minimization functional because it gives superb performance in denoising and segmenting various types of images, see \cite{Hintermuller2006,Cai2013,Chan2014,Cai2017}. The higher-order $||\nabla\mathbf{u}||_2^2$ term encourages smoothness of the solution and can improve the classification accuracy, see \Cref{sec:effect_higher_order}. In our tests, we use anisotropic TV \cite{Zhao2013} and periodic boundary conditions for the discrete gradient operator, see \cite[p.~258]{Gonzales1992}. The alternating direction method of multipliers (ADMM) \cite{ADMM2011} is used to solve (\ref{eqn:l2l1anisoll2_eq}). First, we rewrite (\ref{eqn:l2l1anisoll2_eq}) as follows: \begin{align}\label{formulation_constraint} \begin{split} \underset{\mathbf{u}}{\text{min }}&\frac{1}{2}||\mathbf{u}-\mathbf{v}_k||_2^2+ \beta_{1}||\mathbf{s}||_{1}+ \frac{\beta_{2}}{2}||D \mathbf{u}||_2^2+ \iota_\mathbf{w}\\ \text{s.t. } & \mathbf{s} = D \mathbf{u} {\rm \ and \ } \mathbf{w} = \mathbf{u}. \end{split} \end{align} Here $D$ denotes the discrete version of $\nabla$, $D=\begin{pmatrix}[0.6]D_x \\D_y \\ \end{pmatrix} \in \mathbb{R}^{2n\times n}$, where $D_x$ and $D_y$ are the first-order difference matrices in the horizontal and vertical directions respectively and $n$ is the number of pixels, and $\iota_\mathbf{w}$ is the indicator function with $\iota_\mathbf{w}=0$ if $\mathbf{w}|_{\Upsilon}=\mathbf{v}_k|_{\Upsilon}$ and $\iota_\mathbf{w}=\infty$ otherwise.
Its augmented Lagrangian is given by: \begin{equation}\label{Largrangian} L(\mathbf{u},\mathbf{s},\mathbf{w},\boldsymbol{\lambda})=\frac{1}{2}||\mathbf{u}-\mathbf{v}_k||_2^2+\beta_1||\mathbf{s}||_1+\frac{\beta_{2}}{2}||D\mathbf{u}||_2^2+ \iota_\mathbf{w}+\frac{\mu}{2}||E\mathbf{u}-{\bf g}-\boldsymbol{\lambda}||_2^2, \end{equation} where $\mu>0$ is a positive constant, $E = \begin{pmatrix}[0.6]D \\I \\ \end{pmatrix}$, ${\bf g}=\begin{pmatrix}[0.6]\mathbf{s} \\ \mathbf{w} \\ \end{pmatrix}$ and $\boldsymbol{\lambda} = \begin{pmatrix}[0.6]\boldsymbol{\lambda}_1 \\\boldsymbol{\lambda}_2 \\ \end{pmatrix}$ the Lagrange multipliers. The formulation (\ref{Largrangian}) allows us to solve $\mathbf{u}$ and ${\bf g}$ alternately as follows: \begin{subequations} \begin{align} \mathbf{u}^{(r+1)}&=\underset{\mathbf{u}}{\text{argmin }}\ \bigg\{ \frac{1}{2}||\mathbf{u}-\mathbf{v}_k||_2^2+\frac{\beta_{2}}{2}||D\mathbf{u}||_2^2 +\frac{\mu}{2}||E\mathbf{u}-{\bf g}^{(r)}-\boldsymbol{\lambda}^{(r)}||_2^2 \bigg\} \label{sub_u}\\ {\bf g}^{(r+1)} &= \underset{{\bf g}}{\text{argmin }}\ \bigg\{ \beta_1||\mathbf{s}||_1 + \iota_\mathbf{w}+\frac{\mu}{2}||E\mathbf{u}^{(r+1)}-{\bf g}-\boldsymbol{\lambda}^{(r)}||_2^2 \bigg\} \label{sub_F} \\ \boldsymbol{\lambda}^{(r+1)} &= \boldsymbol{\lambda}^{(r)}-E\mathbf{u}^{(r+1)}+{\bf g}^{(r+1)} \label{sub_sigma} \end{align} \end{subequations} The $\mathbf{u}$-subproblem (\ref{sub_u}) is a least squares problem. Its solution is \begin{equation}\label{sol_u} \mathbf{u}^{(r+1)}=(I+\beta_2D^{T}D+\mu E^{T}E)^{-1}(\mathbf{v}_k+\mu E^{T}({\bf g}^{(r)}+\boldsymbol{\lambda}^{(r)})). \end{equation} Since periodic boundary conditions are used, the solution can be computed efficiently using the two-dimensional fast Fourier transform (FFT) \cite{Chan2007book} in $O(n\log n)$ complexity. 
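Concretely, since $E^TE=D^TD+I$, the system matrix in (\ref{sol_u}) equals $(1+\mu)I+(\beta_2+\mu)D^TD$, which is diagonalized by the 2D DFT under periodic boundary conditions. The following is our own sketch of the resulting $\mathbf{u}$-update (array names are hypothetical; gradient fields are passed as pairs of images):

```python
import numpy as np

def update_u(v, s, w, lam1, lam2, beta2, mu):
    """Solve (I + beta2*D^T D + mu*E^T E) u = v + mu*E^T(g + lam) by FFT,
    where E = [D; I]; s and lam1 are pairs (x-field, y-field) and
    v, w, lam2 are (n1, n2) arrays on the image grid."""
    n1, n2 = v.shape
    # eigenvalues of D^T D (periodic 5-point Laplacian) on the DFT basis
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1)
    symbol = wx[None, :] + wy[:, None]
    # E^T(g + lam) = D_x^T(s_x + lam1_x) + D_y^T(s_y + lam1_y) + (w + lam2)
    ax, ay = s[0] + lam1[0], s[1] + lam1[1]
    div = (np.roll(ax, 1, axis=1) - ax) + (np.roll(ay, 1, axis=0) - ay)
    rhs = v + mu * (div + w + lam2)
    denom = (1.0 + mu) + (beta2 + mu) * symbol
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

The update is two FFTs plus a pointwise division, matching the stated $O(n\log n)$ cost.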
For the ${\bf g}$-subproblem, the optimal $\mathbf{s}$ and $\mathbf{w}$ can be computed separately as follows: \begin{equation}\label{sub_s} \mathbf{s}^{(r+1)} = \underset{\mathbf{s}}{\text{argmin }}\ \bigg\{ \beta_1||\mathbf{s}||_1+\frac{\mu}{2} ||D \mathbf{u}^{(r+1)}-\mathbf{s} - \boldsymbol{\lambda}_1^{(r)} ||_2^2 \bigg\} \end{equation} and \begin{equation}\label{sub_w} \mathbf{w}^{(r+1)} = \underset{\mathbf{w}}{\text{argmin }}\ \bigg\{ \iota_\mathbf{w}+\frac{\mu}{2} ||\mathbf{u}^{(r+1)}- \mathbf{w} -\boldsymbol{\lambda}_2^{(r)}||_2^2 \bigg\} \end{equation} The solution of (\ref{sub_s}) can be obtained by soft thresholding \cite{Combettes2005}: \begin{equation}\label{sol_s} {[\mathbf{s}^{(r+1)}]}_{i}= \text{sgn}([{\bf r}]_i)\cdot \text{max}\{|[{\bf r}]_i|-\frac{\beta_1}{\mu},0\},\;i=1,...,2n, \end{equation} where ${\bf r}=D\mathbf{u}^{(r+1)}-\boldsymbol{\lambda}_1^{(r)}$. The solution of (\ref{sub_w}) is simply \begin{equation}\label{sol_w} [\mathbf{w}^{(r+1)}]_i = \left\{ \begin{array}{ll} [\mathbf{v}_k]_i & {\rm if } \ i \in {\Upsilon} ,\\ {[\mathbf{u}^{(r+1)}-\boldsymbol{\lambda}_2^{(r)}]}_i & {\rm otherwise.} \end{array} \right. \end{equation} Note that the computations in (\ref{sub_sigma}), (\ref{sol_s}) and (\ref{sol_w}) have a computational complexity of $O(n)$. Hence the overall computational complexity is $O(n \log n)$ per iteration. Our algorithm is summarized in Algorithm \ref{alg:ADMM}. Its convergence to the global minimum is guaranteed by \cite{ADMM2011}. Once it finishes, we obtain the restored votes $\mathbf{u}$ for class $k$. We denote it as $\mathcal{U}_{:,:,k}$. After the votes for each class are restored, we get a 3D tensor $\mathcal{U}$. The final classification of the $(i,j)$-th pixel is given by finding the maximum value in $\mathcal{U}_{i,j,:}$, i.e. $\underset{k}{\text{argmax }}\, \mathcal{U}_{i,j,k}$. \begin{algorithm} \begin{centering} \begin{algorithmic}[1] \Initialize{Set $r=0$.
Choose $\mu>0$, $\mathbf{u}^{(0)}$, $\mathbf{s}^{(0)}$, $\boldsymbol{\lambda}^{(0)}$ and $\mathbf{w}^{(0)}$ where $\mathbf{w}^{(0)}|_{\Upsilon}=\mathbf{v}_k|_{\Upsilon}$.} \State{\textbf{When stopping criterion is not yet satisfied, do:}} \State{$\;\;\;\;$ $\mathbf{u}^{(r+1)}\leftarrow(I+\beta_2D^{T}D+\mu E^{T}E)^{-1}(\mathbf{v}_k+\mu E^{T}({\bf g}^{(r)}+\boldsymbol{\lambda}^{(r)}))$} \State{$\;\;\;\;$ $\mathbf{s}^{(r+1)}\leftarrow \text{sgn}({\bf r})\cdot \text{max}\{|{\bf r}|-\frac{\beta_1}{\mu},0\},$ where$\;{\bf r}=D\mathbf{u}^{(r+1)}-\boldsymbol{\lambda}_1^{(r)}$} \State{$\;\;\;\;$ $\mathbf{w}^{(r+1)}|_{\Omega\setminus \Upsilon}\leftarrow (\mathbf{u}^{(r+1)}-\boldsymbol{\lambda}_2^{(r)})|_{\Omega\setminus \Upsilon}$} \State{$\;\;\;\;$ $\boldsymbol{\lambda}^{(r+1)} \leftarrow \boldsymbol{\lambda}^{(r)}-E\mathbf{u}^{(r+1)}+{\bf g}^{(r+1)}$} \end{algorithmic} \par\end{centering} \caption{ADMM update process for solving (\ref{eqn:l2l1anisoll2_eq})\label{alg:ADMM}} \end{algorithm} We remark that in Stage 1, the operation is along the spectral dimension, i.e. the third index of the tensor, while in Stage 2, the operation is along the spatial dimension, i.e. the first two indices of the tensor. \section{Experimental Results}\label{sec:results} \subsection{Experimental Setup} \subsubsection{Data Sets} Three commonly-tested hyperspectral data sets are used in our experiments. These data sets have labeled pixels so that we can compare the methods quantitatively. The first one is the ``Indian Pines" data set acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines test site in North-western Indiana. It has a spatial resolution of 20 m per pixel and a spectral coverage ranging from 0.2 to 2.4 $\mu$m in 220 spectral bands. However, due to water absorption, 20 of the spectral bands (the 104th--108th, 150th--163rd and 220th bands) are discarded in experiments in previous papers.
Therefore our data set is of size $145 \times 145 \times 200$, and there are 16 classes in the given ground-truth labels. The second and third images are the ``University of Pavia" and ``Pavia Center" data sets acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over Pavia in northern Italy. The sensor has 1.3 m spatial resolution and spectral coverage ranging from 0.43 to 0.86 $\mu$m. The data set sizes are $610 \times 340 \times 103$ and $1096 \times 715 \times 102$ respectively, where the third dimension is the spectral dimension. Both sets have 9 classes in the ground-truth labels. \subsubsection{Methods Compared and Parameters Used} We have compared our method with five well-known classification methods: $\nu$-support vector classifiers ($\nu$-SVC) \cite{Scholkopf2000,Melgani2004} (i.e. the first stage of our method), SVMs with composite kernels (SVM-CK) \cite{Camps2006}, edge-preserving filtering (EPF) \cite{Kang2014}, superpixel-based classification via multiple kernels (SC-MK) \cite{Fang2015G} and multiple-feature-based adaptive sparse representation (MFASR) \cite{Fang2017}. All the tests are run on a laptop computer with an Intel Core i5-7200U CPU and 8 GB RAM; the software platform is MATLAB R2016a. In the experiments, the parameters are chosen as follows. For the $\nu$-SVC method, the parameters are obtained by performing a five-fold cross-validation \cite{Kohavi1995}. For the SVM-CK method, the parameters are tuned such that it gives the highest classification accuracy. All parameters of the EPF method, the SC-MK method, and the MFASR method are chosen as stated in \cite{Kang2014,Fang2015G,Fang2017} respectively, except that the window size in the EPF method, the number of superpixels and the parameters of the superpixel segmentation algorithm in the SC-MK method, and the sparsity level of the MFASR method are tuned such that the highest classification accuracies are obtained.
For our method, the parameters of the $\nu$-SVC (\ref{nu_svm}) in the first stage are obtained by performing a five-fold cross-validation and the parameters of the optimization problem (\ref{eqn:l2l1anisoll2_eq}) in the second stage are tuned such that it gives the highest classification accuracy. \subsubsection{Performance Metrics} To quantitatively evaluate the performance of the methods, we use the following three widely-used metrics: (i) overall accuracy (OA): the percentage of correctly classified pixels, (ii) average accuracy (AA): the average percentage of correctly classified pixels over each class, and (iii) kappa coefficient (kappa): the percentage of correctly classified pixels corrected by the number of agreements that would be expected purely by chance \cite{Cohen1960}. For each method, we perform the classification ten times, where each time we randomly choose a different set of training pixels. In the tables below, we give the averages of these metrics over the ten runs. The accuracies are given in percentage, and the highest accuracy of each category is listed in boldface. In the figures, we count the number of mis-classifications for each testing pixel over the ten runs. The numbers of mis-classifications are shown in the corresponding heatmap figures, with the heatmap colorbar indicating the number of mis-classifications. \subsection{Classification Results}\label{sec:comparison_results} \subsubsection{Indian Pines}\label{sec:comparison_resultsip} The Indian Pines data set consists mainly of big homogeneous regions and has very similar inter-class spectra (see \Cref{indianpines_training_sample} for the spectra of the training pixels of the Indian Pines data, where there are three similar corn classes, three similar grass classes and three similar soybean classes). It is therefore very difficult to classify this data set using spectral information only.
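The three metrics defined above can all be computed from the confusion matrix; the following is our own sketch (not the evaluation code used in the experiments):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Overall accuracy, average (per-class) accuracy and Cohen's kappa
    from integer label vectors; rows of the confusion matrix are true labels."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))  # per-class recall, averaged
    # chance agreement from the row/column marginals, as in Cohen's kappa
    pe = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```

Note that AA weights every class equally regardless of size, which is why it can differ noticeably from OA on imbalanced scenes such as Indian Pines.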
In the experiments, we choose the same number of training pixels as in \cite{Fang2015,Fang2015G}, which amounts to about 10\% of the pixels from each class. The rest of the labeled pixels are used as testing pixels. The number of training and testing pixels as well as the classification accuracies obtained by different methods are reported in \Cref{indianpines_table}. We see that our method generates the best results for all three metrics (OA, AA and kappa) and outperforms the compared methods by a significant margin: our accuracies are at least 0.95\% higher than those of the other methods. Also, the second stage of our method improves the overall accuracy of $\nu$-SVC (used in the first stage of our method) by almost 20\%. \Cref{indianpines_fig} shows the heatmaps of mis-classifications. The $\nu$-SVC, SVM-CK and EPF methods produce large areas of mis-classifications. The SC-MK method also produces mis-classifications at the top-right region and the middle-right region, which are soybeans-clean and soybeans-no till respectively. This shows that SC-MK cannot distinguish these two similar classes well. The heatmap of the MFASR method contains scattered regions of mis-classification. In contrast, our method generates smaller regions of mis-classifications and fewer errors as it effectively utilizes the spatial information to give an accurate result. \begin{figure}[h!] \centering \includegraphics[scale=1.3]{figures/indian_pines_training_sample.pdf} \caption[]{{\small Spectra of training pixels of Indian Pines data \label{indianpines_training_sample}}} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_IndianPines_gt_gray.pdf} \caption[]{{\small Ground Truth}} \end{subfigure} \hfill \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\textwidth]{figures/Indian_Pines_label_graph.pdf} \caption[]{{\small Label color}} \end{subfigure} \hfill \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/Indian_Pines_false_color.pdf} \caption[]{{\small False color image}} \end{subfigure} \hfill \vskip\baselineskip \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{figures/heatmap_bar_thick.pdf} \caption[]{{\small Heatmap colorbar}} \end{subfigure} \hfill \centering \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_SVM.pdf} \caption[]{{\small $\nu$-SVC \cite{Scholkopf2000,Melgani2004}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_SVM_CK.pdf} \caption[]{{\small SVM-CK \cite{Camps2006}}} \end{subfigure} \hfill \vskip\baselineskip \centering \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_EPF.pdf} \caption[]{{\small EPF \cite{Kang2014}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_SC_MK.pdf} \caption[]{{\small SC-MK \cite{Fang2015G}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_MFASR.pdf} \caption[]{{\small MFASR \cite{Fang2017}}} \end{subfigure} \hfill \vskip\baselineskip \centering \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_Our.pdf} \caption[]{{\small Our 2-stage}} \end{subfigure} \caption{Indian Pines data set.
(a) ground-truth labels, (b) label color of the ground-truth labels, (c) false color image, (d) heatmap colorbar, (e)--(j) classification results by different methods. \label{indianpines_fig}} \end{figure} \begin{table}[h!] \centering \caption{Number of training/testing pixels and classification accuracies for Indian Pines data set.\label{indianpines_table}}\begin{centering} \begin{tabular}{|c c|c|c|c|c|c|c|} \hline Class &train/test& $\nu$-SVC & SVM-CK & EPF &SC-MK & MFASR & 2-stage \tabularnewline \hline Alfalfa&10/36 & 70.28\% & 81.94\% & 97.29\%& \textbf{100\%}& 98.06\% & 99.17\%\tabularnewline \hline Corn-no till&143/1285 & 77.90\% & 89.98\% & 96.03\% & 95.44\%& 96.66\% &\textbf{97.89\%} \tabularnewline \hline Corn-min till&83/747 & 67.80\% & 89.68\% & 97.75\% & 97.16\%& 97.94\% &\textbf{98.73\%}\tabularnewline \hline Corn & 24/213& 52.96\% & 86.24\% &93.03\% &\textbf{99.25\%}&91.69\% &99.01\%\tabularnewline \hline Grass/pasture &48/435& 89.13\% & 93.31\% & \textbf{99.17\%}& 96.67\%& 94.62\%&96.92\% \tabularnewline \hline Grass/trees &73/657& 96.15\% & 98.98\% & 96.02\%& 99.70\%& 99.56\% &\textbf{99.74\%}\tabularnewline \hline Grass/pasture-mowed&10/18 & 93.33\% & 96.11\% & 99.47\%& \textbf{100\%}& \textbf{100\%}&\textbf{100\%} \tabularnewline \hline Hay-windrowed&48/430 & 93.93\% &98.42\% & \textbf{100\%}& \textbf{100\%}& 99.98\% &\textbf{100\%}\tabularnewline \hline Oats&10/10 & 90.00\% & \textbf{100\%} & 96.25\%& \textbf{100\%}& \textbf{100\%}&\textbf{100\%} \tabularnewline \hline Soybeans-no till&97/875 & 72.26\% & 88.81\% & 92.21\%& 94.62\%& \textbf{96.03\%}&96.01\% \tabularnewline \hline Soybeans-min till&246/2209 & 79.71\% & 91.57\% & 86.65\%& 98.80\%& 98.58\%&\textbf{99.54\%} \tabularnewline \hline Soybeans-clean &59/534& 67.66\% & 85.90\% & 96.26\% & 96.29\%& 97.06\% &\textbf{99.64\%}\tabularnewline \hline Wheat &21/184& 96.09\% & 98.64\%& \textbf{100\%} & 99.67\%& 99.57\% &\textbf{100\%}\tabularnewline \hline Woods &127/1138& 91.89\% & 96.85\% &
95.24\%& \textbf{99.99\%}& 99.89\%&99.91\% \tabularnewline \hline Bldg-Grass-Tree-Drives&39/347 & 56.97\% & 88.01\% & 93.70\% & 98.39\%& 98.01\%&\textbf{99.14\%} \tabularnewline \hline Stone-steel towers &10/83& 85.66\% & 98.43\% & 96.11\% & 97.71\%& \textbf{98.92\%} &96.39\% \tabularnewline \hline \hline OA & &79.78\% & 92.11\% & 93.34\%& 97.83\%& 97.88\% &\textbf{98.83\%}\tabularnewline \hline AA & &80.11\% & 92.68\% & 95.95\% & 98.35\%& 97.91\%&\textbf{98.88\%}\tabularnewline \hline kappa & & 0.769 & 0.910 & 0.924 & 0.975& 0.976&\textbf{0.987}\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \subsubsection{University of Pavia}\label{sec:comparison_resultsup} The University of Pavia data set consists of regions with various shapes, including thin and thick structures and large homogeneous regions. Hence it can be used to test the ability of the classification methods to handle different shapes. In the experiments, we choose the same number of training pixels (200 for each class) as in \cite{Fang2015G}. This accounts for approximately 4\% of the labeled pixels. The remaining ones are used as testing pixels. \Cref{paviaU_table} reports the classification accuracies obtained by different methods. We see that the performances of SC-MK, MFASR, and our method are very close: approximately 99\% in all three metrics (OA, AA and kappa), and they outperform the $\nu$-SVC, SVM-CK and EPF methods. \Cref{paviaU_fig} shows the heatmaps of mis-classifications. The $\nu$-SVC, SVM-CK and EPF methods produce large regions of mis-classifications. The SC-MK method produces many mis-classifications at the middle and bottom regions where the meadows are. The MFASR method and our method generate smaller regions of mis-classification. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_gt_gray.pdf} \caption[]{{\small Ground Truth}} \end{subfigure} \hfill \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/PaviaU_label_graph.pdf} \caption[]{{\small Label color}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/paviaU_false_color.pdf} \caption[]{{\small False color image}} \end{subfigure} \hfill \vskip\baselineskip \centering \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/heatmap_bar_thick.pdf} \caption[]{{\small Heatmap colorbar}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_SVM.pdf} \caption[]{{\small $\nu$-SVC \cite{Scholkopf2000,Melgani2004}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_SVM_CK.pdf} \caption[]{{\small SVM-CK \cite{Camps2006}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_EPF.pdf} \caption[]{{\small EPF \cite{Kang2014}}} \end{subfigure} \hfill \vskip\baselineskip \centering \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_SC_MK.pdf} \caption[]{{\small SC-MK \cite{Fang2015G}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_MFASR.pdf} \caption[]{{\small MFASR \cite{Fang2017}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_PaviaU_heat_map2_Our.pdf} \caption[]{{\small Our 2-stage}} \end{subfigure} \caption{University of Pavia data set.
(a) ground-truth labels, (b) label color of the ground-truth labels, (c) false color image, (d) heatmap colorbar, (e)--(j) classification results by different methods. \label{paviaU_fig}} \end{figure} \begin{table}[h!] \centering \caption{Number of training/testing pixels and classification accuracies for University of Pavia data set.\label{paviaU_table}} \begin{tabular}{|c c|c|c|c|c|c|c|} \hline Class &train/test& $\nu$-SVC & SVM-CK & EPF &SC-MK & MFASR & 2-stage\tabularnewline \hline Asphalt & 200/6431 & 84.65\% & 95.84\% & 98.84\%& 99.06\%& \textbf{99.44\%} & 98.68\% \tabularnewline \hline Meadows & 200/18449 & 89.96\% & 97.62\% & 99.62\%& 98.14\%& 98.52\% & \textbf{98.78\%} \tabularnewline \hline Gravel &200/1899& 83.59\%& 91.99\% & 95.50\%& \textbf{99.98\%}& 99.80\%&99.69\% \tabularnewline \hline Trees &200/2864 & 94.94\%&97.95\% & 98.94\%& \textbf{99.03\%}& 98.02\% &96.56\%\tabularnewline \hline Metal Sheets&200/1145 & 99.59\%& 99.97\%& 99.03\% & 99.87\%& 99.91\%& \textbf{100\%} \tabularnewline \hline Bare Soil&200/4829 & 90.69\%& 97.49\% & 92.95\%& 99.70\%& 99.78\% &\textbf{100\%}\tabularnewline \hline Bitumen &200/1130& 92.73\%& 98.41\% & 93.84\%& 100\%& 99.92\% &\textbf{100\%}\tabularnewline \hline Bricks&200/3482 & 82.59\%& 92.71\% & 92.92\%& 99.05\%& \textbf{99.41\%}&99.02\%\tabularnewline \hline Shadows &200/747& 99.60\%& 99.92\% & 99.30\%& 99.99\%& \textbf{100\%}&99.18\% \tabularnewline \hline \hline OA& & 89.16\% & 96.80\% & 97.60\%& 98.83\%& \textbf{99.02\%}&98.89\%\tabularnewline \hline AA& & 90.93\% & 96.88\% & 96.77\%& \textbf{99.42\%}& \textbf{99.42\%}&99.10\% \tabularnewline \hline kappa & & 0.857 & 0.957 & 0.968 & 0.984& \textbf{0.987} &0.985\tabularnewline \hline \end{tabular} \end{table} \subsubsection{Pavia Center}\label{sec:comparison_resultspc} The Pavia Center data set also consists of regions with various shapes. In the experiments, we use the same number of training pixels as in \cite{Liu2017} (150 training pixels per class). 
This accounts for approximately 1\% of the labeled pixels. The rest of the labeled pixels are used as testing pixels. \Cref{paviacenter_table} reports the number of training/testing pixels and the classification accuracies of different methods. We see that the EPF method gives the highest OA and kappa while our method gives the second highest and their values differ by about 0.1\%. However, our method gives the highest AA (99.12\%) which outperforms the EPF method by almost 1\%. The SC-MK and MFASR methods give slightly worse accuracies than our method. \Cref{paviacenter_fig} shows the heatmaps of mis-classifications. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_CPavia_gt_gray.pdf} \caption[]{{\small Ground Truth}} \end{subfigure} \hfill \begin{subfigure}[b]{0.125\textwidth} \centering \includegraphics[width=\textwidth]{figures/PaviaC_label_graph.pdf} \caption[]{{\small Label color}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/paviaC_false_color.pdf} \caption[]{{\small False color image}} \end{subfigure} \hfill \begin{subfigure}[b]{0.125\textwidth} \centering \includegraphics[width=\textwidth]{figures/heatmap_bar_thick.pdf} \caption[]{{\small Heatmap colorbar}} \end{subfigure} \hfill \vskip\baselineskip \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_SVM.pdf} \caption[]{{\small $\nu$-SVC \cite{Scholkopf2000,Melgani2004}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_SVM_CK.pdf} \caption[]{{\small SVM-CK \cite{Camps2006}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_EPF.pdf} \caption[]{{\small EPF \cite{Kang2014}}} \end{subfigure}
\vskip\baselineskip \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_SC_MK.pdf} \caption[]{{\small SC-MK \cite{Fang2015G}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_MFASR.pdf} \caption[]{{\small MFASR \cite{Fang2017}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_paviacenter_heat_map2_Our.pdf} \caption[]{{\small Our 2-stage}} \end{subfigure} \caption{Pavia Center data set. (a) ground-truth labels, (b) label color of the ground-truth labels, (c) false color image, (d) heatmap colorbar, (e)--(j) classification results by different methods. \label{paviacenter_fig}} \end{figure} \begin{table}[h!] \centering \caption{Number of training/testing pixels and classification accuracies for Pavia Center data set.\label{paviacenter_table}} \begin{tabular}{|c c|c|c|c|c|c|c|} \hline Class &train/test& $\nu$-SVC & SVM-CK & EPF &SC-MK & MFASR & 2-stage\tabularnewline \hline Water & 150/65128 & 99.54\% & 99.82\% & \textbf{100\%}& 99.86\%& 99.97\% & 99.66\% \tabularnewline \hline Trees & 150/6357 & 94.22\% & 95.61\% & \textbf{99.11\%}& 94.59\%& 95.52\% & 98.61\%\tabularnewline \hline Meadows &150/2741& 95.14\%& 96.15\% & 97.16\%& 98.78\%& 98.54\% & \textbf{98.84\%}\tabularnewline \hline Bricks &150/2002 & 92.56\%&97.37\% & 90.08\%& 99.91\%& 99.62\% & \textbf{99.98\%}\tabularnewline \hline Soil&150/6399 & 94.31\%& 96.51\%& 99.40\% & \textbf{99.76\%}& 99.59\%& 98.69\%\tabularnewline \hline Asphalt&150/7375 & 95.94\%& 97.34\% & 98.86\%& 99.24\%& 98.76\% & \textbf{99.60\%}\tabularnewline \hline Bitumen &150/7137& 89.99\%& 94.75\% & \textbf{99.79\%}& 98.64\%& 99.55\% & 97.86\%\tabularnewline \hline Tiles&150/2972 &97.42\%& 99.33\% & \textbf{99.97\%}& 99.32\%& 99.05\%&99.52\%\tabularnewline \hline Shadows &150/2015& 99.98\%& \textbf{100\%} & 99.96\%&
99.85\%& 99.97\% & 99.27\%\tabularnewline \hline \hline OA& & 97.54\% & 98.80\% & \textbf{99.59\%}& 99.31\%& 99.33\% & 99.42\%\tabularnewline \hline AA& & 95.46\% & 97.43\% & 98.26\%& 98.88\%& 98.95\%&\textbf{99.12\%} \tabularnewline \hline kappa & & 0.965 & 0.983 & \textbf{0.994} & 0.990& 0.990&0.991\tabularnewline \hline \end{tabular} \end{table} \subsection{Advantages of Our 2-stage Method} \subsubsection{Percentage of Training Pixels} Since our method improves the classification accuracy by using the spatial information, it is expected to perform even better when the training percentage (percentage of training pixels) is higher. To verify this, \Cref{trainper_indianpines,trainper_paviaU,trainper_paviacenter} show the overall accuracies obtained by our method on the three data sets with different levels of training percentage. We see that our method outperforms the other methods when the training percentage is high. When it is not high, our method still gives a classification accuracy that is close to that of the best compared method. \begin{table}[h!] \centering \caption{Classification results on the Indian Pines data with different levels of training pixels.\label{trainper_indianpines}} \begin{tabular}{|c|c|c|c|c|} \hline Method \textbackslash Training percentage & 5\% & 10\% & 20\% &40\% \tabularnewline \hline $\nu$-SVC & 73.49\% & 79.78\% & 84.98\% & 88.55\% \tabularnewline \hline SVM-CK & 86.00\% & 92.11\% & 96.00\% & 98.51\% \tabularnewline \hline EPF & 89.37\% & 93.34\% & 97.42\% & 98.90\% \tabularnewline \hline SC-MK & \textbf{97.21\%} & 97.83\% & 98.11\% & 98.42\% \tabularnewline \hline MFASR & 95.67\% & 97.88\% & 98.82\% & 99.25\% \tabularnewline \hline 2-stage & 96.98\% & \textbf{98.83\%} & \textbf{99.61\%} & \textbf{99.93\%} \tabularnewline \hline Difference from the best & 0.23\% & 0.00\% & 0.00\% & 0.00\% \tabularnewline \hline \end{tabular} \end{table} \begin{table}[h!]
\centering \caption{Classification results on the University of Pavia data with different levels of training pixels.\label{trainper_paviaU}} \begin{tabular}{|c|c|c|c|c|} \hline Method \textbackslash Training percentage & 4\% & 8\% & 16\% & 32\% \tabularnewline \hline $\nu$-SVC & 89.16\% & 91.19\% & 94.04\% & 94.63\% \tabularnewline \hline SVM-CK & 96.80\% & 97.93\% & 98.78\% & 99.13\% \tabularnewline \hline EPF & 97.60\% & 98.37\% & 98.60\% & 98.94\% \tabularnewline \hline SC-MK & 98.83\% & \textbf{99.67\%} & 99.66\% & 99.86\% \tabularnewline \hline MFASR & \textbf{99.02\%} & 99.52\% & 99.81\% & 99.85\% \tabularnewline \hline 2-stage & 98.89\% & 99.58\% & \textbf{99.82\%} & \textbf{99.89\%} \tabularnewline \hline Difference from the best & 0.13\% & 0.09\% & 0.00\% & 0.00\% \tabularnewline \hline \end{tabular} \end{table} \begin{table}[h!] \centering \caption{Classification results on the Pavia Center data with different levels of training pixels.\label{trainper_paviacenter}} \begin{tabular}{|c|c|c|c|c|} \hline Method \textbackslash Training percentage & 1\%& 2\% & 4\% & 8\% \tabularnewline \hline $\nu$-SVC & 97.54\% & 98.01\% & 98.28\% & 98.51\% \tabularnewline \hline SVM-CK & 98.80\% & 99.46\% & 99.67\% & 99.83\% \tabularnewline \hline EPF & \textbf{99.59\%} & \textbf{99.76\%} & 99.76\% & 99.92\% \tabularnewline \hline SC-MK & 99.31\% & 99.59\% & 99.75\% & 99.85\% \tabularnewline \hline MFASR & 99.33\% & 99.64\% & 99.86\% & 99.92\% \tabularnewline \hline 2-stage & 99.42\% & 99.73\% & \textbf{99.90\%} & \textbf{99.94\%} \tabularnewline \hline Difference from the best & 0.17\%& 0.03\% & 0.00\% & 0.00\% \tabularnewline \hline \end{tabular} \end{table} \subsubsection{Model Complexity and Computational Time} \Cref{parameter_table,runtime_table} show the computational time required and the number of parameters for all methods. We note that the reported timing does not count the time required to find the optimal set of parameters.
The $\nu$-SVC, SVM-CK and EPF methods have fast computational times because of the simplicity of their models. They have only a few parameters (2, 3 and 4 respectively). However, from the results in \Cref{sec:comparison_results}, they are worse than the other three methods. The SC-MK method is a good method in terms of accuracy and timing, but it has 9 parameters. The MFASR method has 10 parameters and the longest computational time. In comparison, our method has 5 parameters (2 parameters $\nu$ and $\sigma$ for the $\nu$-SVC (\ref{nu_svm}) and the RBF kernel (\ref{RBFkernel}) respectively in the first stage, 2 parameters $\beta_1$ and $\beta_2$ for the denoising model (\ref{eqn:l2l1anisoll2_eq}) in the second stage and 1 parameter $\mu$ for the ADMM algorithm (\ref{Largrangian})). It has much better (if not the best) classification accuracies and only slightly longer computational times than the $\nu$-SVC, SVM-CK and EPF methods. \begin{table}[h!] \begin{centering} \caption{Comparison of number of parameters.\label{parameter_table}} \begin{tabular}{|c c|c|c|c|c|c|c|} \hline & & $\nu$-SVC & SVM-CK & EPF &SC-MK& MFASR & 2-stage \tabularnewline \hline & Number of parameters&2&3&4&9&10&5 \tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \begin{table}[h!]
\begin{centering} \caption{Comparison of computational time (in seconds).\label{runtime_table}} \begin{tabular}{|c c|c|c|c|c|c|c|} \hline Data & size/training \% & $\nu$-SVC & SVM-CK & EPF &SC-MK& MFASR & 2-stage \tabularnewline \hline Indian Pines & $145\times 145\times 200 /10\%$ & 5.98 & 6.32 & 6.92 & 9.44 & 119 & 8.24 \tabularnewline \hline University of Pavia & $610\times 340 \times 103 /4\%$ & 24.02 & 32.12 & 28.53 & 39.47 & 443 & 35.97 \tabularnewline \hline Pavia Center & $1096\times 715 \times 102 /1\%$ & 58.46 & 81.63 & 118 & 107 & 2599 & 145 \tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \subsection{Effect of the Second-order Term}\label{sec:effect_higher_order} Here we examine empirically the importance of the term $||\nabla \mathbf{u}||_2^2$ in (\ref{eqn:l2l1anisoll2_eq}). \Cref{comparison_TV2} shows the heatmaps of mis-classifications on the Indian Pines data by our method with and without the $||\nabla \mathbf{u}||_2^2$ term over ten runs. The training pixels are randomly selected and consist of 2.5\% of the labeled pixels. \Cref{comparison_TV2} (a) shows the ground-truth labels. \Cref{comparison_TV2} (b)--(d) show the heatmaps of mis-classifications of the $\nu$-SVC classifier (i.e. the first stage of our method), the second stage of our method without the $||\nabla \mathbf{u}||_2^2$ term, and the second stage of our method with the $||\nabla \mathbf{u}||_2^2$ term respectively. Recall that the term $||\nabla \mathbf{u}||_2^2$ controls the smoothness of the restored votes and that the final classification result is determined by taking the maximum over the restored votes of each class. By choosing the parameter associated with the term appropriately, we can then control the level of shrinking or expanding of the homogeneous regions in the final classification result.
From \Cref{comparison_TV2} (c), when the term is dropped, the mis-classification regions at the top left and bottom left of the first stage result are not only still mis-classified, but the numbers of mis-classifications increase. In contrast, when the term is kept, we see from \Cref{comparison_TV2} (d) that the numbers of mis-classifications are significantly lowered. Moreover, most of the mis-classified regions of the first stage result are now correctly classified when the parameters are chosen appropriately. \begin{figure}[h!] \centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_IndianPines_gt_gray.pdf} \caption[]{{\small Ground Truth}} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_SVM_2p5.pdf} \caption[]{{\small $\nu$-SVC \label{SVM_noisy_heatmap}}} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_Our_no2tv.pdf} \caption[]{{\small 2-stage without $||\nabla \mathbf{u}||_2^2$}} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/z_indianpines_heat_map2_Our_2tv.pdf} \caption[]{{\small 2-stage with $||\nabla \mathbf{u}||_2^2$}} \end{subfigure} \caption{Heatmaps of mis-classifications on Indian Pines data. (a) ground-truth labels, (b) $\nu$-SVC (the first stage), (c) and (d) our method without or with the second order term respectively. \label{comparison_TV2}} \end{figure} \section{Conclusions}\label{sec:conclusion} In this paper, a novel two-stage hyperspectral classification method inspired by image denoising is proposed. The method is simple yet effective. In the first stage, a support vector machine method is used to estimate the pixel-wise probability map of each class. The result of the first stage has decent accuracy but is noisy.
In the second stage, an image denoising method is used to clean the probability maps. Since both spectral and spatial information are effectively utilized, our method is very competitive with state-of-the-art classification methods. It also has a simpler framework, fewer parameters, and a shorter computational time. It performs particularly well when the inter-class spectra are close or when the training percentage is high. For future work, we plan to investigate automated parameter selection \cite{Liao2009,Dong2011,Wen2012,Bredies2013} for the denoising method in the second stage, the use of deep learning methods in the first stage \cite{Yue2015,Makantasis2015,Morchhale2016,Pan2017}, and the classification of fused hyperspectral and LiDAR data \cite{Gader2013,Debes2014}. \clearpage \section*{Acknowledgement} The authors would like to thank the Computational Intelligence Group of the Basque University for sharing the hyperspectral data sets on their website\footnote{\url{http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes}}, Prof. Leyuan Fang from the College of Electrical and Information Engineering at Hunan University for providing the programs of the SC-MK and MFASR methods on his homepage\footnote{\url{http://www.escience.cn/people/LeyuanFang}} and Prof. Xudong Kang from the College of Electrical and Information Engineering at Hunan University for providing the program of the EPF method on his homepage\footnote{\url{http://xudongkang.weebly.com/}}. \clearpage \bibliographystyle{ieeetr} \input{2stage_classification_arXiv.bbl} \end{document}
\section{Introduction} Halo nuclei are weakly-bound states of a few valence nucleons and a tightly-bound core nucleus~\cite{Zhukov:1993aw,Hansen:1995pu,Jonson:2004,Jensen:2004zz,Riisager:2012it}. They exemplify the emergence, close to the neutron and proton drip lines, of new degrees of freedom that are difficult to describe in ab initio approaches. Cluster models of halo nuclei are formulated directly in the new degrees of freedom and thus take this emergence into account by construction, typically using a phenomenological interaction~\cite{Descouvemont:2008zz,Schuck:2017jtw}. These models have improved our understanding of halo nuclei significantly. However, they cannot be improved systematically and lack a reliable way to estimate theoretical uncertainties. Halo effective field theory (Halo EFT) is a systematic approach to these systems that exploits the apparent separation of scales between the small nucleon separation energy of the halo nucleus and the large nucleon separation energy and excitation energy of the core nucleus~\cite{Bertulani:2002sz,Bedaque:2003wa}. This scale separation defines (at least) two momentum scales: a small scale $M_{\text{lo}}$ and a large scale $M_{\text{hi}}$. Halo EFT provides a systematic expansion of low-energy observables in powers of $M_{\text{lo}}/M_{\text{hi}}$. Predictions made in Halo EFT can be improved systematically through the calculation of additional orders in the low-energy expansion. The interaction between the core and the valence nucleons is parametrized by contact interactions tuned to reproduce a few low-energy observables. Note that the absence of explicit pion exchange in the interaction implies that the approach breaks down for momenta of the order of the pion mass. Similar EFT approaches can be used for systems of atoms and nucleons at low energies \cite{Braaten:2004rn,Hammer:2010kp}.
$^{11}$Be represents the prototype of a one-nucleon halo nucleus and thus has been considered as a test case for Halo EFT. It has a $J^P=1/2^+$ ground state that can be described as a neutron in an $S$-wave relative to the $^{10}$Be core. $^{11}$Be also has a $J^P=1/2^-$ excited state which can be considered as a neutron in a $P$-wave relative to the core. The electric properties of the two bound states in $^{11}$Be were studied in detail in Ref.~\cite{Hammer:2011ye} using Halo EFT. $^{11}$Be also has a magnetic moment due to its halo neutron \cite{Fernando:2015jyd}, but there are no magnetic transitions between the two states because of their opposite parity. For a recent review of Halo EFT and applications to other halo nuclei see Ref.~\cite{Hammer:2017tjm}. Here, we will focus on the electromagnetic properties of $^{17}$C. This nucleus is an interesting halo candidate but has not yet been investigated using Halo EFT. Its continuum properties cannot yet be addressed using standard ab initio methods. It is too heavy for an approach that employs a combination of the no-core shell model (NCSM) and the resonating group model (RGM)~\cite{Navratil:2016ycn}, but it is too light to neglect center-of-mass motion effects as is done in coupled cluster calculations. (See Ref.~\cite{Hagen:2012rq} for a calculation of $^{40}$Ca-proton scattering where this approximation is well justified.) Recent calculations in the NCSM also seem to suggest that this nucleus is too large to obtain converged results for its spectrum with the available computational resources \cite{Smalley:2015ngy}. $^{17}$C has a $J^P=3/2^{+}$ ground state, and two excited states with $J^P=1/2^{+}$ and $5/2^{+}$ \cite{Suzuki:2008zz}.
The neutron separation energy of the ground state of about 0.7 MeV \cite{Wang:2017ame} is significantly smaller than the excitation energy of the $J^P=0^{+}$ $^{16}$C core, which is about 1.8 MeV \cite{Ajzenberg-Selove:1986lxy}, while the neutron separation energies of the excited states are only of order 0.4--0.5 MeV \cite{Smalley:2015ngy} (see the level scheme in Fig.~\ref{fig:levelschemeC17}). This suggests that $^{17}$C may be amenable to a description using Halo EFT with $S$- and $D$-wave neutron-core interactions~\cite{Braun:2018hug}. Recently, M1 transition rates from both excited states into the ground state were measured \cite{Suzuki:2008zz,Smalley:2015ngy}. Below, we will discuss these transition rates in the framework of Halo EFT at leading order (LO) in the Halo EFT counting. Besides these electromagnetic transitions, we will also consider static electric and magnetic properties as well as neutron capture on $^{16}$C into $^{17}$C. We will show that future experiments and/or ab initio calculations of these quantities can provide insight into the interaction of neutrons with $^{16}$C. This manuscript is organized as follows: In Sec. \ref{sec:Halo_EFT_Formalism}, we introduce the theoretical foundations required to calculate the properties of halo nuclei with effective field theory. After reviewing results for the charge radius and quadrupole moment for the $S$- and $D$-wave states in Sec. \ref{sec:static}, we calculate magnetic moments for both states. In Sec. \ref{sec:trans_cap} we discuss E2 and M1 transitions between the different states in $^{17}$C and calculate E1 and M1 capture reactions to the $S$- and $D$-wave states. We end with a summary and an outlook. \section{Halo EFT formalism for $^{17}$C} \label{sec:Halo_EFT_Formalism} Our goal is to investigate the electromagnetic properties of the halo nucleus $^{17}$C using Halo EFT. As discussed above, $^{17}$C can be described as a weakly-bound state of a $^{16}$C core and a neutron.
First, we need to account for the free propagation of the core and neutron degrees of freedom. The corresponding Lagrangian is \begin{equation} \label{eq:L0} \mathcal{L}_0 = c^\dagger \left(i \partial_t + \frac{\nabla^2}{2M}\right) c + n^\dagger \left(i \partial_t + \frac{\nabla^2}{2m}\right) n ~, \end{equation} where $n$ denotes the spin-1/2 neutron field, $c$ the spin-0 core field, $m$ is the nucleon mass, and $M$ is the mass of the $^{16}$C core. \begin{figure}[t] \centering \includegraphics[width=0.45\columnwidth]{levelschemeC17.pdf} \caption{Level scheme of $^{17}$C showing quantum numbers $J^P$, excitation energies in MeV, and the $^{16}$C$~+~n$ threshold.} \label{fig:levelschemeC17} \end{figure} The first excitation of the $^{16}$C core has an energy of $E_{{}^{16}C}^* = 1.766(10)$~MeV~\cite{Ajzenberg-Selove:1986lxy}, while the neutron separation energy of $^{16}$C is $S_n(^{16}{\rm C})= 4.250(4)$~MeV~\cite{Wang:2017ame}. Moreover, the neutron separation energy of $^{17}$C is $S_n(^{17}{\rm C})= 0.734(18)$~MeV~\cite{Wang:2017ame}. This suggests that the $J^P=3/2^{+}$ ground state of $^{17}$C can be described as a neutron in a $D$-wave relative to the $^{16}$C core, although the halo nature of the ground state is not commonly accepted \cite{Suzuki:2008zz,Smalley:2015ngy}. As illustrated in Fig.~\ref{fig:levelschemeC17}, $^{17}$C also has two excited states with $J^P=1/2^+$ and $5/2^+$ with energies $E^*_{1/2^+} = 0.218(1)$~MeV and $E^*_{5/2^+} = 0.332(1)$~MeV~\cite{Smalley:2015ngy}, respectively. In Halo EFT, these two states are described by a neutron in an $S$-wave and $D$-wave relative to the core, respectively. 
To account for these states, we define the interaction part of the effective Lagrangian as~\cite{Braun:2018hug} \begin{align} \notag \mathcal{L} = \mathcal{L}_0 &+ d_{J,M}^\dagger \left[c_2^J \left(i \partial_t + \frac{\nabla^2}{2M_{nc}}\right)^2 + \eta_2^J \left(i \partial_t + \frac{\nabla^2}{2M_{nc}}\right) + \Delta_2^J \right] d_{J,M}\\ \notag &- g^{J}_2 \left[d^\dagger_{J,M} \left[n \overset{\leftrightarrow}\nabla^2 c\right]_{J,M} + \left[n \overset{\leftrightarrow} \nabla^2 c \right]^\dagger_{J,M} d_{J,M} \right] \\ &+ \sigma_s^\dagger \left[\eta_0 \left(i \partial_t + \frac{\nabla^2}{2M_{nc}}\right) + \Delta_0 \right] \sigma_s - g_0 \left[c^\dagger n_s^\dagger \sigma_s + \sigma_s^\dagger n_s c\right]+\ldots\ , \label{eq:lagrangian_dwave} \end{align} where $M_{nc}=M+m$ and $d_{J,M}$ is a $(2J+1)$-component field. We project on the $J=3/2$ and $5/2$ parts of the resonant $D$-wave interaction via \begin{align} \label{eq:d_tensor_L} \left[n \overset{\leftrightarrow}\nabla^2 c\right]_{J,M} &= \sum_{m_s m_l} \left(\left. \frac{1}{2} m_s \ 2 m_l \right\vert J\, M \right) \ n_{m_s} \sum_{\alpha \beta} \left(\left. 1 \alpha \ 1 \beta \right\vert 2 m_l \right) \frac{1}{2} \left( \overset{\leftrightarrow}\nabla_\alpha \overset{\leftrightarrow}\nabla_\beta + \overset{\leftrightarrow}\nabla_\beta \overset{\leftrightarrow}\nabla_\alpha \right) \ c \ , \end{align} where $\alpha$ and $\beta$ denote spherical indices and $\overset{\leftrightarrow}\nabla$ is a Galilei-invariant derivative. The $D$-wave interaction introduces four low-energy constants in the leading order (LO) Lagrangian: $c_2^J$, $\Delta_2^J$, $g^J_2$, and $\eta_2^J = \pm 1$, but only three of them are independent at LO. This increased number of parameters compared to the $S$-wave case arises from the appearance of power divergences up to fifth order in the $D$-wave self-energy. Their renormalization requires effective range parameters up to the shape parameter to enter at LO~\cite{Bertulani:2002sz}.
In this work, we will follow Ref.~\cite{Braun:2018hug} and use dimensional regularization with the power divergence subtraction scheme (PDS) \cite{Kaplan:1998tg, Kaplan:1998we} for all practical calculations. The accuracy of this approach is set by the ratio of the low-momentum scale $M_{\text{lo}}$ over the high-momentum scale $M_{\text{hi}}$ which for ground state observables can be estimated as $\sqrt{S_n(^{17}{\rm C})/E_{{}^{16}C}^* } \approx 0.64$ in our case. The expansion parameter is relatively large, and we expect slow convergence for ground state observables. However, for the excited states, the expansion parameter is approximately 0.5 which leads to 50\% errors at first order and 25\% errors at second order in the EFT expansion. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{dyson-dwave-new-paper} \caption{Diagrammatic representation of the dressed $d$-propagator. The dashed (solid) line denotes the core (neutron) field. The thin double line represents the bare $d$-propagator, while the thick double line with the blob is the dressed $d$-propagator.} \label{fig:dimer-propagator} \end{figure} The dressed propagators of the $\sigma$ and $d_{J,M}$ fields are obtained by summing the bubble diagrams for the $nc$-interactions (cf. Fig.~\ref{fig:dimer-propagator} for the $D$-wave case) to all orders. Throughout this paper, a thick single line denotes the dressed $\sigma$-propagator and a thick double line the dressed $d$-propagator in all our figures. 
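The expansion-parameter estimates quoted above follow directly from the measured energies; as a quick numerical cross-check (all energies in MeV, taken from the values quoted in this work):

```python
from math import sqrt

S_n = 0.734         # neutron separation energy of the 17C ground state [MeV]
E_core = 1.766      # first excitation energy of the 16C core [MeV]
E_half = 0.218      # excitation energy of the 1/2+ state of 17C [MeV]
E_fivehalf = 0.332  # excitation energy of the 5/2+ state of 17C [MeV]

x_gs = sqrt(S_n / E_core)                       # ground state, ~0.64
x_half = sqrt((S_n - E_half) / E_core)          # 1/2+ excited state, ~0.54
x_fivehalf = sqrt((S_n - E_fivehalf) / E_core)  # 5/2+ excited state, ~0.48
```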
{\em $\sigma$-propagator.} The $\sigma$-propagator for the $S$-wave state is well known (see, e.g., Ref.~\cite{Hammer:2011ye}) and we quote only the final result: \begin{align} D_\sigma(\tilde{p}_0) &= \frac{1}{\Delta_0 + \eta_0 [\tilde{p}_0 +i\epsilon] -\Sigma_\sigma(\tilde{p}_0)}~,\\[6pt] \Sigma_\sigma(\tilde{p}_0) &= - \frac{g^2_0 m_R}{2 \pi} \left[i \sqrt{2 m_R \tilde{p}_0} + \mu\right]~, \end{align} where $\mu$ is the PDS scale~\cite{Kaplan:1998tg, Kaplan:1998we}, $m_R$ the reduced mass of the neutron-core system, and $\tilde{p}_0=p_0 - \mathbf{p}^2/(2 M_{nc})$ is the Galilei invariant energy. {\em $d$-propagator.} The dressed propagator for the $d_{J,M}$ field was computed in Ref. \cite{Braun:2018hug}.\footnote{See also Ref.~\cite{Brown:2013zla} for a previous calculation using dimensional regularization with minimal subtraction which ignores power law divergences and sets $\eta_2=c_2=0$ at LO.} Since we use a Cartesian representation of the $D$-wave, the propagator depends on four vector indices, two in the incoming channel and two in the outgoing channel. Note that Roman indices refer to Cartesian indices and Greek ones to spherical indices. Evaluating the Feynman diagrams in Fig.~\ref{fig:dimer-propagator}, we obtain: \begin{align} \label{eq:d-wave-tensor} & D_d(\tilde{p}_0)_{ij,op} = D_d(\tilde{p}_0) \ \frac{1}{2}\left(\delta_{io} \delta_{jp} + \delta_{ip} \delta_{jo} - \frac{2}{3} \delta_{ij} \delta_{op}\right) \ , \\ & D_d(\tilde{p}_0) = \left[\Delta_2 + \eta_2 \tilde{p}_0 + c_2 \tilde{p}_0^2 - \Sigma_d(\tilde{p}_0)\right]^{-1} \ , \end{align} with the one-loop self-energy \begin{align} & \Sigma_d(\tilde{p}_0) =- \frac{2}{15} \frac{m_R g_2^2}{2 \pi} \ (2m_R\tilde{p}_0)^2 \left[i \sqrt{2m_R\tilde{p}_0+i\epsilon} - \frac{15}{8} \mu \right] \ . \label{eq:d_s-en_cutoff} \end{align} The term proportional to $c_2$ in \eqref{eq:lagrangian_dwave} is required to absorb the $\mu$-dependence from the PDS scheme. Following the arguments in Ref. 
\cite{Braun:2018hug}, the terms proportional to $\eta_2$, $\Delta_2$, and $g_2$ are also required to be consistent with the threshold expansion of the scattering amplitude. In a momentum cutoff scheme, these terms absorb the linear, cubic, and quintic power law divergences in the cutoff~\cite{Bertulani:2002sz}. {\em Power counting.} The canonical power counting for the $\sigma$-propagator representing a shallow $S$-wave state was given in Refs.~\cite{vanKolck:1997ut,vanKolck:1998bw,Kaplan:1998tg,Kaplan:1998we}. It implies $\gamma_0 \sim 1/a_0 \sim M_{\text{lo}}$ and $r_0 \sim 1/M_{\text{hi}}$, where $\gamma_0= \sqrt{2 m_R (S_n(^{17}\mathrm{C})-E^*_{{1/2}^+})}$ is the binding momentum of the $S$-wave state and $r_0$ the effective range. As a result, $r_0$ enters at NLO in the expansion in $M_{\text{lo}}/M_{\text{hi}}$. The power counting for partial waves beyond the $S$-wave is more complicated and different scenarios have been proposed~\cite{Bertulani:2002sz,Bedaque:2003wa,Braun:2018hug}. We look for a scenario that exhibits the minimal number of fine tunings consistent with the scales of the system. Bedaque et al.~\cite{Bedaque:2003wa} suggested for the $P$-wave case that $a_1 \sim 1/(M_{\text{lo}}^2 M_{\text{hi}})$ and $r_1 \sim M_{\text{hi}}$, where higher ERE parameters scale with the appropriate power of $M_{\text{hi}}$ given by dimensional analysis. This power counting is adequate for the excited state of $^{11}$Be~\cite{Hammer:2011ye}. It requires only one fine-tuned constant in $\mathcal{L}$ instead of two as proposed in Ref.~\cite{Bertulani:2002sz} where both $a_1$ and $r_1$ scale with appropriate powers of $M_{\text{lo}}$. In Ref.~\cite{Bedaque:2003wa}, the power counting was also generalized to $l>1$. However, we employ a different power counting with a minimal number of fine tunings for $l=2$ as proposed in Ref.~\cite{Braun:2018hug}. 
In the case of the $d$-propagator, \eqref{eq:d-wave-tensor}, two out of three ERE parameters need to be fine-tuned because $a_2 \sim 1/(M_{\text{lo}}^4 M_{\text{hi}})$ and $r_2 \sim M_{\text{lo}}^2 M_{\text{hi}}$ are both unnaturally large, while $\mathcal{P}_2 \sim M_{\text{hi}}$. Higher ERE terms are suppressed by powers of $M_{\text{lo}}/M_{\text{hi}}$. Thus, the relevant fit parameters in our EFT at LO are $\gamma_0$, $\gamma_2$, $r_2$, and $\mathcal{P}_2$, where $\gamma_2 = \sqrt{2m_R S_n(^{17}\mathrm{C})}$ is the binding momentum of the $^{17}$C ground state, while $r_2$ and $\mathcal{P}_2$ denote the $D$-wave effective range and shape parameter, respectively. For the $5/2^+$ excited state, the binding momentum is $\gamma_{2'} =\sqrt{2m_R (S_n(^{17}\mathrm{C})-E^*_{5/2^+})}$, while $r_{2'}$, $\mathcal{P}_{2'}$ are the corresponding effective range parameters. The corresponding wave function renormalization constants for the $1/2^+$, $3/2^+$, and $5/2^+$ states at LO are: \begin{align} \label{eq:normalizationSD} & Z_\sigma = \frac{2\pi}{m_R^2 g_0^2} \ \gamma_0 \ , \qquad Z_d^{3/2} = -\frac{15\pi}{m_R^2 g^2_2} \ \frac{1}{r_2 + \mathcal{P}_2 \gamma_2^2} \ , \qquad Z_{d'}^{5/2} = -\frac{15\pi}{m_R^2 g^2_{2'}} \ \frac{1}{r_{2'} + \mathcal{P}_{2'} \gamma_{2'}^2} \ , \end{align} respectively. At NLO, $Z_\sigma$ is modified by a factor $(1+\gamma_0 r_0)$. The constants $Z_d^{3/2}$ and $Z_{d'}^{5/2}$ are only required at LO for our calculations. \section{Static electromagnetic properties of $^{17}$C} \label{sec:static} We first consider the static electromagnetic properties of $^{17}$C. These are usually easier to measure experimentally than dynamical properties. They can also be calculated in ab initio approaches that provide the wave functions of the involved states. In particular, we will consider the charge radii and magnetic moments of the $^{17}$C states.
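As numerical inputs to these estimates, the binding momenta introduced above follow directly from the measured energies. A minimal sketch, with the $^{16}$C mass approximated as $16\,\mathrm{u}$ (an approximation that is sufficient at LO accuracy):

```python
from math import sqrt

hbarc = 197.327                # MeV fm
m_n = 939.565                  # neutron mass [MeV]
M_c = 16 * 931.494             # 16C mass [MeV] (assumption: approximated as 16 u)
m_R = m_n * M_c / (m_n + M_c)  # reduced mass of the neutron-core system

S_n, E_half, E_fivehalf = 0.734, 0.218, 0.332  # MeV, from the text

gamma0 = sqrt(2 * m_R * (S_n - E_half))        # 1/2+ (S-wave), ~30 MeV
gamma2 = sqrt(2 * m_R * S_n)                   # 3/2+ ground state, ~36 MeV
gamma2p = sqrt(2 * m_R * (S_n - E_fivehalf))   # 5/2+ (D-wave), ~27 MeV

# The same momenta in fm^-1:
g0_fm, g2_fm, g2p_fm = (g / hbarc for g in (gamma0, gamma2, gamma2p))
```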
It is convenient to calculate all form factors in the Breit frame where the photon transfers no energy, $q=(0,\boldsymbol{q})$, and to choose the photon to be moving in the $\hat{z}$ direction, $\boldsymbol{q} = |\boldsymbol{q}| \hat{z}$. \subsection{Charge radii} \label{sec:charge-radius} The form factor of a general $S$-wave one-neutron halo nucleus was calculated in Ref.~\cite{Hammer:2011ye}. The electric charge radius of the $S$-wave state at NLO is given by: \begin{align} \braket{r_E^2}^{(\sigma)} = \frac{f^2}{2 \gamma_0^2} (1+r_0\gamma_0)\ , \label{eq:rE-swave} \end{align} where $f=m_R/M$ is a mass factor. The LO result can be obtained by setting $r_0=0$ in Eq.~(\ref{eq:rE-swave}). At next-to-next-to-leading order (NNLO) a counterterm related to the radius of the core contributes. In the standard power counting, the factors of $f$ are counted as ${\cal O}(1)$, although they can become rather small for large core masses. As a consequence, the counterterm contribution is enhanced numerically. Up to NLO, one can interpret the Halo EFT result as a prediction for the radius relative to the core~\cite{Hammer:2011ye}. Using the measured one-neutron separation energy of the $1/2^{+}$ state, we obtain for the charge radius of the excited $S$-wave state of $^{17}$C relative to the charge radius of $^{16}$C at LO: \begin{equation} \label{eq:charge_radius} \braket{r_E^2}^{1/2^{+}}_{^{17}C} - \braket{r_E^2}_{^{16}C} = 0.074 \ \text{fm}^2 \ , \end{equation} where the error from NLO corrections is about 50\%. To make a numerical prediction for the full charge radius of $^{17}$C, we have to add the charge radius of $^{16}$C, $\braket{r_E^2}_{^{16}C}$, to our result.
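The LO number quoted above can be reproduced from Eq.~(\ref{eq:rE-swave}) with $r_0=0$; a numerical sketch (again approximating the $^{16}$C mass by $16\,\mathrm{u}$):

```python
from math import sqrt

hbarc = 197.327                # MeV fm
m_n = 939.565                  # neutron mass [MeV]
M_c = 16 * 931.494             # 16C mass [MeV] (assumption: approximated as 16 u)
m_R = m_n * M_c / (m_n + M_c)

gamma0 = sqrt(2 * m_R * (0.734 - 0.218))  # S-wave binding momentum [MeV]
f = m_R / M_c                             # mass factor f = m_R / M

# LO charge radius relative to the core (r0 = 0), converted to fm^2:
rE2 = f**2 / (2 * gamma0**2) * hbarc**2   # ~0.074 fm^2
```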
For this purpose, we use the point-proton radius $R_p$ from Ref.~\cite{Kanungo:2016tmz} and the formula for the charge radius from Ref.~\cite{Mueller:2008bj}, including the Darwin-Foldy term and the neutron charge radius as corrections, to obtain $\sqrt{\braket{r_E^2}^{1/2^{+}}_{^{17}C}} = \sqrt{\left(R_p^{^{16}C}\right)^2 +r_p^2 + \frac{3}{4 m^2} + \frac{N}{Z} r_n^2 + 0.074 \text{ fm}^2} = 2.53(5)$ fm. Here we have used the proton, $r_p = 0.875$ fm, and neutron charge radii, $r_n^2 = -0.116$ fm$^2$ \cite{Yao:2006px}, and $N = 11$ ($Z = 6$) denotes the number of neutrons (protons) of $^{17}$C. The error bar includes both the experimental and the Halo EFT uncertainties. To date, there is no experimental data for the charge radius of the $1/2^{+}$ excited state to compare with. As a consistency check, we compare with the experimental value for the $3/2^+$ ground state of $^{17}$C extracted in Ref.~\cite{Kanungo:2016tmz}, $\sqrt{\braket{r_E^2}^{3/2^+}_{^{17}C}} = 2.54(4)$ fm, which is very close to our result for the $1/2^{+}$ excited state. Note that the difference between the charge radius of $^{17}$C and $^{16}$C is smaller than the experimental error from Ref. \cite{Kanungo:2016tmz} for this quantity. The charge radius of a $D$-wave state has recently been calculated in Ref.~\cite{Braun:2018hug} at LO and yields: \begin{align} \label{eq:r_E_LO} \braket{r_E^2}^{(d)} \ = -\frac{6 \tilde{L}_{C0E}^{(d) \text{ LO}}}{r_2 + \mathcal{P}_2 \gamma_2^2} \ . \end{align} Here, the counterterm $\tilde{L}_{C0E}^{(d) \text{ LO}}$ already contributes at LO while the loop contribution is suppressed. For the $D$-wave state, we also find a quadrupole moment which yields at LO: \begin{align} \label{eq:Q_LO} \mu_Q^{(d)} &= \frac{40 \tilde{L}_{C02}^{(d)\text{ LO}}} {3 \left(r_2 + \mathcal{P}_2 \gamma_2^2 \right)} \ , \end{align} where another counterterm enters at LO. 
Both $D$-wave observables have the same denominator of effective range parameters $(r_2 +\mathcal{P}_2 \gamma_2^2)$ which is related to the Asymptotic Normalization Coefficient (ANC) of the $D$-wave state, $A_2 = \sqrt{2 \gamma_2^4/(-r_2 -\mathcal{P}_2 \gamma_2^2)}$. Similar to the correlation between $\mu_Q^{(d)}$ and B(E2) in Ref.~\cite{Braun:2018hug}, we find a smooth correlation between $\braket{r_E^2}^{(d)}$ and $\mu_Q^{(d)}$: \begin{align} \mu_Q^{(d)} = -\frac{20}{9} \frac{\tilde{L}_{C02}^{(d)\text{ LO}}}{\tilde{L}_{C0E}^{(d)\text{ LO}}} \braket{r_E^2}^{(d)} \ , \end{align} which implies that ab initio calculations with different phase-shift equivalent interactions should show a linear correlation between the quadrupole moment and the charge radius. \subsection{Magnetic moments} \label{sec:magnetic-moments} \begin{figure} \centering \includegraphics[width=0.89\textwidth]{M1momentNew22} \caption{Diagrams contributing to the magnetic moment. The first diagram is the coupling of a vector photon to the charge of the core arising from minimal substitution in the Lagrangian. The second diagram displays a vector photon coupling to the magnetic moment of the neutron. The last diagram shows a two-body current. The thick solid line denotes the dressed $\sigma$-propagator.} \label{fig:diagramsM1momentGammaV} \end{figure} The magnetic properties of shallow bound states are predominantly determined by the magnetic moments of their degrees of freedom. The magnetic moment of a single particle is introduced into the Lagrangian through an additional {\it magnetic} one-body operator \cite{Chen:1999tn,Fernando:2015jyd}. An additional counterterm enters via a two-body current.
Assuming a spin-0 core, the effective Lagrangian is \begin{align} \label{eq:L_magnetic} \mathcal{L}_{M} = \kappa_n \mu_N n^\dagger \boldsymbol{\sigma \cdot B} n + 2 \mu_N L_M^J \Phi^\dagger \boldsymbol{S_J \cdot B} \Phi ~, \end{align} where $\Phi$ is a place holder for the relevant auxiliary field ($\sigma_s$, $\pi_s$, $d_{J,M}$, ...), $\boldsymbol{S_J}$ is the corresponding spin matrix for spin $J$, $\mu_N$ denotes the nuclear magneton, and $L^J_M$ the coupling constant for the magnetic two-body current. For the neutron anomalous magnetic moment we use $\kappa_n = -1.91304$. \subsubsection{Magnetic moment of the $1/2^{+}$ state} \label{sec:magetic-moment-swave} We reproduce the results obtained by Fernando {\it et al.} \cite{Fernando:2015jyd}, who calculated electromagnetic form factors for $S$-wave states of one-neutron halo nuclei. Up to NLO, only the last two diagrams in Fig.~\ref{fig:diagramsM1momentGammaV} contribute to the magnetic form factor in the Breit frame: \begin{align} \frac{e Q_c}{2 M_{nc}} G_M(q^2) &= Z_\sigma \mu_N \left(g_0^2 \kappa_n \frac{m m_R}{\pi q} \arctan\left[\frac{q m_R}{2 m \gamma_0}\right] + L_M^\sigma \right)~, \end{align} with \begin{align} Z_\sigma = \frac{2\pi \gamma_0}{m_R^2 g_0^2} (1+r_0\gamma_0)~, \qquad \text{and we define} \qquad \tilde{L}_M^\sigma = \frac{2 \pi L_M^\sigma}{m_R^2 g_0^2} \ . \end{align} The magnetic moment $\kappa_\sigma$ is obtained by evaluating the form factor at $q^2=0$: \begin{align} \kappa_\sigma &= \frac{e Q_c}{2 M_{nc}} G_M(0) =\left( \kappa_n + \tilde{L}_M^\sigma \gamma_0 \right) (1+r_0\gamma_0)~, \end{align} where $\kappa_\sigma$ is given in units of $\mu_N$. Naive dimensional analysis with rescaled fields $[\tilde{\sigma}] = 2$ \cite{Hammer:2011ye} determines the scaling of the counterterm $\tilde{L}_M^\sigma \sim M_{\text{hi}}^{-1}$. As a consequence, $\tilde{L}_M^\sigma$ contributes at NLO.
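The $q \to 0$ limit leading to $\kappa_\sigma$ can be verified symbolically from the expressions above; a sketch using a computer algebra system:

```python
import sympy as sp

# Symbols for the couplings and scales appearing in the form factor.
q, g0, m, mR, gamma0, r0, L, muN = sp.symbols(
    "q g_0 m m_R gamma_0 r_0 L mu_N", positive=True)
kn = sp.Symbol("kappa_n")  # neutron anomalous magnetic moment (sign left free)

# NLO wave function renormalization and the Breit-frame magnetic form factor:
Z_sigma = 2 * sp.pi * gamma0 / (mR**2 * g0**2) * (1 + r0 * gamma0)
GM = Z_sigma * muN * (g0**2 * kn * m * mR / (sp.pi * q)
                      * sp.atan(q * mR / (2 * m * gamma0)) + L)

# Magnetic moment: evaluate the form factor at q = 0.
kappa_sigma = sp.limit(GM, q, 0)

Ltilde = 2 * sp.pi * L / (mR**2 * g0**2)
# Expected: mu_N * (kappa_n + Ltilde * gamma0) * (1 + r0 * gamma0)
expected = muN * (kn + Ltilde * gamma0) * (1 + r0 * gamma0)
```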
At LO, the magnetic moment of the $1/2^{+}$ state is thus given by the magnetic moment of the neutron, $\kappa_n$. \subsubsection{Magnetic moments of the $3/2^{+}$ and $5/2^{+}$ states} \label{sec:magetic-moment-dwave} In the case of the $D$-wave, the only contribution to the magnetic moment at LO is the two-body current in Eq. \eqref{eq:L_magnetic}, which corresponds to the last diagram in Fig. \ref{fig:diagramsM1momentGammaV}, and we obtain: \begin{align} \frac{e Q_c}{2 M_{nc}} G_M(q^2) &= Z_d \mu_N L_M^d = -\frac{\mu_N \tilde{L}_M^d}{r_2 + \mathcal{P}_2 \gamma_2^2} \ , \end{align} with \begin{align} Z_d = -\frac{15\pi}{m_R^2 g_2^2} \frac{1}{r_2 + \mathcal{P}_2 \gamma_2^2}~, \qquad \text{and} \qquad \tilde{L}_M^d = \frac{15 \pi L_M^d}{m_R^2 g_2^2} \ . \end{align} This yields for the magnetic moment at LO: \begin{align} \kappa_d = - \frac{ \tilde{L}_M^d}{{r_2 + \mathcal{P}_2 \gamma_2^2}} ~, \end{align} where $\kappa_d$ is again given in units of $\mu_N$. Beyond LO, we also need to consider the two loop diagrams in Fig. \ref{fig:diagramsM1momentGammaV}. Therefore, we require additional counterterms to renormalize the corresponding divergences. This makes predictions even harder, and for that reason, we do not calculate the NLO contribution to the magnetic form factors for the $D$-wave state explicitly. In general, the magnetic moment of the $D$-wave states will thus differ significantly from the magnetic moment of the neutron since $\kappa_n$ is an NLO contribution. \section{Electromagnetic transitions and capture reactions of $^{17}$C} \label{sec:trans_cap} \subsection{E2 transitions} The ground state and the two excited states of $^{17}$C have positive parity and differ by at most 2 units in total angular momentum. All states can therefore be connected by E2 transitions.
The transition strength for $S \rightarrow D'$ was calculated at LO in Ref.~\cite{Braun:2018hug}: \begin{align} \label{eq:E2_LO} \text{B(E2: $1/2^+ \to 5/2^+$)} &= -\frac{4}{5 \pi} \frac{Z_{eff}^2 e^2}{r_{2'} +\mathcal{P}_{2'} \gamma_{2'}^2} \ \gamma_0 \ \left[ \frac{3\gamma_0^2 + 9\gamma_0\gamma_{2'} + 8\gamma_{2'}^2}{(\gamma_0 + \gamma_{2'})^3} \right]^2 , \end{align} where the effective charge for $^{17}$C, $Z_{eff} = (m/M_{nc})^2 Q_c \approx 0.021$~\cite{Typel:2004us}, comes out of the calculation automatically. At NLO, there is an unknown short-range contribution that enters via a counterterm. For the transition strength B(E2: $1/2^+ \to 3/2^+$), we get the same result for the amplitude but with different Clebsch--Gordan coefficients (leading to a relative factor of $3/2$) and the appropriate binding momentum and renormalization constant for the $3/2^+$ ground state: \begin{align} \label{eq:E2_LOx} \text{B(E2: $1/2^+ \to 3/2^+$)} &=- \frac{8}{15 \pi} \frac{Z_{eff}^2 e^2}{r_2 +\mathcal{P}_2 \gamma_2^2} \ \gamma_0 \ \left[ \frac{3\gamma_0^2 + 9\gamma_0\gamma_2 + 8\gamma_2^2}{(\gamma_0 + \gamma_2)^3} \right]^2 . \end{align} Following the approach in Ref.~\cite{Braun:2018hug}, we can also calculate the E2 transition for $D \to D'$. However, we do not display the result here since the relevant diagram diverges cubically and, therefore, additional counterterms are required for this observable already at LO. \subsection{M1 transitions} \subsubsection{S $\rightarrow$ D} We will first consider the M1 transition strength from the $3/2^+$ ground state ($D$-wave) to the first excited $1/2^+$ state ($S$-wave) in $^{17}$C since it was measured in Refs.~\cite{Suzuki:2008zz,Smalley:2015ngy}. The experimental result, $\text{B(M1: $1/2^+ \to 3/2^+$)} = 1.04^{+0.03}_{-0.12} \times 10^{-2} \mu_N^2$ \cite{Smalley:2015ngy}, or $0.58 \times 10^{-2}$~W.u. when expressed in Weisskopf units, is small compared with typical M1 transition strengths in nuclei.
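The quoted Weisskopf-unit value follows from the standard single-particle estimate for M1 transitions, $B_W(\mathrm{M1}) = \frac{45}{8\pi}\,\mu_N^2 \approx 1.79\,\mu_N^2$ (the usual Weisskopf estimate, supplied here as an assumption since it is not stated in the text):

```python
from math import pi

B_M1 = 1.04e-2          # measured B(M1: 1/2+ -> 3/2+) [mu_N^2]
B_W_M1 = 45 / (8 * pi)  # Weisskopf unit for M1 [mu_N^2], ~1.79

B_M1_in_Wu = B_M1 / B_W_M1   # ~0.58e-2 W.u.
```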
In the neutron-core picture of Halo EFT, the M1 transition from a $D$-wave to an $S$-wave state is forbidden for one-body currents, which is in agreement with the experimental suppression of the transition. The non-zero transition strength can only be accounted for by a two-body current which takes short-ranged (core) physics into account. We therefore add the gauge-invariant counterterm \begin{align} \mathcal{L}_{M} = -\mu_N L_{M1}^{\sigma d} \sigma_m^\dagger d_{m'} \left(\frac{1}{2} m 1 i \bigg| \frac{3}{2} m' \right) B_i \ . \end{align} By rescaling the fields to absorb unnaturally large coupling constants, leading to $[\tilde{\sigma}] = 2$, $[\tilde{d}] = 0$, and using naive dimensional analysis for the rescaled fields \cite{Beane:2000fi}, we find $L_{M1}^{\sigma d} \sim M_{\text{hi}} l_{M1}^{\sigma d} g_0 g_2 m_R^2$ with $l_{M1}^{\sigma d}$ of order one. To obtain the magnetic transition amplitude we calculate the vertex function \begin{align} \Gamma_{m m' i} = \left(\frac{1}{2} m 1 i \bigg| \frac{3}{2} m' \right) \mu_N \tilde{L}_{M1}^{\sigma d} \epsilon_{ijk} k_j ~, \end{align} with $\tilde{L}_{M1}^{\sigma d} = \frac{\sqrt{30}\pi}{m_R^2 g_0 g_2} L_{M1}^{\sigma d}$. If we consider the case $m = -m' = \pm 1/2$ and choose the photon to be traveling in the $\hat{z}$ direction, we find \begin{align} \bar{\Gamma}_{\pm \mp, \mp1} = \mp \frac{\mu_N}{\sqrt{3}} \tilde{L}_{M1}^{\sigma d} \omega ~. \end{align} This yields for the M1 transition strength: \begin{align} \text{B(M1: $1/2^+ \to 3/2^+$)} = \frac{3}{4\pi} \left( \frac{\bar{\Gamma}_{\pm \mp, \mp1}}{\omega}\right)^2 = -\frac{1}{4\pi} \frac{\gamma_0}{r_2 +\mathcal{P}_2 \gamma_2^2} \left( \tilde{L}_{M1}^{\sigma d} \right)^2 \mu_N^2 ~.
\label{eq:M1x} \end{align} Moreover, combining Eqs.~(\ref{eq:M1x}) and (\ref{eq:E2_LOx}), we find a correlation between B(E2) and B(M1): \begin{align} \text{B(E2: $1/2^+ \to 3/2^+$)} = \frac{32}{15} \frac{Z_{eff}^2 e^2}{\left(\tilde{L}_{M1}^{\sigma d} \right)^2 \mu_N^2} \ \left[ \frac{3\gamma_0^2 + 9\gamma_0\gamma_2 + 8\gamma_2^2}{(\gamma_0 + \gamma_2)^3} \right]^2 \text{B(M1: $1/2^+ \to 3/2^+$)} ~. \end{align} If we use the experimental result for B(M1: $1/2^+ \to 3/2^+$) $= 1.04^{+0.03}_{-0.12} \times 10^{-2} \mu_N^2$ and employ naive dimensional analysis for the counterterm, $\tilde{L}_{M1}^{\sigma d} \sim M_{\text{hi}} \approx 0.28 \text{ fm}^{-1}$, we can make a rough prediction for B(E2), \begin{align} \text{B(E2: $1/2^+ \to 3/2^+$)} \approx 3 \times 10^{-2} \ e^2 \text{fm}^4\, . \end{align} Moreover, we can compare the M1 and E2 transition strengths for $^{17}$C if we look at the transition rates \cite{Greiner:1996nuc}, \begin{align} T(R \lambda) = \frac{8\pi (\lambda +1)}{\lambda [(2\lambda+1)!!]^2} \omega^{2\lambda+1} B(R \lambda)~, \end{align} which, in contrast to B(M1) and B(E2), have the same units. Here $R$ stands for E or M, $\lambda$ denotes the order of the transition, and $\omega$ denotes the photon energy, which in this case is $0.218$ MeV (cf.~Fig.~\ref{fig:levelschemeC17}). Using the naive dimensional analysis result for $\tilde{L}_{M1}^{\sigma d}$ from above we find: \begin{align} \frac{T(E2)}{T(M1)} = \frac{8 \omega^2}{125} \frac{Z_{eff}^2 e^2}{\left(\tilde{L}_{M1}^{\sigma d} \right)^2 \mu_N^2} \ \left[ \frac{3\gamma_0^2 + 9\gamma_0\gamma_2 + 8\gamma_2^2}{(\gamma_0 + \gamma_2)^3} \right]^2 \approx 1\times 10^{-5}~, \end{align} which implies that the M1 transition strongly dominates over E2 for $^{17}$C. \subsubsection{D' $\rightarrow$ D} \begin{figure}[t] \centering \includegraphics[width=0.99\textwidth]{M1transNew2} \caption{Relevant diagrams for the M1 transition.
In diagram (a), a vector photon couples to the magnetic moment of the neutron and in (b) to the electric charge of the core. In the two remaining diagrams, the photon couples directly to the $D$-wave dimers. For a more detailed description of the lines, see Fig. \ref{fig:dimer-propagator}.} \label{fig:diagramsM1trans} \end{figure} The M1 transition strength from the $3/2^+$ ground state ($D$-wave) to the second excited $5/2^+$ state ($D'$-wave) in $^{17}$C was also measured in Ref.~\cite{Suzuki:2008zz}: $\text{B(M1: $5/2^+ \to 3/2^+$)} = 7.12_{-0.96}^{+1.27} \times 10^{-2} \mu^2_N$. Compared to the $D \to S$-state M1 transition strength, it is around one order of magnitude larger. This is in agreement with the fact that M1 transitions are allowed for neutron-core systems with one-body currents by the usual selection rules. We calculate both loop diagrams in Fig. \ref{fig:diagramsM1trans} and find that we need additional counterterms to absorb all divergences. Moreover, we obtain results for the M3 and M5 transitions. We find that two different counterterms are needed for the M1 transition and also two for the M3 transition. In the following, we concentrate the discussion on the M1 transition. In this case, the two counterterms are given by: \begin{align} \mathcal{L}_{M} = - L^{dd'}_{M1a} \mu_N d^\dagger_{ij} d'_{ij} \sigma_k^{m_s m_{s'}} B_k \ - L^{dd'}_{M1b} \mu_N d^\dagger_{ij} \boldsymbol{\nabla}\cdot{\bf A} d'_{ij} \ . \end{align} The first counterterm renormalizes the scale dependence from diagram (a), with the magnetic photon coupling to the neutron, and the second one renormalizes the scale for the vector photon coupling in diagram (b).
For the calculation, it is convenient to define: \begin{align} \tilde{L}_{M1a}^{dd'} &= \frac{15 \pi}{m_R^2 g_{2'} g_2} L^{dd'}_{M1a} + \frac{15}{4} \left( \gamma_2^2+\gamma_{2'}^2 \right) \kappa_n \mu \ , \\ \tilde{L}^{dd'}_{M1b} &= \frac{15 \pi}{m_R^2 g_{2'} g_2} L^{dd'}_{M1b} + \frac{15}{4} \left( \gamma_2^2+\gamma_{2'}^2 \right) \frac{m_R Q_c}{M} \mu \ , \end{align} where $\mu$ is the PDS scale. Again, the photon has four-momentum $k = (\omega, \boldsymbol{k})$, and its polarization index is denoted by $\nu$. The computation of both diagrams yields a vertex function $\Gamma_{m m'\nu}$, where $m$ is the total angular momentum projection of the $3/2^+$ state and $m'$ denotes the spin projection of the $5/2^+$ state. We compute the vertex function with respect to the specific components of the $D$-wave interaction: \begin{align} \Gamma_{m m' \nu} = \sum_{\alpha\beta\delta\eta m_l m_l' m_s m_s'} {\left( \frac{1}{2} m_s 2 m_l \left| \frac{3}{2} m \right. \right) \left( 1 \alpha 1 \beta \left| 2 m_l \right. \right) \left( \frac{1}{2} m_s' 2 m_l' \left| \frac{5}{2} m' \right. \right)\left( 1 \delta 1 \eta \left| 2 m_l' \right. \right) \tilde{\Gamma}_{\alpha\beta\delta\eta \nu}} \ . \end{align} We calculate the irreducible vertex in Coulomb gauge so that we have $\boldsymbol{k} \cdot \boldsymbol{\epsilon} = 0$ for real photons. Additionally, we choose $\boldsymbol{k} \cdot \boldsymbol{p} = 0$, where $\boldsymbol{p}$ denotes the incoming momentum of the $D$-wave state.
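The Clebsch-Gordan structure of this decomposition can be checked mechanically. The following sketch (our own illustrative snippet, not code from the paper) verifies with sympy that the coupling coefficients used for the incoming $3/2^+$ state are correctly normalized:

```python
# Illustrative snippet (not from the paper): check that the
# Clebsch-Gordan coefficients (1/2 m_s, 2 m_l | 3/2 m) appearing in
# the vertex decomposition are correctly normalized for every m.
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

half = S(1) / 2

def cg(j1, m1, j2, m2, j3, m3):
    """Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>."""
    return CG(j1, m1, j2, m2, j3, m3).doit()

for m in (-S(3)/2, -half, half, S(3)/2):
    # only m_l = m - m_s contributes for fixed m
    total = sum(cg(half, ms, 2, m - ms, S(3)/2, m)**2
                for ms in (-half, half))
    assert simplify(total - 1) == 0  # orthonormality of the coupling
```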
As a result, the space-space components of the vertex function in Cartesian coordinates for the left diagram can be written as: \begin{align} \tilde{\Gamma}_{ijopk} = \Gamma_M^{(a)} \epsilon_{abk} \sigma_{a}^{m_s m_{s'}} k_b \left(\frac{\delta_{io} \delta_{jp} + \delta_{ip} \delta_{jo}}{2} - \frac{1}{3} \delta_{ij} \delta_{op}\right) \ , \end{align} and for the right one: \begin{align} \tilde{\Gamma}_{ijopk} = \Gamma_M^{(b)} p_k \left(\frac{\delta_{io} \delta_{jp} + \delta_{ip} \delta_{jo}}{2} - \frac{1}{3} \delta_{ij} \delta_{op}\right) + \Gamma_{E2} \left[k_i \left(\frac{\delta_{j p} \delta_{k o}+\delta_{j o} \delta_{k p}}{2} - \frac{1}{3} \delta_{j k} \delta_{o p}\right) + \cdots \right] \ . \end{align} In the left diagram, the photon couples to the spin of the neutron and we get a spin flip $m_s \neq m_s'$. In the case of the right diagram there is no spin flip, so that $m_s=m_{s'}$. By choosing the photon to be traveling in the $\hat{z}$ direction, it follows from the tensor structure of $\tilde{\Gamma}_{ijop\nu}$ that $m_l = m_l'$ and $\nu \neq 0$. For the case $m=\pm1/2=-m'$ we get: \begin{align} -\Gamma_{-+,1} = \Gamma_{+-,-1} = \frac{\sqrt{6}}{5} \Gamma_M^{(a)} \sqrt{2} \omega \ , \end{align} and for $m=m'$ we get $0$ for all possible values. This yields for the B(M1: $3/2^+ \to 5/2^+$) transition: \begin{align} \notag \text{B(M1: $3/2^+ \to 5/2^+$)} &= \frac{3}{4 \pi} \left(\frac{\Gamma_{+-,-1}}{\omega}\right)^2 = \frac{9}{25\pi} \left(\frac{\bar{\Gamma}_{M}^{(a)} \omega}{\omega}\right)^2 \\ &= \frac{9 \mu_N^2}{25\pi} \frac{1}{r_2 + \mathcal{P}_2\gamma_2^2} \frac{1}{r_{2'} + \mathcal{P}_{2'}\gamma_{2'}^2} \left[ \tilde{L}^{dd'}_{M1a} + \frac{2 \gamma _{2'}^4 \kappa_n}{ \left(\gamma _{2'}+\gamma _2\right)} + 2\kappa_n\left( \gamma _2 \gamma _{2'}^2 + \gamma _2^3 \right) \right]^2 \ , \end{align} with the renormalized, irreducible vertex $\bar{\Gamma}_M = \sqrt{Z_d Z_{d'}} \Gamma_M$.
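As a quick sanity check, the numerical prefactors quoted above are mutually consistent: identifying $\Gamma_M^{(a)}$ with the reduced vertex $\bar{\Gamma}$ times $\omega$ and inserting $\Gamma_{+-,-1}$ into the B(M1) formula must reproduce the $9/(25\pi)$ coefficient. A minimal sympy check (our own cross-check, not from the paper):

```python
# Consistency of the quoted prefactors: Gamma_{+-,-1} =
# (sqrt(6)/5)*sqrt(2)*Gbar*omega inserted into
# (3/(4*pi))*(Gamma/omega)^2 must give (9/(25*pi))*Gbar^2.
from sympy import sqrt, pi, Rational, symbols, simplify

Gbar, omega = symbols("Gbar omega", positive=True)
Gamma = sqrt(6) / 5 * sqrt(2) * Gbar * omega
BM1 = Rational(3, 4) / pi * (Gamma / omega) ** 2
assert simplify(BM1 - Rational(9, 25) / pi * Gbar**2) == 0
```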
By rescaling the fields, $[\tilde{d}] = [\tilde{d}'] = 0$, and using dimensional analysis we find that the counterterm scales as $L^{dd'}_{M1a} \sim M_{\text{hi}}^3 l^{dd'}_{M1a} g_2 g_{2'} m_R^2$ with $l^{dd'}_{M1a}$ of order one. In contrast, the contribution from the loop scales as $M_{\text{lo}}^3$, which means that at LO only the counterterm contributes to the M1 transition and the loop diagram is suppressed by $(M_{\text{lo}}/M_{\text{hi}})^3$. Thus the M1 transition is strongly dominated by short-range physics. \subsection{E1 neutron capture on $^{16}$C} \subsubsection{E1 capture into the $1/2^{+}$ state} E1 capture proceeds dominantly through the vector coupling of the photon to the halo core. The corresponding leading-order operator is generated through minimal substitution in Eq.~\eqref{eq:lagrangian_dwave}. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{E1cap_pd} \caption{Relevant diagram contributing to the E1 capture amplitude to $S$-wave states at LO. For a more detailed description of the lines, see Figs.~\ref{fig:dimer-propagator} and \ref{fig:diagramsM1momentGammaV}.} \label{fig:diagramsE1captureLO} \end{figure} The diagram that contributes at LO to this process is shown in Fig.~\ref{fig:diagramsE1captureLO}. It is the time-reversed diagram of the photodissociation reaction considered in Ref.~\cite{Hammer:2011ye}. At LO, the amplitude is \begin{equation} \bar{\Gamma}^{i} = \frac{\boldsymbol{\epsilon^{i} \cdot p}}{M} \frac{\sqrt{Z_\sigma} e Q_c g_0 2 m_R}{\gamma_0^2 + (\boldsymbol p - \frac{m}{M_{nc}} \boldsymbol k)^2} \ , \end{equation} where $i$ is the photon polarization, $\boldsymbol p$ denotes the relative momentum of the $nc$ pair and $\boldsymbol{k}$ the photon momentum. Throughout this section we choose the $nc$ pair to be traveling in the $\hat{z}$ direction, which means that $\boldsymbol{p} = |\boldsymbol{p}| \boldsymbol{e}_z$.
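The amplitude is proportional to $\boldsymbol{\epsilon}\cdot\boldsymbol{p}$; summing $|\boldsymbol{\epsilon}\cdot\boldsymbol{p}|^2$ over the two transverse photon polarizations gives $p^2\sin^2\theta$, the angular dependence that appears in the capture cross section. A small numerical check with arbitrary illustrative kinematics (our own snippet):

```python
# Numerical check (illustrative values, not from the paper): summing
# |eps . p|^2 over two transverse photon polarizations (eps . k = 0)
# gives p^2 * sin^2(theta), with theta the angle between k and p.
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)
khat = k / np.linalg.norm(k)
p = rng.normal(size=3)

# build an orthonormal transverse polarization basis {e1, e2}
e1 = np.cross(khat, [1.0, 0.0, 0.0])
e1 /= np.linalg.norm(e1)
e2 = np.cross(khat, e1)

pol_sum = np.dot(e1, p) ** 2 + np.dot(e2, p) ** 2
cos_theta = np.dot(khat, p) / np.linalg.norm(p)
assert np.isclose(pol_sum, np.dot(p, p) * (1.0 - cos_theta**2))
```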
Since $m/M_{nc}$ is small and power counting implies $p \sim \gamma_0 \sim M_{\text{lo}}$ and $k \sim M_{\text{lo}}^2/M_{\text{hi}}$, we can neglect the recoil term $\sim \boldsymbol{p \cdot k}$ in the denominator. By averaging over the neutron spin and photon polarization and summing over the outgoing $S$-wave spin we obtain at LO ($ \frac{m}{M_{nc}} k \ll p$): \begin{align} \frac{d\sigma^{cap}}{d\Omega} &= \frac{m_R}{4\pi^2} \frac{k}{p} |\mathcal{M}^{(1/2)}|^2 \ = \ \frac{e^2 Z_{eff}^2}{\pi m_R^2} \frac{p \gamma_0 \sin^2{\theta}}{(p^2+\gamma_0^2)} \ , \end{align} with $k \approx (p^2+\gamma_0^2)/2 m_R$, $\hat{\boldsymbol{k}} \cdot \hat{\boldsymbol{p}} = \cos\theta$, $Z_{eff} = (m_R/M) Q_c \approx 0.353$ and \begin{align} |\mathcal{M}^{(1/2)}|^2 &= \frac{1}{2} \sum_{i,m_s,M} |\bar{\Gamma}^{i}|^2 \delta_{m_s,M} \ , \end{align} where $m_s$ denotes the neutron spin and $M$ the $S$-wave polarization. Since the neutron spin is unaffected by this reaction, $m_s$ and $M$ have to be the same. After integration over $d\Omega$ we get \begin{align} \sigma^{cap} &= \frac{m_R}{\pi} \frac{k}{p} |\mathcal{M}^{(1/2)}|^2 \ = \ \frac{8 e^2 Z_{eff}^2}{3 m_R^2} \frac{p \gamma_0}{(p^2+\gamma_0^2)} = \ \frac{32 \pi \alpha Z_{eff}^2}{3 m_R^2} \frac{p \gamma_0}{(p^2+\gamma_0^2)} \ , \end{align} with the fine-structure constant $\alpha = e^2/(4\pi)$. Exploiting the detailed balance theorem, the capture cross section $\sigma^{cap}$ can be related to the photodissociation cross section $\sigma^{dis}$~\cite{Baur:1986pd}, \begin{equation} \label{eq:detailed_balance} \sigma^{cap} = \frac{2(2j_{^{17}\text{C}}+1)}{(2j_n+1)(2j_c+1)} \frac{k^2}{p^2} \ \sigma^{dis} \ = \ 2 \frac{k^2}{p^2} \ \sigma^{dis} \ . \end{equation} Our numerical results for the E1 capture into $^{17}$C and photodissociation of $^{17}$C obtained using Eq.~\eqref{eq:detailed_balance} at LO are shown in Fig. \ref{fig:E1captureCrossSectionC17swave}. At NLO, there is an additional contribution from the effective range $r_0$.
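Before turning to the NLO correction, two bookkeeping steps above can be verified symbolically (our own cross-check): the solid-angle integral $\int \sin^2\theta \, d\Omega = 8\pi/3$ that converts $d\sigma/d\Omega$ into $\sigma^{cap}$, and the spin factor in the detailed-balance relation, which evaluates to $2$ for $j_{^{17}\text{C}} = j_n = 1/2$ and $j_c = 0$:

```python
# Symbolic check of two LO bookkeeping steps: the solid-angle integral
# behind the factor 8/3 in sigma^cap, and the spin-statistics factor in
# the detailed-balance relation for the 1/2+ state.
from sympy import sin, pi, integrate, symbols, Rational

theta = symbols("theta")

# int sin^2(theta) dOmega = 2*pi * int_0^pi sin^3(theta) dtheta = 8*pi/3
solid_angle = 2 * pi * integrate(sin(theta) ** 3, (theta, 0, pi))
assert solid_angle == Rational(8, 3) * pi

# 2*(2*j_17C + 1) / ((2*j_n + 1)*(2*j_c + 1)) with j_17C = j_n = 1/2, j_c = 0
j17, jn, jc = Rational(1, 2), Rational(1, 2), 0
assert 2 * (2 * j17 + 1) / ((2 * jn + 1) * (2 * jc + 1)) == 2
```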
By assuming that $r_0$ scales as $1/M_{\text{hi}}$, we can estimate the size of the NLO contribution by multiplying the LO result by a factor of $(1 \pm \gamma_0/M_{\text{hi}})$ and adding an error band to our LO results in Fig.~\ref{fig:E1captureCrossSectionC17swave}. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{E1capcrossC17swaveNew} \includegraphics[width=0.49\textwidth]{E1disscrossC17swaveNew} \caption{Left panel: E1 capture cross section into $^{17}$C as a function of the center-of-mass energy $E_{cm}$. Right panel: E1 photodissociation cross section as a function of $E_{cm}$. The solid (blue) line denotes the LO result and the dashed (red) lines show an estimate of the NLO corrections.} \label{fig:E1captureCrossSectionC17swave} \end{figure} \subsubsection{E1 capture into the $3/2^{+}$ and $5/2^{+}$ states} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{E1cap_d} \caption{Relevant diagrams for E1 capture to $D$-wave states at LO. For a more detailed description of the lines, see Fig. \ref{fig:dimer-propagator}.} \label{fig:diagramsE1captureD} \end{figure} In this section, we calculate E1 neutron capture to the $3/2^+$ $D$-wave ground state and $5/2^+$ excited state of $^{17}$C. The relevant diagrams that emerge from minimal substitution in our Lagrangian~\eqref{eq:lagrangian_dwave} are shown in Fig.~\ref{fig:diagramsE1captureD}. They yield \begin{align} \notag \bar{\Gamma}^{i}_{m_s JM} &= \sum_{m_{s'} m_l} \left(\left. \frac{1}{2} m_{s'} \ 2 m_l \right\vert J\, M \right) \sum_{\alpha \beta} \left(\left.
1 \alpha \ 1 \beta \right\vert 2 m_l \right) \sqrt{Z_d} g_2 e Q_c \frac{2 m_R}{M} \times \\ &\left[ \frac{\left( \boldsymbol{p} - \frac{m}{M_{nc}} \boldsymbol{k} \right)_\alpha \left( \boldsymbol{p} - \frac{m}{M_{nc}} \boldsymbol{k} \right)_\beta }{\gamma_2^2 + \left(\boldsymbol{p} - \frac{m}{M_{nc}} \boldsymbol{k} \right)^2} \ \boldsymbol{\epsilon^{i} \cdot p} + \epsilon^{i}_\alpha \left(p_\beta - \frac{m}{2 M_{nc}} k_\beta \right) \right] \delta_{m_s m_s'}\ , \end{align} with the charge of the core $Q_c$, the photon momentum $\boldsymbol{k}$, the relative momentum of the incoming $nc$ pair $\boldsymbol{p}$, the photon polarization $i$ and $JM$ denoting the spin and polarization of the $D$-wave. Note that the neutron spin is unaffected by the E1 capture process up to this order. If we project out the $J=3/2$ part of the amplitude $M^{(3/2)}$ and average (sum) over incoming (outgoing) spins, respectively, we finally find the differential cross section for the E1 capture process at LO ($ \frac{m}{M_{nc}} k \ll p$): \begin{align} \frac{d\sigma^{cap}}{d\Omega} &= \frac{m_R}{4\pi^2} \frac{k}{p} \left| \mathcal{M}^{(3/2)} \right|^2 = \frac{15}{2 \pi} \frac{\left( p^2 + \gamma_2^2 \right)}{m_R^2 p} \frac{e^2 Z_{eff}^2}{-r_2 - \mathcal{P}_2 \gamma_2^2} X(\theta) = \frac{30 \alpha Z_{eff}^2}{-r_2 - \mathcal{P}_2 \gamma_2^2} \frac{\left( p^2 + \gamma_2^2 \right)}{m_R^2 p} X(\theta) \ , \end{align} with the fine-structure constant $\alpha$, $Z_{eff} = (m_R/M) Q_c$, \begin{align} &|\mathcal{M}^{(3/2)}|^2 = \frac{1}{2} \sum_{i,m_s,M} |\bar{\Gamma}^{i}_{m_s 3/2 M}|^2 \ , \end{align} and \begin{align} &X(\theta) = \frac{1}{15} \left[ 2 p^2 (13 - \cos (2 \theta ))+ \frac{4 p^4 \sin ^2(\theta )}{\left(\gamma _2^2+p^2\right)} \left( \frac{p^2}{\left(\gamma _2^2+p^2\right)} + 2 \right) \right] \ . 
\end{align} After integrating over $d\Omega$ we find for the total cross section: \begin{align} \sigma^{cap} = \frac{\alpha Z_{eff}^2}{-r_2 - \mathcal{P}_2 \gamma_2^2} \frac{32 \pi p}{3 m_R^2} \frac{ \left(5 \gamma _2^4 + 11 p^4+ 14 \gamma _2^2 p^2\right)}{ \left(\gamma _2^2+p^2\right)} \ . \end{align} From an experimental measurement of the capture (or dissociation) cross section we can therefore extract the numerical value of the combination of $D$-wave effective range parameters $1/(-r_2 - \mathcal{P}_2 \gamma_2^2)$. For the $5/2^+$ state we project out the $J=5/2$ part of the amplitude $M^{(5/2)}$ and obtain: \begin{align} \frac{d\sigma^{cap}}{d\Omega} &= \frac{m_R}{4\pi^2} \frac{k}{p} \left| \mathcal{M}^{(5/2)} \right|^2 = \frac{45}{4\pi} \frac{\left( p^2 + \gamma_{2'}^2 \right)}{m_R^2 p} \frac{e^2 Z_{eff}^2}{-r_{2'} - \mathcal{P}_{2'} \gamma_{2'}^2} X(\theta) = \frac{45 \alpha Z_{eff}^2}{-r_{2'} - \mathcal{P}_{2'} \gamma_{2'}^2} \frac{\left( p^2 + \gamma_{2'}^2 \right)}{m_R^2 p} X(\theta) \ , \end{align} where $X(\theta)$ is the same as for the $J=3/2$ cross section. After integrating over $d\Omega$ we find for the total cross section: \begin{align} \sigma^{cap} = \frac{\alpha Z_{eff}^2}{-r_{2'} - \mathcal{P}_{2'} \gamma_{2'}^2} \frac{16 \pi p}{m_R^2} \frac{ \left(5 \gamma _{2'}^4+ 11 p^4+ 14 \gamma _{2'}^2 p^2\right)}{ \left(\gamma _{2'}^2+p^2\right)} \ , \end{align} which equals the $J=3/2$ cross section multiplied by a factor of $3/2$, with the numerical values of $\gamma_2$, $r_2$, and $\mathcal{P}_2$ replaced by their $5/2^+$ counterparts $\gamma_{2'}$, $r_{2'}$, and $\mathcal{P}_{2'}$. \subsection{M1 neutron capture on $^{16}$C} \label{sec:m1-capture} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{M1capNew3} \caption{Relevant diagrams contributing to M1 capture at LO. For a more detailed description of the lines, see Fig.
\ref{fig:dimer-propagator} and \ref{fig:diagramsM1momentGammaV}.} \label{fig:diagramsM1capture} \end{figure} \subsubsection{M1 capture into the $1/2^{+}$ state} Similar to E1 capture, we can calculate the M1 capture cross section. The main difference between the two processes is parity conservation in the M1 matrix element. Therefore, the loop diagram (b) shown in Fig.~\ref{fig:diagramsM1capture} is also relevant at LO for M1 capture since initial state interactions in the $S$-wave channel have to be taken into account. Additionally, the photon now couples to the magnetic moment of the halo neutron in diagrams (a) and (b). In principle, we also need to consider diagrams which arise from minimal substitution. This is shown in diagram (c), where the photon couples to the charged $^{16}$C core. In the $S$-wave case, however, diagram (c) yields no contribution to the M1 capture process. For diagram (a) in Fig.~\ref{fig:diagramsM1capture} we get: \begin{align} \label{eq:M1capA} \bar{\Gamma}^{(a)}_{i m_s m_{s'}} = -2 \sqrt{Z_\sigma} \kappa_n \mu_N g_0 m_R \frac{\sigma_j^{m_s m_{s'}} ({\bf k} \times {\boldsymbol \epsilon^i})_j} {\gamma_0^2 + \left(\boldsymbol{p} - \frac{M}{M_{nc}} \boldsymbol{k} \right)^2} \ , \end{align} with the Pauli matrices $\sigma_j$, the photon polarization index $i$, and the relative momentum of the incoming $nc$ pair $\boldsymbol{p}$. Since the power counting stipulates $p \sim \gamma_0 \sim M_{\text{lo}}$ and $k \sim M_{\text{lo}}^2/M_{\text{hi}}$, we can neglect the recoil term $\sim \boldsymbol{p}\cdot{\bf k}$ in the denominator of Eq.~\eqref{eq:M1capA}, which yields \begin{align} \label{eq:M1capALO} \bar{\Gamma}^{(a)}_{i m_s m_{s'}} = -2 \sqrt{2 \pi \gamma_0} \kappa_n \mu_N \frac{\sigma_j^{m_s m_{s'}} ({\bf k} \times {\boldsymbol \epsilon^i})_j}{\gamma_0^2 + p^2} \ .
\end{align} Diagram (b) with the intermediate $S$-wave state yields \begin{align} \bar{\Gamma}^{(b)}_{i m_s m_{s'}} = - \sqrt{Z_\sigma} g_0^3 \kappa_n \mu_N \frac{2 \pi}{g_0^2 m_R} \frac{\sigma_j^{m_s m_{s'}} ({\bf k} \times {\boldsymbol \epsilon^i})_j}{\frac{1}{a_0} -\frac{r_0}{2} p^2 + i p} \int{\frac{dl^3}{(2\pi)^3} \frac{2 m_R}{p^2 - l^2} \frac{2 m_R}{\gamma_0^2 + \left(\boldsymbol{l} + \frac{m_R}{m} \boldsymbol{k} \right)^2}} \ , \end{align} with the loop momentum $\boldsymbol{l}$, which leads at LO to \begin{align} \bar{\Gamma}^{(b)}_{i m_s m_{s'}} = 2 \sqrt{2 \pi \gamma_0} \kappa_n \mu_N \frac{\sigma_j^{m_s m_{s'}} ({\bf k} \times {\boldsymbol \epsilon^i})_j}{\gamma_0 + i p} \frac{1}{\gamma_0 - ip} = - \bar{\Gamma}^{(a)}_{i m_s m_{s'}} \ . \end{align} As a consequence, both diagrams cancel each other at LO. In coordinate space, this process is given by an overlap integral between two orthogonal wave functions. At NLO, there is an additional contribution from the effective range $r_0$ as discussed for the E1 capture process before, which will give a correction of order $\gamma_0 r_0\approx 40\%$. Moreover, a two-body current enters at NLO with an additional counterterm that has to be fixed from data, similar to the case of magnetic moments discussed in Sec. \ref{sec:magetic-moment-swave}. This shows again that counterterms play a more dominant role in the magnetic sector than in the electric one. \paragraph{Recoil corrections} Subleading recoil corrections are usually dropped in EFT calculations for capture reactions such as this one. Taking recoil corrections into account, the first diagram (a) will give non-zero contributions to higher multipoles through higher partial waves in the initial state. The second diagram (b) in Fig.~\ref{fig:diagramsM1capture} contributes only when the core and the nucleon are in a relative $S$-wave in the initial state.
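The partial-wave content generated by recoil can be made concrete numerically: expanding the propagator denominator $1/[\gamma_0^2 + (\boldsymbol{p} - \frac{M}{M_{nc}}\boldsymbol{k})^2]$ in Legendre polynomials of $\cos\theta$ and reconstructing it from a few multipoles. The kinematic values in the sketch below are illustrative only, not fitted to $^{17}$C:

```python
# Illustrative multipole expansion of the recoil-corrected propagator
# denominator 1/(gamma0^2 + (p - c*k)^2) in Legendre polynomials of
# x = cos(theta); the kinematic values are arbitrary.
import numpy as np
from numpy.polynomial.legendre import Legendre

gamma0, p, ck = 0.10, 0.15, 0.02   # gamma0, |p|, (M/M_nc)*|k|
x = np.linspace(-1.0, 1.0, 20001)
f = 1.0 / (gamma0**2 + p**2 + ck**2 - 2.0 * p * ck * x)

def trapz(g, x):
    """Composite trapezoidal rule (version-independent helper)."""
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))

# projection a_l = (2l+1)/2 * int_{-1}^{1} f(x) P_l(x) dx
coeffs = [(2 * l + 1) / 2.0 * trapz(f * Legendre.basis(l)(x), x)
          for l in range(8)]

# a handful of multipoles already reconstructs the angular dependence
x0 = 0.3
recon = sum(a * Legendre.basis(l)(x0) for l, a in enumerate(coeffs))
exact = 1.0 / (gamma0**2 + p**2 + ck**2 - 2.0 * p * ck * x0)
assert abs(recon - exact) / exact < 1e-6
```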
The denominator in Eq.~\eqref{eq:M1capA} for diagram (a) can be spherically expanded as \begin{align} \label{eq:M1capRecoil} \frac{1}{\gamma_0^2 + \left(\boldsymbol{p} - \frac{M}{M_{nc}} \boldsymbol{k} \right)^2} &= - \sum_{l}{(2l+1) i^{2l} P_l(\hat{p} \cdot \hat{k}) \frac{M_{nc}}{2 M k p} \mathrm{Re} \bigg \{ Q_l\left( - \frac{M_{nc}}{2M pk} \left( p^2 + \frac{M^2}{M_{nc}^2} k^2 + \gamma_0^2 \right) \right) \bigg \} } \ , \end{align} where $Q_l(x)$ denotes the Legendre function of the second kind. As an example, we consider the $S$-wave result for Eq. \eqref{eq:M1capRecoil} \begin{align} - \frac{1}{a} \ln\left(1 - \frac{a}{\gamma_0^2 + \left(p + \frac{M}{M_{nc}} k \right)^2} \right) \ , \end{align} with $a = 4Mkp/M_{nc}$, which is in perfect agreement with Eq.~\eqref{eq:M1capALO} if we set $k \sim 0$ and expand the logarithm. After averaging and summing over incoming and outgoing spins, respectively, we obtain for the differential cross section the general result: \begin{align} \frac{d\sigma^{cap}}{d \Omega} = \frac{m_R}{4 \pi^2} \frac{k}{p} |\mathcal{M}^{(1/2)}|^2 = \frac{m_R}{m^2} \frac{k^3}{p} \frac{4 \alpha\kappa_n^2 \gamma_0} {\left[\gamma_0^2 + \left(\boldsymbol{p} - \frac{M}{M_{nc}} \boldsymbol{k} \right)^2 \right]^2}\ , \end{align} with the fine-structure constant $\alpha$ and \begin{align} |\mathcal{M}^{(1/2)}|^2 &= \frac{1}{2} \sum_{i,m_s,m_{s'}} |\bar{\Gamma}^{(a)}_{i m_s m_{s'}}|^2 \ . \end{align} \subsubsection{M1 capture into the $3/2^{+}$ and $5/2^{+}$ states} \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{M1capNewDwave} \caption{Relevant diagrams contributing to M1 capture into $D$-wave states up to NLO. The thick double line denotes the dressed $D$-wave dimer and the thick single line the dressed $S$-wave dimer. For a description of the other lines, see Figs. \ref{fig:dimer-propagator} and \ref{fig:diagramsM1momentGammaV}.
The solid squares denote different vertices from two-body currents.} \label{fig:diagramsM1captureD} \end{figure} In this section, we calculate M1 neutron capture from the continuum into the $3/2^+$ $D$-wave ground state or $5/2^+$ excited state of $^{17}$C. Compared to the $1/2^+$ case in the previous section, there are additional contributions from two-body currents for the $D$-wave case at LO and NLO: \begin{align} \notag \mathcal{L}_{M} =& - \mu_N L^{d'd}_{M1cap} d^\dagger_{m'} d'_{m} B_i \left(\frac{5}{2} m 1 i \left| \frac{3}{2} m' \right.\right) - \mu_N L^{dd}_{M1cap} d^\dagger_{m'} d_{m} B_i \left(\frac{3}{2} m 1 i \left| \frac{3}{2} m' \right.\right)\\ \label{eq:M1capD} & - \mu_N L^{d'd'}_{M1cap} d'^\dagger_{m'} d'_{m} B_i \left(\frac{5}{2} m 1 i \left| \frac{5}{2} m' \right.\right) - \mu_N L^{\sigma d}_{M1cap} d^\dagger_{m'} \sigma_{m} B_i \left(\frac{1}{2} m 1 i \left| \frac{3}{2} m' \right.\right)~. \end{align} By rescaling the fields to absorb unnaturally large coupling constants, leading to $[\tilde{\sigma}] = 2$, $[\tilde{d}] = [\tilde{d'}] = 0$, and using naive dimensional analysis for the rescaled fields \cite{Beane:2000fi}, we find $L_{M1cap}^{d'd} \sim M_{\text{hi}}^3 l_{M1cap}^{d'd} g_{2'} g_2 m_R^2$, $L_{M1cap}^{d^{(\prime)} d^{(\prime)}} \sim M_{\text{hi}}^3 l_{M1cap}^{d^{(\prime)} d^{(\prime)}} g^2_{2^{(\prime)}} m_R^2$ and $L_{M1cap}^{\sigma d} \sim M_{\text{hi}} l_{M1cap}^{\sigma d} g_0 g_2 m_R^2$ with the constants $l_{M1cap}^{\cdots}$ all of order one. The corresponding diagrams are shown in Fig. \ref{fig:diagramsM1captureD}. The first diagram (a) represents the first three terms in Eq. \eqref{eq:M1capD} where the two-body current is between two $D$-wave states. This is the LO contribution to the M1 capture process. The second diagram (b) belongs to the last term in Eq. \eqref{eq:M1capD} and is only relevant for the $3/2^+$ ground state. This yields an NLO contribution. The diagram (a) in Fig. 
\ref{fig:diagramsM1capture}, where the photon couples to the magnetic moment of the neutron, contributes at N$^2$LO and the two loop diagrams at N$^3$LO. Since we get additional counterterms $L_{M1cap}$ that have to be matched to data, predictions for the M1 capture in the $D$-wave case become even more complicated. For that reason, we concentrate on the LO result, which yields for the $5/2^+$ excited state: \begin{align} \notag \bar{\Gamma}_{m_s \frac{5}{2} M}^{i} = \sum_{m_s m_l} \left(\left. \frac{1}{2} m_s \ 2 m_l \right\vert \frac{5}{2} M' \right) \sum_{\alpha \beta} \left(\left. 1 \alpha \ 1 \beta \right\vert 2 m_l \right) \frac{{p}_\alpha {p}_\beta}{\sqrt{r_{2'} + \mathcal{P}_{2'} \gamma_{2'}^2}} \times \\ \mu_N \sum_{M' \gamma} \left(\frac{5}{2} M' 1 \gamma \left| \frac{5}{2} M \right.\right) ({\bf k} \times {\boldsymbol \epsilon^i})_\gamma \ \frac{ \tilde{L}_{M1cap}^{d'd'}}{\frac{1}{a_{2'}} - \frac{r_{2'}}{2} p^2 + \frac{\mathcal{P}_{2'}}{4} p^4 } \ , \end{align} with the $D$-wave polarizations $\alpha$ and $\beta$, the photon momentum $\boldsymbol{k}$, photon polarization $i$, the relative momentum of the incoming $nc$ pair $\boldsymbol{p}$, and we have defined $\tilde{L}_{M1cap}^{d'd'} = \frac{(15 \pi)^{3/2}}{m_R^2 g_{2'}^2}L_{M1cap}^{d'd'}$. For the $3/2^+$ ground state we obtain: \begin{align} \notag \bar{\Gamma}_{m_s \frac{3}{2} M}^{i} = \left(\left. 1 \alpha \ 1 \beta \right\vert 2 m_l \right) \frac{{p}_\alpha {p}_\beta \mu_N ({\bf k} \times {\boldsymbol \epsilon^i})_\gamma}{{\sqrt{r_{2} + \mathcal{P}_{2} \gamma_{2}^2}}} \left[\left(\left. \frac{1}{2} m_s \ 2 m_l \right\vert \frac{3}{2} M' \right) \left(\frac{3}{2} M' 1 \gamma \left| \frac{3}{2} M \right.\right) \frac{ \tilde{L}_{M1cap}^{dd}}{\frac{1}{a_{2}} - \frac{r_{2}}{2} p^2 + \frac{\mathcal{P}_{2}}{4} p^4 } \right. \\ + \left. \left(\left.
\frac{1}{2} m_s \ 2 m_l \right\vert \frac{5}{2} M' \right) \left(\frac{5}{2} M' 1 \gamma \left| \frac{3}{2} M \right.\right) \frac{ \tilde{L}_{M1cap}^{d'd}}{\frac{1}{a_{2'}} - \frac{r_{2'}}{2} p^2 + \frac{\mathcal{P}_{2'}}{4} p^4 } \right] \ , \end{align} where we have implicitly summed over repeated indices and we have defined $\tilde{L}_{M1cap}^{dd} = \frac{(15 \pi)^{3/2}}{m_R^2 g_{2}^2}L_{M1cap}^{dd}$ and $\tilde{L}_{M1cap}^{d'd} = \frac{(15 \pi)^{3/2}}{m_R^2 g_{2'} g_2}L_{M1cap}^{d'd}$. The differential cross section for the M1 capture process at LO for $J=3/2$ or $5/2$ is then given by: \begin{align} \frac{d\sigma^{cap}}{d \Omega} = \frac{m_R}{4 \pi^2} \frac{k}{p} |\mathcal{M}^{(J)}|^2 ~, \quad \text{with} \quad |\mathcal{M}^{(J)}|^2 &= \frac{1}{2} \sum_{i,m_s,M} |\bar{\Gamma}_{m_s J M}^{i}|^2 \ . \end{align} Since we need at least four additional input parameters to make predictions for the M1 capture process into the $D$-wave state already at LO, numerical predictions are currently not possible. This shows the limitations of Halo EFT for higher partial waves, especially in the magnetic sector. \section{Summary} \label{sec:summary} Halo nuclei are weakly bound systems of a tightly bound core nucleus and a small number of valence nucleons. Their structure can be probed experimentally by measuring capture reactions, dissociation cross sections, and charge radii. In this work, we have discussed these observables for $S$- and $D$-wave halo states using the framework of Halo EFT. We have considered the nucleus $^{17}$C as a halo nucleus consisting of a $^{16}$C core and a neutron. $^{17}$C is an interesting halo candidate since it has three $S$- and $D$-wave neutron-core states with small neutron separation energies in its spectrum. We have calculated the key observables relevant to this system, including radii and magnetic moments, as well as electric and magnetic transition rates.
Moreover, we have shown that capture reactions can provide insight into the continuum properties of the neutron-$^{16}$C system. We found that predictions of many observables for states with angular momentum larger than zero need additional input parameters beyond the neutron separation energy. This limits the predictive power of Halo EFT for such states. However, these counterterms can be matched to experiment or other theoretical calculations. For example, the counterterms appearing in the expressions for the $S$- to $D$-wave transitions can be determined in this way. Coupled-cluster calculations for $^{17}$C were carried out in Ref.~\cite{Kanungo:2016tmz} using effective interactions derived from first principles, and this approach could be extended to calculate the transitions in our work. The results could then be used to predict capture cross sections since the counterterms in capture cross sections and transition strengths are related. This strategy would provide insights into the continuum properties of ${}^{17}$C based on a combination of Halo EFT and the shell model. Alternatively, one can eliminate unknown counterterms by considering correlations between different observables. These correlations can be used to test the consistency between different ab initio calculations and/or experimental data. The structure of such correlations is universal in the sense that it is independent of the specific neutron separation energies and applies to all states with the same quantum numbers. As a consequence, Halo EFT is complementary to ab initio approaches by exploiting universal correlations driven by the weak binding. Some of the observables discussed in this work have been studied extensively in the case of the deuteron, which can be considered the lightest halo nucleus, consisting of a neutron and a proton core~\cite{Chen:1999tn,Chen:1999vd}. One-neutron halo nuclei can therefore have similar electromagnetic properties to the deuteron.
For example, the expression for the LO charge radius of an $S$-wave neutron halo nucleus shown in Eq.~\eqref{eq:rE-swave} is the same as for the deuteron. However, the deuteron consists of two spin-1/2 particles and interacts resonantly in the spin-triplet and spin-singlet $S$-wave channels. This leads to a relatively large M1 capture cross section between the unbound spin-singlet and the spin-triplet channel in which the deuteron resides. The absence of a second resonantly interacting channel leads to a strong suppression of magnetic capture in the case of ${}^{17}$C. We hope that our investigation will motivate further theoretical and experimental investigations of $^{17}$C. The expressions presented in this paper should be useful for the analysis of experimental and/or ab initio data on ${}^{17}$C in order to establish its halo nature. The combination of Halo EFT and ab initio calculations as was done in Refs. \cite{Hagen:2013jqa,Ryberg:2014exa,Zhang:2014zsa} could provide insights into the continuum properties of ${}^{17}$C and should facilitate a test of the power counting that was used in this work. Future extensions of our calculation to NLO and beyond would improve this comparison quantitatively, but a growing number of counterterms may offset this advantage. \acknowledgements We acknowledge useful discussions with Thomas Papenbrock and Wael Elkamhawy. JB thanks the University of Tennessee, Knoxville and the Joint Institute for Nuclear Physics and Applications for their hospitality and partial support. This work has been supported by the Deutsche Forschungsgemeinschaft under grant SFB 1245, by the BMBF under grant No. 05P15RDFN1, by the Office of Nuclear Physics, U.S.~Department of Energy under Contract No. DE-AC05-00OR22725, and by the National Science Foundation under Grant No. PHY-1555030.
\section{Introduction} Although we deal in the present paper with real equations, the results obtained can be extended to the complex domain following the approaches of \cite{Mi}. Solvable problems of non-relativistic quantum mechanics have always attracted much attention \cite{Ju,Gd}. The analytical methods to solve the Schroedinger equation are very well known \cite{BF}. A further remarkable development in solving the Schroedinger equation was the introduction of the concept of shape invariance \cite{Gd}. Many of the potentials related by supersymmetry \cite{Ju, Co, Ra} were found to have similar shapes (i.e.\ to depend on the coordinate in a similar way); only the parameters appearing in them were different. Although the number of potentials satisfying the shape invariance condition is limited, it turned out that the energy spectrum and the wavefunctions can be determined by elementary calculations in this case. \section{Schroedinger equation} In the present paper we consider the Schroedinger equation in one dimension, setting $\hbar=2m=1$: \begin{equation} -{d^2 \over dx^2}\psi_n(x)+ (V(x)-E_n) \psi_n(x)=0 . \label{e1} \end{equation} Then the function \begin{equation} W_n(x)=-{\psi_n'(x) \over \psi_n(x)}, \label{e2} \end{equation} where the prime denotes differentiation with respect to $x$, satisfies the corresponding Riccati equation \begin{equation} W'_n(x)-W_n^2(x)=E_n-V(x). \label{e3} \end{equation} Assuming that the function $W_0(x)$ has a zero inside the interval $I$ and \begin{equation} W_0'(x)>0 \quad \forall x \in I \subset \mathbb{R}, \label{e4} \end{equation} which is associated with the normalizability of the basic function $\psi_0$, we get \begin{equation} W_0'(x)=F(W_0), \label{e5} \end{equation} where $F$ is an arbitrary function consistent with eq.\eqref{e4}. The last equation follows from the invertibility of the function $W_0(x)$ on the interval $I$.
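The relation between eqs.\eqref{e2} and \eqref{e3} can be illustrated with sympy for the harmonic oscillator ground state (a check we add for illustration; it is not part of the original derivation):

```python
# Illustration: for the oscillator ground state psi0 = exp(-x^2/2),
# with hbar = 2m = 1, V = x^2 and E0 = 1, the logarithmic derivative
# W = -psi0'/psi0 satisfies the Riccati equation W' - W^2 = E0 - V.
from sympy import symbols, exp, diff, simplify

x = symbols("x")
psi0 = exp(-x**2 / 2)
W = -diff(psi0, x) / psi0          # here W = x
V, E0 = x**2, 1
assert simplify(diff(W, x) - W**2 - (E0 - V)) == 0
```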
Taking eq.\eqref{e5} into account and comparing it with eq.\eqref{e3} we get the following result: \begin{equation} W_0'(x)=W_0^2+f(W_0), \label{e6} \end{equation} where \begin{equation} E_0-V(x)=f(W_0). \label{e7} \end{equation} Now we can express the potential $V(x)$ in terms of $W_0$ and use eq.\eqref{e6} to generate potentials by choosing $f(W_0)$. The simplest and most obvious choice seems to be a second-order polynomial \begin{equation} W_0'(x)=AW_0^2+BW_0+C, \label{e8} \end{equation} where $A, B, C$ are parameters. This is a first-order differential equation and can be solved in a straightforward way. The solution of eq.\eqref{e8} has the form \begin{equation} W_0(x)=-{B \over 2A}+{\sqrt{-B^2+4AC} \over 2A}\tan\left({1\over 2}\sqrt{-B^2+4AC}(x-x_0)\right), \label{e9} \end{equation} where \begin{equation} x_0-{\pi \over \sqrt{-B^2+4AC}}\leq x \leq x_0+{\pi \over \sqrt{-B^2+4AC}}. \label{e10} \end{equation} Thus \begin{equation} \psi_0(x)=e^{B(x-x_0)\over{2A}} \left(\cos\left({1\over 2}\sqrt{-B^2+4AC}(x-x_0)\right)\right)^{1\over A} \label{e11a} \end{equation} is the unnormalized form of the ground state. \section{Cascade of equations} In order to have the same potential function in the Riccati equation for $n=1$, we introduce the expression for $W_1$: \begin{equation} W_1=W_0-{a_1\over b_1W_0-c_1}. \label{e11} \end{equation} Now let us turn our attention to the explicit determination of the coefficients $a_1, b_1, c_1$ in terms of $A, B, C$. From eq.\eqref{e3} and eq.\eqref{e8} we get \begin{equation} a_1=(A+2)C-{(A+2)^2B^2\over 4(A+1)^2}, \label{e12} \end{equation} \begin{equation} b_1=A+2, \label{e13} \end{equation} \begin{equation} c_1=-{(A+2)B \over 2(A+1)}.
\label{e14} \end{equation} Straightforward calculations lead us to the form of the first excited state wavefunction \begin{equation} \psi_1(x)=e^{\alpha_{01} x} (\cos\theta(x-x_0))^{\gamma_{01}} (\alpha_{1} \cos\theta(x-x_0)+\beta_{1} \sin\theta(x-x_0))^{\gamma_{1}}, \label{e16a} \end{equation} where all coefficients, denoted by Greek letters, depend on the parameters $A, B, C$. Although this relationship is rather complex, it is easy to obtain by use of equations \eqref{e2}, \eqref{e9} and \eqref{e11}. The considerations presented above can be generalized if we take the explicit form of the function $W_n$ in terms of $W_0$: \begin{equation} W_n=W_0-\cfrac{a_n}{b_nW_0-\cfrac{c_n}{b_{n-1}W_0-\cdots}}. \label{e15} \end{equation} This function preserves the form of the equation \begin{equation} W'_n(x)-W_n^2(x)= W'_0(x)-W_0^2(x)+ E_n-E_0 \label{e16b} \end{equation} for suitable values of the coefficients, which are involved in a system of non-linear equations (too complicated to be presented here). We should select appropriate values of the parameters $A, B, C$ to simplify the calculations; this will be done in the next section. The equations \eqref{e2}, \eqref{e3} and \eqref{e8} enable us to obtain all wavefunctions $\psi_n$ and energies $E_n$. It is easy to prove \cite{Ra} that the function $W_0$ fulfils the shape invariance condition, so this potential family, resulting from eq.\eqref{e8}, is an example of shape-invariant solvable potentials. Using eq.\eqref{e15} the wavefunctions can be written, without normalization, as \begin{equation} \psi_n(x)=e^{\alpha_{0n} x} (\cos\theta(x-x_0))^{\gamma_{0n}}\prod_{i=1}^{n} (\alpha_{i} \cos\theta(x-x_0)+\beta_{i} \sin\theta(x-x_0))^{\gamma_{i}}, \label{e16} \end{equation} where, as in the previous case, all coefficients denoted by Greek letters depend on the parameters $A, B, C$.
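The closed form \eqref{e9} can also be checked numerically. The following sketch (with arbitrary illustrative parameter values and a central-difference derivative, our own illustration) confirms that it satisfies eq.\eqref{e8}:

```python
# Numerical check that W0 from eq. (9) solves W0' = A*W0^2 + B*W0 + C.
# The parameter values below are illustrative; 4AC - B^2 > 0 is required.
import numpy as np

A, B, C, x0 = 0.5, 0.3, 1.0, 0.0
D = np.sqrt(4 * A * C - B**2)

def W0(x):
    return -B / (2 * A) + D / (2 * A) * np.tan(0.5 * D * (x - x0))

# Central-difference derivative on a grid well inside the domain (10).
x = np.linspace(-0.5, 0.5, 2001)
h = x[1] - x[0]
dW = (W0(x + h) - W0(x - h)) / (2 * h)
rhs = A * W0(x)**2 + B * W0(x) + C
assert np.max(np.abs(dW - rhs)) < 1e-4
```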
\section{The classic potentials} Equation \eqref{e8} offers a convenient way to link this simple method with the well-known solutions of the Schroedinger equation. For instance, choosing $B=0, C=1$ we get the following results: \begin{equation} W_0'=AW_0^2+1 \label{e17} \end{equation} and the first three wavefunctions \begin{equation} \psi_0(x)=(\cos{(\sqrt{A}x)})^{1\over A}, \label{e18} \end{equation} \begin{equation} \psi_1(x)={(\cos{(\sqrt{A}x)})^{1\over A}\sin{(\sqrt{A}x)}\over \sqrt{A}}, \label{e19} \end{equation} \begin{equation} \psi_2(x)={(\cos{(\sqrt{A}x)})^{1\over A}(-1+(1+A)\cos{(2\sqrt{A}x)})\over A}, \label{e20} \end{equation} which are orthogonal on the domain $I$ (eq.\eqref{e10}) and tend to the very well-known solutions of the quantum oscillator, $\psi_n(x)\to H_n(x)e^{-{x^2\over 2}}$, for $A\to0$. It can be seen from the figures below that the wavefunctions have the same characteristic shapes but differ in their domains. \pagebreak \begin{figure}[h] \begin{center} \subfigure[The first three wavefunctions for $A>0 \quad (A=0.9)$.]{ \resizebox*{5cm}{!}{\includegraphics{osc1.eps}}}\hspace{5pt} \subfigure[The first three wavefunctions of the quantum harmonic oscillator $(A\to 0)$.]{ \resizebox*{5cm}{!}{\includegraphics{osc2.eps}}} \caption{Example of the wavefunctions with different values of the parameter $A$, where $x$ is on the horizontal axis and $\psi_n(x)$ on the vertical one.} \label{figure1} \end{center} \end{figure} Based on the procedure described above, we are able to obtain solutions of the Schroedinger equation with the radial Coulomb potential (for angular momentum equal to zero).
In this case \begin{equation} W_0'=AW_0^2-BW_0+{B^2\over 4}, \label{e21} \end{equation} whose basic solution is \begin{equation} \psi_0(x)={(\sin({{1\over 2}\sqrt{A-1}}Bx))^{1\over A}\over \sqrt{A-1}}e^{-{B\over 2A}x}, \label{e22} \end{equation} which tends to the radial part of the ground state eigenfunction of the Schroedinger equation for a one-electron atom, $\psi_0\to{1\over 2 }Bxe^{-{1\over 2}Bx}$, for $A\to1$. With the help of eq.\eqref{e15} we are able to obtain the wavefunctions of the excited states. Another example of eq.\eqref{e8} which leads us to a very well-known solution is \begin{equation} W_0'=-AW_0^2-W_0+C, \label{e23} \end{equation} where the parameters satisfy $A>0$ and $C>0$. Thus \begin{equation} W_0(x)=-{1\over 2A}+{\sqrt{1+4AC}\over2A}\coth\left({1\over 2}\sqrt{1+4AC}(x-x_0)\right), \label{e24} \end{equation} where the integration constant \begin{equation} x_0={1\over \sqrt{1+4AC}}(\ln{A}+\imath \pi) \label{e25} \end{equation} is a complex number. In this case we have \begin{equation} W_0(x)=-{1\over 2A}+{\sqrt{1+4AC}\over2A}\tanh\left({1\over 2}(\sqrt{1+4AC}x-\ln{A})\right) \label{e26} \end{equation} and \begin{equation} \psi_0(x)=e^{x\over 2A}\cosh\left({1\over 2}(\sqrt{1+4AC}x-\ln{A})\right)^{-{1\over A}} \label{e27} \end{equation} is the basic wavefunction without the normalization constant. From eq.\eqref{e26} we have $W_0=C-e^{-x}$ for $A\to0$, which is the standard expression for the Morse potential \cite{Co}. All potentials resulting from eq.\eqref{e8} have a trigonometric form, i.e. they are expressed in terms of the tangent function. If we wish to obtain potentials interesting from the physical point of view, like the Coulomb potential or the Morse potential, we should follow the procedure outlined above or choose proper initial values of the parameters in eq.\eqref{e8}. It should be emphasized that every solution of the Schroedinger equation related to orthogonal polynomials can be obtained by this method.
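The Morse limit above can be verified numerically as well. The sketch below (with illustrative values $A=10^{-4}$, $C=2$, our own check) confirms that $W_0$ from eq.\eqref{e26} approaches $C-e^{-x}$ for small $A$:

```python
# Numerical check of the Morse limit: W0 from eq. (26) tends to
# C - exp(-x) as A -> 0+.  A and C below are illustrative values.
import numpy as np

A, C = 1e-4, 2.0
s = np.sqrt(1 + 4 * A * C)

def W0(x):
    return -1 / (2 * A) + s / (2 * A) * np.tanh(0.5 * (s * x - np.log(A)))

x = np.linspace(0.0, 3.0, 301)
# Deviation from the Morse-limit superpotential is O(A) on this grid.
assert np.max(np.abs(W0(x) - (C - np.exp(-x)))) < 2e-3
```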
\section{The new Hamiltonian} The results can be generalized to the form for which this method works \cite{RS}. We take eq.\eqref{e8} in the form \begin{equation} W_0'=AW_0^2+{P_{l+1}(W_0)\over Q_{l}(W_0)}=R_{l+2,l}(W_0), \label{e28} \end{equation} where $P_{l+1}(W_0)$ is a polynomial in $W_0$ of degree no greater than $l+1$, $Q_{l}(W_0)$ is a polynomial in $W_0$ of degree equal to $l$, and $R_{l+2,l}(W_0)$ is a rational function whose numerator and denominator are polynomials of degree $l+2$ and $l$, respectively. Substituting \begin{equation} W_0(x)=R_{l+1,l}(\tan(\phi \cdot(x-x_0))) \label{e29} \end{equation} into eq.\eqref{e28} and adjusting the indices of the sums to get the same powers of $\tan(\phi \cdot(x-x_0))$, we get the explicit form of $W_0$. It should be emphasized that the condition of eq.\eqref{e4} must be satisfied. The procedure outlined above can be applied to the function \begin{equation} W_n(x)=R_{l+n+1,l+n}(\tan(\phi \cdot(x-x_0))) \label{e30} \end{equation} and thus the excited state wavefunctions $\psi_n(x)$ can be obtained. Let us consider a simple equation, an example of the generalized eq.\eqref{e28}, \begin{equation} W_0'=W_0^2+{3W_0-1\over W_0-3}={(W_0-1)^3\over W_0-3}, \label{e31} \end{equation} where all coefficients have been chosen to simplify the calculations. Hence \begin{equation} W_0(x)={2\sqrt{x+{1\over 4}}-3\over 2\sqrt{x+{1\over 4}}-1}, \label{e32}\end{equation} which gives the corresponding eigenvalue $E_0=-1$, and a completely new potential is discovered: \begin{equation} V(x)=-{2\over \sqrt{x+{1\over 4}}}\quad\textrm{for }x\geq 0. \label{e33} \end{equation} Thus the ground state function, without the normalization constant, has the form \begin{equation} \psi_0(x)=e^{-x+ 2\sqrt{x+{1\over 4}}}\left( 2\sqrt{x+{1\over 4}}-1\right).
\label{e34} \end{equation} Substituting \begin{equation} W_1={P_2(W_0)\over Q_1(W_0)} \label{e35} \end{equation} into eq.\eqref{e3} and taking into account eq.\eqref{e33}, we obtain the unnormalized wavefunction \begin{equation} \psi_1(x)=e^{-0.79x+ 2.52\sqrt{x+{1\over 4}}}\left( 2\sqrt{x+{1\over 4}}-1\right)\left( 2\sqrt{x+{1\over 4}}-3.74\right), \label{e36} \end{equation} where all decimal numbers are approximate and $E_1\approx-0.63$. It is easy to show that this latter potential does not fulfil the shape invariance condition \cite{Ra}, so this new potential family is an example of non-shape-invariant solvable potentials. Let us now discuss the question of the explicit form of the Schroedinger equation. Treating eq.\eqref{e28} not as a condition but rather as the transformed Schroedinger equation and substituting eq.\eqref{e2} (for $n=0$) into eq.\eqref{e28}, we get \begin{equation} -\psi_0''(x;\alpha)\psi_0(x;\alpha)-\alpha(\psi_0'(x;\alpha))^2=\left[ E_0-V\left(-{\psi_0'(x;\alpha)\over \psi_0(x;\alpha)}\right)\right]\psi_0^2(x;\alpha), \label{e37} \end{equation} where the parameter $\alpha$ is usually related to the parameter $A$ in eq.\eqref{e28} and the potential $V$ has the form \begin{equation} V\left(-{\psi_0'(x;\alpha)\over \psi_0(x;\alpha)}\right)=R_{l+2,l}\left(-{\psi_0'(x;\alpha)\over \psi_0(x;\alpha)}\right). \label{e38} \end{equation} As we can see, eq.\eqref{e37} is a nonlinear differential equation. Taking into account the previous considerations regarding the quantum oscillator, the Coulomb potential and the Morse potential, we get the following form of eq.\eqref{e37} in the $\alpha \to0$ limit: \begin{equation} -\psi_0''(x;0)=\left[ E_0-V\left(-{\psi_0'(x;0)\over \psi_0(x;0)}\right)\right]\psi_0(x;0), \label{e39} \end{equation} which is the familiar form of the Schroedinger equation, where the nonlinearity is hidden in the form of the potential function.
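A finite-difference sketch (our own check, with an illustrative grid and step size) confirms that $\psi_0$ from eq.\eqref{e34} indeed solves the Schroedinger equation with the potential \eqref{e33} and $E_0=-1$:

```python
# Finite-difference check that psi0 from eq. (34) solves
# -psi0'' + V(x) psi0 = E0 psi0 with V(x) = -2/sqrt(x + 1/4), E0 = -1.
import numpy as np

def psi0(x):
    s = np.sqrt(x + 0.25)
    return np.exp(-x + 2 * s) * (2 * s - 1)

x = np.linspace(0.5, 5.0, 451)   # grid away from x = 0 (illustrative)
h = 1e-4
# Central second difference, O(h^2) accurate.
d2 = (psi0(x + h) - 2 * psi0(x) + psi0(x - h)) / h**2
V = -2 / np.sqrt(x + 0.25)
E0 = -1.0
residual = -d2 + V * psi0(x) - E0 * psi0(x)
assert np.max(np.abs(residual)) < 1e-4
```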
\section{Conclusions} A new method of obtaining solvable potentials has been reviewed in this paper. The main role in this method is played by the Riccati equation, which results from the transformed one-dimensional stationary Schroedinger equation. It allows us to emphasize the importance of the function $W_0$, known in the literature as the ``superpotential'' \cite{Ju,Co}. Using its features we can show that the potential is not an arbitrary function of $x$; rather, its form depends on the function $W_0$. As a consequence, we can find not only the very well-known solutions of the Schroedinger equation but also a new class of solvable potentials. These considerations may help us to identify new classes of solvable potentials and may serve as an aid for further investigations concerning the relationship between the solvability of the Schroedinger equation and the form of the potential.
\section{Introduction} A graph is a well-known data structure that can represent a wealth of relationships between objects. Graph processing has great potential to solve many real-world problems, e.g., path navigation, social network analysis and financial fraud detection. As graph sizes keep growing, how to store and further process these large-scale graphs efficiently has become a critical problem. A typical solution for large-scale graph processing is to divide the entire graph into many subgraphs that are then distributed onto different machines for the computation~\cite{malewicz2010pregel,gonzalez2014graphx}. Though these distributed systems (with more computation and storage resources) have made impressive progress~\cite{gonzalez2012powergraph,zhu2016gemini}, people are often more inclined to use a single-machine processing system, which is easier to manage and understand~\cite{kyrola2012graphchi}. A wide spectrum of graph systems have emerged for processing large-scale graphs on a single machine~\cite{shun2013ligra,nguyen2013lightweight}, particularly with GPU acceleration because of its powerful computing capacity~\cite{wang2016gunrock,zhong2014medusa,khorasani2014cusha}. With an elegant advance-filter programming model, Gunrock~\cite{wang2016gunrock} naturally integrates many well-optimized graph analysis techniques. However, none of these graph systems is able to handle large-scale graphs that cannot fit into the GPU global memory. In an effort to cope with this problem, researchers store the graph data in the large host memory to assist the GPU computing. With the vertex data stored in the GPU memory, GTS~\cite{kim2016gts} streams the subgraph data in an asynchronous manner. GraphReduce~\cite{sengupta2015graphreduce} only transfers the subgraphs that have at least one active vertex or edge to the GPU.
Unfortunately, due to the relatively low interconnect bandwidth between the host and the GPU (e.g., $\sim$12GB/s for PCI-Express 3.0), the limitation of the GPU accelerator under the heterogeneous architecture becomes more serious. Our motivating study (discussed in Section 2.2) also shows that 75\% of the GPU computing capability can be under-utilized even in the presence of existing state-of-the-art GPU-specific optimizations. It thus becomes increasingly important and necessary to scale up the performance of heterogeneous graph systems for large-scale graph processing. In this paper, we focus on studying whether and how we can provide scale-up efficiency of heterogeneous graph systems under a commodity heterogeneous architecture. Recently, a number of graph systems have attempted to improve the performance of heterogeneous graph processing. Graphie~\cite{han2017graphie} proposes two renaming algorithms to improve the memory access efficiency, and keeps track of the active partitions to avoid moving the inactive partitions to the GPU. Garaph~\cite{ma2017garaph} reduces the transmission amount by performing a part of the sparse vertex updating on the host side. Nevertheless, for many real-world large graphs that can be dense and always active, these recent advances may still involve a non-trivial amount of data transmission, making them limited for practical use. In this paper, we present Seraph, a novel heterogeneous graph system, which can significantly scale up the performance of out-of-GPU-memory graph processing. The key insight of this work lies in the fact that, for many graph algorithms (e.g., BFS and SSSP), each subgraph in one transmission iteration may carry much information that is useful for the convergence of the next subgraph iteration.
Unlike most existing research that processes each subgraph only once and then overwrites its updates~\cite{han2017graphie, ma2017garaph}, we propose to iterate each subgraph multiple times so as to fully exploit the value of each subgraph, avoiding redundant data transmission and unnecessary iterations. Guided by this principle, we can leverage the powerful yet under-utilized GPU processing capability to accelerate the multi-time subgraph iteration, enabling scale-up efficiency for heterogeneous graph systems. In an effort to break the update limitation within a subgraph, we propose to pipeline the iteration of subgraphs. Compared with CLIP~\cite{ai2017squeezing}, which iterates on a fixed subgraph loaded from disk, our pipelined subgraph iteration is novel in maximizing the scope of subgraph information propagation. Further, based on the existing highly optimized GPU computing model (e.g., the pull execution model), we observe that a large number of vertex computations do not contribute to a valid vertex update, wasting a large amount of GPU processing capacity. We propose predictive vertex updating, aiming at efficiently identifying these vertices and eliminating the unnecessary computations on them for better supporting pipelined subgraph iteration. We compare Seraph with two state-of-the-art heterogeneous graph systems. Our results on a wide variety of real-world graphs demonstrate that Seraph outperforms Graphie~\cite{han2017graphie} and Garaph~\cite{ma2017garaph} by 5.42x and 3.05x, respectively. In particular, Seraph scales significantly better than existing heterogeneous graph systems. In addition, we compare Seraph with other large-scale graph processing solutions, revealing that Seraph can also achieve impressive performance in comparison to state-of-the-art CPU-based (i.e., Ligra~\cite{shun2013ligra}) and distributed graph processing systems (i.e., Gemini~\cite{zhu2016gemini}).
The rest of this paper is organized as follows. Section 2 gives the background and motivation. We present pipelined iteration in Section 3. Section 4 elaborates predictive vertex updating. Section 5 shows the results. We survey the related work in Section 6. Section 7 concludes the work. \begin{figure} \includegraphics[scale=0.6]{fig/CPU_GPU_ARC.pdf} \vspace{-1em} \caption{GPU-accelerated Heterogeneous Architecture} \label{fig:Heterogeneous Architecture} \end{figure} \section{Background and Motivation} In this section, we first give a brief introduction to the GPU-accelerated heterogeneous architecture, followed by a motivating study regarding the inefficient data transmission of existing heterogeneous graph systems for large-scale graph processing, which finally motivates our approach. \subsection{Heterogeneous Architecture} Various heterogeneous architectural designs have emerged. Some are dedicated to performance improvement \cite{kayiran2014managing}, some to energy reduction \cite{wang2014co}, and some take both into consideration \cite{munger2016carrizo}. This paper focuses on the GPU-accelerated heterogeneous architecture. Figure~\ref{fig:Heterogeneous Architecture} illustrates a typical GPU-accelerated heterogeneous architecture, which integrates the hardware advantages of both the host side (with larger host memory) and the GPU accelerator (with stronger computing ability). A GPU accelerator generally consists of multiple streaming multiprocessors (SMXs), each of which includes hundreds of cores. In comparison to the high-speed internal bandwidth (e.g., $\sim$700GB/s for NVIDIA Tesla P100) of GPU cores accessing global memory, GPUs are generally connected to the host side via a relatively slow interface. For instance, the transmission bandwidth between the CPU and the GPU via a PCI Express 3.0 lane connection can be as low as $\sim$12GB/s in practice~\cite{ben2017groute}.
This significant gap may severely suppress the performance potential of the heterogeneous architecture if data is frequently transferred~\cite{kim2016gts,han2017graphie}. Though there are a number of transmission interfaces that can provide higher interconnect bandwidth (e.g., Intel QuickPath Interconnect with 25.6GB/s, NVLink high-speed interconnect with 160GB/s), this paper focuses on the PCIe interconnect since it is more common in the current commodity market. \begin{figure}[t] \includegraphics[scale=0.55]{fig/soc_980.pdf} \vspace{-1em} \caption{Performance characterization using Graphie's subgraph iteration~\cite{han2017graphie} with the varying GPU SMXs} \vspace{-1em} \label{fig:motivation} \end{figure} \subsection{Inefficiency of Existing Heterogeneous Graph Systems: A Motivating Study} In an effort to leverage the hardware advantages of the heterogeneous architecture for large-scale graph processing, existing heterogeneous graph systems generally divide the entire graph data into subgraphs~\cite{han2017graphie, ma2017garaph,ai2017squeezing}. The CPU offers the graph data to the GPU in the form of subgraphs. Once each subgraph is consumed, the GPU requests the next subgraph, which is transferred from the host memory to the GPU global memory. As a representative state-of-the-art heterogeneous graph processing system, Graphie~\cite{han2017graphie} adds two highlighted optimizations on top of the basic subgraph iteration above. First, it ensures that only those subgraphs that have at least one active edge or vertex are transferred. Second, it uses an asynchronous runtime to reuse the transferred subgraph in the next iteration. As a consequence, Graphie can partly reduce the amount of data transmission.
Nevertheless, the potential of the GPU-accelerated heterogeneous architecture can still be limited since this approach may be inefficient for many graph algorithms (e.g., CC) where almost all subgraphs can be active in the first few iterations. As a consequence, the majority of subgraphs still need to be transferred to the GPU within each iteration. In Figure~\ref{fig:motivation}, we investigate the performance characterization of Graphie's cached subgraph iteration on three well-known graph algorithms as the number of available GPU SMXs increases. More details regarding the experimental settings can be found in Section~\ref{sec:evaluation:setup}. Offering more SMXs is of little help for improving the efficiency of large-scale graph processing. As is known, current mainstream GPU accelerators usually have far more than 4 SMXs. There remains a significant gap between the low data transmission efficiency and the high GPU computing capability for large-scale graph processing. Unfortunately, existing heterogeneous graph systems can hardly respond to this challenge to provide scale-up efficiency. Under the premise of a fixed transmission bandwidth, one viable approach to improving the CPU-GPU transmission efficiency is to increase the bandwidth utilization with well-organized subgraph data. Nevertheless, at least two significant defects remain for this approach. First, on account of the random memory accesses, graph processing usually exhibits poor data locality~\cite{sengupta2015graphreduce, maass2017mosaic}. It is extremely difficult, if not impossible, to prepare a high-quality subgraph that can be fully used in each iteration.
Second, even if we could identify such a high-quality subgraph, it is also difficult to gather this data in a cost-efficient manner at runtime, since graph re-organization is a well-known time-consuming process that may take more time than graph processing for many graph datasets~\cite{zhu2015gridgraph, malicevic2017everything}. \subsection{Our Observations} This work aims at reducing the impact of slow subgraph transmission on high-performance graph computing. Instead of expensively preparing high-quality subgraphs, we have the key insight that the transmission efficiency of heterogeneous graph systems can be greatly improved by making full use of the value of each subgraph, backed up by our observations as follows. \underline{\bf\em Observation $1$}:\quad {\em Each subgraph in one iteration has much useful information that serves the subgraph processing in the next iteration.} Each subgraph is structured with many vertices that are connected via edges. It has been observed that many graph algorithms (e.g., BFS and SSSP) are incremental iterative methods that can greatly benefit from iterating each subgraph multiple times~\cite{ai2017squeezing}. More specifically, in one iteration, the information of a given vertex can only be propagated to its neighboring vertices, which falls short of informing all the vertices that may be involved in graph processing. In order to handle all these vertices, more iterations are needed, causing a large amount of redundant graph data to be loaded repeatedly. This inspires us to iterate each subgraph multiple times to fully exploit its potential value for improving the transmission efficiency. \underline{\bf\em Observation $2$}:\quad {\em Multi-time subgraph iteration enables improving the efficiency of heterogeneous graph processing by exploiting the GPU processing capability.} Multi-time subgraph iteration enables the propagation and sharing of information among the subgraphs across multiple hops.
This also makes it possible to reduce a task that originally needs multiple iterations into one iteration. Thus, a new question is how to efficiently iterate each subgraph multiple times, which largely depends on the GPU processing capability. That is, we can further improve the performance of heterogeneous graph systems by exploiting the GPU processing capability. With the above-discussed observations, the transmission inefficiency problem of heterogeneous graph systems is transformed into a problem of enhancing GPU-based graph processing, for which a wide range of highly optimized techniques can be directly leveraged~\cite{wang2016gunrock}. \section{Pipelined SubGraph Iteration} Guided by Observation $1$, we present a pipelined multi-time subgraph iteration, which is designed to make sure the information of each subgraph iteration can be propagated to a larger scope so that the value of each subgraph can be fully exploited. \subsection{Preparation} We start by introducing the requisite preliminaries for the pipelined subgraph iteration. {\bf Subgraph Organization} Unlike the edge list organization used in prior work~\cite{han2017graphie}, Seraph uses a more compact graph structure that minimizes the transmission data demand as much as possible. Specifically, Seraph only needs to transfer the Compressed Sparse Column (CSC) structure to the GPU. In order to process large-scale graphs, the CSC structure is cut into much smaller CSC pages, each of which includes a set of contiguous vertices with the corresponding incoming edges. Compared to the page structure in~\cite{ma2017garaph}, our data structure is more compact since it omits the destination vertex index array. {\bf Heterogeneous Execution Model} Seraph processes large-scale graphs with a heterogeneous execution model. Similar to prior work~\cite{shun2013ligra,zhu2016gemini,liu2015enterprise,ma2017garaph}, Seraph also adopts a density-aware model, which is efficient with less time consumption.
Our heterogeneous execution can be described as follows. 1) {In the sparse stage}, we compress the active frontier and perform the sparse updating on the CPU with the push-based execution model, so that the GPU focuses on the heavy subgraph iteration. 2) {In the dense stage}, we use the pull-based execution model without data contention overhead, and focus on breaking the transmission bound and improving the computing efficiency. \subsection{Pipelined Subgraph Scheduling} The key idea behind our multi-time subgraph iteration is to pipeline the subgraph scheduling for the purpose of maximizing the information propagation of each subgraph to other subgraphs. \begin{figure}[t] \centering \includegraphics[scale=0.35]{fig/pipeline_ldr.pdf} \vspace{-1em} \caption{Limited information propagation of subgraph iteration with loaded data reentry in CLIP~\cite{ai2017squeezing}} \label{fig:reentry} \end{figure} Streaming topology is widely used in network optimization~\cite{anyseexfliao}, and also in storage hierarchy optimization. CLIP~\cite{ai2017squeezing} presents a disk-based ``loaded data reentry ($ldr$)'' streaming topology to squeeze out the value of loaded data. It has the following features: 1) it processes each subgraph more than once to reduce the total number of iterations; and 2) subgraph data is loaded in sequential order to maximize disk I/O bandwidth. Nevertheless, loaded data reentry in this way is prone to causing much redundant computation. The underlying reason is that the information cannot propagate among multiple subgraphs for timely interaction, which blocks the convergence of the entire graph. Figure~\ref{fig:reentry} shows an example of CLIP's subgraph iteration with BFS on the given graph based on the pull model. Due to similarity, we only show a part of the entire graph.
With the loaded data reentry optimization, we can find that BFS shows no sign of convergence, whereas simply processing the subgraphs one by one converges quickly with the same number of scheduling steps. This is because subgraph $A$ and subgraph $B$ cannot share their updates within the multiple reentries. More specifically, the update of $A$ can notify $B$ after $A$ has finished ($1\rightarrow$$4$), but the update of $B$ cannot be fed back to subgraph $A$ within the multi-reentry ($4\rightarrow$$2$), since subgraph $A$ has been discarded from this iteration. That is, graph partitioning destroys the substructure among subgraphs, further blocking information propagation between multiple subgraphs. \begin{figure} \includegraphics[scale=0.3]{fig/pipeline_psi.pdf} \vspace{-2em} \caption{Comparison of the workflow of CLIP's loaded data reentry and Seraph's pipelined subgraph iteration} \vspace{-1em} \label{fig:sub:schedule} \end{figure} {\bf Pipelined Subgraph Iteration} We therefore present a novel subgraph iteration in a pipelined fashion for maximizing the propagation scope and rebuilding the substructure among subgraphs. Our pipelined subgraph iteration method is shown in Figure~\ref{fig:sub:schedule}. To facilitate the description, suppose the GPU memory can hold at most four subgraphs, and assume that the GPU can process three subgraphs and transfer one subgraph at the same time. Then kernel execution on subgraphs A, B, C can be overlapped with the transmission of subgraph D. We can treat subgraphs A, B and C as a ``super subgraph'' since we process the vertices and edges in A, B, C concurrently. When the kernel finishes, subgraph D has just been copied from the CPU; we then launch a kernel to process subgraphs B, C, D and transfer subgraph E, and so on. Afterwards, the space occupied by subgraph A is overwritten.
For comparison, a double-buffer based schedule workflow, which also fully overlaps data transfer with kernel execution, can be viewed in Figure~\ref{fig:sub:schedule}. Similarly, this method also loads the graph once but processes it more than once; it processes subgraph $AB$ three times and transfers subgraph $CD$ at the same time. But our approach is more efficient for two reasons. First, we can offer a larger scope for information propagation. As Figure~\ref{fig:sub:schedule} shows, subgraph D can even reconstruct the substructure with B and E, enlarging the information propagation scope to at most five subgraphs in the example, compared with two in the $ldr$ used in CLIP~\cite{ai2017squeezing}. Second, pipelined subgraph iteration further allows us to make a fine-grained scheduling, which we introduce as follows. The example mentioned above assumed that the GPU can process three subgraphs and transfer one subgraph at the same time. However, this hardly reflects most real-world situations since the execution time constantly changes, and there are many idle slots (transmission waits for computation or computation waits for transmission). In order to fill the idle slots between subgraph transmission and computation, Seraph minimizes this effect with more fine-grained scheduling. Specifically, if the transmission thread finds that the transmission task has finished but the computation has not finished yet, it will check whether the kernels executed on the first few subgraphs (the subgraphs that will be discarded from the execution set immediately) have finished. If so, the thread will transfer the next subgraph to the GPU, overwriting the space occupied by the finished kernel. On the contrary, if the computation has finished but the transmission has not, Seraph will select one subgraph from the finished kernel set to reenter once. This method mainly benefits from the much smaller subgraphs used in Seraph, and offers an incremental model to fill the idle slots as much as possible.
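The sliding-window ordering described above can be sketched as follows (a simplified, sequential Python sketch of the schedule order only; the function and variable names are illustrative and not from the Seraph implementation, which overlaps GPU kernels with asynchronous copies):

```python
# Sketch of the pipelined subgraph schedule: a sliding window of
# `window` resident subgraphs is processed while the next subgraph is
# (conceptually) transferred.  Names and structure are illustrative.
def pipelined_schedule(num_subgraphs, window=3, rounds=1):
    """Yield (resident_window, incoming_subgraph) pairs in schedule order."""
    order = list(range(num_subgraphs)) * rounds
    for i in range(len(order) - window):
        resident = order[i:i + window]      # kernels run on these subgraphs
        incoming = order[i + window]        # overlapped host-to-GPU transfer
        yield resident, incoming

for resident, incoming in pipelined_schedule(6, window=3):
    # First step: process [0, 1, 2] while transferring 3, matching the
    # A,B,C / D example in the text.
    print(f"process {resident} while transferring {incoming}")
```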
{\bf Remark}\quad With subgraph iteration pipelined, we are able to use more information within each iteration, leading to fewer iterations and faster convergence. A vertex update in one subgraph can spread to a large scope rebuilt from multiple smaller subgraphs, making the information spreading more thorough. \begin{figure} \centering \includegraphics[scale=0.3]{fig/predictive_observation.pdf} \vspace{-1em} \caption{A connected component algorithm on the given graph. We list the labels of vertex 2 and vertex 6 after each iteration} \vspace{-1em} \label{fig:pvu:cc} \end{figure} \section{Predictive Vertex Updating} Guided by Observation 2, this section mainly presents predictive vertex updating. We first review the characteristics of vertex updating with pipelined subgraph iteration. Based on this analysis, we further present two efficient solutions of vertex updating to enhance the GPU processing capability. \subsection{Characteristics Analysis of Vertex Centric Updating} In this work, we follow the pull-based execution model on the GPU, which involves few data races, so that the data-parallel potential of the GPU can be fully exploited. However, almost all existing systems~\cite{shun2013ligra, wang2016gunrock} with density-aware optimization perform the pull update attempt on all vertices. Though a part of the vertices can be updated, many other update attempts fail, leading to a large number of unnecessary computations. Figure~\ref{fig:pvu:cc} shows an example of connected components with a label-propagation-based method on a given graph. The algorithm follows a synchronous execution model. We find that there are two situations of unnecessary vertex updating. 1) The vertices have converged before the iteration (e.g. vertex 5 has converged after the first iteration). 2) The vertices are far from the global minimal label but converged with a local minimal label (e.g.
vertex 6 only receives label 1 at the 5th iteration, carrying label 2 during iterations 1$\sim$4). Although these updates contribute nothing to graph convergence, it is difficult to predict whether an update attempt will fail or succeed in a given iteration. Accurately identifying these unnecessary vertex computations is the key to enhancing the GPU processing capability. In the following, we introduce two efficient yet accurate solutions that predict, for each of the two situations above, whether a vertex value can contribute a valid update in the current iteration. \subsection{Strong Prediction Condition} We find that many vertices can be judged as converged simply from their vertex value. For example, vertex 5 in figure~\ref{fig:pvu:cc} converges right after the first iteration, because it has already obtained the smallest label (1) in the given graph. As the iteration number increases, the vertex value changes monotonically. \begin{table}[t] \centering \caption{Strong convergence conditions for BFS, CC and SSSP. $value[i]$ denotes the vertex value of the respective graph algorithm, e.g., the depth for BFS and the distance for SSSP. $ccsize[v]$ denotes the number of vertices labeled with $v$.} \label{tab:pvu:strong} \vspace{0.2em} \tabcolsep=0.05cm \scriptsize \begin{tabular}{c|c|c} \hline {} & condition & definition \\ \hline $BFS$ & $value[i] \le k$ & $k$ is the iteration count\\ $CC$ & $value[i] \le s$ & $s$ = min\{$v|ccsize[v]^{(k-1)}\neq ccsize[v]^{(k-2)}$\}\\ $SSSP$ & $value[i] \le l$ & $l$ = min\{$value[v]^{(k-1)}|value[v]^{(k-1)}\neq value[v]^{(k-2)}$\}\\ \hline \end{tabular} \end{table} Table~\ref{tab:pvu:strong} lists these strong prediction conditions. We prove each convergence condition separately; note that the conditions hold under asynchronous updating, and therefore naturally hold under the synchronous model as well.
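To make the wasted pull attempts of figure~\ref{fig:pvu:cc} concrete, the following minimal synchronous label-propagation CC (a CPU-side Python sketch, not Seraph's GPU kernel) counts how many per-vertex update attempts actually change a label:

```python
def cc_label_propagation(adj):
    """Synchronous pull-based connected components on an undirected
    graph given as an adjacency dict.  Returns the final labels plus
    counts of useful (label changed) and wasted (failed) attempts."""
    label = {v: v for v in adj}
    useful = wasted = 0
    changed = True
    while changed:
        changed = False
        new = dict(label)            # synchronous model: read old, write new
        for v in adj:
            best = min([label[u] for u in adj[v]] + [label[v]])
            if best < label[v]:      # valid update
                new[v] = best
                useful += 1
                changed = True
            else:                    # failed attempt: pure overhead
                wasted += 1
        label = new
    return label, useful, wasted
```

On a 5-vertex chain, only 10 of 25 attempts are useful; the remaining 15 are exactly the redundant computation that the strong and weak prediction conditions aim to skip.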
For BFS, a vertex $i$ has converged if its value is smaller than the iteration count $k$. This is easy to prove: if there were a shorter unweighted path of length $r$ ($r < k$ and $r < value[i]$) from the source vertex, that path would already have been found within $r$ iterations. Although top-down/bottom-up BFS also benefits from omitting updates to traversed vertices, it does not suit the asynchronous execution model: under much faster asynchronous updating, a vertex may be marked traversed via a longer path without actually having converged. For CC, a vertex has converged once it has been absorbed into a converged component. More precisely, a component's size stops growing once its root (the minimal vertex in the component) has pulled in every reachable vertex; we then call the component converged. The best a vertex can still hope for is to join a component that has not yet converged, whose minimal possible label is $s$, the root of such a component. Hence a vertex whose label is smaller than $s$ must have converged. A special but very useful case is a vertex that has obtained the minimal label of the whole graph, which can be declared converged immediately. For SSSP, we focus on weighted graphs without negative edges. A vertex has converged if it has found a path shorter than the shortest path most recently relaxed in the last iteration. This is easy to prove: a vertex's distance to the source is built on a closer vertex along its shortest path, so it cannot obtain a distance smaller than the closest distance relaxed in the last iteration. The strong prediction method judges convergence directly from the vertex value, regardless of the graph topology.
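The conditions of Table~\ref{tab:pvu:strong} amount to a single threshold comparison per vertex. A sketch (the function name and the `stats` dict are hypothetical; its thresholds $s$ and $l$ are assumed to be tracked by the engine as defined in the table):

```python
def strongly_converged(algo, value, k, stats):
    """Strong-prediction check per Table conditions.
    `value` is the vertex value, `k` the current iteration count,
    `stats` a dict holding the per-iteration thresholds."""
    if algo == "BFS":
        return value <= k            # depth is fixed after k hops
    if algo == "CC":
        return value <= stats["s"]   # label at or below smallest live root
    if algo == "SSSP":
        return value <= stats["l"]   # at or below shortest path relaxed last round
    return False                     # no strong condition known for this algo
```

When the check succeeds, both the computation and the memory accesses for that vertex can be skipped.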
We only need to compare the vertex value against a threshold; whenever the condition holds, both the computation and the memory accesses for that vertex can be skipped. \subsection{Weak Prediction Condition} Although strong prediction can directly eliminate unnecessary vertex updates, in many cases it is difficult to design a strong convergence condition, e.g., for data-driven PageRank (PageRank-delta). We therefore present an alternative weak prediction technique based on the update history. Inspired by branch predictors that use branch history to predict branch outcomes~\cite{hennessy2011computer}, we likewise try to predict the result of a vertex update from its update history. We first looked for update patterns in graph processing by recording the update history of every vertex across various graph datasets and algorithms (note that we only record iterations executed on the GPU side under the asynchronous execution model). For example, an update history of $[1,1,0,1,0]$ indicates that the vertex value changed in iterations 1, 2 and 4, while the update attempts in iterations 3 and 5 failed, i.e., the value did not change in those two iterations. We found an interesting pattern: if a vertex value has changed before but then remains unchanged for two consecutive iterations, the vertex can be concluded converged with high likelihood (91.3\% for SSSP on our datasets, and about 98\% for CC and BFS). We derived this pattern from extensive off-line analysis. \begin{figure} \centering \includegraphics[scale=0.5]{fig/predictive_state.pdf} \vspace{-1em} \caption{State-transition diagram for weakly predictive vertex updating. Vertices with status 2, 3 or 4 are not updated.
A valid update returns the vertex to the initial state 0 along the red arrows; otherwise it follows the black arrows.} \vspace{-1em} \label{fig:pvu:dia} \end{figure} Based on the aforementioned statistical regularities, we present a state-transition-diagram-based solution that guides vertex updating with low overhead; the diagram is designed to match these regularities so that it predicts update results effectively. Figure~\ref{fig:pvu:dia} shows the state-transition diagram used in our system, with six states. A vertex in state 0, 1 or 5 makes an update attempt during the iteration, whereas the update attempts of vertices in state 2, 3 or 4 are canceled. Initially, all vertices start in state 0. After an iteration, a vertex moves to state 5 (dormancy candidate) if its update attempt failed, or returns to state 0 on success. A vertex in state 5 moves to state 3 (dormancy) if the update fails again. This design ensures that every vertex in state 3 has failed at least twice, so we can assume it has converged with high likelihood, matching the statistical regularities above. A vertex in state 3 stays dormant and makes no update attempt for the next two iterations, so both memory accesses and computation for such vertices are canceled. To prevent a vertex from entering a dead state and never attempting an update again, we add state 1, which makes an attempt after the two dormancy iterations; if that attempt fails as well, the vertex moves to state 4 as a punishment. This method integrates naturally into the graph engine.
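The transitions described above can be encoded as small lookup tables. As a sketch (the exact arrows are defined in Figure~\ref{fig:pvu:dia}; the dormant-state ordering below, 3 then 2 then a probe at 1, with the punishment state 4 rejoining at 3, is a plausible reconstruction from the prose and should be read as an assumption):

```python
ACTIVE = {0, 1, 5}                  # states that attempt an update
ON_SUCCESS = {0: 0, 1: 0, 5: 0}     # any valid update resets to state 0
ON_FAILURE = {0: 5, 5: 3, 1: 4}     # two misses -> dormant; probe miss -> 4
AUTO_NEXT = {3: 2, 2: 1, 4: 3}      # dormant states count down to a probe (assumed order)

def next_state(state, updated=False):
    """One per-iteration transition of the weak-prediction status word.
    Dormant vertices skip the attempt and simply advance."""
    if state in ACTIVE:
        return ON_SUCCESS[state] if updated else ON_FAILURE[state]
    return AUTO_NEXT[state]
```

A vertex that fails twice follows 0, 5, 3, then sits out two iterations (states 3 and 2) before probing again in state 1, matching the two-consecutive-misses regularity.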
Since Seraph follows a vertex-centric programming model, we only need to allocate a status word per vertex, initialized to state 0; the status is then driven by whether the vertex value changed during the iteration, or simply advances to the next state while dormant. The state-transition diagram is supplied by the user as an input to the graph engine, so graph algorithms that do not match this particular diagram can still benefit from the method by supplying another one. Although a weak prediction may be wrong, we perform an all-pull update to retrieve the active vertices at each dense-to-sparse switch. {\bf Discussion}\quad We find that BFS and CC are highly efficient with the strong prediction method, whereas SSSP achieves better efficiency with the weak one. Analyzing the update histories, we find that strong prediction for SSSP can only eliminate a small fraction of useless updates (albeit accurately), while weak prediction removes a considerable fraction at the cost of a small probability of misprediction. With predictive vertex updating, many useless updates are identified, reducing the amount of computation; the saved computation can then be used to squeeze out the real value of the loaded data, directly improving system efficiency. \section{Evaluation} In this section, we evaluate the efficiency of Seraph by answering the following four research questions: \begin{itemize} \item {\bf\em RQ1:} How efficient is Seraph compared to existing heterogeneous graph systems? [Section~\ref{sec:evaluation:efficiency}] \item {\bf\em RQ2:} How does Seraph scale with varying graph sizes and processing capability? [Section~\ref{sec:evaluation:scalability}] \item {\bf\em RQ3:} How effective are pipelined subgraph iteration and predictive vertex updating?
[Section~\ref{sec:evaluation:effectiveness}] \item{\bf\em RQ4:} How does Seraph compare with other state-of-the-art large-scale parallel graph processing systems? [Section~\ref{sec:evaluation:comparison:other:systems}] \end{itemize} \subsection{Experimental Setup} \label{sec:evaluation:setup} {\bf Testbed}\quad All tests are performed on a machine whose host side is equipped with a 10-core Intel Xeon E5-2650v3@2.40GHz and 64GB main memory. The GPU accelerator is an NVIDIA GTX980 (16 SMXs, 2048 cores, 4GB GDDR5 memory), connected to the host via PCI Express 3.0 operating at 16x. The transmission bandwidth for asynchronous memory copy between CPU and GPU is around 11GB/s. {\bf Methodology}\quad Graphie~\cite{han2017graphie} and Garaph~\cite{ma2017garaph} are the GPU-accelerated heterogeneous systems most closely related to Seraph. Graphie tracks the active subgraph with a compact data structure and transfers only the active subgraph from CPU to GPU to reduce useless data transfer; it prioritizes processing the subgraph still buffered on the GPU from the previous iteration. Garaph dispatches subgraphs between the CPU and GPU during the dense stage, and processes subgraphs on the CPU side during the sparse stage. Unfortunately, neither Graphie nor Garaph is open source. For comparison, we can therefore only reference the experimental results reported in their papers (as previous work has also done~\cite{han2017graphie,maass2017mosaic}), and evaluate Seraph on the same graph datasets. Table~\ref{tab:machine:specifications} lists the detailed machine specifications of Seraph against those of Graphie and Garaph. We note that Seraph has essentially the weakest configuration yet achieves the best performance, as the subsequent experiments will show.
{\bf Graph Algorithms}\quad We evaluate Seraph using three well-known graph traversal algorithms: 1) {\em Breadth-First Search (BFS)} for traversing the graph hop by hop to compute the distance of every vertex from a given vertex; 2) {\em Connected Components (CC)} for finding the maximal subgraphs in which any two vertices are connected by a chain of paths; and 3) {\em Single-Source Shortest Path (SSSP)} for finding, from a given vertex to every other vertex, a path that minimizes the sum of its constituent edge weights. As discussed in Section~4.3, we evaluate BFS and CC with the strong predictive vertex-updating optimization, and SSSP with the weak one. {\bf Graph Datasets}\quad We benchmark the graph algorithms on a variety of graph collections, including: 1) 12 real-world graphs (from the Stanford Large Network Dataset Collection\footnote{http://snap.stanford.edu/data} and the Laboratory for Web Algorithmics\footnote{http://law.di.unimi.it/datasets.php}); and 2) 6 large synthesized graphs (generated by the RMAT tool). Table~\ref{tab:graph:collection} lists the graph collections used for the aforementioned comparison with Graphie and Garaph, and for further evaluation. \subsection{{\em RQ1}: Efficiency} \label{sec:evaluation:efficiency} We first evaluate the efficiency of Seraph against two state-of-the-art GPU-accelerated heterogeneous systems: Graphie and Garaph. For a fair comparison, like Graphie and Garaph, our results account for both data transfer and kernel execution. \begin{table} \centering \caption{Detailed machine specifications used in Garaph, Graphie and Seraph} \label{tab:machine:specifications} \tabcolsep=0.05cm \scriptsize \begin{tabular}{|c|c|c|c|} \hline Spec.
& Graphie & Garaph & Seraph\\ \hline\hline GPU type& NVIDIA Titan Z & NVIDIA GTX1070 & NVIDIA GTX980\\ On-board memory & 6GB GDDR5 & 8GB GDDR5 & 4GB GDDR5 \\ Internal bandwidth & 288GB/s & 256GB/s & 224GB/s\\ CUDA cores & 2688 & 1920 & 2048\\ \hline \hline CPU & E7-4830 v3 & E5-2650 v3 & E5-2650 v3\\ Host memory & 256GB DDR3 & 64GB DDR4 & 64GB DDR3 \\ \hline \end{tabular} \end{table} \begin{table}[t] \caption{Graph datasets used in our evaluation} \label{tab:graph:collection} \centering \begin{threeparttable} \tabcolsep=0.08cm \scriptsize \begin{tabular}{|c|ccc|c|ccc|} \cline{2-4}\cline{6-8} \multicolumn{1}{c|}{}&dataset&$|V|$&$|E|$&\multicolumn{1}{c|}{} &dataset&$|V|$&$|E|$\\ \hline\hline \parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{Graphie}}}& cage15 & 5.1M & 99.1M& \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Garaph}}}&uk2007@1M & 1M & 41.2M\\ &kron-500 & 2.1M & 182.1M& &uk2014-host & 4.8M & 50.8M\\ &nlpkkt160 & 8.3M & 221.1M& &enwiki-2013 & 4.2M & 101.3M\\ &orkut & 3.1M & 117.2M& &gsh-2015-tpd & 30.8M & 602.1M\\ &uk-2002 & 18.5M & 298.1M& &twitter-2010 & 61.6M & 1,468.4M\\ &twitter-2010 & 61.6M & 1,468.4M& &sk-2005 & 50.6M & 1,949.4M\\ \cline{5-8} &friendster & 124.8M & 1,806.1M& &RMAT-$k$\tnote{*} & $2^k$ & $2^{k+4}$\\ \hline \end{tabular} \begin{tablenotes} \item[*] We make $22<k<29$ to create different graph sizes \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[t] \centering \caption{Seraph vs. 
Graphie} \label{tab:comparison:Graphie} \tabcolsep=0.02cm \scriptsize \begin{tabular}{|c|ccc|ccc|ccc|} \hline \multirow{2}{*}{}& \multicolumn{3}{c|}{BFS}& \multicolumn{3}{c|}{CC}& \multicolumn{3}{c|}{SSSP}\\ \cline{2-10} &Graphie&Seraph&speedup&Graphie&Seraph&speedup&Graphie&Seraph&speedup\\ \hline cage15 & 0.63 & 0.095 & 6.63x & 0.23 & 0.066 & 3.48x & 0.24 & 0.54 & 0.44x \\ kron\_g500 & 0.59 & 0.047 & 12.55x & 0.48 & 0.057 & 8.42x &1.67 &0.135 &12.37x \\ nlpkkt160 & 6.11 & 0.248 & 24.64x & 1.02 & 0.73 & 1.4x & 7.03 &8.42 & 0.83x \\ orkut & 0.21 & 0.067 & 3.13x & 0.26 & 0.088 & 2.95x & 0.6 & 0.24 & 2.5x\\ uk-2002 & 4.3 & 0.23 & 18.7x &5.04 &0.35 & 14.4x & 11.73 &0.85 & 13.8x\\ twitter & 5.42 & 1.06 & 5.11x & 4.21 & 1.09& 3.86x & 14.67 & 4.8 & 3.06x\\ friendster & 16.44 & 3.8& 4.32x & 12.46 & 2.3 & 5.42x & 29.24 & 7.07 &4.14x\\ \hline \end{tabular} \end{table} {\bf Compared with Graphie}\quad Table~\ref{tab:comparison:Graphie} shows the detailed comparative results. It is worth noting that, for Graphie, Garaph and Seraph, {\tt twitter} and {\tt friendster} cannot fit into GPU global memory. Seraph provides a considerable performance benefit of 4.2x on average, thanks to our pipelined subgraph iteration, which demands less data transmission. For the small graphs, all graph data fits into GPU global memory; in this case, although pipelined subgraph iteration is of little help, our predictive vertex updating remains effective at enhancing in-memory computing. As the table shows, Seraph significantly outperforms Graphie in almost all cases; for instance, BFS on {\tt nlpkkt160} obtains up to 24.64x speedup. For SSSP on {\tt cage15} and {\tt nlpkkt160}, however, Seraph shows little improvement, because {\tt nlpkkt160} exhibits a mesh-like structure that is better suited to a push-based SSSP algorithm~\cite{shun2013ligra}. \begin{table} \centering \caption{Seraph vs.
Garaph} \label{tab:comparison:Garaph} \tabcolsep=0.05cm \scriptsize \begin{tabular}{|c|ccc|ccc|} \hline \multirow{2}{*}{}& \multicolumn{3}{c|}{CC}& \multicolumn{3}{c|}{SSSP}\\ \cline{2-7} &Garaph&Seraph&speedup&Garaph&Seraph&speedup\\ \hline uk2007@1M & 0.14 & 0.048& 2.92x & 0.48 & 0.11 & 4.36x \\ uk2014-host & 0.17 &0.09 &1.89x & 0.57 & 0.236 & 2.42x \\ enwiki-2013& 0.28 & 0.05 & 5.6x & 0.7& 0.174 & 4.02x \\ gsh-2015-tpd & 1.21 & 0.42 & 2.88x & 4.32 & 1.8 & 2.4x\\ twitter & 3.32 & 1.09 & 3.05x & 12.75 & 4.8 & 2.66x \\ sk-2005 & 4.47 & 4.01 & 1.12x & 18.13 & 11.13 & 1.65x \\ \hline \end{tabular} \end{table} {\bf Compared with Garaph}\quad Table~\ref{tab:comparison:Garaph} depicts the comparative results. Since Garaph does not report BFS results, we test Seraph with CC and SSSP only. Seraph outperforms Garaph on all graph datasets, with up to 5.6x speedup for in-GPU-memory graphs and 2.6x speedup for out-of-GPU-memory graphs. Note that the GTX 1070 used by Garaph has a better configuration than Seraph's GPU, in particular twice the memory capacity. Seraph outperforms Garaph for two reasons. First, although Garaph offloads part of the I/O to the CPU during the dense stage, the GPU is far more powerful than the CPU, so most subgraphs are still transferred to the GPU, and Garaph does nothing to bridge the gap between computation and data transfer. Second, Garaph adopts an all-pull method to process edges during the dense stage, which involves much useless computation, whereas Seraph reduces this computation with predictive vertex updating. Although Garaph reduces data transfer during the sparse iterations, Seraph also reduces this I/O overhead with a more efficient push-based method.
\begin{figure}[t] \begin{centering} \subfloat[Out-of-Memory ({\tt uk-2007})]{\begin{centering} \includegraphics[height=2.9cm,width=3.9cm]{fig/scalbility-scale-up.pdf} \par\end{centering}} \subfloat[In-Memory ({\tt enwiki-2013})]{\begin{centering} \includegraphics[height=2.9cm]{fig/effectiveness-explained.pdf} \par\end{centering}} \par\end{centering} \vspace{-0.5em} \caption{Performance characterization with a varying number of SMXs. GTEPS stands for giga traversed edges per second.} \label{fig:scalability:scale:up} \end{figure} \subsection{{\em RQ2}: Scalability} \label{sec:evaluation:scalability} We investigate the scalability of Seraph by: 1) controlling the number of SMXs available for graph processing, and 2) adjusting the graph size from in-memory scale to out-of-memory scale. {\bf Scalability with varying SMXs}\quad Figure~\ref{fig:scalability:scale:up}(a) compares Graphie's subgraph iteration with ours on the entire {\tt uk2007} dataset (105 million vertices and 3.7 billion edges). Thanks to pipelined subgraph iteration and predictive vertex updating, for all three graph algorithms Seraph significantly improves graph-processing performance over Graphie as more GPU SMXs are supplied. For instance, for BFS, Graphie's performance saturates once the number of SMXs reaches 4, while Seraph keeps delivering near-linear improvement; CC and SSSP behave similarly. We note that Seraph may also saturate: for BFS, the benefit stops once the number of SMXs reaches 11. We suspect one underlying reason is the limited GPU internal bandwidth between the SMXs and global memory. Figure~\ref{fig:scalability:scale:up}(b) further shows the performance characterization of pure in-memory computing on the small graph {\tt enwiki-2013}, which fits into global memory.
We observe similar behavior, which indicates that the latter saturation has nothing to do with CPU-GPU transmission efficiency. Coping with the internal inefficiency of GPUs for graph processing is interesting future work beyond the scope of this paper. \begin{figure*}[t] \begin{centering} \subfloat[BFS]{\begin{centering} \includegraphics[width=5.2cm]{fig/scalilbity-graph-size-bfs.pdf} \par\end{centering}} \subfloat[CC]{\begin{centering} \includegraphics[width=5.2cm]{fig/scalilbity-graph-size-wcc.pdf} \par\end{centering}} \subfloat[SSSP]{\begin{centering} \includegraphics[width=5.2cm]{fig/scalilbity-graph-size-sssp.pdf} \vspace{-0.5em} \par\end{centering}} \par\end{centering} \vspace{-0.5em} \caption{Throughput characterization of Graphie, and of Seraph w/ and w/o predictive vertex updating (pvu), as the size of the {\tt RMAT} graph increases. SSSP returns nothing at 4 billion edges since it runs out of host memory.} \label{fig:scalability:graph:size} \end{figure*} {\bf Scalability with varying graph sizes}\quad Figure~\ref{fig:scalability:graph:size} depicts the performance of different strategies on {\tt RMAT} datasets of different scales, where the edge count is 16x the vertex count. Note that the GTX980 has 4GB of global memory. BFS and CC work on unweighted graphs, so they can load an unweighted {\tt RMAT} graph with at most 1 billion edges into GPU global memory; SSSP works on weighted graphs and can hold at most 512 million edges. Overall, throughput (i.e., GTEPS) is high while the graph fits into global memory, and drops dramatically past the point where it no longer fits, because of CPU-GPU data transmission. Predictive vertex updating brings additional benefits, especially in the in-memory setting.
For large graphs, our pipelined subgraph iteration and predictive vertex updating still contribute considerable performance benefits. Note that host memory runs out for SSSP on {\tt RMAT28} with 4 billion edges, which needs at least 64GB to store the weighted graph data. \begin{figure}[tb] \centering \includegraphics[scale=0.65]{fig/effectinveness-breakdown.pdf} \caption{Performance breakdown and effectiveness evaluation of pipelined subgraph iteration} \label{fig:effectiveness:breakdown} \end{figure} \subsection{{\em RQ3}: Effectiveness} \label{sec:evaluation:effectiveness} We also evaluate the effectiveness of pipelined subgraph iteration (psi) and predictive vertex updating (pvu). {\bf Breakdown}\quad Figure~\ref{fig:effectiveness:breakdown} shows the breakdown results. We conduct the test on four large graph datasets, where {\tt TW} is {\tt twitter2010}, {\tt RMAT} is {\tt RMAT27}, and {\tt UK} is the entire {\tt uk2007} dataset. The baseline is traditional subgraph processing without multiple iterations~\cite{han2017graphie}. Compared with the baseline, psi offers up to 1.86x speedup for CC on {\tt uk2007}, especially for large-scale graphs, because subgraph-cache optimization is useless when the graph is much larger than memory. Predictive vertex updating offers up to 1.79x speedup on {\tt twitter}, and 1.46x on average. Together, the two techniques provide up to 2.86x speedup for CC on {\tt friendster}, and 2.11x on average. {\bf Effectiveness of pipelined subgraph iteration}\quad To demonstrate the effectiveness of psi, we also test the {naive reentry subgraph iteration (rsi)} used in CLIP~\cite{ai2017squeezing}. For CLIP, we set the most suitable value of the maximum reentry times (MRT) for BFS, CC and SSSP to 3, 2 and 2, respectively. Details on choosing a reasonable MRT can be found in~\cite{ai2017squeezing}.
Figure~\ref{fig:effectiveness:breakdown} lists the detailed results. We can see that rsi introduces only a slight performance improvement over the baseline, at most $34\%$ (for CC on {\tt uk2007}) and only $10.4\%$ on average. This is because rsi simply updates the same subgraph over and over again, incurring much redundant computation. Compared with rsi, psi causes less redundant computation thanks to its pipelined scheduling: we obtain up to 86\% performance improvement for CC on {\tt uk2007}, and 44.4\% on average. {\bf Effectiveness of predictive vertex updating}\quad Unlike strong pvu, which provably removes only useless work, weak pvu exhibits more complex behavior; we therefore choose SSSP to better demonstrate the effectiveness of pvu. Figure~\ref{fig:effectiveness:pvu}(a) counts the number of vertices in each status during each iteration. It shows that the number of vertices that perform a data update (i.e., with status 0, 1 or 5) shrinks significantly as the iterations go deeper; conversely, the amount of redundant computation avoided (i.e., vertices with status 2, 3 or 4, which would not produce a valid update) grows with each iteration. Figure~\ref{fig:effectiveness:pvu}(b) further sums the number of vertices with status 0/1/5 under the {\tt our} label, and those with status 2/3/4 under the {\tt redundant} label; {\tt real} indicates the number of vertices that perform a valid update in each iteration. Compared with the total amount of computation in prior work~\cite{shun2013ligra, ma2017garaph}, our optimization closely tracks the real situation and dramatically reduces their redundant computation, yielding performance speedup.
\begin{figure}[t] \begin{centering} \subfloat[Status]{\begin{centering} \includegraphics[width=3.9cm]{fig/pvu-status.pdf} \par\end{centering}} \subfloat[Computation amount]{\begin{centering} \includegraphics[width=3.8cm]{fig/pvu-effectiveness.pdf} \par\end{centering}} \par\end{centering} \vspace{-1em} \caption{Effectiveness evaluation of predictive vertex updating using SSSP with {\tt cage15}. (a) The variation in the number of vertices in each status; (b) the variation in the amount of computation.} \vspace{-1em} \label{fig:effectiveness:pvu} \end{figure} \begin{figure}[tb] \centering \includegraphics[scale=0.8]{fig/other-systems.pdf} \vspace{-1em} \caption{Performance comparison with other state-of-the-art solutions for large-scale graph processing} \vspace{-1em} \label{fig:evaluation:other:systems} \end{figure} \subsection{{\em RQ4}: vs. Other Advanced Large-Scale Graph Processing Solutions} \label{sec:evaluation:comparison:other:systems} We finally compare Seraph with other state-of-the-art solutions for large-scale graph processing: 1) Ligra~\cite{shun2013ligra}, a host-memory-based solution; and 2) Gemini~\cite{zhu2016gemini}, a distributed solution. Since Ligra is memory-consuming~\cite{ma2017garaph}, we use a node with 256GB of memory to test it. Gemini runs on a 4-node cluster, where each node has the same configuration as the one used for Ligra. Figure~\ref{fig:evaluation:other:systems} depicts the results. Compared with Ligra, Seraph is much more efficient for CC and SSSP, with up to 3.9x speedup: a heterogeneous solution integrating a specialized accelerator offers great potential over a traditional shared-memory solution. Even against Gemini (with more computing and storage resources), Seraph obtains comparable results; for instance, our CC results on all datasets are superior to Gemini's, with up to 1.64x speedup.
Although Gemini is a good scale-out solution, a heterogeneous solution can offer comparable efficiency, mainly thanks to less network communication and fewer redundant computations. Note that both Ligra and Gemini show better efficiency than Seraph for BFS. The underlying reason lies in a specialized optimization they use: during a dense iteration, a vertex update breaks early as soon as the vertex finds an already-traversed neighbor, so the computation for BFS is much sparser than for CC and SSSP. We should also note that such tailor-made optimization could be implemented in the Seraph framework as well to accelerate BFS. \vspace{-1em} \section{Related Work} {\bf Heterogeneous Graph Systems}\quad Heterogeneous architectures integrate the advantageous resources of different devices to satisfy different demands of graph processing. A wide spectrum of efforts has been put into developing specialized graph accelerators on single~\cite{Zhang:2017:BPF, Zhou:2016:HTE, Jin:2017:ICDCS} or multiple FPGAs~\cite{Dai:2017:FEL} for energy efficiency. A number of GPU-accelerated heterogeneous graph systems~\cite{wang2016gunrock,hong2017multigraph,liu2015enterprise,khorasani2014cusha} have also emerged to support high-performance large-scale graph processing. Graphie~\cite{han2017graphie} transfers the active subgraph to the GPU and reuses the cached subgraph in the next iteration to reduce the I/O amount. Garaph~\cite{ma2017garaph} streams edge data asynchronously to the GPU for graph processing. As discussed before, these prior graph systems may still deliver limited performance on large-scale graphs due to their inefficient data transmission. To the best of our knowledge, this paper is the first to leverage multi-time subgraph iteration to break through this limitation, enabling the scale-up efficiency of heterogeneous graph systems.
{\bf Distributed Graph Systems}\quad A large body of research seeks help from distributed deployment, which aggregates more resources than a single machine for processing large-scale graphs. The primary task of distributed graph systems is to obtain well-cut graph partitions~\cite{gonzalez2012powergraph,chen2015powerlyra,gonzalez2014graphx,avery2011giraph,zhu2016gemini} that minimize communication across machines. A few recent studies use emerging high-speed networks (e.g., RDMA) to reduce communication overhead~\cite{wu2015g, Shi:2016:FCR}. Gemini presents a series of adaptive runtime optimizations with sparse-dense switching and locality- and NUMA-aware features, enabling attractive scale-out efficiency. In comparison with these distributed designs, our evaluation (Section~7.5) verifies that heterogeneous solutions, despite involving fewer resources, can be just as promising or even more so in practice for large-scale graph processing, owing to lower communication cost and fewer redundant computations. {\bf Disk-Based Graph Systems}\quad Many disk-based systems also support large-scale graph processing. GraphChi~\cite{kyrola2012graphchi} proposes parallel sliding windows that require only a small number of non-sequential disk accesses. GridGraph~\cite{zhu2015gridgraph} uses 2-level hierarchical partitioning to reduce the I/O amount. TurboGraph~\cite{han2013turbograph} presents a pin-and-slide model to fully exploit multicore and I/O parallelism. Owing to the significantly lower disk-to-memory bandwidth, disk-based graph systems are orders of magnitude slower than heterogeneous solutions. {\bf In-Memory Graph Systems}\quad For in-memory graph processing, the graph data only needs to be copied once at the beginning; once the data is ready, the processors can process the graph without any further data transmission until completion~\cite{malicevic2017everything,shun2013ligra,nguyen2013lightweight}.
In-memory graph systems such as Gunrock~\cite{wang2016gunrock} can usually provide orders-of-magnitude performance improvements over heterogeneous implementations. However, limited by global memory, existing dedicated accelerators (e.g., GPUs) cannot store real-world graphs with billions of edges~\cite{zhang2015numa, shun2013ligra}, and given the inefficient data transmission, the potential of the accelerators is significantly underutilized. In contrast, combined with pipelined subgraph iteration, we further present predictive vertex updating to better exploit the GPU processing capability and enhance the performance of heterogeneous graph systems. \section{Conclusion} \vspace{-1em} \indent Scaling up the performance of heterogeneous graph systems remains tremendously challenging due to the well-known interconnect transmission inefficiency. To cope with this problem, we observe that, by iterating each subgraph multiple times, heterogeneous graph systems can obtain further performance improvements through enhanced GPU processing capacity. Guided by this observation, we develop Seraph with two technical innovations. First, we present pipelined subgraph iteration to maximize the information propagation from each subgraph to the others, fully exhausting its value. Second, we propose two efficient vertex-updating solutions that predict unnecessary vertex computations, better supporting pipelined subgraph iteration. Our results demonstrate that Seraph outperforms the state-of-the-art heterogeneous graph systems Graphie and Garaph by 5.42x and 3.05x, respectively. Seraph also scales up better than existing heterogeneous graph systems. Our comparative results further show that Seraph achieves impressive performance against a state-of-the-art CPU-based system (Ligra) and a distributed graph processing system (Gemini). { \bibliographystyle{IEEEtran}
\section*{Introduction} Studying free probability analogues of classical probability objects has been a central topic in free probability since the inception of the theory by Voiculescu in \cite{DV3}. The most popular and most thoroughly studied such object is the semicircular distribution, the free probability analogue of the classical Gaussian distribution. Free Poisson distributions are, next to semicircular distributions, the most popular objects in free probability. Multidimensional semicircular distributions were first treated in \cite{RS1}. Finite-dimensional multi-variable free Poisson distributions were defined and studied in \cite{RS}. More general infinite-dimensional multi-variable compound free Poisson distributions were studied recently in \cite{AG}. Voiculescu's seminal work \cite{DV} started a new research field in free probability, bi-free probability. Generalizing the ideas and results of free probability to this new setting has been a main theme in the rapid development of bi-free probability. For example, in \cite{DV}, Voiculescu determined bi-free central limit distributions, a bi-free analogue of semicircular distributions. Voiculescu \cite{DV4} constructed a bi-free partial $R$-transform as an analogue of his $R$-transform in \cite{DV5}, and in \cite{DV2} developed a bi-free partial $S$-transform. On the combinatorial side, Mastnak and Nica \cite{MN} introduced a collection of partitions for bi-free pairs of faces that was postulated to play a role analogous to that of non-crossing partitions in free probability. In \cite{CNS1} the postulate of Mastnak and Nica was confirmed. Subsequently, \cite{CNS2} generalized these notions to the operator-valued setting. In addition, the notion of bi-free infinitely (additive) divisible distributions for commutative pairs of self-adjoint random variables was developed in \cite{GHM}, and generalized to (not necessarily commutative) pairs of random variables in \cite{MG}.
Gu, Huang, and Mingo \cite{GHM} discovered compound bi-free Poisson distributions for commutative pairs of random variables in terms of their bi-free cumulants (Example 3.13 (3) in \cite{GHM}). In this paper, inspired by the work in \cite{RS} and \cite{AG}, we define and study compound bi-free Poisson distributions for {\sl two-faced families of random variables}. Voiculescu \cite{DV} defined bi-free central limit distributions for two-faced families of random variables in terms of their bi-free cumulants (Definition 7.4 in \cite{DV}). A similar method was adopted in defining (multi-variable) free Poisson distributions in \cite{NS}, \cite{RS}, and \cite{AG}. Following the same philosophy, we define {\sl compound bi-free Poisson distributions for two-faced families of random variables} in terms of their bi-free cumulants. A compound bi-free Poisson distribution can be realized as the Poisson limit distribution of a sequence of two-faced families of random variables. As in free probability, a bi-free infinitely divisible distribution, as a positive linear functional on the $*$-algebra of polynomials, can be approximated in distribution by compound bi-free Poisson distributions of self-adjoint random variables. The discovery of the connections between free probability and random matrices, initially made by Voiculescu in \cite{DV1}, is a milestone in the development of free probability; it led free probability out from under the umbrella of operator algebras and into a wider field of research. The development of the theory connecting free probability and random matrices has made free probability a powerful tool in random matrix theory (\cite{AGZ}). Inspired by the corresponding work in free probability, Skoufranis \cite{PS} constructed bi-matrix models for bi-free central limit distributions and a special kind of bi-free Poisson distributions.
More precisely, Skoufranis constructed bi-matrix models of random matrices, and of matrices with entries of creation and annihilation operators, for bi-free central limit distributions (Sections 4 and 5 in \cite{PS}). Skoufranis also constructed a bi-matrix model of random matrices for a bi-free Poisson distribution determined by numbers $\lambda>0$ and $\alpha, \beta \in \mathbb{R}$, i.e., a distribution $\mu$ characterized by $\kappa_\chi(\mu)=\lambda \alpha^{|\chi^{-1}(\{l\})|}\beta^{|\chi^{-1}(\{r\})|}$, for $n\in \mathbb{N}$ and $\chi:\{1, 2, \cdots, n\}\rightarrow \{l, r\}$ (Example 4.15 in \cite{PS}). In this paper, we construct bi-matrix models for a compound bi-free Poisson distribution determined by $\lambda>0$ and the distribution $\mu_a$ of a bipartite two-faced family $a=((a_{1, l}, \cdots, a_{N, l}), (a_{1, r}, \cdots, a_{M, r}))$, where the tuple $\{a_{1, l}, \cdots, a_{N, l}, a_{1, r}, \cdots, a_{M, r}\}$ has an almost sure random matrix model. This paper is organized as follows. Besides this Introduction, there are five sections in this paper. In Section 1, we recall some essential combinatorial aspects of bi-free probability, operator-valued bi-free probability, and the general construction of bi-matrix models in \cite{PS}. In Section 2, we give the definition of, and a Poisson limit theorem for, a two-faced family of random variables having a compound bi-free Poisson distribution (Definition 2.1 and Theorem 2.3). Bi-free infinite divisibility of distributions of two-faced families of self-adjoint random variables is defined and characterized by the existence of a bi-free additive convolution semigroup associated with the distribution (Definition 3.1 and Theorem 3.4). Furthermore, such a distribution can be approximated by compound bi-free Poisson distributions of two-faced families of self-adjoint random variables (Theorem 3.5).
Historically, the work of Marchenko and Pastur in \cite{MP} implies that Wishart matrices with appropriate normalization and scaling form a random matrix model for free Poisson distributions (see also Theorem 4.1.9 in \cite{HP}). F. Hiai and D. Petz \cite{HP} provided a random matrix model for the compound free Poisson distribution $P(\lambda, \nu)$ (Proposition 4.4.11 in \cite{HP}), where $\lambda>0$ and $\nu$ is a probability measure on $\mathbb{R}$ with compact support. In Section 4, we construct a random matrix model for a multi-dimensional compound free Poisson distribution $P(\lambda, \nu_a)$ (Theorem 4.2), where $\lambda>0$ and $\nu_a$ is the distribution of $a=(a_1, \cdots, a_N)$, provided the tuple $\{a_1, \cdots, a_N\}$ has an almost sure random matrix model. In particular, $P(\lambda, \nu_a)$ has a random matrix model if the tuple $\{a_1, \cdots, a_N\}$ is classically independent (Corollary 4.3). Applying this result to two-faced families of random variables, we get a random bi-matrix model for a compound bi-free Poisson distribution determined by $\lambda>0$ and $\nu_a$, $a=((a_{1,l},\cdots, a_{N,l}), (a_{1, r}, \cdots, a_{M, r}))$, provided the tuple has an almost sure random matrix model and the left random variables commute with the right random variables in the tuple (Theorem 4.4). P. Skoufranis \cite{PS} constructed bi-matrix models with entries of creation and annihilation operators for bi-free central limit distributions (Theorem 5.1 and Remark 5.2 in \cite{PS}). Using the operator model in \cite{MN}, we construct an asymptotic bi-matrix model with entries of creation and annihilation operators for a compound bi-free Poisson distribution determined by $\lambda>0$ and the distribution of a commutative pair $(a_1, a_2)$ (Theorem 5.1). \section{Preliminaries} In this section we summarize some essential combinatorial aspects of bi-free probability, operator-valued bi-free probability, and the general construction of bi-matrix models in \cite{PS}.
The reader is referred to \cite{DV}, \cite{CNS1}, \cite{CNS2}, and \cite{PS1} for more details on bi-free probability, and to \cite{NS} and \cite{VDN} for basics on free probability. \subsection{Combinatorics of Bi-free Probability} Let $\chi:\{1, 2, \cdots, n\}\rightarrow \{l,r\}$. Let us record explicitly the occurrences of $l$ and $r$ in $\chi$: $\chi^{-1}(l)=\{i_1<i_2<\cdots <i_p\}$ and $\chi^{-1}(r)=\{i_{p+1}>i_{p+2}>\cdots >i_n\}$. We then define a permutation $s_\chi$ on $\{1, 2, \cdots, n\}$ by $s_\chi(k)=i_k$, for $k=1, 2,\cdots, n$. For a subset $V=\{i_1< i_2< \cdots< i_k\}$ of the set $\{1, 2, \cdots, n\}$ and $a_1, \cdots, a_n\in \A$, define $$\varphi_V(a_1, \cdots, a_n)=\varphi(a_{i_1}a_{i_2}\cdots a_{i_k}).$$ Let $\P(n)$ be the set of all partitions of $\{1, 2, \cdots, n\}$. For a partition $\pi=\{V_1, V_2, \cdots, V_d\}\in \P(n)$, we define $$\varphi_\pi(a_1, \cdots, a_n):=\prod_{V\in \pi}\varphi_V(a_1, \cdots, a_n).$$ Define $BNC(n,\chi)=\{s_\chi\circ \pi:\pi\in NC(n)\}$, where $NC(n)$ is the set of all non-crossing partitions of $\{1, 2, \cdots, n\}$ (Lecture 9 in \cite{NS}). A $\sigma\in BNC(n,\chi)$ is called a {\sl bi-non-crossing} partition of $\{1, 2, \cdots, n\}$. Let $(\A,\varphi)$ be a non-commutative probability space. The bi-free cumulants $(\kappa_\chi:\A^n\rightarrow \mathbb{C})_{n\ge 1, \chi:\{1, 2, \cdots, n\}\rightarrow \{l,r\}}$ of $(\A,\varphi)$ are defined by $$\kappa_\chi (a_1, \cdots, a_n)=\sum_{\pi\in BNC(n,\chi)}\varphi_{\pi}(a_1, \cdots, a_n)\mu_n(s^{-1}_\chi\circ\pi, 1_n),\eqno (1.1)$$ for $n\ge 1$, $\chi:\{1,2,\cdots, n\}\rightarrow \{l,r\}$, and $a_1, \cdots, a_n\in \A$, where $\mu_n$ is the M\"obius function on $NC(n)$ (Lecture 10 in \cite{NS}). For a subset $V=\{i_1, i_2, \cdots, i_k\}\subseteq \{1, 2, \cdots, n\}$, let $\chi_V$ be the restriction of $\chi$ to $V$. We define $\kappa_{\chi, V}(a_1, a_2, \cdots, a_n)=\kappa_{\chi_V}(a_{i_1}, a_{i_2}, \cdots, a_{i_k})$.
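For concreteness, here is a small worked instance of these definitions (an illustrative computation of ours, not taken from the references). Take $n=4$ and $\chi(1)=l$, $\chi(2)=r$, $\chi(3)=l$, $\chi(4)=r$. Then $\chi^{-1}(l)=\{1<3\}$ and $\chi^{-1}(r)=\{4>2\}$, so $p=2$, $(i_1, i_2, i_3, i_4)=(1, 3, 4, 2)$, and $s_\chi$ maps $1\mapsto 1$, $2\mapsto 3$, $3\mapsto 4$, $4\mapsto 2$. Similarly, for $n=2$ the set $BNC(2,\chi)$ consists only of the full partition $1_2$ and the partition into singletons, so $(1.1)$ reduces, for any $\chi:\{1,2\}\rightarrow\{l,r\}$, to $$\kappa_\chi(a_1, a_2)=\varphi(a_1a_2)-\varphi(a_1)\varphi(a_2),$$ the familiar second-order cumulant from free probability.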
For a partition $\pi=\{V_1, V_2, \cdots, V_k\} \in BNC(n,\chi)$, we define $\kappa_{\chi, \pi}(a_1, \cdots, a_n)=\prod_{V\in \pi}\kappa_{\chi, V}(a_1, a_2, \cdots, a_n)$. Then the bi-free cumulant appearing in $(1.1)$ is $\kappa_{\chi, 1_n}(a_1, \cdots, a_n)$. The bi-free cumulants are determined by the equation $$\varphi (a_1 a_2 \cdots a_n)=\sum_{\pi\in BNC(n, \chi)}\kappa_{\chi,\pi}(a_1, a_2, \cdots, a_n), \forall a_1, \cdots, a_n\in \A, \eqno (1.2)$$ for each $\chi:\{1, 2, \cdots, n\}\rightarrow \{l,r\}$. Charlesworth, Nelson, and Skoufranis \cite{CNS1} proved that $$z'=((z'_i)_{i\in I}, (z'_j)_{j\in J}), z''=((z''_i)_{i\in I}, (z''_j)_{j\in J})$$ in a non-commutative probability space $(\A,\varphi)$ are bi-free if and only if $$\kappa_\chi(z_{\alpha(1)}^{\epsilon_1},z_{\alpha(2)}^{\epsilon_2}, \cdots, z_{\alpha(n)}^{\epsilon_n})=0, \eqno (1.3)$$ whenever $\alpha:\{1,2,\cdots, n\}\rightarrow I\bigsqcup J$ and $\chi:\{1, 2, \cdots, n\}\rightarrow \{l,r\}$ are such that $\alpha^{-1}(I)=\chi ^{-1}(\{l\})$, $\epsilon:\{1,2, \cdots, n\}\rightarrow \{',''\}$ is not constant, and $n\ge 2$ (Theorem 4.3.1 in \cite{CNS2}). Let $\mu$ and $\nu$ be the distributions of the pairs $(a_l, a_r)$ and $(b_l, b_r)$, respectively. We call the distribution $\mu\boxplus\boxplus\nu$ of $(a_l+b_l, a_r+b_r)$ the bi-free additive convolution of $\mu$ and $\nu$, if $(a_l, a_r)$ and $(b_l, b_r)$ are bi-free. \subsection{Structures of Operator-valued Bi-freeness} Let $B$ be a unital algebra. A $B$-$B$-{\sl non-commutative probability space} is a triple $(\A,E,\varepsilon)$ where $\A$ is a unital algebra, $\varepsilon: B\otimes B^{op}\rightarrow \A$ is a unital homomorphism such that $\varepsilon|_{B\otimes 1_B}$ and $\varepsilon|_{1_B\otimes B}$ are injective, and $E:\A\rightarrow B$ is a linear map such that $E(\varepsilon(b_1\otimes b_2)a)=b_1E(a)b_2$ and $E(a\varepsilon(b\otimes 1_B))=E(a\varepsilon(1_B\otimes b))$. Let $L_b=\varepsilon(b\otimes 1_B)$ and $R_b=\varepsilon(1_B\otimes b)$.
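For orientation, a minimal example of this definition (ours): taking $B=\mathbb{C}$, the map $\varepsilon(b_1\otimes b_2)=b_1b_2 1_{\A}$ gives $L_b=R_b=b1_{\A}$, the compatibility conditions on $E$ become automatic, and a $B$-$B$-non-commutative probability space $(\A, E, \varepsilon)$ is simply an ordinary non-commutative probability space $(\A, \varphi)$ with $\varphi=E:\A\rightarrow \mathbb{C}$.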
The unital subalgebras $$\A_l=\{a\in \A: aR_b=R_ba, \forall b\in B\}, \quad \A_r=\{a\in \A: L_ba=aL_b,\forall b\in B\}$$ are called the {\sl left} and {\sl right} algebras of $\A$, respectively. In the following we give a canonical example of a $B$-$B$-non-commutative probability space. A {\sl $B$-$B$-bi-module with a specified $B$-vector state} is a triple $(\mathcal{X}, \mathcal{X}^0, p)$ where $\mathcal{X}=B\oplus \mathcal{X}^0$ is a direct sum of $B$-$B$-bi-modules and $p:\mathcal{X}\rightarrow B$, $p(b\oplus\eta)=b$. Let $\L(\X)$ denote the set of linear operators on $\mathcal{X}$. For $b\in B$, define $L_b, R_b\in \L(\X)$ by $$L_b(x)=bx, \quad R_b(x)=xb, \quad \forall x\in \X.$$ Similarly, we can define left and right algebras as follows: $$\L_l(\X):=\{A\in \L(\X): AR_b=R_bA, \forall b\in B\}, \quad \L_r(\X):=\{A\in \L(\X):AL_b=L_bA, \forall b\in B\}. $$ Given a $B$-$B$-bi-module with a specified $B$-vector state $(\X, \X^0, p)$, the expectation $E_{\L(\X)}$ of $\L(\X)$ onto $B$ is defined by $$E_{\L(\X)}(A)=p(A1_B), \quad \forall A\in \L(\X).$$ Define $\varepsilon: B\otimes B^{op}\rightarrow \L(\X)$, $\varepsilon(b_1\otimes b_2)=L_{b_1}R_{b_2}$. Then $(\L(\X), E_{\L(\X)}, \varepsilon)$ is a (concrete) $B$-$B$-non-commutative probability space. Moreover, Theorem 3.2.4 in \cite{CNS2} demonstrates that every abstract $B$-$B$-non-commutative probability space can be represented inside a concrete $B$-$B$-non-commutative probability space. \subsection{Bi-Matrix Models} Let $\X$ be a vector space over $\mathbb{C}$ with $\X=\mathbb{C}\xi\oplus \X^0$ and $p:\X\rightarrow \mathbb{C}$, $p(\lambda\xi+\eta)=\lambda$. We call $(\X, \X^0, \xi, p)$ a pointed vector space. For $N\in \mathbb{N}$, consider the $M_N(\mathbb{C})$-$M_N(\mathbb{C})$ bi-modular actions on $\X_N:=M_N(\X)$, $$[a_{i,j}]\cdot[\eta_{i,j}]=[\sum_{k=1}^Na_{i,k}\eta_{k,j}], \quad [\eta_{i,j}]\cdot[a_{i,j}]=[\sum_{k=1}^Na_{k,j}\eta_{i,k}],$$ for all $[a_{i,j}]\in M_N(\mathbb{C})$ and $[\eta_{i,j}]\in \X_N$.
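As a quick check on these formulas (our own remark): since the entries $a_{k,j}\in \mathbb{C}$ are scalars, multiplying a vector $\eta_{i,k}\in \X$ by $a_{k,j}$ on either side is the same scalar multiplication, so the right action is the expected matrix product, $$[\eta_{i,j}]\cdot[a_{i,j}]=[\sum_{k=1}^Na_{k,j}\eta_{i,k}]=[\sum_{k=1}^N\eta_{i,k}a_{k,j}],$$ and the two actions indeed make $\X_N$ an $M_N(\mathbb{C})$-$M_N(\mathbb{C})$-bi-module.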
Then $\X_N$ becomes an $M_N(\mathbb{C})$-$M_N(\mathbb{C})$-bi-module with a specified $M_N(\mathbb{C})$-vector state via $$\X_N=M_N(\mathbb{C}\xi)\oplus M_N(\X^0),$$ and a linear map $p_{\X_N}:\X_N\rightarrow M_N(\mathbb{C})$ defined by $p_{\X_N}([\eta_{i,j}])=[p(\eta_{i,j})]$; it is called the $M_N(\mathbb{C})$-$M_N(\mathbb{C})$-bi-module associated with $(\X,p)$, and $(\L(\X_N), E_{\L(\X_N)}, \varepsilon)$ is called the $M_N(\mathbb{C})$-$M_N(\mathbb{C})$-non-commutative probability space associated with $(\X,p)$. The expectation $E_{\L(\X_N)}:\L(\X_N)\rightarrow M_N(\mathbb{C})$ has the form $E_{\L(\X_N)}(A)=p_{\X_N}(A1_{N,\xi})$, where $A\in \L(\X_N)$ and $1_{N,\xi}$ is the diagonal matrix $\mathrm{diag}(\xi, \xi, \cdots, \xi)$. To consider bi-matrix models, we define two homomorphisms $L: M_N(\L(\X))\rightarrow \L(\X_N)$ and $R: M_N(\L(\X)^{op})^{op}\rightarrow \L(\X_N)$ by $$L([T_{i,j}])[\eta_{i,j}]=[\sum_{k=1}^NT_{i,k}(\eta_{k,j})], \quad R([T_{i,j}])[\eta_{i,j}]=[\sum_{k=1}^NT_{k,j}(\eta_{i,k})],$$ for $[\eta_{i,j}]\in \X_N$ and $[T_{i,j}]\in M_N(\L(\X))$. $L([T_{i,j}])$ and $R([T_{i,j}])$ are called left and right matrices of $\L(\X)$, respectively. Let $\A=(L^{\infty-}(\Omega, P), E)$, where $L^{\infty-}(\Omega, P)=\cap_{p\ge 1}L^p(\Omega, P)$, $(\Omega, P)$ is a probability space, and $E: f\mapsto \int_\Omega f(t)\,dP(t)$ is the expectation. Skoufranis \cite{PS} introduced {\sl random pairs of matrices} as follows. For $N\in \mathbb{N}$, an $N\times N$ random pair of matrices on $L^{\infty-}(\Omega, P)$ is a pair $(X_l, X_r)$, where $X_l$ is a left matrix and $X_r$ is a right matrix with entries from $\A\subset \L(L^2(\Omega, P))$ (Definition 4.5 in \cite{PS}). We generalize this concept to two-faced families.
\begin{Definition} For $N\in \mathbb{N}$, an $N\times N$ random two-faced family of matrices on $L^{\infty-}(\Omega, P)$ is a two-faced family $((X_{i})_{i\in I}, (X_{j})_{j\in J})$ where $X_i, i\in I,$ are left matrices and $X_j, j\in J,$ are right matrices with entries from $L^{\infty-}(\Omega, P)$. \end{Definition} {\bf Acknowledgements} The author would like to thank the referee(s) for carefully reading the original version of this paper, pointing out some mistakes and typos in it, and providing suggestions for revising it. \section{The Definition and a Poisson Limit Theorem} In this section, we define what it means for a two-faced family to have a compound bi-free Poisson distribution. Furthermore, a compound bi-free Poisson distribution can be realized via a bi-free Poisson limit theorem. \begin{Definition} Let $I$ and $J$ be two disjoint index sets, and let $((z_i)_{i\in I}, (z_j)_{j\in J})$ be a two-faced family of random variables in a non-commutative probability space $(\A,\varphi)$. We say that $((z_i)_{i\in I}, (z_j)_{j\in J})$ has a compound bi-free Poisson distribution if there exist a real number $\lambda>0$ and a two-faced family $((a_i)_{i\in I}, (a_j)_{j\in J})$ of random variables in $(\A,\varphi)$ such that, for every $n\in \mathbb{N}$ and every map $$\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J,$$ we have $$\kappa_\chi(z_{\chi(1)}, \cdots, z_{\chi(n)})=\lambda \varphi (a_{\chi(1)}a_{\chi(2)}\cdots a_{\chi(n)}).$$ We call the distribution of $((z_i)_{i\in I}, (z_j)_{j\in J})$ a compound bi-free Poisson distribution determined by $\lambda$ and $\mu_a$, the distribution of $a=((a_i)_{i\in I}, (a_{j})_{j\in J})$. \end{Definition} \begin{Remark} The above definition covers the following well-known cases. \begin{enumerate} \item Let $I=\{l\}, J=\{r\}$, $[z_l, z_r]=[a_l, a_r]=0$.
Then $$\kappa_{\chi}(z_l, z_r)=\lambda \varphi(a_l^{|\chi^{-1}(\{l\})|}a_r^{|\chi^{-1}(\{r\})|}), $$ which defines a compound bi-free Poisson distribution for a commutative pair of random variables (Example 3.13 (3) of \cite{GHM}). \item If $\chi:\{1,2,\cdots, n\}\rightarrow I$, we get $$\kappa_\chi(z_{\chi(1)}, \cdots, z_{\chi(n)})=\lambda \nu (X_{\chi(1)}X_{\chi(2)}\cdots X_{\chi(n)}),$$ where $\nu:\mathbb{C}(X_i:i\in I)\rightarrow \mathbb{C}$, defined by $\nu(X_{\chi(1)}X_{\chi(2)}\cdots X_{\chi(n)})=\varphi (a_{\chi(1)}a_{\chi(2)}\cdots a_{\chi(n)})$, is the distribution of $\{a_i:i\in I\}$ in $(\A, \varphi)$. In this case, we obtain the scalar-valued compound (free) Poisson distributions defined in 4.4.1 of \cite{RS} with parameter $\lambda>0$. It follows that Definition 2.1 provides a bi-free analogue of multi-variable compound free Poisson distributions. \end{enumerate} \end{Remark} Corollary 2.4 in \cite{MG} gives the following Poisson limit theorem for compound bi-free Poisson distributions. \begin{Theorem} Let $((a_i)_{i\in I}, (a_j)_{j\in J})$ be a two-faced family of random variables in $(\A, \varphi)$, and let $\lambda >0$. For each $N>\lambda$, let $(\C_N, \phi_N)$ be a $C^*$-probability space, and let $p_N\in \mathcal{C}_N$ be a projection with $\phi_N(p_N)=\frac{\lambda}{N}$. Let $a_N= ((a_i\otimes p_{N})_{i\in I}, (a_j\otimes p_{N})_{j\in J})$ be a two-faced family of random variables in $\A_N:=\A\otimes \C_N$ with $\varphi_N:=\varphi \otimes \phi_N$. Let $\{((a_{N, i, m})_{i\in I}, (a_{N,j,m})_{j\in J}):m=1, 2, \cdots, N\}$ be a bi-free sequence of $N$ identically distributed two-faced families of random variables in $(\A_N, \varphi_N)$ such that each of the two-faced families has the same distribution as $a_N$. Finally, let $S_{N,k}=\sum_{m=1}^Na_{N,k,m}$, for $k\in I\bigsqcup J$, and $S_N=((S_{N,i})_{i\in I},(S_{N,j})_{j\in J})$.
Then $S_N$ converges in distribution to the compound bi-free Poisson distribution determined by $\lambda$ and the distribution of $((a_i)_{i\in I}, (a_j)_{j\in J})$, as $N\rightarrow \infty$. \end{Theorem} \begin{proof} Let $n\in \mathbb{N}$, and $\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$. By Corollary 2.4 in \cite{MG}, the limit distribution of $S_N$, as $N\rightarrow \infty$, is characterized by \begin{align*} \kappa_\chi(b_{\chi(1)}, \cdots, b_{\chi(n)})&=\lim_{N\rightarrow \infty}N\varphi_N((a_{\chi(1)}\otimes p_{N})\cdots (a_{\chi(n)}\otimes p_{N}))\\ =&\lim_{N\rightarrow \infty}N\cdot \frac{\lambda}{N}\varphi(a_{\chi(1)}a_{\chi(2)}\cdots a_{\chi(n)})\\ =&\lambda \varphi(a_{\chi(1)}a_{\chi(2)}\cdots a_{\chi(n)}), \end{align*}where the two-faced family $((b_i)_{i\in I}, (b_j)_{j\in J})$ has the limit distribution. \end{proof} In the $C^*$-probability space case, taking the positivity of the state $\varphi$ on $\A$ into account, we conclude that the compound bi-free Poisson distribution $P_{\lambda, \mu_a}$ is a positive linear functional on the polynomial algebra. \begin{Corollary} Let $(\A, \varphi)$ be a $C^*$-probability space, and let $I$ and $J$ be disjoint index sets. Let $\lambda>0$ and let $a:=((a_i)_{i\in I}, (a_j)_{j\in J})$ be a two-faced family of self-adjoint random variables in $\A$. Then the compound bi-free Poisson distribution $P_{\lambda, \mu_a}:\mathbb{C}\langle X_k: k\in I\bigsqcup J\rangle \rightarrow \mathbb{C}$ is positive, where $\mathbb{C}\langle X_k: k\in I\bigsqcup J\rangle$ is a unital $*$-algebra with $X_k=X^*_k$, for $k\in I\bigsqcup J$. \end{Corollary} \begin{proof} For a two-faced family $c=((c_i)_{i\in I}, (c_j)_{j\in J})$ of self-adjoint random variables in $\A$, $\mu_c:\mathbb{C}\langle X_k: k\in I\bigsqcup J\rangle \rightarrow \mathbb{C}$ is positive.
In fact, for a polynomial $P=\sum_k \alpha_kX_k\in \mathbb{C}\langle X_k: k\in I\bigsqcup J\rangle$, we have $$\mu_c(P^*P)=\mu_c(\sum_{k, k'}\overline{\alpha_k}\alpha_{k'}X_{k}X_{k'})=\sum_{k,k'}\overline{\alpha_k}\alpha_{k'}\varphi(c_{k}c_{k'})=\varphi((\sum_{k}\alpha_kc_k)^*(\sum_{k}\alpha_kc_k))\ge 0.$$ Moreover, if $\mu_1$ and $\mu_2$ are positive linear functionals on $\mathbb{C}\langle X_k: k\in I\bigsqcup J\rangle$, then $\mu_1\boxplus\boxplus\mu_2$ is positive. Let $\mu_N$ be the distribution of the family $((a_i\otimes p_N)_{i\in I}, (a_j\otimes p_N)_{j\in J})$ of self-adjoint operators in $\A_N=\A\otimes_m \C_N$, the spatial tensor product of the $C^*$-algebras $\A$ and $\C_N$, in the proof of Theorem 2.3. Then $\mu_{S_N}=\mu_N^{\boxplus\boxplus N}$ is positive. By Theorem 2.3, $P_{\lambda, \mu_a}$ is the weak limit of $\mu_{S_N}$. This implies that the compound bi-free Poisson distribution $P_{\lambda, \mu_a}$ is positive. \end{proof} \section{Bi-free Infinitely Divisible Distributions} Bi-free infinite divisibility of the distribution of a pair $(a,b)$ of random variables was defined and studied in \cite{GHM} and \cite{MG}. We now generalize the concept to the more general case of two-faced families. As in free probability (see Section 4.5 in \cite{RS}), a bi-free infinitely divisible distribution can be approximated by compound bi-free Poisson distributions. \begin{Definition} When $X_k=X^*_k$ for $k\in I\bigsqcup J$, $\mathbb{C}\langle X_k: k\in I \bigsqcup J\rangle$ becomes a unital $*$-algebra. In this case, we use $\Sigma^+(I, J)$ to denote the set of all positive unital linear functionals on $\mathbb{C}\langle X_k: k\in I \bigsqcup J\rangle$.
\begin{enumerate} \item A distribution $\mu\in \Sigma^+(I, J)$ is bi-free infinitely divisible if for each $N\in \mathbb{N}$, there is a distribution $\mu_{1/N}\in \Sigma^+(I,J)$ such that $$\mu=\underbrace{\mu_{1/N}\boxplus\boxplus\mu_{1/N}\boxplus\boxplus\cdots\boxplus\boxplus\mu_{1/N}}_{N \text{ times}}.$$ In the language of random variables, we have the following equivalent definition. \item A two-faced family $((z_i)_{i\in I}, (z_j)_{j\in J})$ of self-adjoint random variables in a $*$-probability space $(\A, \varphi)$ has a bi-free infinitely divisible distribution if for each $N\in \mathbb{N}$, there exists a bi-free sequence of $N$ identically distributed two-faced families $\{((z_{N, i, n})_{i\in I}, (z_{N,j, n})_{j\in J}):n=1, 2, \cdots, N \}$ of self-adjoint random variables such that $((S_{N,i})_{i\in I}, (S_{N,j})_{j\in J})$ has the same distribution as $((z_i)_{i\in I}, (z_j)_{j\in J})$, where $S_{N,k}=\sum_{n=1}^Nz_{N, k, n}$ for $k\in I\bigsqcup J$. \end{enumerate} \end{Definition} \begin{Remark} There is a one-to-one correspondence between the set of distributions of two-faced families $((z_i)_{i\in I}, (z_j)_{j\in J})$ in some non-commutative probability space $(\A, \varphi)$ and the set $\Sigma(I, J)$ of all unital linear functionals on the unital polynomial algebra $\mathbb{C}(X_k:k\in I\bigsqcup J)$ in the non-commutative variables $\{X_k:k\in I\bigsqcup J\}$. In fact, for $z:=((z_i)_{i\in I}, (z_j)_{j\in J})$ in $(\A, \varphi)$, define $\nu_z:\mathbb{C}(X_k:k\in I\bigsqcup J)\rightarrow \mathbb{C}$ by $$\nu_z(P(X_k:k\in I\bigsqcup J))=\varphi(P(z_k:k\in I \bigsqcup J)).$$ It is obvious that $\nu_z\in \Sigma(I, J)$. Conversely, let $\nu\in \Sigma(I, J)$. Then $(\mathbb{C}(X_k:k\in I\bigsqcup J), \nu)$ is a non-commutative probability space, and $\nu$ is the distribution of $((X_i)_{i\in I}, (X_j)_{j\in J})$. Similarly, we can identify $\Sigma^+(I, J)$ with distributions of two-faced families of self-adjoint random variables.
Note that $\Sigma^+(I, J)$ is a convex set of linear functionals. Therefore, we can define $\lambda \nu_1 +(1-\lambda)\nu_2\in \Sigma(I, J)$, for $0\le \lambda \le 1$ and $\nu_1, \nu_2\in \Sigma(I, J)$. \end{Remark} Lemma 5.2 in \cite{GHM} states that if $a_1^*=a_1$, $a_2^*=a_2$, and $[a_1, a_2]=0$ for $a_1, a_2$ in a $C^*$-probability space $(\A, \varphi)$, and $p\in \A$ is a projection free from $\{a_1, a_2\}$, then there exists a compactly supported probability measure $\mu$ on $\mathbb{R}^2$ such that $$\kappa_{m,n}^\mu=\kappa_{m,n}^{p\A p}(\underbrace{pa_1p, \cdots, pa_1p}_{m \ \mathrm{ times}}, \underbrace{pa_2p, \cdots, pa_2p}_{n\ \mathrm{ times}}).$$ The following result gives a formula for computing the bi-free cumulants of a family compressed by a free projection, which is a bi-free analogue of Theorem 14.10 in \cite{NS}. \begin{Theorem} Let $z=((z_i)_{i\in I}, (z_j)_{j\in J})$ be a two-faced family of random variables in a non-commutative probability space $(\A, \varphi)$, and let $p$ be a projection ($p=p^2$) in $\A$ that is free from $\{z_k:k\in I \bigsqcup J\}$, with $\varphi(p)=\lambda\ne 0$. Then for a number $n\in \mathbb{N}$ and a function $\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$, we have $$\kappa_\chi^{p\A p}(pz_{\chi(1)}p, \cdots, pz_{\chi(n)}p)=\frac{1}{\lambda}\kappa_\chi(\lambda z_{\chi(1)}, \cdots, \lambda z_{\chi(n)}),$$ where $\kappa_\chi$ and $\kappa_\chi^{p\A p}$ are the bi-free cumulants of $(\A, \varphi)$ and $(p\A p, \varphi_p)$, respectively. \end{Theorem} \begin{proof}Let $\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$, $\chi^{-1}(I)=\{i_1<i_2<\cdots <i_k\}$ and $$\chi^{-1}(J)=\{i_{k+1}>i_{k+2}>\cdots >i_{n}\}.$$ As in Section 1, we define a permutation $s_\chi\in S_n$ by $s_\chi(j)=i_j$, for $j=1, 2, \cdots, n$.
The permutation $s_\chi $ defines a lattice isomorphism from $NC(n)$ onto $BNC(\chi)$ by $\pi \mapsto s_\chi\cdot\pi$, for $\pi\in NC(n)$, where $$s_\chi\cdot \pi=\{s_\chi\cdot V=\{s_\chi(t_1), s_\chi(t_2), \cdots, s_\chi(t_k)\}: V=\{t_1, t_2, \cdots, t_k\}\in \pi\}.$$ Thus, $s_\chi\cdot \pi$ is the partition corresponding to $\pi$ in the new partially ordered set $$\{s_\chi(1)\prec s_\chi(2)\prec\cdots \prec s_\chi(n)\}.$$ This implies that $\varphi_\pi(z_{\chi(s_\chi(1))}, z_{\chi(s_\chi(2))}, \cdots, z_{\chi(s_\chi(n))})=\varphi_{s_\chi\cdot \pi}(z_{\chi(1)}, z_{\chi(2)}, \cdots, z_{\chi(n)})$. Therefore, by $(1.1)$, we have \begin{align*} \kappa_n(z_{\chi(s_\chi(1))}, \cdots, z_{\chi(s_\chi(n))})=&\sum_{\pi\in NC(n)}\varphi_\pi(z_{\chi(s_\chi(1))}, \cdots, z_{\chi(s_\chi(n))})\mu_{NC}(\pi, 1_n)\\ =&\sum_{\pi\in NC(n)}\varphi_{s_\chi\cdot\pi}(z_{\chi(1)}, \cdots, z_{\chi(n)})\mu_{BNC(\chi)}(s_\chi\cdot\pi,1_n)\\ =&\sum_{\sigma\in BNC(\chi)}\varphi_\sigma(z_{\chi(1)}, \cdots, z_{\chi(n)})\mu_{BNC(\chi)}(\sigma,1_n)\\ =&\kappa_\chi(z_{\chi(1)}, \cdots, z_{\chi(n)}). \end{align*} By Theorem 14.10 in \cite{NS}, we get \begin{align*}\kappa_\chi^{p\A p}(pz_{\chi(1)}p, \cdots, pz_{\chi(n)}p)=&\kappa_n^{p\A p}(pz_{\chi(s_\chi(1))}p, \cdots, pz_{\chi(s_\chi(n))}p)\\ =&\frac{1}{\lambda}\kappa_n(\lambda z_{\chi(s_\chi(1))}, \cdots, \lambda z_{\chi(s_\chi(n))})\\ =&\frac{1}{\lambda}\kappa_{\chi}(\lambda z_{\chi(1)}, \cdots, \lambda z_{\chi(n)}). \end{align*} \end{proof} The following theorem characterizes the bi-free infinite divisibility of a distribution in $\Sigma^+(I,J)$ in terms of the existence of a bi-free additive convolution semigroup of distributions in $\Sigma^+(I, J)$ associated with the distribution. \begin{Theorem} Let $\nu\in \Sigma^+(I, J)$.
Then the distribution $\nu$ is bi-free infinitely divisible if and only if there is a semigroup $\{\nu_t: t\ge 0\}$ of distributions in $\Sigma^+(I, J)$ such that $\nu_{s+t}=\nu_s\boxplus\boxplus\nu_t$ for $s, t\ge 0$, $\nu_1=\nu$, $\nu_0=\delta_0$, and $\lim_{t\rightarrow 0}\nu_t=\delta_0$ weakly. \end{Theorem} \begin{proof} Let $\nu$ be a bi-free infinitely divisible distribution in $\Sigma^+(I,J)$. Then, for every $2\le n\in \mathbb{N}$, there is a two-faced family $z_{1/n}=((z_{i, 1/n})_{i\in I}, (z_{j,1/n})_{j\in J})$ of self-adjoint random variables in a $*$-probability space $(\A_{1/n}, \varphi_{1/n})$ such that $\nu_{1/n}^{\boxplus\boxplus n}=\nu$, where $\nu_{1/n}\in \Sigma^+(I, J)$ is the distribution of $z_{1/n}$. Therefore, for $\chi: \{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$, we have $\kappa_{\chi, \nu_{1/n}}=\frac{1}{n}\kappa_{\chi,\nu}$. By performing bi-free additive convolutions, we can get a distribution $\nu_{r}\in \Sigma^+(I, J)$ such that $\kappa_{\chi, \nu_r}=r\kappa_{\chi, \nu}$, for every positive rational number $r$. For a real number $t>0$, there is a sequence $\{r_m:m=1, 2, \cdots\}$ of positive rational numbers such that $\lim_{m\rightarrow \infty}r_m=t$, and we define the distribution $\nu_t$ as follows. Whenever $n\in \mathbb{N}$ and $\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$, set the moment \begin{align*} m_{\chi, \nu_t}=\nu_t(X_{\chi(1)}X_{\chi(2)}\cdots X_{\chi(n)}):=&\lim_{m\rightarrow \infty}m_{\chi, \nu_{r_m}}=\lim_{m\rightarrow\infty}\sum_{\pi\in BNC(\chi)}\kappa_{\chi, \nu_{r_m}, \pi}\\ =&\lim_{m\rightarrow \infty}\sum_{\pi\in BNC(\chi)}r_m^{|\pi|}\kappa_{\chi, \nu, \pi}=\sum_{\pi\in BNC(\chi)}t^{|\pi|}\kappa_{\chi, \nu, \pi}. \end{align*} Define $\nu_t(1)=1$. Then $\nu_t\in \Sigma^+(I, J)$ and $\kappa_{\chi, \nu_t}=t\kappa_{\chi, \nu}$. Define $\nu_0=\delta_0$. Then $\{\nu_t:t\ge 0\}$ is a semigroup of distributions in $\Sigma^+(I, J)$ with respect to the bi-free additive convolution.
Moreover, $\nu_t\rightarrow \delta_0$ weakly, as $t\rightarrow 0+$, since $\lim_{t\rightarrow 0}m_{\chi, \nu_t}=0$, for $n\in \mathbb{N}$ and $\chi:\{1, 2, \cdots, n\}\rightarrow I\bigsqcup J$. Conversely, if $\{\nu_t: t\ge 0\}$ is such a semigroup, it is obvious that $\nu_1$ is bi-free infinitely divisible. \end{proof} The following theorem gives an approximation of bi-free infinitely divisible distributions by compound bi-free Poisson distributions, which is a bi-free analogue of Theorem 4.5.5 in \cite{RS}. \begin{Theorem} A distribution $\nu\in \Sigma^+(I, J)$ is bi-free infinitely divisible if and only if there is a sequence $\{P_{\lambda_n,\nu_n}:n=1,2,\cdots\}$ of compound bi-free Poisson distributions determined by $\lambda_n>0$ and $\nu_n\in \Sigma^+(I, J)$ such that $P_{\lambda_n,\nu_n}\rightarrow \nu$ weakly, as $n\rightarrow \infty$. \end{Theorem} \begin{proof} If $P_{\lambda, \nu}$ is a compound bi-free Poisson distribution with $\nu\in \Sigma^+(I, J)$, then for $n\in \mathbb{N}$ and $\chi:\{1,2,\cdots, n\}\rightarrow I\bigsqcup J$, we have $$\kappa_{\chi,P_{\lambda, \nu}}=\lambda m_{\chi,\nu}=n(\frac{\lambda}{n}m_{\chi, \nu})=n\kappa_{\chi, P_{\lambda/n, \nu}}.$$ It follows that $P_{\lambda, \nu}=(P_{\lambda/n, \nu})^{\boxplus\boxplus n}$ and $P_{\lambda/n, \nu}\in \Sigma^+(I, J)$. Therefore, $P_{\lambda, \nu}$ is bi-free infinitely divisible. Furthermore, if $\nu_k\rightarrow \nu$ weakly in $\Sigma^+(I, J)$, and $\nu_k$ is bi-free infinitely divisible for each $k\ge 1$, then for $n\in \mathbb{N}$ and $\chi:\{1,2,\cdots, n\}\rightarrow I\bigsqcup J$, we have $$\kappa_{\chi,\nu}=\lim_{k\rightarrow \infty}\kappa_{\chi,\nu_k}=n\lim_{k\rightarrow \infty}\frac{1}{n}\kappa_{\chi, \nu_k}=n\lim_{k\rightarrow \infty}\kappa_{\chi, \nu_{k, 1/n}},\eqno (3.1) $$ where $\nu_{k, 1/n}\in \Sigma^+(I, J)$ is such that $(\nu_{k, 1/n})^{\boxplus\boxplus n}=\nu_k$, for $k=1, 2, \cdots$. Define $\nu_{1/n}$ as the weak limit of $\nu_{k,1/n}$, as $k\rightarrow \infty$.
This weak limit exists and is in $\Sigma^+(I, J)$, because of $(3.1)$. The equation (3.1) also shows that $\kappa_{\chi, \nu}=n\kappa_{\chi, \nu_{1/n}}$. Therefore, $\nu=\nu_{1/n}^{\boxplus\boxplus n}$, that is, $\nu$ is bi-free infinitely divisible. Conversely, if $\nu\in \Sigma^+(I, J)$ is bi-free infinitely divisible, then $\nu$ can be extended to a semigroup $\{\nu_t\in \Sigma^+(I, J):t\ge 0\}$ with $\nu_1=\nu$, by Theorem 3.4. Thus, we can define a sequence $\{P_{k, \nu_{1/k}}: k=1, 2, \cdots\}$ of bi-free compound Poisson distributions. For $m\in \mathbb{N}$ and $\chi:\{1,2,\cdots, m\}\rightarrow I\bigsqcup J$, we then have \begin{align*} \lim_{n\rightarrow\infty}\kappa_{\chi, P_{n, \nu_{1/n}}}=&\lim_{n\rightarrow \infty}nm_{\chi, \nu_{1/n}}=\lim_{n\rightarrow \infty}n\sum_{\sigma\in BNC(\chi)}\kappa_{\chi, \nu_{1/n}, \sigma}\\ =&\lim_{n\rightarrow \infty}n\sum_{\sigma\in BNC(\chi)}\prod_{V\in\sigma}\frac{1}{n}\kappa_{\chi|_{V},\nu}\\ =&\lim_{n\rightarrow \infty}n\sum_{\sigma\in BNC(\chi)}\frac{1}{n^{|\sigma|}}\prod_{V\in \sigma}\kappa_{\chi|_{V},\nu}=\kappa_{\chi, \nu}, \end{align*} since only the terms with $|\sigma|=1$ survive in the limit. Hence $P_{n, \nu_{1/n}}\rightarrow \nu$ weakly, as $n\rightarrow \infty$. \end{proof} \section{Random Bi-Matrix Models} The goal of this section is to construct random bi-matrix families (see Definition 1.1 for the definition) to approximate, in distribution, a bi-free compound Poisson distribution $P(\lambda, \mu_a)$, when $a=((a_i)_{i\in I}, (a_{j})_{j\in J})$ has an almost sure random matrix model. Our result generalizes Example 4.15 in \cite{PS} to a much more general setting. We first recall a concept from \cite{HP}.
\begin{Definition}[Page 125 in \cite{HP}, Page 19 in \cite{MS}] An $n\times n$ complex self-adjoint random matrix $X(n)=(X_{ij}(n))_{i,j=1}^n$ is called a GUE random matrix (GUE stands for Gaussian unitary ensemble) if \begin{enumerate} \item $\{\Re X_{ij}(n): 1\le i\le j\le n\}\cup\{\Im X_{ij}(n): 1\le i<j\le n\}$ is an independent family of Gaussian random variables, and \item $E(X_{ij}(n))=0$, for all $i, j$, $E((X_{ii}(n))^2)=\frac{1}{n}$, for all $i$, and $$E((\Re X_{ij}(n))^2)=E((\Im X_{ij}(n))^2)=\frac{1}{2n},$$ for $1\le i<j\le n$. \end{enumerate} \end{Definition} A tuple $\{X(1,n), ..., X(N,n)\}$ of $n\times n$ random matrices on a probability space $(\Omega, P)$ has {\sl an almost sure limit in distribution}, if there is a tuple $(a_1,..., a_N)$ in a non-commutative probability space $(\A, \varphi)$ such that for almost all $\omega\in \Omega$, $\{X(1,n,\omega), ..., X(N,n,\omega)\}\subset (M_n(\mathbb{C}), tr_n)$ converges in distribution to $(a_1,..., a_N)$, as $n\rightarrow \infty$. If, furthermore, $\{a_1, ..., a_N\}$ is a free family of random variables, we say that {\sl $X(1,n), ..., X(N,n)$ are almost surely asymptotically free} (Section 4.1 in \cite{MS}). In this case, we call $\{X(1,n),..., X(N,n)\}$ an almost sure random matrix model of $(a_1,..., a_N)$. The following theorem gives a random matrix model for a compound free Poisson distribution determined by $\lambda>0$ and a tuple of random variables $a=(a_1,..., a_N)$, generalizing the work on random matrix models of compound free Poisson distributions of single random variables in Section 4.4 of \cite{HP} to the multi-dimensional case. \begin{Theorem} Let $a_1, a_2, \cdots, a_N$ be self-adjoint elements in a $C^*$-probability space $(\A,\varphi)$, and let $\lambda>0$ be a positive number.
If $\{a_1, ..., a_N\}$ has an almost sure random matrix model, then there are a subsequence $\alpha(n)$ of natural numbers and a sequence $\{Y(n, i):i=1, 2, \cdots, N\}$ of $\alpha(n)\times \alpha(n)$ random matrices with the following property: when $n\rightarrow \infty$, the sequence converges in distribution to the multidimensional compound free Poisson distribution determined by $\lambda$ and the distribution of $\{a_i\}_{i=1}^N.$ \end{Theorem} \begin{proof} Let $\{B(n, 1),..., B(n,N)\}$ be an almost sure random matrix model of $(a_1, ..., a_N)$. We can choose $B(n, i)$ to be an $n\times n$ random matrix in a probability space $(\Omega,P)$, for $i=1, ..., N$, and $n=1,2,...$. When $\lambda\le 1$, there is a number $p(n)\in \mathbb{N}$, for each $n\in \mathbb{N}$, such that $p(n)\le n$ and $\lim_{n\rightarrow \infty}\frac{p(n)}{n}=\lambda$. Indeed, we can choose $p(n)$ as follows. \begin{enumerate} \item When $\lambda <1$, let $p(n)=\lambda n-\delta(n)\in\mathbb{N}$, where $0\le \delta(n)<1$. Then $\lim_{n\rightarrow \infty}\frac{p(n)}{n}=\lambda.$ \item When $\lambda=1$, let $p(n)=n$. \end{enumerate} Choose an $n\times n$ standard self-adjoint Gaussian matrix $X(n)=(X_{ij}(n))_{n\times n}$, which is independent from $\{\widetilde{B}(n,i): i=1, 2,..., N\}$, where $\widetilde{B}(n,i)=B(p(n), i)\oplus 0_{n-p(n)}$. For $m\in \mathbb{N}$, and $\chi:\{1, 2, \cdots, m\}\rightarrow \{1, 2, \cdots, N\}$, we have \begin{align*} &tr_{n}(\widetilde{B}(n,\chi(1), \omega)\cdots \widetilde{B}(n,\chi(m), \omega))\\ =&tr_{n}(B(p(n), \chi(1), \omega)\cdots B(p(n), \chi(m), \omega)\oplus 0_{n-p(n)})\\ =&\frac{p(n)}{n}tr_{p(n)}(B(p(n), \chi(1), \omega)\cdots B(p(n), \chi(m), \omega))\\ \rightarrow &\lambda \varphi(a_{\chi(1)}a_{\chi(2)}\cdots a_{\chi(m)}), \end{align*} as $n\rightarrow \infty$, for almost all $\omega\in \Omega$.
Now we show that $\{X(n)\widetilde{B}(n,i)X(n):i=1, 2, \cdots, N\}$ converges in distribution to the multidimensional compound free Poisson distribution with parameters $\lambda$ and the distribution $\mu_a$ of $a:=\{a_1, a_2, \cdots, a_N\}$. By Theorem 5 of Section 4.2 in \cite{MS}, $X(n)$ and $\{\widetilde{B}(n, i): i=1, 2, \cdots, N\}$ are almost surely asymptotically free, and the limit distribution of $X(n)$ is the standard semicircular element $s$. Hence, when $n\rightarrow\infty$, $\{X(n)\widetilde{B}(n,i)X(n):i=1, 2, \cdots, N\}$ converges in distribution to $\{sb_is:i=1, 2, \cdots, N\}$, where $s,b_1, \cdots, b_N$ are in a $C^*$-probability space, say, $(\A, \varphi)$, such that \begin{enumerate} \item $s$ is a standard semicircular element, \item $\{b_i=b_i^*: i=1,2, \cdots, N\}$ has the almost sure limit distribution of $\{\widetilde{B}(n, 1), \cdots, \widetilde{B}(n, N)\}$, \item $\{s\}$ and $\{b_i:i=1, 2, \cdots, N\}$ are free. \end{enumerate} By Example 12.19 in \cite{NS}, for $1<m\in \mathbb{N}$, $\chi:\{1, 2, \cdots, m\}\rightarrow \{1, 2, \cdots, N\}$, $$\kappa_m(sb_{\chi(1)}s,\cdots, sb_{\chi(m)}s)=\varphi(b_{\chi(1)}\cdots b_{\chi(m)})=\lambda\varphi(a_{\chi(1)}\cdots a_{\chi(m)}).$$ This shows that $\{sb_1s, sb_2s, \cdots, sb_Ns\}$ has the desired multidimensional compound free Poisson distribution. For $\lambda>1$, let $\lambda=k+\delta$, where $k\in \mathbb{N}$ and $0\le \delta<1$. Let $$\{X(n)\widetilde{B}(n,i)X(n):i=1, 2, \cdots, N\}, \{X(n)\widetilde{C}(n,i)X(n):i=1, 2, \cdots, N\}$$ be random matrix models of the multidimensional free Poisson distributions $P(1, \mu_a)$ and $P(\delta, \mu_{a})$, respectively, where $P(\lambda, \mu_a)$ is the multidimensional free Poisson distribution determined by the parameter $\lambda>0$ and the distribution $\mu_a$ of the tuple $a=(a_1, a_2, \cdots, a_N)$.
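The case $m=1$, excluded from the cited example, can be checked by hand (a short verification, using only the freeness of $s$ and $b_i$, $\varphi(s)=0$ and $\varphi(s^2)=1$):
$$\kappa_1(sb_is)=\varphi(sb_is)=\varphi(s^2)\varphi(b_i)=\varphi(b_i)=\lambda\varphi(a_i),$$
which agrees with the first cumulant of the desired compound free Poisson distribution.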
Furthermore, we place the two families $\{\widetilde{B}(n,i):i=1, 2, \cdots, N\}\subset M_{n}(\mathbb{C})$ and $\{\widetilde{C}(n,i):i=1, 2, \cdots, N\}\subset M_{n}(\mathbb{C})$ into the tensor product algebra $M_{n^{2}}(\mathbb{C})=M_{n}(\mathbb{C})\otimes M_{n}(\mathbb{C})$ so that the two families are tensor-independent, and, therefore, $\{\widetilde{B}(n,i), \widetilde{C}(n,i): i=1, 2, \cdots, N\}$ has an almost sure limit distribution, as $n\rightarrow \infty$. Let $\widetilde{X}(n)$ be an $n^{2}\times n^{2}$ standard self-adjoint Gaussian random matrix, which is independent from $\{\widetilde{B}(n,i), \widetilde{C}(n,i): i=1, 2, \cdots, N\}$, for $n\in \mathbb{N}$. Then $\{\widetilde{X}(n)\widetilde{B}(n,i)\widetilde{X}(n):i=1, 2, \cdots, N\}$ and $\{\widetilde{X}(n)\widetilde{C}(n,i)\widetilde{X}(n):i=1, 2, \cdots, N\}$ are random matrix models for the multidimensional free Poisson distributions $P(1, \mu_a)$ and $P(\delta, \mu_{a})$, respectively. By Theorem 5 in Section 4.2 in \cite{MS}, $\{\widetilde{X}(n)\}$ and $\{\widetilde{B}(n,i), \widetilde{C}(n,i): i=1, 2, \cdots, N\}$ are almost surely asymptotically free. It implies that $$\{\widetilde{X}(n)\widetilde{B}(n,i)\widetilde{X}(n), \widetilde{X}(n)\widetilde{C}(n,i)\widetilde{X}(n): i=1, 2, \cdots, N\} \subset (M_{n^{2}}(L^{\infty-}(\Omega, P)), E\circ tr)$$ has an almost sure limit distribution as $n\rightarrow \infty$. Let $l=\max\{2N, k+1\}$, $D_{n,i}=\widetilde{X}(n)\widetilde{B}(n,i)\widetilde{X}(n)$, for $i=1, 2, \cdots, N$, $D_{n,N+i}=\widetilde{X}(n)\widetilde{C}(n,i)\widetilde{X}(n)$, for $i=1, 2, \cdots, N$, and $D_{n, i}=I$, the identity matrix of the appropriate size, for $2N<i\le l$, if $l>2N$.
By the above discussions, the family $$\mathbf{D}(n)=\{D_{n, i}: i=1, 2, \cdots, l\}\subset (M_{n^{2}}(L^{\infty-}(\Omega, P)), E\circ tr)$$ converges in distribution to $\{sb_1s, \cdots, sb_Ns, sc_1s, \cdots, sc_Ns, 1\}$ almost surely, where $\{sc_1s, \cdots, sc_Ns\}$ is the tuple of limit random variables of $\{\widetilde{X}(n)\widetilde{C}(n, i)\widetilde{X}(n):i=1, 2, \cdots, N\}$. Moreover, $$\lim_{n\rightarrow \infty}tr(D_{n,i}^{2m})\le \max\{2^{2m}\varphi((sb_is)^{2m}), 2^{2m}\varphi((sc_is)^{2m}), 1:i=1, 2, \cdots, N\}$$ $$\le \max\{2\|sb_is\|, 2\|sc_is\|, 1: i=1, 2, \cdots, N\}^{2m}, \quad a.s.,$$ for $i=1, 2, \cdots, l$. Therefore, we can choose $D>0$ such that, for $m=1, 2, \cdots$, $$\sup\{tr(D_{n, i}^{2m}): n=1, 2, \cdots, i=1, 2, \cdots, l\}\le D^{2m},\quad a.s. $$ Let $\mathbf{U}_n=\{U_{n,i}:i=1, 2, \cdots, l\}$ be independent unitary random matrices with the Haar law, independent from $\mathbf{D}(n)$. By Theorem 4.5.10 in \cite{AGZ} and the discussion at the end of Section 4.3 in \cite{MS} (Page 110 of \cite{MS}), $$\{U_{n, 1}, U_{n, 1}^*, \cdots, U_{n, l}, U_{n, l}^*, \widetilde{X}(n)\widetilde{B}(n, i)\widetilde{X}(n),\widetilde{X}(n)\widetilde{C}(n, i)\widetilde{X}(n), i=1, 2, \cdots, N, I_{n^{2}} \}$$ converges in distribution to $\{u_1, u_1^*, \cdots, u_{l}, u_{l}^*, sb_1s, \cdots, sb_Ns, sc_1s, \cdots, sc_Ns, 1\}$, as $n\rightarrow \infty$, $$\{u_1, u_1^*\}, \cdots, \{u_{l}, u_{l}^*\}, \{sb_is, sc_is, 1: i=1, 2, \cdots, N\} $$ are free, and $u_1, u_2, \cdots, u_{l}$ are Haar unitaries. Let $$Y_{n,j}=\sum_{i=1}^kU_{n, i}\widetilde{X}(n)\widetilde{B}(n, j)\widetilde{X}(n)U_{n,i}^*+U_{n, k+1}\widetilde{X}(n)\widetilde{C}(n, j)\widetilde{X}(n)U_{n,k+1}^*,$$ for $j=1, 2, \cdots, N$. Then $\{Y_{n, 1}, \cdots, Y_{n, N} \}$ converges in distribution to $\{y_1, \cdots, y_N\}$, where $$y_j=\sum_{i=1}^ku_isb_jsu_i^*+u_{k+1}sc_jsu_{k+1}^*,$$ for $j=1, 2, \cdots, N$.
We shall show that $\{y_1, \cdots, y_N\}$ has the multidimensional compound free Poisson distribution determined by $\lambda$ and the distribution $\mu_a$ of the tuple $a=\{a_1, a_2, \cdots, a_N\}$. By the discussion before Theorem 9 in Section 4.3 of \cite{MS}, we have the following result. If $\{u_1, u_1^*\}, \{u_2, u_2^*\}$, and $\{d_1, \cdots, d_m\}$ are free, and $u_1$ and $u_2$ are Haar unitaries, then $$\{u_1d_1u_1^*, \cdots, u_1d_mu_1^*\},\ \{u_2d_1u_2^*, \cdots, u_2d_mu_2^*\}$$ are free. In fact, for polynomials $P_1, Q_1, \cdots, P_r, Q_r$ such that $$\varphi(P_i(u_1d_1u_1^*, \cdots, u_1d_mu_1^*))=\varphi(Q_i(u_2d_1u_2^*, \cdots, u_2d_mu_2^*))=0,$$ for $i=1, 2, \cdots, r$, we have $$\varphi(P_i(u_1d_1u_1^*, \cdots, u_1d_mu_1^*))=\varphi(u_1(P_i(d_1, \cdots, d_m))u_1^*)=\varphi(P_i(d_1, \cdots, d_m))=0.$$ Similarly, $\varphi(Q_i(d_1, \cdots, d_m))=0$, for $i=1, 2, \cdots, r$. Therefore, \begin{align*} &\varphi(P_1(u_1d_1u_1^*, \cdots, u_1d_mu_1^*)Q_1(u_2d_1u_2^*, \cdots, u_2d_mu_2^*)\\ \cdots &P_r(u_1d_1u_1^*, \cdots, u_1d_mu_1^*)Q_r(u_2d_1u_2^*, \cdots, u_2d_mu_2^*))\\ =&\varphi(u_1P_1(d_1, \cdots, d_m)u_1^*u_2Q_1(d_1, \cdots, d_m)u_2^*\\ \cdots& u_1P_r(d_1, \cdots, d_m)u_1^*u_2Q_r(d_1, \cdots, d_m)u^*_2)=0. \end{align*} It implies that $$\{u_1sb_1su_1^*, \cdots, u_1sb_Nsu_1^*\}, \cdots, \{u_ksb_1su_k^*, \cdots, u_ksb_Nsu_k^*\}, \{u_{k+1}sc_1su_{k+1}^*, \cdots, u_{k+1}sc_Nsu_{k+1}^*\}$$ are free. Then for $m\in \mathbb{N}, \chi:\{1, 2, \cdots, m\}\rightarrow \{1, 2, \cdots, N\}$, we have \begin{align*} \kappa_m(y_{\chi(1)},\cdots, y_{\chi(m)})=&\sum_{i=1}^k\kappa_m(u_isb_{\chi(1)}su_i^*, \cdots, u_isb_{\chi(m)}su_i^*)\\ +&\kappa_m(u_{k+1}sc_{\chi(1)}su_{k+1}^*, \cdots, u_{k+1}sc_{\chi(m)}su_{k+1}^*)\\ =&(k+\delta)\varphi(a_{\chi(1)}\cdots a_{\chi(m)})=\lambda\varphi(a_{\chi(1)}\cdots a_{\chi(m)}).
\end{align*} \end{proof} Recall from \cite{DV}, \cite{VDN}, and \cite{PS1} that two elements $a$ and $b$ in a non-commutative probability space $(\A, \varphi)$ are classically independent (or tensor-independent) if $\varphi(a^{p_1}b^{q_1}\cdots a^{p_n}b^{q_n})=\varphi(a^{p})\varphi(b^{q})$, where $p=\sum_{i=1}^np_i$, $q=\sum_{i=1}^nq_i$, and $p_1, q_1, ..., p_n, q_n\ge 0$ are integers. More generally, random variables $a_1, a_2, \cdots, a_N$ are classically independent if, for $n\in \mathbb{N}$, $\chi:\{1, 2, \cdots, n\}\rightarrow \{1, 2, \cdots, N\}$, and $\ker\chi$, the partition of $\{1, 2, \cdots, n\}$ defined by $p\sim_{\ker\chi}q$ if and only if $\chi(p)=\chi(q)$, for $1\le p, q\le n$, we have $$\varphi(a_{\chi(1)}\cdots a_{\chi(n)})=\varphi_{\ker\chi}(a_{\chi(1)}\cdots a_{\chi(n)})=\prod_{V\in \ker\chi}\varphi(a_{\chi(V)}^{|V|}),$$ where $\chi(V)$ is the common value of $\chi$ when restricted to $V$. \begin{Corollary} Let $a_1,..., a_N$ be self-adjoint random variables in a $C^*$-probability space $(\A, \varphi)$, and $\lambda$ a positive real number. If $a_1,..., a_N$ are classically independent in $(\A, \varphi)$, then the compound free Poisson distribution $\pi_{\lambda, \mu_a}$ determined by $\lambda$ and the distribution $\mu_a$ of the tuple $a=(a_1,..., a_N)$ has a random matrix model. \end{Corollary} \begin{proof} By the proof of Proposition 4.4.9 of \cite{HP}, for each $n\in \mathbb{N}$, there are real numbers $\xi_1(n,i)\le \cdots \le \xi_n(n,i)$ in the spectrum $\sigma(a_i)$ of $a_i$ such that the sequence $$\{B(n,i)=\mathbf{Diag}(\xi_1(n, i), \xi_2(n,i), \cdots, \xi_n(n,i)): n=1, 2, \cdots\}\subset (M_{n}(\mathbb{C}), tr)$$ converges in distribution to $a_i$ (see the bottom of Page 169 of that book, also Remark 22.27 in \cite{NS}), for $i=1, 2, \cdots, N$, where $tr$ is the normalized trace on the matrix algebra $M_n(\mathbb{C})$.
Place $B(n,1), B(n,2)$, $\cdots, B(n,N)$ into the tensor product matrix algebra $M_{n^N}(\mathbb{C})=M_n(\mathbb{C})\otimes M_n(\mathbb{C})\otimes\cdots \otimes M_n(\mathbb{C})$ ($N$ factors) so that $$B(n,1), B(n,2), \cdots, B(n,N)$$ are mutually tensor-independent in the tensor product algebra. It implies that $$\{B(n,1), B(n,2), \cdots, B(n,N)\}$$ converges in distribution to $\{a_1, \cdots, a_N\}$, as $n\rightarrow \infty$. We have thus proved that $\{a_1, ..., a_N\}$ has an almost sure random matrix model. Now the conclusion follows from Theorem 4.2. \end{proof} Combining Theorem 4.2 and the work in Section 4 of \cite{PS}, we can get the following bi-random matrix model for a class of compound bi-free Poisson distributions, which is an illustrative example of the spirit of Section 4 of \cite{PS}: one can generalize results on random matrix models to bi-random matrix models. \begin{Theorem} Let $\lambda>0$ and let $a:=\{a_{l,1}, \cdots, a_{l,N},a_{r,1}, \cdots, a_{r,M} \}$ be a finite sequence of self-adjoint elements in a $C^*$-probability space $(\A, \varphi)$. If the tuple $a$ has an almost sure random matrix model, and $a_{l,i}a_{r,j}=a_{r,j}a_{l,i}$ for $i=1, ..., N$ and $j=1, ..., M$, then there exist a subsequence $\alpha(n)$ of natural numbers, and, for each $n\in \mathbb{N}$, an $\alpha(n)\times \alpha(n)$ random two-faced family $Z(n):=((Z_{n, i, l})_{1\le i\le N}, (Z_{n,j, r})_{1\le j\le M})$ of matrices such that $Z(n)$ converges in distribution, as $n\rightarrow \infty$, to the compound bi-free Poisson distribution determined by $\lambda$ and $\mu_a$, the distribution of $a$.
\end{Theorem} \begin{proof} By the proof of Theorem 4.2, there exist a subsequence $\tilde{\alpha}(n)$ of natural numbers and a sequence $W_n:=\{W_{n,1}, W_{n, 2}, \cdots, W_{n, N+M}\}_{n=1}^\infty$ of $\tilde{\alpha}(n)\times \tilde{\alpha}(n)$ random matrices such that $$\{W_{n, i}\}_{i=1}^{N+M}\subset (M_{\tilde{\alpha}(n)}(L^{\infty-}(\Omega, P)), E\circ tr)$$ converges in distribution to the multidimensional compound free Poisson distribution $P(\lambda, \mu_a)$, which means that, with $c_i=a_{l,i}$ for $i=1,..., N$ and $c_{N+j}=a_{r,j}$ for $j=1,..., M$, for $m\in \mathbb{N}$ and $\alpha:\{1, 2, \cdots, m\}\rightarrow \{1, 2, \cdots, N+M\}$, we have $$\lim_{n\rightarrow\infty}\kappa_m(W_{n,\alpha(1)}, \cdots, W_{n,\alpha(m)})=\lambda\varphi(c_{\alpha(1)}\cdots c_{\alpha(m)}).$$ Define $X_{n, i, l}=L(W_{n, i})$ and $X_{n, i, r}=R(W_{n,i})$, for $i=1, 2, \cdots, N+M$. We need the following result from \cite{PS}. \vskip 3mm \textit{ Remark 3.2 in \cite{PS}. If $[T_{ij}], [S_{ij}]\in M_N(\L(\X))$ are such that $T_{ij}S_{km}=S_{km}T_{ij}$, for all $i,j,k,m$, then it is elementary to verify that $L([T_{ij}])R([S_{ij}])=R([S_{ij}])L([T_{ij}])$}. \vskip 3mm By Remark 3.5 in \cite{PS}, for a non-commutative probability space $(\A, \varphi)$, $\A\subset \L(\A)$ by left multiplication. It follows from Remarks 3.2 and 3.5 in \cite{PS} that $$L([T_{ij}])R([S_{ij}])=R([S_{ij}])L([T_{ij}]),$$ if $[T_{ij}], [S_{ij}]\in M_N(\A)\subset M_N(\L(\A))$, where $\A$ is a commutative algebra. It follows that $$X_{n, i, l}X_{n, j, r}=X_{n, j, r}X_{n, i, l}, \quad i, j=1, 2, \cdots, N+M,\eqno (4.1)$$ since the entries of the matrices $W_{n,i}$, for $i=1, 2, ..., M+N$, are elements in the commutative algebra $L^{\infty-}(\Omega, P)$. Let $m\in \mathbb{N}$, $\chi: \{1, ..., m\}\rightarrow \{l,r\}$, and $\alpha:\{1,..., m\}\rightarrow \{1, ..., N+M\}$.
We define a permutation $s\in S_{m}$ by the equations $$\chi^{-1}(\{l\})=\{s(1)<\cdots<s(m-k)\}, \quad \chi^{-1}(\{r\})=\{s(m-k+1)<\cdots <s(m)\},$$ where $k=|\chi^{-1}(\{r\})|$. We prove, by induction on $k$, $$\lim_{n\rightarrow \infty}\kappa_{\chi}(X_{n, \alpha(1),\chi(1)}, \cdots, X_{n, \alpha(m), \chi(m)})=\lambda\varphi(c_{\alpha(s(1))}\cdots c_{\alpha(s(m))}).\eqno (4.2)$$ Suppose $|\chi^{-1}(\{r\})|=0$, that is, $\chi(i)=l$, for all $i=1, 2, \cdots, m$. In this case, $s$ is the identity permutation of $S_m$, that is, $s(1)=1, ..., s(m)=m$. By Lemma 3.7 in \cite{PS}, in this case, the distribution of $\{X_{n, \alpha(1),\chi(1)}, ..., X_{n,\alpha(m),\chi(m)}\}$ in $(\L(M_{\alpha(n)}(L^{\infty-}(\Omega, P))), \Phi_n:=tr\circ E_{\L(M_{\alpha(n)}(L^{\infty-}(\Omega, P)))})$ is equal to that of $\{W_{n, \alpha(1)}, \cdots, W_{n, \alpha(m)}\}$ in $(M_{\alpha(n)}(L^{\infty-}(\Omega, P)), E\circ tr)$. It follows from the discussion in the first paragraph of this proof that $$\lim_{n\rightarrow \infty}\kappa_{\chi}(X_{n, \alpha(1),\chi(1)}, \cdots, X_{n, \alpha(m),\chi(m)})=\lim_{n\rightarrow \infty}\kappa_m(W_{n, \alpha(1)}, \cdots, W_{n, \alpha(m)})=\lambda\varphi(c_{\alpha(1)}\cdots c_{\alpha(m)}).$$ Suppose (4.2) holds true when $|\chi^{-1}(\{r\})|=k$. Now we prove $(4.2)$ when $|\chi^{-1}(\{r\})|=k+1$. We adopt some ideas from Example 4.15 in \cite{PS}. Let $\hat{\chi}=\chi\circ s$, where $s$ is the permutation defined before $(4.2)$. Then $\hat{\chi}^{-1}(\{l\})=\{1, 2, \cdots, m-k-1\}$ and $\hat{\chi}^{-1}(\{r\})=\{m-k, \cdots, m\}$. Define a lattice isomorphism $\rho:BNC(\chi)\rightarrow BNC(\hat{\chi}), \rho:\pi\mapsto s^{-1}\circ \pi$. Then $\mu_{BNC}(\pi, 1_\chi)=\mu_{BNC}(s^{-1}\circ \pi, 1_{\hat{\chi}})$. It follows by $(4.1)$ that $$\Phi_n(X_{n, \alpha(1),\chi(1)}, ..., X_{n,\alpha(m),\chi(m)})=\Phi_{n}(X_{n,\alpha(s(1)), \hat{\chi}(1)}, ...,X_{n, \alpha(s(m)), \hat{\chi}(m)}).\eqno (4.3)$$ For a block $V$ of a partition $\pi\in BNC(\chi)$, we have $s^{-1}\circ V\in s^{-1}\circ \pi\in BNC(\hat{\chi})$.
Without loss of generality, we can assume that $V=\{p_1\prec_{\chi}\cdots\prec_{\chi}p_q\}$ is a $\chi$-ordered interval of the $\chi$-ordered set $$\{s(1)\prec_{\chi}\cdots \prec_{\chi}s(m-k-1)\prec_{\chi}s(m)\prec_{\chi}\cdots \prec_{\chi}s(m-k)\}.$$ Then $s^{-1}\circ V=\{s^{-1}(p_1)\prec_{\hat{\chi}}\cdots \prec_{\hat{\chi}}s^{-1}(p_q)\}$ is a $\hat{\chi}$-ordered interval of the $\hat{\chi}$-ordered set $$\{1\prec_{\hat{\chi}} 2\prec_{\hat{\chi}} \cdots \prec_{\hat{\chi}}m-k-1\prec_{\hat{\chi}} m \prec_{\hat{\chi}} \cdots \prec_{\hat{\chi}}m-k\}.$$ By $(4.3)$, we get $$\Phi_{n, V}(X_{n, \alpha(1),\chi(1)}, ..., X_{n,\alpha(m),\chi(m)})=\Phi_{n, s^{-1}\circ V}(X_{n,\alpha(s(1)), \hat{\chi}(1)}, ...,X_{n, \alpha(s(m)), \hat{\chi}(m)}).$$ It implies that $$\Phi_{n, \pi}(X_{n, \alpha(1), \chi(1)}, ..., X_{n, \alpha(m), \chi(m)})=\Phi_{n, s^{-1}\circ \pi}(X_{n, \alpha(s(1)),\hat{\chi}(1)}, ..., X_{n, \alpha(s(m)), \hat{\chi}(m)}),$$ for $\pi\in BNC(\chi)$. We thus have $$\kappa_{\chi}(X_{n, \alpha(1), \chi(1)}, ..., X_{n, \alpha(m), \chi(m)}) = \sum_{\pi\in BNC(\chi)}\Phi_{n,\pi}(X_{n, \alpha(1), \chi(1)},..., X_{n, \alpha(m), \chi(m)})\mu_{BNC}(\pi, 1_\chi)$$ $$=\sum_{\pi\in BNC(\hat{\chi})}\Phi_{n, \pi}(X_{n, \alpha(s(1)),\hat{\chi}(1)},..., X_{n, \alpha(s(m)), \hat{\chi}(m)})\mu_{BNC}(\pi, 1_{\hat{\chi}}). \eqno (4.4)$$ We then define $\tilde{\chi}:\{1, 2, \cdots, m\}\rightarrow \{l,r\}$ by $\tilde{\chi}(i)=\hat{\chi}(i)$, if $i<m$, and $\tilde{\chi}(m)=l$. Replacing $\hat{\chi}$ by $\tilde{\chi}$ induces an isomorphism from $BNC(\hat{\chi})$ to $BNC(\tilde{\chi})$. It follows from the fact that $X_{n, p, r}I_{\alpha(n)}=X_{n, p, l}I_{\alpha(n)}$, for $p=1, 2,..., N+M$ and $n\in \mathbb{N}$, and from $(4.4)$ that $$\kappa_{\chi}(X_{n, \alpha(1), \chi(1)}, ..., X_{n, \alpha(m), \chi(m)})=\sum_{\pi\in BNC(\tilde{\chi})}\Phi_{n, \pi}(X_{n, \alpha(s(1)), \tilde{\chi}(1)},..., X_{n, \alpha(s(m)),\tilde{\chi}(m)})\mu_{BNC}(\pi, 1_{\tilde{\chi}}).
\eqno(4.5)$$ By the inductive hypothesis and $(4.5)$, we have $$\lim_{n\rightarrow \infty}\kappa_{\chi}(X_{n, \alpha(1), \chi(1)}, ..., X_{n, \alpha(m), \chi(m)}) =\lim_{n\rightarrow \infty}\kappa_{\tilde{\chi}}(X_{n, \alpha(s(1)), \tilde{\chi}(1)}, ..., X_{n, \alpha(s(m)), \tilde{\chi}(m)})$$ $$=\lambda\varphi(c_{\alpha(s(1))}\cdots c_{\alpha(s(m))}), $$ where the last equality holds true because $|\tilde{\chi}^{-1}(\{r\})|=|\hat{\chi}^{-1}(\{r\})|-1=k$. We have proved $(4.2)$. Now let $m\in \mathbb{N}$, $\chi:\{1, ..., m\}\rightarrow \{l,r\}$, and $\alpha:\{1, ..., m\}\rightarrow \{1, ..., N+M\}$ be such that $1\le \alpha(i)\le N$ if $\chi(i)=l$, and $N<\alpha(i)\le N+M$ if $\chi(i)=r$, for $i=1, ..., m$. Then $c_{\alpha(i)}=a_{l,\alpha(i)}$ if $\alpha(i)\le N$, and $c_{\alpha(i)}=a_{r, \alpha(i)-N}$ if $\alpha(i)>N$, for $i=1, ..., m$. It follows that $$\varphi(c_{\alpha(1)}\cdots c_{\alpha(m)})=\varphi(c_{\alpha(s(1))}\cdots c_{\alpha(s(m))}),\eqno (4.6)$$ because the left random variables commute with the right random variables in the family $$a=((a_{l, i})_{1\le i\le N}, (a_{r, j})_{1\le j\le M}).$$ By $(4.2)$ and $(4.6)$, we have $$\lim_{n\rightarrow \infty}\kappa_{\chi}(X_{n, \alpha(1),\chi(1)}, \cdots, X_{n, \alpha(m), \chi(m)}) =\lambda \varphi(c_{\alpha(1)}\cdots c_{\alpha(m)}),$$ which shows that $$\{(X_{n, i, l})_{1\le i\le N}, (X_{n, i, r})_{N+1\le i\le N+M}\} \subset (\L(M_{\alpha(n)}(L^{\infty-}(\Omega, P))), \Phi_n:=tr\circ E_{\L(M_{\alpha(n)}(L^{\infty-}(\Omega, P)))})$$ converges in distribution to the compound bi-free Poisson distribution determined by $\lambda$ and the distribution $\mu_a$. Let $Z_{n,i,l}=X_{n, i, l}$ for $i=1,..., N$ and $Z_{n, i, r}=X_{n, N+i, r}$ for $i=1,..., M$. Then the random two-faced family $Z_n:=((Z_{n, i,l})_{1\le i\le N}, (Z_{n, i,r})_{1\le i\le M})$ of matrices converges in distribution to the compound bi-free Poisson distribution determined by $\lambda$ and the distribution $\mu_a$.
\end{proof} \section{Bi-Matrix Models With Fock Space Entries} This section is devoted to constructing a bi-matrix model for a compound bi-free Poisson distribution determined by a positive number and a commutative pair of random variables, where the bi-matrix model consists of matrices with entries of creation and annihilation operators on the full Fock space of a Hilbert space. This bi-matrix model is an analogue of P. Skoufranis' bi-matrix models for bi-free central limit distributions in \cite{PS}. \begin{Theorem} Let $\lambda>0$, and let $\{a_1, a_2\}$ be a commutative pair of random variables in a non-commutative probability space $(\A, \varphi)$. Then for $n\in \mathbb{N}$, there is a sequence $\{(Z_{n, N, l}, Z_{n,N, r}): N=1, 2, \cdots\}$ of $n\times n$ left and right matrices $Z_{n, N, l}$ and $Z_{n, N, r}$, respectively, with entries of creation and annihilation operators on a full Fock space, such that $Z_{n,N}=(Z_{n,N, l}, Z_{n, N, r})$ converges in distribution to the compound bi-free Poisson distribution determined by $\lambda$ and $\mu_{a_1, a_2}$, the distribution of the pair $(a_1, a_2)$, as $N\rightarrow \infty$. That is, $$\lim_{N\rightarrow \infty}\kappa_\chi(Z_{n, N})=\lambda\varphi(a_1^{|\chi^{-1}(\{l\})|}a_2^{|\chi^{-1}(\{r\})|}), $$ for $m\ge 1$ and $\chi:\{1, 2, \cdots, m\}\rightarrow \{l,r\}$. \end{Theorem} \begin{proof} Let $\X=\F(\H)$ be the full Fock space of an infinite dimensional complex Hilbert space $\H$. Let $\X^0=\bigoplus_{n\ge 1}\H^{\otimes n}$; then $\X=\mathbb{C}\Omega\oplus \X^0$. Let $p:\X\rightarrow \mathbb{C},\ p(\alpha\Omega+x_0)=\alpha$, for $\alpha\in\mathbb{C}$ and $x_0\in\X^0$. Then $(\X, \X^0, \Omega, p)$ is a pointed vector space. Define a unital linear functional $\omega: \L(\X)\rightarrow \mathbb{C}$, $\omega(T)=\langle T\Omega, \Omega\rangle$, for $T\in \L(\X)$. Let $\{e_1, e_2\}$ be an orthonormal set in $\H$. Let $l_i$ and $r_i$ be the left and right creation operators associated with $e_i$, respectively, for $i=1, 2$.
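For later use, we record the basic vacuum-state computations for these operators (standard facts about creation operators on the full Fock space): $l_i\Omega=e_i$, $l_i^*\Omega=0$, and
$$\omega(l_i^*l_j)=\langle l_j\Omega, l_i\Omega\rangle=\langle e_j, e_i\rangle=\delta_{ij},$$
and similarly $r_i\Omega=e_i$, $r_i^*\Omega=0$, and $\omega(r_i^*r_j)=\delta_{ij}$.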
Let $$W_{N,i,l}=l^*_i+\sum_{n=0}^N \sum_{\alpha:\{1, 2, \cdots, n\}\rightarrow \{1, 2\}}\lambda \varphi(a_{\alpha(1)}\cdots a_{\alpha(n)}a_i)l_{\alpha(n)}\cdots l_{\alpha(1)}, $$ $$W_{N, i, r}=r_i^*+\sum_{n=0}^N \sum_{\alpha:\{1, 2, \cdots, n\}\rightarrow \{1,2\}}\lambda \varphi(a_{\alpha(1)}\cdots a_{\alpha(n)}a_i)r_{\alpha(n)}\cdots r_{\alpha(1)},$$ for $i=1, 2$, and $N\in \mathbb{N}$, and define $$W_{N, l}=W_{N, 1,l}, \quad W_{N, r}=W_{N, 2, r}.$$ Then, for $m\in \mathbb{N}$ and $\chi:\{1, 2, \cdots, m\}\rightarrow \{l, r\}$, by Theorem 6.2 in \cite{MN}, we have $$\kappa_\chi(W_{N,\chi(1)}, \cdots, W_{N, \chi(m)})=\left\{\begin{array}{ll}\lambda \varphi(a_1^{|\chi^{-1}(\{l\})|}a_2^{|\chi^{-1}(\{r\})|}),&\text{if } m\le N,\\ 0, & \text{if } m>N.\end{array} \right.$$ For $n\in \mathbb{N}$, let $\{e_{i,j}^k: i, j=1, 2, \cdots, n, k=1, 2\}$ be an orthonormal set in $\H$, and $$L_{k}=\frac{1}{\sqrt{n}}L([l(e_{i,j}^k)]_{n\times n}), \quad L_k^*=\frac{1}{\sqrt{n}}L([l^*(e_{j,i}^k)]_{n\times n}), $$ $$R_k=\frac{1}{\sqrt{n}}R([r(e_{i,j}^k)]_{n\times n}), \quad R_k^*=\frac{1}{\sqrt{n}}R([r^*(e_{j,i}^k)]_{n\times n}), \quad k=1, 2, $$ where $L(A)$ and $R(A)$ are the left and right matrices associated with a matrix $A$, respectively, and $l(e_{i,j}^k)$ and $r(e_{i,j}^k)$ are the left and right creation operators on $\X$ associated with the vector $e_{i,j}^k$. By Theorem 5.1 in \cite{PS}, the joint distribution of $\{L_k, L_k^*, R_k, R_k^*: k=1, 2\}$ with respect to $\Phi:=tr\circ E_{\L(\X_n)}$ is equal to the joint distribution of $\{l_k, l_k^*, r_k, r_k^*: k=1, 2\}$ with respect to $\omega$.
Let $$Z_{n, N, l}=L_1^*+\sum_{i=0}^N\sum_{\alpha:\{1, 2, \cdots, i\}\rightarrow \{1, 2\}}\lambda \varphi(a_{\alpha(1)}\cdots a_{\alpha(i)}a_1)L_{\alpha(i)}\cdots L_{\alpha(1)},$$ $$Z_{n, N, r}=R_2^*+\sum_{i=0}^N\sum_{\alpha:\{1, 2, \cdots, i\}\rightarrow \{1,2\}}\lambda\varphi(a_{\alpha(1)}\cdots a_{\alpha(i)}a_2)R_{\alpha(i)}\cdots R_{\alpha(1)}.$$ We then have \begin{align*}\kappa_\chi(Z_{n, N, \chi(1)}, \cdots, Z_{n, N, \chi(m)})=&\kappa_\chi(W_{N, \chi(1)}, \cdots, W_{N, \chi(m)})\\ =&\left\{\begin{array}{ll}\lambda\varphi(a_1^{|\chi^{-1}(\{l\})|}a_2^{|\chi^{-1}(\{r\})|}) ,&\text{if } m\le N,\\ 0, & \text{if } m>N.\end{array} \right. \end{align*} It implies that $$\lim_{N\rightarrow \infty}\kappa_\chi(Z_{n, N, \chi(1)}, \cdots, Z_{n, N, \chi(m)})=\lambda \varphi(a_1^{|\chi^{-1}(\{l\})|}a_2^{|\chi^{-1}(\{r\})|}).$$ \end{proof}
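As a sanity check of the last formula in the simplest case $m=1$, $\chi(1)=l$: since $Z_{n, N, l}$ has the same distribution with respect to $\Phi$ as $W_{N, l}$ with respect to $\omega$, we have
$$\kappa_1(Z_{n, N, l})=\omega(W_{N, l})=\omega(l_1^*)+\lambda\varphi(a_1)\omega(1)+\sum_{n=1}^N\sum_{\alpha:\{1, \cdots, n\}\rightarrow\{1,2\}}\lambda\varphi(a_{\alpha(1)}\cdots a_{\alpha(n)}a_1)\omega(l_{\alpha(n)}\cdots l_{\alpha(1)})=\lambda\varphi(a_1),$$
because $\omega(l_1^*)=0$ and $\omega(l_{\alpha(n)}\cdots l_{\alpha(1)})=0$ for $n\ge 1$, every nonempty product of creation operators mapping $\Omega$ into $\X^0$.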
\subsection{Related works} Computing linear recurrence relations of multi-dimensional sequences finds applications in Coding Theory, Computer Algebra and Combinatorics. Historically, the \BM algorithm was designed to decode cyclic codes, like the \textsc{BCH} codes~\cite{BoseRC1960,Hocquenghem1959}. Therefore, decoding $n$-dimensional cyclic codes, a generalization of Reed-Solomon codes, was Sakata's motivation for designing the \BMS algorithm in~\cite{Sakata91}. On the other hand, as the output of the \BMS and the \sFGLM algorithms is a \gb, a natural application in Computer Algebra is the computation of a \gb of an ideal for another ordering, typically from a total degree ordering to an elimination ordering. In fact, the latest versions of the \spFGLM algorithm rely heavily on the \BM and \BMS algorithms, see~\cite{FM11,faugere:hal-00807540}. These notions are recalled in a concise way in Section~\ref{s:prelim}, see also~\cite[Section~2]{part1}. Finally, computing linear recurrence relations with \emph{polynomial} coefficients finds applications in Computer Algebra for computing properties of univariate and multivariate Special Functions. The Dynamic Dictionary of Mathematical Functions (\textsc{DDMF}, \cite{DDMF}) automatically generates web pages on univariate special functions through the differential equations they satisfy. Equivalently, they could be generated through the linear recurrence relations satisfied by their sequences of Taylor series coefficients. Deciding whether \textsc{2D}/\textsc{3D}-space walks are D-finite or not finds applications in Combinatorics, see~\cite{BanderierF2002,BostanBMKM2014,BousquetMM2010,BousquetMP2003}. This motivated the authors to extend the \sFGLM algorithm to handle relations with polynomial coefficients in~\cite{berthomieu:hal-01314266}.
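As a toy illustration of such relations (our example here, not taken from the cited references): the bidimensional sequence of binomial coefficients $\bu=(u_{i,j})_{(i,j)\in\N^2}$ with $u_{i,j}=\binom{i+j}{i}$ satisfies, by Pascal's rule,
$$u_{i+1,j+1}=u_{i+1,j}+u_{i,j+1},\qquad (i,j)\in\N^2,$$
so the polynomial $x\,y-x-y$ belongs to its ideal of relations; guessing such generators from finitely many terms of the sequence is precisely the task addressed by the \BM, \BMS and \sFGLM families of algorithms.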
\subsection{Contributions} Following the open question in~\cite{part1} of whether an adaptive variant of the \BMS algorithm, reducing the number of sequence queries, exists or not, we first answer it positively. Then, the goal of this paper is to compare this adaptive variant and the \asFGLM algorithm. In Section~\ref{s:aBMS}, we design an adaptive variant of the \BMS algorithm, namely the \aBMS algorithm, reducing the number of sequence queries. To our knowledge, some early termination criteria were proposed for the \BMS algorithm, see~\cite{Sakata09}. However, these criteria did not allow one to skip relation tests. Here, the \aBMS algorithm can skip some relation tests and still test some further relations. In practice, this variant is more efficient than the \BMS algorithm thanks to these skipped tests. To do so, it uses an a priori upper bound on the staircase size to prevent some useless relation tests. In some favorable cases, this can even allow us to require fewer sequence elements than when calling the \BMS algorithm. The presentation of this variant follows the linear algebra description of the \BMS algorithm introduced in \cite[Section~3.2]{part1}, see also Appendix~\ref{ss:bms_lin_alg}. In Section~\ref{s:asFGLM}, we deal with the \asFGLM algorithm, first presented in~\cite{issac2015}. Compared to the \BMS algorithm, it iteratively increases the size of the staircase. Although it can drastically decrease the number of sequence queries, one of its drawbacks is that it can fail to compute the true ideal of relations of a sequence. Therefore, it is essential to investigate when these algorithms output a \gb of the ideal of relations. To do so, we focus on the similarities and differences in their behaviors. We report here simplified and synthetic versions of the results obtained in Section~\ref{s:comparison_adapt}. A first similarity is that they both output a zero-dimensional ideal of relations.
\begin{theorem}[Theorem~\ref{th:closed_staircase_adapt}] Let $\bu=(u_{i,j})_{(i,j)\in\N^2}$ be a sequence, let $\prec$ be a degree monomial ordering and $d$ be the size of the staircase. Calling each algorithm on $\bu$, $\prec$, $d$ yields a truncated \gb of a zero-dimensional ideal. \end{theorem} In the \gb change of ordering application, like the \spFGLM algorithm, one needs to use the lexicographical ordering. Although the \BMS algorithm is not designed to handle such an ordering, the \aBMS algorithm can perfectly be called with this ordering. Indeed, if the ideal is in \emph{shape position}, then, as a second similarity, both algorithms correctly output the ideal. \begin{theorem}[Theorem~\ref{th:shape_position_adapt}] Let $\bu=(u_{i,j})_{(i,j)\in\N^2}$ be a linear recurrent sequence whose ideal of relations $I=\langle g(y),x-f(y)\rangle$ is in \emph{shape position} for the $\LEX(y\prec x)$ ordering, with $\deg f<\deg g=d$ and $g$ squarefree. Assuming no error is thrown in the execution of the \asFGLM algorithm called on $\bu$, $d$ and the $\LEX(y\prec x)$ ordering, the output ideal is $I$. Likewise, calling the \aBMS algorithm on $\bu$, $d$ and $\LEX(y\prec x)$ yields the ideal $I$. \end{theorem} Although the previous two theorems seem to show that both algorithms have very similar outputs, their outputs can still differ. Indeed, as neither algorithm can test whether its output relations are valid on the whole sequence, they intrinsically return the \emph{shifts} of the relations, that is, the sets of translation monomials for which the relations are valid. Thus, the larger the shift, the more the relation has been tested, and the greater the confidence one can have in the guessed output ideal. Even if both algorithms output the same ideal, they usually do so while outputting different shifts. 
\begin{theorem}[Theorem~\ref{th:valid_shift_adapt}] Let $\bu=(u_{i,j})_{(i,j)\in\N^2}$ be a sequence, $\prec$ be a monomial ordering and $d$ be the size of the output staircase $S$. Let us assume that both algorithms return a common relation $g$ when called on $\bu$, $\prec$, $d$ and some stopping monomial $M$ for the \aBMS algorithm. Then, the shift associated to $g$ by the \aBMS algorithm is the monomial set $\{m,\ m\,\LM(g)\preceq M\}$. In other words, the smaller $\LM(g)$, the larger its shift. The shift associated to $g$ by the \asFGLM algorithm is either $S$ if $\LM(g)\succ\max_{\prec}(S)$ or $\{m\in S,\ m\prec\LM(g)\}\cup\{\LM(g)\}$ otherwise. In other words, the larger $\LM(g)$, the larger its shift. \end{theorem} As a consequence of these differences of behavior, it is not possible to tweak one of the algorithms in order to mimic exactly the behavior of the other. Finally, in Section~\ref{s:implem_adapt}, we compare both algorithms based on the number of sequence queries they perform and their number of basic operations. We show that the \aBMS algorithm is able to perform four (\resp seven) times fewer operations than the \BMS algorithm to output the ideal of relations of a family of bidimensional (\resp tridimensional) sequences. We also show that the \asFGLM algorithm needs fewer queries and fewer basic operations to recover the whole ideal of relations of several families of sequences. However, it seems that asymptotically the ratios between the number of basic operations and the number of sequence queries made by both algorithms could be the same. \subsection{Conclusion and Perspectives} We now understand better the advantages of each algorithm. On the one hand, the \asFGLM algorithm can fail to return the right answer; on the other hand, we can tweak it to test the computed relations further, allowing us to discard wrong relations. Furthermore, it generally returns the right ideal of relations and usually does so faster than the \aBMS algorithm. 
However, the \aBMS algorithm seems to be the safer one. If the upper bound on the staircase size is correct, it will always return the right ideal of relations. Its performance speedup, though, relies on the number of skipped relation testings and thus on the sharpness of this bound. Moreover, it seems hard to predict in advance which monomials will be totally skipped during the execution of the algorithm. Combining the design of the \textsc{Polynomial Scalar-FGLM} algorithm, based on polynomial arithmetic in~\cite{berthomieu:hal-01784369}, and the comparison of the \aBMS and \asFGLM algorithms in this paper could lead to the design of a hybrid algorithm taking advantage of all these algorithms. In particular, such an algorithm could replace the linear algebra arithmetic by a polynomial one. Indeed, the goal would be to mix the efficiency of the polynomial arithmetic in the \textsc{Polynomial Scalar-FGLM} algorithm and the small number of queries performed by the \aBMS and the \asFGLM algorithms to compute the relations. \subsection{Sequences and relations} For $n\geq 1$, we let $\bi=(i_1,\dots,i_n)\in\N^n$. Classically, we write $\x=(x_1,\ldots,x_n)$ and $\x^{\bi}=x_1^{i_1}\,\cdots\,x_n^{i_n}$. An $n$-dimensional sequence $\bu=(u_\bi)_{\bi\in\N^n}$ over a field $\K$ satisfies the (linear recurrence) relation induced by $\balpha=(\alpha_\bk)_{\bk\in\cK}\in\K^{|\cK|}$, with $\cK\subset\N^n$ finite, if \begin{equation} \label{eq:recmulti} \forall \bi\in\N^n,\,\sum_{\bk\in\cK} \alpha_\bk\, u_{\bk+\bi}=0. \end{equation} \begin{example}\label{ex:binom} Let $\bin$ be the $2$-dimensional sequence of the binomial coefficients, $\bin = \left(\binom{i}{j}\right)_{(i,j)\in\N^2}$. Then Pascal's rule \[ \forall (i,j)\in\N^2,\, \bin_{i+1,j+1}-\bin_{i,j+1}-\bin_{i,j}=0 \] is a linear recurrence relation for the sequence $\bin$. 
\end{example} As we can only work with a finite number of terms of a sequence, in this paper a \emph{table} shall denote a finite subset of terms of a sequence: it is one of the input parameters of the algorithms. Given a finite table extracted from the sequence $\bu$, the main purpose of the \BMS and the \sFGLM algorithms is, loosely speaking, to determine a minimal set of relations that allows us to generate this finite table using only the values of $\bu$ on their supports. Relations satisfied by a sequence can be added and shifted, therefore it is natural to associate them with multivariate polynomials in $\K[\x]$. \begin{definition} Let $f=\sum_{\bk\in\cK}\alpha_\bk\,\x^\bk\in\K[\x]$. We will denote by $\cro{f}_{\bu}$, or $\cro{f}$ when no ambiguity arises, the linear combination $\sum_{\bk\in\cK}\alpha_\bk \,u_\bk$. Moreover, if $\balpha$ defines a relation for $\bu$, that is, for all $\bi\in\N^n$, $\cro{\x^{\bi}\,f}=0$, then we say that $f$ is the polynomial of this relation. \end{definition} The main benefit of the $[\,]$ notation resides in the immediate fact that for every index $\bi$, $\left[\x^\bi\,f\right]=\sum_{\bk\in\cK} \alpha_\bk\,u_{\bk+\bi}$. In the previous example, Pascal's rule is associated with the polynomial $P=x\,y-y-1$, so that \[\forall (i,j)\in\N^2,\,[x^i\,y^j\,P]=0.\] \begin{definition}[\cite{FitzpatrickN90,Sakata88}]~\label{def:lin_rec} Let $\bu=(u_{\bi})_{\bi\in\N^n}$ be an $n$-dimensional sequence with coefficients in $\K$. The sequence $\bu$ is \emph{linear recurrent} if, from a nonzero finite number of initial terms $\{u_{\bi},\ \bi\in S\}$ and a finite number of linear recurrence relations, without any contradiction, one can compute any term of the sequence. Equivalently, $\bu$ is linear recurrent if its ideal of relations $\{f,\ \forall\,m\in\K[\x],\cro{m\,f}=0\}$ is \emph{zero-dimensional}. \end{definition} \subsection{\gbs} Let $\cT=\{\x^{\bi},\ \bi\in\N^n\}$ be the set of all monomials in $\K[\x]$. 
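As a quick sanity check of this bracket notation (an illustrative Python sketch, not part of the algorithms; \texttt{comb} is the standard-library binomial):

```python
# [x^i y^j P] for P = x*y - y - 1 on the binomial table u_{i,j} = C(i, j):
# by Pascal's rule, every shifted bracket should vanish.
from math import comb

def bracket_P(i, j):
    # value of [x^i y^j * P] = u_{i+1,j+1} - u_{i,j+1} - u_{i,j}
    return comb(i + 1, j + 1) - comb(i, j + 1) - comb(i, j)

ok = all(bracket_P(i, j) == 0 for i in range(12) for j in range(12))
```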
A monomial ordering $\prec$ on $\K[\x]$ is an order relation satisfying the following three classical properties: \begin{enumerate} \item for all $m\in\cT$, $1\preceq m$; \item for all $m,m',s\in\cT$, $m\prec m'\Rightarrow m\,s\prec m'\,s$; \item every subset of $\cT$ has a least element for $\prec$. \end{enumerate} For a monomial ordering $\prec$ on $\K[\x]$, the \emph{leading monomial} of $f$, denoted $\LM(f)$, is the greatest monomial in the support of $f$ for $\prec$. The \emph{leading coefficient} of $f$, denoted $\LC(f)$, is the nonzero coefficient of $\LM(f)$. The \emph{leading term} of $f$, $\LT(f)$, is defined as $\LT(f)=\LC(f)\,\LM(f)$. For an ideal $I$, we denote, classically, $\LM(I)=\{\LM(f),\ f\in I\}$. We briefly recall the definitions of a \gb and a staircase. \begin{definition}\label{def:staircase} Let $I$ be a nonzero ideal of $\K[\x]$ and let $\prec$ be a monomial ordering. A set $\cG\subseteq I$ is a \emph{\gb} of $I$ if for all $f\in I$, there exists $g\in\cG$ such that $\LM(g)|\LM(f)$. The set $\cG$ is a \emph{minimal} \gb of $I$ if for any $g\in\cG$, $\cG\setminus\{g\}$ does not generate $I$. Furthermore, $\cG$ is (minimal) \emph{reduced} if for any $g,g'\in\cG$, $g\neq g'$ and any monomial $m\in\supp g'$, $\LT(g)\nmid m$. Let $\cG$ be a reduced truncated \gb; the \emph{staircase} of $\cG$ is \[S=\Staircase(\cG)=\{s\in\cT,\ \forall\,g\in\cG, \LM(g)\nmid s\}.\] It is also the canonical basis of $\K[\x]/I$. \end{definition} \gb theory allows us to choose any monomial ordering $\prec$. 
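For instance, with monomials stored as exponent pairs $(i,j)$ for $x^iy^j$, the staircase of a zero-dimensional ideal can be read off the exponents of the leading monomials of a \gb (an illustrative sketch with hypothetical helper names; the bounding box is only there to keep the search finite):

```python
# Staircase of a zero-dimensional ideal in two variables: the monomials
# divisible by none of the leading monomials of the Groebner basis.
from itertools import product

def staircase(lead_exps, box=8):
    def divides(g, s):
        return all(gi <= si for gi, si in zip(g, s))
    return {s for s in product(range(box), repeat=2)
            if not any(divides(g, s) for g in lead_exps)}

# leading monomials x*y, x^2, y^3, written as exponent vectors
S = staircase([(1, 1), (2, 0), (0, 3)])
# yields the staircase {1, x, y, y^2}
```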
Among all the monomial orderings, we will mainly use the \begin{itemize} \item $\LEX(x_n\prec\cdots\prec x_1)$ ordering, which compares monomials as follows: $\x^{\bi}\prec\x^{\bi'}$ if, and only if, there exists $k$, $1\leq k\leq n$, such that for all $\ell<k$, $i_{\ell}=i_{\ell}'$ and $i_k<i_k'$, see~\cite[Chapter~2, Definition~3]{CoxLOS2015}; \item $\DRL(x_n\prec\cdots\prec x_1)$ ordering, which compares monomials as follows: $\x^{\bi}\prec\x^{\bi'}$ if, and only if, $i_1+\cdots+i_n<i_1'+\cdots+i_n'$, or $i_1+\cdots+i_n=i_1'+\cdots+i_n'$ and there exists $k$, $2\leq k\leq n$, such that for all $\ell>k$, $i_{\ell}=i_{\ell}'$ and $i_k>i_k'$. Equivalently, there exists $k$, $1\leq k\leq n$, such that for all $\ell>k$, $i_1+\cdots+i_{\ell}=i_1'+\cdots+i_{\ell}'$ and $i_1+\cdots+i_k<i_1'+\cdots+i_k'$, see~\cite[Chapter~2, Definition~6]{CoxLOS2015}. \end{itemize} However, in the \BMS algorithm, we need to be able to enumerate all the monomials up to a bound monomial. This forces the user to take an ordering $\prec$ such that for all $M\in\cT$, the set $\{m\prec M,\ m\in\cT\}$ is finite. Such an ordering $\prec$ makes $(\N^n,\prec)$ isomorphic to $(\N,<)$, so it makes sense to speak about the next monomial for $\prec$. This requirement excludes, for instance, the $\LEX$ ordering, and more generally any elimination ordering. In other words, only weighted degree orderings, or \emph{weight orderings}, should be used. \subsection{Multi-Hankel matrices} A matrix $H\in\K^{m\times n}$ is \emph{Hankel} if there exists a sequence $\bu=(u_i)_{i\in\N}$ such that for all $(i,i')\in\{1,\ldots,m\}\times\{1,\ldots,n\}$, the coefficient $h_{i,i'}$ lying on the $i$th row and $i'$th column of $H$ satisfies $h_{i,i'}=u_{i+i'}$. In a multivariate setting, we can extend this notion of Hankel matrices to \emph{multi-Hankel} matrices. 
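Concretely, with monomials encoded as exponent pairs, such a matrix is assembled by summing row and column exponents; the sketch below (our own helper, recording the index pair of $u_{\bi+\bi'}$ rather than a value) reproduces the indexing of the example that follows:

```python
# Sketch of a multi-Hankel matrix H_{U,T}: the entry on row x^i y^j and
# column x^{i'} y^{j'} is the sequence term u_{i+i', j+j'}.
def multi_hankel(U, T):
    return [[(r[0] + c[0], r[1] + c[1]) for c in T] for r in U]

# rows 1, y, y^2, x, x*y, x*y^2, x^2, x^2*y, x^2*y^2
U = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
# columns 1, y, x, x*y, x^2, x^2*y, x^3, x^3*y
T = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1)]
H = multi_hankel(U, T)
```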
Indexing the rows and columns with monomials $\x^{\bi}=x_1^{i_1}\,\cdots\,x_n^{i_n}$ and $\x^{\bi'}=x_1^{i'_1}\,\cdots\,x_n^{i'_n}$, the coefficient of $H$ lying on the row labeled with $\x^{\bi}$ and column labeled with $\x^{\bi'}$ is $u_{\bi+\bi'}$. Given two sets of monomials $U$ and $T$, we let $H_{U,T}$ be the multi-Hankel matrix with rows (\resp columns) indexed with monomials in $U$ (\resp $T$). \begin{example} Let $\bu=(u_{i,j})_{(i,j)\in\N^2}$ be a sequence. \begin{enumerate} \item Let $U=\{1,y,y^2,x,x\,y,x\,y^2,x^2,x^2\,y,x^2\,y^2\}$ and $T=\{1,y,x,x\,y,x^2,x^2\,y,x^3,x^3\,y\}$, then \[H_{U,T}=\kbordermatrix{ &1 &y & &x &x\,y & &x^2 &x^2\,y & &x^3 &x^3\,y\\ 1 &u_{0,0} &u_{0,1} &\vrule &u_{1,0} &u_{1,1} &\vrule &u_{2,0} &u_{2,1} &\vrule &u_{3,0} &u_{3,1}\\ y &u_{0,1} &u_{0,2} &\vrule &u_{1,1} &u_{1,2} &\vrule &u_{2,1} &u_{2,2} &\vrule &u_{3,1} &u_{3,2}\\ y^2 &u_{0,2} &u_{0,3} &\vrule &u_{1,2} &u_{1,3} &\vrule &u_{2,2} &u_{2,3} &\vrule &u_{3,2} &u_{3,3}\\ \cline{2-12} x &u_{1,0} &u_{1,1} &\vrule &u_{2,0} &u_{2,1} &\vrule &u_{3,0} &u_{3,1} &\vrule &u_{4,0} &u_{4,1}\\ x\,y &u_{1,1} &u_{1,2} &\vrule &u_{2,1} &u_{2,2} &\vrule &u_{3,1} &u_{3,2} &\vrule &u_{4,1} &u_{4,2}\\ x\,y^2 &u_{1,2} &u_{1,3} &\vrule &u_{2,2} &u_{2,3} &\vrule &u_{3,2} &u_{3,3} &\vrule &u_{4,2} &u_{4,3}\\ \cline{2-12} x^2 &u_{2,0} &u_{2,1} &\vrule &u_{3,0} &u_{3,1} &\vrule &u_{4,0} &u_{4,1} &\vrule &u_{5,0} &u_{5,1}\\ x^2\,y &u_{2,1} &u_{2,2} &\vrule &u_{3,1} &u_{3,2} &\vrule &u_{4,1} &u_{4,2} &\vrule &u_{5,1} &u_{5,2}\\ x^2\,y^2 &u_{2,2} &u_{2,3} &\vrule &u_{3,2} &u_{3,3} &\vrule &u_{4,2} &u_{4,3} &\vrule &u_{5,2} &u_{5,3} }. \] We can see that this matrix is a $3\times 4$ \emph{block-Hankel} matrix with Hankel blocks of size $3\times 2$. 
\item Let $T=\{1,y,x,y^2,x\,y,x^2,y^3,x\,y^2,x^2\,y,x^3\}$, then the following matrix has a less obvious structure: \[H_{T,T}=\kbordermatrix{ &1 &y &x &y^2 &x\,y &x^2 &y^3 &x\,y^2 &x^2\,y &x^3\\ 1 &u_{0,0} &u_{0,1} &u_{1,0} &u_{0,2} &u_{1,1} &u_{2,0} &u_{0,3} &u_{1,2} &u_{2,1} &u_{3,0}\\ y &u_{0,1} &u_{0,2} &u_{1,1} &u_{0,3} &u_{1,2} &u_{2,1} &u_{0,4} &u_{1,3} &u_{2,2} &u_{3,1}\\ x &u_{1,0} &u_{1,1} &u_{2,0} &u_{1,2} &u_{2,1} &u_{3,0} &u_{1,3} &u_{2,2} &u_{3,1} &u_{4,0}\\ y^2 &u_{0,2} &u_{0,3} &u_{1,2} &u_{0,4} &u_{1,3} &u_{2,2} &u_{0,5} &u_{1,4} &u_{2,3} &u_{3,2}\\ x\,y &u_{1,1} &u_{1,2} &u_{2,1} &u_{1,3} &u_{2,2} &u_{3,1} &u_{1,4} &u_{2,3} &u_{3,2} &u_{4,1}\\ x^2 &u_{2,0} &u_{2,1} &u_{3,0} &u_{2,2} &u_{3,1} &u_{4,0} &u_{2,3} &u_{3,2} &u_{4,1} &u_{5,0}\\ y^3 &u_{0,3} &u_{0,4} &u_{1,3} &u_{0,5} &u_{1,4} &u_{2,3} &u_{0,6} &u_{1,5} &u_{2,4} &u_{3,3}\\ x\,y^2 &u_{1,2} &u_{1,3} &u_{2,2} &u_{1,4} &u_{2,3} &u_{3,2} &u_{1,5} &u_{2,4} &u_{3,3} &u_{4,2}\\ x^2\,y &u_{2,1} &u_{2,2} &u_{3,1} &u_{2,3} &u_{3,2} &u_{4,1} &u_{2,4} &u_{3,3} &u_{4,2} &u_{5,1}\\ x^3 &u_{3,0} &u_{3,1} &u_{4,0} &u_{3,2} &u_{4,1} &u_{5,0} &u_{3,3} &u_{4,2} &u_{5,1} &u_{6,0} }. \] \end{enumerate} \end{example} \subsection{Closed staircase} In~\cite[Section~5.1, Theorem~7]{part1}, we show that the \BMS algorithm always returns a zero-dimensional ideal while the \sFGLM algorithm can return a zero-dimensional or a positive-dimensional ideal. This is in fact one of the main differences between these two algorithms. In the following theorem, we prove that the \aBMS algorithm and the \asFGLM algorithm are closer on that matter assuming one knows the size of the output staircase in advance. \begin{theorem}\label{th:closed_staircase_adapt} Let $\bu$ be a sequence, $\prec$ be a monomial ordering and $d$ be the size of the staircase. Calling the \aBMS algorithm on $\bu$, $\prec$, $d$ and a stopping monomial $M$ yields a truncated \gb of a zero-dimensional ideal. 
Calling the \asFGLM algorithm on $\bu$, $\prec$ and $d$ yields a truncated \gb of a zero-dimensional ideal. \end{theorem} \begin{proof} The first part of the result comes directly from the line $G':=\Border(S')$ in the description of the \aBMS algorithm, Algorithm~\ref{algo:abms}. The second part of the result comes from the fact that the leading terms of the relations lie on the border of the staircase and are minimal both for $\prec$ and for divisibility. Thus, for any variable $x_i$, there always exists a relation whose leading term is a pure power of $x_i$. \end{proof} It is possible to change this early termination procedure so that the \asFGLM algorithm is closer to the \sFGLM algorithm, yielding a potentially positive-dimensional output. If we still want to close the staircase as much as possible using singular square matrices, it suffices to check that the relation $t'+\sum_{s\in S}\alpha_s\,s$ is valid with a shift $S\cup\{t'\}$. This yields Algorithm~\ref{algo:tweaked_adapt_sfglm}. 
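To make the loop structure of this adaptive variant concrete, here is a minimal Python sketch (our own helper names, not the paper's implementation; exact arithmetic via fractions; the validity re-check of the tweaked variant is omitted), run on the two-dimensional Fibonacci sequence $(F_{i+1})_{(i,j)\in\N^2}$ with the $\LEX(y\prec x)$ ordering and bound $d=2$:

```python
# Illustrative sketch: grow the staircase S through rank tests on multi-Hankel
# matrices, then close it by linear solving once #S reaches the bound d.
from fractions import Fraction

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def u(i, j):
    # the 2-D sequence (F_{i+1})_{(i,j)}: constant along j
    return Fraction(fib(i + 1))

def solve(A, b):
    """Solve A x = b by exact Gaussian elimination; None if A is singular."""
    n = len(A)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return None
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

def hankel(rows, cols):
    return [[u(r[0] + c[0], r[1] + c[1]) for c in cols] for r in rows]

def adaptive_sfglm(d):
    L, S, G, leads = [(0, 0)], [], [], []
    key = lambda m: m                  # LEX(y < x): x-exponent compared first
    while L:
        t = min(L, key=key)
        if solve(hankel(S + [t], S + [t]),
                 [Fraction(0)] * (len(S) + 1)) is not None:   # full rank
            S.append(t)
            L += [(t[0] + 1, t[1]), (t[0], t[1] + 1)]
            L = [m for m in L if m != t and
                 not any(m[0] >= g[0] and m[1] >= g[1] for g in leads)]
            if len(S) >= d:            # early termination: close the staircase
                for t2 in sorted(set(L), key=key):
                    a = solve(hankel(S, S),
                              [-u(s[0] + t2[0], s[1] + t2[1]) for s in S])
                    G.append((t2, a))
                return S, G
        else:                          # rank deficiency: a relation is found
            a = solve(hankel(S, S), [-u(s[0] + t[0], s[1] + t[1]) for s in S])
            G.append((t, a))
            leads.append(t)
            L = [m for m in L if not (m[0] >= t[0] and m[1] >= t[1])]
    return S, G

S, G = adaptive_sfglm(2)
```

On this input the sketch recovers the staircase $\{1,x\}$ and the relations $y-1$ and $x^2-x-1$, matching the worked example of Section~\ref{ss:validity}.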
\begin{algorithm2e}[htbp!]\label{algo:tweaked_adapt_sfglm} \small \DontPrintSemicolon \TitleOfAlgo{Tweaked \asFGLM.} \KwIn{A table $\bu=(u_{\bi})_{\bi\in\N^n}$ with coefficients in $\K$, $\prec$ a monomial ordering and $d$ a given bound.} \KwOut{A reduced truncated \gb of a zero-dimensional ideal of degree $\geq d$.} $L:=\{1\}$.\tcp*{set of next terms to study} $S:=\emptyset$.\tcp*{the useful staircase with respect to $\prec$} $G:=\emptyset,G':=\emptyset$.\; \While{$L\neq\emptyset$}{ $t:=\min_{\prec}(L)$.\; \uIf{$H_{S\cup\{t\},S\cup\{t\}}$ is full rank}{ $S:=S\cup\{t\}$ and $L:=L\cup\left\{x_i\,t, i=1,\ldots,n\right\}\setminus\{t\}$.\; Remove multiples of elements of $G'$ in $L$.\; \If(\tcp*[f]{early termination}){$\#\,S\geq d$}{ \While{$L\neq\emptyset$}{ $t':=\min_{\prec}(L)$.\; Find $\balpha$ such that $H_{S,S}\,\balpha + H_{S,\{t'\}}=0$.\; \If{$H_{\{t'\},S}\,\balpha+H_{\{t'\},\{t'\}}=0$}{ $G:=G\cup\left\{t'+\sum_{s\in S}\alpha_s\,s\right\}$.\; } Remove multiples of $t'$ in $L$.\; } \KwRet $G$. } } \Else{ Find $\balpha$ such that $H_{S,S}\,\balpha + H_{S,\{t\}}=0$.\; $G':=G'\cup\{t\}$.\; $G:=G\cup\left\{t+\sum_{s\in S}\alpha_s\,s\right\}$.\; Remove multiples of $t$ in $L$ and sort $L$ by increasing order. } } \KwErr ``Run \sFGLM''. \end{algorithm2e} \subsection{Reduction of relations} The \asFGLM algorithm computes a staircase and then relations with support in the staircase, except for their leading terms, which lie on the border. On the other hand, although the \aBMS algorithm may compute the same ideal of relations as the \asFGLM algorithm, their \gbs can differ. \begin{theorem}\label{th:reduced_gb_adapt} Let $\bu$ be a sequence, $\prec$ be a monomial ordering and $d$ be the size of the staircase. Calling the \asFGLM algorithm on $\bu$, $\prec$, and $d$ yields a truncated reduced \gb of an ideal. 
Calling the \aBMS algorithm on $\bu$, $\prec$, $d$ and a stopping monomial $M$ yields a truncated minimal \gb of an ideal, which is not necessarily reduced. Furthermore, even if $\bu$ is linear recurrent and the \asFGLM algorithm computes the ideal of relations of $\bu$, there is no reason for the output of the \aBMS algorithm to be reduced. \end{theorem} \begin{proof} For two distinct polynomials $g,g'$ in the \gb returned by the \asFGLM algorithm, $\LT(g)$ does not divide any monomial in the support of $g'$. Hence the \gb is reduced. For two distinct polynomials $g,g'$ in the \gb returned by the \aBMS algorithm, $\LT(g)$ does not divide $\LT(g')$. Hence the \gb is minimal. However, there is no reason for $\LT(g)$ not to divide any monomial in the support of $g'$. \end{proof} \begin{example}\label{ex:reduced} We let $\bu=\pare{i^2+j^2-1}_{(i,j)\in\N^2}$ be a sequence and consider the $\DRL(y\prec x)$ ordering. The ideal of relations of $\bu$ is $I=\langle x\,y-x-y+1,x^2-y^2-2\,x+2\,y,y^3-3\,y^2+3\,y-1\rangle$. The \aBMS algorithm called on $\bu$ and the stopping monomial $y^5$ returns $g_1=x\,y-x-y+1$, with shift $x^2$, $g_2=x^2-\frac{1}{3}\,x\,y-y^2-\frac{5}{3}\,x+\frac{7}{3}\,y-\frac{1}{3}$, with shift $x^2$, and $g_3=y^3-\frac{1}{2}\,x\,y-3\,y^2+\frac{1}{2}\,x+\frac{7}{2}\,y-\frac{3}{2}$, with shift $y^2$. We can notice that $\{g_1,g_2,g_3\}$ is a \gb but not a reduced \gb of $I$. The \asFGLM algorithm called on $\bu$ and the set of all the monomials of degree at most $3$ yields the relations $g_1'=x\,y-x-y+1,g_2'=x^2-y^2-2\,x+2\,y,g_3'=y^3-3\,y^2+3\,y-1$. We can notice that $\{g_1',g_2',g_3'\}=\{g_1,g_2+\frac{1}{3}\,g_1,g_3+\frac{1}{2}\,g_1\}$ is a reduced \gb of $I$. \end{example} As for the \BMS algorithm, it is not hard to tweak the \aBMS algorithm so that it returns a reduced \gb. 
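These relations are easy to double-check numerically; the following illustrative sketch (our own encoding of polynomials as dictionaries mapping exponent pairs to coefficients) verifies that $g_1,g_2,g_3$ and the reduced $g_2'$ all annihilate the sequence of Example~\ref{ex:reduced} on a range of shifts:

```python
# Check that g1, g2, g3 (and the reduced g2') of the example above are
# relations of u_{i,j} = i^2 + j^2 - 1: all shifted brackets vanish.
from fractions import Fraction as Fr

def u(i, j):
    return Fr(i * i + j * j - 1)

def bracket(poly, i, j):
    # value of [x^i y^j * poly], with poly = {(a, b): coeff of x^a y^b}
    return sum(c * u(i + a, j + b) for (a, b), c in poly.items())

g1 = {(1, 1): Fr(1), (1, 0): Fr(-1), (0, 1): Fr(-1), (0, 0): Fr(1)}
g2 = {(2, 0): Fr(1), (1, 1): Fr(-1, 3), (0, 2): Fr(-1),
      (1, 0): Fr(-5, 3), (0, 1): Fr(7, 3), (0, 0): Fr(-1, 3)}
g3 = {(0, 3): Fr(1), (1, 1): Fr(-1, 2), (0, 2): Fr(-3),
      (1, 0): Fr(1, 2), (0, 1): Fr(7, 2), (0, 0): Fr(-3, 2)}
g2_red = {(2, 0): Fr(1), (0, 2): Fr(-1), (1, 0): Fr(-2), (0, 1): Fr(2)}
all_zero = all(bracket(g, i, j) == 0
               for g in (g1, g2, g3, g2_red)
               for i in range(8) for j in range(8))
```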
It suffices to perform an inter-reduction of the relations either at the end of each step of the main \textbf{For} loop or just before returning the \gb, see Algorithm~\ref{algo:tweaked_abms}. \begin{algorithm2e}[htbp!]\label{algo:tweaked_abms} \small \DontPrintSemicolon \TitleOfAlgo{Tweaked \aBMS algorithm.} \KwIn{A table $\bu=(u_{\bi})_{\bi\in\N^n}$ with coefficients in $\K$, a monomial ordering $\prec$, a given bound $d$ and a monomial $M$ as the stopping condition.} \KwOut{A set $G$ of relations generating $I_M$.} $T := \{m\in\K[\x], m\preceq M\}$.\; $G := \{1\}$.\; $S := \emptyset$.\; \Forall{$m\in T$}{ $S' := S$.\; \For{$g\in G$}{ \If{$\LM(g)| m$}{ \If{$\frac{m}{\LM(g)}\not\in\Stabilize(S)$ \KwAnd $\#\,\Stabilize\pare{S\cup\acc{\LM(g), \frac{m}{\LM(g)}}}> d$}{ \KwNext.\tcp*{skip this relation testing} } $e:=\cro{\frac{m}{\LM(g)}\,g}_{\bu}$\; \If{$e\neq 0$}{ $S':=S'\cup \acc{\cro{\frac{g}{e},\frac{m}{\LM(g)}}}$.\; } } } $S':= \min_{\fail(h)\in S'}\acc{[h,\fail(h)/\LM(h)]}$.\; $G':= \Border (S')$.\; \For{$g'\in G'$}{ Let $g\in G$ such that $\LM(g)|\LM(g')$.\; \uIf{$\LM(g) \nmid m$}{ $g':=\frac{\LM(g')}{\LM(g)}\,g$.\; } \uElseIf{$\exists\,h\in S, \frac{m}{\LM(g')} |\fail(h)$}{ $g':= \frac{\LM(g')}{\LM(g)}\,g -\cro{\frac{m}{\LM(h)}\,h}_{\bu}\, \frac{\LM(g')\,\fail(h)}{m}\,h$.\; } \lElse{ $g':=g$. } } $G := \InterReduce(G')$ \; $S := S'$.\; } \KwRet $G$. \end{algorithm2e} \subsection{Validity of relations}\label{ss:validity} One of the main differences between the \BMS and the \sFGLM algorithms is the validity of the relations they return. Consider a \gb returned by either algorithm: loosely speaking, the \sFGLM algorithm only ensures that all the relations in the \gb have the same shift, while for the \BMS algorithm, the smaller the leading term of a relation, the larger its computed shift. See~\cite[Theorem~19]{part1}. 
Naturally, if the upper bound on the staircase size given to the \aBMS algorithm is correct, then the shifts computed by the \aBMS algorithm are the same as those computed by the \BMS algorithm. In Examples~\ref{ex:asfglm} and~\ref{ex:fail_asfglm}, we can see that the shifts computed by the \asFGLM algorithm are not all the same. This is the main difference between the \sFGLM and the \asFGLM algorithms. In fact, we prove in the following Theorem~\ref{th:valid_shift_adapt} that the larger the leading term of a computed relation, the larger its shift. \begin{theorem}\label{th:valid_shift_adapt} Let $\bu$ be a sequence, $\prec$ be a monomial ordering and $d$ be the size of the output staircase $S$. Let $S_M=\{m\in S,\ m\prec M\}$. Calling the \aBMS algorithm on $\bu$, $\prec$, $d$ and a stopping monomial $M$ yields relations $g_1,\ldots,g_r$ and shifts $v_1,\ldots,v_r$ such that \[\forall\,i, 1\leq i\leq r,\quad v_i\,\LM(g_i)\preceq M\] and $g_i$ is valid with a shift $v_i$, potentially $0$. Calling the \asFGLM algorithm on $\bu$, $\prec$ and $d$ yields relations $g_1',\ldots,g_{r'}'$ such that \[\forall\,i, 1\leq i\leq r',\quad \deg\LM(g_i')\leq d\] and $g_i'$ has a shift $S$ if $\LM(g_i')\succ\max_{\prec}(S)$ and $S_{\LM(g_i')}\cup\{\LM(g_i')\}$ otherwise. \end{theorem} \begin{proof} The first part is clear from the behavior of both the \BMS and the \aBMS algorithms. The second part comes from the fact that if $g_i'$, with $\LM(g_i')=t$, is found before $S$ is completed, then it is because the matrix $H_{S^t\cup\{t\},S^t\cup\{t\}}$ had a rank deficiency, where $S^t$ is the state of $S$ at step $t$. Furthermore, $S^t=S_{\LM(g_i')}=S_t$. Otherwise, it is computed by solving $H_{S,S}\,\balpha+H_{S,\{t'\}}=0$, so that the relation has only been tested with a shift $S$. \end{proof} In a way, the behavior of the \asFGLM algorithm is the opposite of that of the \BMS and the \aBMS algorithms. 
Furthermore, if one uses Algorithm~\ref{algo:tweaked_adapt_sfglm} instead of the \asFGLM algorithm, then each returned relation $g_i'$ has a shift $S_{\LM(g_i')}\cup\{\LM(g_i')\}$. \begin{example} Let us consider the sequence $\bu=(F_{i+1})_{(i,j)\in\N^2}$, where $(F_i)_{i\in\N}$ is the Fibonacci sequence. Its ideal of relations is $\langle y-1,x^2-x-1\rangle$, so that its staircase has size $2$. Calling the \asFGLM algorithm on this sequence with this bound on the staircase leads us to create the matrices \begin{enumerate} \item[] $H_{\{1\},\{1\}}$, which is full rank, hence $1\in S$; \item[] $H_{\{1,y\},\{1,y\}}$, which is not full rank, hence the relation $y-1$ is found with a shift $\{1,y\}$; \item[] $H_{\{1,x\},\{1,x\}}$, which is full rank, hence $x\in S$. \end{enumerate} Now, the staircase is found, so it remains to solve $H_{S,S}\,\balpha+H_{S,\{x^2\}}=0$, yielding the relation $x^2-x-1$ with a shift $S$. \end{example} \subsection{Monomial ordering and Set of Terms}\label{ss:shape_position} In this section, we study how both algorithms handle a monomial ordering that is not a weighted degree ordering. The classical specification of the \BMS algorithm is that the ordering must be a weight ordering. However, when running the \aBMS algorithm, the upper bound on the staircase size ensures that we never visit monomials of degree more than twice this bound. Therefore, we can now use any monomial ordering with the \aBMS algorithm by simply enumerating, in increasing order, all the monomials of degree less than twice the upper bound. This allows us to deal with ideals in shape position with both the \aBMS and the \asFGLM algorithms. 
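This truncated enumeration is straightforward to realize; the sketch below lists, for $d=2$ in three variables, all monomials $x^ay^bz^c$ (stored as exponent triples and compared in $\LEX(z\prec y\prec x)$ order) of degree at most $2\,d-1=3$ up to the stopping monomial $x\,z$, matching the list visited in the example that follows:

```python
# Enumerate all monomials of degree <= 3 in increasing LEX(z < y < x) order,
# stopping at x*z. Exponent triples (a, b, c) stand for x^a y^b z^c; plain
# tuple comparison is exactly LEX with x as the largest variable.
from itertools import product

bound = 3
mons = sorted(m for m in product(range(bound + 1), repeat=3)
              if sum(m) <= bound)
visited = mons[:mons.index((1, 0, 1)) + 1]
# 1, z, z^2, z^3, y, y*z, y*z^2, y^2, y^2*z, y^3, x, x*z
```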
\begin{theorem}\label{th:shape_position_adapt} Let $\bu$ be a linear recurrent sequence whose ideal of relations $I$ is in shape position for the $\LEX(x_n\prec\cdots\prec x_2\prec x_1)$ ordering, \ie there exist $g_n$ squarefree and $f_{n-1},\ldots,f_1\in\K[x_n]$ with $\deg g_n=d,\deg f_i<d$ such that $I=\langle g_n(x_n),x_{n-1}-f_{n-1}(x_n),\ldots,x_1-f_1(x_n)\rangle$. Assuming no error is thrown in the execution of the \asFGLM algorithm called on $\bu$, $d$ and $\LEX(x_n\prec\cdots\prec x_2\prec x_1)$, the output is $I$. Calling the \aBMS algorithm on $\bu$, $d$ and $\LEX(x_n\prec\cdots\prec x_2\prec x_1)$ yields $I$. \end{theorem} \begin{proof} Assuming no error is thrown during the execution of the \asFGLM algorithm, the staircase is incrementally updated from $\emptyset$ to $\acc{1,x_n,\ldots,x_n^{d-1}}$. Then, the staircase size is reached and the early termination procedure solves the system $H_{S,S}\,\balpha+H_{S,\{t\}}=0$ for $t\in\acc{x_n^d,x_{n-1},\ldots,x_1}$, yielding $g_n(x_n),x_{n-1}-f_{n-1}(x_n),\ldots, x_1-f_1(x_n)$. For the \aBMS algorithm, we visit every monomial of degree at most $2\,d-1$. The first relation, $g_n(x_n)$, is computed by the algorithm visiting the monomials $1,x_n,\ldots,x_n^{2\,d-1}$, like the \BM algorithm. Then, each relation $x_i-f_i(x_n)$ is computed by visiting the monomials $x_i,x_i\,x_n,\ldots,x_i\,x_n^{d-1}$, all of degree at most $2\,d-1$. \end{proof} \begin{example} We let $\bu=(F_{4\,i+k+1})_{(i,j,k)\in\N^3}$, where $(F_i)_{i\in\N}$ is the Fibonacci sequence. The ideal of relations of $\bu$ is $I=\langle z^2-z-1,y-1,x-3\,z-2\rangle$, with a staircase of size $2$. 
For the \asFGLM algorithm called on $\bu$, $d=2$ and the $\LEX(z\prec y\prec x)$ ordering, the algorithm creates the matrices \begin{enumerate} \item[] $H_{\{1\},\{1\}}=\pare{ \begin{smallmatrix} 1 \end{smallmatrix}} $, which is full rank, hence $1\in S$; \item[] $H_{\{1,z\},\{1,z\}}=\pare{ \begin{smallmatrix} 1 &1\\1 &2 \end{smallmatrix}}$, which is full rank, hence $z\in S$. \end{enumerate} Now, the staircase is found, so it remains to solve \begin{enumerate} \item[] $H_{S,S}\,\balpha+H_{S,\{z^2\}}=0$, yielding the relation $g_1=z^2-z-1$; \item[] $H_{S,S}\,\balpha+H_{S,\{y\}}=0$, yielding the relation $g_2=y-1$; \item[] $H_{S,S}\,\balpha+H_{S,\{x\}}=0$, yielding the relation $g_3=x-3\,z-2$. \item[] The algorithm returns $\langle g_1,g_2,g_3\rangle=I$. \end{enumerate} Calling the \aBMS algorithm on $\bu$, $d=2$, the stopping monomial $x\,z$ and the $\LEX(z\prec y\prec x)$ ordering makes us visit all the monomials of degree at most $2\,d-1=3$ up to $x\,z$, \ie $\{1,z,z^2,z^3,y,y\,z,y\,z^2,y^2,y^2\,z,y^3,x,x\,z\}$. \begin{enumerate} \item[] The algorithm tests the relation $g=1$ in $u_{0,0,0}=F_1=1$, where it fails. It now has the relations $g_1=x,g_2=y$ and $g_3=z$. \item[] Testing $g_3=z$ in $u_{0,0,1}=F_2=1$, it updates the relation to $g_3=z-1$. Going on testing $g_3=z-1$ in $u_{0,0,2}=F_3=2$ and $u_{0,0,3}=F_4=3$, it is able to guess that $g_3=z^2-z-1$. The staircase is now $\{1,z\}$, of size $2$, so it has been found. As anticipated, there is no need to go further in that direction. \item[] Testing $g_2=y$ in $u_{0,1,0}=F_1=1$, the relation is updated to $g_2=y-1$. \item[] Then, it checks that this relation is valid in $u_{0,1,1}$ but skips $u_{0,1,2},u_{0,2,0},u_{0,2,1},u_{0,3,0}$ thanks to its criterion. \item[] It remains to test $g_1=x$ in $u_{1,0,0}=F_5=5$. It fails and the algorithm updates the relation to $g_1=x-5$. \item[] Finally, $g_1=x-5$ is tested in $u_{1,0,1}=F_6=8$ and the relation is updated to $g_1=x-3\,z-2$. 
\item[] The algorithm returns $\langle g_1,g_2,g_3\rangle=I$. \end{enumerate} \end{example} \subsection{Counting the number of table queries} The \asFGLM algorithm computes all the multi-Hankel matrices whose rows and columns are all the terms that are in the staircase or are a leading monomial of the \gb. Likewise, the \aBMS algorithm needs to test each relation, with support in $S\cup\LM(\cG)$, shifted by as many monomials as in $S$. Therefore, we have the following proposition. \begin{proposition}\label{prop:asfglm_queries} Let $\bu=(u_{\bi})_{\bi\in\N^n}$ be a sequence and $\cG$ be a reduced \gb of its ideal of relations for a total degree ordering. Let $S$ be the staircase of $\cG$, $S^+=S\cup\LM(\cG)$. Let $S+T=\{s\,t,\ s\in S,t\in T\}$ and $2\,S=S+S=\{s\,s',\ s,s'\in S\}$. Let $d_S$ be the greatest degree of the elements in $S$, $d_{\cG}$ be the greatest degree of the elements in $\cG$ and $d_{\max}=\max(d_S,d_{\cG})$. Let $\cS(d)$ be the simplex of all monomials of degree at most $d$. Then, the \aBMS algorithm needs to perform at least $\#\,(S+S^+)$ and at most $\#\,\cS(d_S+d_{\max})=\binom{n+d_S+d_{\max}}{n}$ queries to the sequence. The \asFGLM algorithm needs to perform at least $\#\,(2\,S)$ and fewer than $\#\,(2\,S^+)$ queries to $\bu$. In the worst case, this number grows as $(\#\,S^+)^2$. \end{proposition} \begin{figure}[htbp!] 
\pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.05,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.8,xmax=25.2,ymin=1.8,ymax=18, xtick={2,...,25}, ytick={1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90, 100,200,300,400,500,600,700,800,900,1000}, yticklabels={}, extra y ticks={1,5,10,50,100,500}, extra y tick labels={1,5,10,50,100,500}, ylabel={\#\,Queries/\#\,S}, ylabel style={at={(0.08,0.750)}} ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{} \addlegendimage{legend image with text=Shape position} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,25/8) (5,31/10) (6,60/18) (7,70/21) (8,110/32) (9,124/36) (10,176/50) (11,194/55) (12,258/72) (13,280/78) (14,356/98) (15,382/105) (16,470/128) (17,500/136) (18,600/162) (19,634/171) (20,746/200) (21,784/210) (22,908/242) (23,950/253) (24,1086/288) (25,1132/300) }; \addlegendentry{} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,10/3) (3,19/5) (4,30/7) (5,43/9) (6,58/11) (7,75/13) (8,94/15) (9,115/17) (10,138/19) (11,163/21) (12,190/23) (13,219/25) (14,250/27) (15,283/29) (16,318/31) (17,355/33) (18,394/35) (19,435/37) (20,478/39) (21,523/41) (22,570/43) (23,619/45) (24,670/47) (25,723/49) }; \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,6/2) (3,9/3) (4,11/4) (5,13/5) (6,15/6) 
(7,17/7) (8,19/8) (9,21/9) (10,23/10) (11,25/11) (12,27/12) (13,29/13) (14,31/14) (15,33/15) (16,35/16) (17,37/17) (18,39/18) (19,41/19) (20,43/20) (21,45/21) (22,47/22) (23,49/23) (24,51/24) (25,53/25) }; \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,41/8) (5,58/10) (6,103/18) (7,132/21) (8,198/32) (9,236/36) (10,320/50) (11,371/55) (12,478/72) (13,541/78) (14,663/98) (15,731/105) (16,882/128) (17,967/136) (18,1141/162) (19,1238/171) (20,1418/200) }; \addlegendentry{} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,10/3) (3,21/5) (4,36/7) (5,55/9) (6,78/11) (7,105/13) (8,136/15) (9,171/17) (10,210/19) (11,253/21) (12,300/23) (13,351/25) (14,406/27) (15,465/29) (16,528/31) (17,595/33) (18,666/35) (19,741/37) (20,820/39) (21,903/41) (22,990/43) (23,1081/45) (24,1176/47) (25,1275/49) }; \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,6/2) (3,10/3) (4,14/4) (5,18/5) (6,22/6) (7,26/7) (8,30/8) (9,34/9) (10,38/10) (11,42/11) (12,46/12) (13,50/13) (14,54/14) (15,58/15) (16,62/16) (17,66/17) (18,70/18) (19,74/19) (20,78/20) (21,82/21) (22,86/22) (23,90/23) (24,94/24) (25,98/25) }; \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{Both algorithms} \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid}, mark=square*,green,mark phase=3,mark repeat=4] plot coordinates { (2,10/3) (3,21/6) (4,36/10) (5,55/15) (6,78/21) (7,105/28) (8,136/36) (9,171/45) (10,210/55) (11,253/66) (12,300/78) (13,351/91) (14,406/105) (15,465/120) (16,528/136) (17,595/153) (18,666/171) (19,741/190) 
(20,820/210) (21,903/231) (22,990/253) (23,1081/276) (24,1176/300) (25,1275/325) }; \addlegendentry{} \end{axis} \end{tikzpicture} \caption{Number of table queries (\textsc{2D}): \asFGLM \& \aBMS} \label{fig:queries2Dadapt} \end{figure} In the experiments of Figures~\ref{fig:queries2Dadapt} and~\ref{fig:queries3Dadapt}, we can see that for the Rectangle family, the \asFGLM algorithm performs far fewer queries than the \aBMS algorithm. For the \textsc{L} shape family, the size of the staircase only grows as $O(d)$. Our experiments suggest that the number of queries grows as $O(d^n)$ for the \aBMS algorithm, while it only grows as $O(d^2)$ for the \asFGLM algorithm. This can be a huge advantage in dimension at least $3$. We can see that the \aBMS algorithm cannot take advantage of the size of the staircase in the \textsc{L} shape family, as it needs as many queries as in the Simplex family. Yet, although the \textsc{L} shape family is a worst case for the \asFGLM algorithm, it still queries fewer sequence terms for the \textsc{L} shape family than for the Simplex family. \begin{figure}[htbp!]
\pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.05,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.7,xmax=15.2,ymin=3.8,ymax=65, xtick={2,...,15}, ytick={1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90, 100,200,300,400,500,600,700,800,900, 1000,2000,3000,4000,5000,6000,7000,8000,9000, 10000,20000}, yticklabels={}, extra y ticks={1,5,10,50,100,500,1000,5000,10000}, extra y tick labels={1,5,10,50,100,500,1000,5000,10000}, ylabel={\#\,Queries/\#\,S}, ylabel style={at={(0.08,0.75)}} ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{} \addlegendimage{legend image with text=Shape position} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,72/16) (5,90/20) (6,174/36) (7,334/63) (8,534/96) (9,604/108) (10,1206/200) (11,1332/220) (12,1780/288) (13,2484/390) (14,3168/490) (15,3402/525) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,20/4) (3,42/7) (4,69/10) (5,102/13) (6,141/16) (7,186/19) (8,237/22) (9,294/25) (10,357/28) (11,426/31) (12,501/34) (13,582/37) (14,669/40) (15,762/43) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,8/2) (3,13/3) (4,18/4) (5,21/5) (6,25/6) (7,30/7) (8,33/8) (9,36/9) (10,40/10) (11,46/11) (12,49/12) (13,52/13) (14,55/14) (15,59/15) }; \addlegendentry{ \addlegendimage{empty legend} 
\addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,207/16) (5,330/20) (6,709/36) (7,1265/63) (8,2159/96) (9,2739/108) (10,4819/200) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,20/4) (3,56/7) (4,120/10) (5,218/13) (6,357/16) (7,545/19) (8,784/22) (9,1090/25) (10,1457/28) (11,1907/31) (12,2424/34) (13,3043/37) (14,3741/40) (15,4557/43) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,8/2) (3,13/3) (4,18/4) (5,23/5) (6,28/6) (7,33/7) (8,38/8) (9,43/9) (10,48/10) (11,53/11) (12,58/12) (13,63/13) (14,68/14) (15,73/15) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{Both algorithms} \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{} \addplot[thick,every mark/.append style={solid}, mark=square*,green,mark phase=3,mark repeat=4] plot coordinates { (2,20/4) (3,56/10) (4,120/20) (5,220/35) (6,364/56) (7,560/84) (8,816/120) (9,1140/165) (10,1540/220) (11,2024/286) (12,2600/364) (13,3276/455) (14,4060/560) (15,4960/680) }; \addlegendentry{ \end{axis} \end{tikzpicture} \caption{Number of table queries (\textsc{3D}): \asFGLM \& \aBMS} \label{fig:queries3Dadapt} \end{figure} \subsection{Counting the number of basic operations} The complexity of the \BMS algorithm has been studied in~\cite{Sakata09} yielding the following proposition. \begin{proposition}\label{prop:bms_basicop} Let $\bu=(u_{\bi})_{\bi\in\N^n}$ be a sequence, $\cG$ be a minimal \gb of its ideal of relations for a total degree ordering and $S$ be the staircase of $\cG$. 
Then, the \BMS algorithm performs at most $O\pare{(\#\,S)^2\,\#\,\LM(\cG)}$ operations to recover the ideal of relations of $\bu$. \end{proposition} Obviously, the bound of Proposition~\ref{prop:bms_basicop} on the number of basic operations also applies to the \aBMS algorithm. Yet, since the number of skipped relation tests is hard to predict, it is not clear how to sharpen it for the \aBMS algorithm. The \asFGLM algorithm computes the rank of a matrix of size at most $\#\,S$. Furthermore, it solves as many linear systems with this matrix as there are polynomials in the \gb. All in all, we have the following result. \begin{proposition}\label{prop:asfglm_basicop} Let $\bu=(u_{\bi})_{\bi\in\N^n}$ be a sequence, $\cG$ be a reduced \gb of its ideal of relations for a total degree ordering and $S$ be the staircase of $\cG$. Then, the number of operations performed by the \asFGLM algorithm to recover the ideal of relations of $\bu$ is at most $O\pare{(\#\,S)^2\,(\#\,S+\#\,\LM(\cG))}$. \end{proposition} In Figures~\ref{fig:basicop2Dadapt} and~\ref{fig:basicop3Dadapt} below, we report on the ratio between the number of basic operations and the cube of the size of the staircase. \begin{figure}[htbp!]
\pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.05,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.8,xmax=25.2, ymin=0.08,ymax=22, xtick={4,...,25}, ytick={0.09,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,2,3,4,5,6,7,8,9, 10,20,30,40,50,60,70,80,90,100,200,300,400,500,600,700,800,900, 1000,2000,3000,4000,5000,6000,7000,8000,9000,10000,20000}, yticklabels={}, extra y ticks={0.1,0.5,1,5,10,50,100,500,1000,5000,10000}, extra y tick labels={0.1,0.5,1,5,10,50,100,500,1000,5000,10000}, ylabel={\#\,Basic Op/\#\,S$^3$}, ylabel style={at={(0.08,0.75)}}, ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{} \addlegendimage{legend image with text=Shape position} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,210/8^3) (5,337/10^3) (6,1595/18^3) (7,2355/21^3) (8,7147/32^3) (9,9871/36^3) (10,24599/50^3) (11,32254/55^3) (12,69587/72^3) (13,87739/78^3) (14,170015/98^3) (15,208053/105^3) (16,371335/128^3) (17,443935/136^3) (18,742787/162^3) (19, 871616/171^3) (20,1384599/200^3) (21,1600259/210^3) (22,2436147/242^3) (23,2780359/253^3) (24,4085075/288^3) (25,4613103/300^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,52/3^3) (3,128/5^3) (4,215/7^3) (5,342/9^3) (6,517/11^3) (7,748/13^3) (8,1043/15^3) (9,1410/17^3) (10,1857/19^3) (11,2392/21^3) (12,3023/23^3) (13,3758/25^3) (14,4605/27^3) (15,5572/29^3) (16,6667/31^3) (17,7898/33^3) (18,9273/35^3) (19,10800/37^3) 
(20,12487/39^3) (21,14342/41^3) (22,16373/43^3) (23,18588/45^3) (24,20995/47^3) (25,23602/49^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,green,mark phase=3,mark repeat=4] plot coordinates { (2,52/3^3) (3,200/6^3) (4,615/10^3) (5,1610/15^3) (6,3724/21^3) (7,7812/28^3) (8,15150/36^3) (9,27555/45^3) (10,47520/55^3) (11,78364/66^3) (12,124397/78^3) (13,191100/91^3) (14,285320/105^3) (15,415480/120^3) (16,591804/136^3) (17,826557/153^3) (18,1134300/171^3) (19,1532160/190^3) (20,2040115/210^3) (21,2681294/231^3) (22,3482292/253^3) (23,4473500/276^3) (24,5689450/300^3) (25,7169175/325^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,22/2^3) (3,41/3^3) (4,59/4^3) (5,83/5^3) (6,114/6^3) (7,153/7^3) (8,201/8^3) (9,259/9^3) (10,328/10^3) (11,409/11^3) (12,503/12^3) (13,611/13^3) (14,734/14^3) (15,873/15^3) (16,1029/16^3) (17,1203/17^3) (18,1396/18^3) (19,1609/19^3) (20,1843/20^3) (21,2099/21^3) (22,2378/22^3) (23,2681/23^3) (24,3009/24^3) (25,3363/25^3) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,2115/8^3) (5,3108/10^3) (6,12589/18^3) (7,18206/21^3) (8,50410/32^3) (9,65117/36^3) (10,142499/50^3) (11,180157/55^3) (12,352288/72^3) (13,426269/78^3) (14,738578/98^3) (15,865913/105^3) (16,1432507/128^3) (17,1664165/136^3) (18,2567128/162^3) (19, 2926327/171^3) (20,4307996/200^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,438/3^3) (3,1111/5^3) (4,2259/7^3) (5,3721/9^3) (6,5817/11^3) (7,8417/13^3) (8,11375/15^3) (9,14881/17^3) (10,19457/19^3) (11,23359/21^3) (12,29387/23^3) (13,34697/25^3) (14,40977/27^3) (15,48195/29^3) (16,55495/31^3) (17,63243/33^3) 
(18,72697/35^3) (19,80719/37^3) (20,91271/39^3) (21,101213/41^3) (22,113021/43^3) (23,122455/45^3) (24,136731/47^3) (25,148925/49^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,green,mark phase=3,mark repeat=4] plot coordinates { (2,427/3^3) (3,1759/6^3) (4,5241/10^3) (5,12860/15^3) (6,27552/21^3) (7,53414/28^3) (8,95823/36^3) (9,161690/45^3) (10,259672/55^3) (11,400330/66^3) (12,596325/78^3) (13,862630/91^3) (14,1216684/105^3) (15,1678622/120^3) (16,2271453/136^3) (17,3021248/153^3) (18,3957182/171^3) (19,5112366/190^3) (20,6522843/210^3) (21,8228829/231^3) (22,10275367/253^3) (23,12709953/276^3) (24,15585915/300^3) (25,18960680/325^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,136/2^3) (3,337/3^3) (4,686/4^3) (5,1108/5^3) (6,1966/6^3) (7,2794/7^3) (8,3205/8^3) (9,4506/9^3) (10,5598/10^3) (11,6283/11^3) (12,8114/12^3) (13,9678/13^3) (14,10331/14^3) (15,12964/15^3) (16,15034/16^3) (17,16314/17^3) (18,18661/18^3) (19,21618/19^3) (20,23282/20^3) (21,26869/21^3) (22,29037/22^3) (23,31586/23^3) (24,35273/24^3) (25,39316/25^3) }; \addlegendentry{ \end{axis} \end{tikzpicture} \caption{Number of basic operations (\textsc{2D}): \asFGLM \& \aBMS} \label{fig:basicop2Dadapt} \end{figure} \begin{figure}[htbp!] 
\pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.05,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.7,xmax=15.2, ymin=0.08,ymax=250, xtick={2,...,15}, ytick={0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,2,3,4,5,6,7,8,9,10, 20,30,40,50,60,70,80,90,100,200,300,400,500,600,700,800,900, 1000,2000,3000}, yticklabels={}, extra y ticks={0.1,0.5,1,5,10,50,100,500,1000}, extra y tick labels={0.1,0.5,1,5,10,50,100,500,1000}, ylabel={\#\,Basic Op/\#\,S$^3$}, ylabel style={at={(0.08,0.75)}}, ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{} \addlegendimage{legend image with text=ShapePosition} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,1244/16^3) (5,2132/20^3) (6,9923/36^3) (7,47318/63^3) (8,159544/96^3) (9,225344/108^3) (10,1379580/200^3) (11,1831620/220^3) (12,4076069/288^3) (13,10052631/390^3) (14,19865227/490^3) (15,24417922/525^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,140/4^3) (3,384/7^3) (4,628/10^3) (5,998/13^3) (6,1521/16^3) (7,2224/19^3) (8,3134/22^3) (9,4278/25^3) (10,5683/28^3) (11,7376/31^3) (12,9384/34^3) (13,11734/37^3) (14,14453/40^3) (15,17568/43^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,green,mark phase=3,mark repeat=4] plot coordinates { (2,140/4^3) (3,1000/10^3) (4,5350/20^3) (5,22575/35^3) (6,78848/56^3) (7,237160/84^3) (8,633100/120^3) (9,1534225/165^3) (10,3433100/220^3) 
(11,7186608/286^3) (12,14216930/364^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,31/2^3) (3,57/3^3) (4,76/4^3) (5,101/5^3) (6,133/6^3) (7,173/7^3) (8,222/8^3) (9,281/9^3) (10,351/10^3) (11,433/11^3) (12,528/12^3) (13,637/13^3) (14,761/14^3) (15,901/15^3) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,25728/16^3) (5,46093/20^3) (6,205772/36^3) (7,919757/63^3) (8,2802347/96^3) (9,3880340/108^3) (10,19393287/200^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,1988/4^3) (3,6477/7^3) (4,16081/10^3) (5,36225/13^3) (6,58536/16^3) (7,104025/19^3) (8,155495/22^3) (9,229920/25^3) (10,317121/28^3) (11,422449/31^3) (12,546251/34^3) (13,710852/37^3) (14,865148/40^3) (15,1106272/43^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,green,mark phase=3,mark repeat=4] plot coordinates { (2,1774/4^3) (3,14591/10^3) (4,75757/20^3) (5,298521/35^3) (6,964815/56^3) (7,2689885/84^3) (8,6679544/120^3) (9,15125328/165^3) (10,31763926/220^3) (11,62657181/286^3) (12,117227645/364^3) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,292/2^3) (3,668/3^3) (4,2213/4^3) (5,3551/5^3) (6,5006/6^3) (7,7142/7^3) (8,9096/8^3) (9,12903/9^3) (10,16536/10^3) (11,20235/11^3) (12,24496/12^3) (13,32110/13^3) (14,39217/14^3) (15,45012/15^3) }; \addlegendentry{ \end{axis} \end{tikzpicture} \caption{Number of basic operations (\textsc{3D}): \asFGLM \& \aBMS} \label{fig:basicop3Dadapt} \end{figure} It seems that the \asFGLM always perform fewer operations than the 
\aBMS algorithm. However, as the graphs suggest, it is possible that, in dimension $2$, for larger parameters, the \aBMS algorithm becomes more efficient than the \asFGLM algorithm. Concerning the \textsc{L} shape family, although the \aBMS algorithm does not reduce its number of table queries by much, it in fact performs far fewer basic operations than the \BMS algorithm. For instance, in~\cite[Section~6]{part1}, we can see that the \BMS algorithm performs four times (\resp seven times) as many basic operations as the \aBMS algorithm in dimension $2$ (\resp dimension $3$). It is also possible that the larger number of operations the \aBMS algorithm performs compared to the \sFGLM algorithm is due to the larger number of queries it needs to recover the relations. Therefore, we now also compare the ratio between their number of basic operations and their number of queries in Figures~\ref{fig:basicop/queries2Dadapt} and~\ref{fig:basicop/queries3Dadapt}. \begin{figure}[htbp!] \pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.1,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.8,xmax=25.2, ymin=3.8,ymax=25000, xtick={3,...,25}, ytick={1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100, 200,300,400,500,600,700,800,900,1000, 2000,3000,4000,5000,6000,7000,8000,9000,10000, 20000,30000,40000,50000,60000,70000,80000,90000,100000, 200000}, yticklabels={}, extra y ticks={5,10,50,100,500,1000,5000,10000,50000,100000}, extra y tick labels={5,10,50,100,500,1000,5000,10000,50000,100000}, ylabel={\#\,Basic Op/\#\,Queries}, ylabel style={at={(0.08,0.72)}}, ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{}
\addlegendimage{legend image with text=Shape Position} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,210/25) (5,337/31) (6,1595/60) (7,2355/70) (8,7147/110) (9,9871/124) (10,24599/176) (11,32254/194) (12,69587/258) (13,87739/280) (14,170015/356) (15,208053/382) (16,371335/470) (17,443935/500) (18,742787/600) (19,871616/634) (20,1384599/746) (21,1600259/784) (22,2436147/908) (23,2780359/950) (24,4085075/1086) (25,4613103/1132) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,52/10) (3,128/19) (4,215/30) (5,342/43) (6,517/58) (7,748/75) (8,1043/94) (9,1410/115) (10,1857/138) (11,2392/163) (12,3023/190) (13,3758/219) (14,4605/250) (15,5572/283) (16,6667/318) (17,7898/355) (18,9273/394) (19,10800/435) (20,12487/478) (21,14342/523) (22,16373/570) (23,18588/619) (24,20995/670) (25,23602/723) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,green,mark phase=3,mark repeat=4] plot coordinates { (2,52/10) (3,200/21) (4,615/36) (5,1610/55) (6,3724/78) (7,7812/105) (8,15150/136) (9,27555/171) (10,47520/210) (11,78364/253) (12,124397/300) (13,191100/351) (14,285320/406) (15,415480/465) (16,591804/528) (17,826557/595) (18,1134300/666) (19,1532160/741) (20,2040115/820) (21,2681294/903) (22,3482292/990) (23,4473500/1081) (24,5689450/1176) (25,7169175/1275) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,22/6) (3,41/9) (4,59/11) (5,83/13) (6,114/15) (7,153/17) (8,201/19) (9,259/21) (10,328/23) (11,409/25) (12,503/27) (13,611/29) (14,734/31) (15,873/33) (16,1029/35) (17,1203/37) (18,1396/39) (19,1609/41) (20,1843/43) (21,2099/45) (22,2378/47) (23,2681/49) (24,3009/51) (25,3363/53) }; 
\addlegendentry{ \addlegendimage{empty legend} \addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,2115/41) (5,3108/58) (6,12589/103) (7,18206/132) (8,50410/198) (9,65117/236) (10,142499/320) (11,180157/371) (12,352288/478) (13,426269/541) (14,738578/663) (15,865913/731) (16,1432507/882) (17,1664165/967) (18,2567128/1141) (19,2926327/1238) (20,4307996/1418) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,438/10) (3,1111/21) (4,2259/36) (5,3721/55) (6,5817/78) (7,8417/105) (8,11375/136) (9,14881/171) (10,19457/210) (11,23359/253) (12,29387/300) (13,34697/351) (14,40977/406) (15,48195/465) (16,55495/528) (17,63243/595) (18,72697/666) (19,80719/741) (20,91271/820) (21,101213/903) (22,113021/990) (23,122455/1081) (24,136731/1176) (25,148925/1275) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,green,mark phase=3,mark repeat=4] plot coordinates { (2,427/10) (3,1759/21) (4,5241/36) (5,12860/55) (6,27552/78) (7,53414/105) (8,95823/136) (9,161690/171) (10,259672/210) (11,400330/253) (12,596325/300) (13,862630/351) (14,1216684/406) (15,1678622/465) (16,2271453/528) (17,3021248/595) (18,3957182/666) (19,5112366/741) (20,6522843/820) (21,8228829/903) (22,10275367/990) (23,12709953/1081) (24,15585915/1176) (25,18960680/1275) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,136/6) (3,337/10) (4,686/14) (5,1108/18) (6,1966/22) (7,2794/26) (8,3205/30) (9,4506/34) (10,5598/38) (11,6283/42) (12,8114/46) (13,9678/50) (14,10331/54) (15,12964/58) (16,15034/62) (17,16314/66) (18,18661/70) (19,21618/74) (20,23282/78) (21,26869/82) (22,29037/86) (23,31586/90) (24,35273/94) (25,39316/98) }; \addlegendentry{ 
\end{axis} \end{tikzpicture} \caption{Number of basic operations by queries (\textsc{2D}): \asFGLM \& \aBMS} \label{fig:basicop/queries2Dadapt} \end{figure} In dimension $2$, the \asFGLM algorithm seems to have a better ratio between the number of operations and the number of queries than the \aBMS algorithm. Yet, once again, it is possible that this statement is not true for larger $d$. \begin{figure}[htbp!] \pgfplotsset{ small, width=12cm, height=7cm, legend cell align=left, legend columns=5, legend style={at={(-0.1,0.98)},anchor=south west,font=\scriptsize, } } \centering \begin{tikzpicture}[baseline] \begin{axis}[ ymode=log, xlabel={$d$}, xlabel style={at={(0.95,0.1)}}, xmin=3.7,xmax=15.2, ymin=3.8,ymax=25000, xtick={2,...,15}, ytick={1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100, 200,300,400,500,600,700,800,900,1000,2000,3000,4000,5000,6000, 7000,8000,9000,10000,20000,30000,40000,50000,60000}, yticklabels={}, extra y ticks={5,10,50,100,500,1000,5000,10000,50000}, extra y tick labels={5,10,50,100,500,1000,5000,10000,50000}, ylabel={\#\,Basic Op/\#\,Queries}, ylabel style={at={(0.08,0.72)}}, ] \addlegendimage{empty legend} \addlegendentry{} \addlegendimage{legend image with text=Rectangle} \addlegendentry{} \addlegendimage{legend image with text=\textsc{L} shape} \addlegendentry{} \addlegendimage{legend image with text=Simplex} \addlegendentry{} \addlegendimage{legend image with text=Shape Position} \addlegendentry{} \addlegendimage{empty legend} \addlegendentry{\asFGLM} \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,red,mark phase=1,mark repeat=4] plot coordinates { (4,1244/72) (5,2132/90) (6,9923/174) (7,47318/334) (8,159544/534) (9,225344/604) (10,1379580/1206) (11,1831620/1332) (12,4076069/1780) (13,10052631/2484) (14,19865227/3168) (15,24417922/3402) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,blue,mark phase=2,mark repeat=4] plot coordinates { (2,140/20) (3,384/42) (4,628/69) 
(5,998/102) (6,1521/141) (7,2224/186) (8,3134/237) (9,4278/294) (10,5683/357) (11,7376/426) (12,9384/501) (13,11734/582) (14,14453/669) (15,17568/762) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,green,mark phase=3,mark repeat=4] plot coordinates { (2,140/20) (3,1000/56) (4,5350/120) (5,22575/220) (6,78848/364) (7,237160/560) (8,633100/816) (9,1534225/1140) (10,3433100/1540) (11,7186608/2024) (12,14216930/2600) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid}, mark=triangle*,dashed,orange,mark phase=4,mark repeat=4] plot coordinates { (2,31/8) (3,57/13) (4,76/18) (5,101/21) (6,133/25) (7,173/30) (8,222/33) (9,281/36) (10,351/40) (11,433/46) (12,528/49) (13,637/52) (14,761/55) (15,901/59) }; \addlegendentry{ \addlegendimage{empty legend} \addlegendentry{\aBMS} \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,red,mark phase=1,mark repeat=4] plot coordinates { (4,25728/207) (5,46093/330) (6,205772/709) (7,919757/1265) (8,2802347/2159) (9,3880340/2739) (10,19393287/4819) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,blue,mark phase=2,mark repeat=4] plot coordinates { (2,1988/20) (3,6477/56) (4,16081/120) (5,36225/218) (6,58536/357) (7,104025/545) (8,155495/784) (9,229920/1090) (10,317121/1457) (11,422449/1907) (12,546251/2424) (13,710852/3043) (14,865148/3741) (15,1106272/4557) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,green,mark phase=3,mark repeat=4] plot coordinates { (2,1774/20) (3,14591/56) (4,75757/120) (5,298521/220) (6,964815/364) (7,2689885/560) (8,6679544/816) (9,15125328/1140) (10,31763926/1540) (11,62657181/2024) (12,117227645/2600) }; \addlegendentry{ \addplot[thick,every mark/.append style={solid,rotate=180}, mark=triangle*,dotted,orange,mark phase=4,mark repeat=4] plot coordinates { (2,292/8) (3,668/13) (4,2213/18) (5,3551/23) (6,5006/28) 
(7,7142/33) (8,9096/38) (9,12903/43) (10,16536/48) (11,20235/53) (12,24496/58) (13,32110/63) (14,39217/68) (15,45012/73) }; \addlegendentry{} \end{axis} \end{tikzpicture} \caption{Number of basic operations by queries (\textsc{3D}): \asFGLM \& \aBMS} \label{fig:basicop/queries3Dadapt} \end{figure} In dimension $3$, however, our experiments lead us to believe that this ratio will always be larger for the \aBMS algorithm than for the \asFGLM algorithm. \subsection{A polynomial interpretation of the \BMS algorithm} Let $\bu=(u_{\bi})_{\bi\in\N^n}$ be a table and $\prec$ a weight ordering on $\x$. We let $\cT_0=\{0\}\cup\{\x^{\bi},\ \bi\in\N^n\}$ and extend $\prec$ (still denoted by $\prec$) to $\cT_0$ with the convention that $0\prec 1$. The goal is to iterate on a monomial $m$, considering at each step only the table $(u_{\bi})_{\bi\in\{\bk,\ \x^{\bk}\preceq m\}}$. As we only know the table $\bu$ partially, we need to define some notions according to this partial knowledge at step $m$. \begin{definition} \label{def:shift} Let $m\in\cT_0$ and let $f\in\K[\x]$. We say that the relation $f$ is \emph{valid up to $m$} whenever \[\forall t\in\cT_0,\, \LM(t\,f) \preceq m \Rightarrow [t\,f]=0.\] We thus define the \emph{shift} of $f$ as $\shift (f)=\frac{m}{\LM(f)}$. We say that the relation $f$ \emph{fails} at $m$ whenever \begin{align*} \forall t\in\cT_0,\, \LM(t\,f) \prec m \Rightarrow [t\,f]&=0,\\ \cro{\frac{m}{\LM(f)}\,f}&\neq 0. \end{align*} We then define the \emph{fail} of $f$ as $\fail(f)=m$. If the relation $f$ never fails, that is, if $[t\,f]=0$ for all $t\in\cT_0$, then by convention $\fail(f)=\shift(f)=+\infty$. \end{definition} \begin{proposition} Let $\bu$ be a table and $f\in\K[\x]$ such that $\fail(f)\succ m$. For all $g\in\K[\x]$, if $\LM(g\,f)\preceq m$, then $[g\,f]=0$. \end{proposition} The following proposition shows how to combine two failing relations with the same shift in order to obtain a new relation that is valid with a bigger shift.
\begin{proposition} \label{prop:augm_shift} Let $f_1$ and $f_2$ be two relations such that $v=\frac{\fail (f_1)}{\LM(f_1)}=\frac{\fail (f_2)}{\LM (f_2)}$ and $e_1=\cro{v\,f_1}$, $e_2=\cro{v\,f_2}$. Let $f$ be the nonzero polynomial $f_1 - \frac{e_1}{e_2}\,f_2$. Then, for $i\in\{1,2\}$, $\fail(f)\succ \fail(f_i)$, \ie $\frac{\fail(f)}{\LM(f)}\succ v$. \end{proposition} \begin{proof} For any $c\in\K$ and any $\mu\in\K[\x]$ such that $\LM(\mu)\prec v$, we have $[\mu\,(f_1+c\,f_2)]=[\mu\,f_1]+c\,[\mu\,f_2]=0$, hence $\fail (f_1+c\,f_2)\succeq \fail (f_i)$. It remains to prove that for a good choice of $c$, we have a strict inequality: since $[v\,(f_1+c\,f_2)]=[v\,f_1]+c\,[v\,f_2]=e_1+c\,e_2$, it is clear that $[v\,f]=[v\,(f_1-\frac{e_1}{e_2}\,f_2)]=0$, so that $\fail (f)\succ v\,\LM(f)\succeq\fail (f_i)$. \end{proof} \begin{definition}\label{def:im} Using the same notation as in Definition~\ref{def:staircase}, we let \[I_m = \{f\in\K[\x],\ \fail\pare{f}\succ m\},\] and let $\cG_m$ be the set of least elements of $I_m$ for $\prec$; it is a truncated \gb of $I_m$: \begin{align*} \cG_m &= \min_{\prec}\{g,\ g\in I_m\},\\ S_m &= \Staircase(\cG_m). \end{align*} \end{definition} \begin{example} Let us go back to Example~\ref{ex:binom} with the sequence $\bin = \left(\binom{i}{j}\right)_{(i,j)\in\N^2}$. Consider $\K[x,y]$ with the $\DRL(y\prec x)$ ordering, and $m=x^2$.
\[ \begin{ytableau} \none[y^2] & 0 \\ \none[y] & 0 & 1 \\ \none[1] & 1 & 1 & *(green)1 \\ \none & \none[1] & \none[x] & \none[x^2] \end{ytableau} \] From this table, on the one hand, we can deduce that \begin{itemize} \item since it is not identically $0$, there is no relation with leading monomial $1$ valid up to $x^2$, hence $1\in S_{x^2}$; \item since $[y+\alpha]=\alpha$ and $[x\,(y+\alpha)]=1+\alpha$, there is no relation with leading monomial $y$ valid up to $x\,y$ and thus $x^2$, hence $y\in S_{x^2}$; \item since $[y\,(x+\beta\,y+\alpha)]=1$, there is no relation with leading monomial $x$ valid up to $x\,y$ and thus $x^2$, hence $x\in S_{x^2}$. \end{itemize} On the other hand, we can check that \begin{itemize} \item since $[y^2]=0$, relation $y^2$ is valid up to $y^2$ and thus $x^2$, hence $y^2\in\cT\setminus S_{x^2}$; \item since $[x\,y-1]=0$, relation $x\,y-1$ is valid up to $x\,y$ and thus $x^2$, hence $x\,y\in\cT\setminus S_{x^2}$; \item since $[x^2-x]=0$, relation $x^2-x$ is valid up to $x^2$, hence $x^2\in\cT\setminus S_{x^2}$. \end{itemize} Therefore, $S_{x^2} = \{1,y,x\}$, $\max_|(S_{x^2})=\{y,x\}$ and $\min_|(\cT\setminus S_{x^2})=\{y^2,x\,y,x^2\}$. This is summed up in the following diagram. \[ \begin{ytableau} \none[y^2] & \bigodot \\ \none[y] & \bigotimes & \bigodot\\ \none[1] & & \bigotimes & *(green)\bigodot \\ \none & \none[1] & \none[x] & \none[x^2] \end{ytableau} \quad\quad \begin{array}{cl} \\\\ \bigodot: &\min_|(\cT\setminus S_{x^2})\\ \bigotimes: &\max_|(S_{x^2}) \end{array} \] Let us notice that many relations with respective leading monomials $y^2,x\,y,x^2$ suit actually. These would be $y^2-\alpha_1\,x+\alpha_y\,y+\alpha_1,x\,y-(1+\alpha_1)\,x +\alpha_y\,y+\alpha_1$ and $x^2-(1+\alpha_1)\,x+\alpha_y\,y+\alpha_1$. Furthermore, $I_{x^2}$ is not stable by addition: $(x^2-x),(x^2-2\,x+1)\in I_{x^2}$ but $x^2-x-(x^2-2\,x+1)=(x-1)\not\in I_{x^2}$ since $\fail\pare{x-1}=x\,y$. Hence, $I_{x^2}$ is not an ideal of $\K[x,y]$. 
For $m=x^3$, with the following table, we find that \[ \begin{ytableau} \none[y^3] &0\\ \none[y^2] & 0 &0 \\ \none[y] & 0 & 1 & 2\\ \none[1] & 1 & 1 & 1 & *(green)1 \\ \none & \none[1] & \none[x] & \none[x^2] &\none[x^3] \end{ytableau} \] \begin{itemize} \item since $[y^2]=[y\,y^2]=[x\,y^2]=0$, then $y^2$ is valid up to $x\,y^2$ and thus $x^3$; \item since $[x\,y-1]=[y\,(x\,y-1)]=0$ and $[x\,(x\,y-1)]=1$, then $x\,y-1$ fails at $x^2\,y$. Yet, since $[y]=[y\,y]=0$ and $[x\,y]=1$, then by Proposition~\ref{prop:augm_shift}, $[x\,y-y-1]=[y\,(x\,y-y-1)]=0$ and $[x\,(x\,y-y-1)]$ vanishes as well. Hence, $x\,y-y-1$ is valid up to $x^2\,y$ and thus $x^3$; \item since $[x^2-x]=0$ and $[y\,(x^2-x)]=1$, then $x^2-x$ fails at $x^2\,y$. Likewise, since $[x-1]=0$ and $[y\,(x-1)]=1$, then $[x^2-2\,x+1]=0$ and $[y\,(x^2-2\,x+1)]=0$. Furthermore, $[x\,(x^2-2\,x+1)]=0$, so that $x^2-2\,x+1$ is valid up to $x^3$. \end{itemize} Therefore, $S_{x^3} = \{1,y,x\}$, $\max_|(S_{x^3})=\{y,x\}$ and $\min_|(\cT\setminus S_{x^3})=\{y^2,x\,y,x^2\}$. We can also check that these relations span the only valid relations with support in $S_{x^3}\cup\{y^2,x\,y,x^2\}$. \[ \begin{ytableau} \none[y^3] & \\ \none[y^2] & \bigodot & \\ \none[y] & \bigotimes & \bigodot & \\ \none[1] & & \bigotimes & \bigodot & *(green)\\ \none & \none[1] & \none[x] & \none[x^2] & \none[x^3]\\ \end{ytableau} \] \end{example} Although $I_m$ is not an ideal in general, we have the following results: \begin{proposition}\label{prop:Im-closed} Using the notation of Definitions~\ref{def:shift} and~\ref{def:im}, \begin{enumerate} \item \label{eq:stab} $I_m$ is closed under multiplication by elements of $\K[\x]$, \item for all monomials $t,t'$ such that $t| t'$, \begin{enumerate} \item \label{it:staircase} if $t'\in S_m$, then $t\in S_m$.
\item \label{it:compl_staircase} if $t\in\cT\setminus S_m$, then $t'\in\cT\setminus S_m$, \end{enumerate} \end{enumerate} \end{proposition} Moreover, it is clear that the sequence $(I_m)_{m\in\cT_0}$ is decreasing and that if $\bu$ is linear recurrent then $I = \bigcap_{m\in\cT_0}I_m$. Therefore, $\left(S_m \right)_{m\in\cT_0}$ is increasing and its limit is the finite target staircase $S$. Hence, for $m$ big enough, $S_m$ will be the target staircase. We will give an upper bound in Proposition~\ref{prop:upperbound}. The following result gives an intrinsic characterization of $S_m$ that is key in the iteration of the \BMS algorithm. \begin{proposition}\label{prop:iter} For every monomial $m\in\cT_0$, $S_m = \acc{\frac{\fail(f)}{\LM(f)},\ f\notin I_m}$. Furthermore, let $m^+$ be the successor of $m$. Let $s$ be a monomial in the staircase $S_{m^+}$. Then, $s$ was added at step $m^+$, \ie $s\notin S_m$, if, and only if, $s|m^+$ and $\frac{m^+}{s}\in S_{m^+}\setminus S_m$. \end{proposition} \begin{proof} We shall prove the first assertion by double inclusion. If $s=\frac{\fail(f)}{\LM(f)}$ then for all $g\in\K[\x]$ such that $\LM(g)=s$, $\fail(g)\preceq m$, hence $s\notin\LM(I_m)$, \ie $s\in S_m$. The reverse inclusion is proved by induction on $m$. For $m=0$, $S_m=\emptyset$ and there is nothing to prove. Let us assume the inclusion is satisfied for a monomial $m$. Let $s\in S_{m^+}$. On the one hand, if $s\in S_m$, then there exists $f\in\K[\x]\setminus I_m\subseteq\K[\x]\setminus I_{m^+}$ such that $s=\frac{\fail(f)}{\LM(f)}$. If, on the other hand, $s\in S_{m^+}\setminus S_m$, then there exists a relation $f\in\K[\x]$ such that $\LM(f)=s$, and $m\prec \fail(f) \preceq m^+$, hence $\fail(f)=m^+$ and $s$ divides $m^+$. Let us assume that for all $g\in\K[\x]$ with $\LM(g)=\frac{m^+}{s}$, we have $\fail (g)\preceq m\prec m^+$. Therefore, $\frac{m^+}{s}\in S_m$ and there exists $h\notin I_m$ such that $\frac{\fail(h)}{\LM(h)}=\frac{m^+}{s}$.
By Proposition~\ref{prop:augm_shift}, there is $\alpha\in\K$ such that $\fail(f-\alpha\,h)\succ m^+$. Since $\fail(h)\preceq m\prec m^+$, we have $\LM(h)\preceq s$ and $\LM(f-\alpha\,h)=s$, hence $\frac{\fail(f-\alpha\,h)}{\LM(f-\alpha\,h)}\succ\frac{m^+}{s}$. This contradicts the fact that $\frac{m^+}{s}\in S_m$. Thus there exists $g\in\K[\x]$ with $\LM(g)=\frac{m^+}{s}$ and $\fail(g)\succeq m^+$. Let $g$ be such a relation. Since $\fail(f)=m^+$, we have $[g\,f]\neq0$ and $\fail(g)=m^+$. Therefore, $\frac{\fail(g)}{\LM(g)}=\frac{m^+}{m^+/s}=s$ so that $s\in\acc{\frac{\fail(f)}{\LM(f)},\ f\notin I_{m^+}}$. We have thus proved that $s\in S_{m^+}\setminus S_m$ implies $s|m^+$ and $\frac{m^+}{s}\in S_{m^+}\setminus S_m$. This implication is clearly an equivalence. \end{proof} From this proposition it follows that if $m\in\cT_0$, and if $m^+$ is its successor: \begin{equation}\label{eq:recdelta} \max_|(S_{m^+}) = \max_{|}\pare{\max_|(S_m) \cup \acc{\frac{m^+}{s},\ s\in\min_|(\cT\setminus S_m)\cap S_{m^+}}} \end{equation} Relation~\ref{eq:recdelta} allows us to construct, iterating on the monomial $m$, the set of relations $G_m$ representing the truncated \gb of $I_m$. Relations $g\in G_m$ are indexed by their leading monomials, describing $\cT\setminus S_m$. \begin{remark}\label{rk:choose_Sm} We can also construct another set, describing the edge of $S_m$, still denoted $S_m$, as there is a one-to-one correspondence between a staircase and its edge. The relations $h\in S_m$ are indexed by their ratio $\frac{\fail(h)}{\LM(h)}$ between their fail and their leading monomial, describing the full staircase of $I_m$. When two relations $h$ and $h'$ in $S_m$ are such that $\frac{\fail(h)}{\LM(h)}=\frac{\fail(h')}{\LM(h')}$, then we only need to keep one. Since the goal is to combine a relation of $S_m$ with a relation failing at $m^+$ to make a new one with a bigger shift, as in Proposition~\ref{prop:augm_shift}, it is best to handle smaller polynomials.
\end{remark} This yields Algorithm~\ref{algo:bms}. \begin{algorithm2e}[htbp!]\label{algo:bms} \small \DontPrintSemicolon \TitleOfAlgo{The \BMS algorithm.} \KwIn{A table $\bu=(u_{\bi})_{\bi\in\N^n}$ with coefficients in $\K$, a monomial ordering $\prec$ and a monomial $M$ as the stopping condition.} \KwOut{A set $G$ of relations generating $I_M$.} $T := \{m\in\K[\x],\ m\preceq M\}$.\tcp*{ordered for $\prec$} $G := \{1\}$.\tcp*{the future \gb} $S := \emptyset$.\tcp*{staircase edge, elements will be $[h,\fail(h)/\LM(h)]$} \Forall{$m\in T$}{ $S' := S$.\; \For{$g\in G$}{ \If{$\LM(g)| m$}{ $e:=\cro{\frac{m}{\LM(g)}\,g}_{\bu}$.\; \If{$e\neq 0$}{ $S':=S'\cup\acc{\cro{\frac{g}{e},\frac{m}{\LM(g)}}}$.\; } } } $S':=\min_|\acc{[h,\fail(h)/\LM(h)]}$. \tcp*{see Remark~\ref{rk:choose_Sm}} $G':= \Border (S')$.\; \For{$g'\in G'$}{ Let $g\in G$ such that $\LM(g)|\LM(g')$.\; \uIf{$\LM(g') \nmid m$}{ $g':=\frac{\LM(g')}{\LM(g)}\,g$.\tcp*{translates the relation} } \uElseIf{$\exists\,h\in S, \frac{m}{\LM(g')} |\fail(h)$}{ $g':= \frac{\LM(g')}{\LM(g)}\,g -\cro{\frac{m}{\LM(g)}\,g}_{\bu}\, \frac{\LM(g')\,\fail(h)}{m\,\LM(h)}\,h$.\tcp*{see Proposition~\ref{prop:augm_shift}} } \lElse{ $g':=g$. } } $G := G'$.\; $S := S'$.\; } \KwRet $G$. \end{algorithm2e} We saw that for $m$ big enough, $S_m$ will be the target staircase. We now give an upper bound. \begin{proposition}\label{prop:upperbound} Let $\bu$ be a linear recurrent sequence and $I$ be its ideal of relations. Let $S$ be the staircase of $I$ for $\prec$. Let $s_{\max}$ be the largest monomial in $S$. Then, for $m\succeq (s_{\max})^2$, $S_m = S$. Let $\cG$ be a minimal \gb of $I$ for $\prec$ and let $g_{\max}$ be the largest leading monomial of $\cG$. Then, for $m\succeq s_{\max}\cdot\max_{\prec}(g_{\max},s_{\max})$, the \BMS algorithm returns a minimal \gb of $I$ for $\prec$. \end{proposition} \begin{example} For the $\DRL(y\prec x)$ ordering and $I=\langle x^p,y^q\rangle$ with $q>p\geq 1$, we have $s_{\max}=x^{p-1}\,y^{q-1}$ and $g_{\max}=y^q$.
Therefore, the right staircase is found at most at step $m=x^{2\,p-2}\,y^{2\,q-2}$, while the \gb is found at most at step $x^{p-1}\,y^{q-1}\,\max_{\prec}(x^{p-1}\,y^{q-1},y^q)$, \ie $y^{2\,q-1}$ if $p=1$ and $x^{2\,p-2}\,y^{2\,q-2}$ otherwise. \end{example} From Propositions~\ref{prop:iter} and~\ref{prop:upperbound}, we can deduce that $S=\acc{\frac{\fail(f)}{\LM(f)},\ f\notin I}$.% \begin{example}\label{ex:binom_bms} We give the trace of the algorithm called on the binomial sequence $\bin$ for the $\DRL(y\prec x)$ ordering up to monomial $x^3$ (hence visiting all the monomials of degree at most $3$). To simplify the reading, whenever a relation succeeds in $m$ or cannot be tested in $m$, we skip the updating part as this relation remains the same. We start with the empty staircase $S$ and the relation $G=\{1\}$. \begin{enumerate} \item[] For the monomial $1$ \begin{enumerate} \item[] The relation $g_1=1$ fails since $[\bin_{0,0}]=1$. Thus $S'=\{[1,1]\}$. \item[] $S'$ is updated to $\{[1,1]\}$ and $G'=\{y,x\}$. \item[] For the relation $g_1'=y$, $y\nmid 1$ thus $g_1'=y$. \item[] For the relation $g_2'=x$, $x\nmid 1$ thus $g_2'=x$. \item[] We update $G:=G'=\{y,x\}$ and $S:=S'=\{[1,1]\}$. \end{enumerate} \item[] For the monomial $y$ \begin{enumerate} \item[] The relation $g_1=y$ succeeds since $[\bin_{0,1}]=0$. \item[] Nothing must be done for the relation $g_2=x$. \item[] $S'$ is set to $\{[1,1]\}$ and $G'=\{y,x\}$. \item[] We set $g_1'=y$ and $g_2'=x$. \item[] We update $G:=G'=\{y,x\}$ and $S:=S'=\{[1,1]\}$. \end{enumerate} \item[] For the monomial $x$ \begin{enumerate} \item[] Nothing must be done for the relation $g_1=y$. \item[] The relation $g_2=x$ fails since $[\bin_{1,0}]=1$. Thus $S'=\{[1,1],[x,1]\}$. \item[] $S'$ is set to $\{[1,1]\}$ and $G'=\{y,x\}$. \item[] We set $g_1'=y$. \item[] For the relation $g_2'=x$, $x| x$ and $\frac{x}{x}|\fail(1)$, hence $g_2'=x-1$. \item[] We update $G:=G'=\{y,x-1\}$ and $S:=S'=\{[1,1]\}$. 
\end{enumerate} \item[] For the monomial $y^2$ \begin{enumerate} \item[] The relation $g_1=y$ succeeds since $[\bin_{0,2}]=0$. \item[] Nothing must be done for the relation $g_2=x-1$. \item[] $S'$ is set to $\{[1,1]\}$ and $G'=\{y,x\}$. \item[] We set $g_1'=y$ and $g_2'=x-1$. \item[] We update $G:=G'=\{y,x-1\}$ and $S:=S'=\{[1,1]\}$. \end{enumerate} \item[] For the monomial $x\,y$ \begin{enumerate} \item[] The relation $g_1=y$ fails since $[\bin_{1,1}]=1$. Thus $S'=\{[1,1],[y,x]\}$. \item[] The relation $g_2=x-1$ fails since $[\bin_{1,1}-\bin_{0,1}]=1$. Thus $S'=\{[1,1],[y,x],[x-1,y]\}$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] For the relation $g_1'=y^2$, $y^2\nmid x\,y$ thus $g_1'=y^2$. \item[] For the relation $g_2'=x\,y$, $x\,y| x\,y$ and $\frac{x\,y}{x\,y}|\fail(y)$, hence $g_2'=x\,y-1$. \item[] For the relation $g_3'=x^2$, $x^2\nmid x\,y$ thus $g_3'=x^2-x$. \item[] We update $G:=G'=\{y^2,x\,y-1,x^2-x\}$ and $S:=S'=\{[y,x],[x-1,y]\}$. \end{enumerate} \item[] For the monomial $x^2$ \begin{enumerate} \item[] Nothing must be done for the relation $g_1=y^2$. \item[] Nothing must be done for the relation $g_2=x\,y-1$. \item[] The relation $g_3=x^2-x$ succeeds since $[\bin_{2,0}-\bin_{1,0}]=0$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] We set $g_1'=y^2$, $g_2'=x\,y-1$ and $g_3'=x^2-x$. \item[] We update $G:=G'=\{y^2,x\,y-1,x^2-x\}$ and $S:=S'=\{[y,x],[x-1,y]\}$. \end{enumerate} \item[] For the monomial $y^3$ \begin{enumerate} \item[] The relation $g_1=y^2$ succeeds since $[\bin_{0,3}]=0$. \item[] Nothing must be done for the relation $g_2=x\,y-1$. \item[] Nothing must be done for the relation $g_3=x^2-x$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] We set $g_1'=y^2$, $g_2'=x\,y-1$ and $g_3'=x^2-x$. \item[] We update $G:=G'=\{y^2,x\,y-1,x^2-x\}$ and $S:=S'=\{[y,x],[x-1,y]\}$.
\end{enumerate} \item[] For the monomial $x\,y^2$ \begin{enumerate} \item[] The relation $g_1=y^2$ succeeds since $[\bin_{1,2}]=0$. \item[] The relation $g_2=x\,y-1$ succeeds since $[\bin_{1,2}-\bin_{0,1}]=0$. \item[] Nothing must be done for the relation $g_3=x^2-x$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] We set $g_1'=y^2$, $g_2'=x\,y-1$ and $g_3'=x^2-x$. \item[] We update $G:=G'=\{y^2,x\,y-1,x^2-x\}$ and $S:=S'=\{[y,x],[x-1,y]\}$. \end{enumerate} \item[] For the monomial $x^2\,y$ \begin{enumerate} \item[] Nothing must be done for the relation $g_1=y^2$. \item[] The relation $g_2=x\,y-1$ fails since $[\bin_{2,1}-\bin_{1,0}]=1$. Thus $S'=\{[y,x],[x-1,y],[x\,y-1,x]\}$. \item[] The relation $g_3=x^2-x$ fails since $[\bin_{2,1}-\bin_{1,1}]=1$. Thus $S'=\{[y,x],[x-1,y],[x\,y-1,x],[x^2-x,y]\}$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] We set $g_1'=y^2$. \item[] For the relation $g_2'=x\,y$, $x\,y| x^2\,y$ and $\frac{x^2\,y}{x\,y}|\fail(y)$, hence $g_2'=x\,y-y-1$. \item[] For the relation $g_3'=x^2$, $x^2| x^2\,y$ and $\frac{x^2\,y}{x^2}|\fail(x-1)$, hence $g_3'=x^2-2\,x+1$. \item[] We update $G:=G'=\{y^2,x\,y-y-1,x^2-2\,x+1\}$ and $S:=S'=\{[y,x],[x-1,y]\}$. \end{enumerate} \item[] For the monomial $x^3$ \begin{enumerate} \item[] Nothing must be done for the relation $g_1=y^2$. \item[] Nothing must be done for the relation $g_2=x\,y-y-1$. \item[] The relation $g_3=x^2-2\,x+1$ succeeds since $[\bin_{3,0}-2\,\bin_{2,0}+\bin_{1,0}]=0$. \item[] $S'$ is set to $\{[y,x],[x-1,y]\}$ and $G'=\{y^2,x\,y,x^2\}$. \item[] We set $g_1'=y^2$, $g_2'=x\,y-y-1$ and $g_3'=x^2-2\,x+1$. \item[] We update $G:=G'=\{y^2,x\,y-y-1,x^2-2\,x+1\}$ and $S:=S'=\{[y,x],[x-1,y]\}$. \end{enumerate} \item[] The algorithm returns relations $y^2,x\,y-y-1,x^2-2\,x+1$, all three with a shift $x$.
\end{enumerate} \end{example} \subsection{A Linear Algebra interpretation of the \BMS algorithm} \label{ss:bms_lin_alg} In order to make the presentation of the \BMS algorithm closer to that of the \sFGLM algorithm, we propose to replace every evaluation using the $[\,]$ operator with a matrix-vector product. As stated above, given a monic relation $f=\LM(f)+\sum_{s\in S}\alpha_s\,s$, testing the shift of this relation by a monomial $m$ is done with the bracket operator, \ie testing whether $[m\,f]=0$ or not. Denoting $\vec{f}$, the vector \[ \vec{f}=\kbordermatrix{ &1\\ \vdots &\vdots\\ s\in S &\alpha_s\\ \vdots &\vdots\\ \LM(f) &1 }, \] this can also be done through testing if the following matrix-vector product \[H_{m,S\cup\{\LM(f)\}}\,\vec{f}= \kbordermatrix{ &\cdots &s\in S&\cdots &\LM(f)\\ m &\cdots &[m\,s] &\cdots &[m\,\LM(f)] }\, \begin{pmatrix} \vdots\\\alpha_s\\\vdots\\1 \end{pmatrix} =0 \] or not. In this setting, the definitions of the \emph{shift} and the \emph{fail} of a relation, \ie Definition~\ref{def:shift}, become as follows. \begin{definition}\label{def:fail_linal} Let $f=\LT(f)+\sum_{s\in S}\alpha_s\,s$ be a polynomial. The monomial $m$ is a \emph{shift of $f$} if \[H_{\{1,\ldots,m\},S\cup\{\LM(f)\}}\,\vec{f}= \kbordermatrix{ &\cdots &s\in S&\cdots &\LM(f)\\ 1 &\cdots &[s] &\cdots &[\LM(f)]\\ \vdots &&\vdots &&\vdots\\ m &\cdots &[m\,s] &\cdots &[m\,\LM(f)]\\ }\, \begin{pmatrix} \vdots\\\alpha_s\\\vdots\\1 \end{pmatrix} = \begin{pmatrix} 0\\\vdots\\0 \end{pmatrix}. \] Let $m^+$ be the successor of $m$, $m^+\,\LM(f)$ is the \emph{fail of $f$} if \[H_{\{1,\ldots,m,m^+\},S\cup\{\LM(f)\}}\,\vec{f}= \kbordermatrix{ &\cdots &s\in S&\cdots &\LM(f)\\ 1 &\cdots &[s] &\cdots &[\LM(f)]\\ \vdots &&\vdots &&\vdots\\ m &\cdots &[m\,s] &\cdots &[m\,\LM(f)]\\ m^+ &\cdots &[m^+\,s] &\cdots &[m^+\,\LM(f)] }\, \begin{pmatrix} \vdots\\\alpha_s\\\vdots\\1 \end{pmatrix} = \begin{pmatrix} 0\\\vdots\\0\\e \end{pmatrix}, \] with $e\neq 0$. 
\end{definition} We can also write another proof of Proposition~\ref{prop:augm_shift} with a matrix viewpoint. \begin{proof}[Proof of Proposition~\ref{prop:augm_shift}] Let $f_1=\LM(f_1)+\sum_{s\in S}\alpha_s\,s$ and $f_2=\LM(f_2)+\sum_{s\in S'}\beta_s\,s$ be monic. Let $v^-$ be the predecessor of $v$. Let $\tilde{S}=S\cup S'\setminus\{\LM(f_2),\LM(f_1)\}$, assuming $\LM(f_2)\neq\LM(f_1)$, then we have \begin{align*} H_{\{1,\ldots,v^-,v\},\tilde{S}\cup\{\LM(f_2),\LM(f_1)\}}\,(\vec{f_1}+c\,\vec{f_2}) &= \begin{pmatrix} 0\\\vdots\\0\\e_1+c\,e_2 \end{pmatrix}\\ \kbordermatrix{ &\cdots &s\in \tilde{S}&\cdots &\LM(f_2) &\LM(f_1)\\ 1 &\cdots &[s] &\cdots &[\LM(f_2)] &[\LM(f_1)]\\ \vdots &&\vdots &&\vdots&\vdots\\ v^- &\cdots &[v^-\,s] &\cdots &[v^-\,\LM(f_2)] &[v^-\,\LM(f_1)]\\ v &\cdots &[v\,s] &\cdots &[v\,\LM(f_2)] &[v\,\LM(f_1)]\\ }\, \begin{pmatrix} \vdots\\\alpha_s+c\,\beta_s\\\vdots\\c\\1 \end{pmatrix} &= \begin{pmatrix} 0\\\vdots\\0\\e_1+c\,e_2 \end{pmatrix}. \end{align*} It is now clear that vector $\vec{f}_1-\frac{e_1}{e_2}\,\vec{f}_2$ is in the kernel of this matrix. That is, polynomial $f_1-\frac{e_1}{e_2}\,f_2$ has a shift $v$. \end{proof} Changing every evaluation into a matrix-vector product in the \BMS algorithm yields the following presentation of the \BMS algorithm, namely Algorithm~\ref{algo:bms_linalg}. 
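Before stating the algorithm, the matrix-vector test of Definition~\ref{def:fail_linal} can be illustrated with a small sketch (an illustration added here, not part of the original text), assuming for concreteness the binomial table $u_{i,j}=\binom{i}{j}$ of Example~\ref{ex:binom_bms}:

```python
from math import comb

# Sketch of the matrix-vector test above, on the binomial table
# u_{i,j} = C(i, j). A monomial x^i y^j is encoded as the pair (i, j)
# and a polynomial as its coefficient vector over an ordered support.
def u(i, j):
    return comb(i, j)

def hankel_row(m, support):
    """The row H_{m, support}, i.e. the entries [m * s] for s in support."""
    a, b = m
    return [u(a + i, b + j) for (i, j) in support]

def matvec(rows, support, coeffs):
    """The products H_{rows, support} . vec(f), one entry per row monomial."""
    return [sum(h * c for h, c in zip(hankel_row(m, support), coeffs))
            for m in rows]

# f = x*y - y - 1 with support (1, y, x*y) and coefficient vector (-1,-1,1):
support = [(0, 0), (0, 1), (1, 1)]
coeffs = [-1, -1, 1]
# Rows 1, y, x: all products vanish, so x is a shift of f, in agreement
# with the trace of the example above.
print(matvec([(0, 0), (0, 1), (1, 0)], support, coeffs))  # -> [0, 0, 0]
```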
\begin{algorithm2e}[htbp!]\label{algo:bms_linalg} \small \DontPrintSemicolon \TitleOfAlgo{Linear Algebra variant of the \BMS algorithm.} \KwIn{A table $\bu=(u_{\bi})_{\bi\in\N^n}$ with coefficients in $\K$, a monomial ordering $\prec$ and a monomial $M$ as the stopping condition.} \KwOut{A set $G$ of relations generating $I_M$.} $T := \{m\in\K[\x], m\preceq M\}$.\tcp*{ordered for $\prec$} $G := \{1\}$.\tcp*{the future \gb} $S := \emptyset$.\tcp*{staircase edge, elements will be $[h,\fail(h)/\LM(h)]$} \Forall{$m\in T$}{ $S' := S$.\; \For{$g\in G$}{ \If{$\LM(g)| m$}{ $e:=H_{\acc{\frac{m}{\LM(g)}},\supp(g)}\,\vec{g}$.\; \If{$e\neq 0$}{ $S':=S'\cup \acc{\cro{\frac{g}{e},\frac{m}{\LM(g)}}}$.\; } } } $S':= \min_{\fail(h)\in S'}\acc{[h,\fail(h)/\LM(h)]}$. \tcp*{see Remark~\ref{rk:choose_Sm}} $G':= \Border (S')$.\; \For{$g'\in G'$}{ Let $g\in G$ such that $\LM(g)|\LM(g')$.\; \uIf{$\LM(g') \nmid m$}{ $g':=\frac{\LM(g')}{\LM(g)}\,g$.\tcp*{shifts the relation} } \uElseIf{$\exists\,h\in S, \frac{m}{\LM(g')} |\fail(h)$}{ $g':= \frac{\LM(g')}{\LM(g)}\,g -\pare{H_{\acc{\frac{m}{\LM(g)}},\supp(g)}\,\vec{g}}\, \frac{\LM(g')\,\fail(h)}{m\,\LM(h)}\,h$.\tcp*{see Prop.~\ref{prop:augm_shift}} } \lElse{ $g':=g$. } } $G := G'$ \; $S := S'$ \; } \KwRet $G$. \end{algorithm2e} \section{Introduction}\label{s:intro} \input{1-intro} \section{Preliminaries}\label{s:prelim} \input{2-prelim} \section{An Adaptive version of the \BMS algorithm}\label{s:aBMS} \input{3-aBMS} \section{The Adaptive version of the \sFGLM algorithm} \label{s:asFGLM} \input{4-asFGLM} \section{Analogies and differences of the adaptive variants} \label{s:comparison_adapt} \input{5-comparison_adapt} \section{Complexity and Benchmarks of the adaptive variants} \label{s:implem_adapt} \input{6-implem_adapt} \bibliographystyle{elsarticle-harv} \addcontentsline{toc}{section}{References}
\section{Introduction} As computing power has become greater and as data sets have become simultaneously larger and more complicated, demand for statistical methods that are increasingly flexible and data-driven has increased. Two related methods for capturing the complex structure of a data set from a true density $f_0$ are to estimate either the density's {\em level sets} (LS's) or the density's {\em highest-density regions} (HDR's). (We will explain the difference between estimating LS's and estimating HDR's shortly.) For a density function $f_0$ defined on $\bb{R}^d$ and a given constant $c> 0$, the $c$-level set (sometimes known as a {\em density contour}) of $f_0$ is $ \beta(c):=\{\boldsymbol{x}\in\bb{R}^d:f_0(\boldsymbol{x})= c\},$ and the corresponding super-level set is \begin{align} \label{eq:suplevel-t} \mc{L}(c):=\{\boldsymbol{x}\in\bb{R}^d:f_0(\boldsymbol{x})\ge c\}. \end{align} Under some basic regularity conditions, the density super-level set is a set of minimum volume having $f_0$-probability at least $\int_{ \mc{L}(c)}f_0(\boldsymbol{x})\,d\boldsymbol{x}$ \citep{garcia2003level}. For this reason, perhaps the most common use for HDR estimation occurs in Bayesian statistics. An HDR of a posterior density is a so-called (minimum volume) credible region, which is one of the most fundamental tools in Bayesian statistics. There is quite a wide range of other applications for estimation of density LS's or density HDR's, and these estimation problems have received increasing attention in the statistics and machine learning literatures in recent years. (We consider estimation of density level sets and estimation of density super-level sets to be equivalent tasks.) The applications of LS or HDR estimation include outlier/novelty detection \citep{Lichman:2014jn,Park2010:cx}, discriminant analysis \citep{MR1765618} and clustering analysis \citep{hartigan1975clustering,rinaldo2010generalized, cuevas2001cluster}.
LS estimation is one of the fundamental tools in estimation of cluster trees and persistence diagrams, used in topological data analysis (\cite{Chen:2017wn}, \cite{Wasserman:2016ua}). A common way to estimate the density super-level set $\mc{L}(c)$ based on independent and identically distributed (i.i.d.) $\boldsymbol{X}_1,\ldots, \boldsymbol{X}_n \in \mathbb{R}^d$ is to replace the density function in \eqref{eq:suplevel-t} with a kernel density estimator (KDE) \begin{align} \label{eq:kde-def} \ffnH (\boldsymbol{x}) := \inv{n} \sum_{i=1}^n K( \boldsymbol{H}^{-1/2}(\boldsymbol{x}-\boldsymbol{X}_i)) |\boldsymbol{H}|^{-1/2}, \end{align} where $\boldsymbol{H} \in \mathbb{R}^{d\times d}$ is a symmetric positive definite bandwidth matrix and $K$ is a kernel function. This gives us the so-called plug-in estimator \begin{align} \label{eq:plug-in-t} \widehat{\mc{L}}_{n,\boldsymbol{H}}(c):=\{\boldsymbol{x}\in\bb{R}^d:\ffnH(\boldsymbol{x})\ge c\}. \end{align} We now explain the difference between ``LS estimation'' and ``HDR estimation.'' Often the level of interest is only specified indirectly through a given probability $\tau\in(0,1)$, which yields the level $f_{\tau,0}:=\mathrm{inf} \{y > 0 :\int_{\bb{R}^d}f_0(\boldsymbol{x})\mathbbm{1}_{\{f_0(\boldsymbol{x})\ge y\}}\,d\boldsymbol{x}\le 1-\tau\}$. Then the corresponding super-level set is \begin{align} \label{eq:suplevel-ft} \mc{L}(f_{\tau,0}):=\{\boldsymbol{x}\in\bb{R}^d:f_0(\boldsymbol{x})\ge f_{\tau,0}\}, \end{align} and the corresponding plug-in estimators are $$\hat{f}_{\tau,n}:=\mathrm{inf} \lb y\in(0,\infty):\int_{\bb{R}^d}\ffnH(\boldsymbol{x})\mathbbm{1}_{\{\ffnH(\boldsymbol{x})\ge y\}}\,d\boldsymbol{x}\le 1-\tau \right\}$$ and \begin{align} \label{eq:plug-in-ft} \widehat{\mc{L}}_{n,\boldsymbol{H}}(\hat{f}_{\tau,n}):=\{\boldsymbol{x}\in\bb{R}^d:\ffnH(\boldsymbol{x})\ge \hat{f}_{\tau,n}\}.
\end{align} Estimating \eqref{eq:suplevel-ft} based on specifying $\tau$ is known as the {\em HDR estimation} problem; this involves an extra complication relative to the LS estimation problem because $f_{\tau,0}$ has to be estimated rather than being fixed in advance. Thus we use the phrase {\em LS estimation} to mean estimation of \eqref{eq:suplevel-t} with $c$ fixed in advance (equivalently, estimation of \eqref{eq:suplevel-ft} with $f_{\tau,0}$ fixed). When we use the phrase {\em HDR estimation} we mean estimation of \eqref{eq:suplevel-ft} with $\tau$ (but not $f_{\tau,0}$) fixed in advance. Thus, LS's and HDR's are mathematically equivalent, but estimating LS's and estimating HDR's are statistically different tasks. Early work on LS or HDR estimation includes \cite{Hartigan:1987hx}, \cite{Muller:1991vn}, \cite{Polonik:1995kr}, \cite{Tsybakov:1997jv}, and \cite{Walther:1997eu}. Some recent work has focused on asymptotic properties of KDE plug-in estimators, including results about consistency, limit distribution theory, and statistical inference. \citet{Baillo:2001fe} show that the probability content of the plug-in estimator converges to the probability of the true super-level set as the sample size tends to infinity. \citet{Baillo:2003ds} proves the strong consistency of the plug-in estimator under an integrated symmetric difference metric. \citet{Cadre:2006db} further obtains the rate of convergence of the plug-in estimator when the loss is given by the generalized symmetric difference of sets. \citet{Mason:2009dk} give the asymptotic normality of estimated super-level sets under the same metric as \citet{Cadre:2006db}. \citet{Chen:2015uj} find a more practically usable limiting distribution of the plug-in estimator for LS's by using Hausdorff distance as the metric for set difference and provide methods for constructing confidence regions for LS's based on this limiting distribution.
\cite{Jankowski:2012wv} and \cite{Mammen:2013hs} also investigate the formation of confidence regions for LS's. It is well known that KDE's are sensitive to the choice of the bandwidth (matrix). The optimal bandwidth (matrix) depends on the objective of estimation. There are many tools that have been developed for selecting the bandwidth when $d=1$ or the bandwidth matrix when $d > 1$; these include minimizing an asymptotic approximation to an appropriate risk function, as well as computational methods such as the bootstrap or cross-validation, and are largely focused on globally estimating the density or its derivatives well. A good summary of those methods can be found in \citet{Wand:1995kv}, \citet{sain1994cross}, or \cite{Jones:1996vg}. However, \citet[page 505]{Duong:2009ek} state that, ``a number of practical issues in highest density region estimation, such as good data-driven rules for choosing smoothing parameters, are yet to be resolved.'' \citet{Samworth:2010cj} is the only published work we know of that investigates the problem of selecting bandwidths for HDR estimation (and we know of no published works that directly investigate bandwidth selection for LS estimation). \citet{Samworth:2010cj} study the KDE plug-in estimator when $d=1$, and show by simulation that the kernel density estimator aiming for HDR estimation can be very different from the one aiming for global density estimation. They also propose an asymptotic approximation to a risk function that is suitable for HDR estimation and a corresponding bandwidth selection procedure based on the approximation, all when $d=1$. In this paper, we consider the multivariate setting, where $d\ge 2$. 
In this case, we are estimating a level set manifold, which involves some added technical difficulties relative to the case $d=1$ (where the level set is a finite point set), but we believe that LS or HDR estimation when $d\ge 2$ is of great practical interest because of the large variety of complicated structures that multivariate level sets can reveal. We derive asymptotic approximations to a risk function for LS estimation and to a risk function for HDR estimation. We believe that our approximations and derivations will be very valuable for any future procedures that do (either) LS or HDR bandwidth selection. Our calculations shed light on the important quantities relating to LS or HDR estimation. Furthermore, we develop a ``plug-in'' bandwidth selector method based on minimizing an estimate of the LS or the HDR risk approximation. This approach can be used to optimize over all positive definite bandwidth matrices or over restricted classes of matrices (e.g., diagonal ones). Our theory applies for all $ d \ge 2$. We have developed code to implement our bandwidth selector when $d=2$. It is straightforward to implement a numeric approximation to the Hausdorff integrals that appear in our approximations (see Subsection~\ref{subsec:notation-assumptions} for discussion of the Hausdorff measure) when $d=2$. It is less immediately obvious how to implement such approximations when $d \ge 3$, although we believe that such an implementation is feasible. In fact, we believe that computational feasibility is an important benefit of using a closed-form approximation to the risk, particularly in the multivariate setting that we consider in this paper. As will be discussed later in the paper, many simple problems in the univariate setting are more complicated in the multivariate setting and must be solved by Monte Carlo. Thus performing bootstrap or cross-validation, which involves nested Monte Carlo computations, quickly becomes infeasible.
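As a small illustration of the kind of Monte Carlo computation involved, the following sketch (added here for illustration; it is not the paper's implementation) estimates the $f_0$-probability of the symmetric difference between the true and the KDE plug-in $c$-super-level sets, under simplifying assumptions: $f_0$ is the standard bivariate normal density, the kernel is a Gaussian product kernel, and $\boldsymbol{H}=h^2\boldsymbol{I}$.

```python
import numpy as np

# Monte Carlo sketch of the symmetric-difference risk for the KDE plug-in
# level-set estimator (illustration only; simplifying assumptions: f0 is
# the standard bivariate normal, Gaussian product kernel, H = h^2 * I).
rng = np.random.default_rng(0)

def f0(x):
    """Standard bivariate normal density."""
    return np.exp(-0.5 * (x ** 2).sum(-1)) / (2 * np.pi)

def kde(x, data, h):
    """Gaussian-product KDE with bandwidth matrix H = h^2 * I, d = 2."""
    d2 = (((x[:, None, :] - data[None, :, :]) / h) ** 2).sum(-1)
    return np.exp(-0.5 * d2).sum(1) / (len(data) * 2 * np.pi * h ** 2)

def sym_diff_risk(data, h, c, n_mc=20000):
    """mu_{f0} of the symmetric difference between the true and estimated
    c-super-level sets: draw z ~ f0 and average the disagreement of the
    two indicators 1{fhat(z) >= c} and 1{f0(z) >= c}."""
    z = rng.standard_normal((n_mc, 2))
    return float(np.mean((kde(z, data, h) >= c) != (f0(z) >= c)))

data = rng.standard_normal((500, 2))
for h in (0.2, 0.4, 0.8):
    print(h, sym_diff_risk(data, h, c=0.02))
```

Repeating such a computation inside a bootstrap or cross-validation loop over candidate bandwidth matrices is what makes the nested Monte Carlo approach expensive.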
During the development of the present paper we became aware of the recent related work of \cite{Qiao:2017wq}, which also considers problems of bandwidth selection for KDE's in settings related to level set estimation. However, the main focus of \cite{Qiao:2017wq} is somewhat different from ours. In fact, \cite{Qiao:2017wq} states that bandwidth selection for multivariate HDR estimation is ``far from trivial'' and does not consider this problem. We will discuss the approach taken by \cite{Qiao:2017wq} again in the \nameref{sec:conclusion} section. The structure of the paper is as follows. We present our two asymptotic risk approximation theorems, as well as corollaries about the risk approximation minimizers, in Section~\ref{sec:asymptotic-risk-expansions}. We present methodology to select bandwidth matrices in Section~\ref{sec:methodology}. In Section~\ref{sec:simulations-data} we study the performance of our bandwidth selector in simulation experiments as well as in analysis of two real data sets, the Wisconsin Breast Cancer Diagnostic data and the Banknote Authentication data. We give concluding discussion in Section~\ref{sec:conclusion}. Proofs of the main results are given in Appendix~\ref{app:A-main-proofs}, and further details, technical results, and intermediate lemmas are given in Appendix~\ref{app:additional-thms} and Appendix~\ref{sec:proofs-intermediate}. Some notation and assumptions are presented in Subsections~\ref{subsec:notation-assumptions} and \ref{subsec:assumptions}. \section{Asymptotic risk results} \label{sec:asymptotic-risk-expansions} \subsection{Notation} \label{subsec:notation-assumptions} We use the following notation throughout. For a density function $f_0$ on $\mathbb{R}^d$ and a Borel measurable set $A\subset \mathbb{R}^d$, define the measure $\mu_{f_0}(A)=\int_Af_0(\boldsymbol{x})\,d\boldsymbol{x}$.
For a function $f$ on $\mathbb{R}^d$, a measure $P$, and $1 \le p < \infty$, we let $\| f\|_{p,P}^p = \int_{\mathbb{R}^d} |f(\bs{z})|^p dP(\bs{z})$ if this quantity is finite. If $P$ is Lebesgue measure we abbreviate $\| f \|_{p,P} \equiv \| f\|_p$, $ 1 \le p < \infty$. Let $\| f \|_{\infty} = \mathrm{sup}_{\bs{z} \in \mathbb{R}^d} |f( \bs{z}) |$, and for a function $g$ with vector or matrix values, that is, $g:\bb{R}^d\rightarrow \bb{R}^{p\times q}$, let $\|g\|_{\infty}=\max_{1\le i\le p,1\le j\le q}\|g_{ij}\|_{\infty}$. We let $\Vert {\bs x} \Vert = ( \sum_{i=1}^d x_i^2 )^{1/2}$ for $\boldsymbol{x} \in \mathbb{R}^d$. Let $\nabla f$ be the gradient (column) vector of $f$ and let $\nabla^2 f$ be the Hessian matrix $\lp \derivtwo{x_i}{x_j}[f] \right)_{i,j}$. Let $\mathcal{H}$ be the $(d-1)$-dimensional Hausdorff measure \citep{Evans:2015uy}. The Hausdorff measure is useful for measuring the volume of lower dimensional sets, like manifolds, embedded in a higher dimensional ambient space. Let $\lambda$ denote Lebesgue measure. Recalling that $\beta(c):=\{\boldsymbol{x}\in\mathbb{R}^d:f_0(\boldsymbol{x})=c\}$ and $\mc{L}(c):=\{\boldsymbol{x}\in\mathbb{R}^d:f_0(\boldsymbol{x})\ge c\}$, we let $\mc{L}_{\tau}\equiv \mc{L}(f_{\tau,0})$ and $\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\equiv\widehat{\mc{L}}_{n,\boldsymbol{H}}(\widehat{f}_{\tau,n})$. We generally use bold to denote vectors. We use ``$\equiv$'' to denote notational equivalences and ``$:=$'' or ``$=:$'' for definitions. Any integral whose domain is not specified explicitly is taken over all of $\mathbb{R}^d$. We will occasionally omit the integrating variable when there is no confusion in doing so. We use $\mc{S}$ to denote the set of all $d\times d$ symmetric positive definite matrices. For a symmetric matrix $\bs{A}$, we use $\lambda_{\max}(\bs{A})$ and $\lambda_{\min}(\bs{A})$ to denote the largest and the smallest eigenvalues of $\bs{A}$, respectively.
In this paper, we will use the $f_0$-probability volume of the symmetric difference as the distance between the true set and its estimator. We use $\Delta$ to denote the symmetric difference operation: for two sets $A$ and $B$, $A\Delta B :=(A\cup B)\setminus (A\cap B)$, where ``$\setminus$'' denotes set difference. Figure~\ref{fig:symm-diff} shows the symmetric difference between the $0.02$ super-level set of the standard bivariate normal distribution and an ``estimated'' super-level set. We let $A^c$ be the complement of a set $A$. For $\delta > 0$ and $\boldsymbol{x} \in \mathbb{R}^d$, let $B(\boldsymbol{x}, \delta) := \lb \bs y \in \mathbb{R}^d \colon \| \bs y - \boldsymbol{x} \| \le \delta \right\}$, and for a set $A$, let $A^\delta := \cup_{\boldsymbol{x} \in A} B(\boldsymbol{x}, \delta)$. \begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{symmetric_diff.pdf} \caption{Symmetric difference between the true level set and an estimated level set. The solid black line is the boundary of the true level set and the dashed red line is the boundary of the estimated level set. The shaded area is the symmetric difference of the two sets. \label{fig:symm-diff}} \end{figure} \subsection{Assumptions} \label{subsec:assumptions} To derive our asymptotic expansion, we make the following basic assumptions on the underlying density, kernel function and bandwidth matrix. \begin{assumption}{D1a} \label{assm:DA-ls} $\phantom{blah}$ \begin{enumerate} \item \label{assm:DA:item1} Let ${\boldsymbol{X}}_1, \ldots, \boldsymbol{X}_n$ be i.i.d.\ from a bounded density $f_0$ on $\mathbb{R}^d$, $d \ge 2$. \item\label{assm:DA:item2} Fix $ \mathrm{inf}_{x \in \mathbb{R}^d} f_0(x) < c < \| f_0 \|_\infty$.
There exists a constant $a>0$ such that \begin{enumerate*} \item \label{item:DA-ls-derivs} $f_0$ has two bounded continuous partial derivatives over $U_a := \{\bs{x}:c-a\le f_0(\boldsymbol{x})\le c+a\}$, \item \label{item:DA-ls-grad} $\mathrm{inf}_{U_a}\|\nabla f_0\|>0$, and \item \label{item:DA-ls-localization} $U_a$ is contained in $\beta(c)^\delta$ for some $\delta > 0$. \end{enumerate*} \end{enumerate} \end{assumption} \begin{assumption}{D1b} \label{assm:DA-hdr} $\phantom{blah}$ \begin{enumerate} \item \label{assm:DA:item1} Let ${\boldsymbol{X}}_1, \ldots, \boldsymbol{X}_n$ be i.i.d. from a bounded density $f_0$ on $\mathbb{R}^d$, $d \ge 2$. \item\label{assm:DA:item2} The density $f_0$ has two bounded continuous partial derivatives for all $\boldsymbol{x}\in\mathbb{R}^d$. \item \label{assm:DA:item3} There exists a constant $a > 0$ such that $U_a:= \{\boldsymbol{x}:f_{\tau,0}-a\le f_0(\boldsymbol{x})\le f_{\tau,0}+a\}$ satisfies \begin{enumerate*} \item \label{item:DA-hdr-grad} $\mathrm{inf}_{U_a}\|\nabla f_0\|>0$, and \item \label{item:DA-hdr-localization} $U_a$ is contained in $\beta_\tau^\delta$ for some $\delta > 0$. \end{enumerate*} \begin{mylongform} \begin{longform} \item \label{assm:DA:item 4}For any $c>0$, $\lambda(\{\boldsymbol{x}\in \mathbb{R}^d:f_0(\boldsymbol{x})=c\})=0$, where $\lambda$ is the Lebesgue measure in $\mathbb{R}^d$. (indicated by 3, see \citet{NunezGarcia:2003ci}) \end{longform} \end{mylongform} \end{enumerate} \end{assumption} Assumption~\ref{assm:DA-ls} will be used for LS estimation and Assumption~\ref{assm:DA-hdr} for HDR estimation. We need the stronger global twice differentiability assumption in HDR estimation because of the need to estimate $\fftau$ (which involves estimating the $f_0$-probability content of $\mc{L}_\tau$). The global twice differentiability assumption in Assumption~\ref{assm:DA-hdr} could be weakened to an assumption of twice differentiability either on $\mc{L}_\tau^\delta$ or on $(\mc{L}_\tau^c)^\delta$. 
Assumptions~\ref{assm:DA-ls} and \ref{assm:DA-hdr} entail that the gradient of $f_0$ is nonzero on (a neighborhood of) the level set of interest. This implies by the preimage theorem that the level set $\beta$, taken to be either $\beta(c)$ or $\beta_\tau$, is a $(d-1)$-dimensional (boundaryless) manifold \citep{Guillemin:1974ti}. The only additional assumption we need is one of compactness, which rules out only very pathological cases in which $f_0$ has ``spikes'' of increasingly small width going out towards infinity. \begin{assumption}{D2} \label{assm:BA} Let $c$ with $\mathrm{inf}_{\boldsymbol{x} \in \mathbb{R}^d} f_0(\boldsymbol{x}) < c < \| f_0 \|_\infty$, or $\tau$ with $0 < \tau < 1$, be as in Assumption~\ref{assm:DA-ls} or Assumption~\ref{assm:DA-hdr}, respectively. Assume that $\beta(c)$ or $\beta_\tau$ is compact. \end{assumption} Our assumption on the kernel will come in the form of a so-called {\em Vapnik-Chervonenkis (VC)} \citep{Dudley:1999dc} type of assumption. For a metric space $(T,d)$ and $\tau > 0$, the covering number $N( T, d, \tau)$ is the smallest number of balls of radius $\tau$ (and centers which may or may not be in $T$) needed to cover $T$. If a class of functions ${\mc F}$ is a VC class, we have that \begin{equation} \label{eq:VC-defn} \mathrm{sup}_P N( {\mc F}, \| \cdot \|_{2,P}, \tau \| F \|_{2, P} ) \le \lp \frac{A}{\tau} \right)^{v} \end{equation} for some positive constants $A$ and $v$, where the sup is over all probability measures $P$, and where $F$ is the envelope of ${\mc F}$, meaning $\mathrm{sup}_{f \in {\mc F}} |f| \le F$ (Chapter 2.6, \cite{vanderVaart:1996tf}). We will simply directly assume that the needed classes satisfy \eqref{eq:VC-defn}. Thus our assumptions are as follows. \begin{assumption}{K} \label{assm:KA} $\phantom{blah}$ \begin{enumerate}[leftmargin=*] \item \label{assm:KA-1} The kernel $K$ is an everywhere continuously differentiable bounded density on $\mathbb{R}^d$ with bounded partial derivatives.
Both $\int K^2 \, d\lambda$ and $\int (\nabla K) (\nabla K)' \, d\lambda$ are finite or have finite entries, respectively. Assume $\int K(\boldsymbol{x})\boldsymbol{x}\,d\boldsymbol{x}=\boldsymbol{0}$, $\int \boldsymbol{x}\bx'K(\boldsymbol{x})\,d\boldsymbol{x}=\mu_2(K)\boldsymbol{I}$, where $\boldsymbol{I}$ is the identity matrix and $\mu_2(K)=\int x_i^2K(\boldsymbol{x})\,d\boldsymbol{x}$ is independent of $i$. \item \label{assm:KA-VC} Assume that \eqref{eq:VC-defn} is satisfied with $\mc{F}$ taken to be \begin{align} \label{eq:assumption-VC-K} & \lb K \lp \boldsymbol{H}^{-1/2} (t - \cdot) \right): t \in \mathbb{R}^d, \boldsymbol{H} \in {\mc S} \right\} \qquad \text{ and } \\ \label{eq:assumption-VC-gradK} & \lb \| \nabla K \big( \boldsymbol{H}^{-1/2} (t - \cdot) \big) \| : t \in \mathbb{R}^d, \boldsymbol{H} \in {\mc S} \right\}. \end{align} \end{enumerate} \end{assumption} \noindent Let $R(K) := \int K^2 d\lambda$ and let $R( \nabla K)$ be the largest eigenvalue of $\int (\nabla K) (\nabla K)' \, d\lambda$. \begin{assumption}{H} \label{assm:HA} $\phantom{blah}$ \begin{enumerate}[leftmargin=*] \item\label{assm:HA-1} Let $\boldsymbol{H} \equiv \boldsymbol{H}_n \in {\mc S}$, such that for some constant $c_0 > 0$, $| \boldsymbol{H} | \searrow 0$, $n |\boldsymbol{H}|^{1/2} / \log |\boldsymbol{H}|^{-1/2} \to \infty$, $\log \log n / \log |\boldsymbol{H}|^{-1/2} \to 0$, as $n \to \infty$, and $|\boldsymbol{H}_n|^{1/2} \le c_0 | \boldsymbol{H}_{2n}|^{1/2}$. \item \label{assm:HA-2} Assume that $\lambda_{\max}(\boldsymbol{H})=O\{\lambda_{\min}(\boldsymbol{H})\}$ and $n |\boldsymbol{H}|^{1/2} \lambda_{\min}(\boldsymbol{H}) / \log |\boldsymbol{H}|^{-1/2} \to \infty$ and $\lambda_{\max}(\boldsymbol{H})=O(n^{-2/(4+d)})$ as $n\to\infty$. \begin{mylongform} \begin{longform} $\lambda_+^{(d+4)/4}=o\{(\log n/n)^{1/2}\}$, $\lambda_+^{(d+8)/2}=O(n^{-1/2}\log n)$ \end{longform} \end{mylongform} \end{enumerate} \end{assumption} \noindent Here, $a_n \searrow 0$ means that $a_n$ decreases monotonically to $0$.
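To fix ideas, the kernel density estimator $\widehat{f}_{n,\boldsymbol{H}}(\bs t) = n^{-1}|\boldsymbol{H}|^{-1/2}\sum_{i=1}^n K\{\boldsymbol{H}^{-1/2}(\bs t - \boldsymbol{X}_i)\}$ with a full bandwidth matrix, together with the resulting plug-in level-set estimate $\{\boldsymbol{x}:\widehat{f}_{n,\boldsymbol{H}}(\boldsymbol{x})\ge c\}$, can be sketched in a few lines (Gaussian kernel; function names and the grid are ours):

```python
import numpy as np

def kde(X, H, grid):
    """f̂_{n,H}(t) = n^{-1} |H|^{-1/2} Σ_i K(H^{-1/2}(t - X_i)),
    with K the standard d-variate Gaussian kernel."""
    n, d = X.shape
    M = np.linalg.inv(np.linalg.cholesky(H))   # M'M = H^{-1}
    Z = (grid[:, None, :] - X[None, :, :]) @ M.T
    K = np.exp(-0.5 * np.sum(Z ** 2, axis=-1)) / (2 * np.pi) ** (d / 2)
    return K.mean(axis=1) / np.sqrt(np.linalg.det(H))

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))      # sample from f0
H = 0.3 ** 2 * np.eye(2)               # H = h^2 I with h = 0.3

# evaluate on a grid and threshold at level c to get the plug-in level set
xs = np.linspace(-3, 3, 61)
grid = np.array([(a, b) for a in xs for b in xs])
fhat = kde(X, H, grid)
level_set = grid[fhat >= 0.02]         # grid points inside L̂_H(0.02)
```

The bandwidth conditions in Assumption~\ref{assm:HA} govern how $\boldsymbol{H}$ should shrink with $n$ for such an estimate to behave well.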
\begin{mylongform} \begin{longform} From the above assumption, we know $\lambda_{\max}(\boldsymbol{H})$ and $\lambda_{\min}(\boldsymbol{H})$ are of the same order. So we make further assumptions on $\boldsymbol{H}$ through $\lambda_{\max}(\boldsymbol{H})$. Let $\lambda_1\equiv\lambda_{\max}(\boldsymbol{H})$, and then we know $|\boldsymbol{H}|\sim \lambda_1^d$ and $\tr(\boldsymbol{H})\sim \lambda_1$. We list the assumptions on $\boldsymbol{H}$ that we need during the proofs. \begin{enumerate} \item $n|\boldsymbol{H}|^{1/2}\to \infty$. This is equivalent to $n\lambda_1^{d/2}\to \infty$. \item For the proof of Lemma~\ref{lem:hdr-step2}, we require $\log n=O(n|\boldsymbol{H}|^{1/2})$. So this requires $\liminf n\lambda_1^{d/2}/\log n>0$. \item In the proof of Lemma~\ref{lem:hdr-step3}, we need $\lambda_1=o(\delta_n)$ while $\log n=O(\delta_n^2n|\boldsymbol{H}|^{1/2})$. \item When we apply Lemma~\ref{lem:COV-approx} before Lemma~\ref{lem:hdr-step5}, we need $\delta_n^2=o(\inv{\sqrt{n|\boldsymbol{H}|^{1/2}}})$. \end{enumerate} \end{longform} \end{mylongform} \begin{mylongform} \begin{longform} We can see that the above requirements are satisfied by our assumption on $\boldsymbol{H}$. For any $n$ and $\boldsymbol{H}$, we can always let \begin{align*} \delta_n^2\sim \frac{\log n}{n\lambda_1^{d/2}}=\frac{\log n}{\sqrt{n\lambda_1^{d/2}}}\frac{1}{\sqrt{n\lambda_1^{d/2}}}, \end{align*} and then, by our assumption on $\lambda_{-}$, $\log n=O(\delta_n^2n|\boldsymbol{H}|^{1/2})$ and $\delta_n^2=o(\inv{\sqrt{n|\boldsymbol{H}|^{1/2}}})$ are satisfied. We need to verify that $\lambda_1=o(\delta_n)$, which can be shown by \begin{align*} \frac{\lambda_1}{\delta_n}=\frac{\lambda_1}{\sqrt{\log n/(n\lambda_1^{d/2})}}=\frac{\lambda_1^{(d+4)/4}}{\sqrt{\log n/n}}\rightarrow 0, \end{align*} by our assumption on $\lambda_+$. \end{longform} \end{mylongform} Assumptions~\ref{assm:DA-ls} and \ref{assm:DA-hdr} are standard in the KDE literature (see, e.g., page 95 of \cite{Wand:1995kv}).
Note that Assumption~\ref{assm:DA:item3} of Assumption~\ref{assm:DA-hdr} implies that there exists a constant $L > 0$ such that, for $\delta > 0$ small enough, $\lambda( f_0^{-1}( [\fftau - \delta, \fftau+\delta])) \le L \delta$; this is a standard type of assumption that appears in the level set estimation literature \citep{Polonik:1995kr}. Assumption~\ref{assm:BA} is not very limiting and only rules out pathological cases. Our Assumption~\ref{assm:KA} on the kernel function is not restrictive and all of the conditions imposed are fairly standard. For Assumption~\ref{assm:KA-1} see, e.g., page 95 of \cite{Wand:1995kv}, where similar conditions are imposed. Assumption~\ref{assm:KA-VC} is also fairly standard in the KDE literature (e.g., \cite{Chen:2015uj} uses similar conditions in the context of inference for level sets). This assumption is needed to apply the results of \cite{Gine:2002jc} to get almost sure convergence rates of $\ffnH$ and $\nabla \ffnH$. Assumption $K_1$ of \cite{Gine:2002jc} (or Assumption~K, page 2572, of \cite{MR2078551}) is an easy-to-verify condition that implies Assumption~\ref{assm:KA-VC}; it shows that Assumption~\ref{assm:KA-VC} holds for Gaussian kernels and for many compactly supported kernels. The expansions given in our Theorems~\ref{thm:levelset} and \ref{thm:hdr} hold for the range of bandwidths given in Assumption~\ref{assm:HA}. This is sufficient to develop a practical bandwidth selector, since larger or smaller bandwidths can be easily ruled out. See Corollaries~\ref{cor:ls-oracle-bandwidth} and \ref{cor:hdr-oracle-bandwidth}. \subsection{Asymptotic risk expansions} Our main results are stated in the following two theorems. The first gives the asymptotic risk expansion for level set estimation. Let $\Phi(\cdot)$ and $\phi(\cdot)$ denote the standard normal distribution function and density function, respectively.
\begin{theorem} \label{thm:levelset} For a given constant $c$ with $ \mathrm{inf}_{\boldsymbol{x} \in \mathbb{R}^d} f_0(\boldsymbol{x}) <c<\|f_0\|_{\infty}$, let Assumptions \ref{assm:KA}, \ref{assm:HA}, \ref{assm:DA-ls} and \ref{assm:BA} hold. Assume moreover that the kernel function $K$ has bounded support. Then \begin{equation*} \bb{E}\ls \mu_{f_0}\{\mc{L}(c)\Delta\widehat{\mc{L}}_{\boldsymbol{H}}(c)\}\right] = \LS(\boldsymbol{H}) + o \lb (n |\boldsymbol{H}|^{1/2})^{-1/2}+\tr(\boldsymbol{H})\right\} \end{equation*} as $n\rightarrow \infty$, where \begin{align*} \LS(\boldsymbol{H}) := \frac{c}{\sqrt{n|\boldsymbol{H}|^{1/2}}} \int_{\beta(c)} \frac{2\phi(B_{\boldsymbol{x}}(\boldsymbol{H}))+2\Phi(B_{\boldsymbol{x}}(\boldsymbol{H}))B_{\boldsymbol{x}}(\boldsymbol{H}) - B_{\boldsymbol{x}}(\boldsymbol{H})}{-A_{\boldsymbol{x}}}\,d\mc{H}(\boldsymbol{x}), \end{align*} \begin{equation} \label{eq:27} A_{\boldsymbol{x}} := -\frac{\|\nabla f_0(\boldsymbol{x})\|}{\sqrt{R(K)c}}, \quad \text{ and } \quad B_{\boldsymbol{x}}(\boldsymbol{H}) := -\frac{\sqrt{n|\boldsymbol{H}|^{1/2}}D_1(\boldsymbol{x},\boldsymbol{H})}{\sqrt{R(K)c}}, \end{equation} with $D_1(\boldsymbol{x},\boldsymbol{H}):=\frac{1}{2}\mu_2(K)\tr(\boldsymbol{H}\nabla^2f_0(\boldsymbol{x}))$. \end{theorem} \medskip \par\noindent Note that the first summand (including the factor $c/\sqrt{n|\boldsymbol{H}|^{1/2}}$) in the integral defining $\LS(\boldsymbol{H})$ is of the order of magnitude of a variance term in a mean-squared error decomposition, and the second two summands are of the same order of magnitude as a squared bias term. The next theorem gives the HDR asymptotic risk expansion. \begin{theorem} \label{thm:hdr} Let Assumptions \ref{assm:DA-hdr}, \ref{assm:BA}, \ref{assm:KA} and \ref{assm:HA} hold.
Then \begin{equation*} \bb{E}\ls\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}\right] = \HDR(\boldsymbol{H}) + o \lb( n |\boldsymbol{H}|^{1/2})^{-1/2}+\tr(\boldsymbol{H})\right\} \end{equation*} as $n \to \infty$, where \begin{equation*} \HDR(\boldsymbol{H}) := \frac{f_{\tau,0}}{\sqrt{n|\boldsymbol{H}|^{1/2}}}\int_{\beta_{\tau}} \frac{2\phi(C_{\boldsymbol{x}}(\boldsymbol{H})) + 2\Phi(C_{\boldsymbol{x}}(\boldsymbol{H}))C_{\boldsymbol{x}}(\boldsymbol{H}) - C_{\boldsymbol{x}}(\boldsymbol{H})}{-A_{\boldsymbol{x}}} \, d\mc{H}(\boldsymbol{x}), \end{equation*} \begin{align*} C_{\boldsymbol{x}}(\boldsymbol{H}):=B_{\boldsymbol{x}}(\boldsymbol{H})+\sqrt{\frac{n|\boldsymbol{H}|^{1/2}}{R(K)f_{\tau,0}}}D_2(\boldsymbol{H}). \end{align*} Here $A_{\boldsymbol{x}}$ and $B_{\boldsymbol{x}}(\boldsymbol{H})$ are defined in the same way as in Theorem~\ref{thm:levelset} with $c$ replaced by $f_{\tau,0}$, and \begin{align*} D_2(\boldsymbol{H})&:=w_0\left\{V_1(\boldsymbol{H})+V_2(\boldsymbol{H})\right\}, \end{align*} with $w_0:=(\int_{\beta_{\tau}}1/\|\nabla f_0\|\,d\mc{H})^{-1}$ and \begin{align*} V_1(\boldsymbol{H}):=\int_{\beta_{\tau}}\frac{D_1(\boldsymbol{x},\boldsymbol{H})}{\|\nabla f_0(\boldsymbol{x})\|}\,d\mc{H}(\boldsymbol{x})\qquad V_2(\boldsymbol{H}):=\inv{f_{\tau,0}}\int_{\mc{L}_{\tau}}D_1(\boldsymbol{x},\boldsymbol{H})\,d\boldsymbol{x}. \end{align*} \end{theorem} \medskip We defer the proofs to the appendix. Next, we would like to study the theoretical behavior of the minimizers of $\LS(\cdot)$ and $\HDR(\cdot)$. Note that the minimizers of $\LS(\cdot)$ or of $\HDR(\cdot)$ are not practically usable bandwidth matrices, since $\LS(\cdot)$ and $\HDR(\cdot)$ depend on the true, unknown density $f_0$. We will discuss estimation of $\HDR(\cdot)$ and of $\LS(\cdot)$ and practical bandwidth selectors in the next section. Presently, we consider the minimizers of $\LS(\cdot)$ and $\HDR(\cdot)$, which serve as {\em oracle} bandwidth selectors.
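To illustrate the expansion in a concrete case: for the standard bivariate normal, $\|\nabla f_0\|$ and $\tr(\nabla^2 f_0)$ are constant on the circle $\beta(c)$, so $\LS(h^2\bs{I})$ reduces to a one-dimensional function of $h$ that can be minimized numerically. A hedged sketch with the Gaussian kernel, for which $R(K)=(4\pi)^{-d/2}$ and $\mu_2(K)=1$ (the closed-form expressions for the radius, gradient norm, and Laplacian on $\beta(c)$, and the grid search, are our own):

```python
import math

def LS_risk(h, n, c, d=2):
    """Oracle risk LS(h^2 I) for f0 = standard bivariate normal,
    Gaussian kernel (R(K) = (4*pi)**(-d/2), mu2(K) = 1)."""
    RK = (4 * math.pi) ** (-d / 2)
    r_c = math.sqrt(-2 * math.log(2 * math.pi * c))  # radius of beta(c)
    grad = r_c * c                                   # ||grad f0|| on beta(c)
    lap = (r_c ** 2 - 2) * c                         # tr(Hessian f0) on beta(c)
    A = -grad / math.sqrt(RK * c)
    B = -math.sqrt(n * h ** (d + 4)) * 0.5 * lap / math.sqrt(RK * c)
    Phi = 0.5 * (1 + math.erf(B / math.sqrt(2)))     # standard normal cdf
    phi = math.exp(-B ** 2 / 2) / math.sqrt(2 * math.pi)
    integrand = (2 * phi + 2 * Phi * B - B) / (-A)
    return c / math.sqrt(n * h ** d) * 2 * math.pi * r_c * integrand

def h_opt(n, c=0.02):
    hs = [10 ** (-2 + 2 * k / 400) for k in range(401)]  # log grid on [0.01, 1]
    return min(hs, key=lambda h: LS_risk(h, n, c))

print(h_opt(1000), h_opt(64000))
```

Substituting $u=(nh^{6})^{1/2}$ shows that, for this density, the minimizer scales exactly as $h_{\text{opt}} \propto n^{-1/6}$, in line with the rate $n^{-1/(d+4)}$ for $d=2$.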
Unfortunately, $\LS(\cdot)$ and $\HDR(\cdot)$ are quite complicated functions, so studying their minimizers in general is not at all straightforward. Thus we will make some simplifying assumptions. We will consider densities $f_0$ that are unimodal and spherically symmetric about some point (taken to be the origin in Corollaries~\ref{cor:ls-oracle-bandwidth} and \ref{cor:hdr-oracle-bandwidth}). We will consider optimizing over the subclass $\mc{S}_1 := \lb h^2 \bs{I} : h > 0 \right\} $ of bandwidth matrices, where $\bs{I}$ is the $d \times d$ identity matrix. These assumptions are made largely for simplicity and ease of presentation of the following two corollaries, and are far from necessary for the conclusions to hold. We discuss these assumptions again after the corollaries. By a slight abuse of notation, we let $\LS(h) \equiv \LS( h^2 \bs{I})$ and $\HDR(h) \equiv \HDR( h^2 \bs{I}).$ \begin{corollary} \label{cor:ls-oracle-bandwidth} Let the assumptions of Theorem~\ref{thm:levelset} hold. Assume further that $f_0(\boldsymbol{x}) = g( \|\boldsymbol{x}\|)$ and that the function $g(r)$, defined for $r \ge 0$, is strictly decreasing on $[0,\infty)$. Then there exists a constant $s_{\text{opt}}$ depending on $f_0$ and $K$ (but not on $n$) such that there is a unique positive number $h_{\text{opt}} = \mathrm{argmin}_{h \in [0,\infty)} \LS(h)$ satisfying \begin{equation*} h_{\text{opt}} = s_{\text{opt}} n^{-1 / (d+4)} \quad \text{ and } \quad h_0 = h_{\text{opt}} (1+o(1)) \qquad \text{ as } n \to \infty, \end{equation*} where $h_0$ is any minimizer of $\mathbb{E}[ \mu_{f_0}\{\mc{L}(c)\Delta\widehat{\mc{L}}_{\boldsymbol{H}}(c)\} ]$. \end{corollary} \begin{corollary} \label{cor:hdr-oracle-bandwidth} Let the assumptions of Theorem~\ref{thm:hdr} hold. Assume further that $f_0(\boldsymbol{x}) = g( \|\boldsymbol{x}\|)$ and that the function $g(r)$, defined for $r \ge 0$, is strictly decreasing on $[0,\infty)$.
Then there exists a constant $s_{\text{opt}}$ depending on $f_0$ and $K$ (but not on $n$) such that there is a unique positive number $h_{\text{opt}} = \mathrm{argmin}_{h \in [0,\infty)} \HDR(h)$ satisfying \begin{equation*} h_{\text{opt}} = s_{\text{opt}} n^{-1 / (d+4)} \quad \text{ and } \quad h_0 = h_{\text{opt}} (1+o(1)) \qquad \text{ as } n \to \infty, \end{equation*} where $h_0$ is any minimizer of $\mathbb{E}[ \mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\} ]$. \end{corollary} \medskip \noindent The proofs of the two corollaries proceed in exactly the same way, so we provide the proof for HDR estimation and omit the one for LS estimation. The corollaries tell us the order of magnitude of the true optimal bandwidths and of the oracle bandwidths. We used the assumptions of unimodality and spherical symmetry because these assumptions imply that $f_0$, $\nabla f_0$, and $\nabla^2 f_0$ are constant on $\beta_{\tau}$ and $\beta(c)$. We believe that (an analogous form of) the conclusions of Corollaries~\ref{cor:ls-oracle-bandwidth} and \ref{cor:hdr-oracle-bandwidth} hold for $\boldsymbol{H}_{\text{opt}} \in \mathrm{argmin}_{\boldsymbol{H} \in \mc{S}} \HDR(\boldsymbol{H})$ and for $\boldsymbol{H}_{\text{opt}} \in \mathrm{argmin}_{\boldsymbol{H} \in \mc{S}} \LS(\boldsymbol{H})$, and for much more general densities $f_0$. Our simulations show that our practical bandwidth selector (studied in the next section) does not require such restrictive assumptions. \section{Bandwidth selection methodology} \label{sec:methodology} In the previous section, we provided asymptotic expansions of the symmetric-difference risks for HDR estimation and LS estimation, which can serve as guidance for bandwidth selection in those two scenarios. Minimizers of $\LS(\boldsymbol{H})$ and $\HDR(\boldsymbol{H})$ are natural bandwidth selectors for HDR estimation and LS estimation, respectively.
The theoretical performance of the bandwidth selector using ``oracle'' knowledge of the functionals of the true density is studied in Corollaries~\ref{cor:ls-oracle-bandwidth} and \ref{cor:hdr-oracle-bandwidth}. Of course, in practice, one does not have this oracle knowledge. In the present section, we develop an effective practical bandwidth selection procedure for HDR estimation (a procedure for level set estimation is simpler and can be derived in a similar way). We will also study the theoretical performance of our bandwidth selector restricted to the simplified class $\mc{S}_1=\{h^2\bs{I} : h>0\}$. Since $\HDR(\boldsymbol{H})$ depends on unknown quantities, a natural ``plug-in'' approach is to estimate those quantities using different kernel density estimators and plug the estimates in. Moreover, the unknown functionals depend on the truth through $f_0$, $\nabla f_0$, and $\nabla^2f_0$, so we will use three pilot kernel density estimators. To be specific, we use $\widehat{f}_{n,\boldsymbol{H}_0}$ to estimate $f_{\tau,0}$ and $\mc{L}_{\tau}$; we use $\nabla\widehat{f}_{n,\boldsymbol{H}_1}$ to estimate $\nabla f_0$, and $\widehat{f}_{n,\boldsymbol{H}_1}$, combined with the pilot estimator of $f_{\tau,0}$, to estimate $\beta_{\tau}$; we use $\nabla^2\widehat{f}_{n,\boldsymbol{H}_2}$ to estimate $\nabla^2 f_0$. Here $\boldsymbol{H}_0$, $\boldsymbol{H}_1$ and $\boldsymbol{H}_2$ are the corresponding pilot bandwidth matrices for the three kernel density estimators. (One could also use three different kernels for $\widehat{f}_{n,\boldsymbol{H}_i}$, $i=0,1,2$, but we will use the same kernel for all three.) For our theoretical results to hold, we require only that the bandwidth matrix $\boldsymbol{H}_r$ be of the optimal order for estimating the $r$th derivatives of $f_0$ (see Corollary~\ref{cor:hdr-bandwidth-selector} and Assumption~\ref{assm:HA2}, below). We use two-stage direct plug-in estimators, which converge at the correct rate, for the pilot bandwidths in our algorithm below.
A detailed description of plug-in estimators can be found in \citet[Chapter 3]{Wand:1995kv} and \cite{chacon2010multivariate}. Once we have those estimated functionals, we can plug them into $\HDR(\boldsymbol{H})$ to obtain an estimated loss function $\widehat{\HDR}(\boldsymbol{H})$. Note that $\boldsymbol{H}$ appears in the integrand of a Hausdorff integral and cannot be factored out of the integral; thus minimizing $\widehat{\HDR}(\boldsymbol{H})$ directly is infeasible. Instead, we minimize a discretized approximation to $\widehat{\HDR}(\boldsymbol{H})$. To illustrate this idea, we use the minimization of $\HDR(\boldsymbol{H})$ as an example. Let $\mc{A}=\{A_i\}_{i=1}^m$ be a partition of $\beta_{\tau}$ such that $\mc{H}(A_i)$ is sufficiently small for $i=1,2,\ldots,m$. Then $w_0=(\int_{\beta_{\tau}}\frac{1}{\|\nabla f_0\|}\,d\mc{H})^{-1}$ can be approximated by $ \tilde{w}_0=(\sum_{i=1}^m\inv{\|\nabla f_0(\tilde{\boldsymbol{x}}_i)\|}\mc{H}(A_i))^{-1}$, where $\tilde{\boldsymbol{x}}_i$ is an arbitrary point belonging to $A_i$. Note that for $d=2$, $\mc{H}(A_i)$ is well approximated by the length of the line segment connecting the boundary points of $A_i$. $V_1(\boldsymbol{H})$ and $V_2(\boldsymbol{H})$ can be computed approximately in similar ways. Replacing $w_0$, $V_1(\boldsymbol{H})$, and $V_2(\boldsymbol{H})$ with the corresponding discretized approximations in $C_{\boldsymbol{x}}(\boldsymbol{H})$ gives us an approximation $\tilde{C}_{\boldsymbol{x}}(\boldsymbol{H})$ for each $\boldsymbol{x}$.
Then \begin{align} \HDR(\boldsymbol{H}) &\approx \frac{f_{\tau,0}}{\sqrt{n|\boldsymbol{H}|^{1/2}}}\int_{\beta_{\tau}} \frac{2\phi(\tilde{C}_{\boldsymbol{x}}(\boldsymbol{H})) + 2\Phi(\tilde{C}_{\boldsymbol{x}}(\boldsymbol{H}))\tilde{C}_{\boldsymbol{x}}(\boldsymbol{H}) - \tilde{C}_{\boldsymbol{x}}(\boldsymbol{H})}{-A_{\boldsymbol{x}}} \, d\mc{H}(\boldsymbol{x}) \nonumber \\ &\approx \frac{f_{\tau,0}}{\sqrt{n|\boldsymbol{H}|^{1/2}}}\sum_{i=1}^m \frac{2\phi(\tilde{C}_{\tilde{\boldsymbol{x}}_i}(\boldsymbol{H})) + 2\Phi(\tilde{C}_{\tilde{\boldsymbol{x}}_i}(\boldsymbol{H}))\tilde{C}_{\tilde{\boldsymbol{x}}_i}(\boldsymbol{H}) - \tilde{C}_{\tilde{\boldsymbol{x}}_i}(\boldsymbol{H})}{-A_{\tilde{\boldsymbol{x}}_i}} \mc{H}(A_i). \label{eq:numerical-hausdorff-integral-2} \end{align} The last line above provides a computable and optimizable approximation to $\HDR(\boldsymbol{H})$, which is accurate as long as $\mc{H}(A_i)$ is small enough for each $i$. We use $K=\phi$ throughout the algorithm. \noindent The full algorithm for the HDR bandwidth selector is as follows: \begin{enumerate} \item Given an i.i.d.\ random sample $\boldsymbol{X}_1,\boldsymbol{X}_2,\ldots,\boldsymbol{X}_n$, estimate $\boldsymbol{H}_0$, $\boldsymbol{H}_1$, $\boldsymbol{H}_2$ using two-stage direct plug-in strategies. \item Obtain the pilot estimators of $f_0$, $\nabla f_0$, and $\nabla^2 f_0$ from the kernel density estimators $\widehat{f}_{n,\boldsymbol{H}_0}$, $\widehat{f}_{n,\boldsymbol{H}_1}$, $\widehat{f}_{n,\boldsymbol{H}_2}$.
\item \label{item:step3} Let $\widehat{f}_{\tau,n,\boldsymbol{H}_0}:=\mathrm{inf}\{y\in(0,\infty):\int_{\mathbb{R}^d}\widehat{f}_{n,\boldsymbol{H}_0}(\boldsymbol{x})\mathbbm{1}_{\{\widehat{f}_{n,\boldsymbol{H}_0}(\boldsymbol{x})\ge y\}}\,d\boldsymbol{x}\le 1-\tau\}$ be the pilot estimator of $f_{\tau,0}$, $\widehat{\mc{L}}_{\tau,\boldsymbol{H}_0}:=\{\boldsymbol{x}\in \mathbb{R}^d:\widehat{f}_{n,\boldsymbol{H}_0}(\boldsymbol{x})\ge \widehat{f}_{\tau,n,\boldsymbol{H}_0}\}$ be the pilot estimator of $\mc{L}_{\tau}$, and $\widehat{\beta}_{\tau,\boldsymbol{H}_1}:=\{\boldsymbol{x}\in\mathbb{R}^d:\widehat{f}_{n,\boldsymbol{H}_1}(\boldsymbol{x})=\widehat{f}_{\tau,n,\boldsymbol{H}_0}\}$ be the pilot estimator of $\beta_{\tau}$. \item Substitute the estimators from Steps 2 and 3 into the expressions for $C_{\boldsymbol{x}}$ and $A_{\boldsymbol{x}}$ to obtain $\widehat{C}_{\boldsymbol{x}}$ and $\widehat{A}_{\boldsymbol{x}}$. Then \begin{equation*} \widehat{\HDR}(\boldsymbol{H}) =\frac{\widehat{f}_{\tau,n, \boldsymbol{H}_0}}{\sqrt{n|\boldsymbol{H}|^{1/2}}}\int_{\widehat{\beta}_{\tau,\boldsymbol{H}_1}} \frac{2\phi(\widehat{C}_{\boldsymbol{x}}(\boldsymbol{H})) + 2\Phi(\widehat{C}_{\boldsymbol{x}}(\boldsymbol{H}))\widehat{C}_{\boldsymbol{x}}(\boldsymbol{H}) - \widehat{C}_{\boldsymbol{x}}(\boldsymbol{H})}{-\widehat{A}_{\boldsymbol{x}}} \, d\mc{H}(\boldsymbol{x}). \end{equation*} \item Minimize the discretized approximation of $ \widehat{\HDR}(\boldsymbol{H})$ described in the previous paragraph using Newton's method to obtain the estimated optimal HDR bandwidth. \end{enumerate} Note that in Step~\ref{item:step3} of the above procedure, unlike the pilot estimator of $\mc{L}_{\tau}$, the pilot estimator of $\beta_{\tau}$ is obtained using $\widehat{f}_{n,\boldsymbol{H}_1}$ with $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$ as the level.
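Step~3 requires the level with $\widehat{f}_{n,\boldsymbol{H}_0}$-probability content $1-\tau$. Given density values on a grid, this can be computed by bisection over the level; a sketch (the function name and grid are ours), checked against the standard bivariate normal, for which the $\tau$-HDR level is exactly $\tau/(2\pi)$:

```python
import numpy as np

def hdr_level(f_vals, cell_area, tau, tol=1e-6):
    """Find y with ∫ f·1{f >= y} ≈ 1 - tau by bisection over y,
    integrating the gridded density values `f_vals` numerically."""
    lo, hi = 0.0, f_vals.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        mass = (f_vals * (f_vals >= mid)).sum() * cell_area
        if mass > 1 - tau:
            lo = mid   # level too low: super-level set carries too much mass
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sanity check: for the standard bivariate normal, P(f0(X) >= y) = 1 - 2*pi*y,
# so the tau-HDR level is exactly tau / (2*pi)
xs = np.linspace(-5, 5, 401)
XX, YY = np.meshgrid(xs, xs)
f = np.exp(-0.5 * (XX ** 2 + YY ** 2)) / (2 * np.pi)
cell = (xs[1] - xs[0]) ** 2
print(hdr_level(f, cell, tau=0.1))   # close to 0.1 / (2*pi)
```

In practice the gridded values would be those of the pilot estimator $\widehat{f}_{n,\boldsymbol{H}_0}$ rather than the true density.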
The reason we use $\widehat{f}_{n,\boldsymbol{H}_1}$ instead of $\widehat{f}_{n,\boldsymbol{H}_0}$ is that the error bound for estimating $\beta_{\tau}$ depends on the difference between the gradient of the true density and that of the kernel density estimator, and using $\widehat{f}_{n,\boldsymbol{H}_1}$ yields a better error bound (see Lemma~\ref{lem:Hauss-diff} and the proofs of Corollaries~\ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector} for details). Newton's method does not guarantee the optimum will be a positive definite bandwidth matrix. Fortunately, in practice the global minimum appears always to be positive definite. The objective function $\widehat{\HDR}$ appears to be locally convex although not globally convex (see Figures~\ref{fig:level-risk} and \ref{fig:hdr-risk} for some plots of $\LS(\cdot)$ and $\HDR(\cdot)$), so one has to be slightly careful about starting values for Newton's algorithm. Notice also that in Step~\ref{item:step3} of the above algorithm we need to calculate the level $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$ having $\widehat{f}_{n,\boldsymbol{H}_0}$-probability $1-\tau$. \citet{Hyndman:1996bf} suggests two similar methods for calculating $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$. One is to use an appropriate empirical quantile of the values $\widehat{f}_{n,\boldsymbol{H}_0}(\boldsymbol{X}_i),$ $i=1,\ldots, n$ (``Approach H1''). An approach of this type is studied by \cite{Cadre:2013fv} (and by \cite{Chen:2016vv} in calculating his $\hat{\alpha}_n(x)$). However, this estimator is not equal to $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$, and we have not yet quantified the difference, so we choose not to use this approach. Alternatively, \citet{Hyndman:1996bf} suggests resampling $\tilde{\boldsymbol{X}}_1, \ldots, \tilde{\boldsymbol{X}}_M \stackrel{\text{iid}}{\sim} \ffnH$, and then using the appropriate empirical quantile of $\widehat{f}_{n,\boldsymbol{H}_0}(\tilde{\boldsymbol{X}}_i),$ $i=1,\ldots, M$ (``Approach H2'').
Any desired accuracy can be attained by taking $M$ large enough. Another method is simply to use numerical integration: one can do a binary search over $(0, \| \widehat{f}_{n,\boldsymbol{H}_0} \|_\infty )$, computing the integral (numerically) at each level until one arrives at $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$ within the desired accuracy. When $d=2$, we found numerical integration with binary search to be the fastest method for calculating $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$. We suspect that in higher dimensions Approach H2 will be faster than numerical integration. Of course, Approach H1 is faster than the other two, and so it would be helpful to study how the Approach H1 estimator compares to $\widehat{f}_{\tau,n,\boldsymbol{H}_0}$. In our pilot estimation process when $d=2$, we use numerical interpolation to generate points on $\widehat \beta_{\tau, \boldsymbol{H}_1}$ and to calculate $\mc{A}$. In more detail: we generate dense grid points along both the $x$-axis and the $y$-axis, and we estimate the density values at those grid points. Then we perform interpolation between grid points to get points such that the estimated density values at those points are (approximately) $\widehat f_{\tau,n, \boldsymbol{H}_0}$, and those points induce a partition of $\widehat \beta_{\tau, \boldsymbol{H}_1}$. Then for any $A_i$ in the partition, $A_i$ is defined by two end points, and $\mc{H}(A_i)$ can be approximated by the length of the line segment connecting those two end points. By generating sufficiently dense, equally spaced grid points, we expect those line segments will approximate the true partition $\mc{A}$ well and thus the Hausdorff integral will also be well approximated. However, this method is hard to implement in dimension larger than $2$ because there is no simple approximation for the volumes of the corresponding partition sets of $\widehat \beta_{\tau,\boldsymbol{H}_1}$.
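The segment-length construction just described amounts to a midpoint rule for Hausdorff integrals over curves in $\mathbb{R}^2$. A sketch (the function name is ours), with a toy check on a circle of radius $2$, where $\int_\beta 1\, d\mc{H} = 4\pi$:

```python
import numpy as np

def hausdorff_integral(points, gamma):
    """Approximate ∫_β γ dH for a closed curve β in R², given ordered
    points on the curve: H(A_i) ≈ length of the segment joining
    consecutive points, with γ evaluated at the segment midpoint."""
    nxt = np.roll(points, -1, axis=0)
    seg_len = np.linalg.norm(nxt - points, axis=1)   # ≈ H(A_i)
    mid = 0.5 * (points + nxt)                       # evaluation point in A_i
    return np.sum(gamma(mid) * seg_len)

# toy check: circle of radius 2, gamma ≡ 1, so the integral is 4*pi
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = 2.0 * np.column_stack([np.cos(theta), np.sin(theta)])
print(hausdorff_integral(circle, lambda x: np.ones(len(x))))
```

Refining the point set shows the expected second-order accuracy: halving the segment lengths reduces the error by roughly a factor of four.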
One approach that may be fruitful for solving this problem is to use Quasi-Monte Carlo integration to calculate the Hausdorff integral \citep[see][]{de2018quasi}. The idea is to generate a set of points $\bs{b}_1,\ldots,\bs{b}_m$ on the manifold $\beta$ such that those points are approximately uniformly distributed, and then to approximate $\int_{\beta}\gamma(\boldsymbol{x})\,d\mc{H}$ by $\frac{1}{m}\sum_{i=1}^m\gamma(\bs{b}_i)$. Analysis and numerical simulations of the method have been carried out for special Hausdorff integrals over particular manifolds (cone, cylinder, sphere, and torus). Further work is needed to extend the method to the more general manifolds that arise in our problem; we believe this is non-trivial and beyond the scope of this paper. Note that the method just described for computing the approximation \eqref{eq:numerical-hausdorff-integral-2} can be implemented as a so-called midpoint method of numerical integration, for which classical analysis shows an error rate of $O(m^{-2})$ (where $m$ is the number of equi-sized partitioning sets of the interval), provided that the function being integrated has bounded second derivative and the domain being integrated is a compact interval in $\mathbb{R}$ \citep{Hammerlin:1991kz}. The same error applies for using the midpoint method to numerically compute Hausdorff integrals over one-dimensional compact manifolds embedded in $\mathbb{R}^2$, by the change-of-variables theorem (Theorem 2, page 99, of \cite{Evans:2015uy}). Thus the selected bandwidths in the corollaries below will also have an error depending on $m$, but in our experience $m$ can be chosen large enough that this is negligible (when $d=2$), so we do not include it in the analysis. To give the asymptotic performance of our bandwidth selector, we need the following additional assumptions. \begin{assumption}{D3} \label{assm:DA3} The true density function $f_0$ has four continuous, bounded, and square-integrable derivatives.
\end{assumption} \begin{assumption}{K2} \label{assm:KA2} $K$ is symmetric, i.e., $K(x_1,\ldots,x_i,\ldots,x_d)=K(x_1,\ldots,-x_i,\ldots,x_d)$ for $i=1,\ldots,d$. All first and second partial derivatives of $K$ are square integrable. \end{assumption} \begin{assumption}{H2} \label{assm:HA2} For $r=0,1,2$, the bandwidth matrix $\boldsymbol{H}_r$ is symmetric and positive definite, such that $\boldsymbol{H}_r \to 0$ elementwise, and $n^{-1} |\boldsymbol{H}_r |^{-1/2}(\boldsymbol{H}_r^{-1})^{\otimes r}\to 0$ as $n\to\infty$, where $\otimes$ stands for Kronecker product. \end{assumption} \noindent This assumption and notation are as in \citet{chacon2011asymptotics}. Here, for a matrix $\bs{A}$, $\bs A^{\otimes 0} = 1 \in \mathbb{R}$ and $\bs A^{\otimes 1} = \bs{A}$. Now, recall that \begin{align*} \LS(h):=\LS(h^2\bs{I})=\frac{c}{(nh^d)^{1/2}}\int_{\beta(c)}\frac{2\phi(B_{\boldsymbol{x}}(h))+2\Phi(B_{\boldsymbol{x}}(h))B_{\boldsymbol{x}}(h)-B_{\boldsymbol{x}}(h)}{-A_{\boldsymbol{x}}}\,d\mc{H}(\boldsymbol{x}), \end{align*} and $B_{\boldsymbol{x}}(h)=(nh^{d+4})^{1/2}F_{\boldsymbol{x}}$ with $F_{\boldsymbol{x}}=-\frac{1}{2}\mu_2(K)\tr(\nabla^2 f_0(\boldsymbol{x}))/\sqrt{R(K)c}$. Similarly, \begin{align*} \text{HDR}(h):=\text{HDR}(h^2\bs{I})=\frac{f_{\tau,0}}{(nh^d)^{1/2}}\int_{\beta_{\tau}}\frac{2\phi(C_{\boldsymbol{x}}(h))+2\Phi(C_{\boldsymbol{x}}(h))C_{\boldsymbol{x}}(h)-C_{\boldsymbol{x}}(h)}{-A_{\boldsymbol{x}}}\,d\mc{H}(\boldsymbol{x}), \end{align*} and $ C_{\boldsymbol{x}}(h)=(nh^{d+4})^{1/2}G_{\boldsymbol{x}}$, where \begin{align*} G_{\boldsymbol{x}}=-\frac{\mu_2(K)\tr(\nabla^2f_0(\boldsymbol{x}))}{2\sqrt{R(K)f_{\tau,0}}}+\frac{w_0\int_{\beta_{\tau}}\frac{\mu_2(K)\tr(\nabla^2 f_0)}{2\|\nabla f_0\|}\,d\mc{H}+\frac{w_0}{f_{\tau,0}}\int_{\mc{L}_{\tau}}\frac{\mu_2(K)\tr(\nabla^2f_0)}{2}\,d\lambda}{\sqrt{R(K)f_{\tau,0}}}.
\end{align*} By letting $s=(nh^{d+4})^{1/2}$, we see that minimizing $\LS(h)$ is equivalent to minimizing \begin{align*} \text{AR}_{\LS}(s):=s^{-d/(d+4)}\int_{\beta(c)}\frac{2\phi(sF_{\boldsymbol{x}})+2\Phi(sF_{\boldsymbol{x}})sF_{\boldsymbol{x}}-sF_{\boldsymbol{x}}}{-A_{\boldsymbol{x}}}\, d\mc{H}(\boldsymbol{x}), \end{align*} and minimizing $\text{HDR}(h)$ is equivalent to minimizing \begin{align*} \text{AR}_{\HDR}(s):=s^{-d/(d+4)}\int_{\beta_{\tau}}\frac{2\phi(sG_{\boldsymbol{x}})+2\Phi(sG_{\boldsymbol{x}})sG_{\boldsymbol{x}}-sG_{\boldsymbol{x}}}{-A_{\boldsymbol{x}}}\, d\mc{H}(\boldsymbol{x}). \end{align*} The following corollaries show the convergence rate of the estimated optimal bandwidth for $\boldsymbol{H}\in\mc{S}_1$. \begin{corollary}\label{cor:ls-bandwidth-selector} Let Assumptions \ref{assm:DA-ls}, \ref{assm:BA}, \ref{assm:DA3}, \ref{assm:KA}, \ref{assm:KA2} and \ref{assm:HA2} hold. Assume further that $s_{\text{opt}}$ is a unique minimizer of $\text{AR}_{\LS}(s)$ for $s>0$ and $\text{AR}^{\prime \prime}_{\LS}(s_{\text{opt}})>0$. Then \begin{align*} \frac{\hat{h}_{\text{opt}}}{h_{\text{opt}}}=1+O_p\lp n^{-2/(d+8)}\right) \quad \text{and} \quad \frac{\hat{h}_{\text{opt}}}{h_{0}}=1+O_p\lp n^{-2/(d+8)}\right), \end{align*} as $n\to\infty$, where $\hat{h}_{\text{opt}}$ is the minimizer of $\widehat{\text{LS}}(h)$, $h_{\text{opt}}$ is the minimizer of $\text{LS}(h)$ and $h_{0}$ is any minimizer of $\mathbb{E}[\mu_{f_0}\{\mc{L}(c)\Delta \widehat{\mc{L}}_{\boldsymbol{H}}(c)\}]$ over the class $\mc{S}_1=\{h^2\bs{I}: h>0\}$. \end{corollary} \begin{corollary}\label{cor:hdr-bandwidth-selector} Let Assumptions \ref{assm:DA-hdr}, \ref{assm:DA3}, \ref{assm:KA}, \ref{assm:KA2} and \ref{assm:HA2} hold. Assume further that $s_{\text{opt}}$ is a unique minimizer of $\text{AR}_{\HDR}(s)$ for $s>0$ and $\text{AR}^{\prime \prime}_{\HDR}(s_{\text{opt}})>0$.
Then \begin{align*} \frac{\hat{h}_{\text{opt}}}{h_{\text{opt}}}=1+O_p\lp n^{-2/(d+8)}\right), \end{align*} as $n\to\infty$, where $\hat{h}_{\text{opt}}$ is the minimizer of $\widehat{\text{HDR}}(h)$ and $h_{\text{opt}}$ is the minimizer of $\text{HDR}(h)$. \end{corollary} \medskip \noindent Corollaries \ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector} both assume the existence of a point $s_{\text{opt}}$. Corollaries~\ref{cor:ls-oracle-bandwidth} and \ref{cor:hdr-oracle-bandwidth} show the existence of $s_{\text{opt}}$ under one set of assumptions, although (as discussed after those corollaries) this conclusion holds in many other scenarios. \begin{remark} In Corollary~\ref{cor:ls-bandwidth-selector}, we provide the rates of convergence of the estimated optimal bandwidth both to the oracle bandwidth selector and to the true minimizer of $\mathbb{E}[\mu_{f_0}\{\mc{L}(c)\Delta\widehat{\mc{L}}_{\boldsymbol{H}}(c)\}]$, while in Corollary~\ref{cor:hdr-bandwidth-selector}, we only provide the rate of convergence of the estimated optimal bandwidth to the oracle bandwidth selector. The main difficulty in proving the convergence rate of the estimated optimal bandwidth to the true minimizer of $\mathbb{E}[\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}]$, as we can see from the proof of Theorem~\ref{thm:hdr}, is understanding the $\Var \fftaun$ term. At present, we can only show that $\Var \fftaun$ is $o(\frac{1}{n|\boldsymbol{H}|^{1/2}})$, but we do not have a more explicit expression. Thus (even with higher-order derivative assumptions) we cannot say anything stronger about $\Var \fftaun$, which differs from the $d=1$ case, where $\beta_\tau$ is a discrete point set.
\end{remark} \begin{remark} The rates of convergence given in Corollaries \ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector} are known as {\it relative rates of convergence} since they are of the form $(\hat{h}_{\text{opt}} - \tilde{h}) / \tilde{h}$ for some $\tilde{h}$ (which is itself converging to $0$) \citep{Wand:1995kv}. One can compare the relative rates from Corollaries \ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector} to the relative rates of other KDE bandwidth selectors. If we plug $d=1$ into the rate $n^{-2 / (d+8)}$, we recover the rate that arose in Theorem 3 of \citet{Samworth:2010cj}. We can also make comparisons to bandwidth selector relative rates based on global loss functions. \begin{mylongform} \begin{longform} We consider the case $d=1$, where most results are available for comparison, even though we focus on the case $d \ge 2$ for our results in Corollaries \ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector}. \cite{Scott:1987vc} show that when $f_0$ has four derivatives that satisfy some further integrability conditions, then $n^{1/10} (\widehat h - h_{\text{MISE}}) / h_{\text{MISE}} = O_p(1)$, where $\widehat h$ is either the bandwidth based on a ``least squares (unbiased) cross validation'' procedure or on a ``biased cross validation'' procedure. (This also requires some assumptions on the kernel.) This slow rate of $n^{-1/10}$ can be sped up; under various differentiability assumptions on $f_0$, $n^{5/14} (\widehat h - h_{\text{MISE}}) / h_{\text{MISE}} = O_p(1)$, where $\widehat h$ is the bandwidth based on a ``two-stage direct plug-in'' procedure, a two-stage ``solve the equation'' procedure, or a ``smoothed cross validation'' procedure \citep{Wand:1995kv,Sheather:1991tp,Hall:1992gx}. To achieve this rate, certain pilot bandwidths and kernels must be chosen correctly, in addition to having appropriate smoothness of $f_0$.
(In fact, somewhat more complicated versions of these procedures can achieve root-$n$ rates of relative convergence, again with appropriate smoothness of $f_0$; but simulations show that very large sample sizes are needed for these more complicated procedures to perform well, see e.g.\ \cite[Section 3.8]{Wand:1995kv}.) The $d=1$ rate exponent of $2/9 \approx .22$ is faster than $1/10$ but slower than $5/14 \approx .36$. \end{longform} \end{mylongform} \cite{Duong:2005ir} study relative rates of convergence for various bandwidth selectors to the bandwidth matrix that minimizes mean integrated squared error, $E \int_{\mathbb{R}^d} (\ffnH(\boldsymbol{x}) - f_0(\boldsymbol{x}))^2 \, d\boldsymbol{x}$. (An alternative benchmark is the bandwidth that minimizes {\em integrated squared error}, $\int_{\mathbb{R}^d} (\widehat{f}_{n,h}(\boldsymbol{x}) - f_0(\boldsymbol{x}))^2 \, d\boldsymbol{x}$, for which e.g., LSCV performs well \citep{Hall:1987ik}, but the relative rates for that problem behave quite differently than the ones we study in Corollaries \ref{cor:ls-bandwidth-selector} and \ref{cor:hdr-bandwidth-selector}, so we do not mention them here.) Table 1 of \cite{Duong:2005ir} presents the convergence rates for plug-in, unbiased cross validation, biased cross validation, and smoothed cross validation bandwidth matrix estimators. (See also \cite{Sain:1994hs,Wand:1994tn,Duong:2003kd, Scott:1987vc,Sheather:1991tp,Hall:1992gx}.) Consider $d \ge 2$. The unbiased and biased cross validation methods have relative convergence rates of $n^{-\min(d,4)/ (2d + 8)}$. The smoothed cross validation method and the plug-in method of \cite{Duong:2003kd} both have rates of $n^{-2 / (d + 6)}$. The plug-in method of \cite{Wand:1994tn} has a rate of $n^{-4 / (d + 12)}$ which is the fastest rate for all $d$. The rate presented in our corollaries is faster than $n^{-\min(d,4)/ (2d + 8)}$ but slower than $n^{-2 / ( d + 6)}$. 
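To make the comparison above concrete, the quoted rate exponents can be evaluated numerically. The sketch below is ours and purely illustrative; note that the ordering stated in the text is checked here only at $d=2$, the dimension used in our simulations.

```python
def relative_rate_exponents(d):
    """Relative-rate exponents e (for rates n^{-e}) quoted in the text,
    for d >= 2: UCV/BCV, the plug-in rate from our corollaries,
    SCV / the Duong-Hazelton plug-in, and the Wand-Jones plug-in."""
    return {
        "ucv_bcv": min(d, 4) / (2 * d + 8),
        "our_plugin": 2 / (d + 8),
        "scv_duong": 2 / (d + 6),
        "wand_plugin": 4 / (d + 12),
    }

# At d = 2 the exponents are 1/6, 1/5, 1/4 and 2/7, so the plug-in rate of
# the corollaries sits between the cross-validation and SCV/plug-in rates.
```

A larger exponent means a faster relative rate, so at $d=2$ the ordering matches the comparison in the text.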
This suggests that more careful development of our plug-in procedure, perhaps involving more careful pilot bandwidth selection procedures, could potentially improve the asymptotic rate. However, the analysis (in particular understanding how $\Var (\fftaun)$ behaves) may not be trivial. Also, procedures with better asymptotics may be inferior until the sample size is unrealistically large (this is somewhat common in bandwidth selection settings \cite[Section 3.8]{Wand:1995kv}). \end{remark} \section{Simulations and data analysis} \label{sec:simulations-data} In Section~\ref{sec:methodology}, we used LS$(\boldsymbol{H})$ and HDR$(\boldsymbol{H})$ to develop a bandwidth selection procedure for level set and HDR estimation. We have implemented our procedure in an \proglang{R} \citep{R-core} package \pkg{lsbs}. In this section, we assess the accuracy of $\text{LS}(\boldsymbol{H})$ and $\text{HDR}(\boldsymbol{H})$ at approximating the true risks. We also use simulation to compare our procedure with the least squares cross validation (LSCV) procedure, an established ISE-based bandwidth selector \citep[see][]{Rudemo:1982,Bowman:1984iv}. We simulate from the 12 bivariate normal mixture densities constructed by \citet{Wand:1993jl}. These densities have a variety of shapes and have between 1 and 4 modes. In addition to those 12 density functions, we also simulate from \begin{align} \label{eq:sharpmode-density} \frac{2}{3}N\lp \begin{pmatrix} 0\\0 \end{pmatrix}, \begin{pmatrix} 1/4&0\\0&1 \end{pmatrix}\right)+\frac{1}{3}N\lp \begin{pmatrix} 0\\0 \end{pmatrix}, \frac{1}{50} \begin{pmatrix} 1/4&0\\0&1 \end{pmatrix}\right), \end{align} which is constructed to serve as a bivariate analogue of the sharp mode density 4 in \citet{marron1992exact} (see also Figure~1 of \citet{Samworth:2010cj}). The specific form in \eqref{eq:sharpmode-density} is chosen to match that used by \cite{Qiao:2017wq}.
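Sampling from the mixture in \eqref{eq:sharpmode-density} is straightforward, since the second component is the first with its covariance shrunk by the factor $1/50$. The standard-library Python sketch below is ours and illustrative only; it is not the simulation code used in the paper.

```python
import math
import random

def sample_sharp_mode(n, seed=0):
    """Draw n points from the sharp-mode mixture: with probability 2/3 a
    N(0, diag(1/4, 1)) draw, and with probability 1/3 the same Gaussian
    with covariance multiplied by 1/50 (coordinates scaled by 1/sqrt(50))."""
    rng = random.Random(seed)
    shrink = 1.0 / math.sqrt(50.0)
    pts = []
    for _ in range(n):
        s = 1.0 if rng.random() < 2.0 / 3.0 else shrink
        pts.append((s * rng.gauss(0.0, 0.5), s * rng.gauss(0.0, 1.0)))
    return pts
```

The shared mean and proportional covariances are what give the density its single sharp mode sitting on a broad base.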
We will close this section with a real data analysis in which we apply HDR estimation to novelty detection for the Wisconsin Diagnostic Breast Cancer dataset and the Banknote Authentication dataset, both available from the UCI Machine Learning Repository (\url{http://archive.ics.uci.edu/ml/}). \subsection{Assessment of approximation and estimation comparison} Since it is infeasible to evaluate the true symmetric risk $\mathbb{E}[\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}]$ exactly, we approximate the true risk through Monte Carlo. For given $n,\tau,\boldsymbol{H}$, for a large Monte Carlo sample size $M$, $ \mathbb{E}[\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}]\approx\inv{M}\sum_{i=1}^M\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}^{[i]}_{\tau,\boldsymbol{H}}\}, $ where $\widehat{\mc{L}}^{[1]}_{\tau,\boldsymbol{H}},\widehat{\mc{L}}^{[2]}_{\tau,\boldsymbol{H}},\ldots,\widehat{\mc{L}}^{[M]}_{\tau,\boldsymbol{H}}$ are $M$ independent realizations of $\widehat{\mc{L}}_{\tau,\boldsymbol{H}}$. In a multivariate KDE the bandwidth matrix contains $d(d+1)/2$ parameters. For the purpose of visualization, we restrict $\boldsymbol{H}\in \mc{S}_1=\{h^2\bs{I}\}$ so that it can be parametrized by a single parameter $h$. Figures~\ref{fig:level-risk} and \ref{fig:hdr-risk} compare the asymptotic risk approximation with the simulated true risk for LS estimation and HDR estimation, respectively, for Densities C, D, E and K of \citet{Wand:1993jl}. Contour plots of the densities are given in the top row of the figures. In Figure~\ref{fig:hdr-risk}, we choose $\tau$ to be 0.2, 0.5 and 0.8, while in Figure~\ref{fig:level-risk}, we use the same values of $\tau$ but with the corresponding true levels computed from the underlying true density functions.
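The Monte Carlo approximation above can be sketched in a simplified one-dimensional toy version (standard normal $f_0$, Gaussian KDE, and the $f_0$-measure of the symmetric difference computed by a Riemann sum on a grid); all numerical choices below are illustrative, not those used in the paper.

```python
import math
import random

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def kde(data, h, x):
    """One-dimensional Gaussian KDE evaluated at x."""
    return sum(phi((x - xi) / h) for xi in data) / (len(data) * h)

def mc_level_set_risk(n, h, c, M=50, grid_n=400, lo=-4.0, hi=4.0, seed=0):
    """Monte Carlo approximation of E[mu_{f0}{L(c) symm.diff. Lhat(c)}]:
    average, over M replicated KDEs on samples of size n, the f0-measure
    of the symmetric difference between true and estimated level sets."""
    rng = random.Random(seed)
    dx = (hi - lo) / grid_n
    grid = [lo + (i + 0.5) * dx for i in range(grid_n)]
    total = 0.0
    for _ in range(M):
        data = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for x in grid:
            # cell contributes f0-mass when truth and estimate disagree
            if (phi(x) >= c) != (kde(data, h, x) >= c):
                total += phi(x) * dx
    return total / M
```

As expected, the estimated risk shrinks as the sample size grows, mirroring the convergence behaviour visible in the figures.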
For both scenarios, the sample size is chosen to be 2000 and the kernel is set to be the Gaussian kernel throughout the simulation (Theorem~\ref{thm:levelset} requires $K$ to be compactly supported, but nonetheless the simulation results are not sensitive to the use of the Gaussian kernel). We can see from Figures~\ref{fig:level-risk} and \ref{fig:hdr-risk} that in both scenarios our asymptotic expansions provide a good approximation to the truth. The approximation works fairly well for small values of the bandwidth, but the discrepancy becomes obvious when $h$ is larger, unlike what was observed in the univariate simulations \citep[see][]{Samworth:2010cj}. This is consistent with our Assumption~\ref{assm:HA}, which imposes an upper bound on the largest eigenvalue of the bandwidth matrix, preventing it from converging to zero too slowly. Note also from these two figures that the optimal bandwidth chosen from the asymptotic expansion serves as a good approximation to the true optimal bandwidth, as the two are quite close in most cases. \begin{figure} \centering \includegraphics[width=\textwidth]{LevelRisk_combine_plot_color.pdf} \caption{\label{fig:level-risk}Comparison of the simulated true risk function $\mathbb{E}[\mu_{f_0}\{\mc{L}(c)\Delta\widehat{\mc{L}}_{\boldsymbol{H}}(c)\}]$ with $\text{LS}(\boldsymbol{H})$ for four densities in \citet{Wand:1993jl}. The panels in the first row are the contour plots for four densities with the contours of interest plotted in red color. The panels in the rest of the rows are the comparison plots for the simulated true risk (solid line) and $\text{LS}(\boldsymbol{H})$ (dashed line) corresponding to the density at the top of the column for $\tau=0.2,0.5,0.8$. The solid and dashed vertical lines mark the optimal bandwidths obtained from the simulated true risk and the asymptotic approximation, respectively, over the restricted class $\mc{S}_1$.
The sample size for all the cases is 2000.} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{risk_combine_plot_color.pdf} \caption{ \label{fig:hdr-risk}Comparison of the simulated true risk function $\mathbb{E}[\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}]$ with $\text{HDR}(\boldsymbol{H})$ for four densities in \citet{Wand:1993jl}. The panels in the first row are the contour plots for four densities with the contours of interest plotted in red color. The panels in the rest of the rows are the comparison plots for the simulated true risk (solid line) and $\text{HDR}(\boldsymbol{H})$ (dashed line) corresponding to the density at the top of the column for $\tau=0.2,0.5,0.8$. The solid and dashed vertical lines mark the optimal bandwidths obtained from the simulated true risk and the asymptotic approximation, respectively, over the restricted class $\mc{S}_1$. The sample size for all the cases is 2000.} \end{figure} We ran a simulation study to compare the performance of our bandwidth selection method with LSCV for all 12 densities in \citet{Wand:1993jl} and for density~\eqref{eq:sharpmode-density}. For each density function, 250 Monte Carlo samples with 2000 observations were generated. For each sample, we estimated the 0.2, 0.5, 0.8 HDR with bandwidth matrices chosen by our method and LSCV, respectively. The HDR error $\mu_{f_0}\{\mc{L}_{\tau}\Delta\widehat{\mc{L}}_{\tau,\boldsymbol{H}}\}$ was calculated for each method in each replication. Figure~\ref{fig:DensityS} shows the plot of the estimation errors generated by the two methods for density~\eqref{eq:sharpmode-density}. Figure~\ref{fig:DensityS-contour} shows the boundaries of the HDRs estimated with the HDR-tailored bandwidth and with the LSCV bandwidth from one of the simulated samples. We can see that for $\tau=0.2,0.5$, the HDR bandwidth selector greatly outperformed the LSCV bandwidth selector in each simulated instance.
For $\tau=0.8$, the HDR bandwidth performed slightly less well than the LSCV bandwidth on average. One hypothesis for why our method suffers when $\tau=.8$ is that Assumption~\ref{assm:DA-hdr} requires that $\| \nabla f_0 \| > 0$ in a neighborhood of the HDR boundary. However, when $\tau=.8$, $f_0$ is close to having gradient zero on the true HDR boundary, which is close to the density mode. \begin{figure} \centering \includegraphics[width=\textwidth]{DensityS.pdf} \caption{Plot of simulated errors generated by the HDR-tailored bandwidth and LSCV for the sharp mode density \eqref{eq:sharpmode-density}. The horizontal axis shows the errors of the HDR bandwidth and the vertical axis the errors of the LSCV bandwidth. } \label{fig:DensityS} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{DensityS_contour_est.pdf} \caption{Plot of the boundaries of the true HDR, the HDR estimated with the HDR bandwidth and the HDR estimated with the LSCV bandwidth from one simulated sample with 2000 observations. The three panels correspond to $\tau=0.2,0.5,0.8$ respectively.} \label{fig:DensityS-contour} \end{figure} It is worth noticing in Figure~\ref{fig:DensityS-contour} that the HDR estimated by our method recovers the true underlying topological structure of the density, while the HDR estimated by LSCV does a very poor job of revealing the topological structure when $\tau=.2$ or $.5$ (the LSCV estimates have many spurious separate connected components rather than a single one). Applying the Wilcoxon signed rank test to the simulated paired errors generated by our HDR bandwidth and the LSCV bandwidth showed that for $\tau=0.2$, our method outperformed LSCV for 12 out of 13 density functions; for $\tau=0.5$, our method did better for 8 out of 13 density functions; for $\tau=0.8$, our method did better for 8 out of 13 density functions. Note that for any given fixed density, it is likely that for some HDR the MISE-optimal bandwidth and the HDR-optimal bandwidth will approximately coincide.
Thus we may not expect our method to be better than LSCV for all densities and levels simultaneously. Of course, in practice one does not know whether LSCV will work well for the $\tau$ value one is interested in. Our HDR method appears to work well for lower $\tau$ values, which are the useful values in many applications of HDR estimation. For example, in novelty detection, the value of $\tau$ equals the probability of type-I error, which is often set to $0.05$ or $0.1$; in clustering analysis, $\tau$ corresponds to the fraction of the data that will be discarded during the analysis and is also set to a value close to $0$. As mentioned in the previous paragraph, this may be related to the assumption that $\| \nabla f_0 \| > 0$ on the HDR boundary. Relaxing this assumption is an important direction for future work, but seems likely to involve somewhat different approximations than the ones used in this paper. \begin{mylongform} \begin{longform} \begin{table}[H] \centering \begin{tabular}{l|cc|cc|cc|} \hline &\multicolumn{2}{|c|}{$\tau=0.2$}&\multicolumn{2}{|c|}{$\tau=0.5$}&\multicolumn{2}{|c|}{$\tau=0.8$}\\ \hline Density&Test statistic&p-value&Test statistic&p-value&Test statistic&p-value\\ \hline Density A&27743&<2.2e-16&25356&1.816e-14&26608&<2.2e-16\\ Density B&27743&<2.2e-16&25297&<2.2e-16&26361&<2.2e-16\\ Density C&23200&<2.2e-16&25419&<2.2e-16&20725&5.388e-06\\ Density D&31349&<2.2e-16&29159&<2.2e-16&799&1\\ Density E&28630&<2.2e-16&22430&1.923e-09&24220&4.503e-14\\ Density F&24756&1.159e-15&3223&1&6433&1\\ Density G&29997&<2.2e-16&14861&0.765&16265&0.3071\\ Density H&25968&<2.2e-16&11889&0.9996&10535&1\\ Density I&8419&1&3713&1&3643&1\\ Density J&29488&<2.2e-16&18411&0.008676&28905&<2.2e-16\\ Density K&24555&4.689e-15&21411&2.861e-07&19746&0.0001958\\ Density L&24658&2.3e-15&15659&0.5101&18997&0.001919\\ \hline \end{tabular} \caption{Summary of Wilcoxon tests for the errors generated by the LSCV bandwidth and the HDR bandwidth.
The alternative hypothesis is that the errors generated by the LSCV bandwidth are larger. Tests were done for 250 errors simulated from each of the 12 densities in \citet{Wand:1993jl} at $\tau=0.2,0.5,0.8$. Test statistics and the corresponding p-values are summarized in the table. Here $\widehat{f}_{n,\boldsymbol{H}_0}$ is used to estimate $\beta_{\tau}$.} \end{table} \begin{table}[H] \centering \begin{tabular}{l|cc|cc|cc|} \hline &\multicolumn{2}{|c|}{$\tau=0.5$}&\multicolumn{2}{|c|}{$\tau=0.2$}&\multicolumn{2}{|c|}{$\tau=0.8$}\\ \hline Density&Test statistic&p-value&Test statistic&p-value&Test statistic&p-value\\ \hline Density A&25385&<2.2e-16&29238&<2.2e-16&26520&<2.2e-16\\ Density B&25690&<2.2e-16&29290&<2.2e-16&23990&2.027e-13\\ Density C&27574&<2.2e-16&24585&3.814e-15&21568&1.392e-07\\ Density D&29298&<2.2e-16&31363&<2.2e-16&1486&1\\ Density E&21824&4.133e-08&29934&<2.2e-16&22241&5.154e-09\\ Density F&5344&1&25023&<2.2e-16&9866&1\\ Density G&15348&0.6168&28737&<2.2e-16&17723&0.0377\\ Density H&16244&0.3136&25987&<2.2e-16&11732&1\\ Density I&4573&1&9796&1&4485&1\\ Density J&20982&1.868e-06&29863&<2.2e-16&29830&<2.2e-16\\ Density K&21595&1.227e-07&21842&3.788e-08&22701&4.463e-10\\ Density L&16854&0.1542&24077&1.153e-13&20861&3.094e-06\\ \hline \end{tabular} \caption{Summary of Wilcoxon tests for the errors generated by the LSCV bandwidth and the HDR bandwidth. The alternative hypothesis is that the errors generated by the LSCV bandwidth are larger. Tests were done for 250 errors simulated from each of the 12 densities in \citet{Wand:1993jl} at $\tau=0.2,0.5,0.8$. Test statistics and the corresponding p-values are summarized in the table. Here $\widehat{f}_{n,\boldsymbol{H}_1}$ is used to estimate $\beta_{\tau}$.} \end{table} \end{longform} \end{mylongform} \subsection{Real data analysis} We now discuss two real datasets. The Wisconsin Diagnostic Breast Cancer data contains 699 breast cancer cases, 458 benign and 241 malignant.
Nine cancer-related features were measured for each instance. For the Banknote Authentication data, images were taken of 1372 banknotes, some fake and some genuine. Wavelet transformation tools were used to extract four descriptive features of the images. For both datasets, we reduced the original features to the first two principal components. We apply our method to perform novelty detection for the two data sets. Novelty detection is like a classification problem where only the ``normal'' class is observed in the training data. Then, for a new data point $\boldsymbol{x}_{\text{new}}$, we want to test the null hypothesis $H_0:\boldsymbol{x}_{\text{new}}\text{ is a normal point}$ (or, alternatively, to classify $\boldsymbol{x}_{\text{new}}$ as ``normal'' or ``anomalous''). For level set (HDR) based novelty detection, we can consider an oracle decision rule, or acceptance region, $A := \{\boldsymbol{x}:f_0(\boldsymbol{x})\ge c\}$ (based on knowing $f_0$); if $\boldsymbol{x}_{\text{new}}\in A$, we accept the null hypothesis, and we reject otherwise. For the breast cancer data, ``normal'' means healthy, and for the banknote data, ``normal'' means genuine. If we take $c=f_{\tau}$, then the oracle decision rule will have type-I error, or False Positive Rate (FPR), of $\tau$ (under a regularity condition). Additionally, under regularity conditions, $A$ has the minimum volume of any acceptance rule with FPR of $\tau$, since HDR's are minimum volume sets \citep{garcia2003level}. This property is beneficial for controlling the type-II error rate, or False Negative Rate (although the actual False Negative Rate depends on the unknown ``anomaly'' distribution). In this section, for each of the two data sets we use a KDE with our bandwidth selection procedure to estimate an HDR based on the ``normal'' class data and use the estimated HDR to perform classification.
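The classification scheme just described can be sketched as follows. In this illustrative sketch (ours, not the paper's implementation), the level $\hat f_{\tau}$ is taken to be the $\tau$-quantile of the fitted density evaluated at the training points, a common plug-in device, and the bandwidth is fixed by hand rather than chosen by our selector.

```python
import math
import random

def gauss2(dx, dy, h):
    """Bivariate spherical Gaussian kernel with bandwidth h."""
    return math.exp(-(dx * dx + dy * dy) / (2.0 * h * h)) / (2.0 * math.pi * h * h)

class HDRNoveltyDetector:
    """Fit a KDE on the 'normal' class, estimate the HDR level f_tau as
    the tau-quantile of the fitted density at the training points, and
    flag new points whose estimated density falls below that level."""

    def __init__(self, train, h, tau=0.1):
        self.train, self.h = train, h
        scores = sorted(self.density(p) for p in train)
        self.f_tau = scores[int(tau * len(scores))]

    def density(self, p):
        return sum(gauss2(p[0] - q[0], p[1] - q[1], self.h)
                   for q in self.train) / len(self.train)

    def is_normal(self, p):
        return self.density(p) >= self.f_tau
```

By construction, roughly a fraction $\tau$ of the training points falls outside the estimated HDR, matching the asymptotic FPR of $\tau$ discussed above.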
We delete the observations with missing values for any covariates and randomly split the data set into two parts, training data and testing data. For the Wisconsin Breast Cancer data, the training data contain 345 benign instances and the testing data contain 200 instances (half benign and half malignant). For the Banknote Authentication data, the training data contain 400 genuine instances and the testing data again contain 200 instances (half genuine and half fake). We estimate the $90\%$ HDR using our method based on the training data. The first row of Figure~\ref{fig:cancer-bank} shows the plot of the data and the boundaries of the $90\%$ HDR, which are the decision boundaries for the two classification problems. The asymptotic FPR in these two classification problems is $\tau=0.1$. For the Wisconsin Breast Cancer data, on the test data, the observed FPR is $0.09$ and the True Positive Rate (TPR) is $0.99$. For the Banknote Authentication data, the observed FPR is $0.04$, and the observed TPR is $0.61$. We also generated full ROC curves for the two datasets, which are shown in the second row of Figure~\ref{fig:cancer-bank}. The ROC curves are based on $30$ different splits of the data into training and test sets (with the reported FPR and TPR given by the averages over the 30 test sets). The ROC curves clearly show that the Wisconsin Breast Cancer data is an example where HDR-based anomaly detection is highly effective. The Banknote data is not as easy for our method; it may be the case that using an HDR based on all four variables would improve the classification performance. We leave the very interesting question of how best to combine HDR-based classification with dimension reduction for future work.
\begin{figure} \centering \includegraphics[width=\textwidth]{cancer_bank_combine.pdf} \caption{Plot of the data and the boundary of the estimated $90\%$ HDR for the Wisconsin Diagnostic Breast Cancer Data and the Banknote Authentication Data. Solid dots correspond to training data, circles to test instances from the normal class, and crosses to test anomalies. The two panels in the second row are the corresponding ROC curves for the two classification problems.} \label{fig:cancer-bank} \end{figure} \section{Discussion} \label{sec:conclusion} In this paper, we derive asymptotic expansions of the symmetric risk for LS estimation and HDR estimation based on kernel density estimators. We provide an efficient bandwidth selection procedure using a plug-in strategy. We also study, by theory and by simulation, the performance of our bandwidth selector. Simulation studies show that both our asymptotic expansion and our bandwidth selector are effective tools. The two asymptotic risk approximations we provide may also be useful in the analysis of other procedures, developed in future work, for doing LS or HDR bandwidth selection. As discussed in the Introduction, the interesting paper \cite{Qiao:2017wq} also considers problems of bandwidth selection for KDE's via minimizing asymptotic expansions of risk functions that are based on loss functions related to level sets. \cite{Qiao:2017wq} does not consider HDR estimation. \cite{Qiao:2017wq} does consider the LS estimation problem. Our Theorem 2.1 is similar to \cite{Qiao:2017wq}'s Corollary~3.1; both results consider the LS estimation setting, and give risk expansions based on loss functions that are given by integrating the symmetric set differences against $f_0$ (or against something similar). Our theorem requires only that $f_0$ have two continuous derivatives in a neighborhood of $\beta(c)$ (which we believe to be approximately the weakest possible conditions), whereas \cite{Qiao:2017wq} requires four continuous derivatives.
On the other hand, \cite{Qiao:2017wq} allows for using higher order kernels if one has higher order smoothness of $f_0$. While \cite{Qiao:2017wq}'s Corollary~3.1 studies the same risk function approximation, $\LS( \cdot)$, that we study in our Theorem~2.1, \cite{Qiao:2017wq} does not present any algorithm for minimizing $\LS(\cdot)$ and thus presents no simulations related to $\LS(\cdot)$. Rather, \cite{Qiao:2017wq} focuses more attention on a different risk function (the ``excess risk'') approximation that allows for an analytic solution, at least when $d=2$. There are many interesting avenues for extending the work done in the present paper. We describe a few here. \begin{enumerate}[label=(\Alph*).,leftmargin=*] \item (Regression and classification) In the present paper we have considered only the density estimation context, but estimation of level sets of regression functions estimated by kernel-based methods is also interesting, as is consideration of classification problems. Regression level set estimation has received less attention than density level set estimation, although it has been studied in some settings; \cite{Cavalier:1997ef} studies multivariate nonparametric regression level set minimax rates of convergence. One method for classification is to estimate densities for different classes and then classify a point by the class density having highest value at the point. In that case, rather than estimating a level set of one density, one is estimating the $0$ level set of a difference of two densities. \citet[page 1110]{Mason:2009dk} discuss this approach to classification. In the context of an application in flow cytometry, \cite{Duong:2009ek} also study estimation of HDR's of density differences (without specifically focusing on classification). We believe the methods of this paper can be extended to those contexts. 
\item (Topological data analysis and critical points) Another important avenue of research is to consider modifications of the assumptions under which our approximations hold. Level set estimation is one of the main tools in topological data analysis (TDA). Estimation of LS's which have zero gradient (at some points) on the boundary (which is ruled out by our assumptions) is of great interest in TDA, because the topology of level sets can change as the level crosses critical points (points having zero gradient). In fact, in the context of using tools based on level set estimates, \citet[Section 5]{Wasserman:2016ua} states that ``the problem of choosing tuning parameters is one of the biggest open challenges in TDA''. Thus, developing tools for bandwidth selection when the gradient is zero would be very useful for TDA. Unfortunately, at points where the gradient is zero we cannot apply the inverse function theorem which is used in Lemma~\ref{lem:hdr-step1} (implicitly) and by several results in Appendix~\ref{app:additional-thms}, so a very different analysis than the one we completed here may be necessary in such cases. In general, there are very few theoretical works on level set estimation at levels that contain critical values (points where $\nabla f_0$ is $0$). In fact, the only one we know of is \cite{Chen:2016vv}, in which a rate of convergence of $\lambda \lb \mc{L}(c) \Delta \widehat{\mc{L}}_{\boldsymbol{H}}(c) \right\}$ (where $\lambda$ is Lebesgue measure) is derived. \item (MCMC level sets) The work in this paper is restricted to the case where $\boldsymbol{X}_1, \ldots, \boldsymbol{X}_n$ are independent. An important extension is to allow the $\boldsymbol{X}_i$ to be samples from a Markov chain. It is well known that KDE's often work similarly when the data exhibit weak dependence as when they are independent \citep{Wand:1995kv}. 
This would allow our tools for HDR estimation to be used to form credible regions based on Markov chain Monte Carlo output in Bayesian statistical analyses. At present, ad-hoc methods are often used for forming credible regions based on Markov chain Monte Carlo output. \end{enumerate}
\section{Introduction} Since the early days, the vacuum static solutions of the Einstein equations have played a fundamental role in the study of Einstein's theory, and classification theorems have been at the center of this work. In this context, the celebrated {\it uniqueness theorem of the Schwarzschild solution} asserts that the Schwarzschild black holes are the only asymptotically flat vacuum static solutions with compact but not necessarily connected horizon (Israel \cite{Israel}, Robinson et al \cite{RobinsonII}, Bunting/Masood-ul-Alam \cite{MR876598}; for a review on the history of this theorem see \cite{Robinson}). In this article and its sequel we prove a classification theorem extending Schwarzschild's uniqueness theorem to vacuum static solutions having compact but not necessarily connected horizon, without making further assumptions on their topology or asymptotics. Static solutions appear in many contexts. In Riemannian geometry they model for instance the blow up of singularities forming along sequences of Yamabe metrics \cite{MR1452867}, \cite{MR1726233}, \cite{MR1837365}, and provide interesting examples of Ricci-flat Riemannian metrics with a warped $\Sa$-factor \cite{MR1809792}. In physics they are crucial for example in the study of mass, quasi-local mass and initial data sets \cite{MR996396}, \cite{MR3064190}, \cite{MR3037574}, or in the exploration of certain high-dimensional theories \cite{PhysRevD.35.455}. A classification theorem can be relevant in any of these contexts. Stated below is the classification theorem that we shall prove. The objects to classify are {\it static black hole data sets}, which condense the notion of a static black hole at the initial data level\footnote{which is the viewpoint adopted in these articles. We classify static black hole spacetimes having a Cauchy hypersurface orthogonal to the static Killing field.
The problem of classifying static spacetimes without such a condition is not treated here; see for instance \cite{MR3077927}.}. Their definition and a discussion of the three main families in the theorem are given right after. Full technical details can be found in the background subsection \ref{SDSMT}. Previous work and references related to these articles are discussed at the end of this section. For clarity, the structure of the proof of the classification theorem is explained separately in the next subsection, \ref{TSOTP}. A detailed account of the contents of this Part I is given in subsection \ref{CSTA}. \begin{Theorem}[The classification Theorem]\label{TCTHM} Any static black hole data set is either, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=a, align=left] \item\label{FTII} a Schwarzschild black hole, or, \item\label{FTI} a Boost, or, \item\label{FTIII} of Myers/Korotkin-Nicolai type. \end{enumerate} \end{Theorem} Formally, a {\it (vacuum) static data set} $(\Sigma; g, N)$ consists of an orientable three manifold $\Sigma$, a function $N$, called the lapse, which is positive in the interior $\Sigma^{\circ}=\Sigma\setminus \partial \Sigma$ of $\Sigma$, and a Riemannian metric $g$ on $\Sigma$ satisfying the vacuum static equations, \begin{gather} \label{EEII} NRic = \nabla\nabla N,\quad \Delta N=0. \end{gather} A static data set $(\Sigma;g,N)$ gives rise to a vacuum static spacetime (${\bf Ric}=0$), \begin{equation}\label{SPTE} {\bf \Sigma}=\mathbb{R}\times \Sigma,\quad {\bf g}=-N^{2}dt^{2}+g, \end{equation} where $\partial_{t}$ is the static Killing field. Conversely, a static spacetime of the form (\ref{SPTE}) gives rise to a static data set $(\Sigma;g,N)$. Throughout this article we will work with static data sets rather than their associated spacetimes.
A {\it static black hole data set} is defined as a static data set $(\Sigma;g,N)$ such that $\partial \Sigma=\{N=0\}\neq \emptyset$ is compact and $(\Sigma;g)$ is metrically complete. In this definition no special asymptotic or global topological structure is assumed. The boundary of $\Sigma$, which is not necessarily connected, is called the horizon. Without further justification, we will say that the spacetime of a static black hole data set is a `black hole spacetime'\footnote{Indeed, the outer-communication region.}. We stress that all the analysis in these articles is carried out only on static data sets, leaving the spacetime picture aside. Let us now discuss the families \ref{FTII}, \ref{FTI} and \ref{FTIII} of static black hole data sets. The Schwarzschild static black hole data sets are spherically symmetric and asymptotically flat, and are given explicitly by, \begin{equation}\label{SCHDT} \Sigma=\mathbb{R}^{3}\setminus B(0,2m),\quad g=\frac{1}{1-2m/r}dr^{2}+r^{2}d\Omega^{2}\quad {\rm and}\quad N=\sqrt{1-2m/r} \end{equation} where $m>0$ is the mass and $B(0,2m)$ is the open ball of radius $2m$\footnote{The spacetime (\ref{SPTE}) corresponding to (\ref{SCHDT}) is just the region of exterior communication of a Schwarzschild black hole of mass $m$. The horizon is the boundary $\partial \Sigma = \{N=0\}$. Restricted to $r\geq R(t)> 2m$, the Schwarzschild space models the gravitational field of any isolated but spherically symmetric physical body of radius $R(t)$. The body itself may be undergoing a dynamical process (for instance, a star), but the spacetime outside remains spherically symmetric and thus Schwarzschild by Birkhoff's theorem. If the radius $R(t)$ goes below the threshold of $2m$, no equilibrium is possible, the body undergoes a complete gravitational collapse and a Schwarzschild black hole remains.}. The family is parametrised by the mass $m>0$. It is of course the paradigmatic family of static black hole data sets.
\begin{figure}[h] \centering \includegraphics[width=3.7cm,height=3.7cm]{Schwarzschild.jpg} \caption{A Schwarzschild black hole. The grey region is $\Sigma$ and is diffeomorphic to $\mathbb{R}^{3}$ minus the open (black) ball $B(0,2m)$. The solution is spherically symmetric and thus axisymmetric.} \label{FigureSchwarzschild} \end{figure} The flat static data \begin{equation}\label{BOOSTDEF} \Sigma=[0,\infty)\times \mathbb{R}^{2};\quad g= dx^{2}+dy^{2}+dz^{2},\quad N=x, \end{equation} is called the {\it Boost}. The spacetime (\ref{SPTE}) associated to (\ref{BOOSTDEF}) is the Rindler wedge of the Minkowski spacetime and the static Killing field is the boost generator $x\partial_{t}$, hence the name. The quotients of the Boost by any $\mathbb{Z}^{2}$ group of isometries generated by two translations along the factor $\mathbb{R}^{2}$ are data of the form, \begin{equation}\label{BOOSTQUOTIENT} \Sigma=[0,\infty)\times {\rm T}^{2},\quad g=dx^{2}+h,\quad N=x \end{equation} where $h$ is a flat metric on the two-torus ${\rm T}^{2}=\Sa\times \Sa$. As the lapse $N$ is zero on the boundary of $\Sigma$, these are static black hole data sets. They define the Boost family in the classification theorem, which is parametrised by the set of flat two-tori.
Other relevant examples of static data sets are the {\it Kasner data sets} (a complete discussion is given in subsection \ref{SSKK} of Part II), \begin{equation}\label{Kasner} \Sigma=(0,\infty)\times \mathbb{R}^{2};\quad g= dx^{2}+x^{2\alpha}dy^{2}+x^{2\beta}dz^{2},\quad N=x^{\gamma}, \end{equation} where $y$ and $z$ are coordinates on each of the factors $\mathbb{R}$ of $\mathbb{R}^{2}$, and $\alpha, \beta$ and $\gamma$ are any numbers satisfying, \begin{equation} \alpha+\beta+\gamma=1,\qquad \alpha^{2}+\beta^{2}+\gamma^{2}=1 \end{equation} \begin{figure}[h] \centering \includegraphics[width=6cm, height=6cm]{KVar.pdf} \caption{The circle that defines the range of the Kasner parameters $\alpha$, $\beta$, $\gamma$.} \label{Figure21} \end{figure} (see Figure \ref{Figure21}). The Kasner space with $(\alpha,\beta,\gamma)=(0,0,1)$ is the Boost\footnote{Indeed, one must add the set $\{0\}\times \mathbb{R}^{2}$.} and is the Kasner data with the fastest growth of the lapse (linear). We denote it by the letter $B$. The Kasner spaces $(1,0,0)$ and $(0,1,0)$, which have constant lapse and are therefore flat, are denoted respectively by the letters $A$ and $C$. As with the Boost, one can quotient a general Kasner data set to obtain data of the form, \begin{equation}\label{KASNERQUOTIENT} \Sigma=(0,\infty)\times {\rm T}^{2},\quad g=dx^{2}+h(x),\quad N=x^{\gamma} \end{equation} where $h(x)$ is a certain path of flat metrics on ${\rm T}^{2}$. This is the Kasner family, parametrised by the set of possible Kasner triples $(\alpha,\beta,\gamma)$ (a circle) times the set of flat two-tori up to isometry. The Myers/Korotkin-Nicolai data sets, which we describe a few lines below, are asymptotic to them. Finally, we also denote by $A$, $B$ and $C$ the quotients of the spaces $A$, $B$ and $C$ respectively. \begin{figure}[h] \centering \includegraphics[width=3.6cm,height=3.6cm]{Boost.jpg} \caption{A Boost black hole.
The grey region is $\Sigma$ and is diffeomorphic to a solid torus minus an open (black) solid torus.} \label{FigureBoost} \end{figure} Let us now describe the last family in the classification theorem, namely the static black hole data sets of Myers/Korotkin-Nicolai type. A static black hole data set is said to be of {\it Myers/Korotkin-Nicolai type} if its topology is that of a solid three-torus minus a finite number of balls and it is asymptotic to a Kasner space (\ref{KASNERQUOTIENT}) (see Definition \ref{KADEF}). Black holes with such properties were found by Myers in \cite{PhysRevD.35.455} and were rediscovered and further investigated by Korotkin and Nicolai in \cite{94aperiodic}, \cite{KOROTKIN1994229}. Myers' and Korotkin-Nicolai's construction first used Weyl's method to find a `periodic' static solution by superposing along a common axis an infinite number of Schwarzschild solutions separated by the same distance $L$ (see Figure \ref{UMKN}). Simple quotients then give the desired solutions with any number of holes (see Figure \ref{Myers-Korotkin-Nicolai})\footnote{As the Schwarzschild solutions are axisymmetric, they can be superposed along an axis by Weyl's method. When a finite number of holes are superposed, angle deficiencies appear on the axis between them and the resulting solution is non-smooth. This deficiency can be understood from the fact that a repulsive force would be needed to keep the holes in equilibrium. However, when infinitely many of them are superposed along the axis, say at a distance $L$ from each other, no extra force is needed and the angle deficiency is no longer present. This gives a `periodic' solution that can be quotiented to obtain M/KN solutions with any number of holes.}. The details of such data sets $(\Sigma; g ,N)$ are mostly irrelevant to us, but for the sake of completeness the main features of the data in the universal cover can be summarised as follows (see \cite{PhysRevD.35.455}, \cite{94aperiodic}).
\begin{figure}[h] \centering \includegraphics[width=2cm,height=7cm]{UMKN.jpg} \caption{A `universal M/KN data'. The grey region is $\Sigma$ and is diffeomorphic to $\mathbb{R}^{3}$ minus an infinite number of (black) open balls. The solution is axisymmetric.} \label{UMKN} \end{figure} \begin{figure}[h] \centering \includegraphics[width=3.6cm,height=3.6cm]{MyersKorotkinNicolai.jpg} \caption{A M/KN data with one hole. The grey region is $\Sigma$ and is diffeomorphic to a solid torus minus an open (black) ball. The solution is axisymmetric.} \label{Myers-Korotkin-Nicolai} \end{figure} The metric and the lapse have the form, \begin{equation} g =e^{-\omega}(e^{2k}(dx^{2}+d\rho^{2})+\rho^{2}d\phi^{2}),\qquad N=e^{\omega/2}, \end{equation} where $(x,\rho)$ are Weyl coordinates ($\rho>0$ is the radial coordinate) and $\phi\in [0,2\pi)$ is the angular coordinate. The function $\omega$ is defined through the convergent series, \begin{equation} \omega(x,\rho)=\omega_{0}(x,\rho)+\sum_{n=1}^{\infty}\big[\omega_{0}(x+nL,\rho)+\omega_{0}(x-nL,\rho)+\frac{4M}{nL}\big] \end{equation} where $\omega_{0}(x,\rho)$ is, \begin{equation} \omega_{0}=\ln \mathcal{E}_{0},\qquad \mathcal{E}_{0}(x,\rho)=\frac{\sqrt{(x-M)^{2}+\rho^{2}}+\sqrt{(x+M)^{2}+\rho^{2}}-2M}{\sqrt{(x-M)^{2}+\rho^{2}}+\sqrt{(x+M)^{2}+\rho^{2}}+2M} \end{equation} and the function $k(x,\rho)$ is found by quadratures through the equations, \begin{equation} k_{\rho}=\frac{\rho}{4}(\omega_{\rho}^{2}-\omega_{x}^{2}),\qquad k_{x}=\frac{\rho}{2}\omega_{x}\omega_{\rho}. \end{equation} The metric $g$, the lapse $N$ and the function $k$ are invariant under the translations $x\rightarrow x+L$, hence periodic. The asymptotic of the solution is Kasner and has the form, \begin{equation} g \approx c_{1}\rho^{\alpha^{2}/2-\alpha}(dx^{2}+d\rho^{2})+c_{2}\rho^{2-\alpha}d\phi^{2},\qquad N \approx c_{3}\rho^{\alpha/2} \end{equation} where $\alpha=4M/L$ and so $0< \alpha<2$.
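The convergence of the series defining $\omega$ and its $L$-periodicity can also be checked numerically. The following is a rough sketch, not part of the construction itself; the values $M=1$, $L=5$, the evaluation point and the truncation order are arbitrary choices made here for illustration:

```python
import math

M, L = 1.0, 5.0   # sample mass and separation; alpha = 4*M/L = 0.8 lies in (0, 2)

def omega0(x, rho):
    """ln E_0 for a single Schwarzschild constituent in Weyl coordinates."""
    rm = math.sqrt((x - M)**2 + rho**2)
    rp = math.sqrt((x + M)**2 + rho**2)
    return math.log((rm + rp - 2*M) / (rm + rp + 2*M))

def omega(x, rho, nmax=2000):
    """Truncation of the series defining omega; each tail term is O(1/n^3)."""
    s = omega0(x, rho)
    for n in range(1, nmax + 1):
        s += omega0(x + n*L, rho) + omega0(x - n*L, rho) + 4*M/(n*L)
    return s

x0, rho0 = 1.3, 2.0
# the partial sums stabilise as the truncation order grows ...
drift = abs(omega(x0, rho0, 2000) - omega(x0, rho0, 1000))
# ... and the limit is (numerically) invariant under x -> x + L
period_defect = abs(omega(x0 + L, rho0) - omega(x0, rho0))
```

The counterterms $4M/(nL)$ are what make the superposition summable: each bracketed term decays like $n^{-3}$, so both `drift` and `period_defect` come out far below $10^{-4}$ at this truncation.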
Note that the range of $\alpha$ excludes the Kasner spaces $A$, $B$ and $C$, and clearly those with $\gamma<0$, for which $N\rightarrow 0$ at infinity. Therefore the asymptotic of such static black hole data sets is Kasner but different from $A$, $B$, $C$ and from those Kasner spaces with $\gamma<0$. This fact was not incorporated in the definition of static black hole data sets of M/KN type. It will be shown in Part II, however, that the Kasner asymptotic of a black hole of M/KN type is indeed different from $A$ and $C$, although we cannot exclude the possibility of it being asymptotic to $B$. Of course, by the maximum principle, the Kasner asymptotic cannot be one with $\gamma<0$ (if it were, then $N$ would vanish identically on $\Sigma$, because $N=0$ on $\partial \Sigma$ and $N\rightarrow 0$ at infinity). We leave it as an open problem to prove that the only static black hole data sets asymptotic to a Boost are in fact the Boosts. The construction of Myers/Korotkin-Nicolai that we briefly described above can be generalised to allow a periodic superposition of Schwarzschild holes of different masses, provided they are kept separated from each other at the right distances. The outcomes (after quotienting) are static black hole data sets of M/KN type different from the ones just described. To embrace all these possibilities we define the {\it Myers/Korotkin-Nicolai data sets} as any axisymmetric static black hole data set obtained by Myers/Korotkin-Nicolai's method. It could be that such data sets are the only static black hole data sets of M/KN type. We leave this as an open problem (see Problem \ref{OPENPRO}). Note that the precise global geometry of the M/KN data sets won't be discussed in this article and won't play any role (for a discussion see \cite{KOROTKIN1994229}), as we will deal only with data sets of M/KN type, which are defined by abstracting the main geometric features of the M/KN data sets.
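As a consistency check, the fact that the Kasner data (\ref{Kasner}) satisfy the static equations (\ref{EEII}) whenever $\alpha+\beta+\gamma=1$ and $\alpha^{2}+\beta^{2}+\gamma^{2}=1$ can be verified symbolically. A minimal sketch with Python's sympy, using the concrete admissible triple $(\alpha,\beta,\gamma)=(2/3,2/3,-1/3)$ (any other point on the Kasner circle works the same way):

```python
import sympy as sp

def christoffel(g, coords):
    """Christoffel symbols Gamma[k][i][j] of a metric g in the given coordinates."""
    n, ginv = len(coords), g.inv()
    return [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], coords[j])
                                          + sp.diff(g[l, j], coords[i])
                                          - sp.diff(g[i, j], coords[l]))
                              for l in range(n))/2)
              for j in range(n)] for i in range(n)] for k in range(n)]

def ricci(g, coords):
    """Ricci tensor R_ij = d_k G^k_ij - d_j G^k_ik + G^k_kl G^l_ij - G^k_jl G^l_ik."""
    n, G = len(coords), christoffel(g, coords)
    R = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            R[i, j] = sp.simplify(
                sum(sp.diff(G[k][i][j], coords[k]) for k in range(n))
                - sum(sp.diff(G[k][i][k], coords[j]) for k in range(n))
                + sum(G[k][k][l]*G[l][i][j] for k in range(n) for l in range(n))
                - sum(G[k][j][l]*G[l][i][k] for k in range(n) for l in range(n)))
    return R

def hessian(N, g, coords):
    """Covariant Hessian (nabla nabla N)_ij = d_i d_j N - G^k_ij d_k N."""
    n, G = len(coords), christoffel(g, coords)
    return sp.Matrix(n, n, lambda i, j: sp.simplify(
        sp.diff(N, coords[i], coords[j])
        - sum(G[k][i][j]*sp.diff(N, coords[k]) for k in range(n))))

x, y, z = sp.symbols('x y z', positive=True)
a, b, c = sp.Rational(2, 3), sp.Rational(2, 3), sp.Rational(-1, 3)  # a+b+c = 1, a^2+b^2+c^2 = 1
g = sp.diag(1, x**(2*a), x**(2*b))
N = x**c

# N Ric - Hess N should vanish identically, and so should the Laplacian of N
res = sp.simplify(N*ricci(g, (x, y, z)) - hessian(N, g, (x, y, z)))
lap = sp.simplify(sum((g.inv()*hessian(N, g, (x, y, z)))[i, i] for i in range(3)))
```

Both constraints enter: $\Delta N=0$ uses $\alpha+\beta+\gamma=1$ alone, while $NRic=\nabla\nabla N$ needs the quadratic relation as well.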
\vspace{0.2cm} The proof of the classification theorem is divided between Part I (this article) and Part II (its sequel), and each article has a clear and distinct motivation. The main purpose of this Part I, which we elaborate in detail in subsections \ref{TSOTP} and \ref{CSTA} below, is to study global properties of the lapse of static black hole data sets and their implications for the global geometry. Part II discusses, on the one hand, $\Sa$-symmetric static data sets and, on the other hand, provides a detailed study of the asymptotic of static ends. Part I uses techniques in conformal geometry and comparison geometry \`a la Bakry-\'Emery, whereas Part II uses techniques in standard comparison geometry and in convergence and collapse of Riemannian manifolds. Several sections inside each part are new and have their own interest, going beyond the main purpose of these articles. To make this clearer, the structure of the proof of the classification theorem is explained separately in subsection \ref{TSOTP} below. \vspace{0.2cm} These articles continue in a sense our work on static solutions in \cite{MR2919527}, \cite{Reiris2017}, \cite{MR3233266}, and \cite{MR3233267}. In particular, in \cite{MR3233266} and \cite{MR3233267} it was shown that asymptotic flatness in Schwarzschild's uniqueness theorem can be replaced (still preserving uniqueness) by the metric completeness of $(\Sigma;g)$ plus the condition that, outside a compact set, $\Sigma$ is diffeomorphic to $\mathbb{R}^{3}$ minus a ball. Without any topological hypothesis Schwarzschild's uniqueness of course fails. Thus \cite{MR3233266} and \cite{MR3233267} prove a classification theorem somewhere in between Schwarzschild's uniqueness theorem and the classification Theorem \ref{TCTHM}. We do not know of any attempt in the literature pointing to a general classification theorem of static vacuum black holes, except, perhaps, a conjecture stated by Anderson in \cite{MR1452867} (Conjecture 6.2), which appears to be incomplete.
Still, vacuum static solutions have been deeply investigated over the years, so to conclude this introduction let us recall former developments that are related technically or conceptually to this work. We point out connections where appropriate. Vacuum static solutions with symmetries have been investigated since the early days by Schwarzschild \cite{Schwarzschild}, Levi-Civita \cite{Levi-Civita2011a}, \cite{Levi-Civita2011b}, Kasner \cite{MR1501305}, \cite{MR1501301}, Weyl \cite{ANDP:ANDP19173591804} and many others, and there is an advanced understanding of them (for a review see \cite{Jordan2009} and references therein). Understanding static solutions without any a priori symmetry is vastly more complex. Schwarzschild's uniqueness theorem was perhaps the first general classification theorem, although it demands global assumptions. Israel's seminal work \cite{Israel} required that the lapse $N$ can be chosen as a global coordinate and therefore required a connected spherical horizon. This technical global condition on the lapse was removed later by M\"uller zum Hagen, Robinson and Seifert in \cite{Hagen1973}, keeping however the hypothesis of a connected horizon. A simpler proof of their result was found later by Robinson by means of a remarkable integral formula \cite{RobinsonII} (the proof also used previous work by K\"unzle \cite{kunzle1971}). Altogether, this proved that the only asymptotically flat solution with a connected compact horizon is Schwarzschild. The analysis of the geometry of the level sets of the lapse function, which plays a fundamental role in \cite{Israel} and \cite{RobinsonII} and in other works on static solutions as well, will also be relevant here when we study the Kasner asymptotic in subsection \ref{ENDSAK} of Part II. We will, however, follow different techniques. Other proofs of the Israel-Robinson theorem were given more recently by the author in \cite{MR2919527} and by Agostiniani and Mazzieri in \cite{2015arXiv150404563A}.
In \cite{MR2919527} techniques in comparison geometry were used, and in \cite{2015arXiv150404563A} monotonic quantities along the level sets of the lapse were introduced. Some of the arguments in this article will follow similar ideas, though they are technically distinct. The uniqueness of Schwarzschild even when multiple horizons are in principle allowed was settled by Bunting and Masood-ul-Alam \cite{MR876598}, using the positive mass theorem. As mentioned earlier, there seems to be no previous attempt in the literature to classify static black hole data sets that are not asymptotically flat, except perhaps the conjecture in \cite{MR1452867}. Connected to that work, Anderson performed a general study of static and stationary solutions in \cite{MR1809792} and \cite{MR1806984} respectively, obtaining a fundamental decay estimate for the curvature and the gradient of the logarithm of the lapse. Among other things, this establishes the first uniqueness theorem for the Minkowski solution (as a static solution) without assuming any type of asymptotic, just geodesic completeness. In \cite{Reiris2017} it was shown that Anderson's estimate holds too in any dimension, by importing techniques in comparison geometry \`a la Bakry-\'Emery that were introduced by J. Case in \cite{MR2741248} in a context somewhat related to that of static solutions. These new techniques in comparison geometry \`a la Bakry-\'Emery play a fundamental role in this Part I, as we will explain below. The global study of the lapse function that we carry out is largely based upon these ideas. \subsection{The proof's structure of the classification theorem}\label{TSOTP} The proof of the classification theorem is divided into three steps. Say $(\Sigma;g,N)$ is a static black hole data set. Then the proof requires proving that, \begin{enumerate} \item\label{Step-a} $\Sigma$ has only one end. \item\label{Step-b} The horizons are {\it weakly outermost} (see Definition \ref{DWO}).
\item\label{Step-c} The end is asymptotically flat or asymptotically Kasner. \end{enumerate} Once this is achieved, the proof of the classification theorem follows directly from known results. Indeed, assume \ref{Step-a}-\ref{Step-c} hold. If the data is asymptotically flat, it follows that it must be Schwarzschild by the uniqueness theorem. If the data is asymptotically Kasner, then it is deduced that it is either a Boost or of M/KN type as follows. First, by step \ref{Step-b} the horizons are weakly outermost, and thus by Galloway-Schoen \cite{MR2238889} and Galloway \cite{4b6cb19bc94d4cf485e58571e3062f77}, either the data is a Boost or every horizon is a totally geodesic sphere. Let us assume the data is not a Boost. If the Kasner asymptotic is different from $B$, then, as any constant $x$-coordinate torus of any Kasner space different from $B$ has positive outwards mean curvature (from (\ref{Kasner}) the mean curvature is $\theta=(\alpha+\beta)/x$, with $\alpha+\beta>0$ if $(\alpha,\beta,\gamma)\neq (0,0,1)$), we can clearly find (using the fast decay into the Kasner space) a two-torus $T$ separating $\Sigma$ into two manifolds, $\Sigma_{1}$ and $\Sigma_{2}$, with $\overline{\Sigma}_{2}$ diffeomorphic to $[0,\infty)\times T$ and $\overline{\Sigma}_{1}$ a compact manifold whose boundary consists of $T$, which has positive outwards mean curvature, and a finite number of spherical horizons. It then follows from Galloway's result \cite{MR1201655} that $\overline{\Sigma}_{1}$ is diffeomorphic to a solid three-torus minus a finite number of open three-balls\footnote{Galloway's result asserts precisely that if a static data set $(\Sigma;N,g)$ is such that $\Sigma$ is compact and $\partial \Sigma$ consists of a convex sphere plus $h$ horizons, then $\Sigma$ is diffeomorphic to a closed three-ball minus $h$ open three-balls.
If instead of having a convex spherical component of $\partial \Sigma$ there is a convex toroidal component, then one can use Galloway's argument (without any substantial change) to show that $\Sigma$ is diffeomorphic to a closed solid three-torus minus a finite number of open three-balls.\label{FN}}. Hence, $\Sigma$ is diffeomorphic to a solid three-torus minus a finite number of open three-balls. This type of topology and the Kasner asymptotic imply, by definition, that the data is of M/KN type. If the Kasner asymptotic is $B$, then there are no obvious embedded tori $T$ of positive outwards mean curvature, but it will be proved that there are in fact tori $T$ separating $\Sigma$ into $\Sigma_{1}$ and $\Sigma_{2}$ as before, having however area strictly less than the asymptotic area of the `transversal' tori over the end. This is enough to repeat Galloway's argument and conclude that indeed $\Sigma$ has the desired topology. The main motivation of this article (Part I) is to prove steps \ref{Step-a} and \ref{Step-b}. We do that in section \ref{CTPL}. The proof of step \ref{Step-c} is done in section \ref{VWAE} of Part II and requires using section \ref{S1S} of Part II at some particular points. Part II uses Part I as follows. Until subsection \ref{FTKASS}, it is either not used, or only the fact that if $\partial \Sigma$ is compact then the metric $\hg$ is complete at infinity is used. This is shown in Theorem \ref{COMN2} of subsection \ref{CMMC} of Part I. Subsection \ref{POKA}, proving the Kasner asymptotic of static black hole ends with sub-cubic volume growth, uses the completeness of $\hg$ at infinity and steps \ref{Step-a} and \ref{Step-b}. We now pass to discuss the structure of the different sections of this article and the main points behind the various proofs. \subsection{The contents and the structure of this article (Part I)}\label{CSTA} Section \ref{BACKGROUNDMATERIAL} contains the background material, including notation and terminology.
Subsection \ref{SDSMT} contains the main definitions, such as those of static black hole data set and of Kasner asymptotic, and restates the classification theorem as Theorem \ref{TCTHM2}. Subsection \ref{SAP} defines annuli and partition cuts, which are useful to study asymptotic properties. The body of the article begins in section \ref{CTPL}, where we discuss the properties of metrics $\overline{g}$ conformally related to a static metric $g$ by powers of the lapse, namely $\overline{g}=N^{-2\epsilon}g$ where $\epsilon$ is just a constant. The reasons why we study these conformal metrics are mainly the following. First, we will use the metrics $\overline{g}=N^{-2\epsilon}g$ with $\epsilon>0$ to accomplish step \ref{Step-a} (of subsection \ref{TSOTP}), that is, proving that static black hole data sets have only one end, Theorem \ref{KUNO}. Second, the proof of step \ref{Step-b}, that the horizons of black hole data sets are weakly outermost, requires proving in particular the metric completeness of $\hg=N^{2}g$ (i.e. $\epsilon=-1$) away from the boundary\footnote{Namely $(\Sigma_{\delta};\overline{g})$ is metrically complete, where $\Sigma_{\delta}$ is $\Sigma$ with a collar around the boundary removed. Note that the metric $\hg$ is singular at $\partial \Sigma$, so to speak about completeness we need to remove a collar around $\partial \Sigma$.}. This is done in Proposition \ref{SOFOR}, again using the metrics $\overline{g}=N^{-2\epsilon}g$ with $\epsilon$ in a certain range, Theorem \ref{COMN2}. Third, in section \ref{VWAE} of Part II, and because of its nice properties, we will mainly use $\hg$ to study the asymptotic of black hole data sets. Once more, it is necessary to guarantee that $\hg$ is complete at infinity. The results of Section \ref{CTPL}, in particular the investigation of the conformal metrics $\overline{g}$, rely on casting the static equations in a framework \`a la Bakry-\'Emery, and then on using some general properties of these spaces in a suitable way.
Let us make this more precise. Using $f=-\ln N$ instead of the variable $N$, the static equations read, \begin{equation}\label{EESTR} Ric^{1}_{f}=0,\qquad \Delta_{f}f=0 \end{equation} where for any $\alpha$ the $\alpha$-Bakry-\'Emery Ricci tensor $Ric^{\alpha}_{f}$ is, \begin{equation} Ric^{\alpha}_{f}:=Ric+\nabla\nabla f-\alpha\nabla f \nabla f, \end{equation} whereas the $f$-Laplacian $\Delta_{f} \phi$ of a function $\phi$ is, \begin{equation} \Delta_{f}\phi:=\Delta\phi-\langle \nabla f,\nabla \phi\rangle \end{equation} If instead of $g$ and $f=-\ln N$ we use the variables $\overline{g}=N^{-2\epsilon}g$ and $f=-(1+\epsilon)\ln N$, then the static equations become, \begin{equation}\label{EESTR2} \overline{Ric}^{\alpha}_{f}=0,\qquad \overline{\Delta}_{f} f=0 \end{equation} where $\alpha=(1-2\epsilon-\epsilon^{2})/(1+\epsilon)^{2}$. The constant $\alpha$ is positive for $\epsilon$ in the range $-1-\sqrt{2}<\epsilon<-1+\sqrt{2}$. The equations (\ref{EESTR}) and (\ref{EESTR2}) share the same structure (only the $\alpha$ is different), and this is the right way to present them in order to apply techniques \`a la Bakry-\'Emery. Spaces having $Ric^{\alpha}_{f}\geq 0$ with $\alpha>0$ have been studied in recent years in the context of comparison geometry (see \cite{MR2577473} and references therein). The crucial fact is that several well-known results that hold for spaces with $Ric\geq 0$ hold too for spaces with $Ric^{\alpha}_{f}\geq 0$, $\alpha>0$, no matter the form of $f$. Thus, one can obtain geometric information without assuming any a priori knowledge of $N$. In turn, that information is then used to prove properties of $N$. The detailed contents of Section \ref{CTPL} are as follows. Subsection \ref{BESEC} explains the structure of the conformal equations, Proposition \ref{FELIZ}.
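As a quick sanity check on the range of $\epsilon$ quoted above, the positivity of $\alpha(\epsilon)=(1-2\epsilon-\epsilon^{2})/(1+\epsilon)^{2}$ can be verified symbolically; here is a minimal sketch in Python's sympy, illustration only (note also that $\epsilon=0$ gives $\alpha=1$, recovering (\ref{EESTR})):

```python
import sympy as sp

eps = sp.symbols('epsilon', real=True)
alpha = (1 - 2*eps - eps**2) / (1 + eps)**2

# epsilon = 0 leaves the metric unchanged and recovers the alpha = 1 equations
alpha_at_zero = alpha.subs(eps, 0)

# alpha > 0 exactly where the numerator 1 - 2*eps - eps**2 is positive
# (the denominator is positive for eps != -1)
positivity = sp.solve_univariate_inequality(1 - 2*eps - eps**2 > 0, eps,
                                            relational=False)
```

The roots of $\epsilon^{2}+2\epsilon-1=0$ are $\epsilon=-1\pm\sqrt{2}$, which is exactly the open interval returned for `positivity`.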
Subsection \ref{CMACD} proves the crucial Lemma \ref{LEMMAME} (essentially due to Case), and from it a generalisation of Anderson's decay estimate for the conformally related data is obtained, Lemma \ref{CDLEMMA}. These estimates are used in subsection \ref{CMMC} to show the metric completeness of the manifolds $(\Sigma; \overline{g}=N^{-2\epsilon}g)$ for $-1-\sqrt{2}<\epsilon<-1+\sqrt{2}$ (provided $\partial \Sigma$ is compact, $N|_{\Sigma}>0$ and $(\Sigma;g)$ is metrically complete), Theorem \ref{COMN2}. Up to this point the results are for general, not necessarily black hole, data sets. Subsection \ref{APP} contains important applications to particular situations. First, in subsection \ref{CDPL}, remarks are made on the conformal data $(\Sigma; N^{-2\epsilon}g)$ of a static black hole data set $(\Sigma;g,N)$, Proposition \ref{PIV}. It is particularly stressed here that, when $\epsilon>0$ is small, the manifold $(\Sigma; N^{-2\epsilon}g)$ is still metrically complete, while the boundary becomes strictly convex (more precisely, the boundary of $\Sigma$ with a small collar around $\partial \Sigma$ removed). Then, in subsection \ref{STR}, it is proved, using the previous subsection and a generalised splitting theorem \`a la Bakry-\'Emery, that static black hole data sets have only one end, Proposition \ref{KUNO}. This accomplishes step \ref{Step-a}. In subsection \ref{HTT} it is proved, using the completeness at infinity of $(\Sigma; N^{2}g=\hg)$, that either black hole data sets are Boosts, or every horizon component is a sphere and is weakly outermost. This accomplishes step \ref{Step-b}. Finally, in subsection \ref{TAIS} it is proved that static isolated systems in GR are asymptotically flat. This application is independent of the rest of the article. Section \ref{GLOBP} proves that the lapse on static black hole data sets is bounded away from zero at infinity.
This result is not used per se in the proof of the classification theorem, although it provides an alternative proof that the metric $\hg$ on static black hole ends is complete at infinity. Section \ref{GLOBP} relies on techniques introduced in the previous section \ref{CTPL} and in a sense can be seen as another application of them. It could be useful and interesting in other contexts as well, for instance to investigate higher-dimensional black hole data sets. \vspace{.3cm} {\bf Acknowledgment} I would like to thank Hermann Nicolai, Marc Mars, Marcus Khuri, Gilbert Weinstein, Michael Anderson, Greg Galloway, Miguel Sanchez, Carla Cederbaum, Lorenzo Mazzieri, Virginia Agostiniani and John Hicks for discussions and support. Also my gratitude to Carla Cederbaum for inviting me to the conference `Static Solutions of the Einstein Equations' (T\"ubingen, 2016), to Piotr Chrusciel for inviting me to the meeting `Geometry and Relativity' (Vienna, 2017) and to Helmut Friedrich for the very kind invitation to visit the Albert Einstein Institute (Max Planck Institute, Potsdam, 2017). This work was discussed at length at these meetings. Finally, my gratitude for the support received from the Mathematical Center at the Universidad de la Rep\'ublica, Uruguay. \section{Background material}\label{BACKGROUNDMATERIAL} \subsection{Static data sets and the main Theorem}\label{SDSMT} Manifolds will always be smooth ($C^{\infty}$). Riemannian metrics as well as tensors will also be smooth. If $g$ is a Riemannian metric on a manifold $\Sigma$, then \begin{equation} \dist_{g}(p,q)= \inf\big\{\length_{g}(\gamma_{pq}):\gamma_{pq}\ \text{smooth curve joining $p$ to $q$}\big\}, \end{equation} is a distance function, where $L_{g}$ is the notation we will use for length (when it is clear from the context we will remove the sub-index $g$ and write simply $\dist$ and $L$). A Riemannian manifold $(\Sigma;g)$ is {\it metrically complete} if the metric space $(\Sigma; \dist)$ is complete.
\begin{Definition}[Static data set]\label{SDS} A static (vacuum) data set $(\Sigma;\sg,N)$ consists of an orientable three-manifold $\Sigma$, possibly with boundary, a Riemannian metric $\sg$, and a function $N$, such that, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\roman*)}, widest=a, align=left] \item $N$ is strictly positive in the interior $\Sigma^{\circ}(=\Sigma\setminus \partial\Sigma)$ of $\Sigma$, \item $(\sg,N)$ satisfy the vacuum static Einstein equations, \begin{equation} \label{SEQ} N Ric = \nabla\nabla N,\qquad \Delta N=0 \end{equation} \end{enumerate} \end{Definition} The definition is quite general. Observe in particular that $\Sigma$ and $\partial \Sigma$ could be compact or non-compact. To give an example, a data set $(\Sigma;\sg,N)$ can simply be the data inherited from any region of the Schwarzschild data. This flexibility in the definition of static data set allows us to write statements with great generality. A horizon is defined as usual. \begin{Definition}[Horizons] Let $(\Sigma;\sg,N)$ be a static vacuum data set. A horizon is a connected component of $\partial \Sigma$ on which $N$ is identically zero. \end{Definition} Note that Definition \ref{SDS} doesn't require $\partial \Sigma$ to be a horizon, though the data sets that we classify in this article are those with $\partial \Sigma$ consisting of a finite set of compact horizons ($\Sigma$ is, a posteriori, non-compact). It is known that the norm $|\nabla N|$ is constant on any horizon and different from zero. It is called the surface gravity. It is convenient to give a name to those spaces that are the final object of study of this article. Naturally, we will call them {\it static black hole} data sets. \begin{Definition}[Static black hole data sets]\label{DWO} A metrically complete static data set $(\Sigma;\sg,N)$ with $\partial \Sigma=\{N=0\}$ and $\partial \Sigma$ compact, is called a static black hole data set.
\end{Definition} The following definition, taken from \cite{4b6cb19bc94d4cf485e58571e3062f77}, recalls the notion of {\it weakly outermost} horizon. \begin{Definition}[Galloway, \cite{4b6cb19bc94d4cf485e58571e3062f77}] Let $(\Sigma; \sg, N)$ be a static black hole data set. Then, a horizon $H$ is said to be weakly outermost if there are no embedded surfaces $S$ homologous to $H$ having negative outwards mean curvature. \end{Definition} The following is the definition of Kasner asymptotic. It requires a decay towards a background Kasner space faster than any inverse power of the distance. The definition follows the intuitive notion and is written in the coordinates of the background Kasner space, very much in the way asymptotic flatness is written in Schwarzschildian coordinates. \begin{Definition}[Kasner asymptotic]\label{KADEF} A data set $(\Sigma; g,N)$ is asymptotic to a Kasner data $(\Sigma^{\mathbb{K}};g^{\mathbb{K}},N^{\mathbb{K}})$, $\Sigma^{\mathbb{K}}=(0,\infty)\times {\rm T}^{2}$, if for any $m\geq 1$ and $n\geq 0$ there are $C>0$, a bounded set $K\subset \Sigma$ and a diffeomorphism into the image $\phi:\Sigma\setminus K\rightarrow \Sigma^{\mathbb{K}}$ such that, \begin{gather} |\partial_{I}(\phi_{*}g)_{ij}-\partial_{I}g^{\mathbb{K}}_{ij}|\leq \frac{C}{x^{m}}\\ |\partial_{I}(\phi_{*}N)-\partial_{I}N^{\mathbb{K}}|\leq \frac{C}{x^{m}} \end{gather} for any multi-index $I=(i_{1},i_{2},i_{3})$ with $|I|=i_{1}+i_{2}+i_{3}\leq n$, where, if $x, y$ and $z$ are the coordinates in the Kasner space, then $\partial_{I}=\partial_{x}^{i_{1}}\partial_{y}^{i_{2}}\partial_{z}^{i_{3}}$. \end{Definition} Next is the definition of data sets of Myers/Korotkin-Nicolai type that we use.
\begin{Definition}[Black holes of M/KN type]\label{KNTDEF} A static black hole data set $(\Sigma;\sg,N)$ is of Myers/Korotkin-Nicolai type if \begin{enumerate} \item $\partial \Sigma$ consists of $h\geq 1$ weakly outermost (topologically) spherical horizons, \item $\Sigma$ is diffeomorphic to a solid three-torus minus $h$ open three-balls, \item the asymptotic is Kasner. \end{enumerate} \end{Definition} It is worth restating now the main classification theorem that we shall prove. \begin{Theorem}[The classification Theorem]\label{TCTHM2} Any static black hole data set is either, \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\Roman*)}, widest=a, align=left] \item\label{FTII2} a Schwarzschild black hole, or, \item\label{FTI2} a Boost, or, \item\label{FTIII2} of Myers/Korotkin-Nicolai type. \end{enumerate} \end{Theorem} As an outcome of the proof (see Part II), it will be shown that the Kasner asymptotic of the static black holes of type \ref{FTIII}, that is, of M/KN type, is different from the Kasner spaces $A$ and $C$ (of course, as explained earlier, it can't be asymptotic to a Kasner space with $\gamma<0$ by the maximum principle). We leave it as an open problem to prove that the only static black hole data sets asymptotic to $B$ are the Boosts. \begin{Problem} Prove that the Boosts are the only static black hole data sets asymptotic to a Boost. \end{Problem} It is also not known whether the only static vacuum black holes of type \ref{FTIII} are the Myers/Korotkin-Nicolai static black holes. We state this as an open problem. \begin{Problem}\label{OPENPRO} Prove (or disprove) that the only static vacuum black holes of type \ref{FTIII} are the Myers/Korotkin-Nicolai black holes. \end{Problem} In a large part of the article we will use the variables $(\hg,U)$, with $\hg=N^{2}g$ and $U=\ln N$, instead of the natural variables $(g,N)$. The data $(\Sigma;\hg,U)$ is the {\it harmonic presentation} of the data $(\Sigma;g,N)$.
The static equations in these variables are, \begin{gather} Ric_{\hg}=2\nabla U\nabla U,\quad \Delta_{\hg} U=0 \end{gather} and therefore the map $U:(\Sigma;\hg)\rightarrow \mathbb{R}$ is harmonic (hence the name). \subsection{Metric balls, annuli and partitions}\label{SAP} \begin{enumerate}[leftmargin=*, label={\rm \arabic*}, widest=a, align=left] \item {\sc Metric balls}. If $C$ is a set and $p$ a point, then $\dist_{g}(C,p)=\inf\{\dist_{g}(q,p):q\in C\}$. Very often we take $C=\partial \Sigma$. If $C$ is a set and $r>0$, then we define the open ball of `center' $C$ and radius $r$ as, \begin{equation} B_{g}(C,r)=\{p\in \Sigma:\dist_{g}(C,p)<r\} \end{equation} \item {\sc Annuli}. Let $(\Sigma;g)$ be a metrically complete and non-compact Riemannian manifold with non-empty boundary $\partial \Sigma$. - Let $0<a<b$; then we define the open annulus $\mathcal{A}_{g}(a,b)$ as \begin{equation} \mathcal{A}_{g}(a,b)=\{p\in \Sigma: a<\dist_{g}(p,\partial \Sigma)<b\} \end{equation} We write just $\mathcal{A}(a,b)$ when the Riemannian metric $g$ is clear from the context. - If $C$ is a connected set included in $\mathcal{A}_{g}(a,b)$, then we write, \begin{equation} \mathcal{A}^{c}_{g}(C;a,b) \end{equation} to denote the connected component of $\mathcal{A}_{g}(a,b)$ containing $C$. The set $C$ could be for instance a point $p$, in which case we write $\mathcal{A}^{c}_{g}(p;a,b)$. \item {\sc Partition cuts and end cuts}. To understand the asymptotic geometry of data sets, we will study the geometry of scaled annuli. Sometimes, however, it will be more convenient and transparent to use certain sub-manifolds instead of annuli. For this purpose we define partitions, partition cuts, end cuts, and simple end cuts. {\it Assumption}: Below we assume that $(\Sigma;g)$ is a metrically complete and non-compact Riemannian manifold with non-empty and compact boundary $\partial \Sigma$.
\begin{Definition}[Partitions] A set of connected compact submanifolds of $\Sigma$ with non-empty boundary \begin{equation} \{\mathcal{P}^{m}_{j,j+1},\ j=j_{0},j_{0}+1,\ldots;\ m=1,2,\ldots,m_{j}\geq 1\}, \end{equation} ($j_{0}\geq 0$), is a {\it partition} if, \begin{enumerate} \item $\mathcal{P}^{m}_{j,j+1}\subset \mathcal{A}(2^{1+2j},2^{4+2j})$ for every $j$ and $m$. \item $\partial \mathcal{P}^{m}_{j,j+1}\subset (\mathcal{A}(2^{1+2j},2^{2+2j})\cup \mathcal{A}(2^{3+2j},2^{4+2j}))$ for every $j$ and $m$. \item The union $\cup_{j,m}\mathcal{P}^{m}_{j,j+1}$ covers $\Sigma\setminus B(\partial \Sigma,2^{2+2j_{0}})$. \end{enumerate} \end{Definition} \begin{figure}[h] \centering \includegraphics[width=7cm, height=9cm]{Partition.pdf} \caption{The figure shows the annuli $\mathcal{A}(2^{1+2j},2^{2+2j})$, $\mathcal{A}(2^{3+2j},2^{4+2j})$ and the two components, for $m=1,2$, of $\mathcal{P}^{m}_{j,j+1}$.} \label{PARTITIONF} \end{figure} Figure \ref{PARTITIONF} shows schematically a partition. The existence of partitions is established (succinctly) as follows. Let $j_{0}\geq 0$ and let $j\geq j_{0}$. Let $f:\Sigma\rightarrow [0,\infty)$ be a (any) smooth function such that $f\equiv 1$ on $\{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}$ and $f\equiv 0$ on $\{p: \dist(p,\partial \Sigma)\geq 2^{2+2j}\}$.\footnote{Consider a partition of unity $\{\chi_{i}\}$ subordinate to a cover $\{\mathcal{B}_{i}\}$ where the neighbourhoods $\mathcal{B}_{i}$ are small enough that if $\mathcal{B}_{i}\cap \{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}\neq \emptyset$ then $\mathcal{B}_{i}\cap \{p: \dist(p,\partial \Sigma)\geq 2^{2+2j}\}=\emptyset$. Then define $f=\sum_{i\in I}\chi_{i}$, where $i\in I$ iff $\mathcal{B}_{i}\cap \{p:\dist(p,\partial \Sigma)\leq 2^{1+2j}\}\neq \emptyset$.} Let $x$ be any regular value of $f$ in $(0,1)$. 
For each $j$ let $\mathcal{Q}_{j}$ be the compact manifold obtained as the union of the closure of the connected components of $\Sigma\setminus \{f=x\}$ containing at least a component of $\partial \Sigma$. Then the manifolds $\mathcal{P}^{m}_{j,j+1}$, $m=1,\ldots,m_{j}$, are defined as the connected components of $\mathcal{Q}_{j+1}\setminus \mathcal{Q}_{j}^{\circ}$. We let $\partial^{-}\mathcal{P}^{m}_{j,j+1}$ be the union of the connected components of $\partial \mathcal{P}^{m}_{j,j+1}$ contained in $\mathcal{A}(2^{1+2j},2^{2+2j})$. Similarly, we let $\partial^{+}\mathcal{P}^{m}_{j,j+1}$ be the union of the connected components of $\partial \mathcal{P}^{m}_{j,j+1}$ contained in $\mathcal{A}(2^{3+2j},2^{4+2j})$. \begin{Definition}[Partition cuts] If $\mathcal{P}$ is a partition, then for each $j$ we let \begin{equation} \{\mathcal{S}_{jk},k=1,\ldots,k_{j}\} \end{equation} be the set of connected components of the manifolds $\partial^{-}\mathcal{P}^{m}_{j,j+1}$ for $m=1,\ldots,m_{j}$. The set of surfaces $\{\mathcal{S}_{jk},\, j\geq j_{0},\, k=1,\ldots,k_{j}\}$ is called a {\it partition cut}. \end{Definition} \begin{Definition}[End cuts] Say $\Sigma$ has only one end. Then, a subset $\{\mathcal{S}_{jk_{l}}, l=1,\ldots,l_{j}\}$ of a partition cut $\{\mathcal{S}_{jk},k=1,\ldots,k_{j}\}$ is called an end cut if, when we remove all the surfaces $\mathcal{S}_{jk_{l}}$, $l=1,\ldots,l_{j}$, from $\Sigma$, every connected component of $\partial \Sigma$ belongs to a bounded component of the resulting manifold, whereas if we remove all but one of the surfaces $\mathcal{S}_{jk_{l}}$, then at least one connected component of $\partial \Sigma$ belongs to an unbounded component of the resulting manifold. \end{Definition} If $\Sigma$ has only one end, then one can always remove, if necessary, manifolds from a partition cut $\{\mathcal{S}_{jk},k=1,\ldots,k_{j}\}$ to obtain an end cut. \begin{Definition}[Simple end cuts] Say $\Sigma$ has only one end. 
If an end cut $\{\mathcal{S}_{jk_{l}},j\geq j_{0},l=1,\ldots,l_{j}\}$ has $l_{j}=1$ for each $j\geq j_{0}$ then we say that it is a {\it simple end cut} and write simply $\{\mathcal{S}_{j}\}$. \end{Definition} If $\{\mathcal{S}_{j}\}$ is a simple end cut and $j_{0}\leq j<j'$ we let $\mathcal{U}_{j,j'}$ be the compact manifold enclosed by $\mathcal{S}_{j}$ and $\mathcal{S}_{j'}$. This notation will be used very often. \end{enumerate} \subsection{A Harnack-type estimate for the Lapse}\label{TBCPHTE1} Let $(\Sigma;\sg,N)$ be a metrically complete static data set with $\partial \Sigma$ compact. In \cite{MR1809792}, Anderson observed that, as the four-metric $N^{2}dt^{2}+g$ is Ricci-flat, Liu's ball-covering property holds \cite{MR1216638} (the compactness of $\partial \Sigma$ is necessary here because Liu's theorem is for manifolds with non-negative Ricci curvature outside a compact set). Namely, for any $b>a>\delta>0$ there are $n$ and $r_{0}$ such that for any $r\geq r_{0}$ the annulus $\mathcal{A}(ra,rb)$ can be covered by at most $n$ balls of $g$-radius $r\delta$ centred in the same annulus. Hence any two points $p$ and $q$ in a connected component of $\mathcal{A}(ra,rb)$ can be joined through a chain, say $\alpha_{pq}$, of at most $n+2$ radial geodesic segments of the balls of radius $r\delta$ covering $\mathcal{A}(ra,rb)$. On the other hand Anderson's estimate (see subsection \ref{CMACD}) implies that the $g$-gradient $|\nabla \ln N|_{g}$ is bounded by $C/r$ on $\mathcal{A}(ra,rb)$. Integrating $|\nabla \ln N|$ along the curves $\alpha_{pq}$ and using Anderson's bound we arrive at a relevant Harnack estimate controlling uniformly the quotients $N(p)/N(q)$. The estimate is due to Anderson and is summarised in the next Proposition (for further details see \cite{0264-9381-32-19-195001}). \begin{Proposition}{\rm (Anderson, \cite{MR1809792})}\label{MAXMINU11} Let $(\Sigma;g,N)$ be a metrically complete static data set with $\partial \Sigma$ compact and let $0<a<b$. 
Then, there are $r_{0}$ and $\eta>0$, such that for any $r>r_{0}$ and for any set $Z$ included in a connected component of $\mathcal{A}(ra,rb)$ we have, \begin{equation}\label{EQHARN1} \max\{N(p):p\in Z\}\leq \eta \min\{N(p):p\in Z\} \end{equation} \end{Proposition} \section{Conformal transformations by powers of the lapse}\label{CTPL} In this section we study conformal transformations of static metrics by powers of the lapse from a point of view \`a la Bakry-\'Emery. The contents are the following. Subsection \ref{BESEC} explains the structure of the conformal equations, Proposition \ref{FELIZ}. Subsection \ref{CMACD} proves Lemma \ref{LEMMAME}, and from it a generalised Anderson decay estimate for the conformally related data is obtained, Lemma \ref{CDLEMMA}. These estimates are used in subsection \ref{CMMC} to show the metric completeness of the manifolds $(\Sigma; \overline{g}=N^{-2\epsilon}g)$ for $-1-\sqrt{2}<\epsilon<-1+\sqrt{2}$ (provided $\partial \Sigma$ is compact and $N|_{\Sigma}>0$ and $(\Sigma;g)$ is metrically complete), Theorem \ref{COMN2}. Subsection \ref{APP} contains important applications. First, in subsection \ref{CDPL} a few important remarks are pointed out on the conformal data $(\Sigma; N^{-2\epsilon}g)$ of a static data set $(\Sigma;g,N)$, Proposition \ref{PIV}. It is particularly stressed here that when $\epsilon>0$ is small, the manifold $(\Sigma; N^{-2\epsilon}g)$ is still metrically complete, while the boundary becomes strictly convex (more precisely, the boundary of $\Sigma$ minus a small collar around $\partial \Sigma$). In subsection \ref{STR} it is proved, using the previous subsection and a generalised splitting theorem \`a la Bakry-\'Emery, that static black hole data sets have only one end, Proposition \ref{KUNO}. In subsection \ref{HTT} it is proved, using the completeness at infinity of $(\Sigma; N^{2}g=\hg)$, that either black hole data sets are Boosts, or every horizon component is a sphere and is weakly outermost. 
Finally in subsection \ref{TAIS} it is proved that static isolated systems in GR are asymptotically flat. This application is independent of the rest of the article. \subsection{Conformal metrics, the Bakry-\'Emery Ricci tensor and the static equations}\label{BESEC} Given a Riemannian metric $g$, a function $f$ and a constant $\alpha$, the $\alpha$-Bakry-\'Emery Ricci tensor $Ric^{\alpha}_{f}$ is defined as (see \cite{MR2577473}; note that \cite{MR2577473} uses the notation $1/N$ instead of $\alpha$), \begin{equation} Ric^{\alpha}_{f}:=Ric+\nabla\nabla f-\alpha\nabla f \nabla f, \end{equation} where the tensors $Ric$ and $\nabla$ on the right hand side are with respect to $g$. The $f$-Laplacian $\Delta_{f}$ acting on a function $\phi$ is defined as \begin{equation} \Delta_{f}\phi:=\Delta\phi-\langle \nabla f,\nabla \phi\rangle \end{equation} where again $\Delta$ on the right hand side is with respect to $g$ and $\langle\ ,\ \rangle=g(\ ,\ )$. Now observe that letting $f:=-\ln N$, the static Einstein equations (\ref{SEQ}) read \begin{equation}\label{RICCILN} Ric = -\nabla\nabla f +\nabla f\nabla f,\qquad \Delta f - \langle \nabla f,\nabla f\rangle =0 \end{equation} In the notation above, this is nothing other than saying that \begin{equation} Ric^{\alpha}_{f}=0,\qquad \Delta_{f} f=0 \end{equation} with $\alpha=1$ and $f=-\ln N$. It is an important fact that the structure of these equations is preserved along a one-parameter family of conformal transformations. The following calculation explains this fact. \begin{Proposition}\label{FELIZ} Let $(\sM; g,N)$ be a static data set. Fix $\epsilon$ and define \begin{equation} \overline{\sg}=N^{-2\epsilon}g. \end{equation} Then, \begin{equation}\label{OOO} \overline{Ric}^{\alpha}_{f}=0,\qquad \overline{\Delta}_{f} f=0 \end{equation} where $\alpha=(1-2\epsilon-\epsilon^{2})/(1+\epsilon)^{2}$ and $f=-(1+\epsilon)\ln N$. 
\end{Proposition} We used the notation $\overline{Ric}$ for $Ric_{\overline{g}}$ and $\overline{\Delta}$ for $\Delta_{\overline{g}}$. Note that when $\epsilon=-1$, we obtain $\alpha=+\infty$, $f=0$ and $\overline{Ric}^{\alpha}_{f}=\overline{Ric}-2\nabla \ln N \nabla \ln N$. In particular we recover $\overline{Ric}=2\nabla \ln N\nabla \ln N$. \begin{proof} We prove first $\overline{\Delta}_{f} f=0$. Recall from standard formulae that if $\overline{g}=e^{2\psi}g$ then for every $\phi$ we have \begin{equation}\label{EQU} e^{-2\psi}\Delta \phi = \overline{\Delta} \phi -\langle \nabla \phi,\nabla \psi\rangle_{\overline{g}} \end{equation} Making $\phi=\ln N$ and $e^{\psi}=N^{-\epsilon}$, the left hand side of (\ref{EQU}) is equal to $-|\nabla \ln N|^{2}_{\overline{g}}$ because $\Delta \ln N=-|\nabla \ln N|^{2}_{g}$. Thus (\ref{EQU}) is $\overline{\Delta} \ln N - \langle \nabla \ln N, - (1 +\epsilon)\nabla \ln N\rangle_{\overline{g}}=0$ as wished. Let us prove now $\overline{Ric}^{\alpha}_{f}=0$. 
Recall first that if $\overline{g}=e^{2\psi}g$ then \begin{equation} \overline{Ric}=Ric-(\nabla\nabla \psi-\nabla\psi\nabla\psi)-(\Delta\psi+|\nabla \psi|^{2})g \end{equation} Choosing $\psi=-\epsilon\ln N$ and replacing $Ric$ by (\ref{RICCILN}) then gives \begin{equation}\label{FAFA} \overline{Ric}=(1+\epsilon)\nabla\nabla \ln N+(1+\epsilon^{2})\nabla\ln N\nabla \ln N-(\epsilon+\epsilon^{2})|\nabla \ln N|^{2}g \end{equation} Use now the usual general formula \begin{equation} \overline{\nabla}_{i}V_{j}=\nabla_{i}V_{j}-\big[V_{j}\nabla_{i}\psi+V_{i}\nabla_{j}\psi-(V^{k}\nabla_{k}\psi)g_{ij}\big] \end{equation} with $V^{j}=\nabla_{j} \ln N$ and with $\psi=-\epsilon\ln N$, to obtain \begin{equation}\label{FAF} \nabla\nabla \ln N=\overline{\nabla}\nabla \ln N -\epsilon \big[2\nabla\ln N\nabla \ln N - |\nabla \ln N|^{2} g\big] \end{equation} Plugging (\ref{FAF}) in (\ref{FAFA}) gives \begin{equation} \overline{Ric}=(1+\epsilon)\overline{\nabla}\nabla \ln N+(1-2\epsilon-\epsilon^{2})\nabla \ln N\nabla \ln N \end{equation} which is $\overline{Ric}^{\alpha}_{f}=0$ as claimed. \end{proof} \subsection{Conformal metrics and Anderson's curvature decay}\label{CMACD} In \cite{MR1806984} Anderson proved the following fundamental quadratic curvature decay for static data sets. \begin{Lemma}[Anderson, \cite{MR1806984}]\label{LACD1} There is a constant $\eta>0$ such that for any metrically complete static data set $(\Sigma;g,N)$ we have, \begin{equation}\label{CURVDEC} |Ric|(p)\leq \frac{\eta}{\dist^{2}(p,\partial \Sigma)},\qquad |\nabla \ln N|^{2}(p)\leq \frac{\eta}{\dist^{2}(p,\partial \Sigma)}, \end{equation} for any $p\in \Sigma^{\circ}$. \end{Lemma} This decay estimate is linked to a similar one for the metric $\hg=N^{2}\sg$ that we state below. It was proved also by Anderson in \cite{MR1806984}. We require $N>0$ everywhere and not only on $\Sigma^{\circ}$, to guarantee that $\hg$ is regular on $\partial \Sigma$. 
Note that imposing $N>0$ on $\Sigma$ does not make $(\Sigma;\hg=N^{2}\sg)$ automatically metrically complete. Indeed, if $\Sigma$ is non-compact then $N$ could tend to zero over a divergent sequence of points and this may cause the metric incompleteness of the space $(\Sigma;\hg)$. \begin{Lemma}[Anderson \cite{MR1806984}]\label{LACD2} There is a constant $\eta>0$ such that, for any static data set $(\Sigma;g,N)$ with $N>0$ and for which $(\Sigma;\hg=N^{2}\sg)$ is metrically complete, we have \begin{equation}\label{CURVDEC2} |Ric_{\hg}|_{\hg}(p)\leq \frac{\eta}{\dist^{2}_{\hg}(p,\partial \Sigma)},\qquad |\nabla \ln N|_{\hg}^{2}(p)\leq \frac{\eta}{\dist^{2}_{\hg}(p,\partial \Sigma)} \end{equation} for any $p\in \Sigma^{\circ}$. \end{Lemma} The estimates (\ref{CURVDEC}) and (\ref{CURVDEC2}) are particular instances of a whole family of estimates for the conformal metrics $\overline{g}=N^{-2\epsilon}g$, with $\epsilon$ ranging in the interval $(-1-\sqrt{2},-1+\sqrt{2})$, which is the interval where the polynomial $1-2\epsilon-\epsilon^{2}$ is positive. We prove the estimates below using the results in Section \ref{BESEC}. As a byproduct we provide concise proofs of Lemmas \ref{LACD1} and \ref{LACD2}. This will be the goal of this section. We start with a lemma that to our knowledge is essentially due to J. Case \cite{MR2741248} (though similar techniques are also well known, at least in the theory of minimal surfaces). This lemma was first presented in \cite{Reiris2017}, but due to its importance we prove it again here. \begin{Lemma}\label{LEMMAME} Let $(\Sigma,g)$ be a metrically complete Riemannian three-manifold with $Ric^{\alpha}_{f}\geq 0$ for some function $f$ and constant $\alpha>0$. Let $\phi$ be a non-negative function such that \begin{equation}\label{FUNLAP} \Delta_{f}\phi\geq c\phi^{2} \end{equation} for some constant $c>0$. 
Then, for any $p\in \Sigma^{\circ}$ we have \begin{equation}\label{FUNDEST} \phi(p)\leq \frac{\eta}{\dist^{2}(p,\partial \Sigma)} \end{equation} where $\eta=(36+4/\alpha)/c$. \end{Lemma} Observe that the lemma applies also to manifolds with $Ric\geq 0$ as this corresponds to the case $Ric_{f=0}^{\alpha}\geq 0$ for any $\alpha>0$. \begin{proof} For any function $\chi$ the following general formula holds \begin{equation} \Delta_{f}(\chi\phi)=\phi(\Delta_{f} \chi)+2\langle \nabla\chi,\nabla \phi\rangle +\chi\Delta_{f}\phi \end{equation} Thus, if $\chi\geq 0$ and if $q$ is a local maximum of $\chi\phi$ on $\Sigma^{\circ}$, we have \begin{equation}\label{PRELCALC} 0\geq \bigg[\Delta_{f}(\chi\phi)\bigg]\bigg|_{q} \geq \bigg[\phi \Delta_{f}\chi - 2\frac{|\nabla \chi|^{2}}{\chi}\phi +c\chi\phi^{2}\bigg]\bigg|_{q} \end{equation} where to obtain the second inequality we used (\ref{FUNLAP}). Let $r_p=\dist(p,\partial \Sigma)$. On $B(p,r_p)$ let the function $\chi(x)$ be $\chi(x)=(r_p^{2}-r(x)^{2})^{2}$, where, to simplify notation, $r=r(x)=\dist(x,p)$. Let $q$ be a point in the closure of $B(p,r_p)$ where the maximum of $\chi\phi$ is achieved. If $\phi(q) = 0$, then $\phi = 0$ and (\ref{FUNDEST}) holds for any $\eta>0$. So let us assume that $\phi(q)>0$. In particular $q$ belongs to the interior of $B(p,r_p)$. By (\ref{PRELCALC}) we have \begin{align}\label{PREVIOUS} cr_p^{4}\phi(p) \leq c(\chi\phi)(q) & \leq \bigg[2\frac{|\nabla\chi|^{2}}{\chi}-\Delta_{f}\chi \bigg]\bigg|_{q}\\ & =\bigg[4(r_p^{2}-r^{2})r\Delta_{f}r+4r_p^{2}+20r^{2}\bigg]\bigg|_{q} \end{align} But if $Ric_{f}^{\alpha}\geq 0$ then $\Delta_{f} r\leq (3+1/\alpha)/r$, (see \cite{MR2577473} Theorem A.1; on non-smooth points of $r$ this equation holds in the barrier sense\footnote{This is an important property as it allows us to make analysis as if $r$ were a smooth function, see \cite{MR2243772}.}). 
Using this in (\ref{PREVIOUS}) and after a simple computation we deduce, \begin{equation} \phi(p)\leq \frac{(4(3+1/\alpha)+24)}{c r_p^{2}}, \end{equation} which is (\ref{FUNDEST}). \end{proof} Let us see now an application of the previous Lemma. Let $(\Sigma;g,N)$ be a static data set with $N>0$. Let $\epsilon$ be a number in $(-1-\sqrt{2},-1+\sqrt{2})$ and assume that the space ($\Sigma$; $\overline{g}=N^{-2\epsilon}g$) is metrically complete. We claim that there is $\eta(\epsilon)>0$ such that for all $p\in \Sigma^{\circ}$ we have \begin{equation}\label{AESTS} |\nabla \ln N|^{2}_{\overline{g}}(p)\leq \frac{\eta(\epsilon)}{\dist^{2}_{\overline{g}}(p,\partial \sM)} \end{equation} Let us prove the claim. Assume first $\epsilon\neq -1$. From Proposition \ref{FELIZ} we know that $\overline{Ric}^{\alpha}_{f}=0$ where $f=-(1+\epsilon)\ln N$ and where $\alpha=(1-2\epsilon-\epsilon^{2})/(1+\epsilon)^{2}$. The factor $(1-2\epsilon-\epsilon^{2})$ is greater than zero by the assumption on the range of $\epsilon$. Now use the general formula (see \cite{MR2741248}) \begin{equation}\label{BOCHNERF} \frac{1}{2}\overline{\Delta}_{f} |\nabla \phi|_{\overline{g}}^{2} = |\overline{\nabla}\nabla \phi|_{\overline{g}}^{2}+\langle\nabla \phi,\nabla(\overline{\Delta}_{f}\phi)\rangle_{\overline{g}} +\overline{Ric}^{\alpha}_{f}(\nabla\phi, \nabla \phi) +\alpha\langle \nabla f,\nabla \phi\rangle_{\overline{g}}^{2} \end{equation} with $\phi=\ln N$, together with $\overline{Ric}^{\alpha}_{f}=0$, to obtain \begin{equation} \overline{\Delta}_{f} |\nabla \ln N|_{\overline{g}}^{2}\geq 2(1-2\epsilon-\epsilon^{2})|\nabla \ln N|_{\overline{g}}^{4} \end{equation} and thus (\ref{AESTS}) follows from Lemma \ref{LEMMAME}. When $\epsilon=-1$ then $\overline{Ric}^{\alpha}_{f=0}\geq 0$ for any $\alpha>0$ and \begin{equation} \overline{\Delta}_{f=0} |\nabla \ln N|_{\overline{g}}^{2}\geq 4|\nabla \ln N|_{\overline{g}}^{4} \end{equation} The claim again follows from Lemma \ref{LEMMAME}. 
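As a consistency check, note that the two special cases $\epsilon=0$ and $\epsilon=-1$ both lie in the admissible range $(-1-\sqrt{2},-1+\sqrt{2})$, and the claim specialises to the two gradient estimates already stated: for $\epsilon=0$ we have $\overline{g}=g$ and (\ref{AESTS}) is the second estimate of Lemma \ref{LACD1}, while for $\epsilon=-1$ we have $\overline{g}=N^{2}g=\hg$ and (\ref{AESTS}) is the second estimate of Lemma \ref{LACD2}, namely, \begin{equation*} |\nabla \ln N|^{2}_{g}(p)\leq \frac{\eta(0)}{\dist^{2}_{g}(p,\partial \Sigma)},\qquad |\nabla \ln N|^{2}_{\hg}(p)\leq \frac{\eta(-1)}{\dist^{2}_{\hg}(p,\partial \Sigma)}, \end{equation*} with the explicit values $\eta(0)=\tfrac{1}{2}(36+4)=20$ and $\eta(-1)=\tfrac{36}{4}=9$ obtained by tracking the constants of Lemma \ref{LEMMAME} in the two cases.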
Note that Lemma \ref{LEMMAME} provides the following explicit expression for $\eta(\epsilon)$, \begin{equation} \eta(\epsilon)= \frac{1}{2(1-2\epsilon-\epsilon^{2})}\bigg[36+\frac{4(1+\epsilon)^{2}}{(1-2\epsilon-\epsilon^{2})}\bigg] \end{equation} What we just showed is a part of the {\it generalised Anderson's quadratic curvature decay} mentioned earlier, which we now state and prove. \begin{Lemma}\label{CDLEMMA} Let $\epsilon$ be a number in the interval $(-1-\sqrt{2},-1+\sqrt{2})$. Then there is $\eta(\epsilon)$ such that for any static data set $(\Sigma;g,N)$ with $N>0$ and for which $(\Sigma; \overline{g}=N^{-2\epsilon}\sg)$ is metrically complete, we have, \begin{equation}\label{CDLEMMAEST} |\overline{Ric}|_{\overline{g}}(p)\leq \frac{\eta(\epsilon)}{\dist^{2}_{\overline{g}}(p,\partial \Sigma)}, \qquad |\nabla \ln N|^{2}_{\overline{g}}(p)\leq \frac{\eta(\epsilon)}{\dist^{2}_{\overline{g}}(p,\partial \Sigma)}, \end{equation} for any $p\in \Sigma^{\circ}$. \end{Lemma} \begin{proof} We have already shown the second estimate of (\ref{CDLEMMAEST}). If $\partial \Sigma=\emptyset$ then $N$ is constant and $\overline{g}$ is flat. So let us assume that $\partial \Sigma\neq \emptyset$. Let $p\in \Sigma^{\circ}$. By scaling we can assume without loss of generality that $N(p)=1$ and $\overline{d}_{p}=\dist_{\overline{g}}(p,\partial \Sigma)=1$. In this setup, we need to prove that \begin{equation}\label{RMNN} |\overline{Ric}|_{\overline{\sg}}(p)\leq c_{0}(\epsilon), \end{equation} for $c_{0}$ independent of the data. The second estimate of (\ref{CDLEMMAEST}) yields, \begin{equation}\label{RAPA1} |\nabla \ln N|_{\overline{g}}(x)\leq c_{1}, \end{equation} for all $x\in B_{\overline{g}}(p,1/2)$ and where $c_{1}=c_{1}(\epsilon)$ is independent of the data. 
Therefore, as \begin{equation} \overline{Ric}=(1+\epsilon)\overline{\nabla}\nabla \ln N+(1-2\epsilon-\epsilon^{2})\nabla \ln N\nabla \ln N, \end{equation} to prove (\ref{RMNN}) it is enough to prove \begin{equation}\label{CARTOON3} |\overline{\nabla}\nabla \ln N|_{\overline{\sg}}(p)\leq c'_{0}(\epsilon) \end{equation} for a $c'_{0}(\epsilon)$ independent of the data. Let $\gamma(s)$ be a minimising $\overline{g}$-geodesic segment joining $p$ to $x\in B_{\overline{g}}(p,1/2)$. Then we can write, \begin{equation} \big|\ln \frac{N(x)}{N(p)}\big|=\big|\int \nabla_{\gamma'}\ln Nds\big|\leq \int |\nabla\ln N|_{\overline{\sg}}ds\leq c_{1}/2 \end{equation} where we used (\ref{RAPA1}). Because $N(p)=1$, this inequality gives, \begin{equation}\label{RAPA2} 0<c_{2}\leq N(x)\leq c_{3}<\infty \end{equation} for all $x\in B_{\overline{g}}(p,1/2)$ and where $c_{2}=c_{2}(\epsilon)$ and $c_{3}=c_{3}(\epsilon)$. Let $\hg=N^{2+2\epsilon}\overline{\sg}=N^{2}g$. If $\epsilon\geq -1$ let $r_{0}=c_{2}^{1+\epsilon}/2$, whereas if $\epsilon<-1$ let $r_{0}=c_{3}^{1+\epsilon}/2$. Then, clearly $B_{\hg}(p,r_{0})\subset B_{\overline{g}}(p,1/2)$. Moreover (\ref{RAPA1}) and (\ref{RAPA2}) show that for all $x\in B_{\hg}(p,r_{0})$ we have, \begin{equation}\label{CARTOON1} |\nabla \ln N|_{\hg}(x)\leq c_{4}(\epsilon). \end{equation} As $Ric_{\hg}=2\nabla \ln N \nabla \ln N$, we deduce that \begin{equation} |Ric_{\hg}|_{\hg}(x)\leq c_{5}(\epsilon) \end{equation} for all $x\in B_{\hg}(p,r_{0})$. In dimension three the Ricci tensor determines the Riemann tensor, so, \begin{equation}\label{RARA2} |Rm_{\hg}|_{\hg}(x)\leq c_{6}(\epsilon) \end{equation} Hence, by standard arguments, there is $r_{1}(\epsilon)\leq r_{0}$ such that the exponential map $exp:B^{\mathcal{T}}_{\hg}(p,r_{1})\rightarrow \Sigma$ is a diffeomorphism onto its image ($B_{\hg}^{\mathcal{T}}(p,r_{1})$ is a ball in $\mathcal{T}_{p}\Sigma$). Let $\tilde{\hg}$ be the lift of $\hg$ to $B^{\mathcal{T}}_{\hg}(p,r_{1})$ by $exp^{-1}$. 
We still have the bound (\ref{RARA2}) for $\tilde{\hg}$ and, as the injectivity radius $inj_{\hg}(p)$ is bounded from below by $r_{1}$, the {\it harmonic radius} $i_{h}(p)$, which controls the geometry in $C^{2}$ (see \cite{MR2243772}), is bounded from below by $r_{2}(\epsilon)\leq r_{1}$. As $\Delta_{\tilde{\hg}}\ln N=0$, standard elliptic estimates give \begin{equation}\label{CARTOON2} |\nabla^{\tilde{\hg}} \nabla \ln N|_{\tilde{\hg}}(p)\leq c_{7}(\epsilon), \end{equation} where $\nabla^{\tilde{\hg}}$ is the covariant derivative of $\tilde{\hg}$. Finally, (\ref{RAPA2}), (\ref{CARTOON1}), (\ref{CARTOON2}) and the general formula, \begin{equation} \overline{\nabla}\nabla\ln N=\nabla^{\hg}\nabla \ln N-(1+\epsilon)\big[2\nabla\ln N\nabla \ln N-|\nabla \ln N|^{2}_{\hg}\hg\big] \end{equation} provide the required bound (\ref{CARTOON3}). This completes the proof. \end{proof} It is easy to check using elliptic estimates that the proof of Lemma \ref{CDLEMMA} also leads to the estimates \begin{equation}\label{ESTCHEC} |\overline{\nabla}^{(k)}\overline{Ric}|_{\overline{g}}(p)\leq \frac{\eta_{k}(\epsilon)}{\dist^{2+k}_{\overline{g}}(p,\partial \Sigma)}, \qquad |\overline{\nabla}^{(k)}\nabla \ln N|^{2}_{\overline{g}}(p)\leq \frac{\eta_{k}(\epsilon)}{\dist^{2+2k}_{\overline{g}}(p,\partial \Sigma)} \end{equation} for every $k\geq 1$, where $\overline{\nabla}^{(k)}$ is $\overline{\nabla}$ applied $k$-times and where the positive constants $\eta(\epsilon)$, $\eta_{1}(\epsilon)$, $\eta_{2}(\epsilon)$, $\eta_{3}(\epsilon),\ldots$ are independent of the data set. \subsection{Conformal metrics and metric completeness}\label{CMMC} In this section we aim to prove that the metric completeness of data sets (with $N>0$ and $\partial \Sigma$ compact) implies the metric completeness of the conformal spaces $(\Sigma;\overline{\sg}=N^{-2\epsilon}\sg)$ for any $\epsilon$ in the range $(-1-\sqrt{2},-1+\sqrt{2})$. 
Note that until now, when it was necessary, we have been including the completeness of the metrics $\overline{g}$ as a hypothesis. \begin{Theorem}\label{COMN2} Let $\epsilon$ be a number in the interval $(-1-\sqrt{2},-1+\sqrt{2})$. Let $(\Sigma;g,N)$ be a metrically complete static data set with $N>0$ and $\partial \Sigma$ compact. Then $(\Sigma;\overline{\sg}=N^{-2\epsilon}\sg)$ is metrically complete. \end{Theorem} We start by proving a corollary to Lemma \ref{CDLEMMA} that estimates $N$. \begin{Corollary} {\rm (to Lemma \ref{CDLEMMA})} Let $\epsilon$ be a number in the interval $(-1-\sqrt{2},-1+\sqrt{2})$. Let $(\Sigma;g,N)$ be a static data set with $N>0$ and $\partial \Sigma$ compact, and for which $(\Sigma, \overline{g}=N^{-2\epsilon}\sg)$ is metrically complete. Then, there is $c>0$ (depending on the data) such that \begin{equation}\label{OBS} \frac{1}{c(1+\dist_{\overline{\sg}}(p,\partial \Sigma))^{\sqrt{\eta}}}\leq N(p)\leq c(1+\dist_{\overline{\sg}}(p,\partial \Sigma))^{\sqrt{\eta}} \end{equation} for any $p\in \Sigma^{\circ}$, where $\eta=\eta(\epsilon)$ is the coefficient in the decay estimate (\ref{CDLEMMAEST}) of Lemma \ref{CDLEMMA}. \end{Corollary} \begin{proof} Let $p\in \Sigma$ be such that $\overline{d}_{p}:=\dist_{\overline{\sg}}(p,\partial \Sigma)\geq 1$ (if such a point exists). Let $\gamma(\overline{s})$ be a $\overline{\sg}$-geodesic segment joining $\partial \Sigma$ to $p$ and realising the $\overline{\sg}$-distance between them (in particular $N(\gamma(\overline{d}_{p}))=N(p)$). Then we can write \begin{equation} \bigg|\ln \frac{N(\gamma(\overline{d}_{p}))}{N(\gamma(1))}\bigg|=\bigg|\int_{1}^{\overline{d}_{p}}\nabla_{\gamma'}\ln Nd\overline{s}\bigg|\leq \int_{1}^{\overline{d}_{p}}\big|\nabla \ln N\big|d\overline{s}\leq \sqrt{\eta(\epsilon)}\ln \overline{d}_{p} \end{equation} where to obtain the last inequality we have used (\ref{AESTS}). 
Therefore, \begin{equation} N(p)\leq N(\gamma(1))\overline{d}_{p}^{\sqrt{\eta}}\quad {\rm and}\quad N(p)\geq N(\gamma(1))/\overline{d}_{p}^{\sqrt{\eta}} \end{equation} Thus, \begin{equation}\label{OBSE} \overline{m}\overline{d}_{p}^{\sqrt{\eta}}\geq N(p) \geq \underline{m}/\overline{d}_{p}^{\sqrt{\eta}} \end{equation} where $\overline{m}=\max\{N(q):\dist_{\overline{\sg}}(q,\partial \Sigma)=1\}$ and $\underline{m}=\min\{N(q):\dist_{\overline{\sg}}(q,\partial \Sigma)=1\}$. This clearly implies (\ref{OBS}). Obtaining (\ref{OBS}) for all $p\in \Sigma^{\circ}$, namely even for those with $\overline{d}_{p}\leq 1$, is direct due to the compactness of $\partial \Sigma$. \end{proof} \begin{Proposition}\label{PURURU} Let $\epsilon$ be a number in the interval $(-1-\sqrt{2},-1+\sqrt{2})$. Let $(\Sigma;g,N)$ be a static data set with $N>0$ and for which $(\Sigma, \overline{g}=N^{-2\epsilon}\sg)$ is metrically complete. Then, for any $\zeta$ such that $|\zeta|\leq 1/(2\sqrt{\eta})$, the space $(\Sigma;N^{2\zeta}\overline{\sg})$ is metrically complete, where $\eta=\eta(\epsilon)$ is the coefficient in (\ref{CDLEMMAEST}). \end{Proposition} \begin{proof} Let us assume that $\Sigma$ is non-compact, otherwise there is nothing to prove. Let $\hat{\sg}=N^{2\zeta}\overline{\sg}$. To prove that $(\Sigma;\hat{\sg})$ is complete, we need to show that the following holds: for any sequence of points $p_{i}$ whose $\overline{g}$-distance to $\partial \Sigma$ diverges, the $\hat{g}$-distance to $\partial \Sigma$ also diverges. Equivalently, we need to prove that for any sequence of curves $\alpha_{i}$ starting at $\partial \Sigma$ and ending at $p_{i}$ we have \begin{equation} \int_{0}^{\overline{s}_{i}}N^{\zeta}(\alpha_{i}(\overline{s}))d\overline{s}\longrightarrow \infty \end{equation} where $\overline{s}$ is the $\overline{\sg}$-arc length of $\alpha_{i}$ counting from $\partial \Sigma$. 
From (\ref{OBS}) we get, \begin{equation} N^{\zeta}(p)\geq \frac{c^{-|\zeta|}}{(1+\dist_{\overline{\sg}}(p,\partial \Sigma))^{|\zeta|\sqrt{\eta}}} \end{equation} for all $p$. But, $\dist_{\overline{\sg}}(\alpha_{i}(\overline{s}),\partial \Sigma)\leq \overline{s}$ and $|\zeta|\leq 1/(2\sqrt{\eta})$, so we deduce, \begin{equation} N^{\zeta}(\alpha_{i}(\overline{s}))\geq \frac{c^{-|\zeta|}}{(1+\overline{s})^{1/2}} \end{equation} Thus, \begin{equation} \int_{0}^{\overline{s}_{i}}{N^{\zeta}(\alpha_{i}(\overline{s}))}d\overline{s} \geq \int_{0}^{\overline{s}_{i}} \frac{c^{-|\zeta|}}{(1+\overline{s})^{1/2}}d\overline{s}\longrightarrow \infty \end{equation} as $\overline{s}_{i}\rightarrow \infty$ as wished. \end{proof} We prove now Theorem \ref{COMN2}. \begin{proof}[Proof of Theorem \ref{COMN2}] Let $\epsilon\in (-1-\sqrt{2},-1+\sqrt{2})$. Assume $\epsilon\neq 0$, otherwise there is nothing to prove. Let $n>0$ be an integer such that for any $i=0,1,\ldots,n-1$, \begin{equation}\label{SAYS} \big|\frac{\epsilon}{n}\big|\leq \frac{1}{2\sqrt{\eta(i\epsilon/n)}} \end{equation} where $\eta$ is the coefficient in (\ref{CDLEMMAEST}). According to Proposition \ref{PURURU}, the condition (\ref{SAYS}) says that if $\overline{\sg}_{i}=N^{-2(i\epsilon/n)}g$ is complete then so is $\overline{\sg}_{i+1}=N^{-2\epsilon/n}\overline{\sg}_{i}=N^{-2(i+1)\epsilon/n}g$ for any $i=0,1,\ldots,n-1$. Therefore, as $g$ is complete, so are $\overline{\sg}_{1}$, $\overline{\sg}_{2}$, $\overline{\sg}_{3}$, up to $\overline{\sg}_{n}=N^{-2\epsilon}g$, as wished. \end{proof} \subsection{Applications}\label{APP} \subsubsection{Conformal transformations of black hole metrics}\label{CDPL} Let $(\Sigma;g,N)$ be a static black hole data set. We denote by $\Sigma_{\delta}$ the manifold resulting after removing from $\Sigma$ the $g$-tubular neighbourhood of $\partial \Sigma$ of radius $\delta$, i.e. $\Sigma_{\delta}=\Sigma\setminus B(\partial \Sigma,\delta)$. 
Let $\delta_{0}$ be small enough that $\partial \Sigma_{\delta}$ is always smooth and isotopic to $\partial \Sigma$ for any $\delta\leq\delta_{0}$. Given $\epsilon>0$ let $\overline{g}=N^{-2\epsilon}g$. Let $0<\delta<\delta_{0}$. The second fundamental form $\overline{\Theta}$ of $\partial \Sigma_{\delta}$ (with respect to $\overline{g}$ and with respect to the inward normal to $\Sigma_{\delta}$) is \begin{equation}\label{GOGO} \overline{\Theta}=N^{-\epsilon}\Theta -\epsilon\frac{\nabla_{n} N}{N^{1+\epsilon}}g \end{equation} where $\Theta$ is the second fundamental form of $\partial \Sigma_{\delta}$ with respect to $g$ and $n$ is the inward $g$-unit normal. If we let $\delta\rightarrow 0$, the function $\nabla_{n}N|_{\partial \Sigma_{\delta}}$ converges (on each connected component) to a positive constant (the surface gravity), while $N|_{\partial \Sigma_{\delta}}$ converges to zero. Hence, if $\delta$ is small enough, the second term on the right hand side of (\ref{GOGO}) dominates over the first, and the boundary $\partial \Sigma_{\delta}$ is strictly convex with respect to $\overline{\sg}$. Combining this discussion with Theorem \ref{COMN2} we deduce the following Proposition, that was proved for the first time in \cite{0264-9381-32-19-195001} and that will be used fundamentally in the next section. \begin{Proposition}\label{PIV} Let $(\Sigma;g,N)$ be a static black hole data set. Then, for every $0<\epsilon<-1+\sqrt{2}$ there is $0<\delta<\delta_{0}$ such that $(\Sigma_{\delta}; \overline{g}=N^{-2\epsilon}g)$ is metrically complete and $\partial \Sigma_{\delta}$ is strictly convex (with respect to $\overline{g}$ and with respect to the inward normal). \end{Proposition} The Riemannian spaces $(\Sigma_{\delta};\overline{g})$ carry, as discussed earlier, a distance function that we will denote by $\dist_{\overline{g}}^{\delta}$. 
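To illustrate the mechanism on the model example (a routine computation, included only as a sanity check), consider the Schwarzschild data of mass $m>0$, \begin{equation*} g=\Big(1-\frac{2m}{r}\Big)^{-1}dr^{2}+r^{2}d\Omega^{2},\qquad N=\Big(1-\frac{2m}{r}\Big)^{1/2},\qquad r\geq 2m. \end{equation*} Here $\nabla_{n}N=|\nabla N|_{g}=m/r^{2}$, which converges over $\partial \Sigma=\{r=2m\}$ to the surface gravity $\kappa=1/(4m)>0$, while $N|_{\partial \Sigma_{\delta}}\rightarrow 0$ and the second fundamental form $\Theta$ of $\partial \Sigma_{\delta}$ tends to zero (the horizon is totally geodesic). Hence the second term on the right hand side of (\ref{GOGO}) indeed dominates, and $\partial \Sigma_{\delta}$ is strictly convex with respect to $\overline{g}$ for all sufficiently small $\delta$, in accordance with Proposition \ref{PIV}.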
The strict convexity of the boundaries as well as the metric completeness of the spaces $(\Sigma_{\delta}; \overline{g})$ imply two basic, albeit important, geometric facts: \begin{enumerate} \item[(i)] The distance $\dist_{\overline{g}}^{\delta}(p,q)$ between two points in $\Sigma_{\delta}$ is always realised by the length of a geodesic segment joining $p$ to $q$ and disjoint from $\partial \Sigma_{\delta}$ except, possibly, at the end-points $p$ and $q$. \item[(ii)] Given a curve $I$ embedded in $\Sigma_{\delta}$ and with end-points $p$ and $q$, there is always a geodesic segment minimising length in the class of curves embedded in $\Sigma_{\delta}$, isotopic to $I$ and having the same end-points. The minimising segment is disjoint from $\partial \Sigma_{\delta}$ except, possibly, at the end points $p$ and $q$. \end{enumerate} These properties allow us to do analysis as if the manifold $\Sigma_{\delta}$ were in practice boundary-less, and thus to import a series of results from {\it comparison geometry}, as developed for instance in \cite{MR2577473}, without worrying about the existence of the boundary. \vspace{0.2cm} \subsubsection{The structure of infinity}\label{STR} The following proposition shows that static black hole data sets have only one end and moreover admit simple end cuts. \begin{Proposition}\label{KUNO} Let $(\sM;\sg,N)$ be a static black hole data set. Then $\sM$ has only one end. Moreover $(\Sigma;\sg)$ admits a simple end cut. \end{Proposition} \begin{proof} We work with the manifolds $(\Sigma_{\delta};\overline{g}=N^{-2\epsilon}g)$ from Proposition \ref{PIV}, with $0<\epsilon<-1+\sqrt{2}$ and $\delta=\delta(\epsilon)\leq \delta_{0}$. We argue first in a fixed $(\Sigma_{\delta}; \overline{g})$ and then let $\epsilon\rightarrow 0$. If $i_{\sM}>1$, i.e. if $\Sigma$ has at least two ends, then $\Sigma_{\delta}$ also has at least two ends. Hence $\Sigma_{\delta}$ (which has convex boundary) contains a line diverging through two of them. 
The presence of a line is relevant because, even having $\partial\Sigma_{\delta} \neq \emptyset$, the geometry of $(\Sigma_{\delta}; \overline{g},N)$ is such (recall the discussion in Section \ref{CDPL}) that the {\it Splitting Theorem} as proved in \cite{MR2577473} applies \footnote{Theorem 6.1 in \cite{MR2577473} is stated for spaces with $Ric^{0}_{f}\geq 0$ and $f$ bounded. The boundedness of $f$ is required to have a Laplacian comparison for distance functions ($\S$ \cite{MR2577473} Theorem 1.1). No such condition on $f$ (hence on $N$, because $f=-(1+\epsilon)\ln N$) is required in our case, as we have $\overline{Ric}^{0}_{f}=\alpha\nabla f\nabla f$ with $\alpha>0$ and a Laplacian comparison holds without further assumptions ($\S$ \cite{MR2577473}, Theorem A.1).}. More precisely, repeating line by line the proof of Theorem 6.1 in \cite{MR2577473}, one concludes that (see comments below after \ref{aWW}, \ref{bWW} and \ref{cWW}), \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\alph*)}, widest=a, align=left] \item\label{aWW} there is a smooth Busemann function $b^{+}_{\epsilon}$, ($b^{+}$ in the notation of \cite{MR2577473}), with $|\nabla b^{+}_{\epsilon}|_{\overline{g}}=1$ and whose level sets are totally geodesic, \item\label{bWW} the Ricci tensor is zero in the normal direction to the level sets, that is \begin{equation} \overline{Ric}(\nabla b^{+}_{\epsilon}, - )=0, \end{equation} \item\label{cWW} $N$ is constant in the normal directions to the level sets, that is $\langle \nabla b_{\epsilon}^{+}, \nabla N\rangle_{\overline{g}}=0$. \end{enumerate} The item \ref{aWW} is what is proved in Theorem 6.1 of \cite{MR2577473} and requires no comment. The items \ref{bWW} and \ref{cWW} follow instead from formula (6.11) in \cite{MR2577473} after recalling that in our case we have $\overline{Ric}^{0}_{f}=\alpha\nabla f\nabla f$, with $f=-(1+\epsilon)\ln N$ and $\alpha>0$. Of course \ref{aWW} implies that $\overline{g}$ locally splits. 
Namely, defining a coordinate $x$ by $x=b^{+}$, one can locally write $\overline{g}=dx^{2}+\overline{h}$, where $\overline{h}$ is the metric inherited from $\overline{g}$ on the level sets of $x$, which (under a natural identification) does not depend on $x$. The conclusions \ref{aWW}, \ref{bWW} and \ref{cWW} imply a contradiction as follows. Fix a point $p$ in $\Sigma_{\delta_{0}}^{\circ}$ and take a sequence $\epsilon_{i}\rightarrow 0$. Then, in a small but fixed neighbourhood $\mathcal{U}$ of $p$, the sequence $b^{+}_{\epsilon_{i}}$ sub-converges to a limit function $b^{+}_{0}$, with the same properties \ref{aWW}, \ref{bWW}, \ref{cWW} as each $b^{+}_{\epsilon_{i}}$, but now on $(\mathcal{U}; g,N)$.\footnote{The existence of the limit is easy to see because $|\nabla b^{+}_{\epsilon}|_{\overline{g}}=1$ and the level sets of $b^{+}_{\epsilon}$ are totally geodesic, (for every $\epsilon$). At every point the level set is just defined by geodesics perpendicular to $\nabla b^{+}_{\epsilon}$.} Hence $(\mathcal{U}; g)$ also splits. We claim that the Gaussian curvature $\gcur$ of the level sets of $b^{+}_{0}$ in $\mathcal{U}$ is zero. Indeed, as: (i) the level sets of $b_{0}^{+}$ are totally geodesic by \ref{aWW}, (ii) $Ric(\nabla b^{+}_{0},\nabla b^{+}_{0})=0$ by \ref{bWW}, and (iii) the scalar curvature $R$ of $g$ is zero by the static equations, the Gauss-Codazzi equations yield $\gcur=0$. As $(\mathcal{U}; g)$ is flat, the static solution is flat everywhere by analyticity. The only flat static black hole data set with compact boundary is the Boost. As Boosts have only one end we reach a contradiction. Hence $i_{\sM}=1$. Let us now prove that $(\Sigma;g)$ admits a simple end cut. Let $\{\mathcal{S}_{jk},j=0,1,2,\ldots,k=1,\ldots,k_{j}\}$ be an end cut. Suppose that $k_{j}>1$ for some $j\geq 0$.
If we cut $\sM$ along $\mathcal{S}_{j1}$ we obtain a connected manifold, say $\sM'$, with two new boundary components, say $\mathcal{S}'_{1}$ and $\mathcal{S}'_{2}$, both of which are copies of $\mathcal{S}_{j1}$ (if cutting $\sM$ along $\mathcal{S}_{j1}$ results in two connected components then $k_{j}=1$ because of how simple cuts are constructed). Consider another copy of $\sM'$, denoted by $\sM''$, and denote the corresponding new boundary components by $\mathcal{S}''_{1}$ and $\mathcal{S}''_{2}$. By gluing $\mathcal{S}_{1}'$ to $\mathcal{S}''_{2}$ and $\mathcal{S}'_{2}$ to $\mathcal{S}''_{1}$ we obtain a static solution (a double cover of the original) with two ends, and one can proceed as earlier to obtain a contradiction. \end{proof} \subsubsection{Horizon types and properties}\label{HTT} The following Proposition, about the structure of horizons, uses the completeness at infinity of $\hg$ and a pair of results due to Galloway \cite{4b6cb19bc94d4cf485e58571e3062f77}, \cite{MR1201655}. \begin{Proposition}\label{SOFOR} Let $(\sM; g, N)$ be a static black hole data set. Then, either \begin{enumerate}[labelindent=\parindent, leftmargin=*, label={\rm (\roman*)}, widest=a, align=left] \item $(\sM; g, N)$ is a Boost and therefore $\partial \sM$ is a totally geodesic flat torus, or, \item every component of $\partial \sM$ is a totally geodesic, weakly outermost, minimal sphere. \end{enumerate} \end{Proposition} \begin{proof} The idea is to prove that every component $H$ of $\partial \Sigma$ is weakly outermost. Then, it follows directly from Theorems 1.1 and 1.2 in \cite{4b6cb19bc94d4cf485e58571e3062f77} that either $H$ is a sphere, or it is a torus and the whole space is a Boost. So let us prove that every component is weakly outermost. Let $\{H_{1},\ldots,H_{h}\}$, $h\geq 1$, be the set of horizons, i.e. the connected components of $\partial \Sigma$.
Assume that there is an embedded orientable surface $\mathcal{S}$, homologous to one of the $H$'s, (say $H_{1}$), and with outer-mean curvature $\theta_{\mathcal{S}}$ strictly negative. For reference below define the negative constant $c$ as \begin{equation} c=\sup\bigg\{\frac{\theta_{\mathcal{S}}(q)}{N(q)}: q\in \mathcal{S}\bigg\} \end{equation} Let $\{\mathcal{S}_{j},j=j_{0},j_{1},\ldots\}$ be a simple end cut of $(\Sigma;g)$ (Proposition \ref{KUNO}). For each $j$, let $\Omega(\partial \Sigma,\mathcal{S}_{j})$ be the closure of the connected component of $\Sigma\setminus \mathcal{S}_{j}$ containing $\partial \Sigma$. Let $\mathcal{U}$ be the closed region enclosed by $H_{1}$ and $\mathcal{S}$ and assume that $j_{0}$ is large enough that $\mathcal{S}_{j}\cap \mathcal{U}=\emptyset$ for all $j\geq j_{0}$. For every $j\geq j_{0}$ let $\mathcal{M}_{j}$ be the closed region enclosed by $\mathcal{S},H_{2},\ldots,H_{h}$ and $\mathcal{S}_{j}$, that is $\mathcal{M}_{j}=\Omega(\partial \Sigma, \mathcal{S}_{j})\setminus \mathcal{U}^{\circ}$. Finally let \begin{equation} \hat{\mathcal{M}}_{j}=\mathcal{M}_{j}\setminus (H_{2}\cup\ldots\cup H_{h}) \end{equation} and note that now $\partial \hat{\mathcal{M}}_{j}=\mathcal{S}\cup \mathcal{S}_{j}$. On $\hat{\mathcal{M}}_{j}$ consider the optical metric $\overline{g}=N^{-2}g$. The Riemannian space $(\hat{\mathcal{M}}_{j};\overline{g})$ is metrically complete, (roughly speaking the horizons $H_{i},i\geq 2$ have been blown to infinity). Now, for every $j\geq j_{0}$ let $\gamma_{j}$ be the $\overline{g}$-geodesic segment inside $\hat{\mathcal{M}}_{j}$, realising the $\overline{g}$-distance between $\mathcal{S}$ and $\mathcal{S}_{j}$. The segments $\gamma_{j}$ are perpendicular to $\mathcal{S}$. Also, as they are length-minimising the $\overline{g}$-expansion $\overline{\theta}$ of the congruence of $\overline{g}$-geodesics emanating perpendicularly from $\mathcal{S}$, remains finite all along $\gamma_{j}$. 
Let $s\in [0,s_{j}]$ be the $g$-arc-length of $\gamma_{j}$ measured from $\mathcal{S}$. Note that $s$ is not the arc-length with respect to $\overline{g}$, which would be the more natural choice. We are going to use this parameterisation of $\gamma_{j}$ below. Observe that $s_{j}\rightarrow \infty$ as $j\rightarrow \infty$. Along $\gamma_{j}(s)$ let \begin{equation}\label{RRR} F(s)=\overline{\theta}(\gamma_{j}(s))+\frac{2}{N^{2}(\gamma_{j}(s))}\frac{d N(\gamma_{j}(s))}{ds} \end{equation} Then, as shown by Galloway \cite{MR1201655} (see also \cite{MR3077927}), the function $F$ satisfies the following differential inequality \begin{equation} \frac{dF}{ds}\leq -\frac{N}{2}F^{2} \end{equation} Now, a simple computation shows that $F(0)=\theta(0)/N(0)\leq c<0$. But from this differential inequality it is easily deduced that if \begin{equation} \int_{0}^{s_{j}} N(\gamma_{j}(s))ds>-\frac{2}{c} \end{equation} then there is $s^{*}\in (0,s_{j})$ such that $F(s^{*})=-\infty$ (indeed, integrating $\frac{d}{ds}(1/F)\geq N/2$ from $F(0)\leq c<0$ shows that $1/F$ reaches zero as soon as $\int_{0}^{s}N(\gamma_{j}(s'))ds'>-2/c$). By (\ref{RRR}), then, $\overline{\theta}(s^{*})=-\infty$ and $\gamma_{j}$ would not be $\overline{g}$-length minimising. Thus, a contradiction is reached if we prove that $\int_{0}^{s_{j}} N(\gamma_{j}(s))ds\rightarrow \infty$. But this follows from the completeness of the metric $\hg=N^{2}g$ from Theorem \ref{COMN2}. \end{proof} \subsubsection{The asymptotics of isolated systems.}\label{TAIS} Theorem \ref{COMN2} shows that if $N>0$ and $\partial \Sigma$ is compact then $(\Sigma; \hg=N^{2}g)$ is metrically complete. On the other hand it was proved in \cite{MR3233266}, \cite{MR3233267}, that if $\Sigma$ is diffeomorphic to $\mathbb{R}^{3}$ minus a ball and $\hg$ is complete then the space $(\Sigma;g,N)$ is asymptotically flat. Combining these two results we obtain the following: if $\Sigma$ minus a compact set $K$ is diffeomorphic to $\mathbb{R}^{3}$ minus a closed ball then the data set $(\Sigma; g,N)$ is asymptotically flat. Asymptotic flatness is thus characterised only by the asymptotic topology of $\Sigma$. This fact has physically interesting consequences.
Following physical intuition define a {\it static isolated system} as a static space-time $(\mathbb{R}\times\Sigma; -N^{2}dt^{2}+g)$, ($\partial \Sigma=\emptyset$ and $(\Sigma; g)$ metrically complete), for which there is a set $K\subset \Sigma$ such that $\Sigma\setminus K$ is diffeomorphic to $\mathbb{R}^{3}$ minus a closed ball and such that the region $\mathbb{R}\times (\Sigma\setminus K)$ is vacuum (i.e. matter lies only in $\mathbb{R}\times K$). The most obvious example of a static isolated system one can think of is that of a body such as a planet or a star. Then, using what we explained in the previous paragraph, static isolated systems are always asymptotically flat. This conclusion was reached in \cite{0264-9381-32-19-195001}, but there it was required, as part of the definition of a static isolated system, that the space-time be null geodesically complete at infinity. What we are showing here is that this condition is indeed unnecessary and that the completeness of the hypersurface $(\Sigma; g)$ is sufficient. \section{Global properties of the lapse}\label{GLOBP} We aim to prove that the lapse $N$ of any black hole data set is bounded away from zero at infinity, namely that there is $c>0$ such that for any divergent sequence $p_{n}$ we have $\liminf N(p_{n})\geq c$. \begin{Theorem}\label{BNFB} Let $(\sM;g,N)$ be a static black hole data set. Then, $N$ is bounded away from zero at infinity. \end{Theorem} The proof of this theorem will follow after some propositions that we state and prove below. \begin{Proposition}\label{PT1} Let $(\Sigma_{\delta};\overline{g})$ be a space as in Proposition \ref{PIV}, with $0<\epsilon<1/4$. Let $p$ and $q$ be two different points in $\Sigma_{\delta}$ and let $\gamma:[0,L]\rightarrow \sM_{\delta}$ be a $\overline{g}$-geodesic (parameterised with the arc-length $\overline{s}$) starting at $p$ and ending at $q$ and minimising the $\overline{g}$-length in its own isotopy class (with fixed end points).
Then, for any $0<s<t<L$ we have \begin{equation}\label{BFN} -\sqrt{50\bigg[\frac{(t-s)}{s}+\frac{(t-s)}{L-t}\bigg]}\leq \ln \bigg[\frac{N(\gamma(t))}{N(\gamma(s))}\bigg]\leq \sqrt{50\bigg[\frac{(t-s)}{s}+\frac{(t-s)}{L-t}\bigg]} \end{equation} \end{Proposition} Note that in this statement, $s$, $t-s$ and $L-t$ are, respectively, the $\overline{\sg}$-distances along $\gamma$ between the pairs of points $(p,\gamma(s))$, $(\gamma(s),\gamma(t))$ and $(\gamma(t),q)$. \begin{proof} Let $f$ and $\alpha$ be as in Proposition \ref{FELIZ}. Let $\gamma$, $s$ and $t$ be as in the hypothesis. Let $\theta(\overline{s})$ be the expansion along $\gamma$ of the congruence of geodesics emanating from $p$, where $\overline{s}$ is the arc-length. From (\ref{OOO}) we can write \begin{equation}\label{RBE2} \overline{Ric}^{\alpha/2}_{f}=\overline{Ric}+\overline{\nabla}\overline{\nabla}f-\frac{\alpha}{2}\overline{\nabla}f\overline{\nabla}f=\frac{\alpha}{2}\overline{\nabla}f\overline{\nabla}f \end{equation} where $0<\alpha$ because $0<\epsilon<1/4<-1+\sqrt{2}$. Let $\theta_{f}=\theta-f'$ where $f'=df(\gamma(\overline{s}))/d\overline{s}$. 
As shown in \cite{MR2577473}, (\ref{RBE2}) implies that, \begin{equation} \theta_{f}'\leq -\frac{1}{2/\alpha+3}\theta_{f}^{2}-\frac{\alpha}{2}(f')^{2}=-a^{2}\theta_{f}^{2}-b^{2}\bigg(\frac{N'}{N}\bigg)^{2} \end{equation} where $'=d/d\overline{s}$ and \begin{equation}\label{514} a^{2}=\frac{1}{2/\alpha+\epsilon},\quad \text{and}\quad b^{2}=\frac{(1+\epsilon)^{2}\alpha}{2} \end{equation} From the differential inequality $\theta_{f}'\leq -a^{2}\theta_{f}^{2}$ we deduce, \begin{equation}\label{INEC1} \theta_{f}(s)\leq \frac{1}{a^{2}s} \end{equation} and also we deduce \begin{equation}\label{INEC2} \theta_{f}(t)\geq -\frac{1}{a^{2}(L-t)} \end{equation} because if $\theta_{f}(t)<-\frac{1}{a^{2}(L-t)}$ then there exists $r$, with $t<r<L$, for which $\theta_{f}(r)=-\infty$, and therefore $\theta(r)=-\infty$, contradicting that $\gamma$ is length minimising within its isotopy class. Hence, we can use (\ref{INEC1}) and (\ref{INEC2}) and $\theta_{f}'\leq -b^{2}(N'/N)^{2}$ to deduce \begin{align} \bigg|\ln \frac{N(t)}{N(s)}\bigg|^{2}&=\bigg|\int_{s}^{t}\frac{N'}{N}d\overline{s}\bigg|^{2}\leq (t-s)\int_{s}^{t}\bigg(\frac{N'}{N}\bigg)^{2}d\overline{s}\\ &\leq (t-s)\frac{1}{b^{2}}(\theta_{f}(s)-\theta_{f}(t))\leq \frac{(t-s)}{a^{2}b^{2}}\bigg(\frac{1}{s}+\frac{1}{L-t}\bigg) \end{align} which gives (\ref{BFN}) if one observes that $1/a^{2}b^{2}\leq 50$, after a short computation involving (\ref{514}), the form of $\alpha$ from Proposition \ref{FELIZ}, and the fact that $\epsilon<1/4$. \end{proof} \begin{Proposition}\label{PT2} Let $(\Sigma;g,N)$ be a static black hole data set. Let $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ be two disjoint, connected, compact, boundary-less and orientable surfaces, embedded in $\sM^{\circ}$. Let $W:\mathbb{R}\rightarrow \sM^{\circ}$ be a smooth embedding, intersecting $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ only once and transversely and with $W(t)$ diverging as $t\rightarrow \pm\infty$.
Then, there is $p_{1}\in \mathcal{S}_{1}$ and $p_{2}\in \mathcal{S}_{2}$ such that $N(p_{1})=N(p_{2})$. \end{Proposition} \begin{proof} We work in a manifold $(\sM_{\delta};\overline{\sg})$ as in Proposition \ref{PIV} and with $0<\epsilon<1/4$. Assume thus that $\delta$ is small enough that $(W\cup \mathcal{S}_{1}\cup \mathcal{S}_{2})\subset \sM^{\circ}_{\delta}$. Orient $W$ in the direction of increasing $t$. Orient also $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ in such a way that the intersection numbers between $\mathcal{S}_{1}$ and $W$, and between $\mathcal{S}_{2}$ and $W$, are both equal to one. All intersection numbers below are defined with respect to these orientations. Redefine the parameter $t$ if necessary to have $W(-1)\in \mathcal{S}_{1}$ and $W(1)\in \mathcal{S}_{2}$. Then, for every natural number $m\geq 1$ let $\gamma_{m}(\overline{s})$ be a $\overline{g}$-geodesic minimising the $\overline{g}$-length among all the curves embedded in $\sM_{\delta}^{\circ}$, with end points $W(-1-m)$ and $W(1+m)$ and having non-zero intersection number with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$.\footnote{The existence of such a geodesic can be seen as follows. Let ${\mathcal{C}}$ be the family of all curves joining $W(-1-m)$ and $W(1+m)$ and having non-zero intersection number with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. As the intersection number is an isotopy-invariant, the family ${\mathcal{C}}$ is a union of isotopy classes. In each class consider a representative minimising length inside the class (recall the discussion in Section \ref{CDPL}). Let $C_{i}$ be a sequence of such representatives (asymptotically) minimising length in the family ${\mathcal{C}}$. Such a sequence has a subsequence converging to, say, $C_{\infty}$. As $C_{i}$ is isotopic to $C_{\infty}$ for all $i\geq i_{0}$ with $i_{0}$ big enough, we conclude that $C_{\infty}\in {\mathcal{C}}$ as desired.} We denote by $\overline{s}$ the $\overline{\sg}$-arc length starting from $W(-1-m)$.
The $\overline{\sg}$-length of $\gamma_{m}$ is denoted by $L_{m}$. We want to prove that there are points $p^{1}_{m}:=\gamma_{m}(\overline{s}^{1}_{m})\in \mathcal{S}_{1}$ and $p^{2}_{m}:=\gamma_{m}(\overline{s}^{2}_{m})\in \mathcal{S}_{2}$, (for some $\overline{s}^{1}_{m}$ and $\overline{s}^{2}_{m}$), with $|\overline{s}^{2}_{m}-\overline{s}^{1}_{m}|$ uniformly bounded above. Once this is done the proof is finished as follows. As the initial and final points $W(-1-m)$ and $W(1+m)$ get further and further away from $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, we have $\overline{s}^{1}_{m}\rightarrow \infty$, $\overline{s}^{2}_{m}\rightarrow \infty$, $L_{m}-\overline{s}^{2}_{m}\rightarrow \infty$, and $L_{m}-\overline{s}^{1}_{m}\rightarrow \infty$. Therefore we can rely on Proposition \ref{PT1}, applied with $\gamma=\gamma_{m}$, $\gamma(s)=p^{1}_{m}$, and $\gamma(t)=p^{2}_{m}$, to conclude that \begin{equation} \lim_{m\rightarrow \infty} |N(p^{1}_{m})-N(p^{2}_{m})|= 0 \end{equation} Hence, if $p_{1}$ is an accumulation point of $\{p^{1}_{m}\}$ and $p_{2}$ an accumulation point of $\{p^{2}_{m}\}$, we will have $N(p_{1})=N(p_{2})$ as desired. Consider now the set of embedded curves $X:[-1,1]\rightarrow \sM^{\circ}$, starting at $\mathcal{S}_{1}$ transversely, ending at $\mathcal{S}_{2}$ transversely, and not intersecting $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ except of course at the initial and final points. There are at most four classes of such curves $X$, distinguished according to the directions in which the vectors $X'(-1)$ and $X'(1)$ point. For each non-empty class fix a representative, so there are at most four of them, and let $B$ be a common upper bound for their lengths. Without loss of generality assume that each $\gamma_{m}$, as defined earlier, intersects $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ transversely\footnote{Otherwise use suitable small deformations.}.
Let also $\{\gamma_{m}(\overline{s}^{1}_{1m}),\ldots,\gamma_{m}(\overline{s}^{1}_{l_{1}m})\}$ and $\{\gamma_{m}(\overline{s}^{2}_{1m}),\ldots,\gamma_{m}(\overline{s}^{2}_{l_{2}m})\}$ be the points of intersection of $\gamma_{m}$ with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ respectively. For each $m$ choose any two consecutive parameters $\overline{s}^{1}_{i_{1}m}$ and $\overline{s}^{2}_{i_{2}m}$, namely such that the open interval \begin{equation} (\min\{\overline{s}^{1}_{i_{1}m},\overline{s}^{2}_{i_{2}m}\},\max\{\overline{s}^{1}_{i_{1}m},\overline{s}^{2}_{i_{2}m}\}) \end{equation} does not contain any of the elements $\{\overline{s}^{1}_{1m},\ldots,\overline{s}^{1}_{l_{1}m};\overline{s}^{2}_{1m},\ldots,\overline{s}^{2}_{l_{2}m}\}$. Without loss of generality we assume that $\overline{s}^{1}_{i_{1}m}<\overline{s}^{2}_{i_{2}m}$ for all $m$. To simplify notation let $\overline{s}^{1}_{m}:=\overline{s}^{1}_{i_{1}m}$ and $\overline{s}^{2}_{m}:=\overline{s}^{2}_{i_{2}m}$. The curves $X_{m}(\overline{s}):=\gamma_{m}(\overline{s})$, $\overline{s}\in [\overline{s}^{1}_{m},\overline{s}^{2}_{m}]$, can be thought of (after reparameterisation) as belonging to one of the four classes of curves $X$ described above. For every $m$ let then $\hat{X}_{m}$ be the representative, chosen earlier, of the class to which $X_{m}$ belongs. We now compare the length of $\gamma_{m}$ with the length of a competitor curve, denoted by $\hat{\gamma}_{m}$, that is constructed out of $\hat{X}_{m}$ and $\gamma_{m}$ itself. The construction of $\hat{\gamma}_{m}$ is best described in words. Starting from $\gamma_{m}(0)$ we move forward through $\gamma_{m}$, reach $\mathcal{S}_{1}$ at $\gamma_{m}(\overline{s}^{1}_{m})$, and cross it slightly. From there we move through a curve very close to $\mathcal{S}_{1}$ and of length less than $2\diam(\mathcal{S}_{1})$ until reaching a point in $\hat{X}_{m}$. Then we move through $\hat{X}_{m}$ until a point right before $\mathcal{S}_{2}$.
Finally we move through a curve very close to $\mathcal{S}_{2}$ and of length less than $2\diam(\mathcal{S}_{2})$ until reaching a point in $\gamma_{m}$ right before $\gamma_{m}(\overline{s}^{2}_{m})$, from which we move through $\gamma_{m}$ until reaching $\gamma_{m}(L_{m})$. Clearly $\hat{\gamma}_{m}$ has the same intersection numbers with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ as $\gamma_{m}$ has, hence non-zero. Thus, by the definition of $\gamma_{m}$ we have, \begin{equation} \length(\gamma_{m})\leq \length(\hat{\gamma}_{m}) \end{equation} But we have \begin{equation} \length(\gamma_{m})=\overline{s}^{1}_{m}+(\overline{s}^{2}_{m}-\overline{s}^{1}_{m})+(L_{m}-\overline{s}^{2}_{m}) \end{equation} and (if the construction of $\hat{\gamma}_{m}$ is fine enough) \begin{equation} \length(\hat{\gamma}_{m})\leq \overline{s}^{1}_{m}+2\diam(\mathcal{S}_{1})+\length(\hat{X}_{m})+2\diam(\mathcal{S}_{2})+(L_{m}-\overline{s}^{2}_{m}) \end{equation} Hence, as $\length(\hat{X}_{m})\leq B$ we conclude that \begin{equation} \overline{s}^{2}_{m}-\overline{s}^{1}_{m}\leq B+2\diam(\mathcal{S}_{1})+2\diam(\mathcal{S}_{2}) \end{equation} That is, $|\overline{s}^{2}_{m}-\overline{s}^{1}_{m}|$ is uniformly bounded, as desired. \end{proof} Let us introduce the setup required for the next Proposition \ref{PT3} and for the proof of Theorem \ref{BNFB}. Although it was proved earlier that static black hole data sets have only one end, below we will work as if the manifold could have more than one end. The reason for this is that the framework below is valid in higher dimensions, and this could help to investigate whether higher dimensional vacuum static black holes also have only one end. The proof of this fact that we gave earlier holds only in dimension three.
Choose $\sM_{i},i=1,\ldots, i_{\sM}\geq 1$ a set of non-compact and {\it connected} regions of $\sM^{\circ}$, with compact (and smooth) boundaries, each containing only one end, and the union covering $\sM$ except for a {\it connected} set of compact closure, (i.e. $\sM\setminus (\cup \sM_{i}^{\circ})$ is compact and connected). For each end $\sM_{i}$ we consider an end cut $\{\mathcal{S}_{ijk}, j\geq 0, k=1,\ldots,k_{ij}\}$. The surfaces $\mathcal{S}_{ijk}$ are considered only to serve as a `reference'. Their geometry plays no role. The condition that the union of the ends $\Sigma_{i}$ covers $\Sigma$ except for a connected set of compact closure will be technically relevant in the proof below. It ensures that given any two $\mathcal{S}_{ijk}$ and $\mathcal{S}_{i'j'k'}$ with either: $i\neq i'$ ($j,k,j',k'$ any), or $i=i'$, $j=j'$ ($k,k'$ any), one can always find an immersed curve $W:\mathbb{R}\rightarrow \Sigma$ intersecting $\mathcal{S}_{ijk}$ and $\mathcal{S}_{i'j'k'}$ only once and such that $W(t)$ diverges as $t\rightarrow\pm \infty$. This fact follows directly from the definition of end cut. \begin{Proposition}\label{PT3} (setup above) Let $(\sM;g,N)$ be a static black hole data set. Then, \begin{enumerate} \item If $i_{\sM}>1$, then for any $\mathcal{S}_{ijk}$ and $\mathcal{S}_{i'j'k'}$, with $i\neq i'$, there are points $p\in \mathcal{S}_{ijk}$ and $p'\in \mathcal{S}_{i'j'k'}$ such that $N(p)=N(p')$. \item If $i_{\sM}=1$, then for every $j$ with $k_{1j}>1$ and $1\leq k\neq k'\leq k_{1j}$, there are points $p\in \mathcal{S}_{1jk}$ and $p'\in \mathcal{S}_{1jk'}$ such that $N(p)=N(p')$. \end{enumerate} \end{Proposition} \begin{proof} If $i_{\sM}>1$ then we can easily construct an embedding $W:\mathbb{R}\rightarrow \sM^{\circ}$ intersecting the manifolds $\mathcal{S}_{ijk}$ and $\mathcal{S}_{i'j'k'}$ only once and with $W(t)\rightarrow \infty$ as $t\rightarrow \pm \infty$. 
The existence of $p\in \mathcal{S}_{ijk}$ and $p'\in \mathcal{S}_{i'j'k'}$ for which $N(p)=N(p')$ then follows from Proposition \ref{PT2}. The case $i_{\sM}=1$ is treated in exactly the same way. \end{proof} We are ready to prove Theorem \ref{BNFB}. \begin{proof}[\it Proof of Theorem \ref{BNFB}.] We use the same setup as in Proposition \ref{PT3}. Also we let $\mathcal{S}_{ij}:=\cup_{k=1}^{k=k_{ij}}\mathcal{S}_{ijk}$ and given $j'>j$, $\mathcal{U}_{i;jj'}$ denotes the closed region enclosed by $\mathcal{S}_{ij}$ and $\mathcal{S}_{ij'}$. Also, given a closed set $C$, we let $\min\{N;C\}:=\min\{N(x):x\in C\}$ and similarly for $\max\{N;C\}$. We want to show that $N$ is bounded from below away from zero at every one of the ends $\sM_{i}$. We distinguish two cases: $i_{\sM}>1$ and $i_{\sM}=1$. {\it Case $i_{\sM}>1$}. Without loss of generality we prove this only for $\sM_{1}$. Let us fix a surface $\mathcal{S}_{2j_{0}k_{0}}$ in $\sM_{2}$. By Proposition \ref{PT3} we know that at every $\mathcal{S}_{1jk}$ we have \begin{equation}\label{BEFF} 0<\min\{N;\mathcal{S}_{2j_{0}k_{0}}\}\leq \max\{N;\mathcal{S}_{1jk}\} \end{equation} On the other hand the Harnack estimate (\ref{EQHARN1}) in Proposition \ref{MAXMINU11} gives us \begin{equation}\label{GO} \max\{N;\mathcal{S}_{1jk}\}\leq \eta' \min\{N;\mathcal{S}_{1jk}\} \end{equation} where $\eta'$ is independent of $j$ and $k$. Combined with (\ref{BEFF}) this gives us the bound \begin{equation}\label{FDDA} 0<\eta''<\min\{N;\mathcal{S}_{1jk}\} \end{equation} where $\eta''$ is independent of $j$ and $k$. Now, recall that the manifolds $\mathcal{U}_{1;j,j+1},j=0,1,\ldots$ cover $\sM_{1}$ up to a set of compact closure and that for each $j$, $\partial \mathcal{U}_{1;j,j+1}$ is the union of the surfaces $\mathcal{S}_{1jk};k=1,\ldots,k_{1j}$ and $\mathcal{S}_{1,j+1,k};k=1,\ldots,k_{1,j+1}$.
Therefore by (\ref{FDDA}) and the maximum principle we deduce, \begin{equation} 0< \eta''< \min\{N;\partial \mathcal{U}_{1;j,j+1}\}\leq \min\{N; \mathcal{U}_{1;j,j+1}\} \end{equation} from which the lower bound for $N$ away from zero over $\sM_{1}$ follows. {\it Case $i_{\sM}=1$}. We observe first that, since in this case $\sM_{1}$ is the only end and $N=0$ on $\partial \sM$, $N$ cannot go uniformly to zero at infinity (this would violate the maximum principle). We prove now that, if there is a diverging sequence $p_{l}$ such that $N(p_{l})\rightarrow 0$, then $N$ must go to zero uniformly at infinity. The proof will then be finished. As $i_{\sM}=1$ we will remove the index $i=1$ everywhere from now on. For every $l$ let $j_{l}$ be such that $p_{l}\in \mathcal{U}_{j_{l},j_{l}+1}$ and let $\mathcal{U}^{c}_{j_{l},j_{l}+1}$ be the connected component of $\mathcal{U}_{j_{l},j_{l}+1}$ containing $p_{l}$. By the maximum principle we have \begin{equation} \min\{N;\partial \mathcal{U}^{c}_{j_{l},j_{l}+1}\}\leq \min\{N;\mathcal{U}^{c}_{j_{l},j_{l}+1}\}\leq N(p_{l}) \end{equation} Therefore we can extract a sequence of connected components of $\partial \mathcal{U}^{c}_{j_{l},j_{l}+1}$, denoted by $\mathcal{S}_{j^{l}k_{l}}$ ($j^{l}$ is either $j_{l}$ or $j_{l}+1$), such that \begin{equation} \min\{N; \mathcal{S}_{j^{l}k_{l}}\}\rightarrow 0 \end{equation} From this and (\ref{GO}) we obtain \begin{equation}\label{GFGF} \max\{N;\mathcal{S}_{j^{l}k_{l}}\}\rightarrow 0 \end{equation} Then, by Proposition \ref{PT3} we have \begin{equation}\label{POPP} \min\{N;\mathcal{S}_{j^{l}k}\}\leq \max\{N;\mathcal{S}_{j^{l}k_{l}}\} \end{equation} (note the difference in the subscripts $k$ and $k_{l}$) for all $k= 1,\ldots,k_{j^{l}}$ (of course it may be that $k_{j^{l}}=1$).
Using (\ref{GO}) in the left hand side of (\ref{POPP}) and using (\ref{GFGF}) we get \begin{equation}\label{SSAA} \max\{N;\mathcal{S}_{j^{l}}\}\rightarrow 0 \end{equation} By the maximum principle again we deduce for any $l'>l$ the inequality \begin{equation} \max\{N;\mathcal{U}_{j^{l}j^{l'}}\}\leq \max\{\max\{N;\mathcal{S}_{j^{l}}\}; \max\{N;\mathcal{S}_{j^{l'}}\}\} \end{equation} Taking the limit $l'\rightarrow \infty$ we deduce that the supremum of $N$ over the unbounded connected component of $\sM\setminus \mathcal{S}_{j^{l}}$ is less than or equal to the maximum of $N$ over $\mathcal{S}_{j^{l}}$. Hence $N$ must tend uniformly to zero at infinity because of (\ref{SSAA}). \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The first step in what is perhaps the most popular method for finding the unknown generating polynomial of a sequence is to calculate the \dt of the sequence by subtracting successive terms of the sequence. The next row in the \dt is formed in like manner, by subtracting successive terms of the row above. If the $d^{th}$ row of the \dt, as defined below in \Cref{def:ddt}, stays constant for a sufficient number of terms, then it is known that the generating polynomial has degree $d$. Once it is established that the sequence is generated by a polynomial of degree $d$, various methods may be used to obtain a formula for the generating polynomial. Such methods include generating and solving linear equations for the coefficients of each power in the polynomial, or using Newton's Divided Difference Formula. We present what we believe is an easy and newly described approach to formulating the unknown polynomial using a \wnt and the \md of the \dt of the sequence, given, or assuming, that the sequence is generated with input data consisting of integers starting at either 0 or 1. We then show how the method may be generalized so that it can be used on a sequence whose input data has an arbitrary starting number and an arbitrary constant differential. \section{Outline} \label{sec:outl} The remainder of this paper is organized as follows. Following this outline, the next section gives the definitions, notation, and the specific formulas that are used in the paper. In addition, references to alternate versions of the \wnts are presented. Next, three examples are given, with only the practical calculations shown. We feel that this is desirable in order to show the ease of using the method.
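The difference-table step described above is easy to mechanise. The following sketch is our own illustrative code, not part of this paper; the function names are ours:

```python
def difference_triangle(seq):
    """Return the rows of the difference triangle of seq.

    Row 0 is the sequence itself; each later row holds the
    differences of successive terms of the row above it.
    """
    rows = [list(seq)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

def polynomial_degree(seq):
    """Index of the first constant row, i.e. the apparent degree."""
    for d, row in enumerate(difference_triangle(seq)):
        if len(set(row)) == 1:
            return d
    return None

# The sequence n^3 + 2n for n = 0..6 has a constant third row:
values = [n**3 + 2 * n for n in range(7)]
print(polynomial_degree(values))  # 3
```

Note that, as the paper cautions, the row must stay constant for a sufficient number of terms before the degree can be trusted.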
The three examples show the calculations for a \ps generated with input data consisting of integers starting at 0, then for a sequence generated with input data consisting of integers starting at 1, and finally for a sequence generated with input data starting at 3.3 with an increment of 0.1. After that, in the next section we provide the mathematical basis for the method, and in the following two sections we first present the full rendition of one of the examples, and then a partial rendition of another example. We feel that this will make the mathematics behind the method more apparent. In the final section, we present our closing remarks. \section{Definitions, Notation, and Existing Terminology} \label{sec:term} \begin{defn} \label{def:dmwnt} \textbf{\mwnt} or \boldmath \mwntm \unboldmath -- The triangle formed from the numbers, $n$ and $k$, in \oeis \cite{bboeis} \mwnta \cite{bbmwnta} as shown in \cite[Example]{bbmwnta} and in \Cref{tbl:mwntt}. One formula for the numbers in the \mwnt is $(k-1)! \cdot \snskm$, where $S(n,k)$ is the \snsk \cite{bbqgsn} \cite{bbcmsn}. A triangle of these numbers is given in \oeis \snska \cite{bbsnsk}. \vspace{0.1em} The specific formula for \mwntm used in this paper, equivalent to the one given above, is: \begin{equation} \label{eq:mwnt} \mwntm = \frac{1}{k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^n \ ; \ n \geq 1, \ k \geq 1 \end{equation} \vspace{0.1em} The mirror image of \mwnta was recently referred to as the \wnt by Vandervelde \cite{bbvdv}, and we defer to that reference by using the term ``Mirrored'' in our definition. The referenced triangle may be found on the \oeis as \wnta \cite{bbmiwnt}. However, it should be noted that in \oeis \mwnta, \wnta is referred to as ``The mirror image of the Worpitzky triangle'' \cite[Comments]{bbmwnta}.
\vspace{0.1em} In addition, what we refer to as the \mwnt (\mwnta) appears elsewhere in the \oeis, such as in OEIS sequence A005460 \cite[Links]{bbrsa} (\cite{bbrsanps} provides a direct link). A005460 is described \cite[Comments]{bbrsa} as: ``third external diagonal of Worpitzky triangle \mwnta''.\footnote{Although we had figured out that the first diagonal in what we would eventually call the \mwnt, \mwntm, is $(n-1)!$, and that the second diagonal is $n!/2$, we were perplexed about the third diagonal, which is 1, 7, 50, 390, 3360, etc. A search on the OEIS turned up A005460, which referenced \mwnta, and all of the succeeding diagonals that we checked matched. Since these numbers were readily available on the \oeis in look-up table form, we decided to write this paper.} Obviously, the use of the term \wnt (or similar) varies. \end{defn} \begin{table}[!htbp] \centering \caption{The \mwnt, \oeis \mwnta, with zeros for $n < k$} \label{tbl:mwntt} \begin{tabular}{l|rrrrrrrrr} \backslashbox{n}{\vspace{-1.5em} k} & \ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 1 & 3 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 7 & 12 & 6 & 0 & 0 & 0 & 0 & 0 \\ 5 & 1 & 15 & 50 & 60 & 24 & 0 & 0 & 0 & 0 \\ 6 & 1 & 31 & 180 & 390 & 360 & 120 & 0 & 0 & 0 \\ 7 & 1 & 63 & 602 & 2100 & 3360 & 2520 & 720 & 0 & 0 \\ 8 & 1 & 127 & 1932 & 10206 & 25200 & 31920 & 20160 & 5040 & 0 \\ 9 & 1 & 255 & 6050 & 46620 & 166824 & 317520 & 332640 & 181440 & 40320 \\ \end{tabular} \end{table} \begin{defn} \label{def:dawnt} \textbf{\awnt} or \boldmath \awntm \unboldmath -- The triangle formed from the numbers, $n$ and $k$, in \oeis \awnta \cite{bbawnta} as shown in \cite[Example]{bbawnta} and in \Cref{tbl:awntt}. Justification for referring to this triangle as a \wnt is perhaps provided by \gaq \cite[Equation 11.3]{bbqg}, who provide an equation for \wns in general.
A specific case is mentioned \cite{bbqgsc} which results in the numbers in the \awnt, with a formula given as $k! \cdot \snskm$. \vspace{0.1em} The specific formula for \awntm used in this paper, equivalent to the one given above, is: \begin{equation} \label{eq:awnt} \awntm = \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^n \ ; \ n \geq 1, \ k \geq 1 \end{equation} \end{defn} \begin{table}[!htbp] \centering \caption{The \awnt, \oeis \awnta, with zeros for $n < k$} \label{tbl:awntt} \begin{tabular}{l|rrrrrrrrr} \backslashbox{n}{\vspace{-1.5em} k} & \ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 1 & 6 & 6 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 14 & 36 & 24 & 0 & 0 & 0 & 0 & 0 \\ 5 & 1 & 30 & 150 & 240 & 120 & 0 & 0 & 0 & 0 \\ 6 & 1 & 62 & 540 & 1560 & 1800 & 720 & 0 & 0 & 0 \\ 7 & 1 & 126 & 1806 & 8400 & 16800 & 15120 & 5040 & 0 & 0 \\ 8 & 1 & 254 & 5796 & 40824 & 126000 & 191520 & 141120 & 40320 & 0 \\ 9 & 1 & 510 & 18150 & 186480 & 834120 & 1905120 & 2328480 & 1451520 & 362880 \\ \end{tabular} \end{table} \begin{defn} \label{def:ps} \textbf{\psc} -- A sequence, $a_i, a_{i+1},a_{i+2}$, etc., generated by a polynomial of finite degree $d$ and written in long form as $c_d x^d + c_{d-1} x^{d-1} + c_{d-2} x^{d-2} + \cdots + c_1 x + c_0$. In this paper the more compact form, $\sum_{j=0}^d c_j x^j$, will primarily be used. The $x$ values may be integers or real numbers with an arbitrary starting value and an arbitrary constant differential. \end{defn} \begin{defn} \label{def:ddt} \textbf{The \dtc and the \mdc} -- The difference triangle of a sequence is the triangle formed by subtracting the preceding element of a sequence from the current element, and continuing this process for successive rows. An example is shown in \Cref{tbl:gdt}. Note that the row containing the sequence values, $a_0,a_1,a_2$, etc., is row number 0, and the succeeding rows are numbered $1,2,3,$ etc.
The left-most diagonal is shown in bold and is known as the \md. \end{defn} \begin{table}[!htbp] \centering \caption{The General Difference Table for a Sequence} \label{tbl:gdt} \begin{tabular}{cccccccc} $\boldsymbol{a_0}$ & & $a_1$ & & $a_2$ & & \hspace{0.5em} $a_3$ \\ & \hspace{1em} $\boldsymbol{a_1-a_0}$ & & $a_2-a_1$ & & \hspace{-0.5em} $a_3-a_2$ \\ & & $\boldsymbol{a_2-2a_1+a_0}$ & & $a_3-2a_2+a_1$ \\ & & & $\boldsymbol{a_3-3a_2+3a_1-a_0}$ \\ \end{tabular} \end{table} \begin{defn} \label{def:dztoz} \boldmath $0^0 = 1$ \unboldmath -- More specifically, $x^0 = 1$ for all $x$, per Graham, Knuth, and Patashnik (1994) \cite{bbcm}. This is common when using binomials, and it allows the $c_0x^0$ term to be $c_0$ when $x=0$ (as is necessary) in the compact formula for the \ps given in \Cref{def:ps}. \end{defn} \section{Examples of Using the Method} \label{sec:ex} The following examples provide a brief description of how to use the method. \begin{examp} \label{ex:awnt} We first consider the sequence generated by the polynomial: \[4x^6 + 5x^5 + 6x^4 + 7x^3 + 8x^2 + 9x + 10; \ x \in 0..7\] The difference table for this sequence is given in \Cref{tbl:awntex}, with the \md in bold. The sequence values are in row 0 in accordance with \Cref{def:ddt}. In this example it is either given or assumed that the integers starting with 0 were used to generate the sequence. Therefore, we will use the \awnt and the \md values to easily solve for the ``unknown'' coefficients.
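Constructing the \dt and reading off its \md is purely mechanical; the following Python sketch (the function and variable names are ours, not from any library) reproduces the bold diagonal of the difference table for this example:

```python
def difference_table(seq):
    """Row 0 holds the sequence; each later row holds successive differences."""
    rows = [list(seq)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

# Example 1: 4x^6 + 5x^5 + 6x^4 + 7x^3 + 8x^2 + 9x + 10 for x = 0..7
coeffs = [10, 9, 8, 7, 6, 5, 4]            # c_0 .. c_6
seq = [sum(c * x**j for j, c in enumerate(coeffs)) for x in range(8)]

table = difference_table(seq)
main_diagonal = [row[0] for row in table]
print(main_diagonal[:7])    # [10, 39, 540, 3168, 7584, 7800, 2880]
```

Row 6 of the computed table is the constant row $[2880, 2880]$, which signals that the degree is $d=6$.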
\begin{table}[!htbp] \centering \caption{The Difference Table for $4x^6 + 5x^5 + 6x^4 + 7x^3 + 8x^2 + 9x + 10; \ x \in 0..7$} \label{tbl:awntex} \begin{tabular}{cccccccccccccccl} 0 & & \moem 1 & & \moem 2 & & \moem 3 & & \moem 4 & & \moem 5 & & \moem 6 & & \moem 7 & $\leftarrow x$ \\ \textbf{10} & & \moem 49 & & \moem 628 & & \moem 4915 & & \moem 23662 & & \moem 83005 & & \moem 235144 & & \moem 571903 \\ & \moem \textbf{39} & & \moem 579 & & \moem 4287 & & \moem 18747 & & \moem 59343 & & \moem 152139 & & \moem 336759 \\ & & \moem \textbf{540} & & \moem 3708 & & \moem 14460 & & \moem 40596 & & \moem 92796 & & \moem 184620 \\ & & & \moem \textbf{3168} & & \moem 10752 & & \moem 26136 & & \moem 52200 & & \moem 91824 \\ & & & & \moem \textbf{7584} & & \moem 15384 & & \moem 26064 & & \moem 39624 \\ & & & & & \moem \textbf{7800} & & \moem 10680 & & \moem 13560 \\ & & & & & & \moem \textbf{2880} & & \moem 2880 \\ \end{tabular} \end{table} \vspace{0.1em} Since the integers that generate the sequence start at 0, we know that $c_0=10$, as 10 is the initial value in row 0. Row 6 of the \dt is constant, which, of course, matches the degree, $d$, of the polynomial. Therefore, we start at $n,k=d=6$ in \Cref{tbl:awntt}, and use column $k$ to calculate $c_{n=k}$ of the polynomial using the entries in the table as multipliers for each $c_n, n \in 1..6$. The $c_n$ values are multiplied by the values in row $n$ of \Cref{tbl:awntt} for that column. As can be seen in the calculations below, this process turns the rows of the table into columns, and the columns of the table into rows, when presented in the manner shown. We will move backwards along the columns, $k$, in turn calculating the $c_{n=k}$ values as we go. The reasons for these steps and the mathematical relationships between the coefficients and the multiplier values in the table will be shown in \Cref{ss:mdawnt}. The multipliers (elements of the table) appear in parentheses below, with multipliers of 0 not shown.
Thus, we have: \[ \begin{array}{ccccccccccccccccc} 2880 & = & c_6(720) & & & & & & & & & & & \Rightarrow & c_6 & = & 4 \\ 7800 & = & 4(1800) & + & c_5(120) & & & & & & & & & \Rightarrow & c_5 & = & 5 \\ 7584 & = & 4(1560) & + & 5(240) & + & c_4(24) & & & & & & & \Rightarrow & c_4 & = & 6 \\ 3168 & = & 4(540) & + & 5(150) & + & 6(36) & + & c_3(6) & & & & & \Rightarrow & c_3 & = & 7 \\ 540 & = & 4(62) & + & 5(30) & + & 6(14) & + & 7(6) & + & c_2(2) & & & \Rightarrow & c_2 & = & 8 \\ 39 & = & 4(1) & + & 5(1) & + & 6(1) & + & 7(1) & + & 8(1) & + & c_1(1) & \Rightarrow & c_1 & = & 9 \\ \end{array} \] Since we already know that $c_0=10$, we have the complete solution for \Cref{ex:awnt}. \end{examp} \begin{examp} \label{ex:mwnt} In this example, we show how to directly calculate the coefficients of a \ps given that the integers starting with 1 (instead of 0) are used to generate the sequence. We will use the values in the \mwnt (instead of the \awnt) and in the \md of the \dt. \vspace{0.1em} We could have repeated \Cref{ex:awnt} using the next diagonal (adjacent to the \md) of \Cref{tbl:awntex}, but we elect to use a different sequence to add more variety to the examples. We consider the sequence generated by the polynomial: \[2x^6 + 3x^5 + 5x^4 + 7x^3 + 11x^2 + 13x + 17; \ x \in 1..8\] The difference table for this sequence is given in \Cref{tbl:wntex}, with the \md in bold. Again, the sequence values are in row 0 in accordance with \Cref{def:ddt}. 
\begin{table}[!htbp] \centering \caption{The Difference Table for $2x^6 + 3x^5 + 5x^4 + 7x^3 + 11x^2 + 13x + 17; \ x \in 1..8$} \label{tbl:wntex} \begin{tabular}{cccccccccccccccl} \moem 1 & & \moem 2 & & \moem 3 & & \moem 4 & & \moem 5 & & \moem 6 & & \moem 7 & & \moem 8 & $\leftarrow x$ \\ \moem \textbf{58} & & \moem 447 & & \moem 2936 & & \moem 13237 & & \moem 44982 & & \moem 125123 & & \moem 300772 & & \moem 647481 \\ & \moem \textbf{389} & & \moem 2489 & & \moem 10301 & & \moem 31745 & & \moem 80141 & & \moem 175649 & & \moem 346709 \\ & & \moem \textbf{2100} & & \moem 7812 & & \moem 21444 & & \moem 48396 & & \moem 95508 & & \moem 171060 \\ & & & \moem \textbf{5712} & & \moem 13632 & & \moem 26952 & & \moem 47112 & & \moem 75552 \\ & & & & \moem \textbf{7920} & & \moem 13320 & & \moem 20160 & & \moem 28440 \\ & & & & & \moem \textbf{5400} & & \moem 6840 & & \moem 8280 \\ & & & & & & \moem \textbf{1440} & & \moem 1440 \\ \end{tabular} \end{table} \vspace{0.1em} Row 6 of the \dt is constant, again matching the degree, $d$, of the polynomial. We start at $n,k=d+1=7$ in \Cref{tbl:mwntt}, and use column $k$ to calculate $c_{n-1=k-1}$ of the polynomial using the entries in the table as multipliers for each $c_{n-1}, n \in 1..7$. The $c_{n-1}$ values are multiplied by the values in row $n$ of \Cref{tbl:mwntt} for that column. Again, we will move backwards along the columns, calculating the $c_{n-1=k-1}$ values as we go. The reasons for these steps and the mathematical relationships between the coefficients and the multiplier values in the table will be shown in \Cref{ss:mdmwnt}. The multipliers (elements of the table) appear in parentheses below, with multipliers of 0 not shown.
Thus, we have: \[ \arraycolsep=3.0pt \begin{array}{ccccccccccccccccccc} 1440 & = & c_6(720) & & & & & & & & & & & & & \Rightarrow & c_6 & = & 2 \\ 5400 & = & 2(2520) & + & c_5(120) & & & & & & & & & & & \Rightarrow & c_5 & = & 3 \\ 7920 & = & 2(3360) & + & 3(360) & + & c_4(24) & & & & & & & & & \Rightarrow & c_4 & = & 5 \\ 5712 & = & 2(2100) & + & 3(390) & + & 5(60) & + & c_3(6) & & & & & & & \Rightarrow & c_3 & = & 7 \\ 2100 & = & 2(602) & + & 3(180) & + & 5(50) & + & 7(12) & + & c_2(2) & & & & & \Rightarrow & c_2 & = & 11 \\ 389 & = & 2(63) & + & 3(31) & + & 5(15) & + & 7(7) & + & 11(3) & + & c_1(1) & & & \Rightarrow & c_1 & = & 13 \\ 58 & = & 2(1) & + & 3(1) & + & 5(1) & + & 7(1) & + & 11(1) & + & 13(1) & + & c_0(1) & \Rightarrow & c_0 & = & 17 \\ \end{array} \] \end{examp} \begin{examp} \label{ex:awntni} In this example, we show how to extend the method to \pspl generated with integers not starting at either 0 or 1, or with input data with non-unity differentials. Since it is probably easiest to start with an integer index of 0 rather than 1 for the extension of the method, we will assign a function relating the input data to the integers $0,1,2$, etc., and we will use the values in the \awnt as in \Cref{ex:awnt}. We consider the sequence generated by the polynomial: \[3x^5 + 1x^4 + 4x^3 + 1x^2 + 5x + 9; \ x \in 3.3,3.4..3.9\] The difference table for this sequence and input data is given in \Cref{tbl:awntexni}, with the \md in bold. Again, the sequence values are in row 0 in accordance with \Cref{def:ddt}. This row appears directly below the integers, starting at 0, that we have calculated and assigned to the input data entries using the equation: \begin{equation} \label{eq:gx} g(x)=10.0(x-3.3) \end{equation} \noindent This allows us to use the method using the \awnt. 
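The remapping in \Cref{eq:gx} and the resulting \dt can be checked numerically; a minimal Python sketch (the names are ours), with the diagonal rounded to five decimals to absorb floating-point error:

```python
def difference_table(seq):
    """Row 0 holds the sequence; each later row holds successive differences."""
    rows = [list(seq)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

# Example 3: 3x^5 + x^4 + 4x^3 + x^2 + 5x + 9 for x = 3.3, 3.4, ..., 3.9
coeffs = [9, 5, 1, 4, 1, 3]                 # c_0 .. c_5
xs = [3.3 + 0.1 * i for i in range(7)]
g = [10.0 * (x - 3.3) for x in xs]          # maps the inputs to 0, 1, ..., 6
seq = [sum(c * x**j for j, c in enumerate(coeffs)) for x in xs]

diagonal = [row[0] for row in difference_table(seq)]
print([round(v, 5) for v in diagonal[:6]])
# [1472.79189, 218.68043, 25.816, 2.2497, 0.1284, 0.0036]
```

The printed values match the bold diagonal of the table below, and the constant fifth difference $0.0036 = 3 \cdot 5! \cdot 0.1^5$ reflects the leading coefficient and the 0.1 increment.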
\vspace{0.1em} \begin{table}[!htbp] \centering \caption{The Difference Table for $3x^5 + 1x^4 + 4x^3 + 1x^2 + 5x + 9; \ x \in 3.3,3.4..3.9$} \small \label{tbl:awntexni} \begin{tabular}{cccccccccccccl} 3.3 & & \mxem 3.4 & & \mxem 3.5 & & \mxem 3.6 & & \mxem 3.7 & & \mxem 3.8 & & \mxem 3.9 & $\leftarrow x$ \\ 0 & & \mxem 1 & & \mxem 2 & & \mxem 3 & & \mxem 4 & & \mxem 5 & & \mxem 6 & $\leftarrow g(x)$ \\ \textbf{1472.79189} & & \mxem 1691.47232 & & \mxem 1935.96875 & & \mxem 2208.53088 & & \mxem 2511.53681 & & \mxem 2847.49664 & & \mxem 3219.05607 \\ & \mxem \textbf{218.68043} & & \mxem 244.49643 & & \mxem 272.56213 & & \mxem 303.00593 & & \mxem 335.95983 & & \mxem 371.55943 \\ & & \mxem \textbf{25.816} & & \mxem 28.0657 & & \mxem 30.4438 & & \mxem 32.9539 & & \mxem 35.5996 \\ & & & \mxem \textbf{2.2497} & & \mxem 2.3781 & & \mxem 2.5101 & & \mxem 2.6457 \\ & & & & \mxem \textbf{0.1284} & & \mxem 0.132 & & \mxem 0.1356 \\ & & & & & \mxem \textbf{0.0036} & & \mxem 0.0036 \\ \end{tabular} \end{table} \vspace{0.1em} Row 5 of the \dt is constant, matching the degree, $d$, of the polynomial as expected. We start at $n,k=d=5$ in \Cref{tbl:awntt}, and use column $k$ to calculate $c_{n=k}$ of the polynomial using the entries in the table as multipliers for each $c_n, n \in 1..5$. By inspecting the first element in row 0 of \Cref{tbl:awntexni}, we see that $c_0$ is 1472.79189. 
Furthermore, we have: \vspace{0.2em} \footnotesize \[ \arraycolsep=3.0pt \begin{array}{ccccccccccccccc} 0.0036 & = & c_5(120) & & & & & & & & & \Rightarrow & c_5 & = & 0.00003 \\ 0.1284 & = & 0.00003(240) & + & c_4(24) & & & & & & & \Rightarrow & c_4 & = & 0.00505 \\ 2.2497 & = & 0.00003(150) & + & 0.00505(36) & + & c_3(6) & & & & & \Rightarrow & c_3 & = & 0.3439 \\ 25.816 & = & 0.00003(30) & + & 0.00505(14) & + & 0.3439(6) & + & c_2(2) & & & \Rightarrow & c_2 & = & 11.8405 \\ 218.68043 & = & 0.00003(1) & + & 0.00505(1) & + & 0.3439(1) & + & 11.8405(1) & + & c_1(1) & \Rightarrow & c_1 & = & 206.49095 \\ \end{array} \] \normalsize \vspace{0.1em} The value of the sequence for other input values of $x$ may be calculated using these coefficients with an input value of $g(x)$ as given in \Cref{eq:gx}. Alternatively, we could calculate the coefficients for use with $x$ directly, as opposed to $g(x)$, by symbolic evaluation of: \[0.00003(g(x))^5+0.00505(g(x))^4+0.3439(g(x))^3+11.8405(g(x))^2+206.49095(g(x))+1472.79189\] which simplifies to: \[3x^5 + 1x^4 + 4x^3 + 1x^2 + 5x + 9\] \vspace{0.1em} Obviously, the same method may be used for integer generated sequences with a starting integer value other than 0 or 1 by making a substitution of $g(x)=x-y$, with $y$ as the appropriate integer. The value of $y$ will depend upon whether the \mwnt or the \awnt was used in the calculation, and upon the starting integer value of the sequence. \end{examp} \section{Mathematical Basis} \label{sec:mb} \subsection{A Formula for the \mdc of the \dtc of a \psc} \label{ss:fmd} A \ps, given by $a_i, a_{i+1},a_{i+2}$, etc., has a \dt as defined in \Cref{def:ddt}. A general example showing $a_0,a_1,a_2, \text{ and } a_3$ was given in \Cref{tbl:gdt}. 
It is known \cite{bbqgmds} \cite{bbcmd} that the $k^{th}$ term of the \md of the sequence of the \dt is: \[ D_k=\sum_{i=0}^k(-1)^{k+i} \binom{k}{i} a_i \qquad k \geq 0 \] \noindent This may be proved via induction, or by the method given by Graham et al. (1994) \cite{bbcmp}. If we multiply by $(-1)^{-2i}=1$, for each $i$ in turn, we get: \begin{equation} \label{eq:md} D_k=\sum_{i=0}^k(-1)^{k-i} \binom{k}{i} a_i \qquad k \geq 0 \end{equation} \subsection{Using the \mdc with the \awnt} \label{ss:mdawnt} \subsubsection{Mathematical Derivation:} \label{sss:awntmd} In this section we derive the expressions linking the \md of the \dt of a \ps and the polynomial's coefficient multipliers to the equation for the \awnt, $\awntm$. This leads to the method of solving for the polynomial's unknown coefficients as shown in \Cref{ex:awnt}. First, since the starting integer used to generate the polynomial is $x=0$, it is obvious that $c_0=a_0$. Furthermore, recalling \Cref{eq:md}, the equation for the $k^{th}$ term of the \md, and substituting $a_i=\sum_{n=0}^d c_n i^n$, yields: \[ D0_k=\sum_{i=0}^k (-1)^{k-i} \binom{k}{i} \sum_{n=0}^d c_n i^n \qquad k \geq 0 \] \noindent where $d$ is the degree of the polynomial. \vspace{0.1em} Since both sums have a finite number of terms, and due to the commutative property of multiplication and addition, and the distributive property of multiplication over addition, we may rearrange the above equation into ($c_0$ is left out as it is already known from the first value in the main diagonal): \begin{equation} \label{eq:awntmde} D0_k=\sum_{n=1}^d c_n \ \underbrace{\sum_{i=0}^k(-1)^{k-i} \binom{k}{i} i^n}_{\awntm} \qquad n \geq 1, \ k \geq 1 \end{equation} This equation shows that the $k^{th}$ main diagonal element value for $k \geq 1$ is composed of $c_n, \ n \in 1..d$, multiplied by $\sum_{i=0}^k (-1)^{k-i} \binom{k}{i} i^n$.
This is column $k$, with corresponding row, $n$, in \awnta, and is seen by comparison of \Cref{eq:awntmde} with \Cref{eq:awnt}. \subsubsection{Solution Procedure using \boldmath \texorpdfstring{\awntm}{AWNT(n,k)} \unboldmath \hspace{-0.3em}, \awnta:} \label{sss:awntsp} Therefore, to find the unknown coefficients of a polynomial sequence generated with integers starting at 0, first construct the \dt until the elements of a row are all constant. The degree, $d$, of the polynomial is the row number, as defined in \Cref{def:ddt}, of the difference triangle with the constant values. Then start with column $k=d$ and row $n=d$ in \awnta, and for each $c_n, n \in 1..d$, assign the value of \awntm as a multiplier to $c_n$ (moving up the column is perhaps easiest) and equate it to the main diagonal value for row $k=n=d$ in the \dt. \awntm[n][d] will have multipliers of 0 for all $c_n$ except for $c_d$, allowing for easy calculation of $c_d$. Next, move back to column $k=d-1$ and starting at row $n=d$ assign the value of multipliers to each $c_n$, again moving up the column. Since the value of $c_d$ is known, equating the coefficients and multipliers to the main diagonal value in row $d-1$ will leave $c_{d-1}$ as the only unknown. Continue this process backwards to column $k=1$ to solve for the coefficients down to $c_1$. The value of $c_0$ is equal to the value in row 0 of the \dt (the first term of the sequence), and the solution is complete. See \Cref{ex:awnt} for a worked example using this procedure. \subsection{Using the \mdc with the \mwnt} \label{ss:mdmwnt} \subsubsection{Mathematical Derivation:} \label{sss:mwntmd} In this section we derive the expressions linking the \md of the \dt of a \ps and the polynomial's coefficient multipliers to the equation for the \mwnt, $\mwntm$. This leads to the method of solving for the polynomial's unknown coefficients as shown in \Cref{ex:mwnt}. 
In this case, the starting integer used to generate the polynomial is $x=1$, and we conveniently refer to the terms of the sequence as $a_1, a_2, a_3$, etc. If we look at the $m^{th}$ term of the \md from \Cref{eq:md}, we get: \[ D1_m=\sum_{j=0}^m(-1)^{m-j} \binom{m}{j}a_{j+1} \qquad m \geq 0 \] \noindent where: \[ a_{j+1}=\sum_{q=0}^d c_q (j+1)^q \] \noindent and $d$ is the degree of the polynomial, as before. Substituting, we get: \[ D1_m=\sum_{j=0}^m(-1)^{m-j} \binom{m}{j} \sum_{q=0}^d c_q (j+1)^q \qquad m \geq 0 \] Since both sums have a finite number of terms, and due to the commutative property of multiplication and addition, and the distributive property of multiplication over addition, we may rearrange the above equation into: \begin{align} \label{eq:intermed} D1_m &= \sum_{q=0}^d c_q \sum_{j=0}^m(-1)^{m-j} \binom{m}{j} (j+1)^q & \text{Let } i &=j+1 \Rightarrow j=i-1: \nonumber \\ D1_m &= \sum_{q=0}^d c_q \sum_{i=1}^{m+1}(-1)^{m-i+1} \binom{m}{i-1} i^q & \text{Let } k&=m+1 \Rightarrow m=k-1: \nonumber \\ D1_{k-1} &= \sum_{q=0}^d c_q \sum_{i=1}^{k}(-1)^{k-i} \binom{k-1}{i-1} i^q & k \geq 1 \end{align} We now need to show that the following relationship involving the right hand sum of \Cref{eq:intermed} is valid: \begin{equation} \label{eq:qpone} \sum_{i=1}^{k}(-1)^{k-i} \binom{k-1}{i-1} i^q = \frac{1}{k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^{q+1} \qquad q \geq 0, \ k \geq 1 \end{equation} First, on the right hand side, the $i=0$ term is 0 since $0^{q+1}=0$. The rest of the terms, with $i \geq 1$, are equal on a term by term basis, shown as follows: \begin{align*} (-1)^{k-i} \binom{k-1}{i-1}i^q & \overset{?}{=} \frac{1}{k} (-1)^{k-i} \binom{k}{i} i^{q+1} \\ \binom{k-1}{i-1}i^q & \overset{?}{=} \frac{1}{k} \binom{k}{i} i^{q+1} \\ \frac{(k-1)! \ i^q}{(k-1-(i-1))! \ (i-1)!} & \overset{?}{=} \frac{1}{k} \ \frac{k! \ i^{q+1}}{(k-i)! \ i!} \\ \frac{(k-1)! \ i^q}{(k-i)! \ (i-1)!} & \overset{?}{=} \frac{(k-1)! \ i^{q+1}}{(k-i)! 
\ i!} \\ \frac{i^q}{(i-1)!} & \overset{?}{=} \frac{i^q \ i}{i!} \\ \frac{i^q}{(i-1)!} & \overset{\checkmark}{=} \frac{i^q}{(i-1)!} \end{align*} \noindent which confirms the relationship (again, with $i \geq 1$). We now substitute the right side expression of \Cref{eq:qpone} for the right side sum of \Cref{eq:intermed} and get: \begin{align} \label{eq:mwntmde} D1_{k-1} &= \sum_{q=0}^d c_q \ \frac{1}{k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^{q+1} & q \geq 0, \ k \geq 1 & & \text{Let } n=q+1 \Rightarrow q=n-1 \nonumber \\ D1_{k-1} &= \sum_{n=1}^{d+1} c_{n-1} \ \underbrace{\frac{1}{k} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^n}_{\mwntm} & n \geq 1, \ k \geq 1 & \end{align} This equation shows that for $k \geq 1$, the $(k-1)^{th}$ main diagonal element value is composed of $c_{n-1}, \ n \in 1..(d+1)$, multiplied by $\frac{1}{k} \sum_{i=0}^{k} (-1)^{(k-i)} \binom{k}{i} i^n$. This is column, $k$, with corresponding row, $n$, in \mwnta, as seen by comparison of \Cref{eq:mwntmde} with \Cref{eq:mwnt}. \subsubsection{Solution Procedure using \boldmath \texorpdfstring{\mwntm}{MWNT(n,k)} \unboldmath \hspace{-0.3em}, \mwnta:} \label{sss:mwntsp} Therefore, to find the unknown coefficients of a polynomial sequence generated with integers starting at 1, first construct the \dt until the elements of a row are all constant. The degree, $d$, of the polynomial is the row number, as defined in \Cref{def:ddt}, of the difference triangle with the constant values. Then start with column $k=d+1$ and row $n=d+1$ in \mwnta, and for each $c_{n-1}, n \in 1..(d+1)$, assign the value of \mwntm as a multiplier to $c_{n-1}$ (moving up the column) and equate it to the main diagonal value for row $k-1=n-1=d$ in the \dt. \mwntm[n][d+1] will have multipliers of 0 for all $c_{n-1}$ except for $c_d$, allowing for easy calculation of $c_d$. Next, move back to column $k=d$ and starting at row $n=d+1$ assign the value of multipliers to each $c_{n-1}$, again moving up the column. 
Since the value of $c_d$ is known, equating the coefficients and multipliers to the main diagonal value in row $d-1$ will leave $c_{d-1}$ as the only unknown. Continue this process backwards to column $k=1$ to solve for the coefficients down to $c_0$, and the solution is complete. See \Cref{ex:mwnt} for a worked example using this procedure. \subsection{Gaining Insight Using \efdt} \label{ss:mbefdt} \efdt as presented by \gaq \cite{bbqgefdt} states that given $f(x)=\sum_{j=0}^d c_j x^j$ then: \[ \sum_{i=0}^k(-1)^i \binom{k}{i}f(i) = \begin{cases} 0, & 0 \leq d < k \\ (-1)^k k! \ c_k, & d=k \end{cases} \] \vspace{0.2em} \noindent The authors use \efdt and let: \[f(x)=(z-bx)^n \ \Rightarrow \ f(i)=(z-bi)^n, \ n \in \mathbb{Z}_{\geq 0}\] to derive the following equation \cite{bbqgmme}: \begin{equation} \label{eq:sec} \sum_{i=0}^k (-1)^i \binom{k}{i} (z-bi)^n = \begin{cases} 0, & n < k \\ b^k \ k!, & n=k \end{cases} \end{equation} \vspace{0.2em} \noindent By setting $b=-1$ and $z=0$ we can derive \Cref{eq:main} below as follows: \[ \sum_{i=0}^k (-1)^i \binom{k}{i} i^n = \begin{cases} 0, & n < k \\ (-1)^k k!, & n=k \end{cases} \] \noindent If we multiply each side by $(-1)^k$, we get: \[ \sum_{i=0}^k (-1)^{k+i} \binom{k}{i} i^n = \begin{cases} 0, & n < k \\ (-1)^{2k} k!, & n=k \end{cases} \] \noindent If we multiply the left side by $(-1)^{-2i}=1$, for each $i$ in turn, and since $(-1)^{2k}=1$, we get: \begin{equation} \label{eq:main} \sum_{i=0}^k (-1)^{k-i} \binom{k}{i} i^n = \begin{cases} 0, & n < k \\ k!, & n=k \end{cases} \end{equation} We can use \Cref{eq:sec} and \Cref{eq:main} to gain insight into the structure of the contents of the \mwnt (\Cref{tbl:mwntt}), the \awnt (\Cref{tbl:awntt}) and the \dt of a sequence. \Cref{eq:main} shows why the factorials of numbers appear on the right diagonal of both triangles, starting from 0! in \Cref{tbl:mwntt}, and from 1! in \Cref{tbl:awntt}. 
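The case analysis in \Cref{eq:main} is easy to verify numerically, and the same sum evaluated for $n \geq k$ reproduces the entries $\awntm$ of \Cref{tbl:awntt}; a short Python sketch using only the standard library (the function name is ours):

```python
from math import comb, factorial

def alt_sum(n, k):
    """The alternating sum of Equation (eq:main): sum of (-1)^(k-i) C(k,i) i^n."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1))

# 0 below the diagonal (n < k), and k! on the diagonal (n = k):
assert all(alt_sum(n, k) == 0 for k in range(1, 10) for n in range(1, k))
assert all(alt_sum(k, k) == factorial(k) for k in range(1, 10))

# For n >= k the sum gives the AWNT entries, e.g. row n = 6 of Table 2:
print([alt_sum(6, k) for k in range(1, 7)])    # [1, 62, 540, 1560, 1800, 720]
```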
It also explains the zeros in the tables when $n<k$, and the rows of 0 that one would get by continuing the \dt past the constant row, as $n<k$. \Cref{eq:sec} may be used to explain the constant row of the \dt. In that equation, $z$ may be taken to be any number (not just 0), so along with having $b=-1$, this explains why successive terms in the row are all equal (and related to the factorial), and that the method of isolating the coefficients could be used with any row of the \dt because the coefficient multipliers would also remain 0 for $n<k$.\footnote{A multiplication by $(-1)^k$ is assumed as was done in proceeding from \Cref{eq:sec} to \Cref{eq:main} in order to match the form of the diagonal terms where the multipliers of the $i=k$ term are positive (so that the factorials are all positive).} However, different tables of numbers would need to be used for the multipliers (the factorials and zeros would still be in place), or the multipliers could just be calculated from the terms in \Cref{eq:sec} with the appropriate values for $z$ and $b$ -- see \Cref{sec:eor} for the terms used in \Cref{ex:awnt} with \awnta ($z=0, \ b=-1$). \section{\texorpdfstring{\Cref{ex:awnt}}{Example 1} Revisited} \label{sec:eor} In this section, we will show \Cref{ex:awnt} in full, per \Cref{eq:awntmde} with all terms shown. Along with \Cref{eq:main}, this will hopefully provide a more complete view of how the method works. 
So we have, with the binomial coefficients shown in bold: \arraycolsep=0.5pt \[ \begin{array}{ccccccccccccccccccccl} 2880 & = & c_6 & \cdot & (\mathbf{1} \cdot 6^6 & - & \mathbf{6} \cdot 5^6 & + & \mathbf{15} \cdot 4^6 & - & \mathbf{20} \cdot 3^6 & + & \mathbf{15} \cdot 2^6 & - & \mathbf{6} \cdot 1^6 & + & \mathbf{1} \cdot 0^6) & = & c_6 \cdot \awntm[6][6] & = & c_6 \cdot 720 \\ & + & c_5 & \cdot & (\mathbf{1} \cdot 6^5 & - & \mathbf{6} \cdot 5^5 & + & \mathbf{15} \cdot 4^5 & - & \mathbf{20} \cdot 3^5 & + & \mathbf{15} \cdot 2^5 & - & \mathbf{6} \cdot 1^5 & + & \mathbf{1} \cdot 0^5) & = & c_5 \cdot \awntm[5][6] & = & c_5 \cdot 0 \\ & + & c_4 & \cdot & (\mathbf{1} \cdot 6^4 & - & \mathbf{6} \cdot 5^4 & + & \mathbf{15} \cdot 4^4 & - & \mathbf{20} \cdot 3^4 & + & \mathbf{15} \cdot 2^4 & - & \mathbf{6} \cdot 1^4 & + & \mathbf{1} \cdot 0^4) & = & c_4 \cdot \awntm[4][6] & = & c_4 \cdot 0 \\ & + & c_3 & \cdot & (\mathbf{1} \cdot 6^3 & - & \mathbf{6} \cdot 5^3 & + & \mathbf{15} \cdot 4^3 & - & \mathbf{20} \cdot 3^3 & + & \mathbf{15} \cdot 2^3 & - & \mathbf{6} \cdot 1^3 & + & \mathbf{1} \cdot 0^3) & = & c_3 \cdot \awntm[3][6] & = & c_3 \cdot 0 \\ & + & c_2 & \cdot & (\mathbf{1} \cdot 6^2 & - & \mathbf{6} \cdot 5^2 & + & \mathbf{15} \cdot 4^2 & - & \mathbf{20} \cdot 3^2 & + & \mathbf{15} \cdot 2^2 & - & \mathbf{6} \cdot 1^2 & + & \mathbf{1} \cdot 0^2) & = & c_2 \cdot \awntm[2][6] & = & c_2 \cdot 0 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 6^1 & - & \mathbf{6} \cdot 5^1 & + & \mathbf{15} \cdot 4^1 & - & \mathbf{20} \cdot 3^1 & + & \mathbf{15} \cdot 2^1 & - & \mathbf{6} \cdot 1^1 & + & \mathbf{1} \cdot 0^1) & = & c_1 \cdot \awntm[1][6] & = & c_1 \cdot 0 \\ c_6 \Rightarrow 4 \\ \\ 7800 & = & 4 & \cdot & (\mathbf{1} \cdot 5^6 & - & \mathbf{5} \cdot 4^6 & + & \mathbf{10} \cdot 3^6 & - & \mathbf{10} \cdot 2^6 & + & \mathbf{5} \cdot 1^6 & - & \mathbf{1} \cdot 0^6) & & & = & 4 \cdot \awntm[6][5] & = & 4 \cdot 1800 \\ & + & c_5 & \cdot & (\mathbf{1} \cdot 5^5 & - & \mathbf{5} \cdot 
4^5 & + & \mathbf{10} \cdot 3^5 & - & \mathbf{10} \cdot 2^5 & + & \mathbf{5} \cdot 1^5 & - & \mathbf{1} \cdot 0^5) & & & = & c_5 \cdot \awntm[5][5] & = & c_5 \cdot 120 \\ & + & c_4 & \cdot & (\mathbf{1} \cdot 5^4 & - & \mathbf{5} \cdot 4^4 & + & \mathbf{10} \cdot 3^4 & - & \mathbf{10} \cdot 2^4 & + & \mathbf{5} \cdot 1^4 & - & \mathbf{1} \cdot 0^4) & & & = & c_4 \cdot \awntm[4][5] & = & c_4 \cdot 0 \\ & + & c_3 & \cdot & (\mathbf{1} \cdot 5^3 & - & \mathbf{5} \cdot 4^3 & + & \mathbf{10} \cdot 3^3 & - & \mathbf{10} \cdot 2^3 & + & \mathbf{5} \cdot 1^3 & - & \mathbf{1} \cdot 0^3) & & & = & c_3 \cdot \awntm[3][5] & = & c_3 \cdot 0 \\ & + & c_2 & \cdot & (\mathbf{1} \cdot 5^2 & - & \mathbf{5} \cdot 4^2 & + & \mathbf{10} \cdot 3^2 & - & \mathbf{10} \cdot 2^2 & + & \mathbf{5} \cdot 1^2 & - & \mathbf{1} \cdot 0^2) & & & = & c_2 \cdot \awntm[2][5] & = & c_2 \cdot 0 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 5^1 & - & \mathbf{5} \cdot 4^1 & + & \mathbf{10} \cdot 3^1 & - & \mathbf{10} \cdot 2^1 & + & \mathbf{5} \cdot 1^1 & - & \mathbf{1} \cdot 0^1) & & & = & c_1 \cdot \awntm[1][5] & = & c_1 \cdot 0 \\ c_5 \Rightarrow 5 \\ \\ 7584 & = & 4 & \cdot & (\mathbf{1} \cdot 4^6 & - & \mathbf{4} \cdot 3^6 & + & \mathbf{6} \cdot 2^6 & - & \mathbf{4} \cdot 1^6 & + & \mathbf{1} \cdot 0^6) & & & & & = & 4 \cdot \awntm[6][4] & = & 4 \cdot 1560 \\ & + & 5 & \cdot & (\mathbf{1} \cdot 4^5 & - & \mathbf{4} \cdot 3^5 & + & \mathbf{6} \cdot 2^5 & - & \mathbf{4} \cdot 1^5 & + & \mathbf{1} \cdot 0^5) & & & & & = & 5 \cdot \awntm[5][4] & = & 5 \cdot 240 \\ & + & c_4 & \cdot & (\mathbf{1} \cdot 4^4 & - & \mathbf{4} \cdot 3^4 & + & \mathbf{6} \cdot 2^4 & - & \mathbf{4} \cdot 1^4 & + & \mathbf{1} \cdot 0^4) & & & & & = & c_4 \cdot \awntm[4][4] & = & c_4 \cdot 24 \\ & + & c_3 & \cdot & (\mathbf{1} \cdot 4^3 & - & \mathbf{4} \cdot 3^3 & + & \mathbf{6} \cdot 2^3 & - & \mathbf{4} \cdot 1^3 & + & \mathbf{1} \cdot 0^3) & & & & & = & c_3 \cdot \awntm[3][4] & = & c_3 \cdot 0 \\ & + & c_2 & \cdot & (\mathbf{1} 
\cdot 4^2 & - & \mathbf{4} \cdot 3^2 & + & \mathbf{6} \cdot 2^2 & - & \mathbf{4} \cdot 1^2 & + & \mathbf{1} \cdot 0^2) & & & & & = & c_2 \cdot \awntm[2][4] & = & c_2 \cdot 0 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 4^1 & - & \mathbf{4} \cdot 3^1 & + & \mathbf{6} \cdot 2^1 & - & \mathbf{4} \cdot 1^1 & + & \mathbf{1} \cdot 0^1) & & & & & = & c_1 \cdot \awntm[1][4] & = & c_1 \cdot 0 \\ c_4 \Rightarrow 6 \\ \\ 3168 & = & 4 & \cdot & (\mathbf{1} \cdot 3^6 & - & \mathbf{3} \cdot 2^6 & + & \mathbf{3} \cdot 1^6 & - & \mathbf{1} \cdot 0^6) & & & & & & & = & 4 \cdot \awntm[6][3] & = & 4 \cdot 540 \\ & + & 5 & \cdot & (\mathbf{1} \cdot 3^5 & - & \mathbf{3} \cdot 2^5 & + & \mathbf{3} \cdot 1^5 & - & \mathbf{1} \cdot 0^5) & & & & & & & = & 5 \cdot \awntm[5][3] & = & 5 \cdot 150 \\ & + & 6 & \cdot & (\mathbf{1} \cdot 3^4 & - & \mathbf{3} \cdot 2^4 & + & \mathbf{3} \cdot 1^4 & - & \mathbf{1} \cdot 0^4) & & & & & & & = & 6 \cdot \awntm[4][3] & = & 6 \cdot 36 \\ & + & c_3 & \cdot & (\mathbf{1} \cdot 3^3 & - & \mathbf{3} \cdot 2^3 & + & \mathbf{3} \cdot 1^3 & - & \mathbf{1} \cdot 0^3) & & & & & & & = & c_3 \cdot \awntm[3][3] & = & c_3 \cdot 6 \\ & + & c_2 & \cdot & (\mathbf{1} \cdot 3^2 & - & \mathbf{3} \cdot 2^2 & + & \mathbf{3} \cdot 1^2 & - & \mathbf{1} \cdot 0^2) & & & & & & & = & c_2 \cdot \awntm[2][3] & = & c_2 \cdot 0 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 3^1 & - & \mathbf{3} \cdot 2^1 & + & \mathbf{3} \cdot 1^1 & - & \mathbf{1} \cdot 0^1) & & & & & & & = & c_1 \cdot \awntm[1][3] & = & c_1 \cdot 0 \\ c_3 \Rightarrow 7 \\ \\ 540 & = & 4 & \cdot & (\mathbf{1} \cdot 2^6 & - & \mathbf{2} \cdot 1^6 & + & \mathbf{1} \cdot 0^6) & & & & & & & & & = & 4 \cdot \awntm[6][2] & = & 4 \cdot 62 \\ & + & 5 & \cdot & (\mathbf{1} \cdot 2^5 & - & \mathbf{2} \cdot 1^5 & + & \mathbf{1} \cdot 0^5) & & & & & & & & & = & 5 \cdot \awntm[5][2] & = & 5 \cdot 30 \\ & + & 6 & \cdot & (\mathbf{1} \cdot 2^4 & - & \mathbf{2} \cdot 1^4 & + & \mathbf{1} \cdot 0^4) & & & & & & & & & = & 6 \cdot \awntm[4][2] 
& = & 6 \cdot 14 \\ & + & 7 & \cdot & (\mathbf{1} \cdot 2^3 & - & \mathbf{2} \cdot 1^3 & + & \mathbf{1} \cdot 0^3) & & & & & & & & & = & 7 \cdot \awntm[3][2] & = & 7 \cdot 6 \\ & + & c_2 & \cdot & (\mathbf{1} \cdot 2^2 & - & \mathbf{2} \cdot 1^2 & + & \mathbf{1} \cdot 0^2) & & & & & & & & & = & c_2 \cdot \awntm[2][2] & = & c_2 \cdot 2 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 2^1 & - & \mathbf{2} \cdot 1^1 & + & \mathbf{1} \cdot 0^1) & & & & & & & & & = & c_1 \cdot \awntm[1][2] & = & c_1 \cdot 0 \\ c_2 \Rightarrow 8 \\ \\ 39 & = & 4 & \cdot & (\mathbf{1} \cdot 1^6 & - & \mathbf{1} \cdot 0^6) & & & & & & & & & & & = & 4 \cdot \awntm[6][1] & = & 4 \cdot 1 \\ & + & 5 & \cdot & (\mathbf{1} \cdot 1^5 & - & \mathbf{1} \cdot 0^5) & & & & & & & & & & & = & 5 \cdot \awntm[5][1] & = & 5 \cdot 1 \\ & + & 6 & \cdot & (\mathbf{1} \cdot 1^4 & - & \mathbf{1} \cdot 0^4) & & & & & & & & & & & = & 6 \cdot \awntm[4][1] & = & 6 \cdot 1 \\ & + & 7 & \cdot & (\mathbf{1} \cdot 1^3 & - & \mathbf{1} \cdot 0^3) & & & & & & & & & & & = & 7 \cdot \awntm[3][1] & = & 7 \cdot 1 \\ & + & 8 & \cdot & (\mathbf{1} \cdot 1^2 & - & \mathbf{1} \cdot 0^2) & & & & & & & & & & & = & 8 \cdot \awntm[2][1] & = & 8 \cdot 1 \\ & + & c_1 & \cdot & (\mathbf{1} \cdot 1^1 & - & \mathbf{1} \cdot 0^1) & & & & & & & & & & & = & c_1 \cdot \awntm[1][1] & = & c_1 \cdot 1 \\ c_1 \Rightarrow 9 \end{array} \] \noindent We know that $c_0=10$ for the reasons stated before in \Cref{ex:awnt}. This completes the solution. \vspace{1em} Further notes: Obviously, $c_0$ is absent from any row number greater than 0 in the \dt, as it is subtracted out from each term in row 1. It should also be noted that $c_1$ will be absent from any row number greater than 1 in the \dt, because in row 1 the $c_1 x$ term contributes a constant difference of $c_1$, which will be subtracted out from each term in row 2. This is not the case for the higher order terms.
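As an independent cross-check of the back-substitution above, the bracketed sums can be evaluated directly. The short script below is a sketch, assuming that the table values $\awntm[q][k]$ equal the alternating binomial sums $\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^q$; this reading is consistent with every value printed in the worked rows (e.g. $\awntm[6][3]=540$ and $\awntm[6][2]=62$). Under that assumption it recovers $c_3=7$, $c_2=8$ and $c_1=9$ from the row values $3168$, $540$ and $39$, given $c_6=4$, $c_5=5$, $c_4=6$.

```python
from math import comb

def awntm(q, k):
    # Alternating binomial sum matching the bracketed factors in the worked
    # rows above (assumed reading of the \awntm[q][k] table values).
    return sum((-1) ** j * comb(k, j) * (k - j) ** q for j in range(k + 1))

# Coefficients already determined in the earlier rows.
coeffs = {6: 4, 5: 5, 4: 6}

# Each row value determines the next coefficient, since awntm(q, k) = 0
# for 1 <= q < k kills the still-unknown lower-order terms.
for value, k in [(3168, 3), (540, 2), (39, 1)]:
    rest = sum(c * awntm(q, k) for q, c in coeffs.items())
    coeffs[k] = (value - rest) // awntm(k, k)  # awntm(k, k) = k!
    print(f"c_{k} = {coeffs[k]}")
```

Note that `awntm(q, k)` vanishes for $1 \le q < k$, which is why the unknown lower-order coefficients drop out of each row, and why $c_0$ never appears in any row at all.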
\section{\texorpdfstring{\Cref{ex:mwnt}}{Example 2} (Partially) Revisited} \label{sec:etr} In this section, we will show a partial working of \Cref{ex:mwnt}. Although only partial, this presentation of the solution will hopefully clarify how the method works by going into more detail in the areas that are covered, especially when compared to the full solution to \Cref{ex:awnt} given above. Our focus will be on \Cref{eq:intermed} and \Cref{eq:mwntmde}, for the calculation of $c_6$. \Cref{eq:intermed} gives the expression derived for the \md values and the coefficients per \Cref{eq:md}, given that the starting integer used to generate the sequence is $x=1$. Since $k=7$, and with $q$ taken appropriately for each coefficient, we have, (partially): \arraycolsep=1pt \[ \begin{array}{ccccccccccccccccccl} 1440 & = & c_6 & \cdot & (\mathbf{1} \cdot 7^6 & - & \mathbf{6} \cdot 6^6 & + & \mathbf{15} \cdot 5^6 & - & \mathbf{20} \cdot 4^6 & + & \mathbf{15} \cdot 3^6 & - & \mathbf{6} \cdot 2^6 & + & \mathbf{1} \cdot 1^6) & = & c_6 \cdot 720 \\ & + & c_5 & \cdot & (\mathbf{1} \cdot 7^5 & - & \mathbf{6} \cdot 6^5 & + & \mathbf{15} \cdot 5^5 & - & \mathbf{20} \cdot 4^5 & + & \mathbf{15} \cdot 3^5 & - & \mathbf{6} \cdot 2^5 & + & \mathbf{1} \cdot 1^5) & = & c_5 \cdot 0 \\ & \vdots \end{array} \] \noindent \Cref{eq:mwntmde}, with $n=q+1$ taken appropriately for each coefficient, gives (partially): \arraycolsep=0.0pt \[ \begin{array}{ccccccccccccccccccccccl} 1440 & = & c_6 & \cdot \frac{1}{7} & (\mathbf{1} \cdot 7^7 & - & \mathbf{7} \cdot 6^7 & + & \mathbf{21} \cdot 5^7 & - & \mathbf{35} \cdot 4^7 & + & \mathbf{35} \cdot 3^7 & - & \mathbf{21} \cdot 2^7 & + & \mathbf{7} \cdot 1^7 & - & \mathbf{1} \cdot 0^7) & = & c_6 \cdot \mwntm[7][7] & = & c_6 \cdot 720 \vspace{0.2em} \\ & + & c_5 &\cdot \frac{1}{7} & (\mathbf{1} \cdot 7^6 & - & \mathbf{7} \cdot 6^6 & + & \mathbf{21} \cdot 5^6 & - & \mathbf{35} \cdot 4^6 & + & \mathbf{35} \cdot 3^6 & - & \mathbf{21} \cdot 2^6 & + & \mathbf{7} \cdot 
1^6 & - & \mathbf{1} \cdot 0^6) & = & c_5 \cdot \mwntm[6][7] & = & c_5 \cdot 0 \\ & \vdots \end{array} \] \section{Closing Remarks} \label{sec:cr} We have presented a method of solving for the unknown coefficients of a \ps using two of the \wnts and the \md of the sequence's \dt. However, we are unsure if our description of the method in this paper meets the rigorous requirements for a mathematical proof.\footnote{Our experience is in engineering, not in producing ironclad mathematical proofs. The recognition of the method as presented in this paper came about upon investigation of the results of the statistical analysis of an electronics manufacturing process.} Regarding the use of the method described in \Cref{ss:mdawnt} and \Cref{ss:mdmwnt}, if we have not met the requirements, we feel that we are fairly close to doing so. We are quite sure that we have not met the requirements of mathematical rigor regarding extending the method as in \Cref{ex:awntni}. However, we believe that the extension of the method is valid and that any result obtained in practice may be checked for validity on a case-by-case basis. We welcome papers that address any shortcomings in this paper, with due credit going to the authors.\footnote{We are assuming, perhaps incorrectly, that members of the mathematical community with the necessary skills and experience to write such papers will also deem it worthwhile to do so.}\footnote{We also note that while we are interested in real numbers, we surmise that mathematicians may wish to extend the method to include complex numbers, if possible.} Certainly, we feel that the \wnts are worthy of wider recognition and perhaps of standard definition and notation as well. \clearpage \def\bibindent{1em}
\section{Introduction} \setcounter{equation}{0} In the last decade, there has been a great deal of interest in studying the nonlinear Schr\"odinger equation with inverse-square potential, namely \begin{align} i\partial_t u + \Delta u + c|x|^{-2} u = \mu |u|^\alpha u, \quad (t,x) \in \mathbb R \times \mathbb R^d, \label{NLS inverse square introduction} \end{align} where $d\geq 3$, $u: \mathbb R \times \mathbb R^d \rightarrow \mathbb C$, $c \ne 0$ satisfies $c<\lambda(d):=\left(\frac{d-2}{2}\right)^2$, $\mu \in \mathbb R$ and $\alpha>0$. The nonlinear Schr\"odinger equation $(\ref{NLS inverse square introduction})$ appears in a variety of physical settings, such as quantum field equations or black hole solutions of Einstein's equations (see e.g. \cite{Case, CamEpeFanCan, KalSchWalWus}) and quantum gas theory (see e.g. \cite{AstrakharchikMalomed, SakaguchiMalomed-11, SakaguchiMalomed-13}). The mathematical interest in the nonlinear Schr\"odinger equation with inverse-square potential comes from the fact that the potential is homogeneous of degree $-2$ and thus scales exactly the same as the Laplacian. Recently, the equation $(\ref{NLS inverse square introduction})$ has been intensively studied (see e.g. \cite{Bensouilah, BensouilahDinh, BensouilahDinhZhu, BurPlaStaZad, CsoboGenoud, Dinh-inverse, KilMiaVisZhaZhe-energy, KillipMurphyVisanZheng, OkazawaSuzukiYokota, TracZogra, ZhangZheng} and references therein). In this paper, we consider the $L^2$-supercritical nonlinear Schr\"odinger equation with inverse-square potential, namely \begin{align} \left\{ \begin{array}{rcl} i\partial_t u + \Delta u + c|x|^{-2} u &=& - |u|^{\alpha} u, \quad (t,x)\in \mathbb R \times \mathbb R^d, \\ u(0)&=& u_0 \in H^1, \end{array} \right. \label{inverse square NLS} \end{align} where $d\geq 3$, $u: \mathbb R \times \mathbb R^d \rightarrow \mathbb C$, $u_0:\mathbb R^d \rightarrow \mathbb C$, $c\ne 0$ satisfies $c<\lambda(d)$ and $\frac{4}{d} <\alpha<\frac{4}{d-2}$.
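To make the scaling remark precise, note that both the Laplacian and the potential are homogeneous of degree $-2$: setting $u_\lambda(t,x):= \lambda^{\frac{2}{\alpha}} u(\lambda^2 t, \lambda x)$ for $\lambda>0$, a direct computation gives \[ i\partial_t u_\lambda + \Delta u_\lambda + c|x|^{-2} u_\lambda - \mu |u_\lambda|^\alpha u_\lambda = \lambda^{\frac{2}{\alpha}+2} \left( i\partial_t u + \Delta u + c|x|^{-2} u - \mu |u|^\alpha u \right)(\lambda^2 t, \lambda x), \] so $(\ref{NLS inverse square introduction})$ is invariant under this scaling. The scaling leaves the homogeneous Sobolev norm $\dot{H}^{s_c}$ with $s_c=\frac{d}{2}-\frac{2}{\alpha}$ unchanged; in particular, the range $\frac{4}{d}<\alpha<\frac{4}{d-2}$ considered here corresponds to $0<s_c<1$.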
The main purpose of this paper is to study the instability of radial ground state standing waves for \eqref{inverse square NLS}. Before stating our result, let us recall known results related to the stability and instability of standing waves for nonlinear Schr\"odinger-like equations. The stability of standing waves for the classical nonlinear Schr\"odinger equation (i.e. $c=0$ in $(\ref{inverse square NLS})$) has been widely pursued by physicists and mathematicians (see e.g. \cite{F} for reviews). To our knowledge, the first work addressing the orbital stability of standing waves for the classical NLS is due to Cazenave-Lions \cite{CazenaveLions} via the concentration-compactness principle. Later, Weinstein in \cite{Weinstein85, Weinstein86} gave another approach to prove the orbital stability of standing waves for the classical NLS. Afterwards, Grillakis-Shatah-Strauss in \cite{GrillakisShatahStrauss87, GrillakisShatahStrauss90} gave a criterion based on a form of coercivity for the action functional (see $(\ref{action functional})$) to prove the stability of standing waves for a Hamiltonian system which is invariant under a one-parameter group of operators. Since then, a lot of results on the orbital stability of standing waves for nonlinear dispersive equations have been obtained. For the nonlinear Schr\"odinger equation with a harmonic potential, Zhang \cite{Zhang} succeeded in obtaining the orbital stability of standing waves by the weighted compactness lemma. Recently, the orbital stability phenomenon was proved for the fractional nonlinear Schr\"{o}dinger equation by establishing the profile decomposition for bounded sequences in $H^s$ (see e.g. \cite{PengShi, ZhangZhu}). The instability of standing waves for the classical NLS was first studied by Berestycki-Cazenave \cite{BerestyckiCazenave} (see also \cite{Cazenave}). Later, Le Coz in \cite{LeCoz08} gave an alternative, simple proof of the classical result of Berestycki-Cazenave.
The key point is to establish the finite time blow-up by using the variational characterization of the ground states as minimizers of the action functional and the virial identity. For the Schr\"odinger equations with more general nonlinearities, this method does not work due to the lack of virial identities. In such cases, one may use a powerful tool of Grillakis-Shatah-Strauss \cite{GrillakisShatahStrauss87, GrillakisShatahStrauss90} to derive the instability of standing waves. Recently, the authors in \cite{BensouilahDinhZhu} succeeded, using a profile decomposition theorem proved by the first author \cite{Bensouilah}, in establishing the stability of standing waves for \eqref{inverse square NLS} in the $L^2$-subcritical regime and the instability by blow-up in the $L^2$-critical regime. The main goal here is to extend these results to the $L^2$-supercritical case but only for radial ground state standing waves. Throughout this paper, we call a standing wave a solution of $(\ref{inverse square NLS})$ of the form $e^{i\omega t} \phi_\omega$, where $\omega \in \mathbb R$ is a frequency and $\phi_\omega \in H^1$ is a nontrivial solution to the elliptic equation \begin{align} -\Delta \phi_\omega + \omega \phi_\omega - c|x|^{-2} \phi_\omega - |\phi_\omega|^\alpha \phi_\omega =0. \label{elliptic equation} \end{align} Note that the existence of positive radial solutions to the elliptic equation \[ -\Delta \phi + \phi - c|x|^{-2} \phi - |\phi|^\alpha \phi =0 \] was shown in \cite[Theorem 3.1]{KillipMurphyVisanZheng} and \cite[Theorem 4.1]{Dinh-inverse}. By setting $\phi_\omega(x):= \left(\sqrt{\omega} \right)^{\frac{2}{\alpha}} \phi(\sqrt{\omega}x)$, it is easy to see that $\phi_\omega$ is a solution of $(\ref{elliptic equation})$. This shows the existence of positive radial solutions to $(\ref{elliptic equation})$.
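For completeness, this scaling can be verified directly: with $y=\sqrt{\omega}x$, each term of $(\ref{elliptic equation})$ carries the same factor $\omega^{1+\frac{1}{\alpha}}$, namely \begin{align*} -\Delta \phi_\omega(x) &= -\omega^{1+\frac{1}{\alpha}} (\Delta \phi)(y), & \omega \phi_\omega(x) &= \omega^{1+\frac{1}{\alpha}} \phi(y), \\ c|x|^{-2}\phi_\omega(x) &= \omega^{1+\frac{1}{\alpha}} c|y|^{-2}\phi(y), & |\phi_\omega|^{\alpha}\phi_\omega(x) &= \omega^{1+\frac{1}{\alpha}} (|\phi|^{\alpha}\phi)(y), \end{align*} so that \[ -\Delta \phi_\omega + \omega \phi_\omega - c|x|^{-2}\phi_\omega - |\phi_\omega|^{\alpha}\phi_\omega = \omega^{1+\frac{1}{\alpha}} \left( -\Delta \phi + \phi - c|y|^{-2}\phi - |\phi|^{\alpha}\phi \right) = 0. \]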
Note also that $(\ref{elliptic equation})$ can be written as $S'_\omega(\phi_\omega)=0$, where \begin{align} \begin{aligned} S_\omega(v) &:= E(v) + \frac{\omega}{2} \|v\|^2_{L^2} \\ &\mathrel{\phantom{:}}= \frac{1}{2} \|v\|^2_{\dot{H}^1_c} + \frac{\omega}{2} \|v\|^2_{L^2} -\frac{1}{\alpha+2} \|v\|^{\alpha+2}_{L^{\alpha+2}} \end{aligned} \label{action functional} \end{align} is the action functional. Here \begin{align} \|v\|^2_{\dot{H}^1_c} := \|\nabla v\|^2_{L^2} - c\||x|^{-1} v\|^2_{L^2} \label{hardy functional} \end{align} is the Hardy functional. We denote the set of non-trivial radial solutions of $(\ref{elliptic equation})$ by \[ \mathcal A_{\text{rad},\omega}:= \left\{ v \in H^1_{\text{rad}} \backslash \{0\} \ : \ S'_\omega(v) =0 \right\}, \] where $H^1_{\text{rad}}$ is the space of radial $H^1$ functions. \begin{definition} [Radial ground states] \label{definition radial ground state} A function $\phi \in \mathcal A_{\text{rad},\omega}$ is called {\bf a radial ground state} for $(\ref{elliptic equation})$ if it is a minimizer of $S_\omega$ over the set $\mathcal A_{\text{rad},\omega}$. The set of radial ground states is denoted by $\mathcal G_{\text{rad},\omega}$. In particular, \[ \mathcal G_{\text{rad},\omega} = \left\{ \phi \in \mathcal A_{\text{rad},\omega} \ : \ S_\omega(\phi) \leq S_\omega(v), \ \forall v \in \mathcal A_{\text{rad},\omega} \right\}. \] \end{definition} We have the following result on the existence of radial ground states for $(\ref{elliptic equation})$. \begin{proposition} \label{proposition existence radial ground states} Let $d\geq 3$, $c\ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d}<\alpha<\frac{4}{d-2}$ and $\omega>0$. Then the set $\mathcal{G}_{\emph{rad},\omega}$ is not empty, and it is characterized by \[ \mathcal{G}_{\emph{rad},\omega} = \left\{ v \in H^1_{\emph{rad}} \backslash \{0\} \ : \ S_\omega(v) = d(\emph{rad},\omega), \ K_\omega(v)=0 \right\}, \] where \[ K_\omega(v):= \left.
\partial_\lambda S_\omega(\lambda v) \right|_{\lambda=1} = \|v\|^2_{\dot{H}^1_c} + \omega \|v\|^2_{L^2} - \|v\|^{\alpha+2}_{L^{\alpha+2}} \] is the Nehari functional and \begin{align} d(\emph{rad},\omega):= \inf \left\{ S_\omega(v) \ : \ v \in H^1_{\text{rad}} \backslash \{0\}, \ K_\omega (v) =0 \right\}. \label{minimizing problem} \end{align} \end{proposition} We refer the reader to Section $\ref{section existence ground state}$ for the proof of the above result. \begin{remark} Recently, Fukaya-Ohta in \cite{FukayaOhta} studied the instability of standing waves for the nonlinear Schr\"odinger equation with an attractive inverse power potential, namely \[ i\partial_t u + \Delta u + \gamma |x|^{-\alpha} u = -|u|^{p-1} u, \] where $\gamma>0, 0<\alpha<\min\{2, d\}$ and $\frac{4}{d}<p-1<\frac{4}{d-2}$ if $d\geq 3$ and $\frac{4}{d}<p-1<\infty$ if $d=1$ or $d=2$. The potential $V(x) = \gamma |x|^{-\alpha}$ belongs to $L^r(\mathbb R^d) + L^\infty(\mathbb R^d)$ for some $r> \min \{1,d/2\}$. This special property allows them to use the weak continuity of the potential energy (see e.g. \cite[Theorem 11.4]{LiebLoss}) to prove the existence of non-radial ground states. In our case, the inverse-square potential $V(x)=c |x|^{-2}$ does not belong to $L^{\frac{d}{2}}(\mathbb R^d) + L^\infty(\mathbb R^d)$, so the weak continuity of potential energy is not applicable to our potential. At the moment, we do not know how to show the existence of non-radial ground states for $(\ref{elliptic equation})$. We hope to consider this problem in a future work. \end{remark} Let us now recall the definition of the strong instability. \begin{definition}[Strong instability] We say that the standing wave $e^{i\omega t} \phi_\omega$ is strongly unstable if for any $\epsilon>0$, there exists $u_0 \in H^1$ such that $\|u_0 - \phi_\omega\|_{H^1} <\epsilon$ and the solution $u(t)$ of $(\ref{inverse square NLS})$ with initial data $u_0$ blows up in finite time. 
\end{definition} Our main result of this paper is the following: \begin{theorem} \label{theorem instability} Let $d\geq 3$, $c\ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d}< \alpha<\frac{4}{d-2}$, $\omega>0$ and $\phi_\omega \in \mathcal G_{\emph{rad},\omega}$. Then the standing wave solution $e^{i\omega t} \phi_\omega$ of $(\ref{inverse square NLS})$ is strongly unstable. \end{theorem} To our knowledge, the usual strategy to show the strong instability of standing waves is to use the characterization of ground states combined with the virial identity. In the presence of the inverse-square potential, the existence of ground states is well known; however, their regularity and decay are not yet known. Therefore, it is not known whether the ground states $\phi_\omega$ belong to the weighted space $\Sigma: = H^1 \cap L^2(|x|^2 dx)$, which is needed in order to apply the virial identity. This is the reason why we only consider the instability of radial ground state standing waves in this paper. If one can show that $\phi_\omega \in \Sigma$, then one can study the instability of non-radial ground state standing waves. The proof of Theorem $\ref{theorem instability}$ is based on the characterization of the radial ground states and the localized virial estimates. Thanks to the radial symmetry of the ground state, we are able to use the localized virial estimates derived by the second author in \cite{Dinh-inverse} to show the finite time blow-up. We refer the reader to Section $\ref{section instability}$ for more details. The rest of the paper is organized as follows. In Section $\ref{section existence ground state}$, we give the proof of the existence of radial ground states for $(\ref{elliptic equation})$ stated in Proposition $\ref{proposition existence radial ground states}$. The proof of our main result, Theorem $\ref{theorem instability}$, will be given in Section $\ref{section instability}$.
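Let us also record the formal computation behind this discussion: for a solution $u$ of $(\ref{inverse square NLS})$ with $u(t) \in \Sigma$, the virial identity reads \[ \frac{d^2}{dt^2} \int |x|^2 |u(t,x)|^2 dx = 8 \left( \|u(t)\|^2_{\dot{H}^1_c} - \frac{d\alpha}{2(\alpha+2)} \|u(t)\|^{\alpha+2}_{L^{\alpha+2}} \right), \] so that if the right hand side stays negative and bounded away from zero along the flow, the solution must blow up in finite time. Since the membership $u(t) \in \Sigma$ is not known in our setting, the proof given below uses localized versions of this identity instead.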
\section{Existence of radial ground states} \label{section existence ground state} \setcounter{equation}{0} In this section, we give the proof of the existence of radial ground states for $(\ref{elliptic equation})$ stated in Proposition $\ref{proposition existence radial ground states}$. The proof of Proposition $\ref{proposition existence radial ground states}$ follows from several lemmas. Let us denote the $\omega$-Hardy functional by \[ H_\omega(v):= \|v\|^2_{\dot{H}^1_c} + \omega \|v\|^2_{L^2}. \] Using the sharp Hardy inequality \[ \lambda(d) \||x|^{-1} v\|^2_{L^2} \leq \|\nabla v\|^2_{L^2}, \] we see that for $c<\lambda(d)$ and $\omega>0$ fixed, \begin{align} H_\omega(v) \sim \|v\|^2_{H^1}. \label{equivalent norms} \end{align} We note that the action functional can be rewritten as \begin{align} S_\omega(v):= \frac{1}{2} K_\omega(v) + \frac{\alpha}{2(\alpha+2)} \|v\|^{\alpha+2}_{L^{\alpha+2}} = \frac{1}{\alpha+2} K_\omega(v) + \frac{\alpha}{2(\alpha+2)} H_\omega(v). \label{expressions S_omega} \end{align} Let us start with the following result. \begin{lemma} \label{lemma positivity d_omega} $d(\emph{rad},\omega)>0$. \end{lemma} \begin{proof} Let $v \in H^1_{\text{rad}} \backslash \{0\}$ be such that $K_\omega(v) =0$. By the Sobolev embedding, $(\ref{equivalent norms})$ and the fact $H_\omega(v) = \|v\|^{\alpha+2}_{L^{\alpha+2}}$, we have \[ \|v\|^2_{L^{\alpha+2}} \leq C_1 \|v\|^2_{H^1} \leq C_2 H_\omega(v) = C_2 \|v\|^{\alpha+2}_{L^{\alpha+2}}, \] for some $C_1, C_2>0$. This implies that \[ \frac{\alpha}{2(\alpha+2)} \|v\|^{\alpha+2}_{L^{\alpha+2}} \geq \frac{\alpha}{2(\alpha+2)} \left( \frac{1}{C_2}\right)^{\frac{\alpha+2}{\alpha}}. \] Since $K_\omega(v)=0$, the first expression in $(\ref{expressions S_omega})$ gives $S_\omega(v) = \frac{\alpha}{2(\alpha+2)} \|v\|^{\alpha+2}_{L^{\alpha+2}}$. Taking the infimum over $v \in H^1_{\text{rad}} \backslash \{0\}$ with $K_\omega(v)=0$, we obtain $d(\text{rad},\omega)>0$.
\end{proof} We now denote the set of all minimizers of $(\ref{minimizing problem})$ by \[ \mathcal M_{\text{rad},\omega}:= \left\{ v \in H^1_{\text{rad}} \backslash \{0\} \ : \ K_\omega(v) =0, \ S_\omega(v) = d(\text{rad},\omega) \right\}. \] \begin{lemma} \label{lemma non empty M_omega} The set $\mathcal M_{\emph{rad},\omega}$ is non-empty. \end{lemma} \begin{proof} Let $(v_n)_{n\geq 1}$ be a minimizing sequence of $d(\text{rad},\omega)$, i.e. $v_n \in H^1_{\text{rad}} \backslash \{0\}$, $K_\omega(v_n) =0$ and $S_\omega(v_n) \rightarrow d(\text{rad},\omega)$ as $n\rightarrow \infty$. Since $K_\omega(v_n) = 0$, we have $H_\omega(v_n) = \|v_n\|^{\alpha+2}_{L^{\alpha+2}}$ for any $n\geq 1$. Using $(\ref{expressions S_omega})$, the fact $S_\omega(v_n) \rightarrow d(\text{rad},\omega)$ as $n\rightarrow \infty$ implies that \[ \frac{\alpha}{2(\alpha+2)} H_\omega(v_n) = \frac{\alpha}{2(\alpha+2)} \|v_n\|^{\alpha+2}_{L^{\alpha+2}} \rightarrow d(\text{rad},\omega), \] as $n\rightarrow \infty$. We infer that there exists $C>0$ such that \[ H_\omega(v_n) \leq \frac{2(\alpha+2)}{\alpha} d(\text{rad},\omega) + C, \] for all $n\geq 1$. It follows from $(\ref{equivalent norms})$ that $(v_n)_{n\geq 1}$ is a bounded sequence in $H^1_{\text{rad}}$. Using the compact embedding $H^1_{\text{rad}} \hookrightarrow L^{\alpha+2}$, there exists $v_0 \in H^1_{\text{rad}}$ such that \[ v_n \rightharpoonup v_0 \text{ weakly in } H^1 \text{ and strongly in } L^{\alpha+2} \text{ as } n \rightarrow \infty. \] Writing $v_n= v_0 + r_n$, where $r_n \rightharpoonup 0$ weakly in $H^1$ as $n\rightarrow \infty$, we have \[ K_\omega(v_n) = H_\omega(v_n) - \|v_n\|^{\alpha+2}_{L^{\alpha+2}} = H_\omega(v_0) + H_\omega(r_n) - \|v_n\|^{\alpha+2}_{L^{\alpha+2}} + o_n(1), \] as $n\rightarrow \infty$. Here $o_n(1)$ means that $o_n(1) \rightarrow 0$ as $n\rightarrow \infty$.
Since $K_\omega(v_n) =0$ and $H_\omega(r_n) \geq 0$ for all $n\geq 1$, we get \[ H_\omega(v_0) \leq \|v_n\|^{\alpha+2}_{L^{\alpha+2}} + o_n(1), \] as $n\rightarrow \infty$. Taking the limit $n\rightarrow \infty$, we obtain \[ H_\omega(v_0) \leq \frac{2(\alpha+2)}{\alpha} d(\text{rad},\omega). \] Since $v_n \rightarrow v_0$ strongly in $L^{\alpha+2}$, it follows that \[ \|v_0\|^{\alpha+2}_{L^{\alpha+2}} =\lim_{n\rightarrow \infty} \|v_n\|^{\alpha+2}_{L^{\alpha+2}} = \frac{2(\alpha+2)}{\alpha} d(\text{rad},\omega). \] We thus get $K_\omega(v_0) \leq 0$. Now suppose that $K_\omega(v_0) <0$. We have for $\mu>0$, \[ K_\omega(\mu v_0) = \mu^2 H_\omega(v_0) - \mu^{\alpha+2} \|v_0\|^{\alpha+2}_{L^{\alpha+2}}. \] It is easy to see that the equation $K_\omega(\mu v_0)=0$ admits a unique non-zero solution \[ \mu_0 = \left(\frac{H_\omega(v_0)}{\|v_0\|^{\alpha+2}_{L^{\alpha+2}}} \right)^{\frac{1}{\alpha}}. \] Since $K_\omega(v_0)<0$, we have $\mu_0 \in (0,1)$. By the definition of $d(\text{rad},\omega)$ and $(\ref{expressions S_omega})$, we get \begin{align*} d(\text{rad},\omega) \leq S_\omega(\mu_0 v_0) = \frac{\alpha}{2(\alpha+2)} H_\omega(\mu_0 v_0) &= \mu_0^2 \frac{\alpha}{2(\alpha+2)} H_\omega(v_0) \\ &< \frac{\alpha}{2(\alpha+2)} H_\omega(v_0) \leq d(\text{rad},\omega), \end{align*} which is a contradiction. Therefore, $K_\omega(v_0)=0$. Moreover, \[ S_\omega(v_0) = \frac{\alpha}{2(\alpha+2)} \|v_0\|^{\alpha+2}_{L^{\alpha+2}} = d(\text{rad},\omega). \] This shows that $v_0$ is a minimizer of $d(\text{rad},\omega)$. The proof is complete. \end{proof} \begin{lemma} \label{lemma M_omega subset G_omega} $\mathcal M_{\emph{rad},\omega} \subset \mathcal G_{\emph{rad},\omega}$. \end{lemma} \begin{proof} Let $\phi \in \mathcal M_{\text{rad},\omega}$. Since $K_\omega(\phi) =0$, we have $H_\omega(\phi)=\|\phi\|^{\alpha+2}_{L^{\alpha+2}}$. 
Since $\phi$ is a minimizer of $d(\text{rad},\omega)$, there exists a Lagrange multiplier $\mu \in \mathbb R$ such that $S'_\omega(\phi) = \mu K'_\omega(\phi)$. We thus have \[ 0 = K_\omega(\phi) = \scal{S'_\omega(\phi), \phi} = \mu \scal{K'_\omega(\phi), \phi}. \] It is easy to see that \[ K'_\omega(\phi) = -2\Delta \phi + 2\omega \phi -2 c |x|^{-2} \phi - (\alpha+2)|\phi|^\alpha \phi. \] Therefore, \[ \scal{K'_\omega(\phi),\phi}= 2 H_\omega(\phi) - (\alpha+2) \|\phi\|^{\alpha+2}_{L^{\alpha+2}} = -\alpha \|\phi\|^{\alpha+2}_{L^{\alpha+2}}<0. \] This implies that $\mu=0$, hence $S'_\omega(\phi) =0$. In particular, we have $\phi \in \mathcal A_{\text{rad},\omega}$. To prove $\phi \in \mathcal G_{\text{rad},\omega}$, it remains to show that $S_\omega(\phi) \leq S_\omega(v)$ for all $v \in \mathcal A_{\text{rad},\omega}$. To see this, let $v \in \mathcal A_{\text{rad},\omega}$. We have \[ K_\omega(v) = \scal{S'_\omega(v),v} =0. \] By definition of $\mathcal M_{\text{rad},\omega}$, we have $S_\omega(\phi) \leq S_\omega(v)$. The proof is complete. \end{proof} \begin{lemma} \label{lemma G_omega subset M_omega} $\mathcal G_{\emph{rad},\omega} \subset \mathcal M_{\emph{rad},\omega}$. \end{lemma} \begin{proof} Let $\phi \in \mathcal G_{\text{rad},\omega}$. Since $\mathcal M_{\text{rad},\omega}$ is not empty, we take $\psi \in \mathcal M_{\text{rad},\omega}$. By Lemma $\ref{lemma M_omega subset G_omega}$, $\psi \in \mathcal G_{\text{rad},\omega}$. In particular, $S_\omega(\phi) = S_\omega(\psi)$. Since $\psi \in \mathcal M_{\text{rad},\omega}$, we get \[ S_\omega(\phi) = S_\omega(\psi) = d(\text{rad},\omega). \] It remains to show that $K_\omega(\phi) =0$. Since $\phi \in \mathcal A_{\text{rad},\omega}$, we have $S'_\omega(\phi) =0$, hence $K_\omega(\phi) = \scal{S'_\omega(\phi),\phi} =0$. Therefore, $\phi \in \mathcal M_{\text{rad},\omega}$ and the proof is complete. 
\end{proof} \noindent {\it Proof of Proposition $\ref{proposition existence radial ground states}$.} Proposition $\ref{proposition existence radial ground states}$ follows immediately from Lemmas $\ref{lemma non empty M_omega}$, $\ref{lemma M_omega subset G_omega}$ and $\ref{lemma G_omega subset M_omega}$. \defendproof \section{Instability of radial standing waves} \label{section instability} \setcounter{equation}{0} In this section, we give the proof of the instability of radial ground state standing waves given in Theorem $\ref{theorem instability}$. Let us start by recalling the local well-posedness in the energy space $H^1$ for $(\ref{inverse square NLS})$ proved by Okazawa-Suzuki-Yokota \cite{OkazawaSuzukiYokota}. \begin{theorem}[Local well-posedness \cite{OkazawaSuzukiYokota}] \label{theorem local theory} Let $d\geq 3$, $c\ne 0$ be such that $c<\lambda(d)$ and $\frac{4}{d} <\alpha<\frac{4}{d-2}$. Then for any $u_0 \in H^1$, there exists $T \in (0, +\infty]$ and a maximal solution $u \in C([0,T), H^1)$ of $(\ref{inverse square NLS})$. The maximal time of existence satisfies either $T=+\infty$ or $T<+\infty$ and \[ \lim_{t\uparrow T} \|\nabla u(t)\|_{L^2} =\infty. \] Moreover, the local solution enjoys the conservation of mass and energy \begin{align*} M(u(t)) &= \int |u(t,x)|^2 dx = M(u_0), \\ E(u(t)) &= \frac{1}{2} \int |\nabla u(t,x)|^2 dx - \frac{c}{2} \int |x|^{-2} |u(t,x)|^2 dx - \frac{1}{\alpha+2} \int |u(t,x)|^{\alpha+2} dx \\ &=E(u_0), \end{align*} for any $t\in [0,T)$. \end{theorem} We refer the reader to \cite[Proposition 5.1]{OkazawaSuzukiYokota} for the proof of the above result. Note that the existence of local solutions is based on a refinement of the well-known energy method proposed by Cazenave \cite[Chapter 3]{Cazenave}. The uniqueness of local solutions follows from Strichartz estimates proved by Burq-Planchon-Stalker-Tahvildar-Zadeh \cite{BurPlaStaZad}. We next recall the so-called Pohozaev identities for $(\ref{elliptic equation})$.
We give the proof for the reader's convenience. \begin{lemma} \label{lemma pohozaev identities} Let $\omega>0$. If $\phi_\omega \in H^1$ is a solution to $(\ref{elliptic equation})$, then \[ \|\phi_\omega\|^2_{\dot{H}^1_c} + \omega \|\phi_\omega\|^2_{L^2} - \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}} =0, \] and \[ \left(1-\frac{d}{2}\right) \|\phi_\omega\|^2_{\dot{H}^1_c} -\frac{d\omega}{2} \|\phi_\omega\|^2_{L^2} + \frac{d}{\alpha+2} \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}} =0. \] \end{lemma} \begin{proof} Multiplying both sides of $(\ref{elliptic equation})$ by $\phi_\omega$ and integrating over $\mathbb R^d$, we easily obtain the first identity. Let us prove the second identity. Due to the singularity of the inverse-square potential at zero, we multiply both sides of $(\ref{elliptic equation})$ by $x \cdot \nabla \phi_\omega$ and integrate on $P(r,R):= \{ x \in \mathbb R^d \ : \ r \leq |x| \leq R\}$ for some $R>r>0$. We have \begin{align*} -\int_{P(r,R)} \Delta \phi_\omega (x\cdot \nabla \phi_\omega) dx = \int_{P(r,R)} \nabla \phi_\omega \cdot \nabla(x\cdot \nabla \phi_\omega) dx &- \int_{\partial B_r} |\nabla \phi_\omega|^2 (x\cdot \bm n_1) dS \\ &- \int_{\partial B_R} |\nabla \phi_\omega|^2 (x\cdot \bm n_2) dS, \end{align*} where $\bm n_1= -\frac{x}{r}$ is the unit inward normal at $x \in \partial B_r$ and $\bm n_2 = \frac{x}{R}$ is the unit outward normal at $x \in \partial B_R$. We also have \begin{align*} \int_{P(r,R)} \nabla \phi_\omega \cdot \nabla(x\cdot \nabla \phi_\omega) dx = \left(1-\frac{d}{2}\right) \int_{P(r,R)} |\nabla \phi_\omega|^2 dx &+ \frac{1}{2} \int_{\partial B_r} |\nabla \phi_\omega|^2 (x\cdot \bm n_1)dS \\ &+ \frac{1}{2} \int_{\partial B_R} |\nabla \phi_\omega|^2 (x\cdot \bm n_2)dS.
\end{align*} Thus, \begin{align*} -\int_{P(r,R)} \Delta \phi_\omega (x\cdot \nabla \phi_\omega) dx = \left(1-\frac{d}{2}\right) \int_{P(r,R)} |\nabla \phi_\omega|^2 dx &- \frac{1}{2} \int_{\partial B_r} |\nabla \phi_\omega|^2 (x\cdot \bm n_1) dS \\ &-\frac{1}{2} \int_{\partial B_R} |\nabla \phi_\omega|^2 (x\cdot \bm n_2) dS. \end{align*} Similarly, \begin{align*} \omega\int_{P(r,R)} \phi_\omega(x\cdot \nabla \phi_\omega) dx = -\frac{d\omega}{2} \int_{P(r,R)} |\phi_\omega|^2 dx &+ \frac{\omega}{2} \int_{\partial B_r}|\phi_\omega|^2 (x\cdot \bm n_1) dS \\ &+\frac{\omega}{2} \int_{\partial B_R}|\phi_\omega|^2 (x\cdot \bm n_2) dS, \end{align*} and \begin{align*} -c \int_{P(r,R)} |x|^{-2} \phi_\omega(x\cdot \nabla \phi_\omega) dx &= -c\left(1-\frac{d}{2}\right) \int_{P(r,R)} |x|^{-2} |\phi_\omega|^2 dx \\ &\mathrel{\phantom{=}}-\frac{c}{2} \int_{\partial B_r} |x|^{-2} |\phi_\omega|^2 (x\cdot \bm n_1) dS \\ &\mathrel{\phantom{=}}-\frac{c}{2} \int_{\partial B_R} |x|^{-2} |\phi_\omega|^2 (x\cdot \bm n_2) dS, \end{align*} and finally \begin{align*} -\int_{P(r,R)} |\phi_\omega|^\alpha \phi_\omega (x \cdot \nabla \phi_\omega) dx &= \frac{d}{\alpha+2} \int_{P(r,R)} |\phi_\omega|^{\alpha+2} dx \\ &\mathrel{\phantom{=}}-\frac{1}{\alpha+2} \int_{\partial B_r} |\phi_\omega|^{\alpha+2} (x\cdot \bm n_1) dS \\ &\mathrel{\phantom{=}}-\frac{1}{\alpha+2} \int_{\partial B_R} |\phi_\omega|^{\alpha+2} (x\cdot \bm n_2) dS. 
\end{align*} Adding the above identities, we get \begin{multline} \left(1-\frac{d}{2}\right) \left[\int_{P(r,R)} |\nabla \phi_\omega|^2 dx - c \int_{P(r,R)} |x|^{-2} |\phi_\omega|^2 dx\right] -\frac{d\omega}{2} \int_{P(r,R)}|\phi_\omega|^2 dx \\ + \frac{d}{\alpha+2} \int_{P(r,R)} |\phi_\omega|^{\alpha+2} dx = I_1(r) + I_2(R), \label{pohozaev proof} \end{multline} where \begin{align*} I_1(r) &= \frac{1}{2}\int_{\partial B_r} |\nabla \phi_\omega|^2 (x\cdot \bm n_1) dS - \frac{\omega}{2} \int_{\partial B_r} |\phi_\omega|^2 (x\cdot\bm n_1) dS \\ &\mathrel{\phantom{=}} +\frac{c}{2}\int_{\partial B_r} |x|^{-2} |\phi_\omega|^2 (x\cdot \bm n_1) dS +\frac{1}{\alpha+2} \int_{\partial B_r} |\phi_\omega|^{\alpha+2} (x\cdot \bm n_1) dS \\ &=-r\left( \int_{\partial B_r} \frac{1}{2} |\nabla \phi_\omega|^2 -\frac{\omega}{2}|\phi_\omega|^2 + \frac{c}{2} |x|^{-2} |\phi_\omega|^2 +\frac{1}{\alpha+2} |\phi_\omega|^{\alpha+2} dS \right), \end{align*} and \begin{align*} I_2(R) &= \frac{1}{2}\int_{\partial B_R} |\nabla \phi_\omega|^2 (x\cdot \bm n_2) dS - \frac{\omega}{2} \int_{\partial B_R} |\phi_\omega|^2 (x\cdot\bm n_2) dS \\ &\mathrel{\phantom{=}} +\frac{c}{2}\int_{\partial B_R} |x|^{-2} |\phi_\omega|^2 (x\cdot \bm n_2) dS +\frac{1}{\alpha+2} \int_{\partial B_R} |\phi_\omega|^{\alpha+2} (x\cdot \bm n_2) dS \\ &=R\left( \int_{\partial B_R} \frac{1}{2} |\nabla \phi_\omega|^2 -\frac{\omega}{2}|\phi_\omega|^2 + \frac{c}{2} |x|^{-2} |\phi_\omega|^2 +\frac{1}{\alpha+2} |\phi_\omega|^{\alpha+2} dS \right). \end{align*} Denote \[ A(\phi_\omega) = \frac{1}{2} |\nabla \phi_\omega|^2 -\frac{\omega}{2}|\phi_\omega|^2 + \frac{c}{2} |x|^{-2} |\phi_\omega|^2 +\frac{1}{\alpha+2} |\phi_\omega|^{\alpha+2}. \] We have \begin{align} \int_{B} A(\phi_\omega) dx = \int_0^1 \int_{\partial B_r} A(\phi_\omega) dS dr <\infty, \label{finite term} \end{align} where $B$ is the unit ball in $\mathbb R^d$. 
Hence, there exists a sequence $r_n \rightarrow 0$ such that \[ r_n \int_{\partial B_{r_n}} A(\phi_\omega) dS \rightarrow 0 \quad \text{as } n\rightarrow \infty. \] Indeed, if \[ \liminf_{r\rightarrow 0} r \int_{\partial B_r} A(\phi_\omega) dS >0, \] then \[ r \mapsto \int_{\partial B_r} A(\phi_\omega) dS \] would not be in $L^1(0,1)$, which would contradict $(\ref{finite term})$. On the other hand, since \[ \int_{\mathbb R^d} A(\phi_\omega) dx = \int_0^{+\infty} \int_{\partial B_R} A(\phi_\omega) dS dR <\infty, \] there exists a sequence $R_n \rightarrow +\infty$ such that \[ R_n\int_{\partial B_{R_n}} A(\phi_\omega) dS \rightarrow 0 \quad \text{as } n\rightarrow \infty. \] This implies that $I_1(r_n) \rightarrow 0$ and $I_2(R_n) \rightarrow 0$ as $n\rightarrow \infty$. Now substituting $r$ by $r_n$ and $R$ by $R_n$ in $(\ref{pohozaev proof})$ and taking $n\rightarrow \infty$, we obtain the second identity. The proof is complete. \end{proof} Throughout this section, we denote the functional \[ Q(v):= \|v\|^2_{\dot{H}^1_c} -\frac{d\alpha}{2(\alpha+2)} \|v\|^{\alpha+2}_{L^{\alpha+2}}. \] Note that if we take \begin{align} v^\lambda(x):= \lambda^{\frac{d}{2}} v(\lambda x), \label{scaling} \end{align} then we have \begin{align} \begin{aligned} \|v^\lambda\|_{L^2} &= \|v\|_{L^2}, & \|\nabla v^\lambda\|_{L^2} &= \lambda \|\nabla v\|_{L^2}, \\ \||x|^{-1} v^\lambda \|_{L^2} &= \lambda \||x|^{-1} v\|_{L^2}, & \|v^\lambda\|_{L^{\alpha+2}} &= \lambda^{\frac{d\alpha}{2(\alpha+2)}} \|v\|_{L^{\alpha+2}}. \end{aligned} \label{scaling examples} \end{align} Thus, \[ S_\omega(v^\lambda) = \frac{\lambda^2}{2} \|v\|^2_{\dot{H}^1_c} + \frac{\omega}{2} \|v\|^2_{L^2} - \frac{\lambda^{\frac{d\alpha}{2}}}{\alpha+2} \|v\|^{\alpha+2}_{L^{\alpha+2}}, \] and \[ Q(v) = \left. \partial_\lambda S_\omega(v^\lambda)\right|_{\lambda=1}. \] \begin{lemma} \label{lemma S_ome phi_ome} Let $d\geq 3, c \ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d}<\alpha < \frac{4}{d-2}$ and $\omega>0$.
Let $\phi_\omega \in \mathcal{G}_{\text{rad},\omega}$. Then \[ S_\omega(\phi_\omega) = \inf \{S_\omega(v) \ : \ v \in H^1_{\text{rad}} \backslash \{0\}, Q(v)=0 \}. \] \end{lemma} \begin{proof} Let $d_n:= \inf \{S_\omega(v) \ : \ v \in H^1_{\text{rad}} \backslash \{0\}, Q(v)=0 \}$. Thanks to the Pohozaev identities, it is easy to check that $Q(\phi_\omega)=0$. By the definition of $d_n$, \begin{align} S_\omega(\phi_\omega) \geq d_n. \label{inequality 1} \end{align} Now let $v \in H^1_{\text{rad}} \backslash \{0\}$ be such that $Q(v)=0$. If $K_\omega(v)=0$, then by Proposition $\ref{proposition existence radial ground states}$, $S_\omega(v) \geq S_\omega(\phi_\omega)$. Assume that $K_\omega(v) \ne 0$. Let $v^\lambda$ be as in $(\ref{scaling})$. We have \[ K_\omega(v^\lambda)=\lambda^2 \|v\|^2_{\dot{H}^1_c} + \omega \|v\|^2_{L^2} - \lambda^{\frac{d\alpha}{2}} \|v\|^{\alpha+2}_{L^{\alpha+2}}. \] We see that $\lim_{\lambda\rightarrow 0} K_\omega(v^\lambda)= \omega \|v\|^2_{L^2}>0$. Since $\frac{d\alpha}{2}>2$, we have $\lim_{\lambda \rightarrow +\infty} K_\omega(v^\lambda) = -\infty$. Thus, there exists $\lambda_0 >0$ such that $K_\omega(v^{\lambda_0})=0$. By Proposition $\ref{proposition existence radial ground states}$, we get $S_\omega(v^{\lambda_0}) \geq S_\omega(\phi_\omega)$. On the other hand, a direct computation shows that \begin{align*} \partial_\lambda S_\omega(v^\lambda)&= \lambda \|v\|^2_{\dot{H}^1_c} -\frac{d\alpha}{2(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}} \\ &= \lambda \left( \|v\|^2_{\dot{H}^1_c}- \frac{d\alpha}{2(\alpha+2)} \lambda^{\frac{d\alpha}{2}-2} \|v\|^{\alpha+2}_{L^{\alpha+2}} \right). \end{align*} The equation $\partial_\lambda S_\omega(v^\lambda)= 0$ admits a unique non-zero solution \[ \lambda_1 = \left( \frac{\|v\|^2_{\dot{H}^1_c}}{\frac{d\alpha}{2(\alpha+2)} \|v\|^{\alpha+2}_{L^{\alpha+2}} } \right)^{\frac{2}{d\alpha-4}}, \] which is equal to $1$ since $Q(v)=0$. 
It follows that $\partial_\lambda S_\omega(v^\lambda)>0$ if $\lambda \in (0,1)$ and $\partial_\lambda S_\omega(v^\lambda)<0$ if $\lambda \in (1,\infty)$. In particular, we get $S_\omega(v^\lambda) < S_\omega(v)$ for any $\lambda>0$ with $\lambda \ne 1$. Since $\lambda_0>0$, it follows that $S_\omega(v^{\lambda_0}) \leq S_\omega(v)$. This implies that $S_\omega(v) \geq S_\omega(\phi_\omega)$ for any $v \in H^1_{\text{rad}} \backslash \{0\}$ with $Q(v)=0$. Taking the infimum, we obtain \begin{align} S_\omega(\phi_\omega) \leq d_n. \label{inequality 2} \end{align} Combining $(\ref{inequality 1})$ and $(\ref{inequality 2})$, we obtain the result. \end{proof} Let $\phi_\omega \in \mathcal{G}_{\text{rad},\omega}$. We denote \[ \mathcal{B}_{\text{rad},\omega} := \{ v \in H^1_{\text{rad}} \backslash \{0\} \ : \ S_\omega(v) < S_\omega(\phi_\omega), Q(v) <0 \}. \] \begin{lemma} Let $d\geq 3, c \ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d}<\alpha <\frac{4}{d-2}$ and $\omega>0$. Let $\phi_\omega \in \mathcal{G}_{\text{rad},\omega}$. Then $\mathcal{B}_{\text{rad},\omega}$ is invariant under the flow of $(\ref{inverse square NLS})$, that is, if $u_0 \in \mathcal{B}_{\text{rad},\omega}$, then the corresponding solution $u(t)$ to $(\ref{inverse square NLS})$ with $u(0) = u_0$ satisfies $u(t) \in \mathcal{B}_{\text{rad},\omega}$ for any $t\in [0,T)$. \end{lemma} \begin{proof} Let $u_0 \in \mathcal{B}_{\text{rad},\omega}$. By the conservation of mass and energy, \begin{align} S_\omega(u(t)) = S_\omega(u_0) < S_\omega(\phi_\omega), \quad \forall t\in [0,T). \label{invariant property} \end{align} It remains to show that $Q(u(t))<0$ for any $t\in [0,T)$. Suppose that there exists $t_0 \in [0,T)$ such that $Q(u(t_0)) \geq 0$. By the continuity of $t\mapsto Q(u(t))$, there exists $t_1 \in (0, t_0]$ such that $Q(u(t_1))=0$. By Lemma $\ref{lemma S_ome phi_ome}$, $S_\omega(u(t_1)) \geq S_\omega(\phi_\omega)$, which contradicts $(\ref{invariant property})$. 
\end{proof} \begin{lemma} \label{lemma key estimate} Let $d\geq 3, c\ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d}<\alpha<\frac{4}{d-2}$ and $\omega>0$. Let $\phi_\omega \in \mathcal{G}_{\text{rad},\omega}$. If $v \in \mathcal{B}_{\text{rad},\omega}$, then \[ Q(v) \leq 2 ( S_\omega(v) - S_\omega(\phi_\omega)). \] \end{lemma} \begin{proof} Let $v^\lambda$ be as in $(\ref{scaling})$. Set $g(\lambda):= S_\omega(v^\lambda)$. We have \begin{align*} g(\lambda) &= \frac{\lambda^2}{2} \|v\|^2_{\dot{H}^1_c} + \frac{\omega}{2} \|v\|^2_{L^2} - \frac{\lambda^{\frac{d\alpha}{2}}}{\alpha+2} \|v\|^{\alpha+2}_{L^{\alpha+2}}, \\ g'(\lambda)&= \lambda \|v\|^2_{\dot{H}^1_c} -\frac{d\alpha}{2(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}} = \frac{Q(v^\lambda)}{\lambda}, \end{align*} and \begin{align*} (\lambda g'(\lambda))' &= 2 \lambda \|v\|^2_{\dot{H}^1_c} - \frac{d^2\alpha^2}{4(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}} \\ &= 2 \left( \lambda \|v\|^2_{\dot{H}^1_c} - \frac{d\alpha}{2(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}} \right) - \frac{d\alpha (d\alpha-4)}{4(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}} \\ &= 2 g'(\lambda) - \frac{d\alpha (d\alpha-4)}{4(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|v\|^{\alpha+2}_{L^{\alpha+2}}. \end{align*} Since $d\alpha>4$, we see that \begin{align} (\lambda g'(\lambda))' \leq 2 g'(\lambda), \quad \forall \lambda>0. \label{integration} \end{align} Since $Q(v) <0$, the equation $\partial_\lambda S_\omega(v^\lambda) =0$ admits a unique non-zero solution $\lambda_0 \in (0,1)$. Integrating $(\ref{integration})$ from $\lambda_0$ to $1$ and noting that $Q(v^{\lambda_0}) = \lambda_0 \left.\left( \partial_\lambda S_\omega(v^\lambda) \right)\right|_{\lambda =\lambda_0} =0$, we get \[ Q(v) - Q(v^{\lambda_0}) \leq 2 (S_\omega(v) - S_\omega(v^{\lambda_0})) \leq 2 (S_\omega(v) - S_\omega(\phi_\omega)). 
\] Here the last inequality follows from Lemma $\ref{lemma S_ome phi_ome}$: since $Q(v^{\lambda_0})=0$, we have $S_\omega(v^{\lambda_0}) \geq S_\omega(\phi_\omega)$. The proof is complete. \end{proof} The key ingredient in showing the strong instability of radial standing waves is to use localized virial estimates to establish finite time blowup. Let us recall the localized virial estimates related to $(\ref{inverse square NLS})$. Let $\theta: [0,\infty) \rightarrow [0,\infty)$ be such that \[ \theta(r) = \left\{ \begin{array}{cl} r^2 &\text{if } 0\leq r\leq 1, \\ \text{const.} &\text{if } r \geq 2, \end{array} \right. \quad \text{and} \quad \theta''(r) \leq 2 \text{ for } r\geq 0. \] The precise constant here is not important. For $R>1$, we define the radial function \begin{align} \varphi_R(x) = \varphi_R(r) := R^2 \theta(r/R), \quad r=|x|. \label{define varphi_R} \end{align} We define the virial potential by \begin{align} V_{\varphi_R}(t) := \int \varphi_R(x) |u(t,x)|^2 dx. \label{define virial potential} \end{align} \begin{lemma}[Radial virial estimate \cite{Dinh-inverse}] \label{lemma radial virial estimate} Let $d\geq 3$, $c \ne 0$ be such that $c<\lambda(d)$, $\frac{4}{d} <\alpha<\frac{4}{d-2}$, $R>1$ and $\varphi_R$ be as in $(\ref{define varphi_R})$. Let $u: I\times \mathbb R^d \rightarrow \mathbb C$ be a radial solution to $(\ref{inverse square NLS})$. Then for any $t \in I$, \begin{align} \frac{d^2}{dt^2} V_{\varphi_R}(t) &\leq 8 \|u(t)\|^2_{\dot{H}^1_c} - \frac{4d\alpha}{\alpha+2} \|u(t)\|^{\alpha+2}_{L^{\alpha+2}} + O \left( R^{-2} + R^{-\frac{(d-1)\alpha}{2}} \|u(t)\|^{\frac{\alpha}{2}}_{\dot{H}^1_c} \right) \label{radial virial estimate 1} \\ &= 8 Q(u(t)) + O \left( R^{-2} + R^{-\frac{(d-1)\alpha}{2}} \|u(t)\|^{\frac{\alpha}{2}}_{\dot{H}^1_c} \right) \label{radial virial estimate 2} \\ &=4d\alpha E(u(t)) - 2(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} + O \left( R^{-2} + R^{-\frac{(d-1)\alpha}{2}} \|u(t)\|^{\frac{\alpha}{2}}_{\dot{H}^1_c} \right). 
\label{radial virial estimate 3} \end{align} The implicit constant depends only on $\|u_0\|_{L^2}, d$ and $\alpha$. Here $A=O(B)$ means that there exists a constant $C>0$ such that $|A| \leq CB$. \end{lemma} We refer the reader to \cite[Lemma 5.4]{Dinh-inverse} for the proof of the above result. We are now able to prove our main result. \noindent {\it Proof of Theorem $\ref{theorem instability}$.} Let $\epsilon>0$, $\omega>0$ and $\phi_\omega \in \mathcal G_{\text{rad},\omega}$. Since $\phi^\lambda_\omega \rightarrow \phi_\omega$ in $H^1$ as $\lambda \rightarrow 1$, there exists $\lambda_0>1$ such that $\|\phi_\omega - \phi^{\lambda_0}_\omega \|_{H^1} <\epsilon$. We claim that $\phi^{\lambda_0}_\omega \in \mathcal{B}_{\text{rad},\omega}$. To see this, we first notice that $Q(\phi_\omega)=0$. This fact follows from the Pohozaev identities related to $(\ref{elliptic equation})$ given in Lemma $\ref{lemma pohozaev identities}$: \begin{align} \omega \|\phi_\omega\|^2_{L^2} = \frac{4-(d-2)\alpha}{2(\alpha+2)} \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}} = \frac{4-(d-2)\alpha}{d\alpha} \|\phi_\omega\|^2_{\dot{H}^1_c}. \label{pohozaev identities} \end{align} On the other hand, a direct computation shows \begin{align*} S_\omega(\phi^{\lambda}_\omega) &= \frac{\lambda^2}{2} \|\phi_\omega\|^2_{\dot{H}^1_c} + \frac{\omega}{2} \|\phi_\omega\|^2_{L^2} - \frac{\lambda^{\frac{d\alpha}{2}}}{\alpha+2} \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}}, \\ \partial_\lambda S_\omega(\phi^\lambda_\omega) &= \lambda \|\phi_\omega\|^2_{\dot{H}^1_c} - \frac{d\alpha}{2(\alpha+2)} \lambda^{\frac{d\alpha}{2}-1} \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}} = \frac{Q(\phi^\lambda_\omega)}{\lambda}. 
\end{align*} It is easy to see that the equation $\partial_\lambda S_\omega(\phi^\lambda_\omega) =0$ has a unique non-zero solution \[ \left( \frac{\|\phi_\omega\|^2_{\dot{H}^1_c} }{ \frac{d\alpha}{2(\alpha+2)} \|\phi_\omega\|^{\alpha+2}_{L^{\alpha+2}} } \right)^{\frac{2}{d\alpha-4}} =1. \] The last equality follows from the fact that $Q(\phi_\omega) =0$. This implies in particular that \[ \left\{ \begin{array}{c l} \partial_\lambda S_\omega(\phi^\lambda_\omega)>0 &\text{if } \lambda \in (0,1), \\ \partial_\lambda S_\omega(\phi^\lambda_\omega)<0 &\text{if } \lambda \in (1,\infty), \end{array} \right. \] from which we get $S_\omega(\phi^\lambda_\omega) < S_\omega(\phi_\omega)$ for any $\lambda>0, \lambda \ne 1$. Since $Q(\phi^\lambda_\omega) = \lambda \partial_\lambda S_\omega (\phi^\lambda_\omega)$, we also have \[ \left\{ \begin{array}{c l} Q(\phi^\lambda_\omega)>0 &\text{if } \lambda \in (0,1), \\ Q(\phi^\lambda_\omega)<0 &\text{if } \lambda \in (1,\infty). \end{array} \right. \] Applying the above with $\lambda = \lambda_0>1$, we have \[ S_\omega(\phi^{\lambda_0}_\omega) <S_\omega(\phi_\omega), \quad Q(\phi^{\lambda_0}_\omega) <0. \] This shows that $\phi^{\lambda_0}_\omega \in \mathcal{B}_{\text{rad},\omega}$ and the claim follows. By Theorem $\ref{theorem local theory}$, there exists a unique solution $u \in C([0,T),H^1)$ to $(\ref{inverse square NLS})$ with initial data $u(0)=u_0= \phi^{\lambda_0}_\omega$, where $T>0$ is the maximal existence time. Since $u_0= \phi^{\lambda_0}_\omega$ is radial, it is well-known that the corresponding solution is also radial. It remains to show that $u$ blows up in finite time. This is done in several steps. \noindent {\bf Step 1.} We claim that there exists $a>0$ such that $Q(u(t)) \leq -a$ for any $t \in [0,T)$. Indeed, since $\mathcal{B}_{\text{rad},\omega}$ is invariant under the flow of $(\ref{inverse square NLS})$, we see that $u(t) \in \mathcal{B}_{\text{rad},\omega}$ for any $t\in [0,T)$. 
By Lemma $\ref{lemma key estimate}$, we get \[ Q(u(t)) \leq 2(S_\omega(u(t)) - S_\omega(\phi_\omega)) = 2 (S_\omega(\phi^{\lambda_0}_\omega) - S_\omega(\phi_\omega)). \] This proves the claim with $a= 2(S_\omega(\phi_\omega)- S_\omega(\phi^{\lambda_0}_\omega))>0$. \noindent {\bf Step 2.} We next claim that there exists $b>0$ such that \begin{align} \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -b, \label{virial potential bound} \end{align} for any $t \in [0,T)$, where $V_{\varphi_R}(t)$ is as in $(\ref{define virial potential})$. Indeed, since the solution $u(t)$ is radial, we apply Lemma $\ref{lemma radial virial estimate}$ to have \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq 4d\alpha E(u(t)) - 2(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} + O\left( R^{-2} + R^{-\frac{(d-1)\alpha}{2}} \|u(t)\|^{\frac{\alpha}{2}}_{\dot{H}^1_c} \right), \] for any $t\in [0,T)$ and any $R>1$. The Young inequality implies for any $\epsilon>0$, \[ R^{-\frac{(d-1)\alpha}{2}} \|u(t)\|^{\frac{\alpha}{2}}_{\dot{H}^1_c} \lesssim \epsilon \|u(t)\|^2_{\dot{H}^1_c} + \epsilon^{-\frac{\alpha}{4-\alpha}} R^{-\frac{2(d-1)\alpha}{4-\alpha}}. \] Note that in our consideration, we always have $0<\alpha<4$. We thus get \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq 4d\alpha E(u(t)) - 2(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} + C\epsilon \|u(t)\|^2_{\dot{H}^1_c} + O\left( R^{-2} + \epsilon^{-\frac{\alpha}{4-\alpha}} R^{-\frac{2(d-1)\alpha}{4-\alpha}} \right), \] for any $t\in [0,T)$, any $R>1$, any $\epsilon>0$ and some constant $C>0$. To see $(\ref{virial potential bound})$, we follow the argument of Bonheure-Cast\'eras-Gou-Jeanjean \cite{BonheureCasterasGouJeanjean}. Fix $t \in [0,T)$ and denote \[ \mu:= \frac{4d\alpha |E(u_0)| +2}{d\alpha-4}. \] We consider two cases. \noindent {\bf Case 1.} \[ \|u(t)\|^2_{\dot{H}^1_c} \leq \mu. 
\] Since $4d\alpha E(u(t)) - 2(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} =8Q(u(t)) \leq -8a$ for any $t\in [0,T)$, we have \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -8a + C\epsilon \mu + O\left( R^{-2} + \epsilon^{-\frac{\alpha}{4-\alpha}} R^{-\frac{2(d-1)\alpha}{4-\alpha}} \right). \] By choosing $\epsilon>0$ small enough and $R>1$ large enough depending on $\epsilon$, we see that \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -4a. \] \noindent {\bf Case 2.} \[ \|u(t)\|^2_{\dot{H}^1_c} >\mu. \] In this case, we have \[ 4d\alpha E(u_0) - 2(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} < -2 -(d\alpha -4) \|u(t)\|^2_{\dot{H}^1_c}. \] Thus, \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -2 -(d\alpha-4) \|u(t)\|^2_{\dot{H}^1_c} + C\epsilon \|u(t)\|^2_{\dot{H}^1_c} + O \left( R^{-2} + \epsilon^{-\frac{\alpha}{4-\alpha}} R^{-\frac{2(d-1)\alpha}{4-\alpha}} \right). \] Since $d\alpha-4 >0$, we choose $\epsilon>0$ small enough so that \[ d\alpha-4 - C\epsilon \geq 0. \] This implies that \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -2 + O\left( R^{-2} + \epsilon^{-\frac{\alpha}{4-\alpha}} R^{-\frac{2(d-1)\alpha}{4-\alpha}} \right). \] We next choose $R>1$ large enough depending on $\epsilon$ so that \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -1. \] Note that in both cases, the choices of $\epsilon>0$ and $R>1$ are independent of $t$. Therefore, the claim follows with $b= \min\{4a, 1\}>0$. \noindent {\bf Step 3.} By Step 2, the solution $u(t)$ satisfies \[ \frac{d^2}{dt^2} V_{\varphi_R}(t) \leq -b <0, \] for any $t\in [0,T)$. The convexity argument of Glassey (see e.g. \cite{Glassey}) implies that the solution blows up in finite time. The proof is complete. \defendproof \section*{Acknowledgments} V. D. Dinh would like to express his deep gratitude to his wife, Uyen Cong, for her encouragement and support. The authors would like to thank the reviewers for their helpful comments and suggestions.
\section{INTRODUCTION} By matching the oscillation frequencies of models with the observed ones, asteroseismology is used to determine the fundamental parameters of stars, such as mass, radius, and age, and to diagnose the internal structures of stars \citep{roxb03, chri10, dehe10, chap14, guen14, liu14, yang15, gugg16, rodr17, silv17}. Asteroseismology plays an irreplaceable role in studying the structure and evolution of stars. The measurable characteristics of solar-like oscillations of stars mainly include the individual frequencies $\nu_{n,l}${}, the large frequency separation $\Delta\nu${}, the frequency of maximum power $\nu_{max}${}, and so on. Individual frequencies are widely used in asteroseismology. In order to determine the fundamental parameters of stars, the observed frequencies $\nu_{n,l}${} are directly compared with those calculated from models by the chi-squared method or the maximum likelihood function method. By calculating the ratios of the small separations to the large separations, $r_{01}${} and $r_{10}${}, from individual frequencies, one can also extract information about the internal structures of stars \citep{roxb03, roxb07, cunh07, dehe10, liu14, yang16b}. The large frequency separation, $\Delta\nu${}, and the frequency of maximum power, $\nu_{max}${}, are \textbf{considered to be} more easily measured than individual frequencies. The value of $\Delta\nu${} is proportional to the square root of the mean density of stars \citep{ulri86}. The value of $\nu_{max}${} scales with the acoustic cutoff frequency ($\nu_{ac}${}) \citep{brow91}. Based on these results, \cite{kjel95} obtained the well-known scaling relations: \begin{equation} \Delta\nu=(M/M_{\odot})^{1/2}(R/R_{\odot})^{-3/2}\Delta\nu_{\odot}, \label{scal1} \end{equation} and \begin{equation} \nu_{max}=\frac{M/M_{\odot}}{(R/R_{\odot})^{2}\sqrt{T_{\rm eff}/5777}}\nu_{max,\odot}. 
\label{scal2} \end{equation} The scaling relations are extensively used to estimate masses and radii of stars and to study the characteristics of stellar populations \citep{migl09, yang10, shar17}. Besides the frequencies, the amplitude and the FWHM linewidth $\Gamma$ of $p$-modes of stars can also be determined \citep{chap09, hekk10, baud11, appo14}. The lifetime $\tau$ is considered to be related to the linewidth $\Gamma$ by $\Gamma=1/(\pi\tau)$. \cite{chap09} was the first to try to find a simple scaling relation describing the average lifetime, $\langle\tau\rangle$, of solar-like oscillations, obtaining $\langle\tau\rangle\propto T_{\rm eff}^{-4}$. Hereafter, we drop the average symbol $\langle \rangle$. \cite{baud11} obtained $\tau\propto T_{\rm eff}^{-s}$, where, however, the value of $s$ is $14\pm8$ for main-sequence (MS) stars and $-0.3\pm0.9$ for red giants. The difference between the result of Chaplin and that of Baudin is very significant. The lifetime is potentially an extremely useful diagnostic of near-surface convection in stars, affects the detectability of modes, and aids in better understanding the excitation and damping mechanisms of modes, so that lifetime predictions play an important role in asteroseismology \citep{chap09}. In this work, we give \textbf{some simple scaling relations to describe the average linewidth $\Gamma$ and} lifetime $\tau$ of solar-like oscillations. The paper is organized as follows: in Section 2, we deduce the scaling relations and compare the results of the scaling relations with observations; discussion and summary are given in Section 3. 
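As a concrete numerical illustration (our own sketch, not code from the paper), the scaling relations (\ref{scal1}) and (\ref{scal2}) can be evaluated directly in solar units; the stellar parameters in the example below are hypothetical.

```python
# Sketch (not from the paper): evaluating the scaling relations,
# Eqs. (scal1)-(scal2), in solar units.
DNU_SUN = 134.6     # muHz, solar large frequency separation
NUMAX_SUN = 3090.0  # muHz, solar frequency of maximum power
TEFF_SUN = 5777.0   # K, solar effective temperature

def delta_nu(m, r):
    """Large separation in muHz for mass m and radius r in solar units."""
    return m**0.5 * r**-1.5 * DNU_SUN

def nu_max(m, r, teff):
    """Frequency of maximum power in muHz; teff in K."""
    return m / (r**2 * (teff / TEFF_SUN)**0.5) * NUMAX_SUN

# A hypothetical 1.2 M_sun, 1.5 R_sun star with Teff = 6350 K:
print(delta_nu(1.2, 1.5))        # ~80 muHz
print(nu_max(1.2, 1.5, 6350.0))  # ~1570 muHz
```

Both relations reduce to the solar values for $M=M_{\odot}$, $R=R_{\odot}$, $T_{\rm eff}=5777$ K, which provides a quick consistency check.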
\section{SCALING RELATIONS OF MEAN LINEWIDTH AND LIFETIME OF SOLAR-LIKE OSCILLATIONS} \subsection{Scaling Relation of Mean Linewidth of Oscillations} The energy transported per unit time across a spherical surface by propagating acoustic waves with angular frequency $\omega$ is \begin{equation} F=4\pi r^{2}\frac{1}{2} \rho |\mathbf{v}|^{2}c, \end{equation} where $\mathbf{v}$ is the velocity of oscillations, $c$ the adiabatic sound speed, $\rho$ the density, and $r$ the radius. The total energy of the oscillations in a star is \citep{balm90} \begin{equation} \begin{array}{ll} E_{osc}&=2\int^{R}_{0}\frac{1}{2}\rho |\mathbf{v}|^{2} 4\pi r^{2}dr \\ &= 2F\int^{R}_{0}\frac{dr}{c}\\ &= F\Delta\nu^{-1}. \end{array} \label{p1} \end{equation} The growth rate $\omega_{i}$ of the amplitude of an oscillation, which is the imaginary part of $\omega$ $(\omega=\omega_{r}+i\omega_{i})$, is related to the work integral $W$, i.e. \begin{equation} \omega_{i}=-\frac{1}{2} \frac{W/E_{osc}}{\Pi}, \label{damp1} \end{equation} where the period $\Pi=2\pi\omega_{r}^{-1}$, and the work integral $W$ is defined as (see Equations 25.16, 25.17, and 25.18 of \cite{unno89} for more details of $W$ and $E_{osc}$) \begin{equation} W =\oint dt \frac{dE_{osc}}{dt}. \label{w1} \end{equation} Therefore, we obtain \begin{equation} \omega_{i}=-\frac{\omega_{r}}{4\pi} \frac{W\Delta\nu}{F}. \label{damp2} \end{equation} The work integral $W$ can be decomposed into $W_{N}$ and $W_{E}$, which are related to the perturbations of the nuclear energy generation rate and of the energy transfer, respectively. We neglect $W_{N}$ and consider only radial oscillations in this work. The work integral $W_{E}$ is given as \textbf{ (see Equations 26.3 and 26.4 of \cite{unno89}, and Equation 3 of \cite{gold91})} \begin{equation} W_{E} =\frac{\pi}{\omega_{r}}\int^{R}_{0}dr(-\frac{\partial\Delta L}{\partial r})\frac{\Delta T}{T}, \label{we1} \end{equation} \textbf{where $L$ is the luminosity and $T$ the temperature of a star. 
Let us estimate the order of magnitude of the integral of Equation (\ref{we1}) from adiabatic oscillations. For adiabatic radial oscillations, \cite{gold91} gave \begin{equation} \frac{\Delta T}{T} = -(\Gamma_{3}-1)\frac{\partial\xi}{\partial r} \sim -(\Gamma_{3}-1)\frac{\omega^{2}\xi}{g}, \label{dt1} \end{equation} where $\xi$ is the radial displacement eigenfunction, $g$ the gravitational acceleration, and \begin{equation} \Gamma_{3}-1 \equiv (\frac{\partial\ln T}{\partial\ln\rho})_{s}. \end{equation} Neglecting the change in $(\Gamma_{3}-1)$, \cite{gold91} gave \begin{equation} \frac{\Delta T}{T} \sim -\frac{\omega^{2}\xi}{g}, \label{dt2} \end{equation} and \begin{equation} \frac{\partial\Delta L}{\partial r} \sim -\frac{\Delta L}{H} \sim -\frac{L\omega^{2}\xi}{gH}, \label{dl1} \end{equation} where $H$ is the local pressure scale height. Then \cite{gold91} obtained \begin{equation} \int^{R}_{0}dr(-\frac{\partial\Delta L}{\partial r})\frac{\Delta T}{T} \sim -\frac{L\omega^{4}\xi^{2}}{g^{2}}. \end{equation} } \textbf{ Therefore, we can obtain \begin{equation} W_{E}\sim -\frac{\pi}{\omega_{r}}\frac{L\omega^{4}\xi^{2}}{g^{2}}. \label{we2} \end{equation} Substituting $W$ in Equation (\ref{damp2}) by $W_{E}$,} we obtain that the thermal damping rate can be given as \begin{equation} \begin{array}{ll} \omega_{i}& \sim\frac{1}{4} \frac{\Delta\nu}{F} \frac{L\omega^{4}\xi^{2}}{g^{2}}\\ &=\frac{\sigma\pi r^{2}\omega^{4}\xi^{2}}{F g^{2}}\Delta\nu T^{4}, \end{array} \label{damp3} \end{equation} where $\sigma$ is the Stefan--Boltzmann constant. For turbulent stresses, the work integral is given as \citep{gold91} \begin{equation} \begin{array}{ll} W_{T} & = \frac{\pi}{\omega_{r}}4\pi\omega^{2}\int^{R}_{0}dr r^{2}\rho\nu_{H}(\frac{\partial\xi}{\partial r})^{2} \\ & \sim \frac{\pi}{\omega_{r}}\frac{L\omega^{4}\xi^{2}}{g^{2}}, \end{array} \label{wt} \end{equation} where $\nu_{H}$ is the turbulent viscosity. Therefore, the mechanical damping rate also has the expression of Equation (\ref{damp3}). 
Solar-like oscillations are considered to be stable. The value of $F$ can be estimated as \begin{equation} F \sim 4\pi r^{2}\frac{1}{2} \rho c \omega^{2} \xi^{2}. \label{flux} \end{equation} \textbf{Substituting $F$ in Equation (\ref{damp3}) by Equation (\ref{flux}), we obtain \begin{equation} \omega_{i} \approx \frac{\sigma}{2\rho c} \frac{\omega^{2}}{g^{2}} \Delta\nu T^{4}, \label{damp4} \end{equation} where $\rho$, $c$, $g$, and $T$ take their values at $r=R$. } \textbf{ The linewidth is considered to be related to the damping rate by $\Gamma_{\omega}=\omega_{i}/(2\pi)$, i.e. \begin{equation} \Gamma_{\omega} \approx \frac{\sigma}{4\pi\rho c} \frac{\omega^{2}}{g^{2}} \Delta\nu T^{4}. \label{gama1} \end{equation} Because the frequencies $\nu_{n,l}$ are approximately equally spaced by $\Delta\nu${}, the mean linewidth $\Gamma$ is approximately equal to $\Gamma_{\omega_{max}}$, i.e. \begin{equation} \Gamma \simeq \Gamma_{\omega_{max}} \approx \frac{\sigma}{4\pi\rho c} \frac{\omega^{2}_{max}}{g^{2}} \Delta\nu T^{4}. \label{gama2} \end{equation} The mean linewidth of the solar $p$-modes is about $0.95\pm0.08$ $\mu$Hz{} \citep{baud11} or around $1.15\pm0.07$ $\mu$Hz{} \citep{chap09}. The value of $\Gamma$ of Equation (\ref{gama2}) is about $2.83$ $\mu$Hz{} at $\nu=3090$ $\mu$Hz{} for the Sun. This indicates that Equations (\ref{gama1}) and (\ref{gama2}) overestimate the linewidth of the Sun by a factor of about $3$. Dividing Equation (\ref{gama1}) by $3$, we obtain that the linewidths of solar-like oscillations can be estimated as \begin{equation} \Gamma_{\omega} \simeq \frac{\sigma}{12\pi\rho c} \frac{\omega^{2}}{g^{2}} \Delta\nu T^{4}. \label{gama3} \end{equation} Figure \ref{fig1} shows the $\Gamma_{\omega}$ of the Sun as a function of frequency calculated by using Equation (\ref{gama3}), which indicates that the order of magnitude of the linewidths of $p$-modes with $\nu\sim$ $\nu_{max}${} can be properly estimated by Equation (\ref{gama3}). 
} \textbf{ Equation (\ref{dt2}) neglects the effect of $(\Gamma_{3}-1)$. Calculations show that the value of $(\Gamma_{3}-1)$ is less than $1$ at $r=R$. The more massive the star, the closer to $1$ the value of $(\Gamma_{3}-1)_{R}$. The lower the metallicity, the larger the value of $(\Gamma_{3}-1)_{R}$. The value of $(\Gamma_{3}-1)_{R}$ of MS models with $M\lesssim1.1$ $M_\odot${} is close to that of the Sun. Therefore, the linewidths of oscillations of these stars can be estimated by Equation (\ref{gama3}). But the value of $(\Gamma_{3}-1)_{R}$ of MS models with masses between $1.1$ and $1.5$ $M_\odot${} is about $1-5$ times as large as that of the Sun, depending on the mass, chemical composition, and evolutionary stage of stars. Thus the linewidths of oscillations of these stars can be estimated by Equations (\ref{gama1}) and (\ref{gama2}). For these stars, Equation (\ref{gama1}) can be rewritten as \begin{equation} \Gamma_{\omega} \simeq f\frac{\sigma}{12\pi\rho c} \frac{\omega^{2}}{g^{2}} \Delta\nu T^{4}, \label{gama4} \end{equation} where the value of $f$ is mainly in the range of $1-5$. For the models with masses between $1.2$ and $1.5$ $M_\odot${}, the value of $(\Gamma_{3}-1)_{R}$ is mainly around $3$ times that of the Sun. As a consequence, the mean value of $f$ is around $3$. Equations (\ref{dt2}), (\ref{dl1}), and (\ref{flux}) only give order-of-magnitude estimates. Thus the value of $f$ is not only affected by $(\Gamma_{3}-1)_{R}$. The value of $f$ can be determined from detailed asteroseismic analyses of some F-type stars with solar-like oscillations. 
} \textbf{ Scaling from the solar value to estimate the mean linewidth and using \begin{equation} \omega_{max}=\frac{g/g_{\odot}}{\sqrt{T_{\rm eff}/5777}}\omega_{max,\odot}, \label{scal22} \end{equation} one can obtain \begin{equation} \left\{ \begin{array}{lc} \Gamma \simeq\frac{(\rho c)_{\odot}}{\rho c}\frac{\Delta\nu}{\Delta\nu_{\odot}}(\frac{T_{\rm eff}}{5777})^{3}\Gamma_{\odot}, &\text{for } M\lesssim1.1 M_{\odot};\\ \Gamma \simeq f\frac{(\rho c)_{\odot}}{\rho c}\frac{\Delta\nu}{\Delta\nu_{\odot}}(\frac{T_{\rm eff}}{5777})^{3}\Gamma_{\odot}, &\text{for } M\gtrsim1.2 M_{\odot},\\ \end{array}\right. \label{scga1} \end{equation} where the value of $\Delta\nu_{\odot}$ is $134.6$ $\mu$Hz{}, and $(\rho c)_{\odot}\simeq 0.12$ g cm$^{-2}$ s$^{-1}$. The value of $\Gamma_{\odot}$ is about $1.15$ $\mu$Hz{} \citep{chap09} or $0.95\pm0.08$ $\mu$Hz{} \citep{baud11}. The value of $f$ is mainly in the range of $1-5$, which depends on the mass and evolutionary stage of stars. Its mean value is around $3$. } \subsection{Scaling Relation of Mean Lifetime of Oscillations} The \textbf{lifetime of a mode is} considered to be related to \textbf{the linewidth} of the mode \textbf{by $\tau_{\omega}=1/(\pi\Gamma_{\omega})$}. The average lifetime, $\tau$, of low-degree $p$-modes of the Sun given by \cite{chap09} is $3.2\pm0.2$ days, which is close to that calculated from the $\Gamma_{\omega}$ of the mode with $\nu \approx$ $\nu_{max}${}. Therefore, we obtain that the mean lifetime of $p$-modes can be estimated \textbf{as \begin{equation} \tau \approx \tau_{\omega_{max}} \simeq \left\{ \begin{array}{lc} \frac{12\rho c }{\sigma} \frac{g^{2}}{\omega^{2}_{max}} \Delta\nu^{-1} T^{-4},&\text{for } M\lesssim1.1 M_{\odot};\\ \frac{12\rho c }{f\sigma} \frac{g^{2}}{\omega^{2}_{max}} \Delta\nu^{-1} T^{-4},&\text{for } M\gtrsim1.2 M_{\odot}.\\ \end{array}\right. 
\label{tau0} \end{equation} } In order to estimate the mean lifetime of $p$-modes of stars, we scale from the solar value to \textbf{calculate the mean lifetime: \begin{equation} \tau \approx\left\{ \begin{array}{lc} \frac{\rho c }{(\rho c)_{\odot}}\frac{\Delta\nu_{\odot}}{\Delta\nu}(\frac{5777}{T_{\mathrm{eff}}})^{3}\tau_{\odot}, &\text{for } M\lesssim1.1 M_{\odot};\\ \frac{\rho c }{f(\rho c)_{\odot}}\frac{\Delta\nu_{\odot}}{\Delta\nu}(\frac{5777}{T_{\mathrm{eff}}})^{3}\tau_{\odot}, &\text{for } M\gtrsim1.2 M_{\odot},\\ \end{array}\right. \label{tau1} \end{equation} where the value of $\tau_{\odot}$ is about $3.2$ days \citep{chap09} or $3.8$ days \citep{baud11}.} This relation \textbf{shows} that the mean lifetime of solar-like oscillations increases with a decrease in $\Delta\nu${} and $T_{\mathrm{eff}}${}. For MS stars and red giants, the value of $\rho(R)$ generally decreases with an increase in $R$. An increase in $M$ can result in an increase in radius, i.e. can lead to a decrease in $\rho(R)$. Therefore, the acoustic impedance $\rho c$ generally decreases with an increase in the mass and radius of stars. But stars of the same type are expected to have \textbf{an approximately equal $\rho c$. The parameter $f$ in Equation (\ref{tau1}) can be roughly replaced by its mean value $3$.} As a consequence, for stars with measured $\Delta\nu${} and $T_{\mathrm{eff}}${}, their $\tau$ can be estimated from Equation (\ref{tau1}). \textbf{For theoretical calculations, the value of $\rho c$ can be obtained from stellar models. But for observations, the value of $\rho c$ of stars is difficult to obtain.} The density $\rho(R)$ and sound speed $c$ decrease with the mass and age of stars, i.e. decrease with an increase in mass and radius. 
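As a numerical sanity check (our own sketch, using standard CGS solar values for $\sigma$, $g$, and the $(\rho c)_{\odot}\simeq 0.12$ g cm$^{-2}$ s$^{-1}$ quoted above), Equation (\ref{gama3}) evaluated at $\omega_{max}$ together with $\tau=1/(\pi\Gamma)$ reproduces the solar mean linewidth and lifetime:

```python
# Sanity check (not the authors' code): Eq. (gama3) at omega_max and
# tau = 1/(pi*Gamma) for the Sun, in CGS units.
import math

SIGMA = 5.6704e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
RHO_C = 0.12         # solar surface acoustic impedance rho*c, g cm^-2 s^-1
G_SUN = 2.74e4       # solar surface gravity, cm s^-2
T_SUN = 5777.0       # K
DNU_SUN = 134.6e-6   # Hz, solar large separation
NUMAX_SUN = 3090e-6  # Hz, solar frequency of maximum power

omega = 2.0 * math.pi * NUMAX_SUN
gamma = SIGMA / (12.0 * math.pi * RHO_C) * omega**2 / G_SUN**2 * DNU_SUN * T_SUN**4
tau_days = 1.0 / (math.pi * gamma) / 86400.0

print(gamma * 1e6)  # ~0.94 muHz, close to the 0.95 muHz of Baudin et al. (2011)
print(tau_days)     # ~3.9 days, close to the 3.8 days of Baudin et al. (2011)
```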
Assuming $\rho c \propto 1/(MR)$, Equation (\ref{tau1}) can be rewritten as \begin{equation} \tau\simeq f_{2} \frac{\Delta\nu_{\odot}}{\Delta\nu}(\frac{5777 \mathrm{K}}{T_{\mathrm{eff}}})^{3} \frac{M_{\odot}R_{\odot}}{MR}\tau_{\odot}, \label{tau2} \end{equation} where $f_{2}$ is a free nondimensional parameter. \textbf{ It is more convenient to estimate the $\tau$ of stars by using Equation (\ref{tau2}). } \textbf{ If the mass of a star is unknown, however, Equation (\ref{tau2}) is also difficult to apply to observations. } The masses of stars with solar-like oscillations are mainly between about $1$ and $1.5$ $M_\odot${}. The change \textbf{in $R$ in Equation (\ref{tau2}) can be much larger than that in $M$.} Therefore, for stars whose radius is determined, their $\tau$ can be \textbf{approximately estimated as} \begin{equation} \tau\simeq f_{3} \frac{\Delta\nu_{\odot}}{\Delta\nu}(\frac{5777 \mathrm{K}}{T_{\mathrm{eff}}})^{3} \frac{R_{\odot}}{R}\tau_{\odot}, \label{tau3} \end{equation} where $f_{3}$ is a free parameter. \textbf{Our calculations show that the value of $f_{3}$ is $1$ for MS stars with $M\lesssim1.1$ $M_\odot${} and red giants ($\Delta\nu${} $\lesssim 40$ $\mu$Hz{}) and $1/3$ for MS stars with $M\gtrsim1.2$ $M_\odot${}.} For stars with observed $R$, $\Delta\nu${}, and $T_{\mathrm{eff}}${}, it is convenient to \textbf{estimate} $\tau$ or $\Gamma$ by using Equation (\ref{tau3}). \textbf{But Equation (\ref{tau3}) is an approximation of Equation (\ref{tau2}).} \subsection{Comparison with Observations} \cite{lund17} determined the values of the FWHM ($\Gamma_{\alpha}$) at $\nu_{max}${} of $66$ stars (the LEGACY Sample) from the observations of \textit{Kepler}. Moreover, \cite{hekk10} studied in detail the lifetimes of $p$-modes of four red giants from the observations of \textit{CoRoT} and gave the values of the radii of the four red giants. The values of $\tau$ of the LEGACY Sample and the \cite{hekk10} sample are obtained by $\tau=1/(\pi\Gamma)$. 
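Equation (\ref{tau3}) is straightforward to apply; as an illustration (our own sketch, with hypothetical stellar parameters rather than values from the observed samples):

```python
# Illustration of Eq. (tau3) (a sketch; the red-giant parameters below
# are hypothetical, not taken from the observed samples).
DNU_SUN = 134.6  # muHz, solar large separation
TAU_SUN = 3.2    # days, solar mean lifetime (Chaplin et al. 2009)

def tau_scaling(dnu, teff, r, f3=1.0):
    """Mean p-mode lifetime in days; dnu in muHz, teff in K, r in R_sun."""
    return f3 * (DNU_SUN / dnu) * (5777.0 / teff)**3 * TAU_SUN / r

# A hypothetical red giant with dnu = 4 muHz (< 40 muHz, so f3 = 1),
# Teff = 4800 K, and R = 10 R_sun:
print(tau_scaling(4.0, 4800.0, 10.0))  # ~19 days
```

The relation recovers $\tau_{\odot}$ for solar inputs, and the long lifetimes it predicts for red giants reflect their small $\Delta\nu${} and low $T_{\mathrm{eff}}${}.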
In addition, \cite{chap09} and \cite{baud11} also gave $\tau$ or $\Gamma$ of some stars. Thanks to these works, we have a large enough sample to test Equations (\ref{tau1}), (\ref{tau2}), and (\ref{tau3}). The observed $\tau$ of the sample is shown in panel $a$ of Figure \ref{fig2}. The results of Equation (\ref{tau1}) are also shown in the panel. Because the values of $\rho c$ of the stars are unknown, we assumed that the value of $\rho c/(\rho c)_{\odot}$ is a constant in the calculations. The sample can be divided into three subsamples: one has $\tau> 2$ days and $\Delta\nu${} $>50$ $\mu$Hz{}, labeled as the `low-mass' sample; one has $\tau < 2$ days, labeled as the `more massive' sample; the four red giants of \cite{hekk10} are labeled as the `red' sample. Panel $b$ of Figure \ref{fig2} shows the mean lifetimes of the `low-mass' sample. Part of the sample is reproduced by Equation (\ref{tau1}) with $\rho c/(\rho c)_{\odot}=1$, but part cannot be reproduced correctly. This can be due to the assumption of $\rho c/(\rho c)_{\odot}=1$. For stars with masses less than $1$ $M_\odot${}, $\rho c/(\rho c)_{\odot}$ may be larger than $1$; but for stars with masses larger than $1$ $M_\odot${}, $\rho c/(\rho c)_{\odot}$ may be less than $1$. Panel $c$ of Figure \ref{fig2} shows the mean lifetimes of the `more massive' sample. Most of the sample are reproduced well by Equation (\ref{tau1}) with $\mathbf{\rho c/[3(\rho c)_{\odot}]=1/6}$, which indicates that \textbf{the acoustic impedance ($\rho c$) of these stars is lower than that of the Sun. Because $\rho c$ decreases with increasing stellar mass, the masses of these stars} may be larger than $1$ $M_\odot${} and the difference in the masses \textbf{may not be} significant, especially for the F-like stars. Panel $d$ of Figure \ref{fig2} shows the mean lifetimes of the `red' sample of \cite{hekk10}. 
In order to reproduce this sample, the value of $\rho c/[3(\rho c)_{\odot}]$ must decrease from $\mathbf{1/6}$ for the `more massive' sample to $\mathbf{1/15}$. However, the results are unsatisfactory. This may be due to neglecting \textbf{the variation in $\rho c$ among red giants.} The variation in the radii of red giants is significant, \textbf{which implies a significant spread in the $\rho c$ of red giants.} This hints that Equation (\ref{tau3}) could be more suitable for red giants. Figure \ref{fig3} shows that the values of $\tau$ of the red giants are reproduced well by Equation (\ref{tau3}) with $f_{3}=1$; three of the four are reproduced within one standard error. This indicates that scaling relations (\ref{tau1}) and (\ref{tau3}) work well. \subsection{Mean Lifetimes Calculated from Stellar Models} Equation (\ref{tau1}) provides an opportunity to quickly calculate the mean lifetimes of $p$-modes from stellar models. Using the Yale Rotating Evolution Code \citep[YREC;][]{pins89, yang07} in its nonrotating configuration, we computed a series of stellar models with different masses and metallicities. The mean lifetimes of the oscillations of the models were calculated by using Equation (\ref{tau1}). The calculated results are presented in \textbf{Figures \ref{fig4} and \ref{fig5}. The mean lifetimes of the oscillations of the F-like stars of the LEGACY sample can be reproduced well by those of MS and MS-turnoff } models with masses between about $\mathbf{1.2}$ and $1.5$ $M_\odot${}. \textbf{However,} the masses of the Simple stars of the sample range from about $\mathbf{1.0}$ to $\mathbf{1.2}$ $M_\odot${}. \textbf{The F-like stars are more massive and thus have a higher effective temperature.} The more massive the star, the smaller the $\tau$. \textbf{The higher the metallicity, the larger the $\tau$ of MS stars. This is because the higher the metallicity, the lower the luminosity of a star, and the lower its effective temperature.
For MS models with a given mass and radius, $\Delta\nu${} is fixed; the higher the metallicity, the larger the value of $\rho c$ and the lower the effective temperature. As a consequence, the higher the metallicity, the larger the $\tau$. } \textbf{Figures \ref{fig4} and \ref{fig5} show that post-MS models with $M\lesssim1.1$ $M_\odot${} have $\tau$ larger than about $4$ days between $\Delta\nu${} $=80$ and $\Delta\nu${} $=60$ $\mu$Hz{}. However, such low-mass stars could hardly have evolved into the post-MS stage yet. Thus we can hardly find stars with $\tau>4$ days and $\Delta\nu${} between $80$ and $60$ $\mu$Hz{}. The lower the mass, the larger the $\tau$; the higher the metallicity, the larger the $\tau$. This} explains why the Simple stars with $\tau \sim 5$ days have a higher metallicity and a lower effective temperature (see panels $a$ and $b$ of Figure \ref{fig4}). Moreover, Figures \ref{fig4} and \ref{fig5} show that the effect of metallicity on lifetimes is significant \textbf{for MS stars.} Thus determining the metallicity of solar-like stars, especially of the `low-mass' type, aids us in understanding their oscillations. Figure \ref{fig6} compares the values of $\tau$ of the \cite{hekk10} sample with those \textbf{calculated from models by using Equation (\ref{tau1}) with $\tau_{\odot}=3.8$ days.} The mean lifetimes of the \cite{hekk10} sample can be reproduced by \textbf{models with masses between about $1.2$ and $1.5$ $M_\odot${}.} \textbf{The mean lifetimes of oscillations calculated by using Equation (\ref{tau2}) are shown in Figure \ref{fig7}. Compared with the results shown in Figure \ref{fig4}, Equation (\ref{tau2}) seems to underestimate the values of $\tau$ of models with $Z=0.03$ and $0.02$. This indicates that the acoustic impedance depends on metallicity. Moreover, in the calculations for Figure \ref{fig4}, we adopted $f=3$, which may overestimate the values of $\tau$ for more massive stars.
For these stars, the value of $f$ might be larger than $3$ and should be determined from detailed asteroseismic analyses.} \textbf{For red giants with a given mass and radius, the higher the metallicity, the lower the effective temperature and $\rho c$. Therefore, compared with Equation (\ref{tau1}), Equation (\ref{tau2}) overestimates the $\tau$ of stars with a higher metallicity (see Figures \ref{fig6} and \ref{fig8}).} \textbf{Figures \ref{fig9} and \ref{fig10} present the results calculated by using Equation (\ref{tau3}), which show that Equation (\ref{tau3}) is a good approximation of Equation (\ref{tau2}). However, Equation (\ref{tau3}) slightly overestimates the values of $\tau$ of stars with $M\gtrsim1.2$ $M_\odot${}. Moreover, it slightly overestimates the $\tau$ of red giants with $Z=0.03$.} \section{DISCUSSION AND SUMMARY} \subsection{Discussion} For stars whose $\Delta\nu${}, $T_{\mathrm{eff}}${}, and $\tau$ have been determined, the acoustic impedance $\rho c$ at the surface can be determined from Equation (\ref{tau1}). \textbf{Thus determining the linewidths of modes with $\nu\sim$ $\nu_{max}${} plays an important role in asteroseismology.} \textbf{In the calculations for the results shown in Figure \ref{fig4}, we adopted the mean value of $f$. This could overestimate the values of $\tau$ of more massive stars and lead to a difference between the theoretical mean lifetimes and the observed ones. The dependence of $f$ and $f_{2}$ on mass could be obtained from detailed asteroseismic analyses of F-type stars with solar-like oscillations.} The metallicity of a star can affect the energy transfer within it. Thus the work integral of Equation (\ref{we1}) can be affected by metallicity, i.e. the $\Gamma$ or $\tau$ of modes can be affected by metallicity. The mean lifetime of CoRoT 102767771 in the \cite{hekk10} sample \textbf{is overestimated} by Equation (\ref{tau3}) (see Figure \ref{fig3}). This could be related to the effect of metallicity.
\textbf{Compared with the results of Equation (\ref{tau1}), our calculations show that Equation (\ref{tau3}) indeed overestimates the values of $\tau$ of red giants with $Z_{i}=0.03$ (see Figure \ref{fig10}). For a star with a given mass, a change in metallicity leads to a variation in its effective temperature and radius, i.e. a variation in $\rho c$. This effect is already accounted for by the scaling relations. Therefore, the effect of metallicity cannot significantly change the results of Equation (\ref{tau1}). } \textbf{The mean lifetime of the oscillations of a star depends on the acoustic impedance. However, the acoustic impedance is difficult to estimate observationally.} The value of $\rho c/(\rho c)_{\odot}$ is $1/2$ for the `more massive' sample and $1/5$ for the `red' sample. This indicates that the acoustic impedance of stars decreases with mass and \textbf{radius. Thus we can assume $\rho c \propto 1/(MR)$. The calculations show that the effect of $\rho c$ on $\tau$ can be approximated by that of $1/(MR)$. The parameters $f_{2}$ and $f_{3}$ derive from the assumption $\rho c \propto 1/(MR)$ and from the dependence of $f$ on stellar mass. The calculations show that the values of $f_{2}$ and $f_{3}$ are about $1$ for MS stars with $M\lesssim1.1$ $M_\odot${} but about $1/3$ for MS stars with $M\gtrsim1.2$ $M_\odot${}. This can be partly due to the fact that the value of $(\Gamma_{3}-1)_{R}$ of MS stars with $M\gtrsim1.2$ $M_\odot${} is about $3$ times greater than that of the Sun. The values of $f_{2}$ and $f_{3}$ are $1$ for red giants. In the red-giant phase, the radius of a star with $M=1.2$ $M_\odot${} can increase by more than a factor of $10$. This rapid change in radius is the main factor affecting the $\tau$ of red giants. } \textbf{Figures \ref{fig4} and \ref{fig5} show that there are three stars whose $\tau$ is hard to explain with stellar models.
The values of their $\Delta\nu${} are larger than $\sim150$ $\mu$Hz{}, which indicates that their masses might be less than $1$ $M_\odot${}, i.e. the values of their $\rho c$ could be larger than that of the Sun. Yet their $\tau$ can be well reproduced by Equation (\ref{tau1}) with $\rho c/[3(\rho c)_{\odot}]=1/6$ (see panel $c$ of Figure \ref{fig2}).} \subsection{Summary} In this work, building on the works of \cite{balm90} and \cite{gold91} and the definition of \cite{unno89}, we derived a \textbf{formula describing the linewidth of the mode with $\nu\sim\nu_{max}$. Because the frequencies of $p$-modes have approximately equal separations, the mean linewidth approximates $\Gamma_{\omega_{max}}$. By using $\tau=1/(\pi\Gamma)$, we obtained a} scaling relation describing the average lifetime of the solar-like oscillations of stars. The \textbf{mean linewidth and lifetime are} determined by the large frequency separation, the effective temperature, and the acoustic impedance $\rho c$. Furthermore, the dependence of the mean lifetime on the acoustic impedance can be \textbf{roughly reduced to a dependence on the mass and radius} of stars, although this introduces a free parameter into the formula for the mean lifetime. \textbf{The calculations show that the mean lifetimes of the $p$-modes of stars decrease with increasing stellar mass. The mean lifetimes of MS stars also decrease with decreasing metallicity. } We compared the results of the scaling relations with the observational results of \textit{Kepler} \citep{lund17} and \textit{CoRoT} \citep{hekk10}. Most of the observational results are well reproduced, which indicates that the scaling relations work well. Our calculations show that the lifetimes of modes can be affected by stellar metallicity. The effects of metallicity cannot be fully described by the changes in radius and effective temperature.
Therefore, the effects of metallicity on $\tau$ could cause the $\tau$ of some stars, such as CoRoT 102767771, to deviate from the results of the scaling relations. \section*{Acknowledgments}
\section{Introduction} \label{sec_intro} With recent improvements in deep neural networks, researchers have developed neural network based vocoders such as WaveNet \cite{WAVENET1} and SampleRNN \cite{SAMPLERNN}. These models showed their ability to generate high-quality waveforms from acoustic features. Some researchers further devised neural network based text-to-speech (TTS) models which can replace the entire TTS system with neural networks \cite{DV1, DV2, DV3, TACO1, TACO2}. Neural network based TTS models can be built without prior knowledge of a language. Given enough (speech, text) pair data, neural TTS models can be built easily compared to previous approaches, which require carefully designed features. Furthermore, neural network based TTS models are capable of generating speech with different voices by conditioning on a speaker index \cite{DV2, DV3} or an emotion label \cite{EMOTTS}. Some researchers have tried to imitate a new speaker's voice using the speaker's recordings \cite{VOICELOOP}. Taigman et al. reported their model's ability to mimic a new speaker's voice by learning a speaker embedding of the new speaker \yrcite{VOICELOOP}. However, this approach requires an additional training stage and transcriptions of the new speaker's speech samples. The transcription may not always be available, and the additional training stage prohibits immediate imitation of a new speaker's voice. In this study, we propose a voice imitating TTS model that can imitate a new speaker's voice without a transcript of the speech sample or additional training. This enables immediate voice imitation using only a short speech sample of a speaker. The proposed model takes two inputs: (1) target text and (2) a speaker's speech sample. The speech sample is first transformed into a speaker embedding by the speaker embedder network.
Then a neural network based TTS model generates speech output by conditioning on the speaker embedding and the text input. We implemented a baseline multi-speaker TTS model based on Tacotron, and we implemented the voice imitating TTS model by extending the baseline model. We investigated the latent space of the learned speaker embeddings by visualizing it with principal component analysis (PCA). We qualitatively compared the similarity of the voices from both TTS models with the ground truth data. We further conducted two types of surveys to analyze the results quantitatively. The first survey compared the generation quality of the voice imitating TTS and the multi-speaker TTS. The second survey checked how speaker-discriminable the speech samples generated by both models are. The main contributions of this study can be summarized as follows: \vspace{-.5em} \begin{enumerate}[leftmargin=2em] \item The proposed model makes it possible to imitate a new speaker's voice using only a 6-second-long speech sample. \item Imitating a new speaker's voice can be done immediately without additional training. \item Our approach allows the TTS model to utilize various sources of information by changing the input of the speaker embedder network. \end{enumerate} \section{Background} \label{sec_related} In this section, we review previous works related to our study. We cover both traditional TTS systems and neural network based TTS systems. The latter include neural vocoders, single-speaker TTS, multi-speaker TTS, and voice imitation models. \begin{figure*}[ht] \centerline{\includegraphics[width=14cm]{net1}} \caption{Multi-speaker Tacotron}\label{fig-net1} \end{figure*} Common TTS systems are composed of two major parts: (1) a text encoding part and (2) a speech generation part. Using prior knowledge about the target language, domain experts have defined useful features of the target language and extracted them from input texts.
This process is called the text encoding part, and many natural language processing techniques are used in this stage. For example, a grapheme-to-phoneme model is applied to input texts to obtain phoneme sequences, and a part-of-speech tagger is applied to obtain syntactic information. In this manner, the text encoding part takes a text input and returns various linguistic features. The following speech generation part then takes the linguistic features and generates the waveform of the speech. Examples of the speech generation part include the concatenative and parametric approaches. The concatenative approach generates speech by connecting short units of speech at the phoneme or sub-phoneme level, while parametric TTS utilizes a generative model to generate speech. Having seen neural networks show great performance in regression and classification tasks, researchers have tried to substitute neural networks for previously used components of TTS systems. Some groups of researchers came up with neural network architectures that can substitute for the vocoder of the speech generation part. These works include WaveNet \cite{WAVENET1} and SampleRNN \cite{SAMPLERNN}. WaveNet can generate speech by conditioning on several linguistic features, and Sotelo et al. showed that SampleRNN can generate speech by conditioning on vocoder parameters \yrcite{CH2WAV}. Although these approaches can substitute for some parts of previously used speech synthesis frameworks, they still require external modules to extract the linguistic features or the vocoder parameters. Other researchers came up with neural network architectures that can substitute for the whole speech synthesis framework. Deep Voice 1 \cite{DV1} is made of five modules, all modeled using neural networks. The five modules exhaustively substitute for the text encoding part and the speech generation part of the common speech synthesis framework.
While Deep Voice 1 is composed only of neural networks, it was not trained in an end-to-end fashion. Wang et al. proposed a fully end-to-end speech synthesis model called Tacotron \yrcite{TACO1}. Tacotron can be regarded as a variant of a sequence-to-sequence network with an attention mechanism \cite{SEQATT}. Tacotron is composed of three modules: encoder, decoder, and post-processor (refer to Figure \ref{fig-net1}). Tacotron basically follows the sequence-to-sequence framework with attention, converting a character sequence into the corresponding waveform. More specifically, the encoder takes the character sequence as input and generates a text encoding sequence of the same length as the character sequence. The decoder generates a Mel-scale spectrogram in an autoregressive manner. Combining the attention alignment with the text encoding gives a context vector, and the decoder RNN takes the context vector and the output of the attention RNN as inputs. The decoder RNN predicts the Mel-scale spectrogram, and the post-processor module subsequently generates a linear-scale spectrogram from the Mel-scale spectrogram. Finally, the Griffin-Lim reconstruction algorithm estimates the waveform from the linear-scale spectrogram \cite{GLRECON}. Single-speaker TTS systems have been further extended to multi-speaker TTS systems, which can generate speech by conditioning on a speaker index. Arik et al. proposed Deep Voice 2, a modified version of Deep Voice 1, to enable multi-speaker TTS \yrcite{DV1, DV2}. By feeding learned speaker embeddings as nonlinearity biases, recurrent neural network initial states, and multiplicative gating factors, they showed their model can generate multiple voices. They also showed Tacotron is able to generate multiple voices using a similar approach. Another study reported a TTS system that can generate voices containing emotions \cite{EMOTTS}.
This approach is similar to the multi-speaker Tacotron in the Deep Voice 2 paper, but the model could be built with fewer speaker embedding input connections. The multi-speaker TTS model can be further extended to a voice imitation model. Current multi-speaker TTS models take a one-hot speaker index vector as input, and this is not easily extendable to generating voices that are not in the training data. Because the model can learn embeddings only for the speakers represented by one-hot vectors, there is no way to obtain a new speaker's embedding. If we want to generate speech of a new speaker, we need to retrain the whole TTS model or fine-tune its embedding layer. However, training the network requires a large amount of annotated speech data, and it takes time to train the network until convergence. Taigman et al. proposed a model that can mimic a new speaker's voice \yrcite{VOICELOOP}. While freezing the model's parameters, they backpropagated errors using the new speaker's (speech, text, speaker index) pairs to obtain a learned embedding. However, this model does not overcome the problems we mentioned earlier: the retraining step requires (speech, text) pairs, which can be inaccurate or even unavailable for data from the wild, and because of the additional training, voice imitation cannot be done immediately. In this study, we propose a TTS model that does not require annotated (speech, text) pairs, so that it can be utilized in more general situations. Moreover, our model can immediately mimic a new speaker's voice without retraining. \section{Voice imitating neural speech synthesizer} \label{sec_method} \subsection{Multi-speaker TTS} \label{ssec_multiTTS} One advantage of using neural networks for a TTS model is that conditioning the generation is easy. For instance, we can add a condition simply by adding a speaker index input.
Among several approaches to neural network based multi-speaker TTS models, we decided to adopt the architecture of Lee et al. \yrcite{EMOTTS}. Their model extends Tacotron to take a speaker embedding vector at the decoder (see Figure \ref{fig-net1}). If we drop the connections from the one-hot speaker ID input and the speaker embedding vector $s$, there is no difference from the original Tacotron architecture. The model has two targets in its objective function: (1) the Mel-scale spectrogram target $Y_{mel}$ and (2) the linear-scale spectrogram target $Y_{linear}$. The L1 distances between the Mel-scale spectrograms $\hat{Y}_{mel}$ and $Y_{mel}$ and between the linear-scale spectrograms $\hat{Y}_{linear}$ and $Y_{linear}$ are added to form the objective function as follows: \begin{equation}\label{eq_trainigobj} Loss = ||Y_{linear}-\hat{Y}_{linear}||_1 + ||Y_{mel}-\hat{Y}_{mel}||_1 \end{equation} where the $\hat{Y}$'s are outputs of Tacotron and the $Y$'s are the ground truth spectrograms. Note that there is no direct supervision on the speaker embedding vector; each speaker index has a corresponding speaker embedding vector $s$ learned from the error backpropagated from the loss function (\ref{eq_trainigobj}). By this formulation, the model can store in the lookup table only the speaker embeddings that appear in the training data. When we want to generate speech with a new speaker's voice, we need another speaker embedding for that speaker. In order to obtain a speaker embedding of an unseen speaker, we have to train the model again with the new speaker's data. This retraining process consumes much time, and the model's usability is limited to voices with a large amount of data.
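A minimal sketch of the objective in Equation (\ref{eq_trainigobj}), assuming the spectrograms are stored as NumPy arrays of shape (frequency bins, frames); the function name is hypothetical:

```python
import numpy as np

def tacotron_loss(y_mel_hat, y_mel, y_lin_hat, y_lin):
    """Sum of elementwise L1 distances on the Mel- and linear-scale
    spectrograms, mirroring Equation (eq_trainigobj).  Inputs are
    (freq_bins, frames) arrays; predictions carry the hat."""
    return np.abs(y_lin - y_lin_hat).sum() + np.abs(y_mel - y_mel_hat).sum()
```

Because both terms are plain L1 distances, gradients flow to the decoder (Mel term) and the post-processor (linear term) without any supervision on the speaker embedding itself.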
\begin{figure}[t] \centerline{\includegraphics[width=4cm]{embedder}} \caption{The speaker embedder network}\label{fig-embednet} \end{figure} \begin{figure*}[t] \centerline{\includegraphics[width=14cm]{net2}} \caption{Voice imitating Tacotron}\label{fig-net2} \end{figure*} \subsection{Proposed model} \label{ssec_imiTTS} One possible approach to address the problem is direct manipulation of the speaker embedding vector. Assuming the speaker embedding vector can represent arbitrary speakers' voices, we may obtain the desired voice by changing the values of the speaker embedding vector, but it is hard to find the exact combination of values. This approach is not only inaccurate but also labor intensive. Another possible approach is to retrain the network using the new speaker's data. With a sufficient amount of data, this approach can give us the desired speech output. However, enough data from the new speaker is rarely available, and the training process requires much time until convergence. To tackle this problem more efficiently, we propose a novel TTS architecture that can generate a new speaker's voice using a small speech sample. The imitation of a speaker's voice can be done immediately, without requiring additional training or a manual search for the speaker embedding vector. The proposed voice imitating TTS model is an extension of the multi-speaker Tacotron in Section \ref{ssec_multiTTS}. We added a subnetwork that predicts a speaker embedding vector from a speech sample of a target speaker. Figure \ref{fig-embednet} shows this subnetwork, the speaker embedder, which contains convolutional layers followed by fully connected layers. This network takes a log-Mel-spectrogram as input and predicts a fixed-dimensional speaker embedding vector. Note that the input of the speaker embedder network is not limited to speech samples.
Substituting the input of the speaker embedder network enables TTS models to condition on various sources of information, but we focus on conditioning on a speaker's speech sample in this paper. Predicting the speaker embedding vector requires only one forward pass of the speaker embedder network, which enables the proposed model to immediately generate speech for a new speaker. Although the input spectrograms may have various lengths, the max-over-time pooling layer, which is located at the end of the convolutional layers, squeezes the input into a fixed-dimensional vector of length 1 along the time axis. In this way, the voice imitating TTS model can deal with input spectrograms of arbitrary length. The speaker embedder, with a speech sample as input, replaces the lookup table with its one-hot speaker ID input in the multi-speaker Tacotron, as described in Figure \ref{fig-net2}. For training the voice imitating TTS model, we use the same objective function (\ref{eq_trainigobj}) as the multi-speaker TTS. Note again that there is no supervision on training the speaker embedding vector. \section{Experiments} \subsection{Dataset} In accordance with Arik et al., we used the VCTK corpus, which contains 109 native English speakers with various accents \yrcite{DV2}. The speakers in the VCTK corpus vary in accent and age, and each speaker recorded around 400 sentences. We preprocessed the raw dataset in several ways. First, we manually annotated transcripts for audio files which did not have corresponding transcripts. Then, for the text data, we filtered out symbols that are not English letters, numbers, or punctuation marks. We kept capital letters without decapitalization. For the audio data, we trimmed silence using the WebRTC Voice Activity Detector \cite{WEBRTC}. Reportedly, trimming silence is important for training Tacotron \cite{DV2,TACO1}. Note that there is no label which tells the model when to start speech.
If there is silence at the beginning of an audio file, the model cannot learn the proper time to start speaking. Removing silence alleviates this problem by aligning the starting times of the utterances. After the trimming, the total length of the dataset became 29.97 hours. Then, we calculated the log-Mel-spectrogram and log-linear-spectrogram of each audio file. When generating spectrograms, we used a Hann window of frame length 50ms and shifted the windows by 12.5ms. \subsection{Training} In this experiment, we trained two TTS models: the multi-speaker Tacotron and the voice imitating Tacotron. In the rest of this paper, we use the terms multi-speaker TTS and voice imitating TTS to refer to the two models, respectively. To train the latter model, we performed an additional data preparation step. We prepared speech samples of each speaker, since the model needs to predict a speaker embedding from the log-Mel-spectrogram of a speech sample. Since it is hard to capture a speaker's characteristics in a short sentence, we concatenated each speaker's whole speech data and made samples by applying a fixed-size rectangular window with overlap. The resulting window covers around 6 seconds of speech, which can contain several sentences. We fed a speech sample of a target speaker to the model together with the text input, while randomly drawing the speech sample from the windowed sample pool. We did not use the speech sample matched to the text input, to prevent the model from learning to generate speech by copying the input speech sample. Furthermore, when training the voice imitating TTS model, we held out 10 speakers' data as a test set, since we wanted to check whether the model can generate unseen speakers' voices. The profiles of the 10 held-out speakers are shown in Table \ref{tbl-testset}. We selected them to have a similar distribution to the training data in terms of gender, age, and accent.
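The windowing step described above can be sketched as follows. The lengths are assumptions for illustration: with the 12.5 ms frame shift, a 480-frame window covers the stated 6 seconds, and the 50\% overlap (240-frame hop) is hypothetical, since the paper does not state the overlap amount.

```python
def window_pool(num_frames, win=480, hop=240):
    """Slice a speaker's concatenated spectrogram of `num_frames` frames
    into fixed-size overlapping windows, returned as (start, end) frame
    index pairs.  win=480 frames ~ 6 s at a 12.5 ms frame shift (assumed);
    hop=240 gives 50% overlap (assumed)."""
    return [(s, s + win) for s in range(0, num_frames - win + 1, hop)]
```

At training time, one `(start, end)` pair would be drawn at random per step to feed the speaker embedder, excluding any window that overlaps the utterance paired with the current text input.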
\begin{table}[t] \caption{Speaker profiles of the test set.} \label{tbl-testset} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{ccccc} \toprule ID & Age & Gender & Accents & Region \\ \midrule 225 & 23 & F & English & S. England \\ 226 & 22 & M & English & Surrey \\ 243 & 22 & M & English & London \\ 244 & 22 & F & English & Manchester\\ 262 & 23 & F & Scottish & Edinburgh \\ 263 & 22 & M & Scottish & Aberdeen \\ 302 & 20 & M & Canadian & Montreal \\ 303 & 24 & F & Canadian & Toronto \\ 360 & 19 & M & American & New Jersey \\ 361 & 19 & F & American & New Jersey \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} For Tacotron's hyperparameters, we basically followed the specifications in the original Tacotron paper except for the reduction factor $r$ \cite{TACO1}. We set $r=5$, which means generating 5 spectrogram frames at each time step. For the hyperparameters of the speaker embedder network, we used the following settings. We used a 5-layered 1D-convolutional neural network with 128 channels and a window size of 3. The first 2 layers have a stride of 1 and the remaining 3 layers have a stride of 2. We used 2 linear layers with 128 hidden units after the convolutional layers. We used ReLU as the nonlinearity and applied batch normalization to every layer \cite{BN}. We also applied dropout with a ratio of 0.5 to improve generalization \cite{DROPOUT}. The last layer of the speaker embedder network is a learnable projection layer without nonlinearity or dropout. We used a mini-batch size of 32. During training, the limited capacity of GPU memory prevented us from loading a mini-batch of long sequences at once. To maximize the utilization of the data, we used truncated backpropagation through time \cite{TBPTT}. We used gradient clipping with a threshold of 1.0 to avoid the exploding gradient problem \cite{CLIPPING}.
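A shape-level sketch of the speaker embedder described above, using random stand-in weights (an assumption; the real network is trained, and batch normalization, dropout, and the two linear layers are omitted here for brevity). It shows the key property: five 1D-convolutional layers with window size 3 and strides (1, 1, 2, 2, 2), followed by max-over-time pooling, map inputs of any length to a fixed 128-dimensional vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, stride):
    """Minimal valid 1D convolution with ReLU.  x is (channels, time),
    w is (out_channels, in_channels, kernel)."""
    out_ch, in_ch, k = w.shape
    t_out = (x.shape[1] - k) // stride + 1
    y = np.zeros((out_ch, t_out))
    for t in range(t_out):
        patch = x[:, t * stride:t * stride + k]               # (in_ch, k)
        y[:, t] = np.maximum((w * patch).sum(axis=(1, 2)), 0.0)  # ReLU
    return y

def embed(log_mel, ch=128):
    """Random-weight stand-in for the speaker embedder's convolutional
    stack: strides (1, 1, 2, 2, 2), then max-over-time pooling."""
    x = log_mel
    in_ch = x.shape[0]
    for stride in (1, 1, 2, 2, 2):
        w = rng.standard_normal((ch, in_ch, 3)) * 0.05
        x = conv1d(x, w, stride)
        in_ch = ch
    return x.max(axis=1)  # max over the time axis -> fixed (ch,) vector
```

Feeding spectrograms of different durations through `embed` yields vectors of identical shape, which is what lets the model embed arbitrary-length speech samples in a single forward pass.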
For the optimization, we used ADAM \cite{ADAM}, which adaptively changes the scale of updates, with values of 0.001, 0.9, and 0.999 for the learning rate, $\beta_1$, and $\beta_2$, respectively. \section{Result} \begin{table}[t] \begin{center} \begin{tabular} {cc} \parbox[c]{4cm}{\includegraphics[width=4cm]{pca_imi}}& \parbox[c]{4cm}{\includegraphics[width=4cm]{pca_multi}}\\ (a) Voice imitating TTS & (b) Multi-speaker TTS\\ \end{tabular} \end{center} \captionof{figure} {Principal components of the speaker embeddings of the voice imitating TTS and the multi-speaker TTS, shown with the gender of the speakers} \label{fig_pca} \end{table} \begin{table*}[t] \begin{center} \begin{tabular} {ccc} \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{mimic_tr_f_g1}}& \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{multi_tr_f_g1}}& \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{mimic_tr_f_t1}}\\ Voice imitating TTS & Multi-speaker TTS & Ground truth \\ \end{tabular} \end{center} \captionof{figure} {Mel-spectrograms from the multi-speaker TTS and the voice imitating TTS, generated for the train set} \label{fig_mimic1} \end{table*} \begin{table*}[t] \begin{center} \begin{tabular} {ccc} \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{mimic_te_f_g1}}& \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{multi_te_f_g1}}& \parbox[c]{5.5cm}{\includegraphics[width=5.5cm]{mimic_te_f_t1}}\\ Voice imitating TTS & Multi-speaker TTS & Ground truth\\ \end{tabular} \end{center} \captionof{figure} {Mel-spectrograms from the multi-speaker TTS and the voice imitating TTS, generated for the test set} \label{fig_mimic2} \end{table*} We first checked the performance of the voice imitating TTS qualitatively by investigating the learned latent space of the speaker embeddings. To check how the speaker embeddings are trained, we applied PCA to them. Previous studies reported that discriminative patterns were found in the speaker embeddings in terms of gender and other aspects \cite{DV2, VOICELOOP}.
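The two-component PCA projection used for this visualization can be sketched as follows, with random stand-in embeddings (an assumption; the real inputs are the learned 128-dimensional speaker embeddings). Centering the embedding matrix and taking its SVD gives the principal directions directly.

```python
import numpy as np

def pca_2d(embeddings):
    """Project (n_speakers, dim) embeddings onto their first two
    principal components via SVD of the centered matrix."""
    x = embeddings - embeddings.mean(axis=0)        # center each dimension
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:2].T                             # (n_speakers, 2) scores
```

The first returned column has the largest variance, so a gender split, if the embeddings encode one, tends to show up along that axis first.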
Figure \ref{fig_pca} shows the first two principal components of the speaker embeddings, where green and red represent female and male speakers, respectively. We could see a clear separation in the speaker embeddings of the multi-speaker TTS, as reported in other studies. Although the speaker embeddings of the voice imitating TTS had an overlapped area, we observed that the female embeddings are dominant in the left part, whereas the male embeddings are dominant in the right part. Besides, some of the embeddings are located far from the center. We suspect that the overlap and the outliers exist because each speaker embedding is extracted from a randomly chosen speech sample of a speaker. A speech sample of a male speaker may contain only a particularly low-pitched voice, or a speech sample of a female speaker only a particularly high-pitched voice. This may result in the prediction of the outlying embeddings, and a similar argument can be applied to the overlapping embeddings. To check how similar the generated voices are to the ground truth voice, we compared spectrograms and speech samples from the voice imitating TTS with those of the multi-speaker TTS model and the ground truth data. By feeding a text from the training data while conditioning on the same speaker, we generated samples from the voice imitating TTS and the multi-speaker TTS, and compared the generated samples with the corresponding ground truth speech samples. Example spectrograms from both models and the ground truth data are shown in Figure \ref{fig_mimic1}. We observed that both models gave similar spectrograms, and the difference between them was negligible when we listened to the speech samples. From the heights and widths of the harmonic patterns in the spectrograms, we could observe that they have similar pitch and speed.
When we compared the generated samples of both models to the ground truth data, we could observe that the samples from both models have similar pitch to the ground truth. This suggests that the model can learn to predict the speaker embedding from speech samples. Similarly, we analyzed spectrograms to check whether the voice imitating TTS generalizes well to the test set. Note that the training data of the multi-speaker TTS included the test set of the voice imitating TTS, because otherwise the multi-speaker TTS cannot generate speech for unseen speakers. In Figure \ref{fig_mimic2}, we could also observe spectrograms of the generated samples showing similar patterns, especially for the pitch of each speaker. From these results, we conjecture that the model at least learned to encode pitch information in the speaker embedding, and that this generalizes to unseen speakers. Since it is difficult to evaluate generated speech samples objectively, we conducted surveys using crowdsourcing platforms such as Amazon's Mechanical Turk. We first made speech sample comparison questions to evaluate the voice quality of the generated samples. This survey is composed of 10 questions. For each question, 2 audio samples--one from the voice imitating TTS and the other from the multi-speaker TTS--are presented to participants, and the participants are asked to give a score from -2 (multi-speaker TTS is far better than voice imitating TTS) to 2 (multi-speaker TTS is far worse than voice imitating TTS). We gathered 590 ratings on the 10 questions from 59 participants (see Figure \ref{fig-survey1}). From the result, we could observe that the ratings were concentrated around the center, with an overall mean score of $-0.10 \pm 1.16$. It seems there is not much difference in voice quality between the voice imitating TTS and the multi-speaker TTS.
\begin{figure}[t] \centerline{\includegraphics[width=7cm]{survey1}} \caption{Average scores of the generated-sample comparison survey, where the symbols mean: M$>>$I - M (the multi-speaker TTS) is far better than I (the voice imitating TTS), M$>$I - M is slightly better than I, M$=$I - both M and I have the same quality }\label{fig-survey1} \end{figure} For the second survey, we made speaker identification questions to check whether the generated speech samples contain distinct speaker characteristics. The survey consists of 40 questions, where each question has 3 audio samples: a ground truth sample and two generated samples. The two generated samples were from the same TTS model, but each was conditioned on a different speaker's index or speech sample. The participants are asked to choose the generated sample that sounds like the same speaker as the ground truth sample. From the crowdsourcing platform, we found 50 participants to survey the voice imitating TTS and another 50 participants to survey the multi-speaker TTS model. The resulting speaker identification accuracies were 60.1$\%$ and 70.5$\%$ for the voice imitating TTS and the multi-speaker TTS respectively. Considering that random selection would score 50$\%$ accuracy, we may argue that accuracies higher than 50$\%$ reflect a distinguishable speaker identity in the generated speech samples. By the nature of the problem, it is more difficult for the voice imitating TTS to generate a distinct voice, because it must capture a speaker's characteristics from a short sample, whereas the multi-speaker TTS can learn them from a vast amount of speech data. Considering these difficulties, we think the score gap between the two models is explainable. \section{Conclusion} We have proposed a novel architecture that can imitate a new speaker's voice.
In contrast to current multi-speaker speech synthesis models, the voice imitating TTS can generate a new speaker's voice from a small amount of speech data. Furthermore, our method can imitate a voice immediately, without additional training. We have evaluated the generation performance of the proposed model both qualitatively and quantitatively, and we have found no significant difference in voice quality between the voice imitating TTS and the multi-speaker TTS. Though the generated speech from the voice imitating TTS showed a less distinguishable speaker identity than that from the multi-speaker TTS, the generated voices contained pitch information that can make a voice distinguishable from other speakers' voices. Our approach is particularly differentiated from previous approaches by learning to extract features with the speaker embedder network. Feeding various sources of information to the speaker embedder network makes TTS models more versatile, and exploring this possibility is part of our future work. We expect intriguing research can be done by extending our approach. One possible direction is multi-modal conditioned text-to-speech. Although this paper has focused on extracting the speaker embedding from a speech sample, the speaker embedder network can learn to extract embeddings from various sources such as video. In this paper, the speaker embedder network extracted a speaker's characteristics from a speech sample; by applying the same approach to a facial video sample, it may capture emotion or other characteristics from the video. The resulting TTS system would be able to generate a speaker's voice with the appropriate emotion or characteristics for a given facial video clip and input text. Another direction is cross-lingual voice imitation.
Since our model requires no transcript corresponding to the new speaker's speech sample, it has the potential to be applied in a cross-lingual environment. For instance, a Chinese speaker's voice could be imitated to generate English sentences.
\section{Introduction} General video game playing (GVGP) and general game playing (GGP) aim at designing AI agents that are able to play more than one (video) game successfully, alone and without human intervention. One of the early-stage challenges is to define a common framework that allows the implementation and testing of such agents on multiple games. For this purpose, the General Video Game AI (GVGAI) framework~\cite{perez2018general} and the General Game Playing framework~\cite{genesereth2005general,love2008general} have been developed. Competitions using the GVGAI and GGP frameworks have significantly promoted the development of a variety of AI methods for game-playing. Examples include tree search algorithms, evolutionary computation, hyper-heuristics, hybrid algorithms, and combinations of them. GVGP is more challenging due to the possibly stochastic nature of the games to be played and the short decision time. Five competition tracks have been designed on top of the GVGAI framework for specific research purposes. The planning and learning tracks focus on designing an agent that is capable of playing several unknown games, respectively with or without a forward model to simulate future game states. The level and rule generation tracks have the objective of designing AI programs that are capable of creating levels or rules based on a game specification. Despite the fact that the initial purpose of developing the GVGAI framework was to facilitate research on GVGP, GVGAI and its game-playing agents have also been used in applications other than competitive GGP. For instance, the GVGAI level generation track has used the GVGAI game-playing agents to evaluate automatically generated game levels. Relative algorithm performance \cite{nielsen2015general} has been used to understand how several agents perform in the same level. However, no introspection into the agents' behaviour or decision-making process has been used so far.
The main purpose of this paper is to give a general set of metrics that can be gathered and logged during an agent's decision-making process to understand its in-game behaviour. These are meant to be generic, shallow and flexible enough to be applied to any kind of agent regardless of its algorithmic nature. Moreover, we also provide a generic methodology to analyse and compare game-playing agents in order to get an insight into how the decision-making process is carried out. This method is later addressed as the \textit{comparison method}. Both the metrics and the comparison method are useful in several applications. They can be used for level generation: knowing the behaviour of an agent and what attracts it in the game-state space makes it possible to measure how well a specific level design suits a certain play-style, therefore pushing the design to suit the agent in a recommender-system fashion~\cite{machado2016shopping}. From a long-term perspective, this can be helpful to understand a human player's behaviour and then personalise a level or a game to meet this player's taste or playing style. Solving the dual problem is useful as well: in the process of looking for an agent that can play a certain level design well, having reliable metrics to analyse the agent's behaviour could significantly speed up the search. Additionally, by analysing the collected metrics, it is possible to find out whether a rule or an area of the game world is obsolete. This can also be applied more generally to understanding game-playing algorithms: it is well known that there are black-box machine learning techniques that offer no introspection into their reasoning process, so being able to compare, in a shallow manner, the decision-making processes of different agents can help shed some light on their nature. A typical example is a neural network that, given some input features, outputs an action probability vector.
With the proposed metrics and methodology it would be possible to estimate its behaviour without actually watching the agent play the game and extracting behavioural information by hand. The rest of this paper is structured as follows. In Section \ref{sec:back}, we provide background on the GVGAI framework, focusing in particular on the game-playing agents, three examples of how agent performance metrics have been used so far in scenarios other than pure game-play, and an overview of MCTS-based agents. Then, we propose a comparison method, a set of metrics and an analysis procedure in Section \ref{sec:methods}. Experiments using these metrics are described in Section \ref{sec:ex_setup} and the results are discussed in Section \ref{sec:xp} to demonstrate how they provide a deeper understanding of the agent's behaviour and decision-making. Last, we draw final considerations and list possible future work in Section \ref{sec:conc}. \section{Background}\label{sec:back} \subsection{General Video Game AI framework} The General Video Game AI (GVGAI) framework~\cite{perez2018general} has been used for organising GVGP competitions at several international conferences on games or evolutionary computation, and for research and education in institutions worldwide. The main GVGAI framework is implemented in \emph{Java} and \emph{Python}. A Python-style Video Game Description Language (VGDL)~\cite{ebner2013towards,schaul2013video} was developed to make it easy to create and add new games to the framework. The framework enables several tracks with different research purposes. The objective of the single-player~\cite{perez20162014} and two-player planning~\cite{gaina20172016} tracks is to design an AI agent that is able to play several different video games, respectively alone or with another agent. With access to the current game state and the forward model of the game, a planning agent is required to return a legal action within a limited time.
Thus, it can simulate games to evaluate an action or a sequence of actions and obtain the possible future game state(s). In the learning track, however, no forward model is given, and a learning agent needs to learn in a trial-and-error way. There are two other tracks based on the GVGAI framework which focus more on game design: rule generation~\cite{khalifa2016general} and level generation~\cite{khalifa2017rulegen}. In the rule generation track, a competition entry (generator) is required to generate game rules (interactions and game termination conditions) given a game level as input, while in the level generation track, an entry is asked to generate a level for a certain game. The rule generator or level generator should be able to generate rules or levels for any game given a specified search space. \subsection{Monte Carlo Tree Search-based agents} Monte Carlo Tree Search (MCTS) has been the state-of-the-art algorithm in game playing~\cite{browne2012survey}. The goal of MCTS is to approximate the value of the actions/moves that may be taken from the current game state. MCTS iteratively builds a search tree using Monte Carlo sampling in the decision space, and the selection of the node (action) to expand is based on the outcome of previous samplings and on a Tree Policy. A classic Tree Policy is the Upper Confidence Bound (UCB)~\cite{auer2002finite}. UCB is a classic multi-armed bandit algorithm which aims at balancing exploiting the best-so-far arm and exploring the least-pulled arms. Each arm has an unknown reward distribution. In the game-playing case, each arm models a legal action from the game state (thus a node in the tree); a reward can be the game score, a win or loss of a game, or a designed heuristic.
The UCB Tree Policy selects the action (node) $a^*$ such that $ a^*=\arg \max_{a \in A}\ \bar x_{a} + \sqrt{\frac{\alpha \ln{n}}{n_a}}$, where $A$ denotes the set of legal actions at the game state, $n$ and $n_a$ refer to the total number of plays and the number of times that action $a$ has been played (visited), and $\alpha$ is called the exploration factor. The GVGAI framework provides several sample controllers for each of the tracks. For instance, \emph{sampleMCTS} is a vanilla implementation of MCTS for single-player games, but performs reasonably well on most of the games. M. Nelson~\cite{nelson2016investigating} tests the \emph{sampleMCTS} on more than sixty GVGAI games, using different amounts of time budget for planning at every game tick, and observes that this implementation of MCTS is able to reduce the loss rate given longer planning time. More advanced variants of MCTS have been designed for playing a particular game (e.g., the game of Go~\cite{silver2016mastering,silver2017mastering}), for general video game playing (e.g., \cite{perez20162014,soemers2016enhancements}) or for general game playing (e.g., \cite{mehat2010combining}). Recently, Bravi et al.~\cite{bravi2017master} customised various heuristics for particular GVGAI games, and Sironi et al.~\cite{sironi2018self} designed several Self-Adaptive MCTS variants which use hyper-parameter optimisation methods to tune the exploration factor and maximal roll-out depth on-line during game playing. \subsection{Agent performance evaluation} Evaluating the performance of an agent is sometimes a very complex task, depending on how the concept of performance is defined. In the GVGAI planning and learning competitions, an agent is evaluated based on the number of games it wins over a fixed number of trials, the average score that it gets and the average duration of the games. Sironi et al.
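The UCB selection rule above can be sketched as follows. The function name, the flat arrays of per-action statistics and the treatment of unvisited actions are our own illustration, not the GVGAI implementation.

```python
import math

def ucb_select(mean_reward, plays, total_plays, alpha=math.sqrt(2)):
    """Pick the action index maximising  x̄_a + sqrt(alpha * ln(n) / n_a).

    mean_reward[i]: empirical mean reward x̄ of action i; plays[i]: n_a.
    Unvisited actions get infinite value so that each arm is tried once.
    """
    best, best_val = None, float("-inf")
    for i, (x, n_a) in enumerate(zip(mean_reward, plays)):
        if n_a == 0:
            val = float("inf")                      # force initial exploration
        else:
            val = x + math.sqrt(alpha * math.log(total_plays) / n_a)
        if val > best_val:
            best, best_val = i, val
    return best
```

With equal visit counts the rule reduces to picking the highest empirical mean; the exploration term only dominates for rarely visited arms.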
\cite{sironi2018self} evaluate the quality of their designed agents using a heuristic which combines the obtained score with an extra bonus or penalty depending on whether the agent reaches a winning or a losing state, respectively. The GVGAI framework has also been used for purposes other than the ones laid out by the competition tracks. Bontrager et al. \cite{bontrager2016matching} cluster some GVGAI single-player and two-player games using game features and agent performance extracted from the playing data of the single-player and two-player planning competition entries, respectively. In particular, the performance of an agent, represented by its win ratio in \cite{bontrager2016matching}, is used to cluster the games into four groups: games easy to win, hard games, games that an MCTS agent can play well, and games that can be won by a specific set of agents. The idea behind that work is interesting, although the clustering results in three small groups and a very large one. This suggests that using more introspective metrics could help cluster the games more finely. GVGAI has also been used as a test bed for evolving MCTS tree policies (in the form of a mathematical formula for decision making) for specific games~\cite{bravi2017master}. The work in \cite{bravi2017master} consists of evolving Tree Policies (formulae) using Genetic Programming, where the fitness evaluation is based on the performance of an MCTS agent which uses the specific tree policy. Once again, the information logged from the playthroughs and used by the fitness function was a combination of win ratio, average score and average game-play time, in terms of the number of game ticks. Unfortunately, no measurement was made of the robustness of the agent's decision-making process, which could have been embedded in the fitness function to possibly enhance the evolutionary process.
In the recent Dagstuhl seminar on AI-Driven Game Design, game researchers envisioned a set of features to be logged during game-play, divided into four main groups: direct logging features, general indirect features, agent-based features and interpreted features~\cite{measures2018dagstuhl}. A preliminary example of how such features can be extracted and logged in the GVGAI framework has also been provided~\cite{measures2018dagstuhl}. Among the direct logging features we find game information that does not need any sort of interpretation; a few examples are game duration, action logs, game outcome and score. In contrast, the general indirect features require some degree of interpretation or analysis of the game state, such as the entropy of the actions, the game world and the game state space. The agent-based features gather information about the agent(s) taking part in the game, for example about the agent's surroundings, the exploration of the game-state space or the convention between different agents. Finally, the interpreted features are based on metrics already defined in previous works, such as drama and outcome uncertainty~\cite{browne2010evolutionary} or skill depth~\cite{liu2017evolving}. \section{Methods}\label{sec:methods} This section first introduces a set of metrics that can potentially be extracted from any kind of agent regardless of its algorithmic nature, aiming at giving an introspection of the decision-making process of a game-playing agent in a shallow and general manner (Section \ref{metrics}). Then we present a method to compare the decisions of two distinct game-playing agents under identical conditions using the metrics introduced previously. As described in \cite{holmgaard2016evolving}, the decision-making comparison can be done at growing levels of abstraction: the action, tactical or strategic level. Our proposed method compares decision-making at the action level.
Later, we design a scenario in which the metrics and the comparison method are used to analyse the behaviour of instances of an MCTS agent using different tree policies, comparing them to agents of other algorithmic natures. Finally, we describe the agents used in the experiments. In this paper, the following notations are used. A \textit{playthrough} refers to a complete play of a game from beginning to end. The set of available actions is denoted as $\mathcal{A}$ with $N=|\mathcal{A}|$, and $a_i$ refers to the $i^{th}$ action in $\mathcal{A}$. A \textit{budget} or \textit{simulation budget} is either the number of forward-model calls the agent can make at every game tick to decide the next action to play, or the CPU time that the agent can take. The fixed budget is later addressed as $\mathcal{B}$. \subsection{Metrics} \label{metrics} The metrics presented in this paper are based on two simple and fairly generic assumptions: (1) at each game tick the agent considers each available action $a_i$ for $n_i$ times; (2) at each game tick the agent assigns a value $v(a_i)$ to each available action. In this scenario the agents are designed to operate on a fixed budget $\mathcal{B}$~in terms of real time or number of forward model calls, which makes the measurements comparable and the comparison fair. Due to the stochastic nature of an agent or a game, it is sometimes necessary to make multiple playthroughs for evaluation. The game id, level id, outcome (specifically, win/loss, score, total game ticks) and available actions at every game tick are logged for each playthrough.
Additionally, for each game tick in the playthrough, the agent provides the following set of metrics: \begin{itemize} \item $a^*$: the recommended action to be played next; \item $\overline{p}$: a probability vector where $p_i$ represents the probability of considering $a_i$ during the decision-making process; \item $\overline{v}$: a vector of values $v_i \in \mathcal{R}$ where $v_i$ is the value of playing $a_i$ from the current game state; $v^*$ is the highest value, which implies it is associated with $a^*$. Whenever the agent does not actually have such information about the quality of $a_i$, $v_i$ should be NaN; \item $b$: the ratio of the budget consumed over the fixed available budget $\mathcal{B}$, $b \in [0,1]$, where $0$ and $1$ respectively mean that no budget or the whole of $\mathcal{B}$~ was used by the agent; \item $conv$: convergence; as the budget is being used it is likely for the current $a^*$ to fluctuate, and $conv$~ is the ratio of budget used over $\mathcal{B}$~ when $a^*$ becomes stable. This means that any budget used after $conv$~ has not changed the recommended action; $conv$~$ \in [0,b]$. \end{itemize} It is notable that most of the agents developed for GVGAI try to consume as much budget as possible; however, this is not necessarily a good trait of an agent, and being able to log the amount of budget used and distinguish between a budget-saver and a budget-waster can give an interesting insight into the decision-making process, especially into the confidence of the agent. Since this set of metrics tries to be as generic as possible, we should not limit the metrics because of current agent implementations. The vectors $\overline{p}$ and $\overline{v}$ can be inspected to portray the agent's preference over $\mathcal{A}$. The vector $\overline{p}$ can also be used during the debug phase of designing an agent to see whether it actually ever considers all the available actions.
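The per-tick metrics listed above could be bundled into a simple record. The class and field names below are our own illustration, with the consistency constraints on $\overline{p}$ and on $conv \in [0,b]$ made explicit.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TickMetrics:
    """Metrics logged at one game tick (field names are illustrative)."""
    best_action: int              # a*, index of the recommended action
    p: List[float]                # consideration probabilities, sums to 1
    v: List[Optional[float]]      # action values, None where unknown (NaN)
    b: float                      # ratio of budget used, in [0, 1]
    conv: float                   # budget ratio at which a* became stable

    def __post_init__(self):
        assert abs(sum(self.p) - 1.0) < 1e-6, "p must be a probability vector"
        assert 0.0 <= self.conv <= self.b <= 1.0, "conv must lie in [0, b]"
```

Storing one such record per tick, alongside the per-playthrough header (game id, level id, outcome), yields the log format described in the text.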
Generally, different agents reward actions differently, therefore it is not possible to make a priori assumptions on the range of or the distribution over the values. However, the values in $\overline{v}$ allow us at the very least to rank the actions and, moreover, to obtain information about their boundaries and distributions (given a reasonable amount of data) a posteriori. Furthermore, it is possible to follow the oscillation of such values through the game-play, highlighting critical portions of it. For example, when the $v_i$ are similar (not very far apart from each other considering the value bounds logged) and generally high, we can argue that the agent evaluates all actions as good ones. On the contrary, if the values are generally low, the agent is probably struggling in a bad game scenario. \subsection{Comparison method}\label{sec:comparison} Comparing the decisions made by different agents is not a trivial matter, especially when their algorithmic natures can be very different. The optimal set-up under which we can compare their behaviour is when they are provided the same problem or scenario under exactly the same conditions. This is sometimes called \emph{pairing}. We propose the following experimental set-up: a meta-agent, called the \textit{Shadowing Agent}, instantiates two agents: the \textit{main} agent and the \textit{shadow} agent. At each game tick the \textit{Shadowing Agent}~behaves as a proxy and feeds the current game state to each of the agents, which provide the next action to perform as if it were a normal GVGAI game-play execution. Both agents have a limited budget. Once both the main and shadow agent behaviours are simulated, the \textit{Shadowing Agent}~takes care of logging the metrics described previously for both agents and then returns to the framework the action chosen by the main agent. In this way the actual avatar behaviour in the game is consistent with the main agent, and the final outcome represents its performance.
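The proxy behaviour of the \textit{Shadowing Agent} can be sketched as below; the `act(state)` interface and the logging format are assumptions made for illustration, not the actual GVGAI agent API.

```python
class ShadowingAgent:
    """Proxy that queries a main and a shadow agent on the same game state.

    Both wrapped agents are assumed to expose act(state) -> (action, metrics).
    Only the main agent's action is returned, so the avatar's behaviour and
    the final outcome belong to the main agent; the shadow is only observed.
    """
    def __init__(self, main, shadow):
        self.main, self.shadow = main, shadow
        self.log = []                         # per-tick metrics of both agents

    def act(self, state):
        a_main, m_main = self.main.act(state)        # decision that is executed
        a_shadow, m_shadow = self.shadow.act(state)  # decision only logged
        self.log.append({"main": m_main, "shadow": m_shadow,
                         "agreed": a_main == a_shadow})
        return a_main
```

Pairing both agents on the identical state at every tick is what makes the logged decisions directly comparable.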
In the next sections we use the superscripts $^m$ and $^s$ for a metric relative to the main agent or the shadow agent, respectively. A typical scenario would be comparing radically different agents such as a Random agent, a Monte-Carlo Search agent, a One-Step Look Ahead agent and an MCTS-based agent. Under this scenario, comparing each single coupling of agents results in a matrix of comparisons. All the details on how the agents extract the metrics described previously are given in Section \ref{agents}. \subsection{Analysis Method} We analyse these agents' behaviours in a few games; for each game we run all the possible couplings of main agent and shadow agent, for each couple we run $N_p$ playthroughs and, finally, for each playthrough we save the current metrics for both the main and shadow agents. It is worth remembering that each playthrough has its own length, thus playthrough $i$ has length $l_i$. This means that in order to analyse and compare behaviours we need a well-structured methodology to slice the data appropriately. Our proposed method is represented in Figure \ref{analysis-graph}. The first level of comparison is done at the action level, where we can measure two things: the \textit{Agreement Percentage} $\mathcal{AP}$, the percentage of times the agents agreed on the best action, averaged across the several playthroughs; and the \textit{Decision Similarity} $\mathcal{DS}$, the average symmetric Kullback-Leibler divergence of the two probability vectors $\overline{p}^m$ and $\overline{p}^s$.
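The two action-level measures can be sketched as follows. The function names are ours, and the small smoothing constant is an assumption standing in for however zero probabilities are handled in practice.

```python
import math

def agreement_percentage(main_actions, shadow_actions):
    """AP: percentage of ticks on which main and shadow picked the same a*."""
    same = sum(a == b for a, b in zip(main_actions, shadow_actions))
    return 100.0 * same / len(main_actions)

def symmetric_kl(p, q, eps=1e-12):
    """Per-tick DS building block: KL(p||q) + KL(q||p), smoothed for zeros."""
    p = [max(x, eps) for x in p]
    q = [max(x, eps) for x in q]
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)
```

$\mathcal{DS}$ is then the average of `symmetric_kl` over all logged ticks; identical consideration vectors give a divergence of zero.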
When $\mathcal{AP}$~ is close to $100\%$ or $\mathcal{DS}$~$\sim0$ we have two agents with similar behaviours; at this point we can step to the next level of comparison: \textit{Convergence}, where we compare $conv$$^m$ and $conv$$^s$ to see if there is a faster-converging agent; and \textit{Value Estimation}. This latter level of comparison is thorny: each agent has its own function for evaluating a possible action, so for this step we recommend using these values to rank the actions, treating them as preference evaluations. \textit{Convergence} can highlight both the ambiguity of the surrounding game states and the inability of the agent to recognise important features. If the agents have similar $conv$~ values we can then take a look at the \textit{Efficiency}. This value represents the average amount of budget used by the agent. To summarise, once two agents with similar $\mathcal{AP}$~ or $\mathcal{DS}$~ are found, the next comparison levels highlight the potential preference toward the faster-converging and more budget-saving one. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{analysis} \caption{\label{analysis-graph}The decision graph to compare agents' behaviours.} \end{figure} \section{Experimental set-up} \label{sec:ex_setup} In this section, we show how a typical experiment can be run using the metrics and methods introduced previously.
Each experiment is run over the following games in order to have diverse scenarios that can highlight different behaviours: \begin{itemize} \item Aliens: a game loosely modelled on the Atari 2600's Space Invaders; the agent, at the bottom of the screen, has to shoot the incoming alien spaceships from above while avoiding their blasts; \item Brainman: the objective of the game is for the player to reach the exit; the player can collect diamonds to get points and push keys into doors to open them; \item Camel Race: the player, controlling a camel, has to reach the finish line before the other camels, whose behaviour is part of the design of the game; \item Racebet: in the game there are a few camels racing toward the finish line, each with a unique colour; in order to win the game the agent has to position the avatar on the camel with a specific colour; \item Zenpuzzle: the level has two different types of floor tiles, one that can always be stepped on and a special type that can be stepped on no more than once. The agent has to step on all the special tiles in order to win the game. \end{itemize} Further details on the games and the framework can be found at www.gvgai.net. The budget given to the agents is a certain number of forward-model calls, which differs from the real-time constraints used in the GVGAI competitions. We made this decision in order to get more robust data across different games; in fact the number of forward model calls that can be executed in 40 ms can vary drastically from game to game, sometimes from hundreds to thousands. This experiment consists of running the comparisons between the MCTS-based agents that use as tree policy every possible pruning $h' \in \mathcal{H}$ generated from $h$ (cf. \eqref{A0}, variables summarised in Table \ref{variables}), and the following agents: Random, One-Step Look Ahead, and Monte-Carlo Search.
\begin{equation} \label{A0} h = \min(D_{MOV}) \cdot \min(D_{NPC}) + \frac{\lvert \max(R)\rvert}{\sum D_{NPC}} \end{equation} \begin{table}[h] \centering \caption{Variables used in the heuristic (cf. \eqref{A0}).} \renewcommand{\arraystretch}{1.4} \setlength\tabcolsep{1.4pt} \begin{tabular}{c|l} \hline \textbf{Notation} & \multicolumn{1}{c}{\textbf{Description}} \\ \hline $\max(R)$ & Highest reward among the simulations that visit the current node \\ $\min(D_{MOV})$ & Minimum distance from a movable sprite \\ $\min(D_{NPC})$ & Minimum distance from an NPC \\ $\sum D_{NPC}$ & Sum of all the distances from NPCs \\ \hline \end{tabular} \label{variables} \end{table} In this work, each pair of agents is tested over 20 playthroughs of the first level of each game, and all the agents were given a budget of $700$ forward-model calls. The budget was decided by looking at the average number of forward-model calls made across all the GVGAI games by the Genetic Programming MCTS (GPMCTS) agent with a time budget of 40 ms, the same as in the competitions. The GPMCTS agent is an MCTS agent with a customisable Tree Policy, as described in \cite{bravi2017master}. \subsection{Comparison method for MCTS-based agents} MCTS-based agents can be tuned and enhanced in many different ways, and a wide set of hyper-parameters can be configured differently; one of the most crucial components is the tree policy. The method we propose gradually prunes the tree-policy heuristic in order to isolate parts of \eqref{A0}. Evaluating the similarity of two tree policies is a rather complex task; it can be roughly done by analysing the difference between their values at points of their search domain. This approach is not optimal: supposing we want to analyse two functions $f$ and $g$ where $g = f + 10$, their values will never be the same, but when applied in the MCTS scenario they would perform exactly the same.
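The example above ($g = f + 10$) never matches $f$ in value, yet it induces exactly the same ordering of points, and thus the same MCTS selections. A pairwise rank-agreement check (a sketch, with names of our own choosing) makes this concrete:

```python
from itertools import combinations

def rank_agreement(f, g, points):
    """Fraction of point pairs that f and g order the same way.

    1.0 means the two tree policies would always prefer the same node
    when choosing between any two sampled points of the domain.
    """
    pairs = list(combinations(points, 2))
    agree = sum(
        ((f(p1) > f(p2)) == (g(p1) > g(p2))) and
        ((f(p1) < f(p2)) == (g(p1) < g(p2)))
        for p1, p2 in pairs
    )
    return agree / len(pairs)
```

For instance, `rank_agreement(f, lambda x: f(x) + 10, points)` is always 1.0, while comparing $f$ to $-f$ gives 0.0.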
What actually matters is not the exact value of the function but the way that two points in the domain are ordered according to their evaluations. In short, with $\mathcal{D}$ the domain of the functions $f$ and $g$ and $p_1,p_2 \in \mathcal{D}$, what matters is that the conditions $f(p_1) \ge f(p_2)$ and $g(p_1) \ge g(p_2)$ hold together. The objective is to understand how each term in \eqref{A0}, used in the tree policy of an MCTS agent, impacts the behaviour of the whole agent. Given $h$, i.e., \eqref{A0} used as tree policy, let $\mathcal{H}$ be the set of all possible prunings (therefore functions) of the expression tree associated with $h$. This method applies the metrics and the comparison method introduced previously and consists of running all possible couples $(A_m,A_s) \in \mathcal{AG}\times\mathcal{AG}$, where the agent $A_m$ is the main agent and $A_s$ is the shadow agent; the set $\mathcal{AG}$ contains one instance of the MCTS-based agent for each tree policy in $\mathcal{H}$ plus the following agents: Random, One-Step Look Ahead and Monte-Carlo Search. In this way it is possible to get a meaningful evaluation of how different equations might result in suggesting the same action, or not, for all the possible comparisons of the equations in $\mathcal{H}$, but also of how they compare to the other reference agents. \subsection{Agents} \label{agents} In this section, we give the specifications of the agents used and the way they link each metric to their algorithmic implementation. These agents are used in the experiments and can serve as examples of how algorithmic information can be interpreted and manipulated to obtain the metrics described previously. Most agents use the \textit{SimpleStateHeuristic}, which evaluates a game state according to the win/lose state, the distance from portals and the number of NPCs. It rewards most highly winning states with no NPCs in which the position of the player is closest to a portal.
None of the agents was chosen for its performance; the point of using these agents is that they can, in theory, represent very different play styles: completely stochastic, very short-sighted, randomly long-sighted, and generally short-sighted. \subsubsection{Random} The random agent has a very straightforward implementation: given the set of available actions, it picks an action uniformly at random. \begin{itemize} \item $\overline{p}$: since the action is picked uniformly, $p_i = 1/|\mathcal{A}|$; \item $\overline{v}$: each $v_i$ is set to NaN; \item $b=0$, since no budget is consumed to return a random action; \item $conv$~is always $0$, for the same reason as $b$. \end{itemize} \subsubsection{One-Step Look Ahead} The agent runs one simulation for each of the possible actions and evaluates the resulting game state using the \textit{SimpleStateHeuristic} defined by the GVGAI framework. The action with the highest value is picked as $a^*$. \begin{itemize} \item $\overline{p}$: $p_i = 1/|\mathcal{A}|$, since each action is picked once; \item $\overline{v}$: each $v_i$ corresponds to the evaluation given by the \textit{SimpleStateHeuristic} initialised with the current game state and compared to the game state reached via action $a_i$; \item $b$ is always $\frac{|\mathcal{A}|}{sb}$; \item $conv$~varies and corresponds to the budget ratio at which the best action is simulated. \end{itemize} \subsubsection{Monte-Carlo Search} The Monte-Carlo Search agent performs a Monte-Carlo sampling of the action-sequence space under two constraints: a sequence is no longer than 10 actions, and only the last action may lead to a terminal state.
\begin{itemize} \item $\overline{p}$: with $n_i$ the number of times action $a_i$ was picked as the first action and $N = \sum_{i=1}^{|\mathcal{A}|}n_i$, then $p_i = \frac{n_i}{N}$; \item $\overline{v}$: each $v_i$ is the average evaluation given by the \textit{SimpleStateHeuristic} initialised with the current game state and compared to the last game state reached by every action sequence starting with $a_i$; \item $b$ is always 1, since the agent keeps simulating until the end of the budget; \item $conv$~corresponds to the ratio of budget used at the moment the action with the highest $v_i$ last changed. \end{itemize} \subsubsection{MCTS-based} The MCTS-based agent is an implementation of MCTS with uniformly random roll-outs to a maximum depth of 10. The tree policy can be specified when the agent is initialised, so the reader should not assume UCB1 is the tree policy; the heuristic used to evaluate game states is the score plus a possible bonus/penalty for a win/lose state. \begin{itemize} \item $\overline{p}$: with $n_i$ the number of visits for $a_i$ at the root node of the search tree and $N$ the total number of visits at the root node, then $p_i = \frac{n_i}{N}$; \item $\overline{v}$: each $v_i$ is the heuristic value associated with $a_i$ at the root node; \item $b=1$, since the agent keeps simulating until the budget is used up; \item $conv$~corresponds to the ratio of budget used when the action with the highest $v_i$ last changed in the root node.
\end{itemize} \section{Experiments}\label{sec:xp} \begin{table}[h] \centering \caption{Agents used in experiments and their ids.} \label{agents_table} \renewcommand{\arraystretch}{1.4} \scriptsize \begin{tabular}{|c|l|} \hline Id & Agent\\ \hline 0 & MCTS + $\frac{1}{\sum D_{NPC}}$ \\ 1 & MCTS + $\lvert max(R)\rvert$ \\ 2 & MCTS + $\frac{\lvert max(R)\rvert}{\sum D_{NPC}}$ \\ 3 & MCTS + $min(D_{NPC})$ \\ 4 & MCTS + $min(D_{NPC}) + \frac{1}{\sum D_{NPC}}$ \\ 5 & MCTS + $min(D_{NPC}) + \lvert max(R)\rvert$ \\ 6 & MCTS + $min(D_{NPC}) + \frac{\lvert max(R)\rvert}{\sum D_{NPC}}$ \\ 7 & MCTS + $min(D_{MOV})$ \\ 8 & MCTS + $min(D_{MOV}) + \frac{1}{\sum D_{NPC}}$ \\ 9 & MCTS + $min(D_{MOV}) + \lvert max(R)\rvert$ \\ 10 & MCTS + $min(D_{MOV}) + \frac{\lvert max(R)\rvert}{\sum D_{NPC}}$ \\ 11 & MCTS + $min(D_{MOV}) \cdot min(D_{NPC})$ \\ 12 & MCTS + $min(D_{MOV}) \cdot min(D_{NPC}) + \frac{1}{\sum D_{NPC}}$ \\ 13 & MCTS + $min(D_{MOV}) \cdot min(D_{NPC}) + \lvert max(R)\rvert$ \\ 14 & MCTS + $min(D_{MOV}) \cdot min(D_{NPC}) + \frac{\lvert max(R)\rvert}{\sum D_{NPC}}$ \\ 15 & One-Step Look Ahead \\ 16 & Random \\ 17 & Monte-Carlo Search\\ \hline \end{tabular} \end{table} Table \ref{agents_table} summarises the agents used in the experiments and the ids assigned to them. Multiple MCTS agents using different tree policies have been tested. Figure \ref{bigboy} illustrates an example of agreement percentage $\mathcal{AP}$~and one of decision similarity $\mathcal{DS}$~between the main agent and the shadow agent on two of the tested games. An important fact to remember when looking at Figure \ref{bb:m_aliens} is that the probability of two random agents agreeing on the same action is $\frac{1}{|\mathcal{A}|}$. Therefore, when looking at the $\mathcal{AP}$~we should analyse what deviates from $\frac{1}{|\mathcal{A}|}$. The game Aliens is the only game in which the agent has three available actions; the other games are played with four available actions.
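As a minimal sketch (the function and variable names are ours, not the framework's), the agreement percentage between a main and a shadow agent, together with the random-agreement baseline, can be computed as:

```python
# Agreement percentage: the fraction of decision points at which the main
# and shadow agent picked the same action, expressed as a percentage.
def agreement_percentage(main_actions, shadow_actions):
    assert len(main_actions) == len(shadow_actions)
    agreed = sum(a == b for a, b in zip(main_actions, shadow_actions))
    return 100.0 * agreed / len(main_actions)

# Baseline: two uniformly random agents agree with probability 1/|A|.
num_actions = 4
baseline = 100.0 / num_actions  # 25% with four available actions

# Hypothetical action logs over six decision points (action ids):
main = [0, 1, 1, 2, 3, 0]
shadow = [0, 1, 2, 2, 3, 1]
print(agreement_percentage(main, shadow))  # 4 of 6 agree: ~66.7
```

Values well above the baseline indicate genuinely similar decision processes rather than chance agreement.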
The bottom-right to top-left diagonal of the matrix represents the $\mathcal{AP}$~that each agent has with itself. This particular comparison has an intrinsic meaning: it shows the coherence of the decision-making process; the higher the agreement, the more consistent the agent. This feature is highlighted even more clearly by the $\mathcal{DS}$, where the complete action probability vectors are compared. Self-consistency isn't necessarily always a good feature, especially in competitive scenarios where a mixed strategy could be advantageous, but it is a measure of how consistent the search process is with its final decision. Picturing the action-sequence fitness landscape, a high $\mathcal{AP}$~implies that the agent shapes it in a very precise and sharp way, being able to consistently identify a path through it. In scenarios where a lot of navigation of the level is necessary, there might be several ways to reach the same end goal, which results in the agent having a lower self-agreement. The KL-divergence measure adopted for $\mathcal{DS}$~highlights how distinct the decision-making processes of the agents are. Using this approach we would expect much stronger agreement along the leading diagonals of all the comparison matrices, as in Figure \ref{bb:zen_kl}. Conversely, we would also expect a much clearer distinction between agents with genuinely distinct policies. \begin{figure*} \centering \subfloat[Aliens Pure Agreements]{ \label{bb:m_aliens} \centering \includegraphics[width=.43\linewidth]{m_aliens_no} } \subfloat[Zenpuzzle Decision Similarities]{ \label{bb:zen_kl} \centering \includegraphics[width=.43\linewidth]{symmetryc_kl-zenpuzzle} } \caption{\label{bigboy} Results of two comparison scenarios between all the agents in Table \ref{agents_table}. Figure \ref{bb:m_aliens} shows the comparison using the \textit{Pure Agreement} method; values from dark blue to light blue represent the agreement percentage (the lighter the higher).
In Figure \ref{bb:zen_kl}, by contrast, light blue represents strongly diverging action probability vectors, while the darkest blue marks the case where they are identical. The vertical and horizontal dimensions of the matrix represent the main and shadow agent, respectively, in the comparison process. The main agent's win percentage is given in square brackets in its label on the vertical axis.} \end{figure*} \textbf{Aliens.} The game Aliens is generally easy to play: the Random agent achieves a win rate of 27\%, and the MCTS variants achieve win rates ranging from 44\% to 100\%, so clearly some terms of the equation used in the tree policy matter more than others. The best-performing agent is agent $0$, with a perfect win rate; it uses a very basic policy that greedily chooses the action maximising the highest value found. An interesting pattern is observed in Figure \ref{bb:m_aliens}: agents $0$, $8$ and $12$ all share the term $\frac{1}{\sum D_{NPC}}$, and alone or together with $min(D_{MOV})$ it gives stability to the decisions taken. This is even clearer looking at the $\mathcal{DS}$~values, which are respectively 0, 0.067 and 0.07. Agent $12$, the one with the best combination of $\mathcal{AP}$~and win rate, is driven by a rather peculiar policy: the first term maximises the combined minimal distance from NPCs (aliens) and movable objects (bullets), while the second term minimises the sum of the distances from NPCs. This translates into a very clear and neat game-playing strategy: stay away from bullets and kill the aliens (the fastest way to reduce $\sum D_{NPC}$). This agent is not only very strong, with a 93\% win rate, but also extremely fast in finding its preferred action, with an average $conv$$=0.26$. Even though the win rate of agent $15$ is not among the best, the $b$ metric highlights how an agent such as $11$ is intrinsically flawed.
In fact, even though agent $11$ constantly consumes all the budget at its disposal ($b=1$), it achieves a win rate of just 44\%, whilst agent $15$, with $b<0.006$, achieves a 69\% win rate. \textbf{Brainman.} This game is usually very hard for the AIs; the best agent in the batch has a win rate of 31\%. Looking at the data, we noticed a high concentration of $\mathcal{AP}$~around 50\% for all combinations of agents from 7 to 10; this is even clearer in the $\mathcal{DS}$~data, which is consistently below 0.2. When the policy contains the term $min(D_{MOV})$ not involved in any multiplication, the agent is more consistent in moving away from movable objects. Unfortunately, that is exactly a behaviour that will never allow the agent to win: the key that opens the door to the goal is the only movable object in the game. \textbf{Camelrace.} The best way to play Camelrace is easy to understand: keep moving right until reaching the finish line. Looking at the $\mathcal{AP}$~comparison matrix for this game, we noticed a big portion of it (agents from $3$ to $14$) where the agents consistently agree most of the time (most values over 80\%). What is interesting to highlight is that only the cluster with $\mathcal{AP}$$=100$ (agents $7$ and $8$) achieves a win rate of 100\%, which is further confirmed by a $\mathcal{DS}$~of $0$. This is because even just a few wrong actions can backfire dramatically: in the game there is an NPC moving straight to the right, so wasting a few actions means risking being overtaken by it and losing the race; coherence is therefore extremely important. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{racebet_agent_10} \caption{\label{racebet2_dp}The average $conv$ in the game Racebet2 for agent $10$ throughout the plays.
It shows how the agent has no clear preference over the actions until the end of the game, when the value drastically drops.} \end{figure} \textbf{Racebet2.} The $\mathcal{AP}$~values for this game are harder to read: the avatar can move only in a very restricted cross-shaped area, and its interaction with the game elements is completely irrelevant until the end of the playthrough, when the result of the race becomes obvious to the agent. This is clearly expressed by the average convergence value during play for agent $10$, shown in Figure \ref{racebet2_dp}. Agent $10$ cannot make up its mind, consuming all the budget before settling for $a^*$ ($conv$~$=1$); this keeps happening until the very end of the game, when $conv$ drops drastically, meaning the agent is now able to swiftly decide on its preferred action. Potentially, an agent could stand still for most of the game and move only during the last few frames. This overall irrelevance of most actions during the game is exemplified by an almost completely flat $\mathcal{AP}$~of around $25\%$ for most agent pairs. \textbf{Zenpuzzle.} This is a pure puzzle game in which following the rewards is not sufficient to win. The $\mathcal{AP}$~values are completely flat; in this case the pure agreement doesn't provide any valuable information. However, as we can see in Figure \ref{bb:zen_kl}, the KL-divergence is more expressive in capturing decision-making differences, and we can notice that, in general, being less consistent with itself can eventually lead an agent to perform the crucial right action to fill the whole puzzle. This is a perfect scenario to show a limit of $\mathcal{AP}$: several agents win about one game in four, but without comparing the full action probability vectors we could not have highlighted this crucial detail.
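The $\mathcal{DS}$~comparison of full action probability vectors can be sketched as follows. This is our own illustration: we assume a symmetrised KL-divergence (as the figure naming suggests) and a small smoothing constant to guard against zero probabilities; neither detail is specified in the text.

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetrised KL-divergence between two action-probability vectors.
    0 means identical decision distributions; larger means more diverging.
    eps is a hypothetical smoothing constant to avoid log(0)."""
    def kl(a, b):
        return sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

uniform = [0.25, 0.25, 0.25, 0.25]   # e.g. the Random agent
peaked  = [0.85, 0.05, 0.05, 0.05]   # an agent very committed to one action
print(sym_kl(uniform, uniform))       # 0.0: identical distributions
print(sym_kl(uniform, peaked))        # clearly positive
```

Unlike pure agreement, this comparison still separates agents when their single chosen actions coincide but their underlying preferences differ.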
\section{Conclusion and Future Work}\label{sec:conc} We have presented a set of metrics that can be used to log the decision-making process of a game-playing agent using the General Video Game AI framework. Together with these metrics, we also introduced a methodology to compare agents under exactly the same conditions; both are applicable to any agent regardless of its actual implementation and the game it is meant to play. The experimental results have demonstrated how combining such methods and metrics makes it possible to gain a better understanding of the decision-making process of the agents. On several occasions we have seen how measuring the agreement between a simple, not necessarily well-performing, agent and the target agent can shed some light on the implicit intentions of the latter. This approach holds the potential for developing a set of agents with specific, well-known behaviours that can be used to analyse another agent's playthrough with the comparison method introduced. They could be used as an array of shadow agents, instead of a single one, measuring during the same play if and how much the behaviour of the main agent resembles that of each shadow agent. By progressively pruning the original tree policy we have seen how it is possible to decompose it into simple characteristic behaviours with extremely compact formulae: fleeing a type of object, maximising the score, killing NPCs. Recognising these has proven helpful in understanding the behaviour of more complex formulae whose behaviour cannot be anticipated a priori. Measuring $conv$~has shown how it is possible to go beyond the sometimes-too-sterile win rate and to use both metrics to distinguish between more and less efficient agents. The game Zenpuzzle has clearly shown that the current set of metrics is not sufficient.
The implementation of the \textit{Shadowing Agent}~and the single agents compatible with it will be released as open-source code after the publication of this paper, together with the full set of comparison matrices, at www.github.com/ivanbravi/ShadowingAgentForGVGAI. In future work the metrics can be extended to represent additional information about the game states explored by the agent, such as the average number of events triggered or average counters for each game element, to name a few, but also more features from the sets envisioned in \cite{measures2018dagstuhl}. \bibliographystyle{IEEEtran}
\section{Introduction} Copositive optimization has become an active area of research during recent years. The significance of copositive programming is due to the fact that several combinatorial and non-convex optimization problems admit linear reformulations over the copositive cone, which is convex. A growing list of problems with copositive programming reformulations includes: standard quadratic programming (\cite{bomze2000copositive}, \cite{bome01solving}), the chromatic and stability number of a graph (\cite{Klerk02app}, \cite{JVPena07comp}, \cite{Monique08theOpertr}, \cite{DIFranz10copoPro}), the crossing number of a graph \cite{de2006improved}, the maximum stable set problem \cite{de2006improved}, the quadratic assignment problem \cite{Povh:2009:CSR:2296586.2296766} and discrete optimization \cite{burer2009copositive}. Consequently, new developments concerning the copositive cone and its dual can be helpful for the solution of all the above-mentioned hard problems. \\ Copositive programming problems are not solvable directly, as the copositive and completely positive cones are not tractable; thus approximation hierarchies for these cones have been studied in much detail by matrix theorists, see \cite{Laurent2009}. Several approximation hierarchies based on sum-of-squares conditions and discretization methods have been studied. For instance, Parrilo \cite{Parrilo00structuredsemidefinite} provided a hierarchy of linear and semidefinite inner approximations of the copositive cone (see also \cite{deklerk2002}). Moreover, Bomze and de Klerk \cite{bomze2002solving} (see also \cite{Bundfuss2009}) suggested a criterion to check membership of a given matrix in the copositive cone by using a sequence of polyhedral approximations, which approximate the cone from inside and outside and are exact in the limit.
The problems stated above share a common feature: they have a quadratic objective function or, in certain cases, quadratic constraints. The immediate generalization of quadratic optimization is polynomial optimization. In recent years, polynomial optimization has attracted many researchers due to its vast applications in empirical modeling in science \cite{Lasserre2012polyOpt} and in engineering problems such as biomedical engineering \cite{barmpoutis:17,ghosh:hal-00340600,Zhang2011}, signal processing \cite{Meng:2009:QPA:1653465.1653481,Qi2003,Weiland:2010:SVD:1771983.1772002}, quantum graphs \cite{quantumGraph2015}, and material science \cite{Soare2008915}. Quadratic functions can be represented using matrices; likewise, polynomials can be represented by multi-dimensional arrays known as tensors. As in the case of quadratic optimization, a reformulation of polynomial optimization is described by~\cite{Lasserre2012polyOpt,Pena2015}. The notion of copositive and completely positive tensors is used to describe these reformulations. The copositive and completely positive cones of matrices, which are tensors of order two, are very well explored; therefore it seems natural to study analogous results for copositive tensors. However, this generalization is not trivial, since higher orders usually destroy the nice structure present at order two. \\ The area is not yet well explored; however, there is some research describing the properties of tensors. Song has provided a characterization of copositive tensors using eigenvectors of principal sub-tensors~\cite{song2015necessary}. Qi extended the diagonal dominance sufficient conditions for complete positivity to the tensor case~\cite{qi2014nonnegative}. In this article, we analyze some approximation schemes for the copositive cone of tensors.
In order to answer the theoretical questions posed above, the contributions of this article are: (a) the basic properties of the copositive tensor cone $\mathcal{C}_{n,d}$ are discussed, and conditions under which the copositive cone of tensors coincides with the positive semidefinite cone of tensors are established; (b) several approximation hierarchies for the copositive cone of tensors based on polynomial conditions are presented, generalizing analogous results for the matrix case; (c) approximation hierarchies based on simplicial partitions and rational grids are also presented; (d) it is established that the above-mentioned hierarchies approximate the cone $\mathcal{C}_{n,d}$ exactly in the limit; (e) the inclusion relations among the approximation hierarchies are presented. The article is organized as follows: Section 2 comprises the basic definitions and notation. In Section 3 we first give a brief introduction to tensor cones, and then discuss several properties of tensors together with characterizations of tensor cones. In Section 4.1 we present inner approximation hierarchies for the copositive cone of tensors, one based on polynomial conditions and the other on simplicial partitions, and then present the containment relationships among them. In Section 4.2, two types of outer approximation hierarchies for the copositive cone of tensors are presented, based on polynomial conditions and rational grids. In Section 5 we provide conclusions and future work. \section{Preliminaries} The use of matrices for the representation of quadratic forms has become ubiquitous in recent years. It is therefore natural to represent a polynomial using a multi-dimensional array, usually termed a tensor.
Throughout this article, $\Re^n$ denotes the $n$-dimensional Euclidean space and $\Re^n_+$ denotes the non-negative orthant of $\Re^n$. The set of natural numbers is denoted by $\mathbb{N}$ and the set of whole numbers by $\mathbb{N}_{0}=\{0,1,2,\cdots\}$. Vectors are denoted by lower-case bold letters, matrices by capital letters, and tensors by calligraphic capital letters. A tensor is defined as follows. \begin{definition}[Tensor] A tensor is a multi-dimensional array of real numbers. Mathematically, a tensor \[\mathcal{A}=\big(a_{{i_{1}} \ldots {i_{d}} } \big)_{1\leq i_{1}, \ldots, i_{d} \leq n}\] is an $n$-dimensional, $d^{th}$-order array; $\mathcal{A}$ is said to be symmetric if every permutation $\sigma$ of the indices represents the same element of $\mathcal{A}$, i.e. \[a_ {i_1 \ldots i_d} = a_ {\sigma({i_{1} \ldots i_{d}})}\] \end{definition} \noindent Clearly, any matrix is a tensor of order two. For brevity of notation, if some index $i_j$ of an element $a_{i_1i_2 \cdots i_d} \in \mathcal{A}$ is repeated $k$ times, we write it as ${(i_j)}^{k}$; such elements are denoted by \[a_{{\underset{k-times}{\underbrace{i_ji_j \cdots i_j}}}i_{(k+1)}i_{(k+2)}\cdots i_{d}}=a_{{(i_j)^k}i_{(k+1)}i_{(k+2)} \cdots i_{d}}\] The collection of $n$-dimensional, $d^{th}$-order symmetric tensors is denoted by $\mathcal{S}_{n,d}$. One particular case is the cone of entry-wise non-negative tensors, denoted by $\mathcal{N}_{n,d}$. For $\bx \in \Re^{n}$, the product tensor $\mathcal{X}={\bx}^d$ is defined as \begin{align*} \mathcal{X}=\underset{d-times}{\underbrace{{\bx} \otimes \cdots \otimes {\bx}}} \ \in {({\Re}^n \otimes \cdots \otimes {\Re}^n)} \end{align*} Let $\mathcal{T}_d:\Re^n\rightarrow\mathcal{S}_{n,d}$ be the mapping defined by $\mathcal{T}_d(\bx)=\mathcal{X}$.
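As an illustration (our own NumPy sketch, not part of the paper), the mapping $\mathcal{T}_d$ can be realised with repeated outer products; the resulting rank-one tensor is symmetric by construction:

```python
import numpy as np

# T_d(x) = x ⊗ x ⊗ ... ⊗ x (d factors): entry (i_1,...,i_d) equals
# x[i_1] * ... * x[i_d], which is invariant under any index permutation.
def T_d(x, d):
    X = np.asarray(x, dtype=float)
    for _ in range(d - 1):
        X = np.multiply.outer(X, x)
    return X

x = np.array([1.0, 2.0, 3.0])
X = T_d(x, 3)                  # an element of S_{3,3}, shape (3, 3, 3)
print(X[0, 1, 2], X[2, 1, 0])  # 6.0 6.0 -- equal under index permutation
```

Symmetry can be verified by comparing `X` with any of its axis transpositions.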
\\ Using the above notation, the $d^{th}$-degree homogeneous polynomial in $n$ variables is written as \begin{align}\label{2} f_\mathcal{A}{({\bx})} = \sum_{i_1,i_2 \cdots, i_d =1}^{n} a_{i_1 \cdots i_d} {x_{i_1} x_{i_2}} \cdots {x_{i_d}} \end{align} where $\mathcal{A}$ is an $n$-dimensional, $d^{th}$-order symmetric tensor. To introduce a more convenient notation for the homogeneous polynomial $f_\mathcal{A}{({\bx})}$ associated with the tensor $\mathcal{A}$, denote $\mathbb{Z}_d=\{0,1,2,\cdots,d\}$ and, for any vector ${\balpha}\in\mathbb{Z}_d^n$, define the 1-norm $\|{\balpha}\|_{1}=\sum_{i=1}^{n} \alpha_i$. For the subset $\mathbb{I}^n(d)=\{{\balpha}\in\mathbb{Z}_d^n:\|{\balpha}\|_{1}=d \}$ of $\mathbb{Z}_{d}^n$, the monomial of degree $d$ over $\Re^n$ is defined as \begin{align}\label{3} {\bx}^{\balpha} &= \prod_{i=1}^{n} {x_i}^{\alpha_i} \\ &={x_1}^{\alpha_1}{x_2}^{\alpha_2}\cdots{x_n}^{\alpha_n} ~\text{for}~\balpha \in \mathbb{I}^n(d)~\text{and}~ \bx \in \Re^n \end{align} The collection of all possible $d^{th}$-degree monomials in $n$ variables is denoted by $S(\mathcal{X})$: \begin{align*} S(\mathcal{X})=\bigg\lbrace {\bx}^{\balpha} : \balpha \in \mathbb{I}^n(d) ~and~\bx \in \Re^n \bigg\rbrace \end{align*} For an arbitrary $\bx \in \Re^n$ there exists a bijection $\phi_{\bx}: \mathbb{I}^n(d) \rightarrow S(\mathcal{X})$ such that $\phi_{\bx}(\balpha)=\bx^{\balpha}$; it follows that the cardinality of $S(\mathcal{X})$ equals that of $\mathbb{I}^n(d)$, that is, $|S(\mathcal{X})|=|\mathbb{I}^n(d)|= {n+d-1 \choose d}$ \cite{Monique2013}. \\ Note that the set $S(\mathcal{X})$ is the collection of all possibly distinct elements of $\mathcal{X}$. The inner product of vectors $\bu,\bv \in \Re^{n}$, denoted by $\langle \bu,\bv \rangle$, is defined as $\langle \bu,\bv \rangle=\bu^{T}\bv$.
Moreover, for tensors $\mathcal{A}, {\cB} \in {\cS}_{n,d}$ the inner product $\big\langle \mathcal{A}, {\cB} \big\rangle$ is defined as \begin{align}\label{4} \big\langle \mathcal{A} , \mathcal{B} \big\rangle = \sum_{i_1,i_2 \cdots, i_d =1}^n a_{i_1i_2 \cdots i_d} b_{i_1i_2 \cdots i_d} \end{align} \noindent Using (\ref{3}) and (\ref{4}), we may rewrite (\ref{2}) as \begin{align*} f_\mathcal{A}{({\bx})} &=\sum_{i_1,i_2 \cdots, i_d =1}^{n} a_{i_1i_2 \cdots i_d} x_{i_1} x_{i_2} \cdots x_{i_d}\\ &=\sum_{i_1,i_2 \cdots, i_d =1}^{n} a_{i_1i_2 \cdots i_d}\prod_{k \in \mathbb{Z}_d \backslash \{0\}}^{} \bx^{\be_{i_k}} = \bigg\langle \mathcal{A}, {\mathcal{T}_{d}(\bx)} \bigg\rangle \end{align*} where $\be_{i_k}\in \Re^n$ is the unit vector having all components $0$ except the ${i_k}^{th}$ component. Let $V$ be a vector space over a field $F$. For arbitrary vectors $\bu_{1},\bu_{2},\cdots,\bu_{m} \in V$ and scalars $\lambda_{1}, \lambda_{2},\cdots,\lambda_{m}\in F$, the linear combination $\bu=\sum_{i=1}^{m}\lambda_{i}\bu_{i}$ is said to be an affine combination if $\sum_{i=1}^{m}\lambda_i=1$, and a conical combination if $\lambda_i \ge 0$ for all $i$. Moreover, $\bu$ is said to be a convex combination if it is both an affine and a conical combination. A set $\cC \subseteq V$ is said to be a cone if for each $\bx\in \cC$ we have $\lambda \bx \in \mathcal{C}$ for all scalars $\lambda \ge 0$. Moreover, the cone $\cC$ is said to be a convex cone if for each pair $\bx,\by\in \cC$ and non-negative scalars $\lambda_1,\lambda_2 \in F$ we have ${\lambda_1} \bx+{\lambda_2} \by \in \mathcal{C}$.
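The identity $f_\mathcal{A}(\bx)=\big\langle \mathcal{A},\mathcal{T}_d(\bx)\big\rangle$ derived above can be checked numerically. The following is our own NumPy sketch (names and the random instance are illustrative, not from the paper):

```python
import numpy as np

# Build a random symmetric 3rd-order tensor A in S_{3,3} by averaging
# over all 6 index permutations, then compare the direct polynomial
# evaluation with the full contraction <A, T_3(x)>.
rng = np.random.default_rng(0)
n, d = 3, 3
A = rng.standard_normal((n,) * d)
A = (A + A.transpose(0, 2, 1) + A.transpose(1, 0, 2)
       + A.transpose(1, 2, 0) + A.transpose(2, 0, 1)
       + A.transpose(2, 1, 0)) / 6  # symmetrise

x = rng.standard_normal(n)
X = np.multiply.outer(np.multiply.outer(x, x), x)  # T_3(x)
inner = np.sum(A * X)                              # <A, T_d(x)>
direct = sum(A[i, j, k] * x[i] * x[j] * x[k]
             for i in range(n) for j in range(n) for k in range(n))
print(np.isclose(inner, direct))  # True
```

The contraction view is what makes membership conditions such as $\langle \mathcal{A},\mathcal{T}_d(\bx)\rangle \ge 0$ directly computable for sample points $\bx$.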
The dual of a cone $\mathcal{C}$, denoted by $\cC^*$, is defined as \begin{align*} \cC^*=\bigg\lbrace \bu \in V:\big\langle \bu,\bv \big\rangle \ge 0 \; \forall\; \bv \in \cC \bigg\rbrace \end{align*} For any subset ${\mathcal{M}} \subseteq V$, the conic hull of $\mathcal{M}$, denoted by $conic(\mathcal{M})$, is defined as \[conic(\mathcal{M})=\bigg\lbrace \sum_{i=1}^{m}\lambda_{i}\bu_{i}: \ \bu_{i}\in\mathcal{M}, \ \lambda_{i} \ge 0 \ \ \forall i=1,2,\cdots,m \bigg\rbrace \] A convex cone $\cC$ is said to be pointed if $\cC\cap(-\cC)=\{0\}$, and solid if its interior is nonempty. A convex cone which is closed, pointed and solid is termed a proper cone. \noindent A convex cone $\mathcal{C}$ is said to be a polyhedral cone if it is finitely generated, that is, if there exists a finite set $\mathcal{M}$ such that $\mathcal{C}=conic(\mathcal{M})$. \section{Tensor Cones} In this section, we define several cones of tensors. These cones appear as generalizations of the corresponding cones of matrices. We discuss various properties of these cones together with special cases in which they coincide. The collection of symmetric tensors $\mathcal{S}_{n,d}$ is a vector space over the field of reals $\Re$. We define our first cone of tensors as follows. \begin{definition}[Positive Semidefinite Cone: $\mathcal{S}_{n,d}^+$ ] A tensor $\mathcal{A}\in {\cS}_{n,d}$ is said to be positive semidefinite (PSD) if $f_\mathcal{A}(\bx)=\big\langle \mathcal{A},\mathcal{T}_d(\bx) \big\rangle \ge 0$ for all $\bx\in \Re^n$.
The set of $n$-dimensional, $d^{th}$-order positive semidefinite symmetric tensors, denoted by $\mathcal{S}_{n,d}^+$, is given by \begin{align}\label{PSD} \mathcal{S}_{n,d}^{+} & := \bigg\lbrace \mathcal{A} \in \mathcal{S}_{n,d} : \big\langle \mathcal{A},\mathcal{T}_d(\bx) \big\rangle \ge 0 ~\forall~ \bx \in \Re^n \bigg\rbrace \end{align} \end{definition} \noindent Clearly $\mathcal{S}_{n,d}^{+}$ is a convex cone: for each tensor $\mathcal{A}\in \mathcal{S}_{n,d}^{+}$ we have $\lambda\mathcal{A}\in\mathcal{S}_{n,d}^{+}$ for $\lambda\in\Re_+$, and for $\mathcal{A},\mathcal{B}\in\mathcal{S}_{n,d}^{+}$ we have ${\lambda_1}\mathcal{A}+{\lambda_2}\mathcal{B}\in \mathcal{S}_{n,d}^{+}$ for ${\lambda_1},{\lambda_2}\in\Re_+$. \\ It is clear from the above definition that $d$ must be even, since for odd-order tensors the non-negativity requirement cannot be satisfied. The dual of $\mathcal{S}^{+}_{n,d}$, usually termed the completely positive semidefinite cone and denoted by $\mathcal{S}^{{+}^{*}}_{n,d}$, is given by \begin{align*} \mathcal{S}^{{+}^{*}}_{n,d}&=\bigg\lbrace \mathcal{X} \in \mathcal{S}_{n,d} : \big\langle \mathcal{A},\mathcal{X} \big\rangle \ge 0 ~\forall~ \mathcal{A} \in \mathcal{S}^{+}_{n,d} \bigg\rbrace\\ &=\bigg\lbrace \sum_{k=1}^{N} \mathcal{T}_{d}(\bx_k) : \bx_k\in \Re^n, \ N\in\mathbb{N} \bigg\rbrace \end{align*} Clearly, for $d=2$ the PSD cone is self-dual, that is, $\mathcal{S}^{+}_{n,2} = \mathcal{S}^{{+}^{*}}_{n,2}$. However, for $d \ge 4$ the self-duality property fails in general, i.e. $\mathcal{S}^{+}_{n,d} \ne \mathcal{S}^{{+}^{*}}_{n,d}$ (see [Counterexample 4.5, \cite{Luo2015}]).\\ Polynomials are continuously differentiable functions, and a twice continuously differentiable function is convex if and only if its Hessian matrix is PSD everywhere [Theorem 4.5, \cite{rockafellar1970convex}].
Therefore, checking the convexity of the homogeneous polynomial defined in (\ref{2}) amounts to checking whether $\triangledown^2f_\mathcal{A}(\bx)\in\mathcal{S}^+_{n,2}$ for all $\bx\in \Re^n$. In [Proposition 5.10, \cite{Luo2015}] it has been shown that if the polynomial $f_\mathcal{A}(\bx)$ is convex then its associated tensor $\mathcal{A}$ is PSD, but the converse is not true in general (see [Counterexample 5.11, \cite{Luo2015}] and \cite{Ahmadi2012}).\\ During recent years, copositive matrices have been studied extensively due to their usefulness in solving combinatorial and quadratic optimization problems \cite{Mirjam2010survey}, as discussed in the introduction. We define positive semi-definiteness of a tensor $\mathcal{A}$ over the non-negative orthant, a subset of $\Re^n$, as follows. \begin{definition} [Copositive Cone: ${\cC}_{n,d}$] A tensor $\mathcal{A}\in {\cS}_{n,d}$ is said to be copositive if $\big\langle \mathcal{A},\mathcal{T}_d(\bx) \big\rangle \ge 0$ for all $\bx\in \Re^n_{+}$. The set of $n$-dimensional, $d^{th}$-order copositive tensors, denoted by $\mathcal{C}_{n,d}$, is given by \begin{align} \mathcal{C}_{n,d} := \bigg\lbrace \mathcal{A} \in {\cS}_{n,d} : \big\langle \mathcal{A},\mathcal{T}_d(\bx) \big\rangle \ge 0 ~\forall~ \bx \in \Re^n_+ \bigg\rbrace \end{align} \end{definition} \noindent As with (\ref{PSD}), it is clear that $\mathcal{C}_{n,d}$ is also a convex cone. We remark that for copositive tensors one need not take $d$ to be even. It is obvious from the above definitions that if a tensor is positive semidefinite then it is copositive as well; however, copositive tensors are not necessarily positive semidefinite, as the following counterexample shows.
\begin{example} For a symmetric tensor $\mathcal{A}\in\mathcal{S}_{3,4}$ with entries \begin{align*} a_{i_1i_2i_3i_4}= \begin{cases} \begin{array}{cc} 0 & ~~ ~~if~i_{j}=1~\forall~j \in \{1,2,3,4\} \\ 1 & ~~ ~~if~i_{j}=2~\forall~j \in \{1,2,3,4\} \\ 1 & ~~ ~~if~i_{j}=3~\forall~j \in \{1,2,3,4\} \\ 5 & otherwise \end{array} \end{cases} \end{align*} the associated polynomial is \begin{align*} f_{\mathcal{A}}(\bx)=x_2^4+x_3^4+5\sum_{\substack{i_1,i_2,i_3,i_4=1\\ \text{not all equal}}}^{3}x_{i_1}x_{i_2}x_{i_3}x_{i_4} \;\ge\; 0 \quad\forall~\bx\in\Re^3_+, \end{align*} so $\mathcal{A}$ is copositive. On the other hand, using $\sum_{\text{not all equal}}x_{i_1}x_{i_2}x_{i_3}x_{i_4}=(x_1+x_2+x_3)^4-x_1^4-x_2^4-x_3^4$, for $\bar{\bx}=(-2,0,1)^{T}$ we have $f_{\mathcal{A}}(\bar{\bx})=0+1+5\,(1-17)=-79<0$, which implies that $\mathcal{A}$ is not a positive semidefinite tensor. \end{example} \noindent We now describe some basic properties of a copositive tensor $\mathcal{A} \in \mathcal{C}_{n,d}$, which generalize the corresponding properties of copositive matrices. \begin{proposition} Let $\mathcal{A} \in \mathcal{S}_{n,d}$ be a copositive tensor. Then the following properties hold: \begin{itemize} \item[(i)] $a_{(i)^d} \ge 0$ for all $i$. \item[(ii)] If $a_{(i_j)^d} =0$ then $a_{i_1i_2 \cdots i_{j}i_{j+1} \cdots i_{d}} \ge 0$, where $i_j \ne i_k$ for all $j \ne k \in \{1,2, \cdots, d \}$. \end{itemize} \end{proposition} \noindent \Pf Let $\mathcal{A} \in \mathcal{C}_{n,d}$ be arbitrary. \begin{itemize} \item[(i)] Let ${\be}_i \in \Re^n_+$ be a standard unit vector. Then the tensor $\mathcal{B}=\mathcal{T}_{d}(\be_i)$ has zero entries everywhere except the $i^{th}$ diagonal element, i.e. $b_{(i)^d}=1$. Since $\mathcal{A}$ is copositive, $0 \le \big\langle \mathcal{A},\mathcal{T}_{d}({\be}_i) \big\rangle =a_{(i)^d}$; that is, $a_{(i)^d}\ge 0~~\forall~i$.
\item[(ii)] let, $\mathcal{A}\in \mathcal{C}_{n,d}$ be a tensor with $a_{{(i_{1})}^d}=0$, we assume on the contrary that there exist ${i_j} \in \{1,2,\cdots,n\}$ such that; \[ a_{{(i_{1})}^{(d-m)}\cdot {(i_{j})}^{(m)}} < 0 \ where \ {i_j} \ne {i_1} \ and \ 1\le m < d \] Then for $\bar{\bx} \in \Re^n_+$ with components; \begin{align*} x_{i}=\begin{cases} \begin{matrix} 1 &~~ if ~~i\in \{1,j\} \\ 0 &~~ otherwise \end{matrix} \end{cases} \end{align*} the associated homogeneous polynomial $f_\mathcal{A}(\bar{\bx})$ is defined as: \begin{align}\label{jthSum} f_{\mathcal{A}}(\bar{\bx})= a_{{(i_{1})}^{(d-m)}\cdot {(i_{j})}^{(m)}}\cdot(1)^{(d-m)}\cdot(1)^{(m)} <0 \end{align} from (\ref{jthSum}), we have a contradiction, i.e. $\mathcal{A}$ is not copositive. \end{itemize} \eop The dual of copositive cone $\mathcal{C}_{n,d}$ is completely positive cone; denoted by $\mathcal{C}_{n,d}^*$ and is defined below. \begin{definition}[Completely Positive Cone: ${{\cC}^*_{n,d}}$] A tensor $\mathcal{X}\in {{\cS}_{n,d}}$ is completely positive if $\exists ~ \bx_k\in \Re^{n}_{+}$ such that; $\mathcal{X}=\sum_{k=1}^{N} (\bx_{k})^{d}$. \begin{align} {{\cC}^*_{n,d}}&:=\left\{ \sum^{N}_{k=1}{(\bx_k) ^d}~\forall~ \bx_k\in {\Re^{n}_{+}}, N\in \mathbb{N} \right \} \end{align} \end{definition} \noindent The cones ${\cS}_{n,d}, {\cC}_{n,d} ~~and~~ {{\cC}_{n,d}}^*$ defined above, are all convex cones. we describe a special case for which the cones $\mathcal{S}_{n,d}^+$ and $\mathcal{C}_{n,d}$ coincide. The following theorem states the condition under which a tensor having non-positive off-diagonal entries is copositivity. \begin{theorem} Let $\mathcal{A} \in \mathcal{S}_{n,d}$ ($d$ is even) be an arbitrary tensor with all off-diagonal entries non-positive i.e. there exist $j$ and $k$ such that $i_{j} \ne i_{k}$ with $ \ a_{i_1i_2\cdots i_d} \le 0$ then $\mathcal{A} \in \mathcal{C}_{n,d}$ if and only if $\mathcal{A} \in \mathcal{S}_{n,d}^+$. 
\end{theorem} \noindent \Pf Let $\mathcal{A} \in \mathcal{C}_{n,d}$; then we have \begin{align}\label{CopCondition} f_\mathcal{A}(\bx)=\sum_{i_1,i_2 \cdots, i_d =1}^{n} a_{i_1i_2 \cdots i_d} x_{i_1} x_{i_2} \cdots x_{i_d}~\ge~0 ~ ~\forall~ \bx \in \Re^n_+ \end{align} where all off-diagonal entries $a_{i_1i_2 \cdots i_d} \le 0$. \\ To show that $\mathcal{A}$ is a positive semidefinite tensor, we consider the polynomial form $f_{\mathcal{A}}(\bz)$ for $\bz \in \Re^n$. Since $\bz$ may have positive, negative and zero components, in order to separate the non-negative and negative terms in the form $f_{\mathcal{A}}(\bz)$ we introduce the following notation: $\gamma_+(\bz):=\{k:z_{k} \ge 0\}$ and the non-empty set $ \gamma_-(\bz):=\{j:z_{j} < 0\}=\mathbb{N}_n\backslash\gamma_+(\bz)\ne \emptyset $, where $\mathbb{N}_n=\{1,2,\cdots,n\}$; moreover, $|\sigma(i_j)|:=\text{the number of times} ~i_j ~\text{occurs in the index of } a_{i_1i_2 \cdots i_d}$. Using this notation we rewrite the polynomial form $f_{\mathcal{A}}(\bz)$ as follows: \begin{small} \begin{align}\label{PsdCondition} f_{\mathcal{A}}(\bz)=\sum_{i_1 \cdots, i_d =1}^{n} (\delta_{|\sigma(k)|\alpha_{k}})(\delta_{|\sigma(j)|\beta_{j}})a_{i_1i_2 \cdots i_d}\prod_{\begin{array}{c} k\in \gamma_+(\bz) \\ j\in \mathbb{N}_n\backslash\gamma_+(\bz) \end{array}}^{} {(\bz^{{\be}_{i_k}})}^{\alpha_{k}(i_k)} {(\bz^{\be_{i_j}})}^{\beta_{j}(i_j)} \end{align} \end{small} \noindent where $\delta$ is the Kronecker delta, and \\ \begin{minipage}{0.45\textwidth} \begin{align*} \alpha_k(i_k)=\begin{cases} \begin{matrix} & \alpha_{kk} = \alpha_{k} & if \ i_k=k \\ & 0 & if \ i_k \ne k \end{matrix} \end{cases} \end{align*} \end{minipage} and \begin{minipage}{0.45\textwidth} \begin{align*} \beta_j(i_j)=\begin{cases} \begin{matrix} & \beta_{jj} =\beta_{j} & if \ i_j=j \\ & 0 & if \ i_j \ne j \end{matrix} \end{cases} \end{align*} \end{minipage} \linebreak \\ Moreover, $\sum_{k}^{}\alpha_{k}+\sum_{j}^{}\beta_{j}=d$ for all $\alpha_{k},\beta_{j} \in \mathbb{Z}_{d+1}$.
From equation (\ref{PsdCondition}) it is clear that all terms corresponding to the diagonal elements $a_{{(i)}^{d}}$ are given as follows: \begin{align}\label{diagonal} \varOmega^{(+)}=\underset{\ge 0}{\underbrace{\sum_{i=1}^{n} a_{(i)^d}(z_{i})^d}} ~~ \text{as both}~a_{(i)^d}~\text{and}~(z_{i})^d~\text{are non-negative}~\forall ~i. \end{align} We next analyze those terms in (\ref{PsdCondition}) which correspond to off-diagonal elements of $\mathcal{A}$. The powers $\beta_j$ of the negative components of $\bz$ are crucial in this analysis; therefore we consider two cases: \begin{small} \begin{itemize} \item[(i)] If $\sum_{j}^{}\beta_j$ is even then all such terms are non-positive, that is, \begin{align}\label{negative} \varLambda^{(-)}=\underset{\le 0}{\underbrace{\bigg(\sum_{i_1 \cdots, i_d =1}^{n} (\delta_{|\sigma(k)|\alpha_{k}})(\delta_{|\sigma(j)|\beta_{j}})a_{i_1\cdots i_d}\prod_{k\in \gamma_+(\bz)}^{} {(\bz^{\be_{i_k}})}^{\alpha_{k}(i_k)} \prod_{j\in \mathbb{N}_n\backslash\gamma_+(\bz)}^{} {(\bz^{\be_{i_j}})}^{2\beta'_{j}(i_j)}\bigg)} } \end{align} since $a_{i_1i_2 \cdots i_d}\le0$ and $\prod_{k\in \gamma_+(\bz)}^{} {(\bz^{e_{i_k}})}^{\alpha_{k}(i_k)} \prod_{j\in \mathbb{N}_n\backslash\gamma_+(\bz)}^{} {(\bz^{e_{i_j}})}^{2\beta'_{j}(i_j)} \ge 0$, and the sum of non-positive terms is again non-positive.
\item[(ii)] If $\sum_{j}^{}\beta_j$ is odd then all such terms are non-negative, that is, \begin{align}\label{positive} \varGamma^{(+)}=\underset{\ge 0}{\underbrace{\bigg(\sum_{i_1 \cdots, i_d =1}^{n} (\delta_{|\sigma(k)|\alpha_{k}})(\delta_{|\sigma(j)|\beta_{j}})a_{i_1\cdots i_d}\prod_{k\in \gamma_+(\bz)}^{} {(\bz^{e_{i_k}})}^{\alpha_{k}(i_k)} \prod_{j\in \mathbb{N}_n\backslash\gamma_+(\bz)}^{} {(\bz^{e_{i_j}})}^{\beta_{j}(i_j)}\bigg)}} \end{align} since the two factors $a_{i_1i_2 \cdots i_d}$ and $\prod_{j\in \mathbb{N}_n\backslash\gamma_+(\bz)}^{} {(\bz^{\be_{i_j}})}^{\beta_{j}(i_j)}$ are non-positive while the third factor $\prod_{k\in \gamma_+(\bz)}^{} {(\bz^{e_{i_k}})}^{\alpha_{k}(i_k)}$ is non-negative; hence the product of these three factors is non-negative, and the sum of non-negative terms is again non-negative. \end{itemize} \end{small} Summarizing equation (\ref{PsdCondition}) by incorporating the information given in equations (\ref{diagonal}), (\ref{negative}) and (\ref{positive}), we have: \begin{align}\label{=13} f_\mathcal{A}(\bz)&=\varOmega^{(+)}+\varLambda^{(-)}+\varGamma^{(+)}\\ &=\varOmega-\varLambda+\varGamma ~~\text{where}~\varLambda^{(-)}=-\varLambda~~\text{and}~\varOmega,\varLambda, \varGamma \in \Re_+ \end{align} We define a mapping $\phi:\Re^n \rightarrow\Re^n_+$ as follows: \begin{align*} \phi(\bz)=\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix} ~~where~~a_{ii}=\bigg \lbrace \begin{array}{cc} ~ ~1 & if ~i \in \gamma_+(\bz) \\ -1 & if ~i \in \gamma_-(\bz) \end{array} \end{align*} Thus, corresponding to each $\bz\in\Re^n$ there exists a unique $\phi(\bz) \in \Re^n_+$ (componentwise, $\phi(\bz)=|\bz|$). Evaluating (\ref{CopCondition}) at $\phi(\bz)$ and splitting it into non-negative and non-positive forms, we obtain: \begin{align}\label{19} f_\mathcal{A}(\phi(\bz))=\sum_{i=1}^{n} \underset{\ge 0}{\underbrace{{a_{(i)^d}}}} ~ \underset{\ge
0}{\underbrace{{(\phi(z_i))^d}}} + \sum_{i_1,i_2 \cdots, i_d =1}^{n} \underset{\le 0}{\underbrace{a_{i_1i_2 \cdots i_d}}} ~ \underset{\ge 0}{\underbrace{\phi(z_{i_1}) \phi(z_{i_2}) \cdots \phi(z_{i_d})}} \end{align} For even order $d$ the first and second sums in (\ref{19}) are non-negative and non-positive respectively; therefore we have \begin{align}\label{14} \varOmega^{(+)}=\sum_{i=1}^{n} & a_{(i)^d}{\phi(z_i)}^d ~ ~ ~ and ~ ~ ~~\varPsi^{(-)}= \sum_{i_1,i_2 \cdots, i_d =1}^{n} a_{i_1i_2 \cdots i_d} \phi(z_{i_1})\phi(z_{i_2}) \cdots \phi(z_{i_d}) \end{align} Clearly all the terms in $\varPsi^{(-)}$ are non-positive, that is, $\varPsi^{(-)}=-\varPsi$ where $\varPsi \in \Re_+$; thus, writing $\bx=\phi(\bz)\in\Re^n_+$, we have \begin{align*} & f_\mathcal{A}(\bx)=\varOmega^{(+)}+\varPsi^{(-)}\\ \implies & f_\mathcal{A}(\bx)=\varOmega-\varPsi ~~\text{where}~~\varOmega,\varPsi \in \Re_+ \\ \implies & f_\mathcal{A}(\bx)=\varOmega-(\varLambda+\varGamma) ~~\because~\varPsi=\varLambda+\varGamma \end{align*} Clearly, we have \begin{align*} \varLambda-\varGamma \le & ~ ~\varLambda+\varGamma \\ \implies -(\varLambda - \varGamma )\ge & -(\varLambda+\varGamma) \\ \implies \varOmega-(\varLambda-\varGamma) \ge & ~\varOmega-(\varLambda+\varGamma) ~~\because~\varOmega\in \Re_+\\ \implies f_\mathcal{A}(\bz)=\varOmega-(\varLambda-\varGamma )\ge & \varOmega-(\varLambda+\varGamma) =f_\mathcal{A}(\bx) \ge 0 ~~ \text{for}~\bx=\phi(\bz)\in \Re^n_+~\text{and}~\bz\in \Re^n \end{align*} Thus $f_\mathcal{A}(\bz) \ge 0$ for all $\bz\in \Re^n$, which means $\mathcal{A}\in \mathcal{S}_{n,d}^+$. \\ The converse is straightforward, since every positive semidefinite tensor is copositive. \eop \\ We remark that the above theorem is of particular importance for the case $d=2$, since it exhibits a polynomial-time solvable special class inside a problem that is NP-hard in general.
However, for $d\ge4$ this tractability no longer carries over, since determining whether a tensor is PSD is itself an NP-hard problem \cite{Hillar:2013:MTP:2555516.2512329}. \section{Approximation Hierarchies for Copositive Cone of Tensors} The copositive cone of tensors is not computationally tractable. To solve optimization programs that involve such cones, we replace the copositive (completely positive) cone by its approximation hierarchies, which yield an approximate optimal value of the original program. In this section we present several inner and outer approximations for the copositive cone of tensors. \subsection{Inner Approximation Hierarchies for Copositive Cone of Tensors} In the first part of this section, we present the polynomial-based inner approximation hierarchies $\mathcal{C}_{n,d}^{(r)}$ and $\mathcal{K}_{n,d}^{(r)}$ for the copositive cone of tensors. Secondly, we present the inner approximations $\mathcal{I}_{n,d}^{\mathcal{P}}$ for the copositive cone based on a simplicial partition $\mathcal{P}$ of the simplex $\Delta$. In the last part, the containment relations among the above-mentioned inner approximations are discussed.
\subsubsection{Inner Approximations Based on Polynomial Conditions} For any $\bx\in\Re^n_+$ we may write ${\bx}=\by \circ \by$ for some $\by\in\Re^n$, where $\circ$ denotes the componentwise product; thus, for the polynomial $f_\mathcal{A}(\bx)$ of degree $d$ in $n$ variables defined in (\ref{2}), the copositivity condition for a tensor $\mathcal{A}$ can be represented as follows: \begin{align} f_\mathcal{A}{(\by)} &=\bigg\langle \mathcal{A}, {\mathcal{T}_d(\by \circ \by)} \bigg\rangle \\ &= \bigg(\sum_{i_1, \ldots, i_d =1}^n a_{{i_1}{i_2} \ldots i_d}{{ y^2_{i_1}}} { {y^2_{i_2}}}\cdots { {y^2_{i_d}}}\bigg) \ge 0 ~~ \forall~ \by\in\Re^n \end{align} In the subsequent part of this section, we adapt the idea of Parrilo \cite{Parrilo:2013:CAG:2465506.2466575} and of De Klerk and Pasechnik \cite{deklerk2002} and consider the sequence of higher-degree polynomials $\lbrace P^{(r)}(\by)\rbrace _{r\in \mathbb{N}_{0}}$ given by \begin{align}\label{sos} P^{(r)}\big(\by\big)=f_\mathcal{A}{(\by)} \bigg(\sum_{k=1}^{n}y^2_k\bigg)^r~~ \text{for all} ~~\by\in\Re^n \end{align} \noindent Based on this representation, and by means of algebraic geometry, Parrilo \cite{Parrilo:2013:CAG:2465506.2466575} gave a sufficient condition for a polynomial to be non-negative everywhere, which in turn yields an approximation of the copositive cone known as the Parrilo cone. The tensor analogue of the Parrilo cone, denoted by $\mathcal{K}_{n,d}^{(r)}$, consists of those tensors $\mathcal{A} \in {\mathcal{S}}_{n,d}$ for which the associated polynomial in (\ref{sos}) has a sum-of-squares (SOS) decomposition.
That is, \[\mathcal{K}_{n,d}^{(r)}=\bigg\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}: P^{(r)}(\by)=f_\mathcal{A}{(\by)} \left(\sum_{k=1}^{n}y^2_k\right)^{(r)} \text{ has an SOS decomposition} \bigg\rbrace \] Clearly, $\mathcal{K}_{n,d}^{(r)} \subseteq \mathcal{K}_{n,d}^{(r+1)}$ for $r\in\mathbb{N}_{0}$, since \begin{align*} P^{(r+1)}(\by)&=f_\mathcal{A}{(\by)} \left(\sum_{k=1}^{n}y^2_k\right)^{(r+1)}\\ &=\left(\sum_{k=1}^{n}y^2_k\right) \left(f_\mathcal{A}{(\by)}\left(\sum_{k=1}^{n}y^2_k\right)^{(r)}\right)\\ &= \left(\sum_{k=1}^{n}y^2_k\right)P^{(r)}(\by), \end{align*} and the product of an SOS polynomial with the SOS polynomial $\sum_{k=1}^{n}y^2_k$ is again SOS. For $r=0$ we have \begin{align*} \mathcal{K}_{n,d}^{(0)}&=\bigg\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}: P^{(0)}(\by)=f_\mathcal{A}{(\by)} \left(\sum_{k=1}^{n}y^2_k\right)^{(0)} \text{ admits an SOS decomposition} \bigg\rbrace \\ &=\bigg\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}: f_\mathcal{A}{(\by)} \text{ admits an SOS decomposition} \bigg\rbrace \end{align*} If $f_\mathcal{A}{(\by)}$ admits an SOS decomposition then it is non-negative on $\Re^n$, and hence $\mathcal{A}$ is copositive; however, a non-negative polynomial need not admit an SOS decomposition in general (see Counterexample 4.5 of \cite{Parrilo00structuredsemidefinite}). Thus the following inclusion relations hold: \[\mathcal{K}_{n,d}^{(0)}\subseteq \mathcal{K}_{n,d}^{(1)}\subseteq \cdots \subseteq \mathcal{C}_{n,d} \] Another sufficient condition for a polynomial to be non-negative was given by Bomze and De Klerk \cite{bomze2002solving} (cf. also De Klerk and Pasechnik \cite{deklerk2002}); it exploits the coefficients of the polynomial. The tensor analogue of this condition leads us to define an approximation cone $\mathcal{C}_{n,d}^{(r)}$ for the copositive cone, consisting of those tensors $\mathcal{A}\in \mathcal{S}_{n,d}$ for which the associated polynomial in (\ref{sos}) has non-negative coefficients.
That is, \[\mathcal{C}_{n,d}^{(r)}=\bigg\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}: P^{(r)}(\by)=f_\mathcal{A}{(\by)} \left(\sum_{k=1}^{n}y^2_k\right)^{(r)} \text{ has non-negative coefficients} \bigg\rbrace\] For an arbitrary $\mathcal{A} \in \mathcal{C}_{n,d}^{(r)}$, since $P^{(r+1)}(\by)= \big(\sum_{k=1}^{n}y^2_k\big)P^{(r)}(\by)$ and multiplication by $\sum_{k=1}^{n}y^2_k$ preserves non-negativity of the coefficients, it is straightforward to see that $\mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{C}_{n,d}^{(r+1)}$. Moreover, if $\mathcal{A} \in \mathcal{C}_{n,d}^{(r)}$ then its associated polynomial $P^{(r)}(\by)$ has non-negative coefficients; since every monomial of $P^{(r)}(\by)$ is even, each term $c\,\by^{2\btheta}$ with $c \ge 0$ equals $\big(\sqrt{c}\,\by^{\btheta}\big)^2$, so $P^{(r)}(\by)$ admits an SOS decomposition and thus $\mathcal{A} \in \mathcal{K}_{n,d}^{(r)}$ for each $r\in\mathbb{N}_{0}$. Therefore we have \[\mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{K}_{n,d}^{(r)} \ \forall \ r\in\mathbb{N}_{0} \] Both cones $\mathcal{C}_{n,d}^{(r)}$ and $\mathcal{K}_{n,d}^{(r)}$ are inner approximations of $\mathcal{C}_{n,d}$, that is, \[int\big(\mathcal{C}_{n,d}\big)\subseteq \bigcup_{r\in \mathbb{N}_{0}} \mathcal{K}_{n,d}^{(r)} \subseteq \mathcal{C}_{n,d} ~ ~ \text{and} ~ ~ int\big(\mathcal{C}_{n,d}\big)\subseteq \bigcup_{r\in \mathbb{N}_{0}} \mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{C}_{n,d}\] We now describe the procedure for calculating the coefficients of the polynomials $P^{(r)}(\by)$, which have degree $2(d+r)$, generalizing the characterization given by Bomze and De Klerk in \cite{bomze2002solving} to tensors of order $d$ and dimension $n$. For an arbitrary $\balpha \in \Re^n$ we define the multinomial coefficient as follows: \begin{align}\label{c(m)} c(\balpha)=\begin{cases} \begin{matrix} \frac{\|\balpha\|_{1}!}{\prod_{i}^{n}(\alpha_i!)} & if~\balpha\in\mathbb{N}^n_0 \\ 0 & \ \ if~\balpha\in \Re^n \backslash \mathbb{N}^n_0 \end{matrix} \end{cases} \end{align} where, for any non-negative integer $k$, $k!$ denotes the factorial of $k$.
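To make the coefficient test behind $\mathcal{C}_{n,d}^{(r)}$ concrete, the following sketch (an illustrative implementation of ours, not part of the development above; all function names are our own) enumerates the exponent vectors $\btheta$ with $\|\btheta\|_1=r+d$, assembles each coefficient of $P^{(r)}(\by)$ from the multinomial coefficients $c(\cdot)$, and reports membership in $\mathcal{C}_{n,d}^{(r)}$:

```python
from itertools import product
from math import factorial

def c(alpha):
    """Multinomial coefficient c(alpha): ||alpha||_1! / prod(alpha_i!),
    and 0 if any component is negative."""
    if min(alpha) < 0:
        return 0
    num = factorial(sum(alpha))
    den = 1
    for a_i in alpha:
        den *= factorial(a_i)
    return num // den

def compositions(total, parts):
    """All non-negative integer vectors of length `parts` summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def in_C_r(a, n, d, r):
    """Membership test for C_{n,d}^{(r)}: every coefficient A_theta of
    P^{(r)}(y) = f_A(y) * (y_1^2 + ... + y_n^2)^r must be non-negative.
    `a` maps ordered index tuples (i_1,...,i_d) to tensor entries."""
    s = r + d
    for theta in compositions(s, n):
        A_theta = 0.0
        for idx in product(range(n), repeat=d):
            shifted = list(theta)
            for i in idx:
                shifted[i] -= 1          # theta - (e_{i_1} + ... + e_{i_d})
            A_theta += c(tuple(shifted)) * a[idx]
        if A_theta < 0:
            return False
    return True

# d = 2 example: A = [[1, -1/2], [-1/2, 1]] is PSD, hence copositive,
# but its negative off-diagonal entries keep it out of C^{(0)}.
A = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -0.5, (1, 0): -0.5}
print(in_C_r(A, n=2, d=2, r=0))  # False
print(in_C_r(A, n=2, d=2, r=1))  # True
```

For this matrix, one multiplication by $y_1^2+y_2^2$ already gives $P^{(1)}(\by)=y_1^6+y_2^6$, whose coefficients are all non-negative, so the tensor enters the hierarchy at level $r=1$.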
Expanding the polynomial $P^{(r)}(\by)$ in (\ref{sos}) by the multinomial law, we have: \begin{align}\label{22} P^{(r)}({\by})&=f_\mathcal{A}{({\by})} \left(\sum_{k=1}^{n}y^2_k\right)^r\\ &=\bigg(\sum_{i_1, \ldots, i_d =1}^{n} a_{i_1i_2 \ldots i_d}{{ y^2_{i_1}}} { {y^2_{i_2}}}\cdots { {y^2_{i_d}}}\bigg)\bigg(\sum_{\balpha\in\mathbb{I}^n(r)}^{}c(\balpha) \by^{2\balpha}\bigg)\\ &=\bigg(\sum_{i_1, \ldots, i_d =1}^n a_{i_1 \ldots i_d}{{ \by}^{2\be_{i_1}}} { {\by}^{2\be_{i_2}}}\cdots { {\by}^{2\be_{i_d}}}\bigg)\bigg(\sum_{\balpha\in\mathbb{I}^n(r)}^{}c(\balpha) \by^{2\balpha}\bigg)\\ &=\sum_{\balpha\in\mathbb{I}^n(r)}^{} \sum_{i_1, \ldots, i_d =1}^n c(\balpha) a_{i_1 \ldots i_d} \by^{2(\balpha+\be_{i_1}+\be_{i_2}+\cdots+\be_{i_d})} \end{align} Let $\btheta=\balpha+\be_{i_1}+\be_{i_2}+\cdots+\be_{i_d}$, and abbreviate \[\btheta\big(i_1,i_2,\cdots,i_d\big)=\btheta-(\be_{i_1}+\be_{i_2}+\cdots+\be_{i_d})\] Setting $s=r+d$ and collecting, for each $\btheta\in\mathbb{I}^n(s)$, all terms of the last identity in (\ref{22}) with the same exponent, it follows that \begin{align}\label{multicoeff} P^{(r)}(\by)=\sum_{\btheta\in\mathbb{I}^n(s)}^{} \bigg(\sum_{i_1, \ldots, i_d =1}^{n} c\big(\btheta(i_1,i_2,\cdots,i_d)\big) a_{i_1i_2 \ldots i_d}\bigg) \by^{2(\btheta)} \end{align} Denoting the coefficients in (\ref{multicoeff}) by $\mathcal{A}_{\btheta}$, we have \begin{align}\label{Am} \mathcal{A}_{\btheta}=\sum_{i_1, \ldots, i_d =1}^n c\big(\btheta(i_1,i_2,\cdots,i_d)\big) a_{i_1i_2 \ldots i_d} \end{align} The procedure for finding the coefficients $c\big(\btheta(i_1,i_2,\cdots,i_d)\big)$ of the multinomial in (\ref{Am}) is described in the following proposition.
\begin{proposition} Let $c\big(\btheta(i_1,i_2,\cdots,i_d)\big)$ be a coefficient of the multinomial $P^{(r)}(\by)$ in (\ref{multicoeff}); then we have: \begin{align*} c\big(\btheta(i_1,\cdots,i_d)\big)=\bigg\lbrace \begin{matrix} c\big(\btheta-d\be_{i}\big);& i=i_j~:~\forall~j\in \mathbb{N}_{d}~and~i\in\mathbb{N}_{n}\\ 0; & if ~~ i=i_1=\cdots=i_k \ne i_{k+1}\ne \cdots \ne i_{d}\\ c\big(\btheta-(\be_{i_1}+\cdots+\be_{i_d})\big);&if ~~ i_1 \ne i_2 \ne \cdots \ne i_{d}\\ \end{matrix} \end{align*} \end{proposition} \noindent \Pf Since $\btheta=\balpha+\be_{i_1}+\be_{i_2}+\cdots+\be_{i_d}$, we have $\|\btheta\|_{1}=r+d$. By (\ref{c(m)}), when an index $i$ is repeated $d$ times we have $c\big(\btheta(i,i,\cdots,i)\big)\ne0$, as $\omega_i \ge d$. However, when some index $i$ is repeated $k$ times with $1<k<d$, then $\omega_i=r_{i}+k$, so there exists an index $j\ne i$ with $m_j\le0$; since a component of $\btheta(i_1,\cdots,i_d)$ is then negative, such coefficients are always zero, that is, \begin{align*} c\big(\btheta(i_1,\cdots,i_d)\big)=0 ~~ if ~~ i=i_1=\cdots=i_k \ne i_{k+1}\ne \cdots \ne i_{d} \end{align*} Finally, we consider the case $i_1 \ne i_2 \ne \cdots \ne i_{d}$. Here $m_{j}=r_{j}-1$ for all $j \in \mathbb{N}_d$; thus \begin{align*} c\big(\btheta(i_1,\cdots,i_d)\big)= c\big(\btheta-(\be_{i_1}+\cdots+\be_{i_d})\big). \end{align*} \eop Analogous to the matrix case, the vector of all diagonal entries of $\mathcal{A}$ is denoted by $diag(\mathcal{A})\in\Re^n$, i.e. \[diag(\mathcal{A})=\begin{bmatrix} a_{(i_1)^d}\\a_{(i_2)^d}\\ \vdots \\a_{(i_n)^d} \end{bmatrix} \] Conversely, $Diag(\btheta)$ denotes the diagonal tensor of order $d$ and dimension $n$ having the components of $\btheta$ on its diagonal. By using the definition (\ref{c(m)}) of $c(\btheta)$, the representation of $\mathcal{A}_{\btheta}$ given in (\ref{multicoeff}) is simplified considerably in the following theorem.
\begin{theorem}\label{propo1} Let $\mathcal{A}\in \mathcal{S}_{n,d}$ be any $d^{th}$-order $n$-dimensional symmetric tensor. Then, for $ \btheta\in \mathbb{I}^n(s)$, we have: \begin{small} \begin{align*}\label{propo} \mathcal{A}_{\btheta}=&\frac{c\big(\btheta\big)}{s(s-1)(s-2)\cdots(s-(d-1))} \bigg[\bigg\langle \mathcal{A},\btheta^d \bigg\rangle + \\ &\sum_{k=1}^{(d-1)}(-1)^{k} \bigg(\sum_{ \prod_{j=1}^{k}{\theta_{t_j}}\in \{\sigma (i_1i_2\cdots i_k) : \{i_j\}_{j=1}^{k}\subseteq \mathbb{N}_{d-1} \}}^{} \big(\prod_{j=1}^{k}{\theta_{t_j}}\big)\bigg)\bigg \langle \mathcal{A},Diag~\underset{(d-k)-times}{\underbrace{ \big(\btheta\circ\cdots\circ\btheta\big)}}\bigg \rangle \bigg] \end{align*} \end{small} \end{theorem} \noindent \Pf For $i_1=i_2=\cdots=i_d=i$ the coefficients $c\big(\btheta(i,i,\cdots,i)\big)$ are zero if $\omega_i<d$, whereas for $i_1 \ne i_2\ne\cdots\ne i_d$ the coefficients $c\big(\btheta(i_1,i_2,\cdots,i_d)\big)=0$ if $\prod \omega_i=0$. Thus the nonzero coefficients in (\ref{multicoeff}) occur only for certain $(i_1,i_2,\cdots,i_d)$ tuples depending upon $\btheta$.
Therefore, after some straightforward calculations, we have the following simplification of (\ref{Am}): \begin{small} \begin{align*} \mathcal{A}_{\btheta}&=\sum_{i_1, \ldots, i_d =1}^{n} c\big(\btheta(i_1,i_2,\cdots,i_d)\big) a_{i_1i_2 \ldots i_d}\\ &=\sum_{{i_1,i_2, \ldots, i_d} =1 }^{n}{\frac{\|\btheta-(\theta_1 \be_{i_1}+\cdots+\theta_d \be_{i_d})\|_{1}!} {\prod_{i}^{n}(\omega_i-{\theta_i})!}} a_{i_1i_2\ldots i_d};~~with~ \sum_{i}^{}\theta_i=d ~\forall~\theta_i \in \mathbb{Z}_{d+1}\\ &=\sum_{i=1}^{n}{\frac{\|\btheta-{d}\be_{i}\|_{1}!} {(\omega_i-{d})!\prod_{k \in \mathbb{N}_{n}\backslash \{i\}}^{}(\omega_k)!}} a_{(i)^d} ~+\\ & \sum_{ \begin{tiny} \begin{array}{c} i_1,i_{j}=1 \\ i_j \in \lbrace i_j:i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1\} \rbrace \end{array} \end{tiny}}^{n}\frac{\|\btheta-\be_{i_1}-(d-1)\be_{i_j}\|_{1}!} {(\omega_{i_1}-{1})!{(\omega_{i_j}-{(d-1)})!\prod_{k \in \mathbb{N}_{n}\backslash \{1,j\}}^{}(\omega_k)!}} a_{i_1(i_j)^{(d-1)} } + \\ &\sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2\} \rbrace \end{array} \end{tiny} }^{n}\frac{\|\btheta-\be_{i_1}-\be_{i_2}-(d-2)\be_{i_j}\|_{1}!} {(\omega_{i_1}-{1})!(\omega_{i_2}-{1})!{(\omega_{i_j}-{(d-2)})!\prod_{k \in \mathbb{N}_{n}\backslash \{1,2,j\}}^{}(\omega_k)!}} a_{i_1i_{2}(i_j)^{(d-2)} }+\cdots \\ &+\sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,\cdots,i_{(d-1)},i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2,\cdots,d-1\} \rbrace \end{array} \end{tiny}}^{n}\frac{\|\btheta-\be_{i_1}-\cdots-\be_{i_d}\|_{1}!} {\prod_{k=1}^{d} (\omega_{k}-{1})!\prod_{k \in \mathbb{N}_{n}\backslash \{1,2,\cdots,d-1\} }^{}(\omega_k)!} a_{i_1i_2 \ldots i_d} \end{align*} Using $\|\btheta\|_{1}=s$, we have $\|\btheta-(\theta_1 \be_{i_1}+\cdots+\theta_d \be_{i_d})\|_{1}=(s-d)$, which further simplifies the previous
equation as follows: \begin{align*} \mathcal{A}_{\btheta}&=\frac{c(\btheta)}{s(s-1)(s-2)\cdots(s-(d-1))} \Bigg[\sum_{i=1}^{n}{\bigg(\prod_{k=0}^{d-1} (\omega_i-k)}\bigg) a_{(i)^d} + \\ &~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ~ \sum_{ \begin{tiny} \begin{array}{c} i_1,i_{j}=1 \\ i_j \in \lbrace i_j:i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1\} \rbrace \end{array} \end{tiny}}^{n}{\bigg(\omega_{i_1} \prod_{k=0}^{d-2} (\omega_{i_j}-k)\bigg)} a_{i_{1}(i_j)^{(d-1)}}+\\ &~~ \sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2\} \rbrace \end{array} \end{tiny} }^{n}{ \bigg( \omega_{i_1} \omega_{i_2}\prod_{k=0}^{d-3} (\omega_{i_j}-k)\bigg) {a_{{i_1}i_{2}(i_j)^{(d-2)}}} } + \cdots +\\ &~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,\cdots,i_{(d-1)},i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2,\cdots,d-1\} \rbrace \end{array} \end{tiny}}^{n} \bigg(\prod_{k=1}^{d} \omega_{i_k} \bigg) {a_{i_1i_2 \ldots i_d} } \Bigg]\\ \end{align*} since by multi-coefficient formula (\ref*{c(m)}) it is evident that $c\big(\btheta(i_1,i_2,\cdots,i_d)\big)=0$, as some of the components of $\btheta(i_1,i_2,\cdots,i_d)$ can be negative, or the product of its components is zero. Therefore, the coefficients of $a_{i_{1},(i_j)^{(d-1)}},a_{i_{1},i_2,(i_j)^{(d-2)}},\cdots, a_{i_{1},i_2,\cdots,(i_{(d-1)})} $ vanish. 
Thus, we have the following simplified equation: \begin{align*} \mathcal{A}_{\btheta}&=\frac{c(\btheta)}{s(s-1)(s-2)\cdots(s-(d-1))} \Bigg[\sum_{i=1}^{n}{\bigg(\prod_{k=0}^{d-1} (\omega_i-k)}\bigg) a_{(i)^d} + \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,\cdots,i_{(d-1)},i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2,\cdots,d-1\} \rbrace \end{array} \end{tiny}}^{n} \bigg(\prod_{k=1}^{d} \omega_{i_k} \bigg) {a_{i_1i_2 \ldots i_d} } \Bigg]\\ &=\frac{c\big(\btheta\big)}{s(s-1)(s-2)\cdots(s-(d-1))}\bigg[\sum_{i}^{}\bigg( \omega_i\big(\omega_i-1\big)\cdots\big(\omega_i-(d-1)\big)\bigg) a_{(i)^d} +\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,\cdots,i_{(d-1)},i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2,\cdots,d-1\} \rbrace \end{array} \end{tiny}}^{n}\bigg(\omega_{i_1}\omega_{i_2}\cdots \omega_{i_d}\bigg)a_{i_1i_2 \ldots i_d}\bigg]\\ &=\frac{c\big(\btheta\big)}{s(s-1)(s-2)\cdots(s-(d-1))}\bigg[\sum_{ \begin{tiny} \begin{array}{c} i_1,i_2,\cdots,i_{(d-1)},i_j=1 \\ i_j \in \lbrace i_j : i_j=i_k~\forall~ j,k \in \mathbb{N}_{d} \backslash \{1,2,\cdots,d-1\} \rbrace \end{array} \end{tiny}}^{n}\big(\omega_{i_1}\cdots \omega_{i_d}\big)a_{i_1 \cdots i_d}+\\ &\sum_{i=1}^{n}\big(\omega_i^d\big)a_{(i)^d} + \sum_{i=1}^{n} \bigg(\sum_{k=1}^{(d-1)}(-1)^{k} \bigg(\sum_{ \big(\prod_{j=1}^{k}{\theta_{t_j}}\big)\in \{\sigma (i_1i_2\cdots i_k) : \{i_j\}_{j=1}^{k}\subseteq \mathbb{N}_{d-1} \}}^{} \big(\prod_{j=1}^{k}{\theta_{t_j}}\big)\bigg){\omega_i}^{(d-k)} \bigg){a_{(i)^d}}\bigg]\\ &=\frac{c\big(\btheta\big)}{s(s-1)(s-2)\cdots(s-(d-1))}\bigg[\bigg\langle \mathcal{A},\btheta^d \bigg\rangle +\\ &~~ \bigg(\sum_{k=1}^{(d-1)}(-1)^{k} \bigg(\sum_{ {(\prod_{j=1}^{k}{\theta_{t_j}})}\in \{\sigma (i_1i_2\cdots i_k) : \{i_j\}_{j=1}^{k}\subseteq \mathbb{N}_{d-1} \}}^{} \big(\prod_{j=1}^{k}{\theta_{t_j}}\big)\bigg) \bigg\langle 
\mathcal{A},Diag\underset{(d-k)-times}{\underbrace{ \big(\btheta\circ\btheta\circ\cdots\circ\btheta\big)}}\bigg\rangle\bigg)\bigg] \end{align*} \end{small} \eop For the sake of brevity, we denote the coefficient of the inner product $\bigg\langle \mathcal{A},Diag\underset{(d-k)-times}{\underbrace{ \big(\btheta\circ\btheta\circ\cdots\circ\btheta\big)}} \bigg\rangle$ in the last identity by $\beta_k$, that is, \[\beta_k=\bigg(\sum_{ {(\prod_{j=1}^{k}{\theta_{t_j}})}\in \lbrace\sigma (i_1i_2\cdots i_k): \{i_j\}_{j=1}^{k}\subseteq \mathbb{N}_{d-1} \rbrace }^{} \big(\prod_{j=1}^{k}{\theta_{t_j}}\big)\bigg)~~where~~k=1,2,\cdots,(d-1) \] Notice from the above that $P^{(r)}(\by)$ has non-negative coefficients precisely when $\mathcal{A}_{\btheta} \ge 0$ for all $\btheta\in\mathbb{I}^n(s)$. Thus we arrive at the following theorem. \begin{theorem} For $r\in \mathbb{N}_{0}$, the cone $\mathcal{C}_{n,d}^{(r)}$ can be characterized as: \begin{small} \begin{align*} \mathcal{C}_{n,d}^{(r)}=\left\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}:\bigg\langle \mathcal{A},\bigg(\btheta^d+\sum_{k=1}^{(d-1)}(-1)^{k} \beta_k Diag~\big(\underset{(d-k)-times}{\underbrace{ \btheta\circ\cdots\circ \btheta}} \big)\bigg)\bigg\rangle\ge 0~~\forall~ \btheta\in \mathbb{I}^n(s)\right\rbrace \end{align*} \end{small} \end{theorem} \noindent \Pf This is an immediate consequence of (\ref{multicoeff}) and Theorem \ref{propo1}. \eop \\ For $r=0$ we have \begin{small} \begin{align*} \mathcal{C}_{n,d}^{(0)}=\left\lbrace \mathcal{A} \in {\mathcal{S}}_{n,d}:\bigg\langle \mathcal{A} \ ,\bigg(\btheta^d+\sum_{k=1}^{(d-1)}(-1)^{k} \beta_k Diag~\big(\underset{(d-k)-times}{\underbrace{ \btheta\circ\cdots\circ \btheta}} \big)\bigg)\bigg\rangle\ge 0~~\forall~ \btheta\in \mathbb{I}^n(d)\right\rbrace \end{align*} \end{small} Clearly, if $\mathcal{A} \in \mathcal{C}_{n,d}^{(0)}$ then the entries of $\mathcal{A}$ must be non-negative.
Thus $\mathcal{A} \in \mathcal{N}_{n,d}$; that is, \[\mathcal{N}_{n,d}=\mathcal{C}_{n,d}^{(0)}\subseteq\mathcal{C}_{n,d}^{(1)}\subseteq \cdots \subseteq \mathcal{C}_{n,d} \] \subsubsection{Inner Approximations Based on Simplicial Partition} Let $\|\cdot\|_{1}$ denote the 1-norm on $\Re^n_+.$ The set $\Delta=\{\bx\in\Re^n_+:\|\bx\|_1=1\}$ is known as the standard simplex. A tensor $\mathcal{A}$ is copositive if and only if $\langle \mathcal{A}, \mathcal{T}_{d}(\bx) \rangle \ge 0$ for all $\bx \in \Delta$. In the subsequent part of this section we state conditions for the non-negativity of the polynomial $f_\mathcal{A}(\bx)$ over a simplex. A convenient way to express polynomials over a simplex $\Delta=conv\{\bv_1,\bv_2,\cdots,\bv_n\}$ is via barycentric coordinates; that is, $\bx=\sum_{i=1}^{n}{\lambda_i \bv_i}$ where $\lambda_1,\lambda_2,\cdots,\lambda_n\ge 0$ and $\sum_{i=1}^{n}{\lambda_i }=1$ for $\bx \in \Delta$. The representation of the polynomial form in barycentric coordinates is given as follows: \begin{align*} f_\mathcal{A}{({\bx})} &=\bigg\langle \mathcal{A} \ , {\mathcal{T}_{d}\bigg(\sum_{i=1}^{n}\lambda_i \bv_i\bigg)} \bigg\rangle \\ &= \sum_{i_1, \ldots, i_d =1}^n {\lambda_{i_1}\lambda_{i_2} \cdots \lambda_{i_d}} \bigg\langle \mathcal{A} \ , \bv_{i_1}\otimes \bv_{i_2}\otimes \cdots\otimes \bv_{i_d} \bigg\rangle \end{align*} For the non-negative coordinates $\lambda_i$ and the vertices of the simplex $\Delta$, we state the following lemma. \begin{lemma}\label{lemma1} Let $\Delta=conv\{\bv_1,\bv_2,\cdots,\bv_n\}$ be a simplex.
If $\big\langle \mathcal{A}, \bv_{i_1}\otimes \cdots\otimes \bv_{i_d}\big\rangle \ge 0$ for all $i_1, \cdots, i_d \in \{1,2,\cdots,n\}$, then $\bigg\langle \mathcal{A},\mathcal{T}_d(\bx) \bigg\rangle \ge 0$ for all $\bx\in\Delta.$ \end{lemma} \noindent \Pf Let $\Delta=conv\{\bv_1,\bv_2,\cdots,\bv_n\}$; then each $\bx\in\Delta$ can be expressed as \[\bx=\sum_{i=1}^{n}{\lambda_i \bv_i} ~~with~~\lambda_i\ge0~~and~~\sum_{i=1}^{n}{\lambda_i }=1\] Hence \begin{align*} \bigg\langle \mathcal{A},\mathcal{T}_d(\bx)\bigg \rangle&=\bigg\langle \mathcal{A},\mathcal{T}_{d}\bigg(\sum_{i=1}^{n}\lambda_i\bv_i\bigg) \bigg\rangle \\ &= \sum_{i_1, \ldots, i_d =1}^n {\lambda_{i_1}\lambda_{i_2} \cdots \lambda_{i_d}} \bigg\langle \mathcal{A}, \bv_{i_1}\otimes \bv_{i_2}\otimes \cdots\otimes \bv_{i_d}\bigg\rangle \ge 0 \end{align*} since both ${\lambda_{i_1}\lambda_{i_2} \cdots \lambda_{i_d}}$ and $\langle \mathcal{A}, \bv_{i_1}\otimes \bv_{i_2}\otimes \cdots\otimes \bv_{i_d}\rangle$ are non-negative. \eop \\ \noindent For the standard simplex $\Delta^s=conv\{\be_1,\be_2,\cdots,\be_n\}$, Lemma \ref{lemma1} implies that the tensor $\mathcal{A}$ is copositive if $\big\langle \mathcal{A} \ , \be_{i_1}\otimes\be_{i_2}\otimes\cdots\otimes\be_{i_d} \big\rangle=a_{i_1i_2 \cdots i_d}\ge 0$ for all indices, which establishes the fact that any entrywise non-negative tensor is copositive. We now turn to simplicial partitions, defined as follows. \\ Let $\Delta$ be a simplex in $\Re^n$. A family $\mathcal{P}=\{\Delta^1,\Delta^2,\cdots,\Delta^m\}$ of sub-simplexes satisfying \begin{align*} \Delta=\bigcup_{i=1}^{m}\Delta^i ~ ~ \text{and} ~ ~ int(\Delta^i) \bigcap int(\Delta^j)=\emptyset ~~for~~i \ne j \end{align*} is said to be a simplicial partition of $\Delta.$ The set of all vertices in $\mathcal{P}$ is denoted by $V_\mathcal{P}$ and the set of all edges in $\mathcal{P}$ by $E_\mathcal{P}$.
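Lemma \ref{lemma1}, applied to each member of a simplicial partition, already gives a practical sufficient test for copositivity: if $\big\langle \mathcal{A}, \bv_{i_1}\otimes\cdots\otimes \bv_{i_d}\big\rangle \ge 0$ for every $d$-tuple of vertices of every sub-simplex, then $f_\mathcal{A}$ is non-negative on all of $\Delta$, and hence, by homogeneity, on $\Re^n_+$. The following sketch (illustrative only; the function names and the numerical tolerance are our own choices) implements this check for small $n$ and $d$:

```python
from itertools import product

def tensor_apply(a, vecs):
    """<A, v_{i_1} (x) ... (x) v_{i_d}>: contract the tensor entries
    a[(j_1,...,j_d)] against the outer product of the given vectors."""
    n = len(vecs[0])
    d = len(vecs)
    total = 0.0
    for j in product(range(n), repeat=d):
        term = a[j]
        for k in range(d):
            term *= vecs[k][j[k]]
        total += term
    return total

def certifies_copositivity(a, d, simplices, tol=1e-12):
    """Sufficient test: apply Lemma 1 to every sub-simplex of a partition
    of the standard simplex.  A is certified copositive if the inner
    product above is non-negative for every d-tuple of vertices of every
    sub-simplex.  A False answer is inconclusive, not a refutation."""
    for verts in simplices:
        for tup in product(verts, repeat=d):
            if tensor_apply(a, tup) < -tol:
                return False
    return True

# d = 2 example: A = [[1, -1], [-1, 1]] is copositive, yet on the
# unrefined standard simplex conv{e1, e2} the test fails because
# <A, e1 (x) e2> = -1.  Bisecting the edge at u = (1/2, 1/2) makes
# every mixed product vanish, and both sub-simplices pass.
a = {(0, 0): 1.0, (0, 1): -1.0, (1, 0): -1.0, (1, 1): 1.0}
e1, e2, u = (1.0, 0.0), (0.0, 1.0), (0.5, 0.5)
print(certifies_copositivity(a, 2, [[e1, e2]]))          # False
print(certifies_copositivity(a, 2, [[e1, u], [u, e2]]))  # True
```

The example shows the typical behaviour: a copositive tensor that the coarse partition cannot certify becomes certifiable after a single refinement step.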
A convenient way to partition a simplex is the radial subdivision of $\Delta$: choose a point $\bu\in\Delta\backslash \{\bv_1,\bv_2,\cdots,\bv_n\}$, for instance the edge midpoint $\bu=\frac{\bv_{i}+\bv_{i+1}}{2}$. The sub-simplex $\Delta^i$ is obtained by replacing the vertex $\bv_i$ of $\Delta$ with $\bu$; that is, \begin{align*} \Delta^i=conv\{\bv_1,\cdots,\bv_{i-1},\bu,\bv_{i+1},\cdots,\bv_n\} \end{align*} For brevity of notation we define $\mathcal{T}_{d}(\bx,\by)=\bx^{\alpha} \otimes \by^{d-\alpha}$ for $\alpha\in\{1,\cdots,d-1\}$; conditions involving $\mathcal{T}_{d}(\bx,\by)$ are understood to hold for every such $\alpha$. We describe sufficient conditions for copositivity in the following theorem, which is a generalization of [Theorem 3, \cite{Bundfuss2009}]. \begin{theorem}\label{theorem1} Let $\mathcal{A}$ be a $d^{th}$-order, $n$-dimensional symmetric tensor, and let $\mathcal{P}=\{\Delta^1,\cdots,\Delta^m\}$ be a simplicial partition of $\Delta^s$; if \[ \bigg\langle \mathcal{A},{\mathcal{T}_{d}(\bu,\bv)} \bigg \rangle \ge 0 ~ \text{ for each pair} ~ \{\bu,\bv\}\in E_{\mathcal{P}} ~ \text{and }~ \bigg \langle \mathcal{A} , \mathcal{T}_{d}(\bv) \bigg \rangle\ge 0 ~ \forall ~ \bv\in V_\mathcal{P}\] then $\mathcal{A}$ is copositive. \end{theorem} \noindent \Pf Let $\mathcal{P}=\{\Delta^1, \Delta^2, \cdots,\Delta^m\}$ be a simplicial partition of $\Delta^s$. Then, for each $\bu,\bv \in \Delta^s$, there exist some $\Delta^i,\Delta^j\in\mathcal{P}$ such that $\bu \in \Delta^i$ and $\bv \in \Delta^j$, and by hypothesis, for all pairs of vertices in the partition $\mathcal{P}$ of the standard simplex $\Delta^s$, we have $\big \langle \mathcal{A},{\mathcal{T}_{d}(\bu,\bv)} \big \rangle \ge 0$.
The standard simplex $\Delta^s$ can be reached from $\Re^n_+\backslash\{\bm{0}\}$ by normalization: the mapping $ \phi:\Re^n_+\backslash\{\bm{0}\} \rightarrow \Delta^s $ defined as \begin{align*} \phi(\bx)=\frac{\sum_{i=1}^{n}{x_{i}}\be_{i}}{\|\bx\|_{1}}=\frac{\bx}{\|\bx\|_{1}} \end{align*} assigns to each $\bx \in \Re^n_+\backslash\{\bm{0}\}$ a unique point $\phi(\bx)\in\Delta^s$. Since $\mathcal{T}_{d}$ is homogeneous of degree $d$, \begin{align*} \bigg\langle\mathcal{A} \ , \ \mathcal{T}_{d} \big({\bx}\big) \bigg \rangle &=\|\bx\|_{1}^{d} \bigg\langle\mathcal{A} \ , \ \mathcal{T}_{d} \bigg(\frac{\bx}{\|\bx\|_{1}}\bigg) \bigg \rangle \\ &= \|\bx\|_{1}^{d} \bigg\langle\mathcal{A} \ \ , \ \mathcal{T}_{d} \big(\phi(\bx)\big) \ \bigg\rangle \ge 0 \end{align*} Thus $\big\langle\mathcal{A} \ , \ \mathcal{T}_{d} \big({\bx}\big) \big\rangle \ge 0$ for all $\bx \in \Re^n_+$, that is, $\mathcal{A}$ is a copositive tensor. \eop \\ \noindent We define the diameter $\delta(\mathcal{P})$ of the partition $\mathcal{P}$ as follows: \begin{align*} \delta(\mathcal{P}):=\underset{\{\bu,\bv\}\in E_\mathcal{P}}{\max} \|\bu-\bv\| \end{align*} As the diameter $\delta(\mathcal{P})$ tends to zero, the partition gets finer and finer, and eventually every tensor in the interior of the cone of strictly copositive tensors is captured. For the limiting case we state a necessary condition for a tensor to be strictly copositive.
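The radial subdivision introduced above, specialized to bisection of an edge at its midpoint, can be sketched as follows (illustrative code, not from the paper; names are hypothetical):

```python
import numpy as np

def radial_subdivision(verts, i, j):
    """Split the simplex conv(verts) at the midpoint u of the edge (v_i, v_j).

    Returns the two sub-simplices obtained by replacing v_i (resp. v_j)
    with u; their union is the original simplex and their interiors are
    disjoint, as required of a simplicial partition.
    """
    u = (verts[i] + verts[j]) / 2.0
    sub_i = [u if k == i else v for k, v in enumerate(verts)]
    sub_j = [u if k == j else v for k, v in enumerate(verts)]
    return sub_i, sub_j

verts = list(np.eye(3))                 # standard simplex in R^3
s1, s2 = radial_subdivision(verts, 0, 1)
u = (verts[0] + verts[1]) / 2.0
# The new vertex u appears in both sub-simplices; each keeps n vertices.
assert len(s1) == 3 and len(s2) == 3
assert any(np.allclose(v, u) for v in s1)
assert any(np.allclose(v, u) for v in s2)
```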
\begin{theorem}\label{theorem2} Let $ \mathcal{A}\in \mathcal{S}_{n,d}$ be a strictly copositive tensor. Then there exists an $\varepsilon>0$ such that every finite simplicial partition $\mathcal{P}$ of $\Delta$ with $\delta(\mathcal{P})\le\varepsilon$ satisfies \begin{align*} \bigg\langle \mathcal{A},{\mathcal{T}_{d}(\bu, \bv)}\bigg\rangle & \ge 0 ~~\forall~~\{\bu,\bv\}\in E_{\mathcal{P}},~~ \\ \bigg\langle \mathcal{A},\mathcal{T}_{d}(\bv) \bigg\rangle & \ge 0~~ \forall~~ \bv\in V_\mathcal{P}. \end{align*} \end{theorem} \noindent \Pf Let $\mathcal{A} \in \mathcal{S}_{n,d}$ be a strictly copositive tensor; then the associated polynomial form satisfies $f_\mathcal{A}(\bx) > 0$ for all $\bx \in \Re^n_+\backslash\{\bm{0}\}$. Since $\Delta$ is a compact subset of $\Re^n_+$, continuity implies that for each $\bx\in\Delta$ there exists an $\varepsilon_{\bx}>0$ such that \begin{align*} f_\mathcal{A}(\bx,\by)=\bigg\langle \mathcal{A},\mathcal{T}_{d}{(\bx,\by)} \bigg\rangle > 0 ~~\text{whenever}~~\by\in\Delta ~\text{and}~ \|\bx-\by\| \le \varepsilon_{\bx}. \end{align*} By uniform continuity of polynomials on the compact set $\Delta$, it follows that $\varepsilon:=\underset{\bx\in\Delta}{\inf}\varepsilon_{\bx}>0$.
For a simplicial partition $\mathcal{P}$ of the simplex $\Delta$ with $\delta(\mathcal{P}) \le \varepsilon$, choose an arbitrary sub-simplex $\Delta^k$ and $\bx^{(k)},\by^{(k)}\in\Delta^k$; then $\|\bx^{(k)}- \by^{(k)}\|\le\varepsilon$, and hence for all pairs of vertices $\bx^{(k)},\by^{(k)}$ of the partition we have \begin{align*} &f_\mathcal{A}(\bx^{(k)},\by^{(k)}) > 0,\\ \implies & f_\mathcal{A}(\bx,\by) \ge 0 ~~\forall~~\{\bx,\by\} \in E_\mathcal{P}~~\text{and}~~f_\mathcal{A}(\bx) \ge 0 ~~\forall~~\bx \in V_\mathcal{P}. \end{align*} \eop Consequently, for any partition $\mathcal{P}$, and in view of Theorems \ref{theorem1} and \ref{theorem2}, it is natural to define inner polyhedral approximations for the copositive cone $\mathcal{C}_{n,d}$ as follows: \begin{small} \begin{align*} \mathcal{I}_{n,d}^{\mathcal{P}}:=\bigg\lbrace\mathcal{A} \in {\mathcal{S}}_{n,d}: \bigg\langle \mathcal{A},{\mathcal{T}_{d}(\bu,\bv)} \bigg\rangle \ge 0 ~\forall~\{\bu,\bv\}\in E_{\mathcal{P}} ~\&~\bigg\langle \mathcal{A},\mathcal{T}_{d}(\bv) \bigg\rangle\ge 0 ~~\forall~~\bv\in V_{\mathcal{P} } \bigg\rbrace \end{align*} \end{small} \noindent The cone $\mathcal{I}_{n,d}^{\mathcal{P}}$ is the inner approximation corresponding to the partition $\mathcal{P}$. For two simplicial partitions $\mathcal{P}_1$ and $\mathcal{P}_2$ of the simplex $\Delta$, the partition $\mathcal{P}_2$ is said to be a refinement of $\mathcal{P}_1$ if for each sub-simplex $\Delta^k\in\mathcal{P}_1$ there exists a subset $\mathcal{P}_{\Delta^k}\subseteq \mathcal{P}_2$ which is a simplicial partition of $\Delta^k$. In the subsequent part of this section, we discuss several properties of $\mathcal{I}_{n,d}^{\mathcal{P}}$. \begin{lemma}\label{lemma2} The cone $\mathcal{I}_{n,d}^{\mathcal{P}}$ is an inner approximation for $\mathcal{C}_{n,d},$ that is, $\mathcal{I}_{n,d}^{\mathcal{P}}\subseteq \mathcal{C}_{n,d}$.
\end{lemma} \noindent \Pf Let $\mathcal{A} \in \mathcal{I}_{n,d}^{\mathcal{P}}$ be an arbitrary tensor. By definition, $\big\langle \mathcal{A},\mathcal{T}_{d}(\bu,\bv) \big\rangle \ge 0$ for all $\{\bu,\bv\}\in E_{\mathcal{P}}$ and $\big\langle \mathcal{A},\mathcal{T}_{d}(\bv) \big\rangle \ge 0$ for all $\bv\in V_{\mathcal{P}}$. Hence, by Theorem \ref{theorem1}, \begin{align*} f_\mathcal{A}(\bx) \ge 0 ~~\forall~~ \bx \in \Re^n_+. \end{align*} Hence $\mathcal{A}$ is a copositive tensor, that is, $\mathcal{I}_{n,d}^{\mathcal{P}}\subseteq \mathcal{C}_{n,d}$. \eop Next, consider a sequence $\{\mathcal{A}_k\}$ of tensors in $\mathcal{I}_{n,d}^{\mathcal{P}}$ converging to some $\mathcal{A}$. Each defining inequality is preserved in the limit, \begin{align*} \lim\limits_{k \rightarrow \infty}\bigg\langle\mathcal{A}_k,{\mathcal{T}_{d}(\bu,\bv)} \bigg\rangle = \bigg\langle\mathcal{A},{\mathcal{T}_{d}(\bu,\bv)} \bigg \rangle \ge 0 ~~\forall~\{\bu,\bv\}\in E_{\mathcal{P}}, \end{align*} and likewise at the vertices. Thus $\mathcal{A}\in \mathcal{I}_{n,d}^{\mathcal{P}}$, which implies that $\mathcal{I}_{n,d}^{\mathcal{P}}$ is closed, and it is clearly convex as well. Moreover, for every finite partition $\mathcal{P}$ of $\Delta$ the numbers of vertices and edges are finite, so to determine whether a tensor $\mathcal{A}$ belongs to $\mathcal{I}_{n,d}^{\mathcal{P}}$ it suffices to verify finitely many linear inequalities. Hence $\mathcal{I}_{n,d}^{\mathcal{P}}$ is described by a finite system of linear inequalities, which establishes the fact that $\mathcal{I}_{n,d}^{\mathcal{P}}$ is a polyhedral cone.
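Since membership in $\mathcal{I}_{n,d}^{\mathcal{P}}$ reduces to finitely many linear inequalities, it can be tested directly. A minimal sketch for the matrix case $d=2$ (illustrative names, not from the paper):

```python
import numpy as np

def in_inner_cone(A, vertices, edges):
    """Membership test for the polyhedral inner cone I^P (matrix case d = 2):
    one linear inequality per vertex and one per edge of the partition."""
    vert_ok = all(v @ A @ v >= 0 for v in vertices)
    edge_ok = all(u @ A @ v >= 0 for (u, v) in edges)
    return vert_ok and edge_ok

# Trivial partition P = {standard simplex in R^2}.
e1, e2 = np.eye(2)
vertices = [e1, e2]
edges = [(e1, e2)]

A_pos = np.array([[1.0, 0.5], [0.5, 1.0]])    # copositive, certified at level P
A_neg = np.array([[1.0, -2.0], [-2.0, 1.0]])  # not copositive: e1^T A e2 = -2 < 0
assert in_inner_cone(A_pos, vertices, edges)
assert not in_inner_cone(A_neg, vertices, edges)
```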
In the following lemma we discuss the containment relation between the inner approximations based on a partition $\mathcal{P}_1$ and its refinement $\mathcal{P}_2$. \begin{lemma}\label{lemma3} Let $\mathcal{P}_1$ and $\mathcal{P}_2$ be simplicial partitions of $\Delta$. If $\mathcal{P}_2$ is a refinement of $\mathcal{P}_1$, then $\mathcal{I}_{n,d}^{\mathcal{P}_1} \subseteq \mathcal{I}_{n,d}^{\mathcal{P}_2}$. \end{lemma} \noindent \Pf Let $\mathcal{P}_2$ be a refinement of $\mathcal{P}_1$; then for each sub-simplex $\Delta^{k_2}\in\mathcal{P}_2$ there exists a sub-simplex $\Delta^{k_1}\in\mathcal{P}_1$ such that $\Delta^{k_2}\subseteq\Delta^{k_1}$. Consider an arbitrary tensor $\mathcal{A}\in\mathcal{I}_{n,d}^{\mathcal{P}_1}$ and $\bx,\by \in \Delta^{k_2}$. Both $\bx$ and $\by$ can be expressed as convex combinations of the vertices $\bu^{k_1}_i\in\Delta^{k_1}$: \[ \bx=\sum_{i=1}^{n}{\lambda_i \bu^{k_1}_i}~~\text{where}~~\sum_{i=1}^{n}\lambda_i=1, ~~\lambda_i\ge0, \] \[\by=\sum_{i=1}^{n}{\theta_i \bu^{k_1}_i}~~\text{where}~~\sum_{i=1}^{n}\theta_i=1,~~\theta_i\ge0. \] Since for each pair $\bu^{k_1}_i,\bu^{k_1}_j$ of vertices of $\Delta^{k_1}$ we have $\big\langle \mathcal{A},\mathcal{T}_{d}{(\bu^{k_1}_i,\bu^{k_1}_j)} \big\rangle \ge 0$, it follows that \begin{small} \begin{align*} f_{\mathcal{A}}(\bx,\by)&=\bigg\langle \mathcal{A} \ \ , \ \ \underset{\alpha\text{-times}}{\underbrace{\bx\otimes \cdots\otimes \bx }} \ \otimes \ \underset{({d-\alpha})\text{-times}}{\underbrace{\by\otimes \cdots\otimes \by}} \bigg\rangle \\ &=\sum_{i_1, \ldots, i_\alpha =1}^n \ \sum_{j_1, \ldots, j_{d-\alpha}=1}^n {\lambda_{i_1} \cdots \lambda_{i_\alpha}}\,{\theta_{j_1} \cdots \theta_{j_{d-\alpha}}} \bigg\langle \mathcal{A}, \bu^{k_1}_{i_1}\otimes \cdots\otimes \bu^{k_1}_{i_\alpha}\otimes \bu^{k_1}_{j_1}\otimes \cdots\otimes \bu^{k_1}_{j_{d-\alpha}} \bigg\rangle \\ & \ge \ \ \ 0 \end{align*} \end{small} Hence, $\mathcal{A}\in\mathcal{I}_{n,d}^{\mathcal{P}_2}$, which further implies that,
$\mathcal{I}_{n,d}^{\mathcal{P}_1} \subseteq \mathcal{I}_{n,d}^{\mathcal{P}_2}$. \eop \\ \noindent A sequence $\{\mathcal{P}_k\}$ of simplicial partitions yields a system of polyhedral inner approximations which approximates the copositive cone precisely. \begin{theorem}\label{theorem3} Let $\{\mathcal{P}_k\}$ be a sequence of simplicial partitions of $\Delta$ with $\delta(\mathcal{P}_k)\rightarrow0$. Then we have: \[int(\mathcal{C}_{n,d})\subset\underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_k} = \mathcal{C}_{n,d} \] \end{theorem} \noindent \Pf Let $\mathcal{A}\in int(\mathcal{C}_{n,d})$ be an arbitrary tensor; then $\mathcal{A}$ is strictly copositive. By Theorem \ref{theorem2} there exists a partition $\mathcal{P}_{k_0}$, $k_{0}\in\mathbb{N}$, such that $\mathcal{A}\in \mathcal{I}_{n,d}^{\mathcal{P}_{k_0}}$; therefore we have \[\mathcal{A}\in\underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_{k}}~~\implies~~ int(\mathcal{C}_{n,d})\subset\underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_{k}}.\] The inclusion can be strict: a tensor $\mathcal{A} \in \underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_{k}}$ may satisfy, for some partition $\mathcal{P}_{k}$, \[\bigg\langle \mathcal{A}, \mathcal{T}_{d}(\bv) \bigg\rangle = 0 ~~\text{for some}~~\bv\in V_{\mathcal{P}_{k}}, ~\text{in which case}~\mathcal{A} \notin int(\mathcal{C}_{n,d}). \] Moreover, by Lemma \ref{lemma2} we have $\mathcal{I}_{n,d}^{\mathcal{P}_{k}}\subset\mathcal{C}_{n,d}$ for all $k\in\mathbb{N}$, which implies $\underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_{k}}\subseteq\mathcal{C}_{n,d}$.\\ For an arbitrary $\mathcal{A} \in \mathcal{C}_{n,d}$ we have \begin{align*} &\bigg\langle \mathcal{A}, \mathcal{T}_{d}(\bx) \bigg\rangle \ge 0 ~~\text{for}~~\bx\in \Re^{n}_{+} \\ \implies & \|\bx\|_{1}^{d}\bigg\langle \mathcal{A} , \mathcal{T}_{d}\bigg(\frac{\bx}{\|\bx\|_{1}}\bigg) \bigg\rangle \ge 0 ~~\text{for}~~{\frac{\bx}{{\|\bx\|_{1}}}} \in \Delta
\\ \end{align*} Hence there exists a partition $\mathcal{P}_k$ of the simplex $\Delta$ such that \[f_{\mathcal{A}} \bigg(\frac{\bx}{{\|\bx\|_{1}}}\bigg) \ge 0 \ \ \text{for all} \ \frac{\bx}{{\|\bx\|_{1}}} \in V_{\mathcal{P}_k} ~\text{ and } f_{\mathcal{A}} \bigg(\frac{\bx}{{\|\bx\|_{1}}},\frac{\by}{{\|\by\|_{1}}}\bigg) \ge 0 \text{ for all } \bigg\lbrace \frac{\bx}{{\|\bx\|_{1}}},\frac{\by}{{\|\by\|_{1}}} \bigg\rbrace \in E_{\mathcal{P}_k}. \] Thus $\mathcal{A} \in \mathcal{I}_{n,d}^{\mathcal{P}_k}$ as well, and therefore \begin{align*} \underset{k\in\mathbb{N}}{\bigcup} \mathcal{I}_{n,d}^{\mathcal{P}_{k}} = \mathcal{C}_{n,d} \end{align*} \eop \subsection{Containment relations among $\mathcal{C}_{n,d}^{(r)}$ , $\mathcal{K}_{n,d}^{(r)}$ and $\mathcal{I}_{n,d}^{\mathcal{P}_r}$} We present the inclusion relations between the cones $\mathcal{C}_{n,d}^{(r)}$, $\mathcal{K}_{n,d}^{(r)}$ and $\mathcal{I}_{n,d}^{\mathcal{P}_r}$ as a proposition: \begin{proposition} Let $\mathcal{P}_r$ be a simplicial partition of the simplex $\Delta$. Then the $r^{th}$-level inner approximation hierarchies $\mathcal{C}_{n,d}^{(r)}$, $\mathcal{K}_{n,d}^{(r)}$ and $\mathcal{I}_{n,d}^{\mathcal{P}_r}$ for the copositive tensor cone $\mathcal{C}_{n,d}$ have the following inclusion relations: \begin{align*} \mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{K}_{n,d}^{(r)}~~\text{and}~~\mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{I}_{n,d}^{\mathcal{P}_r} ~~\text{for all}~~r \in \{0,1,2,\cdots\} \end{align*} \end{proposition} \noindent \Pf Let $ \mathcal{A} \in \mathcal{C}_{n,d}^{(r)}$ be an arbitrary tensor; then the associated polynomial $P^{(r)}(\by)$ in (\ref{multicoeff}) admits a sum-of-squares decomposition, thus $ \mathcal{A} \in \mathcal{K}_{n,d}^{(r)}$. Hence $\mathcal{C}_{n,d}^{(r)} \subseteq \mathcal{K}_{n,d}^{(r)}$ for all $r$.
\\ To show that the tensor $ \mathcal{A}$ belongs to the $r^{th}$-level cone $\mathcal{I}_{n,d}^{\mathcal{P}_r}$ as well, we take an arbitrary $\bv \in V_{\mathcal{P}_r}$, which implies that $(r+d)\bv \in \mathbb{I}^n(r+d)$. Therefore, for any pair of vertices $\bu,\bv \in V_{\mathcal{P}_r}$, writing $\bx=(r+d)\bu$ and $\by=(r+d)\bv$, we have \begin{align*} \bigg\langle \mathcal{A} \ , \ \mathcal{T}_{d}(\bu,\bv) \bigg\rangle &=\bigg\langle \mathcal{A},\mathcal{T}_{d}\bigg(\frac{\bx}{r+d},\frac{\by}{r+d}\bigg) \bigg\rangle \\ &=\frac{1}{(r+d)^d} \bigg\langle \mathcal{A},\mathcal{T}_{d}(\bx,\by) \bigg\rangle \\ &=\frac{1}{(r+d)^d} \bigg(\bigg\langle \mathcal{A},\mathcal{T}_{d}(\bx) \bigg\rangle + \bigg\langle \mathcal{A},\mathcal{T}_{d}(\by) \bigg\rangle- \bigg\langle \mathcal{A},\mathcal{T}_{d}(\bx-\by) \bigg\rangle \bigg)\\ & \ge \frac{1}{(r+d)^d} \bigg(\bigg\langle \mathcal{A},Diag(\bx) \bigg\rangle + \bigg\langle \mathcal{A},Diag(\by) \bigg\rangle- \bigg\langle \mathcal{A},Diag(\bx-\by) \bigg\rangle\bigg)\\ &\ge \frac{1}{(r+d)^d} \sum_{i=1}^{n} \bigg(a_{(i)^d} x_i + a_{(i)^d} y_i- a_{(i)^d} (x_i-y_i) \bigg) \\ &\ge \frac{1}{(r+d)^d} \sum_{i=1}^{n} \big(2a_{(i)^d} y_i\big) \ge 0 \end{align*} Thus $\big\langle \mathcal{A} , \mathcal{T}_{d}(\bu,\bv) \big\rangle \ge 0$ for all $ \bu,\bv \in V_{\mathcal{P}_r}$, which in particular implies $\big\langle \mathcal{A} , \mathcal{T}_{d}(\bv) \big\rangle \ge 0$ for all $ \bv \in V_{\mathcal{P}_r}$. Hence $\mathcal{A} \in \mathcal{I}_{n,d}^{\mathcal{P}_r}$ for all $r$. \eop Note that in general neither $\mathcal{I}_{n,d}^{\mathcal{P}_r} \subseteq \mathcal{K}_{n,d}^{(r)}$ nor $\mathcal{K}_{n,d}^{(r)} \subseteq \mathcal{I}_{n,d}^{\mathcal{P}_r}$ holds, as the following examples show.
\begin{example} Let $\mathcal{A}\in \mathcal{K}_{3,6}^{(0)}$ be the tensor with entries given as follows: \begin{align*} a_{i_1i_2i_3i_4i_5i_6}= \begin{cases} \ \ 1 & \text{if} \ i_{j}=i_{k}~\forall~j,k \in \{1,2,3,4,5,6\}, \\ \ \ 2 & \text{if} \ i_{j}=1~\forall~j\in\{1,2,3\} ~\text{and}~ i_{k}=2~\forall~k \in \{4,5,6\}, \\ \ \ 2 & \text{if} \ i_{j}=1~\forall~j \in \{1,2,3\} ~\text{and}~ i_{k}=3~\forall~ k \in \{4,5,6\}, \\ -2 & \text{if} \ i_{j}=2~\forall~j \in \{1,2,3\} ~\text{and}~ i_{k}=3~\forall~ k \in \{4,5,6\}, \\ \ \ 0 & \text{otherwise}. \end{cases} \end{align*} Then the associated polynomial form, $f_{\mathcal{A}}(x,y,z)=x^6+y^6+z^6+2x^3y^3+2x^3z^3-2y^3z^3$, admits an SOS decomposition. However, since not all entries of $\mathcal{A}$ are non-negative, $ \mathcal{A} \notin \mathcal{I}_{3,6}^{\mathcal{P}_0}$. \end{example} \begin{example} Let $\mathcal{A} \in \mathcal{I}_{2,2}^{\mathcal{P}_1}$ be an arbitrary tensor; then the following conditions on the entries of $\mathcal{A}$ hold: \begin{align*} &a_{ii} \ge 0 ~~\text{for}~i\in \{1,2\},\\ &a_{11}+a_{12} \ge 0,\\ &a_{22}+a_{12} \ge 0. \end{align*} The associated polynomial of $\mathcal{A}$ is $f_\mathcal{A}(\bx)=a_{11}x_1^2+a_{22}x_2^2+2a_{12}x_1x_2$ for $\bx\in\Re^n_+$. Substituting ${\bx}={\by} \circ {\by}$ for $\by\in\Re^n$ leads to the polynomial \begin{align*} P^{(1)}(\by)&=(a_{11} y_{1}^4+a_{22} y_{2}^4+2a_{12}y_{1}^2 y_{2}^2) (y_{1}^2+y_{2}^2)\\ &=\begin{pmatrix} y_1^2\\ y_2^2\\ y_1y_2 \end{pmatrix} ^{T} \begin{pmatrix} a_{11}y_1^2 & 0 & a_{12}y_1y_2\\ 0 & a_{22}y_2^2 & a_{12}y_1y_2\\ a_{12}y_1y_2 & a_{12}y_1y_2 & a_{11}y_1^2+a_{22}y_2^2 \end{pmatrix} \begin{pmatrix} y_1^2\\ y_2^2\\ y_1y_2 \end{pmatrix} \\ &=V^{T}Q(y_1,y_2)V \end{align*} The matrix $Q(y_1,y_2)$ is not PSD: taking $y_1=y_2=1$ and the vector $z^T=(-10,-10,1)$, we get $z^T Q(1,1)z=101(a_{11}+a_{22})-40a_{12}$, which is negative for $a_{11}=a_{22}=\frac{1}{101}$ and $a_{12}=1$, namely $z^T Q(1,1)z=-38$.
Hence $P^{(1)}(\by)$ does not admit a sum-of-squares decomposition, and thus $\mathcal{A}\notin \mathcal{K}_{2,2}^{(1)}$. \end{example} \subsection{Outer Approximations for the Copositive Cone} In this section we discuss the outer approximations $ \mathcal{O}_{n,d}^\mathcal{P}$ and $\mathcal{O}_{n,d}^{(r)}$ for the copositive cone of tensors. These approximations contain the copositive cone $\mathcal{C}_{n,d}$. \subsubsection{Outer Approximations based on Simplicial Partition} For any simplicial partition $\mathcal{P}$ of $\Delta$, the cone \begin{align*} \mathcal{O}_{n,d}^\mathcal{P}=\bigg\lbrace\mathcal{A} \in \mathcal{S}_{n,d}: \big\langle \mathcal{A},\mathcal{T}_{d}(\bv) \big\rangle \ge 0 ~~\forall~\bv\in V_{\mathcal{P}} \bigg\rbrace \end{align*} approximates the copositive cone from outside. Indeed, for each tensor $\mathcal{A} \in \mathcal{C}_{n,d}$ the associated polynomial form satisfies $f_\mathcal{A}(\bx)=\big\langle \mathcal{A},\mathcal{T}_{d}(\bx) \big\rangle \ge~0$ for all $\bx\in\Re^n_+$; since the collection of all vertices satisfies $V_\mathcal{P}\subseteq\Re^n_+$, we have $\big \langle \mathcal{A}, \mathcal{T}_{d}(\bv) \big \rangle \ge 0$ for all $\bv\in V_{\mathcal{P}}$. Consequently, $\mathcal{A}\in \mathcal{O}_{n,d}^\mathcal{P}$, which implies $\mathcal{C}_{n,d} \subseteq \mathcal{O}_{n,d}^{\mathcal{P}}$. Moreover, if the partition is the trivial one, $\mathcal{P}=\{\Delta^{s}\}$, then $\mathcal{O}_{n,d}^{\{\Delta^{s}\}}$ consists of all tensors whose diagonal entries are non-negative; it is a well-known fact that the diagonal entries of copositive tensors are necessarily non-negative.\\ Clearly, the cone $\mathcal{O}_{n,d}^{\mathcal{P}}$ is closed and convex, and it is polyhedral: for any finite partition $\mathcal{P}$ of the simplex $\Delta$ the number of vertices in $V_{\mathcal{P}}$ is finite, so membership in the cone is determined by finitely many linear inequalities.
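A minimal sketch (illustrative, matrix case $d=2$; names are hypothetical) of how $\mathcal{O}_{n,d}^{\mathcal{P}}$ over-approximates the copositive cone, and how adding vertices under refinement can expose a non-copositive tensor:

```python
import numpy as np

def in_outer_cone(A, vertices):
    """Membership in O^P: only the vertex inequalities <A, T_d(v)> >= 0 (d = 2)."""
    return all(v @ A @ v >= 0 for v in vertices)

A = np.array([[1.0, -2.0], [-2.0, 1.0]])  # not copositive: (1,1) A (1,1)^T < 0
e1, e2 = np.eye(2)

# Coarse partition: only the vertices of the standard simplex are checked,
# so A passes and O^P strictly contains the copositive cone.
assert in_outer_cone(A, [e1, e2])

# After refinement, the midpoint (1/2, 1/2) enters V_P and exposes A.
mid = np.array([0.5, 0.5])
assert not in_outer_cone(A, [e1, e2, mid])
```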
If $\mathcal{P}_1$ and $\mathcal{P}_2$ are simplicial partitions of $\Delta$, the inclusion relation between the outer approximations based on these partitions is presented as a lemma, whose proof is analogous to [Lemma 8, \cite{Bundfuss2009}]; for the sake of completeness we give the proof. \begin{lemma}\label{lemma3'} For simplicial partitions $\mathcal{P}_1$ and $\mathcal{P}_2$ of $\Delta$, if $\mathcal{P}_2$ is a refinement of $\mathcal{P}_1$, then $\mathcal{O}_{n,d}^{\mathcal{P}_2} \subseteq \mathcal{O}_{n,d}^{\mathcal{P}_1}$. \end{lemma} \noindent \Pf Let $\mathcal{P}_2$ be a refinement of $\mathcal{P}_1$; then every vertex of $\mathcal{P}_1$ is also a vertex of $\mathcal{P}_2$, that is, $ V_{\mathcal{P}_1}\subseteq V_{\mathcal{P}_2}$. Therefore the set of inequalities defining $\mathcal{O}_{n,d}^{\mathcal{P}_1}$ is a subset of the set of inequalities defining $\mathcal{O}_{n,d}^{\mathcal{P}_2}$; thus, $\mathcal{O}_{n,d}^{\mathcal{P}_2} \subseteq \mathcal{O}_{n,d}^{\mathcal{P}_1}$. \eop The sequence $\{\mathcal{O}_{n,d}^{\mathcal{P}_k}\}$ of outer approximations converges to $\mathcal{C}_{n,d}$ as the diameter $\delta(\mathcal{P}_k)\rightarrow0$, as stated in the following theorem. \begin{theorem}\label{theorem4} For a sequence $\{\mathcal{P}_k\}$ of simplicial partitions of $\Delta$ with $\delta(\mathcal{P}_k)\rightarrow 0$, we have: \begin{align*} \mathcal{C}_{n,d}=\underset{k\in\mathbb{N}}{\bigcap} \mathcal{O}_{n,d}^{\mathcal{P}_k} \end{align*} \end{theorem} \noindent \Pf Since $\mathcal{C}_{n,d} \subseteq \mathcal{O}_{n,d}^{\mathcal{P}_k}$ for all $k\in\mathbb{N}$, we have $\mathcal{C}_{n,d} \subseteq \underset{k\in\mathbb{N}}{\bigcap}\mathcal{O}_{n,d}^{\mathcal{P}_k}$.
To establish the reverse inclusion, suppose on the contrary that $\mathcal{A}\notin\mathcal{C}_{n,d}$; then for some $\bv\in\Delta$ we have $\big\langle \mathcal{A}, \mathcal{T}_{d}(\bv) \big\rangle < 0 $. By continuity, there exists an $\varepsilon$-neighborhood $N_\varepsilon(\bv)$ such that \[\big\langle \mathcal{A}, \mathcal{T}_{d}(\bw) \big\rangle<0 ~~\forall~\bw\in N_\varepsilon(\bv).\] If the diameter satisfies $\delta(\mathcal{P}_k)<\varepsilon$, then there exist a sub-simplex $\Delta^{j}\in\mathcal{P}_k$ containing $\bv$ and a vertex $\bu\in V_{\mathcal{P}_k}\cap\Delta^{j}$ such that $\|\bu-\bv\|<\varepsilon$, i.e., $\bu\in N_\varepsilon(\bv)$, which implies \begin{align*} \big \langle \mathcal{A}, \mathcal{T}_{d}(\bu) \big \rangle <0, ~\text{and therefore}~ \ \mathcal{A}\notin \mathcal{O}_{n,d}^{\mathcal{P}_k}. \end{align*} Hence $\mathcal{A}\notin \underset{k\in\mathbb{N}}{\cap}\mathcal{O}_{n,d}^{\mathcal{P}_k}$, which implies that $\mathcal{C}_{n,d} \supseteq \underset{k\in\mathbb{N}}{\cap}\mathcal{O}_{n,d}^{\mathcal{P}_k}$. \eop \subsection{Rational Grid based Outer Approximations} The regular grid $\Delta_{n}^{(r)}$ of rational points on the simplex $\Delta$ for each $r\in \{0,1,2,\cdots\}$ is defined as follows: \[ \Delta_{n}^{(r)}:=\lbrace \bx\in\Delta:(r+2)\bx\in\mathbb{N}^n_0 \rbrace \] For each $r$ the grid $\Delta_{n}^{(r)}$ provides a finite discretization of the simplex $\Delta$ consisting of rational points.
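An illustrative enumeration of a rational grid on the standard simplex (not from the paper; the sketch uses the common convention $r\bx\in\mathbb{N}^n_0$, for which the binomial cardinality $\binom{n+r-1}{r}$ holds exactly):

```python
from itertools import product
from math import comb

def rational_grid(n, r):
    """Rational grid on the standard simplex: points x with r*x in N_0^n.

    Enumerates compositions (k_1, ..., k_n) of r into n non-negative
    integers and returns the normalized points x = k / r.
    """
    pts = []
    for k in product(range(r + 1), repeat=n):
        if sum(k) == r:
            pts.append(tuple(ki / r for ki in k))
    return pts

# Cardinality matches the binomial formula C(n + r - 1, r).
for n, r in [(2, 3), (3, 2), (3, 4)]:
    assert len(rational_grid(n, r)) == comb(n + r - 1, r)
```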
The cardinality of each grid $\Delta_{n}^{(r)}$ is given as follows: \[ |\Delta_{n}^{(r)}|={n+r-1 \choose r}\] For each $r\in \{0,1,2,\cdots\}$, let us define \[\delta_n^{(r)}:=\bigcup_{k=1}^{r}{\Delta_{n}^{(k)}}. \] By using the above-mentioned discretization, we define another hierarchy of outer polyhedral approximations $ \mathcal{O}_{n,d}^{(r)}$ for the copositive cone $\mathcal{C}_{n,d}$ as follows: \begin{align*} \mathcal{O}_{n,d}^{(r)}:=\big\lbrace\mathcal{A} \in \mathcal{S}_{n,d}:\big \langle \mathcal{A}, \mathcal{T}_{d}(\bv) \big \rangle\ge 0 ~~\forall~~\bv\in \delta_n^{(r)} \big\rbrace \end{align*} Clearly, each cone $\mathcal{O}_{n,d}^{(r)}$ is a proper cone. In the following theorem, we prove that the hierarchy of outer polyhedral approximations $\mathcal{O}_{n,d}^{(r)}$ converges to the cone of copositive tensors. \begin{theorem}\label{theorem5} The hierarchy of outer polyhedral approximations $\mathcal{O}_{n,d}^{(r)}$ contains the copositive cone $\mathcal{C}_{n,d}$ for each $r\in \{0,1,2,\cdots\}$, that is, $\mathcal{O}_{n,d}^{(0)}\supseteq \mathcal{O}_{n,d}^{(1)}\supseteq \cdots \supseteq \mathcal{C}_{n,d}$, with \[\mathcal{C}_{n,d}=\bigcap_{r\in\mathbb{N}} \mathcal{O}_{n,d}^{(r)}\] \end{theorem} \noindent \Pf Let $\mathcal{A} \in \mathcal{C}_{n,d}$ be an arbitrary tensor; then the associated polynomial form satisfies $f_\mathcal{A}(\bx) \ge 0$ for all $\bx \in \Re^n_+$, and since $\delta_n^{(r)} \subseteq \Re^n_+$, we get $f_\mathcal{A}(\bx) \ge 0$ for all $\bx\in \delta_n^{(r)}$. This implies $\mathcal{C}_{n,d}\subseteq\mathcal{O}_{n,d}^{(r)}$, and therefore \begin{align*} \mathcal{C}_{n,d} \subseteq \bigcap_{r\in\mathbb{N}} \mathcal{O}_{n,d}^{(r)}. \end{align*} For the inclusion $\bigcap_{r\in\mathbb{N}} \mathcal{O}_{n,d}^{(r)} \subseteq \mathcal{C}_{n,d}$, let us consider a tensor $ \mathcal{A} \in \mathcal{S}_{n,d}$ with $ \mathcal{A} \notin \mathcal{C}_{n,d}$; then there exists $\bu^{(r)} \in \Delta$ such that,
$\big\langle \mathcal{A},\mathcal{T}_{d}(\bu^{(r)}) \big\rangle < 0$. Perturb those components of $\bu^{(r)}$ which are zero by a small positive value $\theta>0$, so that $\bu^{(r)}>0$. By the continuity of polynomials, there exists an $\varepsilon_r$-neighborhood $N_{\varepsilon_r}(\bu^{(r)})$ such that $\big\langle \mathcal{A},\mathcal{T}_{d}(\bv)\big \rangle < 0$ for all $\bv\in N_{\varepsilon_r}(\bu^{(r)})$. Let $\varepsilon_m=\min\{\varepsilon_r,\min_{i=1,2,\cdots,n}u_i \}$. As the set of rationals is dense in the set of reals, there exists a vector $\mathbf{w} \in \mathbb{Q}^n_+$ with $\|\bu^{(r)}-\mathbf{w}\|<\varepsilon_m$, which implies $\mathbf{w}>0.$ Hence there exists some positive integer $m$ such that $\bx=\frac{\mathbf{w}}{\|\mathbf{w}\|_1} \in\delta_n^{(r)}$ for all $r \ge m$. Since $\bx\in N_{\varepsilon_r}(\bu^{(r)})$, we have $\big \langle \mathcal{A},\mathcal{T}_{d}(\bx)\big \rangle < 0$; thus $\mathcal{A}\notin \mathcal{O}_{n,d}^{(r)}$ for all $ r\ge m$, which further implies that $\mathcal{A}\notin \underset{r\in\mathbb{N}}{\bigcap} \mathcal{O}_{n,d}^{(r)}$. \eop \\ Finally, by using the classical partitioning of the simplex $\Delta$ through bisection of the longest edge, the collection of all vertices in the partition $\mathcal{P}_r$ of the simplex $\Delta$ is always contained in $\delta_n^{(r)}$ for each $r$. We present the containment relation between the outer approximations in the following proposition.
\begin{proposition} For the simplicial partition $\mathcal{P}_r$ obtained through bisection along the longest edge and the rational discretization $\delta_n^{(r)}$ of the simplex $\Delta$, we have $\mathcal{O}_{n,d}^{\mathcal{P}_r} \subseteq \mathcal{O}_{n,d}^{(r)}$ for each $r \in \{0,1,2,\cdots\}$. \end{proposition} \noindent \Pf Let $\mathcal{P}_r$ be the classical partition of the simplex $\Delta$ through bisection along the longest edge, and let $V_{\mathcal{P}_r}$ be the collection of all vertices in $\mathcal{P}_r$. Then for each $r$ we immediately have $V_{\mathcal{P}_r} \subseteq \delta_n^{(r)}$, which implies that \begin{align*} \mathcal{O}_{n,d}^{\mathcal{P}_r} \subseteq \mathcal{O}_{n,d}^{(r)} ~~\forall ~ r \in \{0,1,2,\cdots \} \end{align*} \eop \section{Conclusion} In this article several properties of copositive tensor cones are proved. Moreover, a necessary and sufficient condition under which the cones $\cC_{n,d}$ and $\cS^{+}_{n,d}$ coincide has been established. The calculation of the coefficients of higher-degree polynomials is critical in the analysis of approximation hierarchies based on polynomial conditions; in this regard, a procedure to find the coefficients of the polynomial form is obtained, along with the representation of the inner approximation cone $\cC^{(r)}_{n,d}$ for the copositive cone $\cC_{n,d}$. More importantly, several inner and outer approximation hierarchies, along with their containment relations, are also given. In future work, we aim to utilize these hierarchies for approximating polynomial optimization, especially to recover the approximation results for polynomial optimization over the simplex obtained by De Klerk and co-authors \cite{DEKLERK2006210}, \cite{Monique2013}, \cite{DeKlerkError}, \cite{Klerk2017OnTC}. \footnotesize
\section{Introduction \label{SEC-INTRO}} Efficient computation with the current and \fadd{upcoming (both exascale and post-Moore era) supercomputers} can be realized by application-algorithm-architecture co-design~\fadd{\cite{Shalf11,Dosanjh14,POST-K,CoDEx,EuroExa}}, in which various numerical algorithms should be prepared and the optimal one should be chosen according to the target application, architecture and problem. For example, an algorithm designed to minimize the floating-point operation count can be the fastest for some combination of application and architecture, while another algorithm designed to minimize communications (e.g. the number of communications or the amount of data moved) can be the fastest in another situation. The present paper proposes a middleware approach, so as to choose the optimal set of numerical routines for the target application and architecture. The approach is shown schematically in Fig.~\ref{FIG_CONCEPT} and the crucial concept is called \lq hybrid solver'. In general, a numerical problem solver in simulations is complicated and consists of sequential stages, as Stages I, II, III,... in Fig.~\ref{FIG_CONCEPT}. Here the routines of $P, Q, R$ are considered for Stage I and those of $S, T, U$ are for Stage II. The routines in a stage \yadd{are} equivalent in their input and output quantities but use different algorithms. The routines are assumed to be included in ScaLAPACK and other parallel libraries. Consequently, they show different performance characteristics and the optimal routine depends not only on the applications denoted as $A, B, C$ but also on the architectures denoted as $X, Y, Z$. Our middleware assists the user to choose the optimal routine among different libraries for each stage and such a workflow is called \lq hybrid workflow'. The present approach for hybrid workflow is realized by the following functions\yyyadd{.} First, it provides a unified interface to the solver routines. 
In general, different solvers have different user interfaces, such as the matrix distribution scheme, so the user is often required to rewrite the application program to switch from one solver to another. Our middleware absorbs this difference and frees the user from this troublesome task. Second, it outputs detailed performance data such as the elapsed time of each routine composing the solver. Such data will be useful for detecting the performance bottleneck and finding its causes, as will be illustrated in this paper. In addition, we also \yadd{focus} on a performance prediction function, which predicts the elapsed time of the solver routines from existing benchmark data prior to actual computations. As preliminary research, such a prediction method is constructed with Bayesian inference in this paper. Performance prediction will be valuable for choosing an appropriate job class in the case of batch execution, or choosing an optimal number of computational nodes that can be used efficiently without performance saturation. Moreover, performance prediction \yadd{will form the basis of an auto-tuning function planned for the future version, which obviates the need to} care about the job class and detailed calculation conditions. In this way, our middleware is expected to enhance the usability of existing solver routines and allow the users to concentrate on computational science itself. Here we focus on a middleware for the generalized eigenvalue problem (GEP) with real-symmetric coefficient matrices, since GEP forms the numerical foundation of electronic state calculations. Some of the authors developed a prototype of such middleware on the K computer in 2015-2016 \cite{IMACHI-JIT2016,HOSHI2016SC16}. After that, the code appeared on GitHub as EigenKernel ver.~2017 \cite{EIGENKERNEL-URL} under the MIT license. It was confirmed that EigenKernel ver.~2017 works well also on Oakleaf-FX10 and Xeon-based supercomputers \cite{IMACHI-JIT2016}.
In 2018, a new version of EigenKernel was developed and appeared on the developer branch on GitHub. This version can run on \fadd{the} Oakforest-PACS \fdel{(OFP)}\ffadd{supercomputer}, a new supercomputer \fdel{using}\fadd{equipped with} Intel Xeon Phi many-core processor\fadd{s}. In this paper, we focus on this version. A related project is ELSI (ELectronic Structure Infrastructure), which provides interfaces to various numerical methods to solve or circumvent GEP in electronic structure calculations \cite{ELSI-URL,ELSI-PAPER}. The present approach limits the discussion to the GEP solver and enables the user to construct a {\it hybrid} workflow which combines routines from different libraries, as shown in Fig.~\ref{FIG_CONCEPT}, while ELSI allows the user to choose a library only {\it as a whole}. The present approach of the hybrid solver will add more flexibility and increase the chance of achieving higher performance. This paper presents two topics. First, we show the performance data of various GEP solvers on \fdel{OFP}\fadd{Oakforest-PACS} obtained using EigenKernel. Such data will be of interest on its own since \fdel{OFP}\fadd{Oakforest-PACS} is a new machine and few performance results of \ffadd{dense} matrix solvers on it have been reported; \ffadd{a stencil-based application \cite{Hirokawa18} and a communication-avoiding iterative solver for a sparse linear system \cite{Idomura17} were evaluated on Oakforest-PACS, but their characteristics are totally different from those of dense matrix solvers such as GEP solvers.} Furthermore, we point out that one of the solvers has a severe scalability problem and investigate its cause with the help of the detailed performance data output by EigenKernel. This illustrates how EigenKernel can be used effectively for performance analysis. Second, we describe the new performance prediction method implemented as a Python program.
It uses Bayesian inference and predicts the execution time of a specified GEP solver as a function of the number of computational nodes. We present the details of the mathematical performance models used in it and give several examples of performance prediction results. It is to be noted that our performance prediction method can be used not only for interpolation but also for extrapolation, that is, for predicting the execution time at a larger number of nodes from the results at a smaller number of nodes. There is a strong need for such prediction among application users. This paper is organized as follows. The algorithm and features of EigenKernel are described in Sec.~\ref{SEC-EIGENKERNEL}. Sec.~\ref{SEC-PERFORMANCE-ANA} is devoted to the scalability analysis of various GEP solvers on Oakforest-PACS, which was made possible with the use of EigenKernel. Sec.~\ref{SEC-PREDICTION} explains our new performance prediction method, focusing on the performance models used in it and the performance prediction results in the case of extrapolation. Sec.~\ref{DISCUSSION} discusses our performance prediction method, comparing it with some existing studies. Finally, Sec.~\ref{SUMMARY} provides a summary of this study and some future outlook. \section{EigenKernel \label{SEC-EIGENKERNEL}} EigenKernel is a middleware for GEP that enables the user to use optimal solver routines according to the problem specification (matrix size, etc.) and the target architecture. In this section, we first review the algorithm for solving GEP and describe the solver routines adopted by EigenKernel. Features of EigenKernel are also discussed. \subsection{Generalized eigenvalue problem and its solution} We consider the generalized eigenvalue problem \begin{eqnarray} A \bm{y}_k = \lambda_k B \bm{y}_k, \label{EQ-GEP-ORG} \end{eqnarray} where the matrices $A$ and $B$ are $M \times M$ real-symmetric ones and $B$ is positive definite ($B \ne I$).
The $k$-th eigenvalue or eigenvector is denoted as $\lambda_k$ or $\bm{y}_k$, respectively ($k=1,2,...,M$). The algorithm to solve Eq.~(\ref{EQ-GEP-ORG}) proceeds as follows. First, the Cholesky decomposition of $B$ is computed, producing an upper triangular matrix $U$ that satisfies \begin{eqnarray} B = U^{\rm T} U. \label{EQ-CHOL} \end{eqnarray} Then the problem is reduced to a standard eigenvalue problem (SEP) \begin{eqnarray} A' \bm{z}_k = \lambda_k \bm{z}_k \label{EQ-SEP-ORG} \end{eqnarray} with the real-symmetric matrix \begin{eqnarray} A' = U^{\rm -T} A U^{-1}. \label{EQ-GEN-MAT-A2} \end{eqnarray} When the SEP of Eq.~(\ref{EQ-SEP-ORG}) is solved, the eigenvector of the GEP is obtained by \begin{eqnarray} \bm{y}_k = U^{-1} \bm{z}_k. \label{EQ-BACK} \end{eqnarray} The above explanation indicates that the whole solver procedure can be decomposed into two parts: (I) the SEP solver of Eq.~(\ref{EQ-SEP-ORG}) and (II) the \lq reducer', i.e., the reduction procedure between GEP and SEP by Eqs.~(\ref{EQ-CHOL}), (\ref{EQ-GEN-MAT-A2}), and (\ref{EQ-BACK}). \begin{figure*} \includegraphics[width=0.70\textwidth]{fig_workflow.eps} \caption{Possible workflows for GEP solver. } \label{FIG-WORKFLOW} \end{figure*} \subsection{GEP Solvers and hybrid workflows \label{SEC-GEP-HYB}} EigenKernel builds upon three parallel libraries for GEP: ScaLAPACK \cite{SCALAPACK}, ELPA \cite{ELPAWEB} and EigenExa \cite{EigenExaWeb}. Reflecting the structure of the GEP algorithm stated above, all of the GEP solvers from these libraries consist of two routines, namely, the SEP solver and the reducer. EigenKernel allows the user to select the SEP solver from one library and the reducer from another library, by providing appropriate data format/distribution conversion routines. We call the combination of an SEP solver and a reducer a {\it hybrid workflow}, or simply a {\it workflow}.
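The reduction procedure of Eqs.~(\ref{EQ-CHOL})--(\ref{EQ-BACK}) can be illustrated by a small serial sketch in NumPy/SciPy. This is not the distributed implementation used by the libraries above but a minimal dense example of the same algebra; the function name \verb|solve_gep_via_sep| is ours, introduced for illustration only.

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

def solve_gep_via_sep(A, B):
    """Solve A y = lambda B y by reduction to a standard eigenvalue problem.

    Follows the reducer algebra: Cholesky B = U^T U, form A' = U^{-T} A U^{-1},
    solve the SEP for A', then back-transform y = U^{-1} z.
    """
    U = cholesky(B, lower=False)                  # B = U^T U, U upper triangular
    # A U^{-1} via a triangular solve (no explicit inverse is formed)
    AUinv = solve_triangular(U, A.T, trans='T').T
    Ap = solve_triangular(U, AUinv, trans='T')    # A' = U^{-T} (A U^{-1})
    lam, Z = np.linalg.eigh(Ap)                   # SEP: A' z = lambda z
    Y = solve_triangular(U, Z)                    # y = U^{-1} z
    return lam, Y

# Small residual check on a random symmetric A and positive definite B
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
B = rng.standard_normal((5, 5)); B = B @ B.T + 5.0 * np.eye(5)
lam, Y = solve_gep_via_sep(A, B)
print(np.allclose(A @ Y, B @ Y @ np.diag(lam)))
```

The triangular solves mirror the distributed pdtrsm-based reduction discussed later, while the ELPA-style reducer instead forms $U^{-1}$ explicitly and uses matrix multiplication.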
Hybrid workflows enable the user to attain maximum performance by choosing the optimal SEP solver and reducer independently. Among the three libraries adopted by EigenKernel, ScaLAPACK is the {\it de facto} standard parallel numerical library. However, it was developed mainly in the 1990s, and thus some of its routines show severe bottlenecks on current supercomputers. The novel solver libraries ELPA and EigenExa were proposed to overcome these bottlenecks in eigenvalue problems. EigenKernel v.2017 was developed mainly in 2015-2016 to realize hybrid workflows among the three libraries. The ELPA code was developed in Europe under tight collaboration between computer scientists and material science researchers, and its main target application is FHI-aims (Fritz Haber Institute {\it ab initio} molecular simulations package) \cite{FHI-AIMS}, a famous electronic state calculation code. The EigenExa code, on the other hand, was developed at RIKEN in Japan. It is an important fact that the ELPA code has routines optimized for x86, IBM BlueGene, and AMD architectures \cite{ELPA}, while the EigenExa code was developed to be optimal mainly on the K computer. The above fact motivates us to develop hybrid solver workflows so that we can achieve optimal performance for any problem on any architecture. EigenKernel supports only limited versions of ELPA and EigenExa, since an update of ELPA or EigenExa sometimes requires us to modify the interface routine without backward compatibility. EigenKernel v.2017 supports ELPA 2014.06.001 and EigenExa 2.3c. In 2018, EigenKernel was updated in the developer branch on GitHub and can run on Oakforest-PACS. The benchmark on Oakforest-PACS in this paper was carried out with the code with commit ID 373fb83, which appeared on GitHub on Feb 28, 2018, except where indicated. This code is called the \lq current' code hereafter and supports ELPA 2017.05.003 and EigenExa 2.4p1.
Figure \ref{FIG-WORKFLOW} shows the possible workflows in EigenKernel. The reducer can be chosen from two ScaLAPACK routines and the ELPA-style routine; the difference between them is discussed later in this paper. The SEP solver for Eq.~(\ref{EQ-SEP-ORG}) can be chosen from five routines: the ScaLAPACK routine denoted as ScaLAPACK, two ELPA routines denoted as ELPA1 and ELPA2, and two EigenExa routines denoted as Eigen\_s and Eigen\_sx. The ELPA1 and Eigen\_s routines are based on the conventional tridiagonalization algorithm like the ScaLAPACK routine but differ in their implementations. The ELPA2 and Eigen\_sx routines are based on non-conventional algorithms for modern architectures. Details of these algorithms can be found in the references (see \cite{ELPA11,ELPA,ELPAWEB} for ELPA and \cite{Imamura11,EIGENEXA,FukayaPDSEC15,EigenExaWeb} for EigenExa). EigenKernel focuses on the eight solver workflows for GEP listed as $A, A2, B, C, D, E, F, G$ in Table \ref{TABLE-WORKFLOW}. The algorithms of the workflows in Table \ref{TABLE-WORKFLOW} are explained in our previous paper \cite{IMACHI-JIT2016}, except for the workflow $A2$. The workflow $A2$ is quite similar to $A$; the only difference is that the ScaLAPACK routine pdsyngst, one of the reducer routines, is used in the workflow $A2$ instead of pdsygst in the workflow $A$.
The pdsygst routine is a distributed parallel version of the dsygst routine in LAPACK. This routine repeatedly calls the triangular solver, namely pdtrsm, with a few right-hand sides, and this part often becomes a serious performance bottleneck, as discussed later in this paper, owing to its difficulty of parallelization. The pdsyngst routine is an improved routine that employs the rank-2k update instead of pdtrsm in pdsygst. Since the rank-2k update is more suitable for parallelization, pdsyngst is expected to outperform pdsygst. We note that pdsyngst requires more working space (memory) than pdsygst and that pdsyngst only supports lower triangular matrices; if these requirements are not satisfied, pdsygst is called instead of pdsyngst. For more details of the differences between pdsygst and pdsyngst, refer to Refs.~\cite{SEARS1998,POULSON2013}. All the workflows except $A2$ are supported in the \lq current' code, while the workflow $A2$ was added to EigenKernel very recently in a developer version. The workflow $A2$ and other workflows with pdsyngst will appear in a future version. It should be noted that all the $3 \times 5$ combinations in Fig.~\ref{FIG-WORKFLOW} are possible in principle, but some of them have not yet been implemented in the code, owing to the limited human resources for programming.
\begin{table} \caption{Available workflows for GEP solver in EigenKernel.} \label{TABLE-WORKFLOW} \begin{tabular}{cll} \hline\noalign{\smallskip} Workflow & SEP solver & Reducer \\ \noalign{\smallskip}\hline\noalign{\smallskip} A & ScaLAPACK & ScaLAPACK (pdsygst) \\ A2 & ScaLAPACK & ScaLAPACK (pdsyngst) \\ B & Eigen\_sx & ScaLAPACK (pdsygst) \\ C & ScaLAPACK & ELPA \\ D & ELPA2 & ELPA \\ E & ELPA1 & ELPA \\ F & Eigen\_s & ELPA \\ G & Eigen\_sx & ELPA \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Features of EigenKernel} As stated in the Introduction, EigenKernel provides basic functions to assist the user in using the optimal workflow for GEP. First, it provides a unified interface to the GEP solvers. When the SEP solver and the reducer are chosen from different libraries, the conversion of data format and distribution is also performed automatically. Second, it outputs detailed performance data, such as the elapsed times of the internal routines of the SEP solver and reducer, for performance analysis. The data file is written in JSON (JavaScript Object Notation) format. This data file is used by the performance prediction tool to be discussed in Sec.~\ref{SEC-PREDICTION}. In addition, EigenKernel has further features to satisfy the needs of application researchers: (I) It is possible to build EigenKernel only with ScaLAPACK. This is because there are supercomputer systems in which ELPA or EigenExa is not installed. (II) The package contains a mini-application called EigenKernel-app, a stand-alone application that reads the matrix data from a file and calls EigenKernel to solve the GEP. This mini-application can be used for real research, as in Ref.~\cite{HOSHI2016SC16}, if the matrix data are prepared as files in the Matrix Market format.
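As a sketch of how such a JSON performance log can be consumed, the following snippet locates the dominant routine in one record. The field names here are hypothetical and do not necessarily match EigenKernel's actual output schema; the timing values are the measured $P=256$ totals from Table~\ref{TABLE-DATA-KEI}.

```python
import json

# Hypothetical EigenKernel-style JSON record; the actual schema may differ.
# The timings are the measured P = 256 values (VCNT22500, workflow A).
log_text = """
{"num_nodes": 256,
 "routines": {"pdsytrd": 21.325, "pdsygst": 20.509, "pdstedc": 5.8159,
              "pdormtr": 3.4131, "pdpotrf": 2.2474, "rest": 9.7189}}
"""

def dominant_routine(log_json):
    """Return the routine with the largest elapsed time and its share of the total."""
    times = json.loads(log_json)["routines"]
    total = sum(times.values())
    name = max(times, key=times.get)
    return name, times[name] / total

name, share = dominant_routine(log_text)
print(f"{name}: {100 * share:.1f}% of the solver time")
```

This kind of post-processing is exactly what the JSON output is intended to enable, without the user inserting timers manually.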
It is noted that there is another reducer routine, called the EigenExa-style reducer, which appears in our previous paper \cite{IMACHI-JIT2016} but is no longer supported by EigenKernel. This is mainly because the code (KMATH\_EIGEN\_GEV) \cite{EIGENGEV} requires EigenExa but is not compatible with EigenExa 2.4p1. Since this reducer uses the eigendecomposition of the matrix $B$, instead of the Cholesky decomposition of Eq.~(\ref{EQ-CHOL}), its elapsed time is always larger than that of the SEP solver. Such a reducer routine is not efficient, at least judging from the benchmark data of the SEP solvers reported in the previous paper \cite{IMACHI-JIT2016} and in this paper. \section{Scalability analysis on Oakforest-PACS \label{SEC-PERFORMANCE-ANA}} In this section, we demonstrate how EigenKernel can be used for performance analysis. We first show the benchmark data of various GEP workflows on Oakforest-PACS obtained using EigenKernel. Then we analyze the performance bottleneck found in one of the workflows with the help of the detailed performance data output by EigenKernel. \subsection{Benchmark data for different workflows \label{SEC-BENCH-OFP-DIFF-WORKFLOW}} The benchmark test was carried out on Oakforest-PACS to compare the elapsed times among the workflows. Oakforest-PACS is a massively parallel supercomputer operated by the Joint Center for Advanced High Performance Computing (JCAHPC)~\cite{JCAHPC}. It consists of $P_{\rm max} \equiv$ 8,208 computational nodes connected by the Intel Omni-Path network. Each node has an Intel Xeon Phi 7250 many-core processor with 3 TFLOPS of peak performance. Thus, the aggregate peak performance of the system is more than 25 PFLOPS.
In EigenKernel, the MPI/OpenMP hybrid parallelism is used and the number of used nodes is denoted as $P$. The number of MPI processes per node or that of OpenMP threads per node is denoted as $n_{\rm MPI/node}$ or $n_{\rm OMP/node}$, respectively. The present benchmark test was carried out with $(n_{\rm MPI/node}, n_{\rm OMP/node}) =(1,64)$. The present benchmark is limited to runs within the regular job classes, and the maximum number of nodes in the benchmark is $P = P_{\rm quarter} \equiv 2,048$, a quarter of the whole system, because a job with $P_{\rm quarter}$ nodes is the largest resource available in the regular job classes. A benchmark with up to the full system $(P_{\rm quarter} < P \le P_{\rm max})$ is beyond the regular job classes and is planned in the near future. The test numerical problem is \lq VCNT90000', which appears in the ELSES matrix library \cite{ELSESMATRIX}. The matrix size of the problem is $M=90,000$. The problem comes from the simulation of a vibrating carbon nanotube (VCNT) calculated by ELSES \cite{ELSES,ELSESWEB}, a quantum nanomaterial simulator. The matrices $A$ and $B$ in Eq.~(\ref{EQ-GEP-ORG}) were generated with an {\it ab initio}-based modeled (tight-binding) electronic-state theory \cite{CERDA}. The calculations were carried out with the workflows $A, A2, D, E, F$ and $G$. The results of the workflows $B$ and $C$ can be estimated from those of the other workflows, since the SEP solver and the reducer in these two workflows appear among the other workflows. This estimation implies that the two workflows are not efficient. \begin{figure*} \includegraphics[width=0.99\textwidth]{fig_bench_GEP_SEP.eps} \caption{ Benchmark on Oakforest-PACS. The matrix size of the problem is $M=90,000$.
The computation was carried out with $P=16, 32, 64, 128, 256, 512, 1024, 2048$ nodes in the workflows $A$ (circle), $A2$ (square), $D$ (filled diamond), $E$ (triangle), $F$ (cross), $G$ (open diamond). The elapsed times (a) for the whole GEP solver, (b) for the SEP solver and (c) for the reducer are plotted. } \label{FIG-SCALAPACK-BENCH-DETAIL} \end{figure*} The benchmark data is summarized in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}. The total elapsed time $T(P)$ is plotted in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}(a), and that of the SEP solver $T_{\rm SEP}(P)$ or the reducer $T_{\rm red}(P) \equiv T(P) - T_{\rm SEP}(P)$ is plotted in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}(b) or (c), respectively. Several points are discussed here: (I) The optimal workflow seems to be $D$ or $F$, at least within the benchmark data range $(16 \le P \le P_{\rm quarter}=2048)$. (II) All the workflows except the workflow $A$ show a strong scaling property in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}(a), because the elapsed time decreases with the number of nodes $P$. Figs.~\ref{FIG-SCALAPACK-BENCH-DETAIL}(b) and (c) indicate that the bottleneck of the workflow $A$ stems not from the SEP solver but from the reducer. The bottleneck disappears in the workflow $A2$, in which the pdsygst routine is replaced by pdsyngst. (III) The ELPA-style reducer is used in the workflows $C, D, E, F, G$. Among them, the workflows $F$ and $G$ are hybrid workflows between ELPA and EigenExa and require the conversion process of distributed data, since the distributed data format differs between ELPA and EigenExa \cite{IMACHI-JIT2016}. The data conversion process does not dominate the elapsed time, as discussed in Ref.~\cite{IMACHI-JIT2016}. (IV) We found that the same routine gave different elapsed times in the present benchmark.
For example, the ELPA-style reducer is used both in the workflows $D$ and $E$, but the elapsed time $T_{\rm red}$ with $P=256$ nodes is significantly different; the time is $T_{\rm red}$ = 182 s or 137 s in the workflow $D$ or $E$, respectively. The difference stems from the time for the transformation of eigenvectors by Eq.~(\ref{EQ-BACK}), since that time is $T_{\rm trans-vec}$ = 69 s or 23 s in the workflow $D$ or $E$, respectively. The same phenomenon was also observed in the workflows $F$ and $G$ with $P$=64 nodes, since $(T_{\rm red},T_{\rm trans-vec})$=(277 s, 57 s) or (334 s, 114 s) in the workflow $F$ or $G$, respectively. Here we should remember that even if we use the same number of nodes $P$, the parallel computation time $T=T(P)$ can differ from one run to another, since the geometry of the used nodes may not be equivalent. Therefore, benchmark tests with multiple runs for the same number of used nodes should be carried out in the near future. (V) The algorithm has several tuning parameters, such as $n_{\rm MPI/node}$ and $n_{\rm OMP/node}$, though these parameters are fixed in the present benchmark. A more extensive benchmark with different values of the tuning parameters is one possible area of future investigation for faster computations. \subsection{Detailed performance analysis of the pure ScaLAPACK workflows} Detailed performance data are shown in Fig.~\ref{FIG-DATA-OFP-SCALAPACK} for the two pure ScaLAPACK workflows $A$ and $A2$. In the workflow $A$, the total elapsed time, denoted as $T$, is decomposed into six terms; five of them are those for the ScaLAPACK routines pdsytrd, pdsygst, pdstedc, pdormtr and pdpotrf. The elapsed times for these routines are denoted as $T^{\rm (pdsygst)}(P)$, $T^{\rm (pdsytrd)}(P)$, $T^{\rm (pdstedc)}(P)$, $T^{\rm (pdpotrf)}(P)$ and $T^{\rm (pdormtr)}(P)$, respectively.
The elapsed time for the rest part is defined as $T^{\rm (rest)} \equiv T- T^{\rm (pdsygst)} - T^{\rm (pdsytrd)} - T^{\rm (pdstedc)} - T^{\rm (pdpotrf)} - T^{\rm (pdormtr)}$. In the workflow $A2$, the same definitions are used, except that pdsygst is replaced by pdsyngst. These timing data are output by EigenKernel automatically in JSON format. Figure \ref{FIG-DATA-OFP-SCALAPACK} indicates that the performance bottleneck of the workflow $A$ is caused by the pdsygst routine, in which the reduced matrix $A'$ is generated by Eq.~(\ref{EQ-GEN-MAT-A2}). A possible cause of the low scalability of pdsygst is the algorithm used in it, which exploits the symmetry of the resulting matrix $A^{\prime}$ to reduce the computational cost \cite{WILKINSON}. Although this is optimal on sequential machines, it brings about some data dependency and can cause a scalability problem on massively parallel machines. The workflow $A2$ uses pdsyngst instead of pdsygst and improves the scalability, as expected from the last paragraph of Sec.~\ref{SEC-GEP-HYB}. We should remember that the ELPA-style reducer is the best among those benchmarked in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}(c), since the ELPA-style reducer forms the inverse matrix $U^{-1}$ explicitly and computes Eq.~(\ref{EQ-GEN-MAT-A2}) directly using matrix multiplication \cite{ELPA}. While this algorithm is computationally more expensive, it has a larger degree of parallelism and can be better suited for massively parallel machines. In principle, a performance analysis such as that given here could be done manually. However, it would require the user to insert a timer into every internal routine and output the measured data in some organized format. Since EigenKernel takes care of these kinds of troublesome tasks, it makes performance analysis easier for non-expert users.
In addition, since performance data obtained in practical computation is sometimes valuable for finding performance issues that rarely appear in the development process, this feature can contribute to the co-design of software. \begin{figure*} \includegraphics[width=0.95\textwidth]{fig_bench_ScaLAPACK.eps} \caption{ Benchmark on Oakforest-PACS with the workflows (a) $A$ and (b) $A2$. The matrix size of the problem is $M=90,000$. The computation was carried out with $P=16, 32, 64, 128, 256, 512, 1024, 2048$ nodes. The graph shows the elapsed times for the total GEP solver (filled circle), pdsytrd (square), pdsygst in the workflow $A$ or pdsyngst in the workflow $A2$ (diamond), pdstedc (cross), pdpotrf (plus), pdormtr (triangle) and the rest part (open circle). } \label{FIG-DATA-OFP-SCALAPACK} \end{figure*} \section{Performance prediction \label{SEC-PREDICTION}} \subsection{The concept} \begin{figure*} \includegraphics[width=0.5\textwidth]{fig_bench_PREDICT_SCHEMATIC.eps} \caption{Schematic figure of performance prediction, in which the elapsed time of a routine $T$ is written as a function of the number of processor nodes $P$ ($T \equiv T(P)$). The figure illustrates performance extrapolation, which gives a typical behavior with a minimum. } \label{FIG_PREDICT_SCHEMATIC} \end{figure*} This section proposes the use of Bayesian inference as a tool for performance prediction, in which the elapsed time is predicted from teacher data, i.e., existing benchmark data. The importance of performance modeling and prediction has long been recognized by library developers. In fact, in a classical paper published in 1996 \cite{Dackland96}, Dackland and K{\aa}gstr{\"o}m write, ``we suggest that any library of routines for scalable high performance computer systems should also include a corresponding library of performance models''. However, there have been few parallel matrix libraries equipped with performance models so far.
The performance prediction method to be described in this section will be incorporated in a future version of EigenKernel and will form one of the distinctive features of the middleware. Performance prediction can be used in a variety of ways. Supercomputer users are required to prepare a computation plan with an estimated elapsed time, but it is difficult to predict the elapsed time from hardware specifications, such as peak performance, memory and network bandwidths, and communication latency. Performance prediction realizes high usability, since it can predict the elapsed time without requiring extensive benchmark data. Moreover, performance prediction enables an auto-optimization (autotuning) function, which selects the optimal workflow in EigenKernel automatically, given the target machine and the problem size. Such high usability is crucial, for example, in electronic state calculation codes, because the codes are used not only by theorists but also by experimentalists and industrial researchers who are not familiar with HPC techniques. The present paper focuses not only on performance interpolation but also on extrapolation, which predicts the elapsed time at a larger number of nodes from the data at a smaller number of nodes. This is shown schematically in Fig.~\ref{FIG_PREDICT_SCHEMATIC}. An important issue in the extrapolation is to predict the speed-up \lq saturation', or the phenomenon that the elapsed time may have a minimum, as shown in Fig.~\ref{FIG_PREDICT_SCHEMATIC}. The extrapolation technique is important, since we have only a few opportunities to use the ultimately large computer resources, like the whole system of the K computer or Oakforest-PACS. A reliable extrapolation technique will encourage real researchers to use large resources.
\subsection{Performance models} Performance prediction is realized when a reliable performance model, properly reflecting the algorithm and architecture, is constructed for each routine. The present paper, as early-stage research, proposes three simple models for the elapsed time of the $j$-th routine $T^{(j)}$ as a function of the number of nodes (the degrees of freedom in MPI parallelism) $P$. The first proposed model is called the generic three-parameter model and is expressed as \begin{eqnarray} T^{(j)}(P) &=& T_1^{(j)}(P) + T_2^{(j)}(P) + T_3^{(j)}(P) \label{EQ-PERF-MODEL1} \\ T_1^{(j)}(P) &\equiv& \frac{c_1^{(j)}}{P} \label{EQ-PERF-MODEL-TERM1} \\ T_2^{(j)}(P) &\equiv& c_2^{(j)} \label{EQ-PERF-MODEL-TERM2} \\ T_3^{(j)}(P) &\equiv& c_3^{(j)} \log P \label{EQ-PERF-MODEL-TERM3} \end{eqnarray} with the three fitting parameters $\{ c_i^{(j)} \}_{i=1,2,3}$. The terms $T_1^{(j)}$ and $T_2^{(j)}$ stand for the time of ideal strong scaling and of non-parallel computations, respectively. The model $T^{(j)} =T_1^{(j)} + T_2^{(j)}$ is known as Amdahl's relation \cite{AMDAHL}. The term $T_3^{(j)}$ stands for the time of MPI communications. The logarithmic function was chosen as a reasonable one, since the main communication pattern required in dense matrix computations is categorized as {\it collective} communication (e.g.\ {\tt MPI\_Allreduce} for calculating the inner product of a vector), and such a communication routine is often implemented as a sequence of point-to-point communications along a binary tree, whose total cost is proportional to $\log_2 P$ \cite{Pacheko96}. The generic three-parameter model in Eq.~(\ref{EQ-PERF-MODEL1}) can give, unlike Amdahl's relation, the minimum shown schematically in Fig.~\ref{FIG_PREDICT_SCHEMATIC}.
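As a numerical illustration (with made-up coefficients, not fitted ones), the following snippet evaluates the generic three-parameter model of Eq.~(\ref{EQ-PERF-MODEL1}) and locates its minimum; setting $dT/dP = -c_1/P^2 + c_3/P$ to zero gives the saturation point $P^{*} = c_1/c_3$.

```python
import numpy as np

def model_3param(P, c1, c2, c3):
    """Generic three-parameter model: ideal scaling + serial part + communication."""
    return c1 / P + c2 + c3 * np.log(P)

# Illustrative (not fitted) coefficients; dT/dP = 0 gives P* = c1/c3 = 2000.
c1, c2, c3 = 1000.0, 2.0, 0.5
P = np.arange(1, 10001, dtype=float)
T = model_3param(P, c1, c2, c3)
P_star = P[np.argmin(T)]
print(P_star)   # minimum of the model, i.e. the predicted saturation point
```

Because the logarithmic communication term grows without bound while the scaling term decays, such a minimum always exists when $c_1, c_3 > 0$.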
It is noted that the real MPI communication time is not measured to determine the parameter $c_3^{(j)}$, since this would require detailed modification of the source code or the use of special profilers. Rather, all the parameters $\{ c_i^{(j)} \}_{i=1,2,3}$ are estimated simultaneously from the total elapsed time $T^{(j)}$ using Bayesian inference, as will be explained later. The second proposed model is called the generic five-parameter model and is expressed as \begin{eqnarray} T^{(j)}(P) &=& T_1^{(j)}(P) + T_2^{(j)}(P) + T_3^{(j)}(P) + T_4^{(j)}(P) + T_5^{(j)}(P) \label{EQ-PERF-MODEL2} \\ T_4^{(j)}(P) &\equiv& \frac{c_4^{(j)}}{P^2} \label{EQ-PERF-MODEL-TERM4} \\ T_5^{(j)}(P) &\equiv& c_5^{(j)} \frac{\log P}{\sqrt{P}}, \label{EQ-PERF-MODEL-TERM5} \end{eqnarray} with the five fitting parameters $\{ c_i^{(j)} \}_{i=1,2,3,4,5}$. The term $T_4^{(j)}(\propto P^{-2})$ is responsible for the \lq super-linear' behavior, in which the time decays faster than $T_1^{(j)} (\propto P^{-1})$. The super-linear behavior can be seen in several benchmark data sets \cite{Ristov16}. The term $T_5^{(j)}(\propto \log P / \sqrt{P})$ expresses the time of MPI communications for matrix computation; when performing matrix operations on a 2-D scattered $M \times M$ matrix, the size of the submatrix allocated to each node is $(M/\sqrt{P})\times(M/\sqrt{P})$. Thus, the communication volume to send a row or column of the submatrix is $M/\sqrt{P}$. By taking into account the binary-tree-based collective communication and multiplying by $\log P$, we obtain the term $T_5^{(j)}(P)$. The term decays more slowly than $T_1^{(j)} (\propto P^{-1})$. The third proposed model results when the MPI communication term $T^{(j)}_3$ in Eq.~(\ref{EQ-PERF-MODEL1}) is replaced by a linear function; \begin{eqnarray} T^{(j)}(P) &=& T_1^{(j)}(P) + T_2^{(j)}(P) + \tilde{T}_3^{(j)}(P) \label{EQ-PERF-MODEL3} \\ \tilde{T}_3^{(j)}(P) &\equiv& \tilde{c}_3^{(j)} P.
\label{EQ-PERF-MODEL-TERM3B} \end{eqnarray} This model is called the linear-communication-term model. We should say that this model is fictitious, because no architecture or algorithm used in real research gives a linear term in MPI communication, as far as we know. The fictitious three-parameter model of Eq.~(\ref{EQ-PERF-MODEL3}) was proposed for comparison with the other two models. Other models for MPI routines have been proposed~\cite{Grbovic07,Hoefler10}, and a comparison with these models will also be informative; this is one of our future works. \subsection{Parameter estimation by the Markov Chain Monte Carlo procedure} \label{BAYESEAN} In this paper, the model parameters are estimated by Bayesian inference with the Markov Chain Monte Carlo (MCMC) iterative procedure, and the uncertainty is included in the predicted values. Here the uncertainty is formulated by the normal distribution, as usual. The result appears as the posterior probability distribution of the elapsed time $T$. Hereafter, the median is denoted as $T_{\rm med}$ and the upper and lower limits of the 95 \% Highest Posterior Density (HPD) interval are denoted as $T_{\rm up-lim}$ and $T_{\rm low-lim}$, respectively. The predicted value appears both as the median value $T_{\rm med}$ and as the interval $[T_{\rm low-lim}, T_{\rm up-lim}]$. The MCMC procedure was realized in Python with the Bayesian inference module PyMC ver.~2.36. The method is standard and the use of PyMC is not crucial. The MCMC procedure was carried out under the prior knowledge that each term of $\{ T_i^{(j)} \}_{i}$ is a fraction of the elapsed time and therefore each parameter of $\{ c_i^{(j)} \}_{i}$ should be non-negative $(T_i^{(j)} \ge 0, c_i^{(j)} \ge 0)$. The details of the method are explained briefly here: (I) The parameters $\{ c_i^{(j)} \}_{i}$ are considered to have uncertainty and are expressed as probability distributions.
The prior distribution should be chosen, and the posterior distribution will be obtained by Bayesian inference. The prior distribution of the parameter $c_i^{(j)}$ is set to the uniform distribution in the interval $[0, c_{i{\rm (lim)}}^{(j)}]$, where $c_{i{\rm (lim)}}^{(j)}$ is an input. The upper limit $c_{i{\rm (lim)}}^{(j)}$ should be chosen so that the posterior probability distribution is localized in the region of $c_i^{(j)} \ll c_{i{\rm (lim)}}^{(j)}$. The values of $c_{i{\rm (lim)}}^{(j)}$ depend on the problem and will appear in the next subsection with the results. (II) The present Bayesian inference was carried out for the logscale variables $(x, y) \equiv (\log P, \log T)$, instead of the original variables $(P, T)$. Prediction on the logscale variables means that the uncertainty in the normal distribution appears on the logscale variable $y$. When the original variables are used, the width of the 95 \% HPD interval ($|T_{\rm up-lim}-T_{\rm low-lim}|$) is of the same order among different numbers of nodes and is much larger than the median value $T_{\rm med}$ for data with a large number of nodes ($|T_{\rm up-lim}-T_{\rm low-lim}| \gg T_{\rm med}$). We adopted the logscale variables, since we discuss the benchmark data on logarithmic scales, as in Fig.~\ref{FIG-SCALAPACK-BENCH-DETAIL}. Another choice of transformed variables may be a possible future issue. (III) The uncertainty in the normal distribution is characterized by the standard deviation $\sigma^{(j)}$. The parameter $\sigma^{(j)}$ is also treated as a probability distribution, and its prior distribution is set to be the uniform one in the interval $[0, \sigma_{\rm limit}^{(j)}]$ with a given upper bound of $\sigma_{\rm limit}^{(j)}=0.5$. The MCMC procedure consumed only a minute or two on a notebook PC with the teacher data of existing benchmarks.
Histograms of the Monte Carlo sample data are obtained for the parameters $\{ c_i^{(j)} \}_i, \sigma^{(j)}$ and the elapsed time $T^{(j)}(P)$, and they form approximate probability distributions for each quantity. The MCMC iterative procedure was carried out for each routine independently, and the iteration number is set to $n_{\rm MC} = 10^5$. In the prediction procedure of the $j$-th routine, each iterative step gives a set of parameter values $\{ c_i^{(j)} \}_i, \sigma^{(j)}$ and the elapsed time $T^{(j)}(P)$, according to the selected model. We discarded the data in the first $n_{\rm MC}^{(\rm early)} = n_{\rm MC}/2$ steps, since the Markov chain has not converged to the stationary distribution during such early steps. After that, the sampling data were picked out at an interval of $n_{\rm interval}=10$ steps. The number of the sampling data, therefore, is $n_{\rm sample} = (n_{\rm MC}-n_{\rm MC}^{(\rm early)})/n_{\rm interval}=5000$. Hereafter the index among the sample data is denoted by $k$ ($\equiv 1,2,\ldots,n_{\rm sample}$). The $k$-th sample data consist of the set of values of $\{ c_i^{(j)} \}_i, \sigma^{(j)}$ and $T^{(j)}(P)$, and these values are denoted as $\{ c_i^{(j)[k]} \}_i, \sigma^{(j)[k]}$ and $T^{(j)[k]}(P)$, respectively. The sampling data set $\{ T^{(j)[k]}(P) \}_{k=1,...,n_{\rm sample}}$ forms the histogram, or the probability distribution, for $T^{(j)}(P)$. The probability distributions for the model parameters $\{ c_i^{(j)} \}_i$ are obtained in the same manner and will appear later in this section. The sample data for the total elapsed time are given by the sum over the routines; \begin{eqnarray} T^{[k]}(P) = \sum_j T^{(j)[k]}(P). \label{EQ-TOTAL-TIME} \end{eqnarray} Finally, the median $T_{\rm med}(P)$ and the upper and lower limits of the 95 \% HPD interval, ($T_{\rm up-lim}(P), T_{\rm low-lim}(P)$), are obtained from the histogram of $\{ T^{[k]}(P) \}_k$.
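The estimation procedure can be mimicked without PyMC by a bare-bones random-walk Metropolis sampler. The sketch below fits the generic three-parameter model to synthetic teacher data at $P=4, 16, 64$ on the logscale variables and extrapolates to $P=1024$; unlike the procedure above, $\sigma^{(j)}$ is fixed rather than sampled, and all coefficients, step sizes, and the noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic teacher data from a known three-parameter model, observed with
# noise on log T, as in the logscale formulation (coefficients are invented).
c_true = np.array([800.0, 1.0, 0.4])
P_obs = np.array([4.0, 16.0, 64.0])
model = lambda c, P: c[0] / P + c[1] + c[2] * np.log(P)
y_obs = np.log(model(c_true, P_obs)) + rng.normal(0.0, 0.05, P_obs.size)

def log_posterior(c, sigma=0.1):
    """Uniform non-negative prior; normal likelihood on the logscale variable."""
    if np.any(c < 0.0):
        return -np.inf                       # enforce c_i >= 0
    r = y_obs - np.log(model(c, P_obs))
    return -0.5 * np.sum(r ** 2) / sigma ** 2

n_mc = 100_000
chain = np.empty((n_mc, 3))
c = np.array([100.0, 1.0, 1.0])
lp = log_posterior(c)
for k in range(n_mc):                        # random-walk Metropolis updates
    prop = c + rng.normal(0.0, [20.0, 0.05, 0.02])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain[k] = c

# Burn-in of n_mc/2, thinning interval 10, then extrapolation to P = 1024.
sample = chain[n_mc // 2::10]
T_1024 = np.array([model(ck, 1024.0) for ck in sample])
med = np.median(T_1024)
lo, hi = np.percentile(T_1024, [2.5, 97.5])
print(f"predicted T(1024): median {med:.2f} s, 95% interval [{lo:.2f}, {hi:.2f}] s")
```

The histogram of `T_1024` plays the role of the posterior distribution from which the median and the HPD-like interval are read off; a production tool would also sample $\sigma^{(j)}$ and tune the proposal widths.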
\begin{table} \caption{Measured elapsed times in seconds for the matrix problem of \lq VCNT22500' solved by the workflow $A$ on the K computer. The elapsed times are measured as a function of the number of nodes $P$ for the total solver time $T(P)$ and the six routines of $T^{\rm (pdsytrd)}(P)$, $T^{\rm (pdsygst)}(P)$, $T^{\rm (pdstedc)}(P)$, $T^{\rm (pdormtr)}(P)$, $T^{\rm (pdotrf)}(P)$, $T^{\rm (rest)}(P)$ } \label{TABLE-DATA-KEI} \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \# nodes & total & pdsytrd & pdsygst & pdstedc & pdormtr & pdotrf & rest \\ \noalign{\smallskip}\hline\noalign{\smallskip} 4 & 1872.7 & 1562.2 & 61.589 & 58.132 & 122.91 & 20.679 & 47.190 \\ 16 & 240.82 & 129.09 & 37.012 & 21.341 & 24.624 & 8.0851 & 20.670 \\ 64 & 103.18 & 44.494 & 24.584 & 9.9665 & 7.1271 & 3.3122 & 13.692 \\ 256 & 63.029 & 21.325 & 20.509 & 5.8159 & 3.4131 & 2.2474 & 9.7189 \\ 1024 & 55.592 & 17.524 & 17.242 & 6.1105 & 2.6946 & 3.1462 & 8.8753 \\ 4096 & 70.459 & 20.479 & 21.169 & 6.7494 & 3.9326 & 7.9400 & 10.189 \\ 10000 & 140.89 & 29.003 & 49.870 & 17.714 & 9.9534 & 19.817 & 14.536 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Results} The prediction was carried out on the K computer for the matrix problem of \lq VCNT22500', which appears in ELSES matrix library \cite{ELSESMATRIX}. The matrix size is $M=22,500$. The workflow $A$, pure ScaLAPACK workflow, was used with the numbers of nodes for $P=4, 16, 64, 256, 1024, 4096, 10000$. The elapsed times were measured for the total elapsed time of $T(P)$ and the six routines of $T^{\rm (pdsygst)}(P)$, $T^{\rm (pdsytrd)}(P)$, $T^{\rm (pdstedc)}(P)$, $T^{\rm (pdotrf)}(P)$, $T^{\rm (pdormtr)}(P)$ and $T^{\rm (rest)}(P)$. The values are shown in Table \ref{TABLE-DATA-KEI}. The measured data of the total elapsed time of $T(P)$ shows the speed-up \lq saturation' or the phenomenon that the elapsed time shows a minimum at $P = 1024$. 
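The saturation can be read off directly from the table: the total elapsed time decreases up to $P=1024$ and grows again beyond it. A few lines suffice to check this (the values below are copied from Table \ref{TABLE-DATA-KEI}).

```python
# (P, total elapsed time in seconds) for VCNT22500 on the K computer
totals = {4: 1872.7, 16: 240.82, 64: 103.18, 256: 63.029,
          1024: 55.592, 4096: 70.459, 10000: 140.89}

# the elapsed time is minimal at P = 1024: speed-up 'saturation'
p_best = min(totals, key=totals.get)

# speed-up relative to the smallest node count P = 4
speedup = {P: totals[4] / T for P, T in totals.items()}
```

Note that the speed-up from $P=4$ to $P=16$ is about 7.78, larger than the ideal factor of 4, which is the super-linear behavior discussed below.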
The saturation is reasonable from HPC knowledge, since the matrix size is on the order of $M=10^4$ and efficient parallelization cannot be expected for $P \ge 10^3$. \begin{figure*} \includegraphics[width=1.00\textwidth]{fig_bench_PREDICT_K.eps} \caption{Performance prediction with the three models of (a) the generic three-parameter model, (b) the generic five-parameter model, and (c) the linear-communication-term model. The workflow $A$, the pure ScaLAPACK workflow, was used with the numbers of nodes $P=4, 16, 64, 256, 1024, 4096, 10000$. The data are for the generalized eigenvalue problem of \lq VCNT22500' on the K computer. The measured elapsed time for the total solver is drawn by circles. The predicted elapsed time is drawn by squares for the median and by dashed lines for the upper and lower limits of the 95 \% HPD interval. The used teacher data are the elapsed times of the six routines of $T^{\rm (pdsytrd)}$, $T^{\rm (pdsygst)}$, $T^{\rm (pdstedc)}$, $T^{\rm (pdotrf)}$, $T^{\rm (pdormtr)}$ and $T^{\rm (rest)}$ at $P=4, 16, 64$. (d) The posterior probability distribution of $c_1^{\rm (pdsytrd)}$ (upper panel) and $c_4^{\rm (pdsytrd)}$ (lower panel) within the five-parameter model. } \label{FIG-PREDICT} \end{figure*} Figures \ref{FIG-PREDICT}(a)-(c) show the result of the Bayesian inference, in which the teacher data are the measured elapsed times of the six routines of $T^{\rm (pdsygst)}(P)$, $T^{\rm (pdsytrd)}(P)$, $T^{\rm (pdstedc)}(P)$, $T^{\rm (pdotrf)}(P)$, $T^{\rm (pdormtr)}(P)$ and $T^{\rm (rest)}(P)$ at $P=4, 16, 64$. In the MCMC procedure, the value of $c_{i{\rm (lim)}}^{(j)}$ is set to $c_{i{\rm (lim)}}^{(j)} = 10^5$ for all the routines, and the posterior probability distribution satisfies the locality condition $c_i^{(j)} \ll c_{i{\rm (lim)}}^{(j)}$.
One can find that the generic three-parameter model of Eq.~(\ref{EQ-PERF-MODEL1}) and the generic five-parameter model of Eq.~(\ref{EQ-PERF-MODEL2}) both successfully predict the speed-up saturation at $P = 256 \sim 1024$, while the linear-communication-term model does not. Examples of the posterior probability distribution are shown in Fig.~\ref{FIG-PREDICT}(d), for $c_1^{\rm (pdsytrd)}$ and $c_4^{\rm (pdsytrd)}$ in the five-parameter model. \begin{figure*} \includegraphics[width=1.00\textwidth]{fig_bench_PREDICT_K_detail.eps} \caption{Detailed analysis of the performance prediction in Fig.~\ref{FIG-PREDICT}. The performance prediction for $T^{\rm (pdsytrd)}$ is shown by (a) the generic three-parameter model and (b) the generic five-parameter model. The performance prediction for $T^{\rm (pdsygst)}$ is shown by (c) the generic three-parameter model and (d) the generic five-parameter model. } \label{FIG-PREDICT-DETAIL} \end{figure*} Figure \ref{FIG-PREDICT-DETAIL} shows a detailed analysis of the performance prediction in Fig.~\ref{FIG-PREDICT}. Here we focus on the performance prediction of $T^{\rm (pdsytrd)}$ and $T^{\rm (pdsygst)}$, since the total time is dominated by these two routines, as shown in Table \ref{TABLE-DATA-KEI}. The elapsed time $T^{\rm (pdsytrd)}$ is predicted in Figs.~\ref{FIG-PREDICT-DETAIL}(a) and (b) by the generic three-parameter model and the generic five-parameter model, respectively. The importance of the five-parameter model can be understood when one sees the probability distributions of $c_1^{\rm (pdsytrd)}$ and $c_4^{\rm (pdsytrd)}$ in Fig.~\ref{FIG-PREDICT}(d). Since the probability distribution of $c_4^{\rm (pdsytrd)}$ has a peak at $c_4^{\rm (pdsytrd)} \approx 2.3 \times 10^4 $, the term $T_4^{(j)}(P)$ in Eq.~(\ref{EQ-PERF-MODEL-TERM4}), the super-linear term, should be important.
The contribution at $P=10$, for example, is estimated to be $T_4(P=10) = c_4/P^2 \approx (2.3 \times 10^4) / 10^2 \approx 2 \times 10^2$. The above observation is consistent with the fact that the measured elapsed time shows super-linear behavior between $P=4$ and $P=16$ ($T(P=4)/T(P=16)$ = (1872.7 sec)/(240.82 sec) $\approx$ 7.78, larger than the ideal factor of 4) and that the generic five-parameter model reproduces the teacher data, the data at $P$=4, 16, 64, better than the generic three-parameter model does. The elapsed time $T^{\rm (pdsygst)}$ is predicted in Figs.~\ref{FIG-PREDICT-DETAIL}(c) and (d) by the generic three-parameter model and the generic five-parameter model, respectively. The prediction by the generic five-parameter model seems to be better than that by the three-parameter model. Unlike $T^{\rm (pdsytrd)}$, $T^{\rm (pdsygst)}$ is a case of poor strong scaling, since neither the ideal scaling behavior $(T \propto 1/P)$ nor the super-linear behavior $(T \propto 1/P^2)$ can be seen even at the small numbers of used nodes $(P=4,16)$. \begin{figure*} \includegraphics[width=1.00\textwidth]{fig_bench_CONV_FIT.eps} \caption{(a) Performance prediction by least squares methods for \lq VCNT22500' on the K computer. The red circles indicate the measured elapsed times for $P=4, 16, 64, 256, 1024, 4096, 10000$, while the other curves are determined by the least squares methods with the teacher data at $P=4, 16, 64$; the bold solid line is determined without any constraint by the three-parameter model, the dashed line is determined with the non-negative constraint by the three-parameter model, and the dotted line is determined with the non-negative constraint by the five-parameter model. In the fitting procedure with the five-parameter model, the initial values of the iterative procedure are chosen to be the result of the three-parameter model. (b) Performance prediction for the generalized eigenvalue problem of \lq VCNT90000' on Oakforest-PACS. The generic five-parameter model is used.
The red circles indicate the measured elapsed times for $P$=16, 32, 64, 128, 256, 512, 1024 and 2048. The predicted elapsed time is drawn by squares for the median and by dashed lines for the upper and lower limits of the 95 \% HPD interval. The teacher data are the data at $P=$16, 32, 64, 128. } \label{FIG-CONVENTIONAL-FIT} \end{figure*} \section{Discussion on performance prediction} \label{DISCUSSION} \subsection{Comparison with conventional least squares method} In this subsection, the present MCMC method is compared with conventional least squares methods, so as to clarify the properties of the present method. The least squares methods have been applied to the performance modeling of numerical libraries; in fact, most of the recent studies on performance modeling \cite{Peise12,Reisert17,Fukaya15,Fukaya18} rely on least squares methods to fit the model parameters. The fitting results obtained by three types of least squares methods are shown in Fig.~\ref{FIG-CONVENTIONAL-FIT}(a). The data for VCNT22500 on the K computer are used, as in Fig.~\ref{FIG-PREDICT}. The total elapsed time $T(P)$ is fitted with the teacher data at $P=4, 16, 64$. The bold solid line is the curve fitted by the least squares method, without any constraint, in the generic three-parameter model. The fitting procedure determines the parameters as $(c_1,c_2,c_3) = (c_1^{\rm (LSQ0)},c_2^{\rm (LSQ0)},c_3^{\rm (LSQ0)}) \equiv (1.06 \times 10^{4}, -1.14 \times 10^{3}, 2.60 \times 10^{2})$. The fitted curve reproduces the teacher data, the data at $P=4, 16, 64$, exactly, but deviates severely from the measured values at $P \ge 256$. The fitted value of $c_2$ is negative, since the method ignores the preknowledge of the non-negative constraint $(c_2 \ge 0)$. The dashed and dotted lines are the curves fitted by the least squares methods under the non-negative constraint on the fitting parameters $\{ c_i \}_i$. The fitting procedure was carried out by the module lsqnonneg in MATLAB.
The dashed line is fitted with the three-parameter model and the fitted values are $(c_1,c_2,c_3) = (c_1^{\rm (LSQ1)},c_2^{\rm (LSQ1)},c_3^{\rm (LSQ1)}) \equiv (5.47 \times 10^3, 3.20 \times 10^{-10}, 2.77)$. The dotted line is fitted with the five-parameter model and the fitted values are $(c_1,c_2,c_3,c_4,c_5) = (c_1^{\rm (LSQ2)},c_2^{\rm (LSQ2)},c_3^{\rm (LSQ2)},c_4^{\rm (LSQ2)},c_5^{\rm (LSQ2)}) \equiv (1.65 \times 10^3, 5.64 \times 10^{-2}, 17.2, 2.30 \times 10^{4}, 4.99 \times 10^{-3})$. The dotted curve is comparable to the median values of the MCMC method in Fig.~\ref{FIG-PREDICT}(b). We found, however, that the fitting procedure with the five-parameter model is problematic, because the fitting problem is underdetermined, having a smaller number of teacher data points ($n_{\rm teacher}=3$) than fitting parameters ($n_{\rm param}=5$), and therefore the objective function can have multiple local minima. The fitting procedure is iterative, and we chose the initial values as $(c_1,c_2,c_3,c_4,c_5) = (c_1^{\rm (LSQ1)},c_2^{\rm (LSQ1)},c_3^{\rm (LSQ1)},0, 0)$ in the case of Fig.~\ref{FIG-CONVENTIONAL-FIT}(a). We found that several other choices of the initial values failed to converge. The above numerical experiment reflects the general difference between the MCMC method and the least squares methods. In general, the MCMC method has the following three properties: (i) the non-negative constraint on the elapsed time ($T>0$) can be imposed as preknowledge; (ii) the uncertainty or error can be taken into account both for the teacher data and for the predicted data, where the uncertainty of the predicted data appears as the HPD interval, for example, in Fig.~\ref{FIG-PREDICT}; (iii) the iterative procedure is guaranteed to converge to the unique posterior distribution. The least squares method without any constraint has none of the above three properties, as is exemplified by the bold solid curve in Fig.~\ref{FIG-CONVENTIONAL-FIT}(a).
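With three teacher points and three parameters, the unconstrained fit is an exact interpolation, so it can be reproduced by solving a $3\times 3$ linear system. The model form $T(P)=c_1/P+c_2+c_3\ln P$ below is an assumption on our part, but it recovers the quoted values $(c_1^{\rm (LSQ0)},c_2^{\rm (LSQ0)},c_3^{\rm (LSQ0)})$ to the displayed precision, including the negative $c_2$:

```python
import math

# teacher data: total elapsed times at P = 4, 16, 64 (Table of VCNT22500)
data = [(4, 1872.7), (16, 240.82), (64, 103.18)]

def solve3(A, b):
    """Naive Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# assumed model T(P) = c1/P + c2 + c3*ln(P): one row per teacher point
A = [[1.0 / P, 1.0, math.log(P)] for P, _ in data]
b = [T for _, T in data]
c1, c2, c3 = solve3(A, b)
# c1 ~ 1.06e4, c2 ~ -1.14e3 (negative!), c3 ~ 2.60e2
```

The negative $c_2$ illustrates why the non-negative constraint matters: nothing in the unconstrained fit prevents an unphysical term.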
The least squares method with the non-negative constraint has only property (i), which is reflected in the dashed and dotted curves of Fig.~\ref{FIG-CONVENTIONAL-FIT}(a). The fitting procedures do not quantify the uncertainty, because of the lack of property (ii). Instead of property (iii), the least squares method gives the parameter values that attain one of the local minima of the least squares objective function, which can be problematic, as explained at the end of the previous paragraph. In addition, it is noted that the least squares method with the three-parameter model without constraint can be realized within the framework of the MCMC method, if the prior of each parameter $c_i$ is set to the uniform distribution on the interval $[-c_{\rm lim}, c_{\rm lim}]$ with $c_{\rm lim} =10^5$ and that for $\sigma$ is set to the uniform distribution on the interval $[0, \sigma_{\rm lim}]$ with $\sigma_{\rm lim} = 10^{-5}$. This prior distribution means that the non-negative condition $(c_i \ge 0)$ is ignored and the method is required to reproduce the teacher data almost exactly ($\sigma_{\rm lim} \approx 0$). In this case, the MCMC procedure was carried out for the variables $(x, T) \equiv (\log P, T)$, unlike in Sec.~\ref{BAYESEAN}, because the non-negative condition is not imposed on $T$ and we cannot use $y \equiv \log T$ as a variable. We confirmed the above statement by the MCMC procedure. As a result, the median is located at $(c_1,c_2,c_3) = (c_1^{\rm (LSM)},c_2^{\rm (LSM)},c_3^{\rm (LSM)})$ and the width of the 95 \% HPD interval is tiny ($10^{-3}$ or less) for each coefficient. \subsection{Possible extension of the present models} Although the generic five-parameter model seems to be the best among the three proposed models, theoretical extensions toward more flexible models are possible.
Figure \ref{FIG-CONVENTIONAL-FIT}(b) shows the performance prediction by the five-parameter model for the benchmark data of the workflow $A$ in Fig.~\ref{FIG-DATA-OFP-SCALAPACK}, data for the problem of \lq VCNT90000' on Oakforest-PACS. The data at $P=16, 32, 64, 128$ are the teacher data. In the present case, the upper limit is set to $c_{i{\rm (lim)}}^{(j)} = 10^5$ for $i=1,2,3,5$ and $c_{4{\rm (lim)}}^{(j)} = 10^7$, and the posterior probability distribution satisfies the locality condition $c_i^{(j)} \ll c_{i{\rm (lim)}}^{(j)}$. The prediction fails to reproduce the local maximum at $P=64$ in the measured data, since the model can have only one (local) minimum and no (local) maximum. If one would like to overcome this limitation, a candidate for a flexible and useful model may be one with case classification: \begin{eqnarray} T(P) = \left\{ \begin{array}{ll} T_{\rm model}^{(\alpha)}(P) & \quad P < P_{\rm c} \\ T_{\rm model}^{(\beta)}(P) & \quad P > P_{\rm c} \\ \end{array} \right., \end{eqnarray} where $T_{\rm model}^{(\alpha)}(P)$ and $T_{\rm model}^{(\beta)}(P)$ are independent models and the threshold node count $P_{\rm c}$ is also a fitting parameter. A model with case classification can be fruitful from the algorithm and architecture viewpoints. An example is the case where the target numerical routine switches algorithms according to the number of used nodes. Another example is the case where the nodes in a rack are tightly connected and parallel computation within these nodes is quite efficient. From the application viewpoint, however, the prediction in Fig.~\ref{FIG-CONVENTIONAL-FIT}(b) is still meaningful, since the extrapolation implies that efficient parallel computation cannot be expected at $P \ge 128$. \subsection{Discussions on methodologies and future aspects} This subsection is devoted to discussions on methodologies and future aspects.
(I) The proper values of $c_{i{\rm (lim)}}^{(j)}$ must be chosen for each problem, and here we propose a way to set them automatically. The possible maximum value of $c_{i}^{(j)}$ appears when the elapsed time of the $j$-th routine is governed only by the $i$-th term of the given model ($T^{(j)}(P) \approx T_i^{(j)}(P)$). Consider, for example, the case in which the first term is dominant ($T^{(j)}(P) \approx c_{1}^{(j)}/P$); then the possible maximum value of $c_{1}^{(j)}$ is given by $c_{1}^{(j)} \approx PT^{(j)}(P)$. Therefore the limit $c_{1{\rm (lim)}}^{(j)}$ can be chosen so that $c_{1{\rm (lim)}}^{(j)} \ge PT^{(j)}(P)$ for all teacher data points $(P, T^{(j)}(P))$. The locality condition $c_{1}^{(j)} \ll c_{1{\rm (lim)}}^{(j)}$ should then be checked for the posterior probability distribution. We plan to use this method in a future version of our code. (II) The elapsed times of parallel computation may differ among multiple runs, as discussed in the last paragraph of Sec.~\ref{SEC-BENCH-OFP-DIFF-WORKFLOW}. Such uncertainty of the measured elapsed time can be treated in Bayesian inference by, for example, setting the parameter $\sigma_{\rm lim}^{(j)}$ appropriately based on the sample variance of the multiple measured data or other preknowledge. In the present paper, however, the optimal choice of $\sigma_{\rm lim}^{(j)}$ is not discussed and is left as future work. (III) It is noted that ATMathCoreLib, an autotuning tool \cite{ATMATHCORELIB1,ATMATHCORELIB2}, also uses Bayesian inference for performance prediction. However, it differs from our approach in two aspects. First, it uses Bayesian inference to construct a reliable performance model from noisy observations~\cite{Suda10}, so more emphasis is put on interpolation than on extrapolation. Second, it assumes normal distributions both for the prior and posterior distributions.
This enables the posterior distribution to be calculated analytically without MCMC, but makes it impossible to impose the non-negativity condition $c_i\ge 0$. (IV) The present method uses the same generic performance model for all the routines. A possible next step is to develop a proper model for each routine. Another possible theoretical extension is performance prediction among different problem sizes. The elapsed time of a dense matrix solver depends on the matrix size $M$, as well as on $P$, so it would be desirable to develop a model that expresses the elapsed time as a function of both the number of nodes and the matrix size ($T=T(P,M)$). For example, the elapsed time of the tridiagonalization routine in EigenExa on the K computer was modeled as a function of both the matrix size and the number of nodes~\cite{Fukaya18}. In that study, several approaches to performance modeling are compared depending on the information available for modeling, and some of them accurately estimate the elapsed time for a given condition (i.e., matrix size and node count). The use of such a model would provide more fruitful predictions, in particular for extrapolation. \section{Summary and future outlook} \label{SUMMARY} We developed the open-source middleware EigenKernel for the generalized eigenvalue problem, which realizes high scalability and usability, responding to a solid need in large-scale electronic state calculations. Benchmark results on Oakforest-PACS show that the middleware enables us to construct the optimal hybrid solver from ScaLAPACK, ELPA and EigenExa routines. The detailed performance data provided by EigenKernel reveal performance issues without additional effort such as code modification. For high usability, a performance prediction method was proposed based on Bayesian inference with a Markov chain Monte Carlo procedure. We found that the method is applicable not only to performance interpolation but also to extrapolation.
As a future prospect, one can consider a system that gathers performance data automatically every time users call a library routine. Such a system could be realized in collaboration with supercomputer administrators and would give greater predictive power to our performance prediction tool by providing a huge set of teacher data. The performance prediction method is general and applicable to any numerical procedure, if a proper performance model is prepared for each routine. The present middleware approach will form a foundation for application-algorithm-architecture co-design. \section*{Acknowledgement} The authors thank Toshiyuki Imamura (RIKEN) for fruitful discussions on EigenExa and Kengo Nakajima (The University of Tokyo) for fruitful discussions on Oakforest-PACS. We are also grateful to the anonymous reviewers, whose comments helped us improve the quality of this paper.
\section{INTRODUCTION} Robustness is of crucial importance in many complex systems and plays an important role in mitigating damage \cite{gao_universal_2016}. It has been studied widely in single networks \cite{cohen_resilience_2000,albert2000error,tanizawa2005optimization}, interdependent networks \cite{buldyrev_catastrophic_2010,leicht_percolation_2009,hu_percolation_2011,gao2011robustness,gao_networks_2012,shekhtman2015resilience} and multiplex networks \cite{hackett_bond_2016,sole-ribalta_congestion_2016}. Percolation theory has demonstrated its great potential as a versatile tool for understanding system resilience based on both dynamical and structural properties \cite{aharony2003introduction,bunde2012fractals}, and has been applied to many real systems \cite{saberi2015recent,li2015percolation,meng_percolation_2017}. Recently, a theoretical framework based on percolation theory has been developed to study the resilience of communities formed of either Erd\H os-R\'enyi (ER) or scale-free networks with inter-links between them \cite{DongPNAS2018}. It has been found that the inter-links affect the percolation phase transition in a manner similar to an external field in a ferromagnetic-paramagnetic spin system. However, many real systems, such as transportation networks \cite{weiss_global_2018,strano2017scaling}, infrastructure networks \cite{hines2010topological} and others, are spatially embedded, and the influence of this feature has not been considered. Here we study how the inter-links (e.g., air flights) between two spatial networks (e.g., countries) affect the overall resilience. Furthermore, we search for an optimal structure (or most robust point) of our model and consider it in a real transportation system. We do so by developing a framework to study the resilience of spatial networks with inter-links and by analysing possible optimal structures both for our model and for real transport systems.
The structure of our paper is as follows: in the next section, we introduce and describe the model. In Sec. III, the results are presented and discussed. Finally, in Sec. IV a short summary and outlook are provided. \section{MODEL} Our model is motivated by many real-world networks in which nodes and links are spatially embedded within the same region (module), but only some nodes have connections to other regions (modules). We denote the links within the same module as \textit{intralinks} and the links between different modules as \textit{interlinks}. Fig.~\ref{Fig_model}(a) demonstrates the topological structure of the global transportation network, including railways and airline routes \cite{DATA}. As the figure shows, the airports are connected via interlinks and can be regarded as \textit{interconnected nodes}. We show here that the interlinks behave, with respect to network breakdown, in a manner analogous to an external field near the ferromagnetic-paramagnetic phase transition \cite{Stanley_1971,reynolds1977ghost}. To study this effect, for simplicity and without loss of generality, we carried out extensive simulations on a network of two modules, each with the same number of nodes, $N_{1} = L\times L$, where $L$ is the linear size of the lattice, representing the spatial networks. Within each module the nodes are connected only to their spatial neighbors, as defined by a 2-dimensional square lattice. Between the modules, we randomly select a fraction $r$ of nodes to be interconnected nodes, e.g., airports, and randomly assign $M_{inter}$ interlinks among these nodes in the two modules. A network generated from our model is shown in Fig.~\ref{Fig_model}(b). Our model is realistic and can represent coupled transport systems, i.e., the nodes in the same lattice module form a localized railway or road network within one region, while the interlinks represent interregional airline routes.
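The construction just described, together with the random-removal experiment used throughout the paper, can be sketched in pure Python. The parameters below ($L$, $r$, $M_{inter}$, removal probabilities, number of realizations) are small illustrative choices, not the values used in our simulations; clusters are merged with a union-find structure.

```python
import random

random.seed(0)

L = 20                   # linear lattice size (illustrative)
N = L * L                # nodes per module; module 2 node ids are offset by N
r = 0.1                  # fraction of interconnected nodes
M_inter = 50             # number of interlinks between the two modules

def lattice_edges(offset):
    """Nearest-neighbor edges of an L x L square lattice, node ids offset..offset+N-1."""
    edges = []
    for i in range(L):
        for j in range(L):
            u = offset + i * L + j
            if j + 1 < L:
                edges.append((u, u + 1))
            if i + 1 < L:
                edges.append((u, u + L))
    return edges

edges = lattice_edges(0) + lattice_edges(N)           # intralinks of both modules
hubs_1 = random.sample(range(0, N), int(r * N))       # interconnected nodes, module 1
hubs_2 = random.sample(range(N, 2 * N), int(r * N))   # interconnected nodes, module 2
edges += [(random.choice(hubs_1), random.choice(hubs_2)) for _ in range(M_inter)]

def giant_component(p):
    """S(p, r): largest-cluster fraction after keeping each node with probability p."""
    alive = [random.random() < p for _ in range(2 * N)]
    parent = list(range(2 * N))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if alive[u] and alive[v]:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[rv] = ru
    sizes = {}
    for u in range(2 * N):
        if alive[u]:
            root = find(u)
            sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values(), default=0) / (2 * N)

# average over random-removal realizations, as in the Monte Carlo procedure
S_high = sum(giant_component(0.80) for _ in range(50)) / 50   # above p_c
S_low = sum(giant_component(0.40) for _ in range(50)) / 50    # below p_c
```

Above the lattice threshold the giant component spans a finite fraction of the system, while below it only small clusters remain.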
\begin{figure} \begin{centering} \includegraphics[width=1.0\linewidth]{FigModel} \caption{\label{Fig_model} (a) The topological structure of the global transport network. The yellow links are railway lines, the red nodes are railway intersections, and the blue lines are global airline routes. (b) Our model. We assume two separate lattice networks, representing two continents (or countries) with railway networks. We add $M_{inter}$ inter-links to a fraction $r$ of nodes, representing cities with airports having flights to the other continent. Interconnected nodes and their respective interlinks are highlighted in gray. Here, we chose $r$ = 0.1 and $M_{inter}$ = 50.} \end{centering} \end{figure} To quantify the resilience of our model, we carried out extensive numerical simulations of the size of the giant connected component $S(p,r)$ after a randomly chosen fraction $1 - p$ of the nodes is removed. Note that our model is distinct from the case of interdependent networks \cite{buldyrev_catastrophic_2010}, where the failure of nodes in one network leads to the failure of dependent nodes in other networks. Our model is also different from the interconnected modules model \cite{shai_critical_2015}, where interconnected nodes are attacked. In our model, the interconnections between different communities are additional connectivity links \cite{li2014epidemics} and randomly chosen nodes are attacked \cite{DongPNAS2018}. For a given set of parameters $[p,r;L]$, we carried out 10,000 Monte Carlo realizations and averaged the results to obtain $S(p,r)$. \section{RESULTS} Similar to our earlier studies \cite{DongPNAS2018,shekhtman_critical_2018}, we find that the parameter $r$, governing the fraction of interconnected nodes, has effects analogous to a magnetic field in a spin system near criticality.
This analogy can be seen through two facts: (i) a non-zero fraction of interconnected nodes destroys the original phase transition point of the single module; (ii) critical exponents (defined below), with values derived from percolation theory, characterize the effect of the external field on $S(p,r)$. Fig.~\ref{Fig:1A}(a) shows our simulation results for the size of the giant component $S(p,r)$ with $L = 4096$ and $M_{inter} = 2\times L\times L$ for various $r$. We note that in the limit $r=0$ our model recovers the critical threshold of a single square lattice, $p_{c} \approx 0.592746$ \cite{newman_efficient_2000}. We find that $S(p_c,r) >S(p_c,0)=0$ for $r>0$, showing that the interconnected nodes remove the phase transition of the single lattice. \begin{figure} \begin{centering} \includegraphics[width=1.0\linewidth]{Fig1} \caption{\label{Fig:1A} (a) The giant component (order parameter), $S(p,r)$, as a function of the fraction of non-removed nodes $p$ for several values of $r$; (b) $S(p_c,r)$ as a function of $r$ with the exponent $\delta$; (c) $\frac{\partial S(p,r)}{\partial r}$ as a function of $p_{c}-p$ with $r = 10^{-4}$ and the exponent $\gamma$; (d) Same as (c) but for several $r$. Here, $L = 4096$, $M_{inter} = 2\times L\times L$, $p_c = 0.592746$. The dashed line is the best-fit line for the data, which is found to have a slope $1/\delta = 0.055$ and R-Square $>0.999$.} \end{centering} \end{figure} Next, we investigate the scaling relations and critical exponents, with $S(p,r)$, $p$ and $r$ serving as our analogues of the magnetization (order parameter), temperature, and external field, respectively \cite{Stanley_1971}.
To quantify how the external field, $r$, affects the phase transition, we define the critical exponent $\delta$, which relates the order parameter at the critical point to the magnitude of the field, \begin{equation}\label{eq5} S(p_c,r) \sim r^{1/\delta}, \end{equation} and the exponent $\gamma$, which describes the susceptibility near criticality, \begin{equation}\label{eq6} \left (\frac {\partial S(p,r)} {\partial r} \right)_{r\rightarrow 0} \sim \left| p - p_c \right|^{-\gamma}, \end{equation} where $p_c$ is the site percolation threshold of a single 2-dimensional square lattice network. The simulation results for $\delta$ in our model are shown in Fig.~\ref{Fig:1A}(b). We obtain $1/\delta = 0.055$ from simulations, which agrees very well with the known exponent value for standard percolation on square lattices, $1/\delta = 5/91 \approx 0.0549$ \cite{aharony2003introduction,bunde2012fractals}. The dashed line is the best-fit line for the data with R-Square $>0.999$. We next investigate the critical exponent $\gamma$, which we claim to be analogous to the magnetic susceptibility exponent, with the scaling relation given in Eq.~\eqref{eq6}. Fig.~\ref{Fig:1A}(c) presents our results for $\gamma$. We obtain $\gamma = 2.389$ for $p < p_c$ and $r = 10^{-4}$, which again agrees very well with the known percolation value $\gamma = 43/18 \approx 2.389$ \cite{aharony2003introduction,bunde2012fractals}. In Fig.~\ref{Fig:1A}(d) we also plot our results for the values $r = 10^{-4}, 10^{-3}, 10^{-2}$ to highlight the changes in the range of the scaling region. We find that as $r$ decreases, the scaling region becomes larger; this is expected, since for smaller $r$ the system is closer to criticality ($r=0$). Similar effects on the scaling range are also observed when changing $M_{inter}$, with respect to the critical exponent $1/\delta$ and Eq.~\eqref{eq5}, as seen in Fig.~\ref{Fig:S1A} \cite{SI}.
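The exponent extraction used above amounts to a least-squares slope on a log-log plot. The sketch below applies it to synthetic data generated with the exact value $1/\delta = 5/91$ (the amplitude 0.8 and the range of $r$ are arbitrary choices), recovering the input slope:

```python
import math

delta_inv_true = 5.0 / 91.0   # exact 2d-percolation value of 1/delta

# synthetic S(p_c, r) data obeying S ~ r^(1/delta)
rs = [10.0 ** (-k) for k in range(1, 6)]
Ss = [0.8 * r ** delta_inv_true for r in rs]

# least-squares slope of log S versus log r
xs = [math.log(r) for r in rs]
ys = [math.log(S) for S in Ss]
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
```

For simulation data the same fit yields the quoted estimate $1/\delta = 0.055$.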
We note that for a single 2d square lattice, the scaling exponent $\beta$, defined by the relation $S \sim (p - p_c)^{\beta}$, has the value $\beta = 5/36$ \cite{aharony2003introduction,bunde2012fractals}. The critical exponents $\beta$, $\delta$ and $\gamma$ together characterize the percolation universality class of our model. Since the various thermodynamic quantities are related, these critical exponents are not independent, but rather can be uniquely defined in terms of only two of them \cite{domb2000phase}. We find that the scaling hypothesis is also valid for our model and note that our values for these exponents are consistent with Widom's identity, $\delta -1 = \gamma/\beta$ \cite{bunde2012fractals}. \begin{figure} \begin{centering} \includegraphics[width=1.0\linewidth]{Fig_real} \caption{\label{Fig:2A} (a) $S(p,0)$, versus the fraction of non-removed nodes, $p$, for real-data of the European (EU) and North America (NA) railway networks; (b) $S(p_c,r)$ as a function of $r$; (c) $\frac{\partial S(p,r)}{\partial r}$ as a function of $p_{c}-p$ for $r = 10^{-2}$. Inset in (a) shows the second largest component $S_{2}(p,0)$ as a function of $p$. We obtain our values of $p_c$ based on the peak of $S_{2}(p,0)$, which gives $p_{c}^{EU} = 0.7641$ and $p_{c}^{NA} = 0.7578$. The dashed lines in (b) are the best-fit lines for the data with slopes $1/\delta =0.054$, $1/\delta =0.052$ and R-Square $>0.89$. The network sizes are $N_{EU}=8354$, $M_{EU}=11128$; $N_{NA}=933$, $M_{NA}=1273$, $M_{flight} = 1864$.} \end{centering} \end{figure} In the following, we test our framework on a real-world example involving global transportation networks. We consider two railway networks, one in Europe (EU) and the other in North America (NA). The two railway networks have $N_{EU}=8354$ and $N_{NA}=933$ nodes (stations), as well as $M_{EU}=11128$ and $M_{NA}=1273$ intralinks, respectively.
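Widom's identity quoted above can be verified exactly with rational arithmetic for the standard 2d percolation exponents:

```python
from fractions import Fraction

beta = Fraction(5, 36)     # order-parameter exponent beta for 2d percolation
gamma = Fraction(43, 18)   # susceptibility exponent gamma
delta = Fraction(91, 5)    # field exponent delta (1/delta = 5/91)

# Widom's identity: delta - 1 = gamma / beta; both sides equal 86/5
assert delta - 1 == gamma / beta == Fraction(86, 5)
```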
As an example of adding long-distance flights, we added $M_{flight}$ interconnected links randomly among a fraction $r$ of the nodes (airport hubs). We used $M_{flight} = 1864$, which is the actual number of direct flights between the two continents. Fig. \ref{Fig:2A} shows our results for the system of the two real networks. We find that the values of the critical exponents $\delta$ and $\gamma$ for the real networks [Fig. \ref{Fig:2A}(b) and (c)] are consistent with the results obtained from our model. One should note that the percolation threshold $p_c$ is different in each module when they are separated, since the number of nodes and links is not the same in both modules. To obtain the percolation threshold $p_c$ for each real railway network, we analyzed the second largest component, $S_{2} (p,0)$. The size of the second largest cluster is known to be at a maximum at $p_c$ \cite{margolina_size_1982}. We obtained $p_{c}^{EU} = 0.764$ and $p_{c}^{NA} = 0.758$ by utilizing the peak of $S_{2} (p,0)$ for the EU and NA networks, respectively [see inset of Fig. \ref{Fig:2A}(a)]. To analyze the robustness of our model, we define an effective percolation threshold, $p_{cut}$, by using a small cut-off value of the giant component, $S_{cut}$, as shown in Fig.~\ref{Fig:3A}(a). The threshold $p_{cut}$ is defined as the point where $S(p,r)$ reaches $S_{cut}$. We assume that when $S(p,r)$ is as small as $S_{cut}$ or below, the network is no longer functional. Interestingly, we find an optimal $r$ in our model; that is, for a certain $r=r_{opt}$ the system is most robust, i.e., $p_{cut}$ is minimal. Indeed, Fig.~\ref{Fig:3A}(b) shows a specific example with $S_{cut} = 0.01$, where we find the optimal point to be $r_{opt} \approx 0.05$. In our framework, this suggests that if $5\%$ of the cities have interconnected flights the network is most robust to random failures.
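The effective threshold defined above can be extracted from a simulated $S(p,r)$ curve by locating its crossing with $S_{cut}$. In the sketch below, a synthetic monotone curve stands in for the simulation output:

```python
def p_cut(p_vals, S_vals, S_cut):
    """Smallest p at which S(p) reaches the cutoff S_cut (linear interpolation)."""
    for i in range(len(p_vals) - 1):
        s0, s1 = S_vals[i], S_vals[i + 1]
        if s0 < S_cut <= s1:
            p0, p1 = p_vals[i], p_vals[i + 1]
            return p0 + (S_cut - s0) * (p1 - p0) / (s1 - s0)
    return None

# synthetic S(p) curve standing in for simulation output
ps = [i / 100 for i in range(101)]
Ss = [max(0.0, p - 0.55) for p in ps]
threshold = p_cut(ps, Ss, 0.01)   # crosses S_cut = 0.01 near p = 0.56
```

Repeating this extraction for a range of $r$ values traces out the $p_{cut}(r)$ curve whose minimum defines $r_{opt}$.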
The origin of this optimization phenomenon is the percolation competition between the individual lattice module and the interconnected `network' composed of the $r$ interconnected nodes/inter-links. When $r$ is small enough, the behavior of the giant component $S(p,r)$ is dominated by the single lattice module [see Fig.~\ref{Fig:3A}(a)], and the threshold $p_{cut}$ is large and close to $p_c$ [see Fig.~\ref{Fig:3A}(b), small $r$]. As $r$ increases, the effect of the giant component of a single lattice module becomes weaker, while the effect of the interconnected nodes/inter-links becomes stronger, resulting in a decrease of $p_{cut}$. However, when $r$ is large, the behavior of the giant component is dominated by the interconnected nodes/inter-links, and $p_{cut}$ is proportional to $r$ [see Fig.~\ref{Fig:3A}(b), large $r$]. In particular, our model becomes like a random network when $r=1$. We also find in Fig.~\ref{Fig:3A}(b) that there are no significant finite-size effects for our system, since the three curves with $L = 1024, 2048, 4096$ nearly overlap. The results on how $p_{cut}$ changes with $S_{cut}$ and $r$ are shown in Fig.~\ref{Fig:3A}(c). \begin{figure} \begin{centering} \includegraphics[width=1.0\linewidth]{pc_a1.eps} \caption{\label{Fig:3A} The effective percolation threshold, $p_{cut}$, for our model. (a) Definition of $p_{cut}$ as the intersection between $S(p,r)$ and $S_{cut}$. (b) $p_{cut}$ as a function of $r$ with $S_{cut} = 0.01$. (c) $p_{cut}$ as a function of $r$ and $S_{cut}$. } \end{centering} \end{figure} Fig.~\ref{Fig:4A}(a) presents how $p_{cut}$ changes with $S_{cut}$ and $r$ for a real network. These results are qualitatively similar to our model results [Fig.~\ref{Fig:3A}(c)]. We also observe that there exists an optimal value of $r$ in the real transportation network. Fig.~\ref{Fig:4A}(b) shows three specific cases with $S_{cut} = 0.01, 0.05, 0.1$.
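The effective threshold $p_{cut}$ of Fig.~\ref{Fig:3A}(a) is simply the crossing point of the curve $S(p,r)$ with the level $S_{cut}$; on sampled data it can be read off by linear interpolation between the two bracketing samples (illustrated here on a synthetic monotone curve, not the actual simulation data):

```python
def p_cut(ps, S, S_cut):
    """Return p where the sampled curve S(p) first rises through S_cut.

    ps must be increasing and S non-decreasing (S grows with p)."""
    for i in range(1, len(ps)):
        if S[i - 1] <= S_cut <= S[i]:
            # linear interpolation between the bracketing samples
            frac = (S_cut - S[i - 1]) / (S[i] - S[i - 1])
            return ps[i - 1] + frac * (ps[i] - ps[i - 1])
    raise ValueError("S(p) never crosses S_cut")

# synthetic illustrative data: S vanishes below p = 0.6 and grows linearly above
ps = [i / 100 for i in range(101)]
S = [max(0.0, p - 0.6) for p in ps]
print(p_cut(ps, S, 0.01))  # ~ 0.61
```

Minimizing this crossing point over $r$ gives the optimal $r_{opt}$ discussed above.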
We find that the optimal point is around $r_{opt} \approx 0.01$, suggesting that if $1\%$ of cities have intercontinental flights, the system is optimally robust against random failures. For comparison, we also show in the figure the fraction of interconnected nodes in the real data: $r_{EU} = 0.0055$ and $r_{NA} = 0.05$. The lower and upper boundaries of the shaded region in Fig.~\ref{Fig:4A}(b) are based on these two values. Note that the number of interconnected links, $M_{inter}$, is kept constant when we change $r$ in our model, i.e., $\langle k_{inter}\rangle$ is proportional to $1/r$. We also performed the same analysis to identify how the external field affects the resilience, i.e., the critical exponents $\delta$, $\gamma$ and the effective percolation threshold of the spatial and ER networks when $\langle k_{inter}\rangle$ is fixed and $M_{inter}$ changes, according to $\langle k_{inter}\rangle = M_{inter}/(rN)$. The results are presented and discussed in the Supplemental Material \cite{SI}. \section{SUMMARY} We have developed a framework to study the resilience of coupled spatial networks, in which we show that the inter-links act analogously to an external field in a magnetic-paramagnetic system. Using percolation theory, we studied the dynamical evolution of the giant component and found the scaling relations governing the external field. We defined the critical exponents $\delta$ and $\gamma$ using $S$, $p$ and $r$, which serve as analogues of the total magnetization, temperature and external field, respectively. The values of the critical exponents are universal and agree well with the known values previously obtained for standard percolation on a 2d lattice. Furthermore, we find that our scaling relations obey Widom's identity. We next defined the effective percolation threshold to quantify the robustness of our model. We found that there exists an optimal fraction of interconnected nodes, which is also predicted and observed in real-world networks.
Our approach provides a new perspective on the resilience of networks with community structure and gives insight into the response of their inter-links, which act as an external field. Lastly, our model provides a method for optimizing real-world interconnected infrastructure networks, which could be implemented by practitioners in the field. \begin{figure} \begin{centering} \includegraphics[width=1.0\linewidth]{real_data_fig} \caption{\label{Fig:4A} The effective percolation threshold for a real-world network. (a) $p_{cut}$ as a function of $r$ and $S_{cut}$. (b) $p_{cut}$ as a function of $r$ with $S_{cut} = 0.01, 0.05, 0.1$. The region between $r_{EU} = 0.0055$ and $r_{NA} = 0.05$ is highlighted. } \end{centering} \end{figure} \section*{Acknowledgements} We acknowledge the Israel-Italian collaborative project NECST, the Israel Science Foundation, the Major Program of National Natural Science Foundation of China (Grants 71690242, 91546118), ONR, Japan Science Foundation, BSF-NSF, and DTRA (Grant no. HDTRA-1-10-1-0014) for financial support. This work was partially supported by the National Natural Science Foundation of China (Grants 61403171, 71403105, 2015M581738 and 1501100B) and the Key Research Program of Frontier Sciences, CAS, Grant No. QYZDJ-SSW-SYS019. J.F. thanks the fellowship program funded by the Planning and Budgeting Committee of the Council for Higher Education of Israel.
\section{Introduction\label{intro}} We consider the Cauchy problem of the 2D Zakharov-Kuznetsov-Burgers (ZKB) equation: \begin{equation}\label{ZKB} \begin{cases} \displaystyle \partial_{t}u+\partial_x(\partial_x^2+\partial_y^2)u-\partial_x^2u=\partial_x(u^2),\ \ t>0,\ (x,y)\in {\BBB R}^2,\\ u(0,x,y)=u_{0}(x,y),\ \ (x,y)\in {\BBB R}^{2}, \end{cases} \end{equation} where the unknown function $u$ is ${\BBB R}$-valued. This equation is a two-dimensional model of the Korteweg-de Vries-Burgers (KdVB) equation \begin{equation}\label{KdVB} \displaystyle \partial_{t}u+\partial_x^3u-\partial_x^2u=\partial_x(u^2),\ \ t>0,\ x\in {\BBB R}, \end{equation} and appears in the study of dust-ion-acoustic waves in dusty plasmas (see \cite{MS08}, \cite{ZTZSL13}). We can see that (\ref{ZKB}) has both a dissipative term and a dispersive term. The aim of this paper is to prove the well-posedness of (\ref{ZKB}) in the Sobolev space $H^s({\BBB R}^2)$. First, we introduce some known results for related problems in the 1D case. In \cite{KPV96}, Kenig, Ponce, and Vega proved that the Korteweg-de Vries (KdV) equation \[ \partial_{t}u+\partial_x^3u=\partial_x(u^2),\ \ t>0,\ x\in {\BBB R}, \] is locally well-posed in $H^s({\BBB R} )$ for $s>-3/4$. Colliander, Keel, Staffilani, Takaoka, and Tao (\cite{CKSTT03}) extended this local result globally in time. For the critical case, Kishimoto (\cite{Ki09}) and Guo (\cite{Guo09}) obtained the global well-posedness of the KdV equation in $H^{-\frac{3}{4}}({\BBB R})$. On the other hand, it was proved that the flow map of the KdV equation is not uniformly continuous for $s<-3/4$ by Kenig, Ponce, and Vega in \cite{KPV01} (for the ${\BBB C}$-valued KdV) and by Christ, Colliander, and Tao in \cite{CCT03} (for the ${\BBB R}$-valued KdV). Therefore, $s=-3/4$ is the optimal regularity for obtaining the well-posedness of the KdV equation by the iteration argument.
For the Burgers equation \[ \partial_{t}u-\partial_x^2u=\partial_x(u^2),\ \ t>0,\ x\in {\BBB R}, \] Dix (\cite{Di96}) proved the local well-posedness in $H^s({\BBB R})$ for $s>-1/2$ and the nonuniqueness of solutions for $s<-1/2$. For the critical case, Bekiranov (\cite{Be96}) obtained the local well-posedness of the Burgers equation in $H^{-\frac{1}{2}}({\BBB R} )$. These results say that $-1/2$ is the optimal regularity for obtaining the well-posedness of the Burgers equation. In \cite{MR02}, Molinet and Ribaud considered the KdV-Burgers equation \[ \partial_{t}u+\partial_x^3u-\partial_x^2u=\partial_x(u^2),\ \ t>0,\ x\in {\BBB R} \] and obtained the global well-posedness in $H^s({\BBB R})$ for $s>-1$. For the critical case, Molinet and Vento (\cite{MV}) proved the global well-posedness of the KdV-Burgers equation in $H^{-1}({\BBB R})$. They also proved that the flow map is discontinuous for $s<-1$. We note that the regularity $s=-1$ is lower than both $-3/4$ and $-1/2$. This means that both the dispersive term and the dissipative term are essentially effective for the well-posedness. Next, we introduce some known results for related problems in the 2D case. Gr\"unrock and Herr (\cite{GH14}), and Molinet and Pilod (\cite{MP15}) proved that the 2D Zakharov-Kuznetsov equation \begin{equation}\label{ZK} \partial_{t}u+\partial_x(\partial_x^2+\partial_y^2)u=\partial_x(u^2),\ \ t>0,\ (x,y)\in {\BBB R}^2 \end{equation} is locally well-posed in $H^s({\BBB R}^2)$ for $s>1/2$. In particular, Gr\"unrock and Herr used the linear transform \[ v(t,x,y)=u\left(t,\frac{4^{\frac{1}{3}}}{2}(x+y),\frac{4^{\frac{1}{3}}}{2\sqrt{3}}(x-y)\right) \] and rewrote (\ref{ZK}) in the symmetric form \begin{equation}\label{ZK_sym} \partial_{t}v+(\partial_x^3+\partial_y^3)v=4^{-\frac{1}{3}}(\partial_x+\partial_y)(v^2),\ \ t>0,\ (x,y)\in {\BBB R}^2. \end{equation} Such a transform was introduced by Ben-Artzi, Koch, and Saut in \cite{AKS03}.
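The symmetrizing effect of this transform can be verified at the level of the dispersion symbol: a plane wave with frequencies $(\xi,\eta)$ for $u$ corresponds to frequencies $(a\xi+b\eta,\, a\xi-b\eta)$ for $v$, with $a=4^{1/3}/2$ and $b=4^{1/3}/(2\sqrt{3})$, and the symmetric symbol $\xi'^3+\eta'^3$ then reproduces the ZK symbol $\xi(\xi^2+\eta^2)$. A short symbolic sanity check (the variable names are ours):

```python
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)
a = 4**sp.Rational(1, 3) / 2                 # 4^(1/3)/2
b = 4**sp.Rational(1, 3) / (2 * sp.sqrt(3))  # 4^(1/3)/(2*sqrt(3))

# frequencies seen by the transformed function v
xi_p, eta_p = a*xi + b*eta, a*xi - b*eta

# symmetric symbol equals the Zakharov-Kuznetsov symbol xi*(xi^2 + eta^2)
diff = sp.expand(xi_p**3 + eta_p**3 - xi*(xi**2 + eta**2))
print(sp.simplify(diff))  # 0
```

The cross terms cancel because $2a^3 = 1$ and $6ab^2 = 1$ for this particular choice of $a$ and $b$.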
We note that the well-posedness of (\ref{ZK}) in $H^s({\BBB R}^2)$ is equivalent to the well-posedness of (\ref{ZK_sym}) in $H^s({\BBB R}^2)$. This transform is not essential for obtaining the well-posedness (actually, Molinet and Pilod did not use such a transform), but the symmetry helps us to see the structure of the equation and to write some parts of the proof simply. The well-posedness of (\ref{ZK}) for $s\le 1/2$ is still open. However, Kinoshita pointed out to the author that there is a counterexample to the $C^2$-well-posedness of (\ref{ZK_sym}) in $H^s({\BBB R}^2)$ for $s<-1/4$. His counterexample is given by \[ \widehat{u_0}(\xi, \eta):=N^{-s+\frac{5}{4}}(\chi_{A}(\xi,\eta)+\chi_{B}(\xi,\eta)), \] where \[ \begin{split} A&:= \left\{\left.Na+N^{-\frac{1}{2}}\delta v+N^{-2}\epsilon \frac{v^{\perp}}{|v^{\perp}|} \right| -1<\delta, \epsilon<1\right\},\\ B&:= \left\{\left.Nb+N^{-\frac{1}{2}}\delta v+N^{-2}\epsilon \frac{v^{\perp}}{|v^{\perp}|} \right| -1<\delta, \epsilon<1\right\},\\ v&:=(3\sqrt[3]{9}, \sqrt[3]{100}),\ a:=(\sqrt[3]{2}, \sqrt[3]{75}),\ b:=\left(-3\sqrt[3]{2}, -\frac{\sqrt[3]{75}}{5}\right). \end{split} \] Indeed, we can obtain $\|u_0\|_{H^s}\sim 1$ and \[ \sup_{0<t\le T}\left\|\int_0^t e^{-(t-t')(\partial_x^3+\partial_y^3)} (\partial_x+\partial_y)\left((e^{-t'(\partial_x^3+\partial_y^3)}u_0)^2\right)dt'\right\|_{H^s} \gtrsim N^{-s-\frac{1}{4}}. \] Meanwhile, for the nonlinear parabolic equation \[ \partial_{t}u-\Delta u=P(D)F(u),\ \ t>0,\ (x,y)\in {\BBB R}^d, \] Ribaud (\cite{R98}) obtained some well-posedness results. His results include the well-posedness of the 2D nonlinear parabolic equation \begin{equation}\label{parab} \partial_{t}u-(\partial_x^2+\partial_y^2) u=\partial_x(u^2),\ \ t>0,\ (x,y)\in {\BBB R}^2 \end{equation} in $H^s({\BBB R}^2)$ for $s\ge 0$ and the nonuniqueness of solutions for $s<0$. Therefore, our interest is the well-posedness of (\ref{ZKB}) in $H^s({\BBB R}^2)$ for $s$ lower than both $-1/4$ and $0$.
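The normalization $\|u_0\|_{H^s}\sim 1$ in the counterexample above is a matter of exponent counting: assuming the sets $A$, $B$ are tubes of measure $\sim N^{-\frac{1}{2}}\cdot N^{-2}$ (the $\delta$ and $\epsilon$ ranges in their parametrization) at frequencies of size $\sim N$, one gets $\|u_0\|_{H^s}^2 \sim N^{2s}\cdot (N^{-s+\frac{5}{4}})^2\cdot N^{-\frac{5}{2}} = N^0$. A one-line check of the exponent arithmetic:

```python
import sympy as sp

s = sp.Symbol('s')
# weight^2   amplitude^2                  measure of A (tube N^{-1/2} x N^{-2})
exponent = 2*s + 2*(-s + sp.Rational(5, 4)) + sp.Rational(-5, 2)
print(sp.simplify(exponent))  # 0
```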
Here, we introduce the results for 2D dispersive-dissipative models. The KP-Burgers equation \[ \partial_x\left(\partial_t u+\partial_x^3u-\partial_x^2u-\partial_x (u^2)\right) +\epsilon \partial_y^2 u=0,\ \ t>0,\ (x,y)\in {\BBB R}^2,\ \ \epsilon \in \{-1,1\}, \] is also a two-dimensional model of the KdV-Burgers equation. We call the KP-Burgers equation the ``KP-I-Burgers equation'' if $\epsilon =-1$, and the ``KP-II-Burgers equation'' if $\epsilon =1$. The well-posedness of the KP-Burgers equation was obtained in $H^{s,0}({\BBB R}^2)$ for $s>-1/2$ by Kojok in \cite{Koj07} (for $\epsilon =1$) and Mohamad in \cite{Moh12} (for $\epsilon =-1$), where $H^{s,0}({\BBB R}^2)$ is the anisotropic Sobolev space defined by the norm $\|f\|_{H^{s,0}}=\|\langle \xi \rangle^s \widehat{f}(\xi,\eta)\|_{L^2_{\xi\eta}}$. Carvajal, Esfahani, and Panthee (\cite{CEP17}) considered the two-dimensional dissipative KdV type equation \[ \partial_tu+\partial_x^3u+L_{x,y}u+\partial_x(u^2)=0,\ \ t>0,\ (x,y)\in {\BBB R}^{2}, \] where the operator $L_{x,y}$ is defined by \[ \mathcal{F}_{xy}[L_{x,y}f](\xi,\eta)=-\Phi (\xi, \eta)\widehat{f}(\xi,\eta) \] and the leading term of $\Phi(\xi, \eta)$ is $-(|\xi|^{p_1}+|\eta|^{p_2})$ with $p_1$, $p_2>0$. They obtained the well-posedness of this equation with $p_2>1$ in $H^{s,0}({\BBB R}^2)$ for $s>-3/4$. They also considered the higher-dimensional cases and obtained more general results. To the best of our knowledge, there are no results on the well-posedness of (\ref{ZKB}). However, the initial-boundary value problem for the ZKB equation has been studied by Larkin (\cite{Lar_arx}, \cite{Lar15}). Now, we give the main results of this paper. To begin with, we rewrite (\ref{ZKB}) in the symmetric form based on \cite{GH14}. We put \[ v(t,x,y)=4u(16t,2(x+y),2\sqrt{3}^{-1}(x-y)).
\] Then, (\ref{ZKB}) can be rewritten as \begin{equation}\label{ZKB_sym} \begin{cases} \displaystyle \partial_{t}v+(\partial_x^3+\partial_y^3)v-(\partial_x+\partial_y)^2v=(\partial_x+\partial_y)(v^2),\\ v(0,x,y)=v_0(x,y):=4u_{0}(2(x+y),2\sqrt{3}^{-1}(x-y)). \end{cases} \end{equation} We note that the well-posedness of (\ref{ZKB}) in $H^s({\BBB R}^2)$ is equivalent to the well-posedness of (\ref{ZKB_sym}) in $H^s({\BBB R}^2)$. Therefore, we consider (\ref{ZKB_sym}) instead of (\ref{ZKB}). \begin{thm}\label{LWP} \ Let $s>-\frac{1}{2}$. Then (\ref{ZKB_sym}) is locally well-posed in $H^s({\BBB R}^2)$. (Therefore, (\ref{ZKB}) is also locally well-posed in $H^s({\BBB R}^2)$.) More precisely, for any $v_0\in H^s({\BBB R}^2)$, there exist $T>0$ and a unique solution $v\in X^{s,\frac{1}{2},1}_T\ (\hookrightarrow C([0,T];H^s({\BBB R}^2)))$\ $($see Definition~\ref{FRN}$)$ to (\ref{ZKB_sym}) on $[0,T]$. Furthermore, the data-to-solution map is Lipschitz continuous from $H^s({\BBB R}^2)$ to $C([0,T];H^s({\BBB R}^2))$. \end{thm} \begin{thm}\label{GWP} Let $s>-\frac{1}{2}$. For any $v_0\in \widetilde{H}^{s}({\BBB R}^2)$, the solution $v$ obtained in Theorem~\ref{LWP} can be extended globally in time, and $v$ belongs to $C((0,\infty );\widetilde{H}^{\infty}({\BBB R}^2))$, where $\widetilde{H}^s({\BBB R}^2)$ is the completion of the Schwartz class $\mathcal{S} ({\BBB R}^2)$ with the norm $\|f\|_{\widetilde{H}^s}=\|\langle \xi +\eta \rangle^s \widehat{f}(\xi,\eta)\|_{L^2_{\xi\eta}}$, and $\widetilde{H}^{\infty}({\BBB R}^2)=\bigcap_{s\in {\BBB R}}\widetilde{H}^{s}({\BBB R}^2)$. \end{thm} \begin{rem} {\rm (i)}\ Although (\ref{ZKB}) does not have a dissipative term with respect to $y$, the well-posedness of (\ref{ZKB}) is obtained in the isotropic Sobolev space $H^{s}({\BBB R}^2)$ for lower regularity than both (\ref{ZK}) and (\ref{parab}). \\ {\rm (ii)}\ Theorem~\ref{GWP} says that (\ref{ZKB}) is globally well-posed in $H^{s,0}({\BBB R}^2)$ for $s>-\frac{1}{2}$.
\end{rem} To obtain Theorem~\ref{LWP}, we have to treat the dissipative term carefully, because the symbol $(\xi+\eta)^2$ vanishes on the line $\{(\xi, -\xi)|\xi\in {\BBB R}\}$. However, the nonlinear term also vanishes on the same line, which helps us to obtain the key bilinear estimate (Proposition~\ref{bilin_est}). We will use the iteration argument with the Fourier restriction norm to obtain the local well-posedness. Meanwhile, the global well-posedness will be proved by using the smoothing effect of the dissipative term and the fact that the $L^2$-norm of the solution is non-increasing. \text{} \\ \noindent {\bf Notation.} We denote the spatial Fourier transform by\ \ $\widehat{\cdot}$\ \ or $\mathcal{F}_{xy}$, the Fourier transform in time by $\mathcal{F}_{t}$, and the Fourier transform in all variables by\ \ $\widetilde{\cdot}$\ \ or $\mathcal{F}$. The operators $U(t)=e^{-t(\partial_x^3+\partial_y^3)}$ and $W(t)=e^{|t|(\partial_x+\partial_y)^2}e^{-t(\partial_x^3+\partial_y^3)}$ on $H^{s}({\BBB R}^2)$ are given as Fourier multipliers \[ \mathcal{F}_{xy}[U(t)f](\xi,\eta)=e^{it(\xi^3+\eta^3)}\widehat{f}(\xi ,\eta ),\ \ \mathcal{F}_{xy}[W(t)f](\xi,\eta)=e^{-|t|(\xi+\eta)^2}e^{it(\xi^3+\eta^3)}\widehat{f}(\xi ,\eta ). \] $U(t)$ and $W(t)$ give solutions to \[ \partial_t u+(\partial_x^3+\partial_y^3)u=0 \] and \[ \partial_t u+(\partial_x^3+\partial_y^3)u-{\rm sgn}(t)(\partial_x+\partial_y)^2u=0, \] respectively. We note that $\mathcal{F}[U(-\cdot )F(\cdot )](\tau ,\xi ,\eta )=\widetilde{F}(\tau +\xi^3+\eta^3,\xi,\eta )$. We will use $A\lesssim B$ to denote an estimate of the form $A \le CB$ for some constant $C$, and write $A \sim B$ to mean $A \lesssim B$ and $B \lesssim A$. We will use the convention that capital letters denote dyadic numbers, e.g.
$N=2^{n}$ for $n\in {\BBB Z}$, and for a dyadic summation we write $\sum_{N}a_{N}:=\sum_{n\in {\BBB Z}}a_{2^{n}}$, $\sum_{N\geq N'}a_{N}:=\sum_{n\in {\BBB Z}, 2^{n}\geq N'}a_{2^{n}}$, and $\sum_{N\leq N'}a_{N}:=\sum_{n\in {\BBB Z}, 2^{n}\leq N'}a_{2^{n}}$ for brevity. Let $\chi \in C^{\infty}_{0}((-2,2))$ be an even, non-negative function such that $\chi (t)=1$ for $|t|\leq 1$. We define $\varphi (t):=\chi (t)-\chi (2t)$ and $\varphi_{N}(t):=\varphi (N^{-1}t)$. Then, $\sum_{N}\varphi_{N}(t)=1$ whenever $t\neq 0$. We define the projections \[ \begin{split} &\widehat{P_{N}u}(\xi ,\eta):=\varphi_{N}(|(\xi,\eta )| )\widehat{u}(\xi,\eta),\ \widehat{P_{N,M}u}(\xi ,\eta):=\varphi_{N,M}(\xi ,\eta )\widehat{u}(\xi ,\eta),\\ &\widetilde{Q_{L}u}(\tau ,\xi ,\eta):=\varphi_{L}(\tau -\xi^3-\eta^3)\widetilde{u}(\tau ,\xi ,\eta), \end{split} \] where $\varphi_{N,M}(\xi ,\eta):=\varphi_{N}(|(\xi ,\eta )|)\varphi_M(\xi+\eta )$. The rest of this paper is organized as follows. In Section 2, we give the definition of the solution space and prove the linear estimates. In Section 3, we prove the bilinear estimate, which is the main part of this paper. In Section 4, we give the proof of the well-posedness (Theorems~\ref{LWP} and~\ref{GWP}). \section{Function space and linear estimate} In this section, we define the function space and prove the estimates for the linear solution and the Duhamel term. First, we consider the standard Fourier restriction norm $\|\cdot \|_{X^{s,b}}$ for (\ref{ZKB_sym}) defined by \[ \|u\|_{X^{s,b}}=\|\langle |(\xi ,\eta )|\rangle^s\langle (\xi +\eta )^2+i(\tau -\xi^3-\eta^3)\rangle^b\widetilde{u}(\tau ,\xi ,\eta )\|_{L^2_{\tau \xi \eta}}. \] Such a Fourier restriction norm was introduced by J. Bourgain (\cite{Bo93}) for the nonlinear Schr\"odinger equation and the KdV equation. Let $\psi \in C^{\infty}({\BBB R} )$ denote a cut-off function such that $\operatorname{supp} \psi \subset [-2,2]$, $\psi =1$ on $[-1,1]$.
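The telescoping property $\sum_{N}\varphi_{N}(t)=1$ for $t\neq 0$ can be checked numerically with any concrete admissible $\chi$; below we use one particular smooth bump built from $e^{-1/x}$ (this specific $\chi$ is merely an illustrative choice):

```python
import math

def g(x):
    """Building block for a smooth cutoff: e^{-1/x} for x > 0, else 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def chi(t):
    """Smooth, even cutoff: equals 1 on |t| <= 1 and 0 on |t| >= 2."""
    x = abs(t)
    num, den = g(2 - x), g(2 - x) + g(x - 1)
    return num / den if den > 0 else 0.0

def phi(t):
    # varphi(t) = chi(t) - chi(2t), as in the text
    return chi(t) - chi(2 * t)

# the dyadic sum telescopes: sum_n phi(t/2^n) = chi(t/2^K) - chi(2^{K+1} t) -> 1
for t in (0.3, 1.0, 7.5, -42.0):
    total = sum(phi(t / 2**n) for n in range(-30, 31))
    assert abs(total - 1.0) < 1e-12, (t, total)
print("partition of unity verified")
```

Only finitely many terms are nonzero for each fixed $t$, since $\varphi$ is supported in the annulus $1/2\le |t|\le 2$.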
We note that the estimate \[ \|\psi (t)W(t)u_0\|_{X^{s,b}}\lesssim \|\langle |(\xi ,\eta )|\rangle^{s}\langle \xi +\eta \rangle^{b-\frac{1}{2}} \widehat{u_0}(\xi ,\eta )\|_{L^2_{\xi \eta}} \] holds. Therefore, if $b\le 1/2$, then $\psi W(\cdot )u_0\in X^{s,b}$ for $u_0\in H^s$. However, the embedding $X^{s,b}\hookrightarrow C({\BBB R};H^s({\BBB R}^2))$ does not hold for $b\le 1/2$. Therefore, we use the Besov-type Fourier restriction norm defined as follows. \begin{defn}\label{FRN} Let $s\in {\BBB R}$, $b\in {\BBB R}$. \\ (i)\ We define the function space $X^{s,b,1}$ as the completion of the Schwartz class ${\mathcal S}({\BBB R}_{t}\times {\BBB R}^2_{x,y})$ with the norm \[ \|u\|_{X^{s,b,1}}=\left\{\sum_{N\in 2^{{\BBB Z}}}\sum_{M\in 2^{{\BBB Z}}}\left(\sum_{L\in 2^{{\BBB Z}}}\langle N\rangle^s\langle M^2+L\rangle^{b}\|P_{N,M}Q_{L}u\|_{L^2_{txy}}\right)^2\right\}^{\frac{1}{2}}. \] (ii)\ For $T>0$, we define the time localized space $X^{s,b,1}_T$ as \[ X^{s,b,1}_{T}=\{u|_{[0,T]}|u\in X^{s,b,1}\} \] with the norm \[ \|u\|_{X^{s,b,1}_T}=\inf \{\|v\|_{X^{s,b,1}}|v\in X^{s,b,1},\ v|_{[0,T]}=u|_{[0,T]}\}. \] \end{defn} \begin{rem} (i)\ The embedding $X^{s,\frac{1}{2},1}_T\hookrightarrow C([0,T];H^s({\BBB R}^2))$ holds. \\ (ii)\ The size of $|\xi +\eta |$, which comes from the symbol of the dissipative term of (\ref{ZKB_sym}), is not determined by the size of $|(\xi ,\eta )|$. Therefore, to exploit the dissipative effect precisely, we localize not only $|(\xi ,\eta )|\sim N$ but also $|\xi +\eta |\sim M$. This is a point of difference from the 1D case. \\ (iii)\ We can assume $\sum_{M\in 2^{{\BBB Z}}}=\sum_{M\lesssim N}$ since $|\xi +\eta|\lesssim |(\xi ,\eta )|$ holds. \end{rem} We choose $X^{s,\frac{1}{2},1}_T$ as the solution space.
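The Duhamel operators ${\mathcal K}$ and ${\mathcal L}$ introduced below rest on an elementary scalar fact: at a fixed frequency with dissipation rate $\zeta^2$ and time oscillation $\tau$, the kernel $y(t)=(e^{it\tau}-e^{-t\zeta^2})/(\zeta^2+i\tau)$ solves $y'+\zeta^2 y=e^{it\tau}$ with $y(0)=0$ for $t\ge 0$. A symbolic sketch of this check (our notation):

```python
import sympy as sp

t, tau, zeta = sp.symbols('t tau zeta', real=True)

# kernel of K at one fixed frequency: dissipation rate zeta^2, oscillation tau
y = (sp.exp(sp.I*t*tau) - sp.exp(-t*zeta**2)) / (zeta**2 + sp.I*tau)

# y solves y' + zeta^2 * y = e^{i t tau} with y(0) = 0 (for t >= 0)
residual = sp.simplify(sp.diff(y, t) + zeta**2 * y - sp.exp(sp.I*t*tau))
print(residual, y.subs(t, 0))  # both are 0
```

This is exactly the factor appearing in the definition of ${\mathcal K}$, and it is the reason why ${\mathcal L}F(t)=\int_0^t W(t-t')F(t')dt'$ for $t\ge 0$.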
Now, we define the operators ${\mathcal K}$ and ${\mathcal L}$ by \[ \begin{split} &{\mathcal K}F(t)(\xi,\eta ):=\int_{{\BBB R}}\frac{e^{it\tau}-e^{-|t|(\xi +\eta )^2}}{(\xi +\eta)^2+i\tau}\mathcal{F}[U(-\cdot )F(\cdot )](\tau ,\xi ,\eta )d\tau\\ &{\mathcal L}F(t):=U(t)\int_{{\BBB R}^2}e^{ix\xi}e^{iy\eta}{\mathcal K}F(t)(\xi,\eta )d\xi d\eta =U(t)\mathcal{F}_{x,y}^{-1}[{\mathcal K}F(t)]. \end{split} \] Then, we note that \[ {\mathcal L}F(t)=\int_0^t W(t-t')F(t')dt' \] holds for $t\ge 0$, and the integral form of (\ref{ZKB_sym}) on $[0,\infty)$ is given by \begin{equation}\label{ZKB_sym_int} \begin{split} v(t)&=W(t)v_0+\int_0^tW(t-t')(\partial_x+\partial_y)(v(t')^2)dt'\\ &=W(t)v_0+{\mathcal L}((\partial_x+\partial_y)(v^2))(t). \end{split} \end{equation} \begin{prop}\label{lin_est} Let $s\in {\BBB R}$. There exists $C_1>0$ such that for any $u_0\in H^s({\BBB R}^2)$, we have \[ \|\psi (t)W(t)u_0\|_{X^{s,\frac{1}{2},1}}\le C_1\|u_0\|_{H^s}. \] \end{prop} \begin{proof} Since \[ \left(\sum_N\sum_M\langle N\rangle^{2s}\|P_{N,M}u_0\|_{L^2_{xy}}^2\right)^{\frac{1}{2}}\sim \|u_0\|_{H^s} \] holds, it suffices to prove \[ \sum_{L}\langle M^2+L\rangle^{\frac{1}{2}}\|P_{N,M}Q_{L}(\psi (t)W(t)u_0)\|_{L^2_{txy}}\lesssim \|P_{N,M}u_0\|_{L^2_{xy}} \] for each $N$, $M\in 2^{{\BBB Z}}$. By using Plancherel's theorem, we have \[ \begin{split} &\|P_{N,M}Q_{L}(\psi (t)W(t)u_0)\|_{L^2_{txy}}\\ &\sim \|\varphi_{N,M}(\xi, \eta )\varphi_L(\tau )\mathcal{F}_t[\psi (t)e^{-|t|(\xi +\eta )^2}]\widehat{u_0}(\xi ,\eta )\|_{L^2_{\xi \eta t}}\\ &\lesssim \|P_{N,M}u_0\|_{L^2_{xy}} \|\phi_M(\xi +\eta )\varphi_L(\tau )\mathcal{F}_t[\psi (t)e^{-|t|(\xi +\eta )^2}]\|_{L^{\infty}_{\xi \eta}L^2_t}\\ &=\|P_{N,M}u_0\|_{L^2_{xy}} \|\phi_M(\zeta )\varphi_L(\tau )\mathcal{F}_t[\psi (t)e^{-|t|\zeta^2}]\|_{L^{\infty}_{\zeta}L^2_t}, \end{split} \] where $\phi_M=\varphi_{2M}+\varphi_M+\varphi_{\frac{M}{2}}$ and we used $\varphi_M=\varphi_M\phi_M$.
Therefore, it suffices to prove \begin{equation}\label{exp_besov} \sum_{L}\langle M^2+L\rangle^{\frac{1}{2}}\|\phi_M(\zeta)\varphi_L(\tau )\mathcal{F}_t[\psi (t)e^{-|t|\zeta^2}]\|_{L^{\infty}_{\zeta}L^2_{\tau}} \lesssim 1. \end{equation} This is obtained in the proof of Proposition\ 4.1 in \cite{MV}. \end{proof} \begin{prop}\label{duam_est} Let $s\in {\BBB R}$. There exists $C_2>0$ such that for any $F\in X^{s,-\frac{1}{2},1}$, we have \[ \left\|\psi (t){\mathcal L}F(t)\right\|_{X^{s,\frac{1}{2},1}}\le C_2\|F\|_{X^{s,-\frac{1}{2},1}}. \] \end{prop} \begin{proof} We use the argument in the proof of Lemma 4.1 in \cite{MV}. Since \[ \|P_{N,M}Q_L(\psi (t){\mathcal L}F(t))\|_{L^2_{txy}} \sim \|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[\psi {\mathcal K}F](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}, \] it suffices to show that \begin{equation}\label{Duamel__est_pf} \begin{split} &\sum_L\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[\psi {\mathcal K}F](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}[U(-\cdot )F(\cdot )](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}. \end{split} \end{equation} We put $w(t)=U(-t)F(t)$ and split $\psi{\mathcal K}F$ into $K_{1}+K_{2}+K_{3}-K_{4}$, where \[ \begin{split} K_{1}(t,\xi,\eta)&=\psi(t)\int_{|\tau |\le 1}\frac{e^{it\tau}-1}{(\xi +\eta)^2+i\tau}\widetilde{w}(\tau ,\xi ,\eta )d\tau,\\ K_{2}(t,\xi,\eta)&=\psi(t)\int_{|\tau |\le 1}\frac{1-e^{-|t|(\xi+\eta)^2}}{(\xi +\eta)^2+i\tau}\widetilde{w}(\tau ,\xi ,\eta )d\tau,\\ K_{3}(t,\xi,\eta)&=\psi(t)\int_{|\tau |\ge 1}\frac{e^{it\tau}}{(\xi +\eta)^2+i\tau}\widetilde{w}(\tau ,\xi ,\eta )d\tau,\\ K_{4}(t,\xi,\eta)&=\psi(t)\int_{|\tau |\ge 1}\frac{e^{-|t|(\xi+\eta)^2}}{(\xi +\eta)^2+i\tau}\widetilde{w}(\tau ,\xi ,\eta )d\tau. \end{split} \] Furthermore, we put $w_{N,M}=P_{N,M}w$.
We note that $\widetilde{w}_{N,M}(\tau,\xi,\eta)=\phi_M(\xi+\eta)\widetilde{w}_{N,M}(\tau,\xi,\eta)$ since $\varphi_M=\varphi_M\phi_M$.\\ \text{} \\ \underline{Estimate for $K_1$} By using the Taylor expansion, we have \[ \begin{split} &\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_1](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_{n=1}^{\infty}\frac{1}{n!}\left\|\left(\int_{|\tau |\le 1}\frac{|\tau |^n|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau \right) \|\varphi_L(\tau)\mathcal{F}_t[t^n\psi (t)](\tau)\|_{L^2_\tau}\right\|_{L^2_{\xi\eta}}. \end{split} \] By the Cauchy-Schwarz inequality, we obtain \[ \begin{split} &\int_{|\tau |\le 1}\frac{|\tau |^n|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau \\ &\lesssim \left(\int_{|\tau |\le 1}\frac{|\tau |^2\langle(\xi+\eta)^2+|\tau |\rangle}{((\xi+\eta)^2+|\tau |)^2}|\phi_M(\xi +\eta)|^2d\tau\right)^{\frac{1}{2}} \left(\int_{|\tau |\le 1}\frac{|\widetilde{w}_{N,M}(\tau ,\xi ,\eta )|^2}{ \langle(\xi+\eta)^2+|\tau |\rangle}d\tau\right)^{\frac{1}{2}}\\ &\lesssim \langle M\rangle^{-1} \sum_{L}\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau, \xi, \eta)\|_{L^2_{\tau}} \end{split} \] for $n\ge 1$. Therefore, we get \[ \begin{split} &\sum_L\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_1](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_{n=1}^{\infty}\frac{1}{n!} \||t|^n\psi \|_{B^{\frac{1}{2}}_{2,1}} \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}} \end{split} \] since $\langle M^2+L\rangle^{\frac{1}{2}}\langle M\rangle^{-1}\lesssim \langle L\rangle^{\frac{1}{2}}$.
\\ \text{} \\ \underline{Estimate for $K_2$} By Plancherel's theorem, we have \[ \begin{split} &\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_2](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \left\|\left(\int_{|\tau |\le 1}\frac{|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau \right) \|\phi_M(\xi+\eta)\varphi_L(\tau)\mathcal{F}_t[\psi (t)(1-e^{-|t|(\xi+\eta)^2})](\tau)\|_{L^2_\tau}\right\|_{L^2_{\xi\eta}}. \end{split} \] By the Cauchy-Schwarz inequality, we obtain \[ \begin{split} &\int_{|\tau |\le 1}\frac{|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau \\ &\lesssim \left(\int_{|\tau |\le 1}\frac{\langle(\xi+\eta)^2+|\tau |\rangle}{((\xi+\eta)^2+|\tau |)^2}|\phi_M(\xi +\eta)|^2d\tau\right)^{\frac{1}{2}} \left(\int_{|\tau |\le 1}\frac{|\widetilde{w}_{N,M}(\tau ,\xi ,\eta )|^2}{ \langle(\xi+\eta)^2+|\tau |\rangle}d\tau\right)^{\frac{1}{2}}\\ &\lesssim M^{-2}\langle M\rangle \sum_{L}\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau, \xi, \eta)\|_{L^2_{\tau}}. \end{split} \] Therefore, if $M\ge 1$, then we get \[ \begin{split} &\sum_L\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_2](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}} \end{split} \] by (\ref{exp_besov}) and \[ \sum_{L}\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_L(\tau )\mathcal{F}_t[\psi ](\tau)\|_{L^2_{\tau}} \lesssim M\|\psi\|_{B^{\frac{1}{2}}_{2,1}}\lesssim M.
\] Meanwhile, if $M\le 1$, then by using the Taylor expansion, we have \[ \begin{split} &\|\phi_M(\xi+\eta)\varphi_L(\tau)\mathcal{F}_t[\psi (t)(1-e^{-|t|(\xi+\eta)^2})](\tau)\|_{L^2_\tau}\\ &\lesssim \sum_{n=1}^{\infty}\frac{(\xi +\eta )^{2n}}{n!} \phi_M(\xi+\eta)\|\varphi_L(\tau )\mathcal{F}_t[\psi (t)|t|^n](\tau )\|_{L^2_{\tau}}\\ &\lesssim M^2\sum_{n=1}^{\infty}\frac{1}{n!}\|\varphi_L(\tau )\mathcal{F}_t[\psi (t)|t|^n](\tau )\|_{L^2_{\tau}}. \end{split} \] Therefore, we get \[ \begin{split} &\sum_L\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_2](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_{n=1}^{\infty}\frac{1}{n!}\||t|^n\psi\|_{B^{\frac{1}{2}}_{2,1}}\sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}. \end{split} \] \\ \underline{Estimate for $K_3$} We put $g_{N,M}(t)=\mathcal{F}_t^{-1}[\mbox{\boldmath $1$}_{|\tau |\ge 1}((\xi +\eta)^2+i\tau)^{-1}\widetilde{w}_{N,M}(\tau ,\xi ,\eta )](t)$. Then, we have \[ \begin{split} |\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_3](\tau)| &\sim |\varphi_L(\tau)\left(\mathcal{F}_t[\psi]*_{\tau}\mathcal{F}_t[g_{N,M}](\tau )\right)|\\ &\lesssim \sum_{L_1}\sum_{L_2}|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )|. \end{split} \] \\ (i)\ Summation for $L_1\ll L$\ (then $L_2\sim L$).
By the Young inequality, we have \[ \begin{split} &\|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\tau}}\\ &\lesssim \|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^1_{\tau}}\|\varphi_{L_2}(\tau )\mathcal{F}_t[g_{N,M}](\tau )\|_{L^2_{\tau}}\\ &\lesssim \|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^1_{\tau}} \langle M^2+L_2\rangle^{-1}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau )\|_{L^2_{\tau}}. \end{split} \] Therefore, we obtain \[ \begin{split} &\sum_{L}\langle M^2+L\rangle^{\frac{1}{2}}\sum_{L_1\ll L}\sum_{L_2\sim L} \|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \left(\sum_{L_1}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^1_{\tau}}\right) \left(\sum_{L_2}\langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}}\right)\\ &\lesssim \sum_{L_2}\langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}} \end{split} \] since \[ \sum_{L_1}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^1_{\tau}} \lesssim \sum_{L_1}L_1^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\lesssim \|\psi\|_{B^{\frac{1}{2}}_{2,1}}\lesssim 1. \] \\ (ii)\ Summation for $L\lesssim M^2$, $L_1\gtrsim L$. By the H\"older inequality and the Young inequality, we have \[ \begin{split} &\|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\tau}}\\ &\lesssim \|\varphi_L\|_{L^2_{\tau}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\|\varphi_{L_2}(\tau )\mathcal{F}_t[g_{N,M}](\tau )\|_{L^2_{\tau}}\\ &\lesssim L^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\langle M^2+L_2\rangle^{-1}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau )\|_{L^2_{\tau}}.
\end{split} \] Therefore, we obtain \[ \begin{split} &\sum_{L\lesssim M^2}\langle M^2+L\rangle^{\frac{1}{2}}\sum_{L_1\gtrsim L}\sum_{L_2} \|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \langle M\rangle \left(\sum_{L_1}L_1^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\right) \left(\sum_{L_2}\langle M^2+L_2\rangle^{-1}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}}\right)\\ &\lesssim \sum_{L_2}\langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}} \end{split} \] since $\langle M\rangle \lesssim \langle M^2+L_2\rangle^{\frac{1}{2}}$ and \[ \sum_{L_1}L_1^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\lesssim \|\psi\|_{B^{\frac{1}{2}}_{2,1}}\lesssim 1. \] \\ (iii)\ Summation for $L_1\gtrsim L\gtrsim M^2$. By the Young inequality and the Cauchy-Schwarz inequality, we have \[ \begin{split} &\|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\tau}}\\ &\lesssim \|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\|\varphi_{L_2}(\tau )\mathcal{F}_t[g_{N,M}](\tau )\|_{L^1_{\tau}}\\ &\lesssim \|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}} \langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau )\|_{L^2_{\tau}}.
\end{split} \] Therefore, we obtain \[ \begin{split} &\sum_{L\gtrsim M^2}\langle M^2+L\rangle^{\frac{1}{2}}\sum_{L_1\gtrsim L}\sum_{L_2} \|\varphi_L(\tau )(\varphi_{L_1}\mathcal{F}_t[\psi])*_{\tau}(\varphi_{L_2}\mathcal{F}_t[g_{N,M}])(\tau )\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \left(\sum_{L_1}\langle L_1\rangle^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}}\right) \left(\sum_{L_2}\langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}}\right)\\ &\lesssim \sum_{L_2}\langle M^2+L_2\rangle^{-\frac{1}{2}}\|\varphi_{L_2}(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta )\|_{L^2_{\xi\eta\tau}} \end{split} \] since \[ \sum_{L_1}\langle L_1\rangle^{\frac{1}{2}}\|\varphi_{L_1}(\tau )\mathcal{F}_t[\psi](\tau )\|_{L^2_{\tau}} \lesssim \|\psi \|_{B^{\frac{1}{2}}_{2,1}}\lesssim 1. \] \\ \underline{Estimate for $K_4$} By Plancherel's theorem, we have \[ \begin{split} &\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_4](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \left\|\left(\int_{|\tau |\ge 1}\frac{|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau \right) \|\phi_M(\xi+\eta)\varphi_L(\tau)\mathcal{F}_t[\psi (t)e^{-|t|(\xi+\eta)^2}](\tau)\|_{L^2_\tau}\right\|_{L^2_{\xi\eta}}. \end{split} \] By the Cauchy-Schwarz inequality, we obtain \[ \begin{split} \int_{|\tau |\ge 1}\frac{|\widetilde{w}_{N,M}(\tau,\xi,\eta)|}{(\xi+\eta)^2+|\tau|}d\tau &\lesssim \sum_L\langle M^2+L\rangle^{-1}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^1_{\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\tau}}. 
\end{split} \] Therefore, by (\ref{exp_besov}), we get \[ \begin{split} &\sum_L\langle M^2+L\rangle^{\frac{1}{2}}\|\varphi_{N,M}(\xi,\eta)\varphi_L(\tau )\mathcal{F}_t[K_4](\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}\\ &\lesssim \sum_L\langle M^2+L\rangle^{-\frac{1}{2}}\|\varphi_L(\tau )\widetilde{w}_{N,M}(\tau,\xi,\eta)\|_{L^2_{\xi\eta\tau}}. \end{split} \] \end{proof} \section{Bilinear estimate} In this section, we prove the following estimate for the nonlinear term. \begin{prop}\label{bilin_est} Let $s\ge s_0>-\frac{1}{2}$. There exist $0<\delta \ll 1$ and $C_3>0$, such that for any $u$, $v\in X^{s,\frac{1-\delta}{2},1}$, we have \[ \|(\partial_x+\partial_y)(uv)\|_{X^{s,-\frac{1}{2},1}} \le C_3\|u\|_{X^{s,\frac{1-\delta}{2},1}}\|v\|_{X^{s,\frac{1-\delta}{2},1}}. \] \end{prop} To prove Proposition~\ref{bilin_est}, we first give some Strichartz estimates. \begin{prop}\label{Stri} Let $(p,q)\in {\BBB R}^2$ satisfy $p\ge 3$ and $\frac{3}{p}+\frac{2}{q}=1$. For any $u_0\in L^2({\BBB R}^2)$, we have \[ \|U(t)u_0\|_{L^p_tL^q_{xy}}\lesssim \|u_0\|_{L^2_{xy}}. \] \end{prop} Proposition~\ref{Stri} is obtained by using the variable transform $(x,y)\mapsto (4^{-\frac{1}{3}}(x+\sqrt{3}y), 4^{-\frac{1}{3}}(x-\sqrt{3}y))$ in Proposition\ 2.4 in \cite{LP09}. \begin{prop}\label{mod_Stri} For any $u_0\in L^2({\BBB R}^2)$, we have \[ \|D_x^{\frac{1}{8}}D_y^{\frac{1}{8}}U(t)u_0\|_{L^4_{txy}}\lesssim \|u_0\|_{L^2_{xy}}, \] where $D_x^s=\mathcal{F}_{xy}^{-1}|\xi |^s\mathcal{F}_{xy}$, $D_y^s=\mathcal{F}_{xy}^{-1}|\eta |^s\mathcal{F}_{xy}$ for $s\in {\BBB R}$. \end{prop} Proposition~\ref{mod_Stri} is obtained by applying $\Omega (\xi ,\eta )=\xi^3+\eta^3$ in Corollary\ 3.4 in \cite{MP15}. By using the same argument as in Lemma 2.3 in \cite{GTV97}, we obtain the following estimates from Proposition~\ref{Stri} and Proposition~\ref{mod_Stri}. \begin{cor} Let $(p,q)\in {\BBB R}^2$ satisfy $p\ge 3$ and $\frac{3}{p}+\frac{2}{q}=1$. 
For $N$, $L\in 2^{{\BBB Z}}$, we have \begin{equation}\label{Stri_FR} \|P_NQ_Lu\|_{L^p_tL^q_{xy}}\lesssim L^{\frac{1}{2}}\|P_NQ_Lu\|_{L^2_{txy}}. \end{equation} Furthermore, if $\mathcal{F}_{xy}[P_Nu]$ is supported in $\{(\xi,\eta)|\ |\xi |\sim |\eta |\}$, then we have \begin{equation}\label{mStri_FR} \|P_NQ_Lu\|_{L^4_{txy}}\lesssim N^{-\frac{1}{4}}L^{\frac{1}{2}}\|P_NQ_Lu\|_{L^2_{txy}}. \end{equation} \end{cor} To get a positive power of $M$, we give the following estimates. \begin{cor} Let $0<\delta \ll 1$, $0<\epsilon <1-\delta$. For $N$, $M$, $L\in 2^{{\BBB Z}}$, we have \begin{equation}\label{LP_Stri} \|P_{N,M}Q_Lu\|_{L^{\frac{4}{1+\delta}}_{txy}}\lesssim (NM)^{\frac{\epsilon}{4}}L^{\frac{5(1-\delta)}{12}-\frac{\epsilon}{6}}\|P_{N,M}Q_Lu\|_{L^2_{txy}}. \end{equation} Furthermore, if $\mathcal{F}_{xy}[P_Nu]$ is supported in $\{(\xi,\eta)|\ |\xi |\sim |\eta |\}$, then we have \begin{equation}\label{mLP_Stri} \|P_{N,M}Q_Lu\|_{L^{\frac{4}{1+\delta}}_{txy}}\lesssim (NM)^{\frac{\epsilon}{4}}N^{-\frac{1}{4}(1-\delta-\epsilon)} L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}\|P_{N,M}Q_Lu\|_{L^2_{txy}}. \end{equation} \end{cor} \begin{comment} The following is the $L^p$-Strichartz estimate \begin{cor} Let $2\le p\le 5$. For $N$, $L\in 2^{{\BBB Z}}$, we have \begin{equation}\label{LP_Stri} \|P_NQ_Lu\|_{L^p_{txy}}\lesssim L^{\frac{5}{6}(1-\frac{2}{p})}\|P_NQ_Lu\|_{L^2_{txy}}. \end{equation} Furthermore, if $2\le p\le 4$ and $\mathcal{F}_{xy}[P_Nu]$ is supported in $\{(\xi,\eta)|\ |\xi |\sim |\eta |\}$, then we have \begin{equation}\label{mLP_Stri} \|P_NQ_Lu\|_{L^p_{txy}}\lesssim N^{-\frac{1}{4}(1-\frac{2}{p})}L^{1-\frac{2}{p}}\|P_NQ_Lu\|_{L^2_{txy}}. \end{equation} \end{cor} \end{comment} \begin{proof} By (\ref{Stri_FR}) with $p=q=5$, we have the $L^5$-Strichartz estimate \begin{equation}\label{L5_Stri} \|P_{N,M}Q_Lu\|_{L^{5}_{txy}}\lesssim L^{\frac{1}{2}}\|P_{N,M}Q_Lu\|_{L^2_{txy}}. 
\end{equation} By the interpolation between (\ref{L5_Stri}) and a trivial equality $\|P_{N,M}Q_Lu\|_{L^2_{txy}}=L^0\|P_{N,M}Q_Lu\|_{L^2_{txy}}$, we have \begin{equation}\label{L4-_Stri} \|P_{N,M}Q_Lu\|_{L^{\frac{4-2\epsilon}{1+\delta}}_{txy}}\lesssim L^{\frac{5(1-\delta-\epsilon)}{6(2-\epsilon)}}\|P_{N,M}Q_Lu\|_{L^2_{txy}}. \end{equation} Meanwhile, by the Cauchy-Schwarz inequality, we obtain \[ \begin{split} \|P_{N,M}Q_Lu\|_{L^{\infty}_{xy}} &\le \int_{\substack{|(\xi,\eta)|\sim N\\ |\xi+\eta|\sim M}}|\mathcal{F}_{xy}[P_{N,M}Q_Lu](\xi,\eta )|d\xi d\eta\\ &\lesssim (NM)^{\frac{1}{2}}\|P_{N,M}Q_Lu\|_{L^{2}_{xy}}. \end{split} \] Therefore, by using (\ref{Stri_FR}) with $(p,q)=(\infty ,2)$, we have \begin{equation}\label{inf_Stri} \|P_{N,M}Q_Lu\|_{L^{\infty}_{txy}}\lesssim (NML)^{\frac{1}{2}}\|P_{N,M}Q_Lu\|_{L^{2}_{txy}}. \end{equation} By the interpolation between (\ref{L4-_Stri}) and (\ref{inf_Stri}), we obtain (\ref{LP_Stri}). By using (\ref{mStri_FR}) instead of (\ref{L5_Stri}) in the above argument, we also get (\ref{mLP_Stri}). \end{proof} Next, we give the bilinear Strichartz estimates. \begin{prop}\label{BSE} Let $R_{K}^{(j)}$ ($j=1,2$) denote the bilinear operator defined by \[ \begin{split} \mathcal{F}_{xy}[R_K^{(1)}(u_1,u_2)](\xi, \eta )&=\int \varphi_{K}(\xi_1^2-(\xi -\xi_1)^2) \widehat{u_1}(\xi_1,\eta_1)\widehat{u_2}(\xi -\xi_1,\eta -\eta_1)d\xi_1d\eta_1,\\ \mathcal{F}_{xy}[R_K^{(2)}(u_1,u_2)](\xi, \eta )&=\int \varphi_{K}(\eta_1^2-(\eta -\eta_1)^2) \widehat{u_1}(\xi_1,\eta_1)\widehat{u_2}(\xi -\xi_1,\eta -\eta_1)d\xi_1d\eta_1. \end{split} \] For $N_1$, $N_2$, $L_1$, $L_2$, $K\in 2^{{\BBB Z}}$ with $N_1\ge N_2$, and $j\in \{1,2\}$, we have \begin{equation}\label{BSE_1} \begin{split} &\|R_K^{(j)}(P_{N_1}Q_{L_1}u_1, P_{N_2}Q_{L_2}u_2)\|_{L^2_{txy}}\\ &\lesssim K^{-\frac{1}{2}}N_2^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}}\|P_{N_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2}Q_{L_2}u_2\|_{L^2_{txy}}. 
\end{split} \end{equation} \end{prop} \begin{proof} We only prove the case $j=1$, because the case $j=2$ can be proved in the same way. We put $f_i=\mathcal{F}[P_{N_i}Q_{L_i}u_i]$, $\zeta_i=(\xi_i,\eta_i)$ $(i=1,2)$. By the duality argument, it suffices to show that \begin{equation}\label{BSE_pf_1} \begin{split} &\left|\int_{\Omega}f_1(\tau_1,\zeta_1)f_2(\tau_2,\zeta_2)f(\tau_1+\tau_2,\zeta_1+\zeta_2)d\tau_1d\tau_2d\zeta_1d\zeta_2\right|\\ &\lesssim K^{-\frac{1}{2}}N_2^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}}\|f_1\|_{L^2_{\tau \xi \eta}}\|f_2\|_{L^2_{\tau \xi \eta}}\|f\|_{L^2_{\tau \xi \eta}} \end{split} \end{equation} for any $f\in L^2({\BBB R}\times {\BBB R}^2)$, where \[ \Omega =\{(\tau_1,\tau_2,\zeta_1,\zeta_2)|\ |\zeta_i|\sim N_i,\ |\tau_i-\xi_i^3-\eta_i^3|\sim L_i\ (i=1,2),\ |\xi_1^2-\xi_2^2|\sim K\}. \] By the Cauchy-Schwarz inequality, we have \begin{equation}\label{BSE_pf_2} \begin{split} &\left|\int_{\Omega}f_1(\tau_1,\zeta_1)f_2(\tau_2,\zeta_2)f(\tau_1+\tau_2,\zeta_1+\zeta_2)d\tau_1d\tau_2d\zeta_1d\zeta_2\right|\\ &\lesssim \|f_1\|_{L^2_{\tau\xi\eta}}\|f_2\|_{L^2_{\tau\xi\eta}} \left(\int_{\Omega}|f(\tau_1+\tau_2,\zeta_1+\zeta_2)|^2d\tau_1d\tau_2d\zeta_1d\zeta_2\right)^{\frac{1}{2}}. 
\end{split} \end{equation} By applying the variable transform $(\tau_1,\tau_2)\mapsto (\theta_1,\theta_2)$ and $(\zeta_1,\zeta_2)\mapsto (\mu,w,z,\nu)$ as \[ \begin{split} &\theta_i=\tau_i-\xi_i^3-\eta_i^3\ \ (i=1,2),\\ &\mu =\theta_1+\theta_2+\xi_1^3+\xi_2^3+\eta_1^3+\eta_2^3,\ w=\xi_1+\xi_2,\ z=\eta_1+\eta_2,\ \nu =\eta_2, \end{split} \] we have \[ \begin{split} &\int_{\Omega}|f(\tau_1+\tau_2,\zeta_1+\zeta_2)|^2d\tau_1d\tau_2d\zeta_1d\zeta_2\\ &\lesssim \int_{\substack{|\theta_1|\sim L_1\\ |\theta_2|\sim L_2}}\left(\int_{|\nu |\lesssim N_2}|f(\mu,w,z)|^2 \mbox{\boldmath $1$}_{\{|\xi_1^2-\xi_2^2|\sim K\}}(\xi_1,\xi_2)J(\zeta_1,\zeta_2)^{-1}d\mu dwdzd\nu \right)d\theta_1d\theta_2 , \end{split} \] where \[ J(\zeta_1,\zeta_2) =\left|{\rm det}\frac{\partial (\mu ,w,z,\nu )}{\partial (\xi_1,\eta_1,\xi_2,\eta_2)}\right| =3|\xi_1^2-\xi_2^2|. \] Therefore, we obtain \begin{equation}\label{BSE_pf_3} \int_{\Omega}|f(\tau_1+\tau_2,\zeta_1+\zeta_2)|^2d\tau_1d\tau_2d\zeta_1d\zeta_2 \lesssim K^{-1}N_2L_1L_2\|f\|_{L^2_{\tau\xi\eta}}^2. \end{equation} As a result, we get (\ref{BSE_pf_1}) from (\ref{BSE_pf_2}) and (\ref{BSE_pf_3}). \end{proof} \begin{rem} In particular, if $N_1\gg N_2$, then we have \begin{equation}\label{BSE_4} \begin{split} &\|P_{N_1}Q_{L_1}u_1\cdot P_{N_2}Q_{L_2}u_2\|_{L^{2}_{txy}}\\ &\lesssim N_1^{-1}N_2^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}}\|P_{N_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2}Q_{L_2}u_2\|_{L^2_{txy}} \end{split} \end{equation} since the equality \[ P_{N_1}Q_{L_1}u_1\cdot P_{N_2}Q_{L_2}u_2 =R_K^{(j)}(P_{N_1}Q_{L_1}u_1, P_{N_2}Q_{L_2}u_2) \] with $K\sim N_1^2$ holds for $j=1$ or $2$. \end{rem} \begin{cor} Let $0< \delta \ll 1$, $0<\epsilon <1-\delta$. 
For $N_1$, $N_2$, $M_1$, $M_2$, $L_1$, $L_2\in 2^{{\BBB Z}}$ with $N_1\gg N_2$, we have \begin{equation}\label{BSE_3} \begin{split} &\|P_{N_1,M_1}Q_{L_1}u_1\cdot P_{N_2,M_2}Q_{L_2}u_2\|_{L^{\frac{2}{1+\delta}}_{txy}}\\ &\lesssim J_{\delta,\epsilon} (L_1L_2)^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}\|P_{N_1,M_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2,M_2}Q_{L_2}u_2\|_{L^2_{txy}}, \end{split} \end{equation} where \[ J_{\delta,\epsilon}=J_{\delta,\epsilon}(N_1,M_1,N_2,M_2)= (N_1M_1N_2M_2)^{\frac{\epsilon}{4}} (N_1^{-1}N_2^\frac{1}{2})^{1-\delta-\epsilon}. \] \end{cor} \begin{comment} \begin{cor} Let $R_{K}^{(j)}$ ($j=1,2$) denote the bilinear operator defined in Proposition~\ref{BSE}. For $N_1$, $N_2$, $L_1$, $L_2$, $K\in 2^{{\BBB Z}}$, $0< \delta \le 1$, and $j=1$, $2$ , we have \begin{equation}\label{BSE_3} \begin{split} &\|R_K^{(j)}(P_{N_1}Q_{L_1}u_1\cdot P_{N_2}Q_{L_2}u_2)\|_{L^p_{txy}}\\ &\lesssim (K^{-\frac{1}{2}}\min\{N_1,N_2\}^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}})^{2(1-\frac{1}{p})}\|P_{N_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2}Q_{L_2}u_2\|_{L^2_{txy}}. \end{split} \end{equation} for $1\le p\le 2$. In particular, if $N_1\gg N_2$, we have \begin{equation}\label{BSE_4} \begin{split} &\|P_{N_1}Q_{L_1}u_1\cdot P_{N_2}Q_{L_2}u_2\|_{L^p_{txy}}\\ &\lesssim (N_1^{-1}N_2^{\frac{1}{2}}L_1^{\frac{1}{2}}L_2^{\frac{1}{2}})^{2(1-\frac{1}{p})}\| P_{N_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2}Q_{L_2}u_2\|_{L^2_{txy}}. \end{split} \end{equation} \end{cor} \end{comment} \begin{proof} By the H\"older inequality and (\ref{inf_Stri}), we have \[ \begin{split} &\|P_{N_1,M_1}Q_{L_1}u_1\cdot P_{N_2,M_2}Q_{L_2}u_2\|_{L^{2}_{txy}}\\ &\lesssim \|P_{N_1,M_1}Q_{L_1}u_1\|_{L^{\infty}_{txy}}^{\frac{1}{2}} \|P_{N_2,M_2}Q_{L_2}u_2\|_{L^{2}_{txy}}^{\frac{1}{2}} \|P_{N_1,M_1}Q_{L_1}u_1\|_{L^{2}_{txy}}^{\frac{1}{2}} \|P_{N_2,M_2}Q_{L_2}u_2\|_{L^{\infty}_{txy}}^{\frac{1}{2}}\\ &\lesssim (N_1M_1L_1N_2M_2L_2)^{\frac{1}{4}} \|P_{N_1,M_1}Q_{L_1}u_1\|_{L^{2}_{txy}}\|P_{N_2,M_2}Q_{L_2}u_2\|_{L^{2}_{txy}}. 
\end{split} \] By the interpolation between this estimate and (\ref{BSE_4}), we obtain \begin{equation}\label{BSE_delta} \begin{split} &\|P_{N_1,M_1}Q_{L_1}u_1\cdot P_{N_2,M_2}Q_{L_2}u_2\|_{L^{2}_{txy}}\\ &\lesssim J_{\delta,\epsilon}^{\frac{1}{1-\delta}} (L_1L_2)^{\frac{1}{2}-\frac{\epsilon}{4(1-\delta )}}\|P_{N_1,M_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2,M_2}Q_{L_2}u_2\|_{L^2_{txy}}. \end{split} \end{equation} Meanwhile, by the Cauchy-Schwarz inequality, we have \begin{equation}\label{L1_bilin} \|P_{N_1,M_1}Q_{L_1}u_1\cdot P_{N_2,M_2}Q_{L_2}u_2\|_{L^1_{txy}} \lesssim \|P_{N_1,M_1}Q_{L_1}u_1\|_{L^2_{txy}}\|P_{N_2,M_2}Q_{L_2}u_2\|_{L^2_{txy}}. \end{equation} By the interpolation between (\ref{BSE_delta}) and (\ref{L1_bilin}), we obtain (\ref{BSE_3}). \end{proof} We now prove Proposition~\ref{bilin_est}. \begin{proof}[Proof of Proposition~\ref{bilin_est}] By using the embedding $l^1\hookrightarrow l^2$ for the summation $\sum_N\sum_M$, and the duality argument, we have \[ \begin{split} &\|(\partial_x+\partial_y)(uv)\|_{X^{s,-\frac{1}{2},1}}\\ &\lesssim \sum_{N_1,M_1,L_1}\sum_{N_2,M_2,L_2}\left(\sum_{N,M,L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\right.\\ &\hspace{14ex}\times \left. \sup_{\|w\|_{L^2}=1}\left|\int P_{N_1,M_1}Q_{L_1}u\cdot P_{N_2,M_2}Q_{L_2}v\cdot P_{N,M}Q_Lwdtdxdy\right|\right). \end{split} \] We put \[ u_{N_1,M_1,L_1}=P_{N_1,M_1}Q_{L_1}u,\ v_{N_2,M_2,L_2}=P_{N_2,M_2}Q_{L_2}v,\ w_{N,M,L}=P_{N,M}Q_Lw, \] \[ f_{N_1,M_1,L_1}=\langle N_1\rangle^s\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}}u_{N_1,M_1,L_1},\ g_{N_2,M_2,L_2}=\langle N_2\rangle^s\langle M_2^2+L_2\rangle^{\frac{1-\delta}{2}}v_{N_2,M_2,L_2}, \] for $0<\delta \ll 1$ and \[ I=\left|\int u_{N_1,M_1,L_1}\cdot v_{N_2,M_2,L_2}\cdot w_{N,M,L}dtdxdy\right|. 
\] We note that $L_1^{b}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}} \lesssim \langle N_1\rangle^{-s}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}$ and $L_2^{b}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}} \lesssim \langle N_2\rangle^{-s}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}$ hold for $b\le \frac{1-\delta}{2}$ since $L_i\lesssim \langle M_i^2+L_i\rangle$ $(i=1,2)$. By the symmetry, we can assume $N_1\gtrsim N_2$. We first consider the case $1\ge N_1\gtrsim N_2$. We note that \begin{equation}\label{L2p_Stri} \|P_{N,M}Q_Lu\|_{L^{\frac{2}{1-\delta}}_{txy}} \lesssim L^{\frac{5}{6}\delta}\|P_{N,M}Q_Lu\|_{L^2_{txy}} \end{equation} holds by the interpolation between (\ref{L5_Stri}) and a trivial equality $\|P_{N,M}Q_Lu\|_{L^2_{txy}}=L^0\|P_{N,M}Q_Lu\|_{L^2_{txy}}$. By the H\"older inequality, (\ref{LP_Stri}), and (\ref{L2p_Stri}), we have \[ \begin{split} I&\lesssim \|u_{N_1,M_1,L_1}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|v_{N_2,M_2,L_2}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|w_{N,M,L}\|_{L^{\frac{2}{1-\delta}}_{txy}}\\ &\lesssim (N_1M_1N_2M_2)^{\frac{\epsilon}{2}} L^{\frac{5}{6}\delta}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}} \end{split} \] since $\langle N_i\rangle^{s}\sim 1$ $(i=1,2)$ for any $s\in {\BBB R}$. 
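We remark that the exponent $\frac{5}{6}\delta$ in (\ref{L2p_Stri}) can be checked directly: choosing the interpolation weight $\theta$ so that
\[
\frac{1-\delta}{2}=\frac{\theta}{5}+\frac{1-\theta}{2},
\quad \text{that is,}\quad \theta =\frac{5}{3}\delta ,
\]
the interpolation between (\ref{L5_Stri}) and the trivial equality produces the factor $\bigl(L^{\frac{1}{2}}\bigr)^{\theta}=L^{\frac{5}{6}\delta}$.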
Therefore, we obtain \begin{equation}\label{low_freq_ineq} \begin{split} &\sum_{N\lesssim 1}\sum_{M\lesssim N}\sum_{L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim (N_1M_1N_2M_2)^{\frac{\epsilon}{2}}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \end{split} \end{equation} since \[ \sum_{L}\frac{L^{\frac{5}{6}\delta}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim \sum_{L\lesssim \langle M\rangle^2}\frac{L^{\frac{5}{6}\delta}}{\langle M\rangle} +\sum_{L\gtrsim \langle M\rangle^2}L^{-(\frac{1}{2}-\frac{5}{6}\delta)} \lesssim \langle M\rangle^{-(1-\frac{5}{3}\delta)}\lesssim 1 \] and \[ \sum_{N\lesssim 1}\sum_{M\lesssim N} \langle N\rangle^sM \sim \sum_{N\lesssim 1}\sum_{M\lesssim N}M \lesssim \sum_{N\lesssim 1}N\lesssim 1 \] for any $s\in {\BBB R}$. By using (\ref{low_freq_ineq}) and the Cauchy-Schwarz inequality for the summations $\sum_{N_1,M_1\lesssim 1}$ and $\sum_{N_2,M_2\lesssim 1}$, we have \[ \begin{split} &\sum_{N_1,M_1\lesssim 1}\sum_{L_1}\sum_{N_2,M_2\lesssim 1}\sum_{L_2} \left(\sum_{N,M,L}\frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\right) \lesssim \|u\|_{X^{s,\frac{1-\delta}{2},1}}\|v\|_{X^{s,\frac{1-\delta}{2},1}} \end{split} \] for any $s\in {\BBB R}$. Next, we consider the case $N_1\gtrsim N_2$, $N_1\ge 1$. It suffices to show that \begin{equation}\label{bilin_pf} \begin{split} &\sum_{N,M,L}\frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}} \sup_{\|w\|_{L^2}=1}I\\ &\lesssim N_{1}^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \end{split} \end{equation} for small $\epsilon >0$. 
Indeed, (\ref{bilin_pf}) and the Cauchy-Schwarz inequality for the summations $\sum_{N_1,M_1}$ and $\sum_{N_2,M_2}$ imply \[ \begin{split} &\sum_{\substack{N_1,M_1,L_1\\ N_1\ge 1}}\sum_{\substack{N_2,M_2,L_2\\ N_2\lesssim N_1}} \left(\sum_{N,M,L}\frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\right)\\ &\lesssim \left(\sum_{N_1\ge 1}\sum_{M_1\lesssim N_1} \sum_{N_2\lesssim N_1}\sum_{M_2\lesssim N_2} N_1^{-2\epsilon}(M_1M_2)^{\frac{\epsilon}{2}}\right)^{\frac{1}{2}}\\ &\ \ \ \ \times \left\{\sum_{N_1}\sum_{M_1}\left(\sum_{L_1}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\right)^2\right\}^{\frac{1}{2}} \left\{\sum_{N_2}\sum_{M_2}\left(\sum_{L_2}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\right)^2\right\}^{\frac{1}{2}}\\ &\lesssim \|u\|_{X^{s,\frac{1-\delta}{2},1}}\|v\|_{X^{s,\frac{1-\delta}{2},1}}. \end{split} \] Now, we prove (\ref{bilin_pf}). \\ \text{} \\ \underline{Case\ 1:\ $N_1\sim N_2\gg N$,\ $N_1\ge 1$.} We note that $M\lesssim \max\{M_1,M_2\}$ since $\xi +\eta =(\xi_1+\eta_1)+(\xi-\xi_1+\eta -\eta_1)$. By the symmetry, we can assume $M\lesssim M_1$. By the H\"older inequality, we have \[ I\lesssim \|u_{N_1,M_1,L_1}\|_{L^{\frac{2}{1-\delta}}_{txy}} \|v_{N_2,M_2,L_2}\cdot w_{N,M,L}\|_{L^{\frac{2}{1+\delta}}_{txy}}. 
\] Furthermore, we have \[ \|u_{N_1,M_1,L_1}\|_{L^{\frac{2}{1-\delta}}_{txy}} \lesssim L_1^{\frac{5}{6}\delta}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}} \lesssim \frac{N_1^{-s}}{\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}-\frac{5}{6}\delta}}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}} \] by (\ref{L2p_Stri}), and we have \[ \begin{split} &\|v_{N_2,M_2,L_2}\cdot w_{N,M,L}\|_{L^{\frac{2}{1+\delta}}_{txy}}\\ &\lesssim J_{\delta,\epsilon}(N_2,M_2,N,M)(L_2L)^{\frac{1-\delta}{2}-\frac{\epsilon}{4}} \|v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}\\ &\lesssim (M_1M_2)^{\frac{\epsilon}{4}}N_2^{-s-1+\delta+\frac{5}{4}\epsilon} N^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}} \end{split} \] by (\ref{BSE_3}) and $M\lesssim M_1$. Therefore, if we choose $\epsilon >0$ as $\epsilon =\frac{10}{3}\delta$, we obtain \[ \begin{split} &\sum_{N\ll N_1}\sum_{M\lesssim M_1}\sum_{L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim (M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s}N_2^{-s-1+\delta+\frac{5}{4}\epsilon} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\\ &\hspace{10ex}\times \left(\sum_{N\ll N_1}\langle N\rangle^sN^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}\sum_{M\lesssim M_1} \frac{M}{\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}-\frac{5}{6}\delta}} \sum_{L}\frac{L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}}{\langle M^2+L\rangle^{\frac{1}{2}}}\right)\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s-\frac{1}{2}+\frac{\delta}{2}+2\epsilon}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \end{split} \] for $s\ge -\frac{1-\delta}{2}+\frac{\epsilon}{4}$ since \[ \sum_{M\lesssim M_1} \frac{M}{\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}-\frac{5}{6}\delta}} \sum_{L}\frac{L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim \sum_{M\lesssim M_1} \frac{M^{1-\delta-\frac{\epsilon}{2}}}{\langle 
M_1\rangle^{1-\frac{8}{3}\delta}} \lesssim 1. \] As a result, we get (\ref{bilin_pf}) for $s> -\frac{1}{2}$ if we choose $\delta >0$ as $0<\delta <\frac{6}{43}\left(s+\frac{1}{2}\right)$. \\ \\ \underline{Case\ 2:\ $N\sim N_1\gg N_2$,\ $N_1\ge 1$. } By the H\"older inequality, we have \[ I\lesssim \|u_{N_1,M_1,L_1}\cdot v_{N_2,M_2,L_2}\|_{L^\frac{2}{1+\delta}_{txy}} \|w_{N,M,L}\|_{L^\frac{2}{1-\delta}_{txy}}. \] Furthermore, we have \[ \|w_{N,M,L}\|_{L^\frac{2}{1-\delta}_{txy}} \lesssim L^{\frac{5}{6}\delta}\|w_{N,M,L}\|_{L^2_{txy}} \] by (\ref{L2p_Stri}), and we have \[ \begin{split} &\|u_{N_1,M_1,L_1}\cdot v_{N_2,M_2,L_2}\|_{L^\frac{2}{1+\delta}_{txy}}\\ &\lesssim J_{\delta,\epsilon}(N_1,M_1,N_2,M_2)(L_1L_2)^{\frac{1-\delta}{2}-\frac{\epsilon}{4}} \|u_{N_1,M_1,L_1}\|_{L^2_{txy}}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}}\\ &\lesssim (M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s-1+\delta+\frac{5}{4}\epsilon} \langle N_2\rangle^{-s}N_2^{\frac{1-\delta}{2}-\frac{\epsilon}{4}} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \end{split} \] by (\ref{BSE_3}). Therefore, if $s\le \frac{1-\delta}{2}-\frac{\epsilon}{4}$, we obtain \[ \begin{split} &\sum_{N\sim N_1}\sum_{M\lesssim N}\sum_{L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim (M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s-\frac{1-\delta}{2}+\epsilon} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \left(\sum_{M\lesssim N_1}M \sum_{L}\frac{L^{\frac{5}{6}\delta}}{\langle M^2+L\rangle^{\frac{1}{2}}}\right)\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-s-\frac{1}{2}+\frac{13}{6}\delta+2\epsilon}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}, \end{split} \] since \[ \sum_{L}\frac{L^{\frac{5}{6}\delta}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim M^{-(1-\frac{5}{3}\delta)}. 
\] As a result, we get (\ref{bilin_pf}) for $\frac{1}{2}>s> -\frac{1}{2}$ if we choose $\delta >0$ and $\epsilon >0$ as $0<\epsilon <\frac{1}{2}(s+\frac{1}{2})$, $0<\delta <\min\left\{\frac{6}{13}\left(s+\frac{1}{2}-2\epsilon\right), 2\left(\frac{1}{2}-s-\frac{\epsilon}{2}\right)\right\}$. Meanwhile, if $s\ge \frac{1}{2}$, then we have \[ I\lesssim (M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s-\frac{1-\delta}{2}+\epsilon}L^{\frac{5}{6}\delta} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}} \] by the same argument, using $\langle N_2\rangle^{-s}\lesssim 1$. Therefore, we obtain \[ \begin{split} &\sum_{N\sim N_1}\sum_{M\lesssim N}\sum_{L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-\frac{1}{2}+\frac{13}{6}\delta+2\epsilon}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}, \end{split} \] which implies (\ref{bilin_pf}) since $-\frac{1}{2}+\frac{13}{6}\delta+2\epsilon<0$.\\ \\ \underline{Case\ 3:\ $N\sim N_1\sim N_2\ge 1$.} We can assume $M\lesssim M_1$ as in Case\ 1. We split $v_{N_2,M_2,L_2}$ and $w_{N,M,L}$ into \[ v_{N_2,M_2,L_2}=\sum_{i=1}^3R_{i}v_{N_2,M_2,L_2},\ \ w_{N,M,L}=\sum_{j=1}^3R_{j}w_{N,M,L}. \] We put \[ I_{i,j}=\left|\int u_{N_1,M_1,L_1}\cdot R_{i}v_{N_2,M_2,L_2}\cdot R_{j}w_{N,M,L}dtdxdy\right|, \] where $R_i$ $(i=1,2,3)$ are projections given by \[ \mathcal{F}_{xy}[R_1f]=\mbox{\boldmath $1$}_{\{|\xi |\gg |\eta |\}}\widehat{f},\ \mathcal{F}_{xy}[R_2f]=\mbox{\boldmath $1$}_{\{|\xi |\sim |\eta |\}}\widehat{f},\ \mathcal{F}_{xy}[R_3f]=\mbox{\boldmath $1$}_{\{|\xi |\ll |\eta |\}}\widehat{f}. \] We note that $\mathcal{F}_{xy}[w_{N,M,L}]$ is supported in at least one of $\{(\xi,\eta)|\ |\xi|\sim N\}$ or $\{(\xi,\eta)|\ |\eta |\sim N\}$. By the symmetry, we can assume $\operatorname{supp} \mathcal{F}_{xy}[w_{N,M,L}]\subset \{(\xi,\eta)|\ |\xi|\sim N\}$. 
Then, it suffices to show the estimate for $I_{i,j}$ with $i=1,2,3$, $j=1,2$. \\ \\ \underline{Estimate for $I_{1,1}$} In this case, we note that $N\sim N_1\sim N_2\sim M\sim M_1\sim M_2$ and \[ |\xi \xi_1\xi_2+\eta \eta_1\eta_2|\sim |\xi \xi_1 \xi_2|\sim N_1^3 \] for $(\xi_1,\eta_1)\in \operatorname{supp} \mathcal{F}_{xy}[u_{N_1,M_1,L_1}]$, $(\xi_2,\eta_2)\in \operatorname{supp} \mathcal{F}_{xy}[v_{N_2,M_2,L_2}]$ with $\xi_1+\xi_2=\xi$, $\eta_1+\eta_2=\eta$. This implies \[ \max\{L_1,L_2,L\}\gtrsim N_1^3 \] since \[ |(\tau_1-\xi_1^3-\eta_1^3)+(\tau_2-\xi_2^3-\eta_2^3)-(\tau-\xi^3-\eta^3)| =3|\xi\xi_1\xi_2+\eta\eta_1\eta_2| \] holds for $(\tau_i,\xi_i,\eta_i)$ $(i=1,2)$ with $(\tau,\xi,\eta)=(\tau_1+\tau_2,\xi_1+\xi_2,\eta_1+\eta_2)$.\\ \text{} \\ (i)\ For the case $L\gtrsim N_1^3$. By the H\"older inequality, (\ref{LP_Stri}), and (\ref{L2p_Stri}), we have \[ \begin{split} I&\lesssim \|u_{N_1,M_1,L_1}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|v_{N_2,M_2,L_2}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|w_{N,M,L}\|_{L^{\frac{2}{1-\delta}}_{txy}}\\ &\lesssim (N_1M_1N_2M_2)^{\frac{\epsilon}{4}} (L_1L_2)^{\frac{5(1-\delta)}{12}-\frac{\epsilon}{6}}L^{\frac{5}{6}\delta}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}}N_1^{-2s+\frac{3}{2}\epsilon}L^{\frac{5}{6}\delta}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}. 
\end{split} \] Therefore, we obtain \[ \begin{split} &\sum_{N\sim N_1}\sum_{M\lesssim N}\sum_{L\gtrsim N_1^3} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-s+\frac{3}{2}\epsilon} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \left(\sum_{M\lesssim N_1}M \sum_{L\gtrsim N_1^3}\frac{L^{\frac{5}{6}\delta}}{\langle M^2+L\rangle^{\frac{1}{2}}}\right)\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}}N_1^{-s-\frac{1}{2}+\frac{5}{2}\delta+\frac{3}{2}\epsilon}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}, \end{split} \] since \[ \sum_{L\gtrsim N_1^3}\frac{L^{\frac{5}{6}\delta}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim \sum_{L\gtrsim N_1^3}L^{-(\frac{1}{2}-\frac{5}{6}\delta)} \lesssim N_1^{-\frac{3}{2}+\frac{5}{2}\delta}. \] As a result, we get (\ref{bilin_pf}) for $s> -\frac{1}{2}$ if we choose $\delta >0$ and $\epsilon >0$ as $0<\epsilon <\frac{2}{3}(s+\frac{1}{2})$, $0<\delta <\frac{2}{5}\left(s+\frac{1}{2}-\frac{3}{2}\epsilon\right)$. 
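Indeed, with this choice of $\epsilon$ and $\delta$, the exponent of $N_1$ in the last display satisfies
\[
-s-\frac{1}{2}+\frac{5}{2}\delta+\frac{3}{2}\epsilon
<-s-\frac{1}{2}+\left(s+\frac{1}{2}-\frac{3}{2}\epsilon\right)+\frac{3}{2}\epsilon =0,
\]
so the factor $N_1^{-s-\frac{1}{2}+\frac{5}{2}\delta+\frac{3}{2}\epsilon}$ is bounded, which gives (\ref{bilin_pf}).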
\\ \\ (ii)\ For the case $L_1\gtrsim N_1^3$. By the H\"older inequality, (\ref{L2p_Stri}), and (\ref{LP_Stri}), we have \[ \begin{split} I&\lesssim \|u_{N_1,M_1,L_1}\|_{L^{\frac{2}{1-\delta}}_{txy}} \|v_{N_2,M_2,L_2}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|w_{N,M,L}\|_{L^{\frac{4}{1+\delta}}_{txy}}\\ &\lesssim L_1^{\frac{5}{6}\delta}(N_2M_2NM)^{\frac{\epsilon}{4}} (L_2L)^{\frac{5(1-\delta)}{12}-\frac{\epsilon}{6}}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-2s-\frac{3}{2}+4\delta+\frac{3}{2}\epsilon} L^{\frac{5(1-\delta)}{12}-\frac{\epsilon}{6}}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}} \end{split} \] since $L_1^{\frac{5}{6}\delta}\langle M_1^2+L_1\rangle^{-\frac{1-\delta }{2}} \lesssim L_1^{-(\frac{1}{2}-\frac{4}{3}\delta)}\lesssim N_1^{-\frac{3}{2}+4\delta}$. Therefore, we obtain \[ \begin{split} &\sum_{N\sim N_1}\sum_{M\lesssim N}\sum_{L} \frac{\langle N\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-s-\frac{3}{2}+4\delta+\frac{3}{2}\epsilon} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \left(\sum_{M\lesssim N_1}M \sum_{L}\frac{L^{\frac{5(1-\delta )-2\epsilon}{12}}}{\langle M^2+L\rangle^{\frac{1}{2}}}\right)\\ &\lesssim N_1^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} N_1^{-s-\frac{2}{3}+\frac{19}{6}\delta+\frac{7}{6}\epsilon}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}, \end{split} \] since \[ \sum_{L}\frac{L^{\frac{5(1-\delta )-2\epsilon}{12}}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim M^{-\frac{1+5\delta+2\epsilon}{6}}. \] As a result, we get (\ref{bilin_pf}) for $s> -\frac{2}{3}$ if we choose $\delta >0$ and $\epsilon >0$ as $0<\epsilon <\frac{6}{7}(s+\frac{1}{2})$, $0<\delta <\frac{6}{19}\left(s+\frac{2}{3}-\frac{7}{6}\epsilon\right)$. 
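For clarity, we record how the exponent $-s-\frac{2}{3}+\frac{19}{6}\delta+\frac{7}{6}\epsilon$ in the second display above arises from the dyadic summation: since
\[
\sum_{M\lesssim N_1}M\cdot M^{-\frac{1+5\delta+2\epsilon}{6}}
\lesssim N_1^{\frac{5-5\delta-2\epsilon}{6}},
\]
combining this factor with $N_1^{-s-\frac{3}{2}+4\delta+\frac{3}{2}\epsilon}$ gives
\[
-s-\frac{3}{2}+4\delta+\frac{3}{2}\epsilon+\frac{5-5\delta-2\epsilon}{6}
=-s-\frac{2}{3}+\frac{19}{6}\delta+\frac{7}{6}\epsilon .
\]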
The case $L_2\gtrsim N^3$ is the same.\\ \\ \underline{Estimate for $I_{2,2}$} In this case, we have \[ |\xi_2|\sim |\eta_2|\sim N_2,\ |\xi|\sim |\eta|\sim N. \] By the H\"older inequality, (\ref{L2p_Stri}), (\ref{mLP_Stri}), and $M\lesssim M_1$, we have \[ \begin{split} I&\lesssim \|u_{N_1,M_1,L_1}\|_{L^{\frac{2}{1-\delta}}_{txy}} \|v_{N_2,M_2,L_2}\|_{L^{\frac{4}{1+\delta}}_{txy}} \|w_{N,M,L}\|_{L^{\frac{4}{1+\delta}}_{txy}}\\ &\lesssim L_1^{\frac{5}{6}\delta}(N_2M_2NM)^{\frac{\epsilon}{4}}(N_2N)^{-\frac{1-\delta-\epsilon}{4}} (L_2L)^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}\\ &\lesssim (M_1M_2)^{\frac{\epsilon}{4}} N_1^{-2s-\frac{1-\delta}{2}+\epsilon}\frac{L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}}{\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}-\frac{5}{6}\delta}}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}\|w_{N,M,L}\|_{L^2_{txy}}. \end{split} \] Therefore, we get (\ref{bilin_pf}) for $s> -\frac{1}{2}$ by the same argument as in Case 1. \\ \\ \underline{Estimate for $I_{1,2}$} In this case, we have \[ |\eta_2^2-\eta^2|\sim |\eta|^2 \sim N^2. \] Therefore, we obtain \[ \begin{split} &\|R_{1}v_{N_2,M_2,L_2}\cdot R_{2}w_{N,M,L}\|_{L^{2}_{txy}} \lesssim N^{-\frac{1}{2}}L_2^{\frac{1}{2}}L^{\frac{1}{2}}\|R_{1}v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|R_{2}w_{N,M,L}\|_{L^2_{txy}} \end{split} \] by (\ref{BSE_1}) since \[ R_{1}v_{N_2,M_2,L_2}\cdot R_{2}w_{N,M,L} =R_{K}^{(2)}(R_{1}v_{N_2,M_2,L_2}, R_{2}w_{N,M,L}) \] with $K\sim N^2$ holds. Meanwhile, by the Cauchy-Schwarz inequality, we have \[ \|R_{1}v_{N_2,M_2,L_2}\cdot R_{2}w_{N,M,L}\|_{L^{1}_{txy}} \lesssim \|R_{1}v_{N_2,M_2,L_2}\|_{L^2_{txy}}\|R_{2}w_{N,M,L}\|_{L^2_{txy}}. \] Therefore, we obtain a bilinear Strichartz estimate such as (\ref{BSE_3}) for the product $R_{1}v_{N_2,M_2,L_2}\cdot R_{2}w_{N,M,L}$, and we get (\ref{bilin_pf}) for $s>-\frac{1}{2}$ by the same argument as in Case 1 since $M\lesssim M_1$. 
The estimates for $I_{2,1}$, $I_{3,1}$, and $I_{3,2}$ are obtained in the same way. \end{proof} \begin{rem}\label{be_mod_rem} We can also obtain the bilinear estimate \[ \|(\partial_x+\partial_y)(uv)\|_{X^{s,-\frac{1}{2},1}} \le \frac{C_3}{2}\left(\|u\|_{X^{s,\frac{1-\delta}{2},1}}\|v\|_{X^{s_0,\frac{1-\delta}{2},1}} +\|u\|_{X^{s_0,\frac{1-\delta}{2},1}}\|v\|_{X^{s,\frac{1-\delta}{2},1}}\right) \] for $s\ge s_0>-\frac{1}{2}$ by using \[ \langle \xi\rangle^s\lesssim \langle \xi\rangle^{s_0} \left(\langle \xi_1\rangle^{s-s_0}+\langle \xi-\xi_1\rangle^{s-s_0}\right). \] \end{rem} \section{Proof of the well-posedness} In this section, we prove Theorems~\ref{LWP} and \ref{GWP}. For $T>0$ and $v_0\in H^s({\BBB R}^2)$, we define the map $\Phi_{T, v_0}$ as \[ \Phi_{T, v_0}(v)(t):=\psi(t)\left( W(t)v_0 +\int_0^tW(t-t')(\partial_x+\partial_y)(\psi_T(t')^2v(t')^2)dt' \right), \] where $\psi$ is the cut-off function defined in Section\ 2, and $\psi_T(t)=\psi\left(\frac{t}{T}\right)$. For $R>0$ and a Banach space $X$, we define $B_R(X):=\{u\in X|\ \|u\|_{X}\le R\}$. To obtain the well-posedness of (\ref{ZKB_sym}) in $H^s({\BBB R}^2)$, we prove that $\Phi_{T, v_0}$ is a contraction map on a closed subset of $X^{s,\frac{1}{2},1}$. \begin{lemm}\label{sol_loc} Let $0<T\le 1$, $0<\delta \le 1$. There exist $C_4>0$ and $\mu =\mu (\delta )>0$, such that for any $u\in X^{s,\frac{1}{2},1}$, we have \[ \|\psi_Tu\|_{X^{s,\frac{1-\delta}{2},1}} \le C_4T^{\mu}\|u\|_{X^{s,\frac{1}{2},1}}. \] \end{lemm} The proof of Lemma~\ref{sol_loc} is almost the same as the proofs of Lemmas~2.5 and 3.1 in \cite{GTV97}. \begin{proof}[Proof of Theorem~\ref{LWP}] Let $s\ge s_0>-\frac{1}{2}$ and $v_{0}\in H^s({\BBB R}^2)$ be given; $T\in (0,1]$ and $R>0$ will be chosen later. We define the function space $Z^s$ as \[ Z^{s}:=\{v\in X^{s,\frac{1}{2},1}|\ \|v\|_{Z^s}:=\|v\|_{X^{s_0,\frac{1}{2},1}}+\alpha \|v\|_{X^{s,\frac{1}{2},1}}<\infty\}, \] where $\alpha=\|v_0\|_{H^{s_0}}/\|v_0\|_{H^s}$. 
For $v$, $v_1$, $v_2\in B_R(Z^{s})$, we have \[ \begin{split} \|\Phi_{T,v_{0}}(v)\|_{Z^{s}} &\le C_1(1+\alpha)\|v_{0}\|_{H^{s_0}} +C_2C_3C_4^2T^{2\mu}\|v\|_{Z^{s}}^2\\ &\leq C_1(1+\alpha )\|v_0\|_{H^{s_0}}+C_2C_3C_4^2T^{2\mu}R^2 \end{split} \] and \[ \begin{split} \|\Phi_{T,v_{0}}(v_1)-\Phi_{T,v_{0}}(v_2)\|_{Z^{s}} &\leq C_2C_3C_4^2T^{2\mu}\|v_1+v_2\|_{Z^{s}} \|v_1-v_2\|_{Z^{s}}\\ &\leq C_2C_3C_4^2T^{2\mu}R\|v_1-v_2\|_{Z^{s}} \end{split} \] by Proposition~\ref{lin_est}, ~\ref{duam_est}, ~\ref{bilin_est}, Remark~\ref{be_mod_rem}, and Lemma~\ref{sol_loc}. Therefore, if we choose $T$, $R$ as \[ R=2C_1(1+\alpha )\|v_0\|_{H^{s_0}},\ 0<T^{2\mu}<(4C_1C_2C_3C_4^2(1+\alpha)\|v_0\|_{H^{s_0}})^{-1}, \] then $\Phi_{T,v_0}$ is a contraction map on $B_R(Z^{s})$. We note that $T=T(\|v_0\|_{H^{s_0}})$. By Banach's fixed point theorem, there exists a solution $v\in X^{s,\frac{1}{2},1}$ to $v(t)=\Phi_{T,v_0}(v)(t)$, and $v|_{[0,T]}\in X^{s,\frac{1}{2},1}_T$ satisfies (\ref{ZKB_sym_int}) on $[0,T]$. The Lipschitz continuous dependence on the initial data is obtained by a similar argument. The uniqueness is obtained by the same argument as in Section\ 4.2 of \cite{MR02}. \end{proof} Next, to prove the global well-posedness of (\ref{ZKB_sym}) in $\widetilde{H}^{s}({\BBB R}^2)$, we define the function space $\widetilde{X}^{s,b,1}$ as the completion of the Schwartz class ${\mathcal S}({\BBB R}_{t}\times {\BBB R}^2_{x,y})$ with the norm \[ \|u\|_{\widetilde{X}^{s,b,1}}=\left\{\sum_{N\in 2^{{\BBB Z}}}\sum_{M\in 2^{{\BBB Z}}}\left(\sum_{L\in 2^{{\BBB Z}}}\langle M\rangle^s\langle M^2+L\rangle^{b}\|P_{N,M}Q_{L}u\|_{L^2_{txy}}\right)^2\right\}^{\frac{1}{2}}. \] We also define $\widetilde{X}^{s,b,1}_T$ as the time-localized space of $\widetilde{X}^{s,b,1}$.
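The contraction estimates above feed into the standard Banach fixed-point argument. As a purely illustrative, dependency-free numerical sketch (a toy contraction on the real line, not the map $\Phi_{T,v_0}$ itself, which acts on the Bourgain spaces defined above), the following snippet iterates a map with contraction constant $q=1/2$ and checks the a priori error bound $|x_n-x^*|\le q^n(1-q)^{-1}|x_1-x_0|$ guaranteed by the fixed-point theorem.

```python
def iterate(phi, x0, n):
    """Picard iteration x_{k+1} = phi(x_k); returns the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(phi(xs[-1]))
    return xs

q = 0.5
phi = lambda x: q * x + 1.0        # contraction with fixed point x* = 2
xs = iterate(phi, 0.0, 30)
x_star = 2.0
err0 = abs(xs[1] - xs[0])
for n, x in enumerate(xs):
    # a priori bound from Banach's fixed-point theorem
    assert abs(x - x_star) <= q**n / (1.0 - q) * err0 + 1e-12
```

The same geometric-rate bound is what the choice of $T$ and $R$ above secures for the Picard iterates of $\Phi_{T,v_0}$.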
\begin{rem} We can see that $\widetilde{X}^{s,\frac{1}{2},1}_T\hookrightarrow L^2((0,T);\widetilde{H}^{s+1}({\BBB R}^2))$ since $\langle M\rangle^{s+1} \lesssim \langle M\rangle^{s}\langle M^2+L\rangle^{\frac{1}{2}}$ and $l^1_L\hookrightarrow l^2_L$ hold. \end{rem} \begin{prop}\label{lin_est_gwp} Let $s\in {\BBB R}$. There exists $C_1>0$, such that for any $u_0\in \widetilde{H}^s({\BBB R}^2)$, we have \[ \|\psi (t)W(t)u_0\|_{\widetilde{X}^{s,\frac{1}{2},1}}\le C_1\|u_0\|_{\widetilde{H}^s}. \] \end{prop} \begin{prop}\label{duam_est_gwp} Let $s\in {\BBB R}$. There exists $C_2>0$, such that for any $F\in \widetilde{X}^{s,-\frac{1}{2},1}$, we have \[ \left\|\psi (t){\mathcal L}F(t)\right\|_{\widetilde{X}^{s,\frac{1}{2},1}}\le C_2\|F\|_{\widetilde{X}^{s,-\frac{1}{2},1}}. \] \end{prop} The proofs of Propositions~\ref{lin_est_gwp} and \ref{duam_est_gwp} are the same as those of Propositions~\ref{lin_est} and \ref{duam_est}. \begin{prop}\label{bilin_est_gwp} Let $s>-\frac{1}{2}$. There exist $0<\delta \ll 1$ and $C_3>0$, such that for any $u$, $v\in \widetilde{X}^{s,\frac{1-\delta}{2},1}$, we have \[ \|(\partial_x+\partial_y)(uv)\|_{\widetilde{X}^{s,-\frac{1}{2},1}}\le C_3\|u\|_{\widetilde{X}^{s,\frac{1-\delta}{2},1}}\|v\|_{\widetilde{X}^{s,\frac{1-\delta}{2},1}}. \] \end{prop} The proof of Proposition~\ref{bilin_est_gwp} is similar to the proof of Proposition~\ref{bilin_est}. We give the proof at the end of this section. \begin{proof}[Proof of Theorem~\ref{GWP}] Let $s\ge s_0>-\frac{1}{2}$ be given. By Propositions~\ref{lin_est_gwp},~\ref{duam_est_gwp},~\ref{bilin_est_gwp}, and using the same argument as in the proof of Theorem~\ref{LWP}, we obtain the solution $v\in \widetilde{X}^{s,\frac{1}{2},1}_T$ to (\ref{ZKB_sym}) on $[0,T]$ with $T=T(\|v_0\|_{\widetilde{H}^{s_0}})$. Let $T'\in (0,T)$ be fixed.
Since $\widetilde{X}^{s,\frac{1}{2},1}_T\hookrightarrow L^2([0,T];\widetilde{H}^{s+1}({\BBB R}^2))$ holds, there exists $t_0\in (0,T')$ such that $v(t_0)\in \widetilde{H}^{s+1}({\BBB R}^2)$. Therefore, by choosing $v(t_0)$ as the initial data and using the uniqueness of the solution, we obtain $v(t_0+\cdot)\in \widetilde{X}^{s+1,\frac{1}{2},1}_{T-t_0}$. In particular, we have $v(T')\in \widetilde{H}^{s+1}({\BBB R}^2)$. By repeating this argument, we get $v(T')\in \widetilde{H}^{\infty}({\BBB R}^2)$. Since we can choose $T'>0$ arbitrarily small, $v$ belongs to $C((0,T];\widetilde{H}^{\infty}({\BBB R}^2))$. This allows us to take the $L^2$-scalar product of (\ref{ZKB_sym}) with $v$, and we have \[ \begin{split} \frac{1}{2}\frac{d}{dt}\|v(t)\|_{L^2_x}^2 &=(\partial_tv(t), v(t))_{L^2_x}\\ &=\left(-(\partial_x^3+\partial_y^3)v(t)+(\partial_x+\partial_y)^2v(t)+(\partial_x+\partial_y)(v(t)^2),v(t)\right)_{L^2_x}\\ &=-\|(\partial_x+\partial_y)v(t)\|_{L^2_x}^2\le 0 \end{split} \] for any $t\in (0,T)$, where the last equality follows by integration by parts: the dispersive term is skew-adjoint and the contribution of the nonlinear term vanishes, so only the dissipative term survives. Therefore, $\|v(t)\|_{L^2_x}$ is non-increasing, and we can extend the solution $v$ globally in time. \end{proof} \begin{rem} We note that the embedding $X^{s,\frac{1}{2},1}_T\hookrightarrow L^2([0,T];H^{s+1}({\BBB R}^2))$ does not hold. Therefore, we cannot use the above argument for initial data $v_0\in H^s({\BBB R}^2)$. \end{rem} Finally, we give the proof of Proposition~\ref{bilin_est_gwp}. \begin{proof}[Proof of Proposition~\ref{bilin_est_gwp}] We put \[ u_{N_1,M_1,L_1}=P_{N_1,M_1}Q_{L_1}u,\ v_{N_2,M_2,L_2}=P_{N_2,M_2}Q_{L_2}v,\ w_{N,M,L}=P_{N,M}Q_Lw, \] \[ f_{N_1,M_1,L_1}=\langle M_1\rangle^s\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}}u_{N_1,M_1,L_1},\ g_{N_2,M_2,L_2}=\langle M_2\rangle^s\langle M_2^2+L_2\rangle^{\frac{1-\delta}{2}}v_{N_2,M_2,L_2} \] for $0<\delta \ll 1$ and \[ I=\left|\int u_{N_1,M_1,L_1}\cdot v_{N_2,M_2,L_2}\cdot w_{N,M,L}dtdxdy\right|.
\] We use $L_1^{b}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}} \lesssim \langle M_1\rangle^{-s}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}$ and $L_2^{b}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}} \lesssim \langle M_2\rangle^{-s}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}$ instead of $L_1^{b}\|u_{N_1,M_1,L_1}\|_{L^2_{txy}} \lesssim \langle N_1\rangle^{-s}\|f_{N_1,M_1,L_1}\|_{L^2_{txy}}$ and $L_2^{b}\|v_{N_2,M_2,L_2}\|_{L^2_{txy}} \lesssim \langle N_2\rangle^{-s}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}}$ in the proof of Proposition~\ref{bilin_est}. By the same argument as in the proof of Proposition~\ref{bilin_est}, we have \[ \begin{split} &\sum_{N_1,M_1\lesssim 1}\sum_{L_1}\sum_{N_2,M_2\lesssim 1}\sum_{L_2} \left(\sum_{N,M,L}\frac{\langle M\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}}\sup_{\|w\|_{L^2}=1}I\right) \lesssim \|u\|_{\widetilde{X}^{s,\frac{1-\delta}{2},1}}\|v\|_{\widetilde{X}^{s,\frac{1-\delta}{2},1}} \end{split} \] for any $s\in {\BBB R}$, and it suffices to show that \begin{equation}\label{bilin_pf_gwp} \begin{split} &\sum_{N,M,L}\frac{\langle M\rangle^sM}{\langle M^2+L\rangle^{\frac{1}{2}}} \sup_{\|w\|_{L^2}=1}I\\ &\lesssim N_{1}^{-\epsilon}(M_1M_2)^{\frac{\epsilon}{4}} \|f_{N_1,M_1,L_1}\|_{L^2_{txy}}\|g_{N_2,M_2,L_2}\|_{L^2_{txy}} \end{split} \end{equation} for $N_1\ge N_2$, $N_1\ge 1$, and small $\epsilon >0$. \\ \\ \underline{Case\ 1':\ $N_1\sim N_2\gg N$} Only minor modifications to the proof of Proposition~\ref{bilin_est}, Case\ 1, are needed.
Since it holds that \[ \langle M_1\rangle^{-s} \sum_{M\lesssim M_1} \frac{\langle M\rangle^sM}{\langle M_1^2+L_1\rangle^{\frac{1-\delta}{2}-\frac{5}{6}\delta}} \sum_{L}\frac{L^{\frac{1-\delta}{2}-\frac{\epsilon}{4}}}{\langle M^2+L\rangle^{\frac{1}{2}}} \lesssim \sum_{M\lesssim M_1}\frac{M^{s+1-\delta-\frac{\epsilon}{2}}}{\langle M_1\rangle^{s+1-\frac{8}{3}\delta}} \lesssim 1 \] for $\epsilon =\frac{10}{3}\delta$, $s>-1+\frac{8}{3}\delta$, and \[ \langle M_2\rangle^{-s}\lesssim N_1^{-s} \] for $s<0$, we get (\ref{bilin_pf_gwp}) for $-\frac{1}{2}<s<0$ in the same way as in the proof of Proposition~\ref{bilin_est}, Case\ 1. \\ \\ \underline{Case\ 2':\ $N\sim N_1\gg N_2$} If $M\ge M_1$, then we have \[ \langle M\rangle^{s}\langle M_1\rangle^{-s}\langle M_2\rangle^{-s} \lesssim \langle M_2\rangle^{-s}\lesssim N_1^{-s} \] for $s<0$. Therefore, we get (\ref{bilin_pf_gwp}) for $-\frac{1}{2}<s<0$ in the same way as in the proof of Proposition~\ref{bilin_est}, Case\ 2. Meanwhile, if $M\le M_1$, then we have \[ J_{\delta,\epsilon}(N,M,N_2,M_2) \lesssim J_{\delta,\epsilon}(N_1,M_1,N_2,M_2). \] Therefore, by estimating \[ I\lesssim \|u_{N_1,M_1,L_1}\|_{L^\frac{2}{1-\delta}_{txy}} \|v_{N_2,M_2,L_2}\cdot w_{N,M,L}\|_{L^\frac{2}{1+\delta}_{txy}} \] instead of \[ I\lesssim \|u_{N_1,M_1,L_1}\cdot v_{N_2,M_2,L_2}\|_{L^\frac{2}{1+\delta}_{txy}} \|w_{N,M,L}\|_{L^\frac{2}{1-\delta}_{txy}} \] in the proof of Proposition~\ref{bilin_est}, Case\ 2, we get (\ref{bilin_pf_gwp}) for $-\frac{1}{2}<s<0$ by the same modification as in Case\ 1'. \\ \\ \underline{Case\ 3':\ $N\sim N_1\sim N_2\ge 1$} If ${\rm supp}\mathcal{F}_{x,y}[w_{N,M,L}]\subset \{(\xi, \eta)|\ |\xi|\gg |\eta|\ {\rm or}\ |\xi|\ll |\eta|\}$, then $M\sim N$ holds.
Therefore, we have \[ \langle M\rangle^{s}\langle M_1\rangle^{-s}\langle M_2\rangle^{-s} \lesssim \langle N\rangle^{s}\langle N_1\rangle^{-s}\langle N_2\rangle^{-s}\lesssim N_1^{-s} \] for $s<0$ and get (\ref{bilin_pf_gwp}) for $-\frac{1}{2}<s<0$ in the same way as in the proof of Proposition~\ref{bilin_est}, Case\ 3. We assume ${\rm supp}\mathcal{F}_{x,y}[w_{N,M,L}]\subset \{(\xi, \eta)|\ |\xi|\sim |\eta|\}$. It suffices to show the estimates for $I_{1,2}$ and $I_{2,2}$, which are defined in Proposition~\ref{bilin_est}, Case\ 3. By the same modification as in Case\ 1', we can obtain (\ref{bilin_pf_gwp}) for $-\frac{1}{2}<s<0$. \end{proof} \section*{Acknowledgements} This work is financially supported by JSPS KAKENHI Grant Number 17K14220 and the Program to Disseminate Tenure Tracking System from the Ministry of Education, Culture, Sports, Science and Technology. The author would like to express his appreciation to Shinya Kinoshita (Nagoya University) for his useful comments and discussions.
\section{Introduction} Most asteroseismic results have so far been obtained from photometric observations using a variety of space-borne instruments, such as WIRE \citep{2006MNRAS.371..935F}, MOST \citep{2003PASP..115.1023W}, COROT \citep{2009A&A...506..411A} and Kepler \citep{2010Sci...327..977B}. This will likely also be the case in the future with the launch of TESS \citep{2014SPIE.9143E..20R} and PLATO \citep{2014ExA....38..249R}. The main advantage of photometric observations is that it is possible to observe multiple stars simultaneously, as evidenced by the large number of main sequence stars for which results have been obtained \citep{2013ApJ...765L..41S,2014ApJS..210....1C,2017ApJ...835..172L,2017ApJ...850..110L}. However, Doppler velocity based observations have a superior signal-to-noise ratio. Unfortunately, it is difficult to observe more than one star at a time with a spectrograph-based instrument, and the required observing time (typically weeks to months at a high cadence and high duty cycle on a large telescope) is difficult to obtain; therefore only a few stars have been observed this way, as reviewed by \cite{2014aste.book...60B}. With the recent commissioning of the first SONG telescope \citep{2009CoAst.158..345G,2017ApJ...836..142G}, and the planned expansion of the SONG network, more stars will be observed in the future. Given this very limited resource and the fact that velocity observations are significantly different from photometric observations, it is essential that the maximum information is extracted from the observed stellar spectra and that the resulting power spectra are accurately modeled. When extracting the oscillation signal from the observed spectra, Doppler shift algorithms developed for radial velocity planet searches \citep[e.g.][]{2001A&A...374..733B} are often used, and it is thus implicitly assumed that the oscillation signal is well represented by a Doppler shift.
For fitting power spectra it is generally assumed that the mode visibilities, as a function of the azimuthal order $m$, follow the description of \cite{2003ApJ...589.1009G}, where it was assumed that the sensitivity only depends on the center-to-limb distance on the stellar disk. Below I will show that, in the presence of modest stellar rotation, the Doppler shift approach discards a significant amount of information and that the use of the visibility expression of \cite{2003ApJ...589.1009G} can lead to significant systematic errors. The dependence of the visibilities on the spherical harmonic degree $l$ is typically kept free or prescribed to follow a relationship based on simplified models, even though it is known that the observed visibilities for the Sun do not agree with the theoretically expected values for broad band photometry \citep{2014ApJ...782....2L}. A few results similar to those presented in the present paper, but using a different set of simulations, were shown in \cite{2014IAUS..301..481S}. The present paper expands substantially on this by considering stars of different spectral type, the effects of magnetic fields and a variety of analysis methods. Models of the effects of oscillations on spectra and how the information can be extracted have been studied in the past, see for example \cite{1998ApJS..117..563B}, \cite{2003A&A...398..687B} and particularly \cite{2006A&A...455..227Z} where several physical effects were included. However, these models have used simplified line models, rather than the results of three-dimensional magnetohydrodynamic (MHD) models, such as those used here, and the effects of magnetic fields on the spectra were not considered. The use of a Singular Value Decomposition (SVD) based analysis was also not discussed in those studies. Rather, \cite{2003A&A...398..687B}, for example, used a moments-based method, which does not result in as compact a representation as does the SVD-based method. In Sect.
\ref{sec:back} I start by discussing details of the models used, the radiative transfer and how the spectral perturbations can be calculated. In Sect. \ref{sec:fitting} I consider a variety of analysis methods, starting with the classic cross-correlation. As that is shown to be sub-optimal, I also discuss the option of performing least squares fits and describe an SVD based method. As these methods result in multiple time series, I finish Sect. \ref{sec:fitting} by outlining how multiple time series can be analyzed. Finally I discuss some of the issues still to be addressed and conclude. Issues of how the methods may be applied to a given spectrograph, how they may be made more numerically efficient and how simulated or real time series may be analyzed are deferred for later consideration. \section{Background} \label{sec:back} Before addressing the fitting of the spectra I will discuss, in the following subsections, the MHD models used, the radiative transfer performed and how those are used to derive the perturbations to the spectra. \subsection{Convection models and radiative transfer} The convection models used here are snapshots from the simulations of \cite{2013A&A...558A..48B} with surface properties corresponding to main sequence stellar models of types F3, G2, K0, K5, M0 and M2 and with sizes ranging from 30~Mm $\times$ 30~Mm $\times$ 9~Mm for the F3 model to 1.56~Mm $\times$ 1.56~Mm $\times$ 0.8~Mm for the M2 model. For each spectral type, models with injected magnetic fields of 0~G, 20~G, 100~G and 500~G were made. The reader is referred to \cite{2013A&A...558A..48B} for further details of the various simulations. For the radiative transfer SPINOR \citep{2000A&A...358.1109F} was used to synthesize various lines for a selection of the models discussed above.
The main line studied is the FeI line at 6173~\AA, which is used by the HMI instrument \citep{2012SoPh..275..229S} and has the advantage that it has simple atomic properties, a large Land\'e factor, is relatively free of blends and that some of the results can be verified by observations. For selected models the FeI line at 5250~\AA\ and the SiI line at 10827~\AA\ were also used. In addition to these real lines, the 6173~\AA\ line was also modeled assuming abundances of 0.1 and 10 times solar, to simulate weaker and stronger lines at otherwise fixed atomic physics. Finally, a line synthesis (``500H'') is done for the 500~G case with the magnetic field artificially set to zero. This makes it possible to disentangle the effects of the field on the MHD simulations and on the final radiative transfer calculation. The line profiles are synthesized for a number of viewing angles at a spectral resolution of 7.5~m\AA\ with 201 points covering $\pm$0.75~\AA\ (except for the SiI 10827~\AA\ line where the spacing is 30~m\AA\ covering $\pm$3~\AA\ in order to accommodate the wider line) and the resulting profiles are averaged horizontally. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{profiles.ps} \end{center}\caption[]{ Line profiles for the FeI 6173~\AA\ line for the G2 star with 0~G. From top to bottom the viewing angles are $0^\circ$, $10^\circ$, ..., $80^\circ$, $85^\circ$ and $87^\circ$. The profiles were normalized to the disk center continuum. The red line indicates the disk averaged line profile in the absence of rotation. }\label{avprofiles}\end{figure} Figure \ref{avprofiles} shows the profiles for the G2 star at 0~G. The limb darkening is clearly visible as a decrease of the continuum with viewing angle, as is the broadening of the line towards the limb. The line also becomes shallower and shifts to the red towards the limb due to convective blueshift. Of these quantities the limb darkening is the easiest to verify observationally and Fig.
\ref{limbdark} shows a comparison between the observed limb darkening from HMI and that from various calculations. In general the correspondence is quite good, but there are also differences, especially very close to the limb where the point spread function (PSF) of HMI has not been taken into account. More importantly, the calculated limb darkening appears to be too steep, even far from the limb. Interestingly, the results of \cite{2013A&A...554A.118P} are significantly closer to the observed results, and they concluded that the key improvement in their calculations is that the radiative transfer used inside their MHD simulation was performed more accurately, resulting in structural changes near the surface. Such detailed radiative transfer is typically not performed as part of large MHD simulations due to computational cost, but was done by \cite{2013A&A...554A.118P} in order to better model the center-to-limb variations of various observed quantities. The radiative transfer used for the detailed line synthesis, given an MHD cube, is believed to be accurate for both their computations and those performed here. In any case, the limb darkening is quite well modeled, giving confidence in the accuracy of the simulations and radiative transfer. For other stars, \cite{2017A&A...605A..91D} recently demonstrated that the accuracy of the line profile calculations may be checked by using spectroscopic observations during planetary transits. \goodbreak \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{limb_0G.ps} \end{center}\caption[]{ Continuum limb darkening near the FeI line at 6173~\AA. Results are shown both as a function of the fractional solar radius $r$ (on the top axis) and $\mu=\sqrt{1-r^2}$ (on the bottom axis). Red: From the simulations shown in Fig. \ref{avprofiles}, including a number of additional angles. Black: An estimate from the HMI instrument. Green: An estimate from \cite{2013A&A...554A.118P} courtesy of R. Trampedach.
Vertical dotted lines indicate 1.0, 2.0 and 5.0 pixels from the limb. Note that the results were not corrected for the PSF of the HMI instrument. }\label{limbdark}\end{figure} \subsection{Effects of oscillations on spectra} To calculate the effect of the oscillations on the observed spectra, one needs to integrate them over the stellar disk $(\odot)$: \begin{eqnarray} I(\lambda,t ) &=& \int_{\rm \odot} I (x,y,\lambda,t) d\vec{r} \nonumber \\ &=& \int_{\rm \odot} I_m (\mu , \lambda+k V_0 +k V^\prime(t)) d\vec{r} \nonumber \\ &\approx& \int_{\rm \odot} I_m (\mu , \lambda+k V_0 ) d\vec{r} + \int_{\rm \odot} k V^\prime(t) I_m^\prime (\mu,\lambda+k V_0) d\vec{r} \nonumber \\ &=& I_0 (\lambda) + k \int_{\rm \odot} V^\prime(t) I_m^\prime (\mu,\lambda+k V_0) d\vec{r} \nonumber \\ &\equiv& I_0 (\lambda) + \delta I(\lambda,t) , \label{integrate} \end{eqnarray} where $\lambda$ is the wavelength, $t$ time, $x$ and $y$ are the coordinates relative to disk center (in units of the stellar radius), $\vec{r} = (x,y)$, $\mu^2=1-x^2-y^2=1-r^2$, $r$ is the fractional radius, $I_m$ the model intensity, $I_m^\prime = dI_m/d\lambda$, $V_0$ the background stellar LOS velocity (assumed here to be due to solid body rotation), $V^\prime$ is the LOS velocity induced by the mode (assumed small), $k=d\lambda/d V = \lambda/c$ and $c$ is the speed of light. For simplicity the dependencies of $V_0$ and $V^\prime$ on $x$ and $y$ and the fact that $\lambda$ is in reality discretized are suppressed here and in the rest of this article. Similarly time variations of $V_0$ have been ignored. An obvious source for such variations is the Earth's orbital motion which will cause variations far outside of the linear range and which will have to be dealt with by shifting the spectra or fitting templates. It has been assumed that the only change to the spectrum, at a given spatial location, due to the mode is that caused by Doppler shift. 
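The linearization in the third line of Eq. (\ref{integrate}) can be checked numerically. The sketch below uses a Gaussian absorption line as a stand-in for the synthesized profiles (an assumption made purely for illustration; the paper uses MHD-based line syntheses) and compares an exactly Doppler-shifted profile with the first-order expansion $I_0(\lambda)+kV^\prime I_0^\prime(\lambda)$.

```python
import numpy as np

# Toy check of I(lambda + k V') ~ I(lambda) + k V' I'(lambda) for small V'.
c = 299792.458                      # speed of light [km/s]
lam0 = 6173.0                       # line center [Angstrom]
k = lam0 / c                        # d(lambda)/dV [Angstrom per km/s]
lam = lam0 + np.linspace(-0.75, 0.75, 201)

def profile(lam_grid):
    """Gaussian absorption line standing in for a synthesized profile."""
    return 1.0 - 0.6 * np.exp(-((lam_grid - lam0) / 0.06) ** 2)

V = 0.05                            # small LOS velocity [km/s]
exact = profile(lam + k * V)        # profile evaluated at lambda + k V'
dI = np.gradient(profile(lam), lam)
linear = profile(lam) + k * V * dI  # first-order expansion
assert np.max(np.abs(exact - linear)) < 1e-3
```

For oscillation velocities of tens of m/s the quadratic remainder is orders of magnitude below the perturbation itself, which is what justifies keeping only the first-order term in the disk integral.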
In other words, intensity, linewidth and other thermodynamic changes are not taken into account. To calculate the effect of a given mode one needs to calculate the LOS velocity caused by it. Assuming that the modes are undistorted by asphericities (such as rotation or starspots), are undamped and that only the radial component is significant, one obtains, for a mode with unit amplitude: \begin{eqnarray} V^\prime (x,y,t) &=& \Re \left( \mu Y_l^m (x,y) e^{-i\omega_{lm} t}\right) \nonumber \\ &=& \Re \left( \mu P_l^{|m|} (\sin\theta) e^{im\phi-i\omega_{lm} t} \right ) \nonumber\\ &=& \mu P_l^{|m|} (\sin\theta) (\cos\omega_{lm} t \cos m\phi + \sin\omega_{lm} t \sin m\phi) \nonumber \\ &\equiv& V_{lm}^c \cos\omega_{lm} t + V_{lm}^s \sin\omega_{lm} t, \label{vmode} \end{eqnarray} where the factor $\mu$ accounts for the velocity projection factor, $Y_l^m$ is a spherical harmonic, $P_l^m$ an associated Legendre function, $x=\sin\phi\cos\theta$, $y=\sin\theta\sin i - \cos\phi\cos\theta\cos i$, $\phi$ is longitude, $\theta$ is latitude, $i$ is the inclination of the rotation axis, and $\omega_{lm}$ is the mode frequency. With the convention chosen $V_{l-m}^c = V_{lm}^c$ and $ V_{l-m}^s = -V_{lm}^s$. Without loss of generality the coordinate system was chosen to have the stellar rotation axis aligned with the y-axis. Substituting Eq. (\ref{vmode}) into Eq. (\ref{integrate}) one obtains: \begin{eqnarray} \delta I(\lambda,t ) &=& k \int_{\rm \odot} (V_{lm}^c \cos\omega t + V_{lm}^s \sin\omega t) I_m^\prime (\mu,\lambda+k V_0) d\vec{r} \nonumber \\ &\equiv& \delta I_{lm}^c (\lambda) \cos\omega t + \delta I_{lm}^s (\lambda) \sin\omega t . \label{combo} \end{eqnarray} Note that $\delta I_{l-m}^c = \delta I_{lm}^c$ and $\delta I_{l-m}^s = -\delta I_{lm}^s$ and that in the absence of E-W asymmetries (e.g. rotation) the $\delta I_{lm}^s$ term vanishes.
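Eq. (\ref{vmode}) is straightforward to evaluate on a pixelized stellar disk. The following sketch (which hand-codes the few associated Legendre functions it needs, so no special-function library is assumed) verifies numerically that for a pole-on view ($i=0^\circ$) only the $m=0$ modes have a non-vanishing disk-integrated signal, a fact used later in the text.

```python
import numpy as np

def assoc_legendre(l, m, x):
    """P_l^|m| for the few (l, m) used here, hand-coded to avoid a
    special-function dependency (Condon-Shortley phase included)."""
    table = {
        (0, 0): lambda x: np.ones_like(x),
        (1, 0): lambda x: x,
        (1, 1): lambda x: -np.sqrt(1.0 - x**2),
        (2, 0): lambda x: 0.5 * (3.0 * x**2 - 1.0),
        (2, 2): lambda x: 3.0 * (1.0 - x**2),
    }
    return table[(l, abs(m))](x)

# Pole-on view (i = 0): x = sin(phi) cos(theta), y = -cos(phi) cos(theta),
# so mu = sin(theta) and phi = atan2(x, -y) on the visible disk.
n = 400
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
on_disk = x**2 + y**2 < 1.0
mu = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
phi = np.arctan2(x, -y)

def v_cos(l, m):
    """V_lm^c = mu * P_l^|m|(sin theta) * cos(m phi) of Eq. (vmode)."""
    return mu * assoc_legendre(l, m, mu) * np.cos(m * phi)

def disk_mean(field):
    return field[on_disk].mean()

# Only m = 0 survives disk integration at i = 0 degrees:
assert abs(disk_mean(v_cos(1, 1))) < 1e-6
assert abs(disk_mean(v_cos(2, 2))) < 1e-6
assert disk_mean(v_cos(1, 0)) > 0.4   # mean of mu^2 over the disk is 1/2
```

The cancellation for $m\ne 0$ follows from the azimuthal integral of $\cos m\phi$ vanishing; with rotation the line-of-sight weighting is no longer azimuthally symmetric, which is precisely the effect explored in the visibility figures that follow.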
While I have, for simplicity, assumed that the only background velocity is that from uniform rotation and that the modes are undistorted, the formalism can easily accommodate the more general case. For some details of how this can be done see \cite{2006A&A...455..227Z}. Similarly it is straightforward to include thermodynamic changes, when known. \section{Fitting methods} \label{sec:fitting} In this section I will describe the results of three methods for extracting information from the observed spectra: a simple cross-correlation, a fit of the expected perturbations and an SVD based analysis. \subsection{Cross-correlation analysis} A simple and commonly used method to extract the Doppler shift is to cross-correlate the reference spectrum with the observed one, using methods such as the one described by \cite{2001A&A...374..733B}. In principle the wavelength dependence of the noise should be taken into account, but for simplicity I will start by assuming a uniform noise and discuss the effect of the wavelength dependence of the noise in the next subsection. As the oscillation velocities are small and do not cause a significant smearing of the line (relative to the zero oscillation case) the reference spectrum can be taken as $I_0$. Given that the perturbations are small, the cross-correlation is equivalent to a fit of $\delta I$, at each time, to the derivative $I_0^\prime$ of $I_0$ with respect to $\lambda$, assuming a uniform error, using the following equation: \begin{equation} \delta I(\lambda,t ) = V_{\rm fit}(t) k I_0^\prime (\lambda), \label{simple-fit} \end{equation} where $V_{\rm fit}$ is the fitted velocity. Given Eq.
(\ref{combo}), and since a unit amplitude oscillation was assumed, the visibilities, $S$, can be defined by fitting the perturbations to the derivative: \begin{equation} \delta I_{lm}^c (\lambda) = S_{lm}^c k I_0^\prime (\lambda) \label{simple_sens_c} \end{equation} and \begin{equation} \delta I_{lm}^s (\lambda) = S_{lm}^s k I_0^\prime (\lambda) . \label{simple_sens_s} \end{equation} As discussed later, only $S_{lm}^c$ is significant. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{dprofiles.ps} \end{center}\caption[]{ The perturbations $\delta I_{lm}^c$ (black) for the 6173~\AA\ line for the G2 star with 0~G, $i=0^\circ$ and no rotation. The values of $(l,m)$ are $(0,0)$ (solid), $(1,0)$ (dotted), $(2,0)$ (dashed) and $(3,0)$ (dash-dotted). Also shown (in red) is the derivative $I_0^\prime$ scaled to best fit each perturbation. The shape (but not the magnitude) of the perturbations does not depend on $i$ and $m$ in the absence of rotation. }\label{dprofiles}\end{figure} To illustrate how well these profiles match, Fig. \ref{dprofiles} shows $\delta I_{lm}^c$ for various modes together with fits to $I_0^\prime$. For the lowest degrees the fit of $I_0^\prime$ is very good but it starts to disagree more as the degree is increased. Still, the approximation used for Eq. (\ref{simple-fit}) is perhaps adequate to justify using the cross-correlation method for this case. The decrease of the visibility at higher $l$ is also evident. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{dprofiles4.ps} \end{center}\caption[]{ Similar to Fig. \ref{dprofiles} this shows the perturbations $\delta I_{lm}^c$ (black) and $\delta I_{lm}^s$ (blue) for the G2 star with $i=90^\circ$ and solid body rotation with an equatorial rotation velocity of 4~km/s. The values of $(l,m)$ are $(0,0)$ (solid), $(1,1)$ (dotted), $(2,0)$ (dashed) and $(2,2)$ (dash-dotted).
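Eq. (\ref{simple-fit}) is an ordinary least-squares problem with a single free parameter. A minimal sketch (again with a Gaussian stand-in line, an assumption for illustration only) shows that the fit recovers a small imposed velocity, while a purely symmetric line-width change projects to essentially zero fitted velocity, since the derivative $I_0^\prime$ is antisymmetric about line center.

```python
import numpy as np

# Least-squares fit of Eq. (simple-fit): delta I = V_fit * k * I0'.
lam0, w, depth = 6173.0, 0.06, 0.6
k = lam0 / 299792.458                      # Angstrom per km/s
lam = lam0 + np.linspace(-0.75, 0.75, 201)

def line(shift=0.0, width=w):
    """Gaussian stand-in line, optionally shifted or broadened."""
    return 1.0 - depth * np.exp(-((lam - lam0 - shift) / width) ** 2)

I0 = line()
g = k * np.gradient(I0, lam)               # model shape k * I0'

def v_fit(delta_I):
    """Single-parameter least squares: V = <delta_I, g> / <g, g>."""
    return np.dot(delta_I, g) / np.dot(g, g)

V_true = 0.05                              # km/s
obs = line(shift=-k * V_true)              # profile evaluated at lambda + k V
assert abs(v_fit(obs - I0) - V_true) < 0.01
# A symmetric width change yields (numerically) zero fitted velocity:
assert abs(v_fit(line(width=1.01 * w) - I0)) < 1e-9
```

The second assertion illustrates why the cross-correlation discards the line-narrowing signal carried by $\delta I_{lm}^s$ in the rotating case.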
Also shown (in red) is the derivative $I_0^\prime$ scaled to best fit each of the $\delta I_{lm}^c$. Note that the unperturbed line, and thus the derivative, is different from the one in Fig. \ref{dprofiles} due to the rotational broadening. }\label{dprofiles4}\end{figure} If the star is rotating and not observed from the pole ($i=0^\circ$) the results change substantially, as illustrated in Fig. \ref{dprofiles4}. While $\delta I_{lm}^c$ is still moderately well fitted by $I_0^\prime$, $\delta I_{lm}^s$ is now non-zero and shows a very different profile, corresponding to a narrowing and widening of the line, which is not well fitted by the derivative. I will return to this issue below. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{vis_lines.ps} \end{center}\caption[]{ Visibilities for the $m=0$ modes for the G2 star as a function of $l$ and spectral line, for the 0~G case and an inclination angle of $0^\circ$. In addition to the regular lines, profiles were synthesized using larger and smaller abundances of Fe for the 6173~\AA\ line (using the same simulations). For clarity the visibilities have, for each spectral line, been divided by the visibility of $l=0$. Note that the visibilities of the other $m$ values are zero at $i=0^\circ$. }\label{vis_lines}\end{figure} Examples of the visibilities $S_{lm}^c$ are shown in Fig. \ref{vis_lines} for a variety of spectral lines. As can be seen the differences are modest, especially in the visible, even if the abundances are changed substantially. The SiI line does show a slightly different behavior, likely because it is a much broader line and in the near infrared. As such it should be possible to only model selected lines and interpolate the results. For different stars (Fig. \ref{vis_stars}) there is a modest trend with stellar type, but the overall trend remains the same. 
Again it would appear that, while the variations should not be neglected, it should be possible to interpolate between the spectral types, avoiding the need to run large simulations for each star. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{vis_stars.ps} \end{center}\caption[]{ Similar to Fig. \ref{vis_lines} for different stars. All cases are for the 6173~\AA\ line without a magnetic field, no rotation, $i=0^\circ$ and $m=0$ and the visibilities have been divided by the one for $m=l=0$. }\label{vis_stars} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{vis_fields.ps} \end{center}\caption[]{ Similar to Fig. \ref{vis_lines} for the different field strength cases in \cite{2013A&A...558A..48B}. The ``500H'' case used the ``500G'' simulation but with the field set to zero in the radiative transfer calculations. All cases are for the G2 star using the 6173~\AA\ line, no rotation, $i=0^\circ$ and $m=0$ and the visibilities have been divided by the one for $m=l=0$. }\label{vis_fields}\end{figure} Adding a modest magnetic field also does not have a dramatic effect as shown in Fig. \ref{vis_fields}. For the, possibly unrealistic, 500~G case the change is more significant. It is interesting that the changes in the structure dominate over those in the radiative transfer, as can be seen by turning off the field in the ``500H'' case, which changes the result by much less than did the introduction of the magnetic field in the MHD simulations. As such it appears that the presence of magnetic fields can probably be ignored unless the fields are quite strong. \begin{figure*} \begin{center} \includegraphics[width=18cm]{visx_rot4.ps} \end{center}\caption[]{ The visibilities $S_{lm}^c$ of the $l=0$ through $l=3$ modes for the G2 star as a function of inclination angle and equatorial rotation velocity for the case with 0~G and using the 6173~\AA\ line.
For clarity the values have been divided by the visibility of the corresponding $m=0$ mode at $i=0^\circ$. The corresponding $S_{lm}^s$ values, which are not shown, go up to 0.02, but are generally less than 0.01. }\label{vis_rot4}\end{figure*} The inclination dependence of the visibilities shown in Fig. \ref{vis_rot4} is much more interesting. Without rotation the inclination dependence follows the prediction by \cite{2003ApJ...589.1009G} (not shown). But even for a modest rotation rate of 4~km/s (roughly twice solar) the deviations are dramatic. For an equatorial observation ($i=90^\circ$) the ratio of the visibility of the $(l,m)=(3,3)$ mode to that of the $(3,1)$ mode changes from -1.29 in the absence of rotation to -2.71. That the rotation has a significant effect on the visibilities is not too surprising. As the rotation increases, the line emitted far from the central meridian moves far away from that at the center (and the average) and so the measurement ceases to be sensitive to the oscillations near the edges. This also explains why the changes become significant when the Doppler shift at the east and west limbs becomes comparable to the linewidth (the FWHM of the average 6173~\AA\ line in the non-rotating case corresponds to about 5.3~km/s). \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{sensmap.ps} \end{center}\caption[]{ The sensitivity as a function of position on the disk for the 6173~\AA\ line for the G2 star with 0~G and $i=90^\circ$. The rotation rates for panels {\bf a} through {\bf d} are 0~km/s, 2~km/s, 4~km/s and 6~km/s. The maps are normalized to have the same integrated sensitivity for the four cases. Gray scale goes from -0.15 times the maximum sensitivity to the maximum. }\label{sensmap}\end{figure} This issue is illustrated in Fig. \ref{sensmap}, where the sensitivity as a function of position on the stellar disk is shown for an edge-on ($i=90^\circ$) view.
As the rotation rate increases the sensitivity becomes more and more concentrated towards the central meridian. Beyond about 4~km/s the sensitivity becomes negative at the east and west limbs as the shift becomes comparable to the line width. While barely visible, it is also the case that the sensitivity is not exactly east-west symmetric due to the combination of the rotation and the convective blueshift. In solar-like stars the spacing between modes with different $m$ (which is essentially equal to the rotation frequency) is often small or comparable to the linewidth \citep{2007A&A...470..295B,2014A&A...568L..12N}, resulting in the modes blending together. In this case one therefore has to rely on a model of the visibilities, and any errors in that model can lead to incorrect mode parameter estimates. This is illustrated in Fig. \ref{fit_both}. Here limit spectra calculated using the correct visibilities (those including the effects of rotation) were fitted assuming the incorrect visibilities (those not including the effects of rotation). As can be seen, both the inclinations and the splittings (the spacing between adjacent $m$ values) are mis-estimated, in some cases resulting in values far from the input ones. Also, the values determined from different values of $l$ are inconsistent, which may or may not be noticeable. When the peaks corresponding to the different values of $m$ are well separated, the error will be obvious and would make it clear that there is a problem. If the correct visibilities are used, the parameters are, of course, correctly estimated. On the other hand the errors are sub-optimal, as discussed in Sect. \ref{sec:SVD}. Taking into account the effects of rotation does not mean that one needs to fit for additional parameters. The visibilities are given by the rotation rate (and thus the splittings) and inclination, both of which are already fitted for.
All that needs to be done is to replace the use of the expressions in \cite{2003ApJ...589.1009G} by a parametrization of the visibilities (e.g. the results shown in Fig. \ref{vis_rot4}), for example by interpolating the values. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{fit_both.ps} \end{center}\caption[]{ Results of fitting power spectra constructed with sensitivities calculated including the effects of rotation, but using a model calculated assuming sensitivities ignoring the effects of rotation. Results are for the G2 star with no field, the 6173 \AA\ line and a rotation velocity of 6~km/s. The ratio of the FWHM of the modes to the rotation rate is set to 1.0. The symbols show the results of fitting with one $l$ at a time, the solid line is for using $l$ from 0 to 3. In all cases a global search was performed to ensure that the global minimum has been found. }\label{fit_both}\end{figure} \subsection{Least-squares fit} \label{sec:LS} As mentioned in the previous subsection, the cross-correlation analysis suffers from several problems. Among others, the effects of rotation can be quite substantial, and the implicit assumption that the derivative of the line is a good approximation to the perturbation is thus poor. Also, it is assumed that the noise is independent of wavelength. These approximations can be addressed by performing a maximum likelihood fit, which is considered in this subsection. That the noise is independent of the wavelength is unlikely to be correct, as discussed in \cite{2001A&A...374..733B}. A better approximation is to assume that photon noise is the dominant noise term. Assuming that the number of photons is $\gg 1$ and that the perturbation is small, the photon noise can be approximated by a normal distribution with a variance proportional to the signal.
A fairly benign consequence of this is that an internal estimate of the noise on the derived velocity, assuming that the noise equals that in the continuum, will be wrong relative to that using the photon noise estimate; in the case of the 6173~\AA\ line, by an $l$-independent factor of around 1.2. A more significant problem is that the estimate is statistically sub-optimal, in other words an estimate with a better signal to noise ratio can be constructed. The optimal (maximum likelihood) estimate can be obtained by performing an error-weighted least squares fit, writing the observed spectrum as \begin{equation} \label{eq:LS} I(\lambda,t) = I_0(\lambda) + x_c(t) \delta I_{lm}^c (\lambda) + x_s(t) \delta I_{lm}^s (\lambda), \end{equation} thereby obtaining a time series $x_c$ of the cosine component and another $x_s$ of the sine component. The error-weighted fit is equivalent to an unweighted fit in which both the data and the model are divided by the error estimate, which is in this case given by the square root of the model. A downside of the least squares fitting is that a different function should ideally be fitted for each mode. This is discussed in the next subsection. A comparison between the cross-correlation estimates and the fits of only the cosine component (which is dominant at low rotational velocity) is shown in Fig. \ref{vis_opt}. Here the signal to noise ratio is shown, rather than the visibility. As the noise is independent of the mode in the cross-correlation case, this is proportional to the absolute value of the visibility. For the polar ($i=0^\circ$) case, which is unaffected by rotation, the effect is quite modest. But for an equatorial ($i=90^\circ$) view the difference is quite significant, especially at higher degree. At $l=4$ the improvement is roughly a factor of 1.6 and at $l=5$ it is 2.2. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{vis_opt.ps} \end{center}\caption[]{ Signal to noise for various cases.
Solid is the cross-correlation for $m=0$ at $i=0^\circ$ and dashed the corresponding fit. Dotted is the cross-correlation for $m=l$, 4~km/s equatorial rotation and $i=90^\circ$, while the dash-dotted is the corresponding fit. In all cases the absolute value of the signal to noise ratio is shown divided by the corresponding $l=0$ cross-correlation value. } \label{vis_opt} \end{figure} Results of fitting for both the sine and cosine components are shown in Fig. \ref{visx_rot}. At modest rotation rates the visibility of the real part drops and the visibility of the imaginary part increases. At higher degrees the two visibilities become similar. In other words the broadening and narrowing of the line caused by the sin component becomes as significant as the shift of the line caused by the cos component, which among other things allows for separating prograde and retrograde modes. In general the ability to observe two separate time-series opens up the possibility to derive significantly more information about the modes, but requires a somewhat more complex analysis, as described in Sect. \ref{multi-series}. While the least squares fitting does provide significant advantages over the cross-correlation approach, the SVD-based approach, discussed in the next subsection, provides further advantages, and so the practicalities of implementing the least squares fitting are not discussed further. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{visx_rot.ps} \end{center}\caption[]{ Signal to noise for the real (cos) and imaginary (sin) components for $m=l$, $i=90^\circ$ for various equatorial rotation velocities. The signal to noise ratios were normalized to the $l=0$ case without rotation. } \label{visx_rot} \end{figure} \subsection{SVD analysis} \label{sec:SVD} The fact that the perturbations caused by different modes are different raises a number of questions.
Is it necessary to fit a given spectrum multiple times with functions designed to fit each mode? Is it possible to separate more than the cos and sin components, like different $l$ and $m$ values? A way to address these questions is to perform an SVD (Principal Component) analysis, writing the perturbations as: \begin{equation} \frac{\delta I_j(\lambda)}{\sqrt{I_0(\lambda)}} = \sum_k U_{k,j} \sigma_k V_k ( \lambda ) , \label{SVD} \end{equation} where the division by $\sqrt{I_0(\lambda)}$ accounts for the variation of the photon noise, $j$ encodes the mode ID, $\sigma_k$ are the singular values, $U$ the singular vectors in mode ID and $V$ are the singular vectors in wavelength. The mode ID could, for example, be $j=(l,m,p)$ where $p$ is $c$ or $s$ as defined in Eq. (\ref{combo}) at some fixed inclination and rotation rate. The properties of the SVD ensure that the $U$ vectors are orthonormal, as are the $V$ vectors. By convention, the singular values $\sigma$ are ordered by descending value. Their squares give the amount of variance of the left hand side of Eq. (\ref{SVD}) that is captured by the corresponding term (having implicitly assumed that the inherent mode amplitudes are independent of $l$ and $m$, as expected for solar-like oscillations on a Sun-like star). An important property of the SVD is that the variance captured by the first $N$ terms is the maximum possible. No other vectors, such as the moments used by \cite{2003A&A...398..687B}, can capture more information using $N$ terms. The vectors $V$ are fitted to the spectra using an unweighted least squares fit of \begin{equation} \label{eq:svdfit} \frac{I(\lambda,t)-I_0(\lambda)}{\sqrt{I_0(\lambda)}} = \sum_{k=1}^{k_{max}} y_k(t) V_k(\lambda), \end{equation} for some suitable $k_{max}$ (see discussion later in this subsection), resulting in several time series $y_k$.
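As a schematic numerical illustration of Eq. (\ref{SVD}) and the fit of Eq. (\ref{eq:svdfit}) (a toy sketch with random stand-in data; all array names and sizes are my own, not simulation output):

```python
import numpy as np

# Toy illustration of the SVD of the noise-weighted mode perturbations
# and of the subsequent projection of an observed spectrum.
rng = np.random.default_rng(0)
n_modes, n_lambda = 20, 200                    # mode IDs j, wavelength samples

I0 = 1.0 + 0.5 * rng.random(n_lambda)          # mean spectrum (toy)
dI = rng.standard_normal((n_modes, n_lambda))  # perturbations dI_j(lambda) (toy)

# Weight by sqrt(I0) to account for photon noise, then decompose.
A = dI / np.sqrt(I0)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt

# Fraction of variance captured by the first N terms.
var_frac = np.cumsum(s**2) / np.sum(s**2)

# Because the rows of Vt are orthonormal, the unweighted least-squares
# fit reduces to dot products, and the recovered y_k are proportional
# to the visibilities sigma_k * U_{j,k} of the mode present in the data.
k_max = 3
I_obs = I0 + 0.01 * dI[3]                          # toy observation of mode j=3
y = Vt[:k_max] @ ((I_obs - I0) / np.sqrt(I0))      # time-series samples y_k
```

In this toy case the recovered coefficients equal $0.01\,\sigma_k U_{3,k}$, mirroring the statement below that the $\sigma U$ give the mode visibilities.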
As the $V$ vectors are orthonormal, the noise on each term is the same and the $y_k$ are given by dot products \begin{equation} y_k(t)=\sum_\lambda V_k (\lambda) \frac{I(\lambda,t)-I_0(\lambda)}{\sqrt{I_0(\lambda)}}, \end{equation} in the case where the fits are done over the same wavelengths as the SVD. It also follows that the $\sigma U$ are the corresponding visibilities of the various modes. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{svd_plot1.ps} \end{center}\caption[]{ Singular vectors in wavelength for three cases. Black: $i=90^\circ$ with 4~km/s equatorial rotation rate. Red: All inclinations ($i=0^\circ, 10^\circ, ..., 90^\circ$) combined with 4~km/s equatorial rotation rate. Green: All inclinations combined with all equatorial rotation rates (0~km/s, 1~km/s, ..., 6~km/s). Solid, dotted and dashed lines are the first, second and third vectors, respectively. The signs of some of the vectors were inverted (they are arbitrary in an SVD). All cases are for the 6173~\AA\ line at 0~G. } \label{svd_plot1} \end{figure} Figure \ref{svd_plot1} shows the singular vectors in wavelength for three different cases and as can be seen the vectors are very similar. As expected given the earlier results, the vectors look roughly like the first, second and third derivatives of the spectral line with wavelength. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{svd_plot1b.ps} \end{center}\caption[]{ The first four (from left to right) singular vectors in $l$, $m$ and $p$ for the case shown in red in Fig. \ref{svd_plot1}. Top row is for the cos component and the bottom for the sin component. Each block shows the mean over $i$ of the absolute value of the singular vectors as a function of $l$ (horizontal) and $m$ (vertical) for $0 \le m \le l \le 10$. The blocks have been individually normalized to have identical maximum values.
} \label{svd_plot1b} \end{figure} On the other hand the singular values $\sigma$ are somewhat different for the different cases. A way to quantify this is to ask how much of the variance (i.e. information in the spectra) is covered by the lowest few terms. For an equatorial view and a 4~km/s rotation rate the numbers are 66.3\%, 94.7\%, 98.7\% and 99.8\%, for one through four terms. For a single (4~km/s) rotation rate, but different viewing angles, the corresponding numbers are 84.6\%, 97.4\%, 99.5\% and 99.9\%. For the case with different angles and rotation rates the numbers are 89.3\%, 97.7\%, 99.5\% and 99.8\%. Given these results it appears that one only needs to fit two or three terms to the observed spectra in order to extract the vast majority of the available information. Exactly how many terms are needed to extract all the statistically significant information will depend on the signal to noise ratio, resolution and spectral coverage. The similarity of the singular vectors also means that there is little need to know the inclination and rotation rate in order to fit the spectra, which greatly simplifies the fitting. How to analyze the multiple resulting time series is discussed in Sect. \ref{multi-series}. The singular vectors in mode ID ($U$) give the relative visibility of a given $V$ to various modes, as shown in Fig. \ref{svd_plot1b}. Given that $V_1$ is very close to the wavelength derivative it is not surprising that the corresponding $U$ vector is dominated by the cos component. Similarly the second vector is dominated by the sin component and so forth. \begin{figure} \begin{center} \includegraphics[width=1.00\columnwidth]{svd_plot2a.ps} \end{center}\caption[]{ Visibility of the sin component for the second term in the SVD as a function of the visibility of the cos component for the first term for the case shown with black in Fig. \ref{svd_plot1}. The modes with the largest visibilities are identified by their $l,m$ values.
The steepest dotted line roughly represents the largest ratio, the middle one half of that, and the third one zero. } \label{svd_plot2a} \end{figure} To further illustrate this, Fig. \ref{svd_plot2a} shows $\sigma_2 U_{2,j}$ for the sin term as a function of $\sigma_1 U_{1,j}$ for the cos term. As can be seen the ratios of the two visibilities are quite different for different modes, indicating that it may be possible to partially identify the modes based on this ratio. Interestingly the ratio depends mostly on the rotation rate and $(l,m)$ and less on the inclination. The other two lines studied give similar results. The singular vectors in wavelength are quite different for the IR line, but the behavior of the singular vectors in ID and the singular values are similar. Considering the three lines simultaneously does not lead to a significant improvement in the ability to separate modes. It is not practical to perform a brute-force calculation of the perturbations for the entire spectrum. The radiative transfer calculations are extremely expensive given the size of the computation boxes and the number of lines present in a typical spectrum. On the other hand the majority of lines behave in a similar way and their properties vary slowly with atomic parameters, so it is only necessary to perform a detailed radiative transfer calculation for a subset. Also, the number of spatial points needed to adequately sample the convection can likely be reduced dramatically. Whether it is possible to determine the singular vectors in wavelength empirically (e.g. by performing an SVD of a number of observed spectra) is unclear as there are many other sources of variations, such as spectrograph drift and differential extinction. Also, such an empirical determination will not directly identify the modes.
Another way to determine the vectors without having to perform the radiative transfer would be to use the method of \cite{2017A&A...605A..91D}, which has the advantage of automatically taking into account the spectrograph PSF, but the obvious disadvantage that it only works directly for stars with large transiting planets. Similarly, the fact that the SVD represents the most compact representation does not mean that this method has to be used. Other parameterizations, such as moments \citep{2003A&A...398..687B}, can also be used, but it must be realized that more terms may be needed and/or that some information is lost. It is possible that the addition of the thermodynamic perturbations caused by the modes or some of the other neglected effects will increase the ability to discriminate between modes, but this will have to await 3D simulations capable of predicting these effects reliably. Implementing the fits in practice will require a number of steps. Ideally one would start by obtaining an (M)HD model for the specific stellar type, synthesizing the spectrum as a function of viewing angle, convolving with the spectrograph PSF, integrating up the perturbations caused by each mode, performing the SVD, determining how many terms to retain, fitting the spectra using the relevant number of terms and fitting the resulting time-series, as described in the next subsection. Unfortunately, many of these steps have complications. The simulation and spectral synthesis are extremely costly if done brute force. However, as demonstrated, the resulting visibilities only depend weakly on stellar type and most of the spectral lines of interest are expected to depend only slowly on the properties of the lines, making it possible to interpolate the results. Indeed, simply interpolating the results shown in this paper, possibly supplemented with a few extra stellar models and spectral lines, may be adequate, given the limited signal to noise of the currently available spectra.
The effects of the spectrograph PSF have not been discussed here. If the PSF is well known it is straightforward to convolve by it after the line synthesis and to check how the results are affected. If the PSF is poorly known, more research will obviously be needed. Integrating up the contributions for each mode and performing the SVD should similarly not present a problem. Once the S/N for the spectrograph is known, determining the number of terms to retain is also straightforward, though one may, of course, choose to try adjacent values to test the significance. The fitting of the individual spectra is a linear fit of nearly orthonormal vectors and should thus be stable and easy to implement. \subsection{Analysis of multiple simultaneous time-series} \label{multi-series} In asteroseismology only a single time series has generally been available for a given star and analysis methods initially developed for Sun-as-a-star observations have been used. For an introduction to the basics, see \cite{1990ApJ...364..699A}. Clearly, these analysis methods will not be optimal for the multiple simultaneous time series generated using the methods described in Sects. \ref{sec:LS} and \ref{sec:SVD}. Using the least squares analysis described in Sect. \ref{sec:LS} one obtains two time series ($x_c$ and $x_s$), while in Sect. \ref{sec:SVD} there can be several time series, depending on the signal to noise ratio. Thus the question arises of how to handle the case where each mode appears in multiple series, but with different visibilities. This issue has been addressed extensively in helioseismology and details can be found in e.g. \cite{1992PhDT.......380S} and \cite{2015SoPh..290.3221L}. Here I will briefly outline the general idea and how it might be applied to the present case. I will restrict the discussion to stochastically driven solar-like oscillations.
In the case of stochastically driven solar-like oscillations, the Fourier transform of a single mode of oscillation has real and imaginary parts with a mean of zero and a variance given by \begin{equation} V(\nu)=\frac{P/w}{1+\left (\frac{\nu-\nu_0}{w} \right )^2}, \end{equation} where $V$ is the variance, $\nu$ the frequency, $\nu_0$ the mode frequency, $P$ is proportional to the mode power and $w$ is the HWHM of the mode. Under certain assumptions, most importantly that the mode excitations are frequent and that there are no gaps in the time series, it may be shown that the values of the Fourier transforms are normally distributed and are independent across frequencies and real/imaginary parts. When a mode is observed using one of the methods described above, the resulting Fourier transforms are multiplied by complex constants $C_{jk}$, where $j$ identifies the time-series and $k$ encodes the mode. The real part of the constant is the sensitivity to the $\cos(m\phi)$ component and the imaginary part the sensitivity to the $\sin(m\phi)$ component, as discussed earlier. For simplicity, and consistent with the rest of the paper, it is assumed that these constants are independent of frequency and thus of the radial order $n$ of the mode. As we have assumed that the amplitudes are small, it follows that the observed transforms ($\tilde y$) are the sum over the transforms ($\tilde x$) of the individual modes: \begin{equation} \tilde y_j (\nu)=\sum_k C_{jk} \tilde x_k(\nu) . \end{equation} An important consequence of the fact that a given mode, in general, appears in more than one time-series is that fitting their power spectra (as opposed to Fourier transforms) is sub-optimal. Specifically, the fact that the phases are correlated between the Fourier transforms is ignored, resulting in a loss of information when the power spectra are made from the Fourier transforms. While it is possible to continue using complex numbers (e.g.
through the use of cross-spectra), it is perhaps simpler and more intuitive to rearrange the equations to be purely real. To this end the complex spectra are split into twice as many real spectra to give: \begin{equation} \tilde y_j^\prime (\nu)=\sum_k C_{jk}^\prime \tilde x_k^\prime (\nu), \end{equation} where $j$ and $k$ now both run over the real and imaginary parts of the observed and mode transforms. As the $\tilde y_j^\prime$ are sums of normally distributed variables with zero mean, they are themselves normally distributed with zero mean and a covariance matrix with elements given by: \begin{equation} E_{nm}({\bf a},\nu_i) = \sum_k C_{nk}^\prime({\bf a})C_{mk}^\prime({\bf a}) V_k ({\bf a},\nu_i) + E_{nm}^{\rm noise} ({\bf a},\nu_i), \end{equation} where the vector ${\bf a}$ containing the parameters describing the modes has been made explicit and where a noise term has been added. Note the dependence of $E$ on ${\bf a}$ through both $C^\prime$ and $V$. The probability density for such a multivariate normal distribution is given by: \begin{equation} P({\bf a},\nu_i) = |2\pi E({\bf a},\nu_i)|^{-1/2} \exp\left(-\frac{1}{2} y(\nu_i)^T E({\bf a},\nu_i)^{-1}y(\nu_i)\right), \end{equation} where $|\cdot|$ indicates the determinant. For a maximum likelihood estimate we thus need to minimize minus the logarithm of the product of the probabilities: \begin{equation} S({\bf a}) = \sum_i \log | E({\bf a},\nu_i) | + y(\nu_i)^T E({\bf a},\nu_i)^{-1}y(\nu_i), \end{equation} where the sum only needs to be made over positive frequencies, given the lack of independent information in the negative part of the transforms. It is important to note that, despite the superficial similarity, this is not a least squares fit. In a least squares fit, one fits for the mean with a known variance; here the mean is known (zero) and the fit is for the variance.
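A minimal numerical sketch of evaluating this objective (function and variable names are illustrative; in practice $E({\bf a},\nu_i)$ would be assembled from $C^\prime$ and $V_k$ as above):

```python
import numpy as np

def neg_log_likelihood(E_per_bin, y_per_bin):
    """S(a) of the text: sum over positive-frequency bins of
    log|E(a, nu_i)| + y(nu_i)^T E(a, nu_i)^{-1} y(nu_i),
    for zero-mean multivariate normal data."""
    S = 0.0
    for E, y in zip(E_per_bin, y_per_bin):
        _, logdet = np.linalg.slogdet(E)       # numerically stable log-det
        S += logdet + y @ np.linalg.solve(E, y)
    return S
```

Minimizing this over ${\bf a}$ with a standard nonlinear optimizer gives the maximum likelihood estimate; note that, as stressed above, the parameters enter through the covariance, not the mean.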
This results in a non-linear fit, but one that is straightforward to implement from a numerical point of view using standard numerical routines. Equivalent Bayesian estimates should be straightforward to calculate. As for implementing such a fit in practice, several things need to be considered. First of all I have not explicitly stated the exact list of parameters ${\bf a}$ used to describe the modes, as a large variety of parameterizations can and have been used, depending on the star and the preferences of various investigators. Having said that, one would expect that parameterizations similar to those in current use \citep[e.g.][]{2014A&A...568L..12N} should work here, with some parameters added to deal with the noise covariance. Similarly, I have not specified how the noise term could be parameterized, but likely the form of the covariance matrix can be determined from data away from the peaks, as done in helioseismology, given that it would be expected to depend slowly on frequency. It may be noted that a brute force implementation of the above is actually somewhat inefficient. In particular many of the coefficients in $C^\prime$ may be almost purely real or imaginary and the covariance matrix $E$ will have many identical entries and zeros. However, given that we only have a few time series, the time saved for a maximum likelihood estimate by a more efficient computation is unlikely to be worth the effort. For an MCMC calculation this may not be the case. \section{Discussion} The analysis described in this paper is, of course, incomplete and many effects have been neglected. For example the stars have been assumed to have no differential rotation in latitude, the distortion of the eigenfunctions by rotation has been neglected, the displacement has been assumed to be purely radial and independent of height, and it has been assumed that there are no thermodynamic perturbations accompanying the oscillations.
For a discussion of some of these effects see \cite{2006A&A...455..227Z}. The assumptions of height independence and lack of thermodynamic perturbations are quite difficult to address numerically, as the signal to noise ratio of the oscillations in the MHD simulations is very small due to their limited size and the short simulation time. For the Sun there is a noticeable height-dependent effect \citep{2012ApJ...749L...5Z} in the observations and \cite{2012ApJ...760L...1B} have shown that the height dependence in the simulations is far from that predicted by simple theoretical models. The thermodynamic perturbations are difficult to address by the MHD models due to the signal to noise limitations, but given the discrepant visibilities reported by \cite{2014ApJ...782....2L} in intensity and the complexity of the physics, it should not be assumed that the simple analytical models often used give reliable results. However, for slowly rotating Sun-like stars the effects considered almost certainly dominate over the ones neglected, at least for spectroscopic observations. For photometric observations the situation is different. Here the present method predicts essentially no perturbations and to obtain reliable estimates the details of the thermodynamic perturbations with height have to be understood in detail, as discussed above. It should also be mentioned that the issues discussed in this paper are also highly relevant for Doppler imaging and that the two areas have many similarities. \section{Conclusion} In summary it is clear that significant improvements can be made by careful modeling and analysis of the data and that while some of the changes are modest they should nonetheless be taken into account in the analysis of stellar spectra. Having said that, it is also clear that future work is needed in order to understand some of the remaining uncertainties and it would be useful to determine whether the observed mode visibilities agree with those predicted here.
\begin{acknowledgements} I would like to thank Benjamin Beeck, Regner Trampedach, Martin Bo Nielsen and Björn Löptien for useful discussions and help with various calculations. The HMI data used are courtesy of NASA/SDO and the HMI science team. \end{acknowledgements} \bibliographystyle{aa}
\chapter{Efficient Policy Learning via Transfer Learning} \label{chap:efficient_policy_learning} In Chapter \ref{chap:model_rl_based}, we introduced Q-learning and the recent, popular and effective technique for approximating the Q-function using a deep feed-forward neural network, known as Deep Q-Learning (DQN). In this chapter, we explain the DQN in depth, since we use it extensively in our dialogue systems. \section{Deep Q-Learning (DQN)} There are two types of DQN algorithms: $i)$ the standard DQN and $ii)$ the Double DQN (DDQN), which is an extension and more robust version of the standard DQN algorithm. \subsection{Standard DQN} Agents should be able to generalize well over a high-dimensional, partially observable and complex input. Exactly this was the difficulty that most RL algorithms were facing, so their applicability was mainly limited to domains with fully observable, finite and low-dimensional state spaces. However, all of this changed after the Deep Q-Network was introduced by a group of researchers at DeepMind \cite{mnih2015human}. It is a standard deep feed-forward neural network, which approximates $Q(s, a \mid \theta)$, where $\theta$ are the parameters (i.e. weights) of the Q-Network. As we already explained, the goal in reinforcement learning is to minimize Equation \ref{eq:objective_func}. However, in the RL community it is widely known that a nonlinear approximator of the Q-function, such as a neural network, causes instability and divergence. This is due to the following two reasons: \begin{enumerate} \item The correlation between the sequence of observations, i.e. every next state depends on the previous states and actions, and \item The targets (labels) depend on the network weights. More precisely, to calculate the target $r + \gamma \max_{a^{\prime}} Q \left( s^{\prime}, a^{\prime}\right)$, used as a correction, we use the same weights, which change over time.
This is in total contrast with supervised learning, where the targets are fixed before the learning starts. \end{enumerate} The first problem is solved by using a biologically inspired mechanism called \textit{experience replay}, i.e. learning from experiences from arbitrary points in the past. In order to implement such a mechanism, we store the agent's experiences $e_{t} = \left( s_{t}, a_{t}, r_{t}, s_{t+1} \right)$ in a dataset $D_{t} = \left\{e_{1}, \cdots , e_{t}\right\}$ of tuples. At training time, we then randomly sample experiences from the dataset $D_{t}$, following some probability distribution, which in the simplest case is a uniform distribution. Using this technique, we overcome the first problem, but we are still not able to train the network due to the second issue: the targets still depend on the network weights. For this reason, we introduce another Q-Network, called the \textit{target network}, with fixed parameters $\theta^{\prime}$. As its name suggests, this network is only used to calculate the targets $Q^{\star} \left( s^{\prime}, a^{\prime} | \theta^{\prime} \right)$, independently from the primary Q-Network. The target network parameters are only synchronized with the primary Q-Network parameters every $C$ steps and are held fixed between individual updates. Thus, with the randomly drawn mini-batch of experiences and the target Q-Network, we perform learning using the following loss function: \begin{equation} \label{eq:loss_function} \mathcal{L}(\theta) = \mathbb{E}_{s,a,r,s^{\prime}}\left[ \left( r + \gamma \max_{a^{\prime}} \overbrace{Q\left( s^{\prime}, a^{\prime} | \theta^{\prime} \right)}^{\text{calc. by target net}} - \overbrace{Q\left( s, a | \theta \right)}^{\text{calc. by Q-net}} \right)^{2} \right] \end{equation} The full algorithm for Deep Q-Learning with experience replay is described in more detail in Appendix \ref{appendix_a1}.
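A minimal sketch of these two stabilizing ingredients, the replay buffer and the frozen target network (the Q-function is a stand-in callable rather than a trained network, and all names are illustrative):

```python
import random
from collections import deque

import numpy as np

# Replay buffer of past experiences (s, a, r, s_next, done).
replay_buffer = deque(maxlen=10000)

def sample_batch(batch_size=32):
    """Uniformly sample a mini-batch of stored experiences."""
    k = min(batch_size, len(replay_buffer))
    return random.sample(list(replay_buffer), k)

def dqn_targets(batch, q_target, gamma=0.99):
    """Targets r + gamma * max_a' Q(s', a' | theta') of the loss above,
    computed with the frozen target network q_target."""
    targets = []
    for s, a, r, s_next, done in batch:
        t = r if done else r + gamma * float(np.max(q_target(s_next)))
        targets.append(t)
    return np.array(targets)
```

Every $C$ gradient steps the frozen parameters $\theta^{\prime}$ would be overwritten with the current $\theta$; between updates the targets stay fixed, which is exactly what decouples them from the weights being trained.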
To evaluate the performance of the DQN algorithm, the researchers at DeepMind took advantage of the Atari 2600 platform, offering 49 challenging games. The DQN algorithm outperformed the best existing reinforcement learning methods on 43 games without incorporating any prior knowledge about the Atari 2600 games. These outstanding results confirmed the strength of the DQN algorithm and established it as a state-of-the-art technique in the reinforcement learning community. \subsection{Double DQN (DDQN)} The standard DQN algorithm is based on the Bellman equation and thus includes a maximization step, as shown in Equation \ref{eq:loss_function}. For this reason, it can learn unrealistically high action values, since it tends to prefer overestimated over underestimated values. In \cite{van2016deep}, the authors theoretically prove that the overestimations occur non-uniformly and negatively affect the performance of the DQN algorithm. Therefore, they proposed an extension of the standard DQN algorithm, called Double DQN (DDQN), in order to overcome these issues. In the standard DQN algorithm, the decision for the next action is taken according to the following expression: $r + \gamma \max_{a^{\prime}} Q \left( s^{\prime}, a^{\prime}\right)$. For this reason, the same neural network is used both to evaluate the Q-function and to select the best action. In Double DQN, this process is decoupled: one neural network is used to evaluate the Q-function and a second neural network is used to select the best action. In mathematical notation, the next action is taken according to the following expression: $r + \gamma Q \left( s^{\prime}, \argmax_{a^{\prime}} Q \left(s^{\prime}, a^{\prime}\right) \right)$. \section{DQN for GO Chatbots} The applications of the DQN algorithm are not limited to the Atari 2600 games. Recently, researchers have started applying Deep Q-Learning to various tasks, including goal-oriented dialogue systems.
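The difference between the two target computations can be sketched as follows (a toy illustration with made-up names and Q-value arrays, not the authors' implementation):

```python
import numpy as np

def dqn_target(r, q_next_target, gamma=0.99):
    # Standard DQN: the same set of values both selects and evaluates
    # the action, which is what drives the overestimation.
    return r + gamma * float(np.max(q_next_target))

def ddqn_target(r, q_next_online, q_next_target, gamma=0.99):
    # Double DQN: the online network selects the action,
    # the target network evaluates it.
    a_star = int(np.argmax(q_next_online))
    return r + gamma * float(q_next_target[a_star])
```

When the two networks disagree about the best action, the Double DQN target is never larger than the standard DQN target computed from the same target-network values, which is the mechanism that damps the overestimation bias.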
In the case of Goal-Oriented Chatbots, the agent receives the new state $s_{t}$ from the Dialogue State Tracker (DST) and then takes a new action $a_{t}$ based on the $\epsilon$-greedy policy: with probability $\epsilon \in \left[0,1\right]$ it takes a random action, while with probability $1 - \epsilon$ it takes the action with the maximal Q-value. We thus trade off exploration against exploitation of the dialogue space. For each slot that might appear in the dialogue, the agent can take two actions: either ask the user for a constraining value or suggest a value for that slot to the user. Additionally, there is a fixed set of slot-independent actions, e.g. to open and close the conversation. The agent receives positive and negative rewards designed to push it toward successfully conducting the dialogue. A dialogue is \textit{successful} if the total number of dialogue turns required to reach the goal is less than a predefined maximal threshold $n_{max\_turns}$. For every additional dialogue turn, the agent receives a predefined negative reward $r_{ongoing}$. In the end, if the dialogue fails, it receives a negative reward $r_{negative}$ equal to the negative of the predefined maximal allowed dialogue turns. If the dialogue is successful, it receives a positive reward $r_{positive}$ of two times the maximal allowed dialogue turns. An important addition is the \textit{warm-starting} technique, which fills the experience replay buffer with experiences coming from successfully finished dialogues, i.e. with positive experiences. This dramatically boosts the agent's performance before the actual training starts, as will be shown in Section \ref{sec:warmstart}. The training process then runs a fixed number of independent training epochs $n_{epochs}$. In each epoch we simulate a predefined number of dialogues $n_{dialogues}$, thus filling the experience memory buffer.
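The $\epsilon$-greedy action selection and the reward scheme described above can be sketched as follows; the per-turn value of $-1$ for $r_{ongoing}$ matches the setting used later in the experiments and is fixed here purely for illustration:

```python
import random

import numpy as np


def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon take a random action (exploration);
    otherwise take the action with the maximal Q-value (exploitation)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))


def dialogue_reward(done, success, n_max_turns=20):
    """Reward scheme described above: r_ongoing per extra turn,
    r_negative = -n_max_turns on failure, r_positive = 2 * n_max_turns
    on success."""
    if not done:
        return -1                                  # r_ongoing
    return 2 * n_max_turns if success else -n_max_turns
```

For $n_{max\_turns} = 20$ this yields $r_{negative} = -20$ and $r_{positive} = 40$, the values used in the baseline experiments.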
The buffer then provides the mini-batches used to train the underlying Deep Q-Net. During the training process, when the agent reaches for the first time a success rate greater than or equal to the success rate of a rule-based agent, $s_{rule\_based}$, we flush the experience replay buffer, as described in detail in~\cite{li2017end}. We do this because the DQN-based agent cannot produce valuable experiences in the beginning, yet all of them are stored in the experience buffer. \section{Transfer Learning} The main goal of this work is to study the impact of a widely used technique, \textit{Transfer Learning}, on goal-oriented bots. As the name suggests, transfer learning transfers knowledge from one neural network to another. The former is known as the source, while the latter is the target \cite{pan2010survey}. The goal of the transfer is to achieve better performance on the target domain with a limited amount of training data, while benefiting from additional information from the source domain. In the case of dialogue systems, the input spaces of both the source and target nets are their respective dialogue spaces. \begin{figure}[h!] \centering \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{images/no_transfer_Learning_color.png} \caption{} \label{subfig:no_transfer_learning} \end{subfigure}% \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=\textwidth]{images/transfer_learning_color.png} \caption{} \label{subfig:transfer_learning} \end{subfigure} \caption{Comparison of the Goal-Oriented Dialogue System training process, without transfer learning (left side) and with transfer learning (right side).} \label{fig:no_transfer_learning_vs_transfer_learning} \end{figure} The training process without transfer learning, shown in Figure \ref{subfig:no_transfer_learning}, handles the two dialogue domains independently, starting from randomly initialized weights. As a result, the dialogue states come from separate distributions.
Additionally, the sets of actions the agents might take in each domain are also independent. On the other hand, as depicted in Figure \ref{subfig:transfer_learning}, if we want to benefit from transfer learning, we must model the dialogue state in both domains as if it came from the same distribution. The sets of actions have to be shared, too. The bots specialized in the source domain must be aware of the actions in the second domain, even if these actions are never used, and vice versa. This requirement stems from the impossibility of reusing the neural weights if the input and output spaces differ. Consequently, when we train the model on the source domain, the state of the dialogue depends not only on the slots that are specific to the source, but also on those that appear only in the target. This insight generalizes to a plurality of source and target domains. The same holds for the set of actions. \begin{algorithm}[h!] \caption{Transfer Learning Pseudocode} \label{alg:transfer_learning} \begin{algorithmic}[1] \Procedure{InitializeWeights}{sourceWeights, commonSlotIndices, commonActionIndices} \State $ targetWeights \gets \textit{RandInit()} $ \For{$i$ in $\textit{commonSlotIndices}$} \State $ \textit{targetWeights} \left[ i \right] \gets sourceWeights \left[ i \right] $ \EndFor \For{$i$ in $\textit{commonActionIndices}$} \State $ \textit{targetWeights} \left[ i \right] \gets sourceWeights \left[ i \right] $ \EndFor \State \textbf{return} \textit{targetWeights} \EndProcedure \end{algorithmic} \end{algorithm} When training the target domain model, we no longer randomly initialize all weights. The weights related to the source domain, both for slots and actions, are copied from the source model. The pseudocode for this weight initialization is shown in Algorithm \ref{alg:transfer_learning}. \chapter{Overview of Dialogue Systems} \label{chap:dial_sys} This chapter gives an overview of Dialogue Systems in general.
Research on dialogue systems aims to create comprehensive systems that can hold a real conversation, successfully covering all its aspects: reasoning, giving well-defined and sensible responses, emotion detection, etc. Depending on the nature of the input and output, there are two types of Dialogue Systems: $i)$ Spoken Dialogue Systems (SDS) and $ii)$ Text-Based Dialogue Systems, colloquially known as Chatbots. Fundamentally, both types exhibit many similarities; only the pre- and post-processing techniques differ. There is no consensus on the architecture of Dialogue Systems; it is case-dependent. In general, they are always composed of two parts: $i)$ the user, whether real or simulated, and $ii)$ the internal system. One good reference is given in \cite{pieraccini2005we}. In any case, both parts converse in an alternating manner, such that a \textit{dialogue turn} is one cycle of consecutive utterances from the user and the system. \section{Spoken Dialogue Systems} \textit{Spoken Dialogue Systems} are designed for environments that do not include user interfaces such as large screens and keyboards; instead, they use a microphone and speakers. \cite{henderson2015discriminative} shows a typical composition of a Spoken Dialogue System as well as the information flow. The input and output are continuous speech signals, which require special modules and techniques to handle (see Figure \ref{fig:sds_pipeline}), including: \begin{itemize} \item The \textit{Automatic Speech Recognition} (ASR) unit \cite{zhang2017towards} assigns probabilities to the words in the user utterance. \item The \textit{Spoken Language Understanding} (SLU) unit \cite{yao2014spoken} infers the semantics of the user input. \item The \textit{Speech Synthesis} (SS) unit \cite{zen2009statistical} converts the system's response into speech.
\end{itemize} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{SDS_pipeline} \caption{The architecture of a Spoken Dialogue System, from \cite{henderson2015discriminative}} \label{fig:sds_pipeline} \end{figure} \section{Text-Based Dialogue Systems (Chatbots)} Contrary to Spoken Dialogue Systems, Text-Based Systems (known as Chatbots) focus on interfaces with screens and keyboards, meaning the input and output are text. Consequently, they include appropriate units and algorithms to handle text. Depending on the nature of the conversation, Chatbots are divided into Open-Domain Chatbots and Closed-Domain Chatbots, known as Goal-Oriented (GO) Chatbots. In the open-domain setting, the conversation can go in any direction, usually in the form of chit-chatting, without any particular purpose. Because they must cover every possible case, it is almost impossible to create a perfect open-domain Chatbot, as shown in \cite{serban2016building}. Thus, most chatbot research focuses on closed-domain Chatbots, which is also the case in this thesis. \section{Goal-Oriented (GO) Chatbots} Goal-Oriented (GO) Dialogue Systems are more useful and practical, and are easier to implement, because their domain of expertise is much narrower, focusing only on a few key points of the dialogue. In general, there are two dominant paradigms in GO Dialogue System implementations: $i)$ Fully-Supervised and $ii)$ Reinforcement Learning (RL) based. \subsection{Fully-Supervised GO Chatbots} In the fully supervised implementation, we apply recurrent neural network (RNN) encoder-decoder principles, originally developed for machine translation. Examples of such models are presented in \cite{bordes2016learning,wen2016network}. These models are trained in a sequence-to-sequence fashion \cite{sutskever2014sequence}: they encode the user request and its context, and decode the bot answer directly.
Fully supervised GO chatbots require a considerable amount of annotated human-human or human-machine dialogues, since the system tries to mimic the knowledge of the expert. Moreover, we do not have control over the internal state, which means we cannot model the dialogue as we wish. \subsection{Reinforcement Learning (RL) based GO Chatbots} On the other hand, we can model GO Chatbots as a Partially Observable Markov Decision Process (POMDP) \cite{young2013pomdp}. Reinforcement Learning (RL) \cite{sutton1998reinforcement} offers a rich set of powerful and promising algorithms that can be applied in this case. \begin{figure}[t!] \centering \includegraphics[width=\textwidth] {images/Dialogue_System} \caption{Text-Based Dialogue System modeled as a Partially Observable Markov Decision Process. The user utterance is parsed by the NLU unit, producing a dialogue act understandable to the system. In the Dialogue Manager, the state tracker estimates the state so that the RL agent can take the best action. This action is further passed to the NLG unit and finally presented to the user in a human-readable form.} \label{fig:dialogue_systems} \end{figure} Figure \ref{fig:dialogue_systems} shows a typical composition of an RL-based GO Chatbot, as well as the information flow. Following the pipeline, there are three separate components, each with a specific role in the process: \begin{enumerate} \item First, the \textit{Natural Language Understanding} unit \cite{hakkani2016multi} infers the semantics of the user input. This includes understanding the user intent and the slots (i.e. the relevant information). \item Second, the \textit{Dialogue Manager} (DM) takes care of the dialogue. Based on the context and the previous user and system actions, it produces the next system action.
It usually includes two subcomponents: $i)$ the \textit{Dialogue State Tracker} (DST) \cite{henderson2015machine}, whose purpose is to build a reliable state of the dialogue, and $ii)$ a \textit{Policy Learning} module, which reads the dialogue states and takes the next system action. \item Finally, the \textit{Natural Language Generation} unit \cite{wen2015semantically}, based on the DM output, generates a natural sentence understandable to the end user. \end{enumerate} RL-based Chatbots require fewer annotated dialogues than their sequence-to-sequence counterparts, due to their ability to simulate the conversation and thus explore the unknown dialogue space more efficiently. The data requirements are, however, not trivial, and obtaining dialogue data is still the biggest obstacle their creators face. In the following chapters we dive deeper into RL-based GO Chatbots and show how these obstacles can be overcome using \textit{Transfer Learning}. \chapter{Experiments and Results} \label{chap:experiments_results} In this chapter we present our main experiments and the obtained results. Our work is based on \cite{li2017end}. For this reason, in the first part we briefly present the baseline experiments and results. In the second part we focus on the transfer learning experiments. \section{Baseline Experiments} \begin{figure}[b!]
\centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{frame_level_success_rate.png} \caption{Learning curve on a semantic level} \label{subfig:frame_level_base} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.99\textwidth]{nl_level_success_rate.png} \caption{Learning curve on a natural language level} \label{subfig:nl_level_base} \end{subfigure} \caption{Baseline experimental results for 100 runs with a 95\% confidence interval} \label{fig:baseline_exp} \end{figure} In all baseline experiments, the Chatbot is trained on the \textit{Movie Booking} domain. The slot types for this domain are given in Figure \ref{fig:slot_type}. The size of the training set is 128 user goals. The maximal number of allowed dialogue turns is set to $n_{max\_turns} = 20$; thus the negative reward for a failed dialogue is $r_{negative} = -20$, while the positive reward for a successful dialogue is $r_{positive} = 40$. In all experiments we use warm-starting and train for $n_{epochs} = 1000$ epochs, each simulating $n_{dialogues} = 100$ dialogues. This is a very large number of epochs, and we believe the chatbot overfits; however, our intention was to reproduce the results of the baseline paper. We present the results for both levels: the semantic level and the natural language level. Figure \ref{subfig:frame_level_base} shows the learning curve for training on the semantic level, while Figure \ref{subfig:nl_level_base} shows the learning curve for training on the natural language level. The same experiment is repeated 100 times, so the results are reported with a 95\% confidence interval. Due to the noise introduced by the NLU and NLG units, the Chatbot's performance on the natural language level is considerably lower than its performance on the semantic level.
\section{Transfer Learning Experiments} In this set of experiments, we operate on the semantic level, removing the noise introduced by the NLU and NLG units. We want to focus exclusively on the impact of transfer learning techniques on dialogue management. The details of the system implementation\footnote{Link to the GitHub repository: \href{https://github.com/IlievskiV/Master_Thesis_GO_Chatbots}{https://github.com/IlievskiV/Master\_Thesis\_GO\_Chatbots}} are presented in Appendix \ref{appenix_a3}. \subsection{Setup of Experiments} All experiments follow the same setup template. First, we train a model on the \textit{source domain} and reuse the common knowledge to boost the training and testing performance of a model trained on a different, but similar, \textit{target domain}. Second, we train a model exclusively on the target domain, without any prior knowledge; this serves as a baseline. Finally, we compare the results of these two models. We consider two different cases: \begin{enumerate} \item \textit{Domain Overlap}: the source (\textit{Movie Booking}) and target (\textit{Restaurant Booking}) domains are different, but share a fraction of the slots. \item \textit{Domain Extension}: the source domain, now \textit{Restaurant Booking}, is extended to \textit{Tourist Information}, which contains all the slots from the source domain along with some additional ones. \end{enumerate} \begin{figure}[h!] \centering \includegraphics[width=.75\textwidth]{domains_venn_diagram.png} \caption{Slot types in the three different domains} \label{fig:slot_type} \end{figure} The source domain in the domain overlap case was chosen to enable a direct comparison with the results of~\cite{li2017end}, who built a GO bot for movie booking. For the domain extension case, the only combination available was \textit{Restaurant -- Tourism}. The slot types in each domain are given in Figure \ref{fig:slot_type}.
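The weight initialization of Algorithm \ref{alg:transfer_learning}, which underlies both the domain overlap and the domain extension case, can be sketched in Python; treating each slot or action as one row of a single weight matrix is a simplification we adopt purely for illustration:

```python
import numpy as np


def initialize_weights(source_weights, common_indices, rng=None):
    """Copy the rows of the source-domain weight matrix that correspond to
    slots (or actions) shared with the target domain; all remaining rows
    are randomly initialized, as in the Transfer Learning pseudocode."""
    rng = rng or np.random.default_rng(0)
    target_weights = rng.standard_normal(source_weights.shape)
    for i in common_indices:
        target_weights[i] = source_weights[i]
    return target_weights
```

In the domain extension case every source index is a common index, which is why no source-domain information is lost there.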
For each domain, we have a training set of 120 user goals and a testing set of 32 user goals. Following the setup template described above, we conduct two sets of experiments for each of the two cases. The first set shows the overall performance of the models leveraging the transfer learning approach. The second set shows the effects of warm-starting used jointly with the transfer learning technique. In all experiments with warm-starting, the criterion is to fill the agent's buffer until 30 percent of it consists of positive experiences (coming from successful dialogues). After that, we train for $n_{epochs} = 50$ epochs, each simulating $n_{dialogues} = 100$ dialogues. We flush the agent's buffer when the agent reaches, for the first time, a success rate of $s_{rule\_based} = 0.3$. We set the maximal number of allowed dialogue turns $n_{max\_turns}$ to 20; thus the negative reward $r_{negative}$ for a failed dialogue is $-20$, while the positive reward $r_{positive}$ for a successful dialogue is $40$. For each ongoing dialogue turn over the course of the conversation, the agent receives a negative reward of $r_{ongoing} = -1$. In all cases we set $\epsilon = 0.05$ to leave room for exploration. With these hyperparameters, we prevent the system from overfitting, so that it generalizes well over the dialogue space. Finally, we report the success rate as the performance measure. \subsection{Training with Less Data} Due to labeling costs, the availability of in-domain data is the bottleneck for training successful and high-performing Goal-Oriented chatbots. We therefore study the effect of transfer learning on training bots in data-constrained environments. From the 120 user goals available in each domain's training set, we randomly select subsets of 5, 10, 20, 30, 50, and all 120 user goals. We then warm-start and train both the independent and transfer learning models on these sets.
We test the performance on both the training set (\textit{training performance}) and the full set of 32 test user goals (\textit{testing performance}). We repeat the same experiment 100 times in order to reduce the uncertainty introduced by the random selection. Finally, we report the success rate over the user goal portions with a 95\% confidence interval. \begin{figure}[h!] \centering \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=.49\textwidth]{training_over_user_goal_portions_rest_booking.png} \includegraphics[width=.49\textwidth]{testing_over_user_goal_portions_rest_booking.png} \caption{Restaurant Booking with pre-training on Movie Booking domain} \label{subfig:warm_up_rest_booking} \end{subfigure}\\% \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=.49\textwidth]{training_over_user_goal_portions_tourist_info.png} \includegraphics[width=.5\textwidth]{testing_over_user_goal_portions_tourist_info.png} \caption{Tourist Info with pre-training on Restaurant Booking domain} \label{subfig:warm_up_tourist_info} \end{subfigure} \caption{Average training and testing success rates with 95\% confidence, for 100 runs over randomly selected user goal portions of size 5, 10, 20, 30, 50 and 120, for both models: with and without transfer learning.} \label{fig:warming_up_user_goal_portions} \end{figure} The training and testing results in the first case, domain overlap, are shown in Figure \ref{subfig:warm_up_rest_booking}. The success rate of the model obtained with transfer learning is 65\% higher than that of the model trained without any external prior knowledge: in absolute terms, the success rate climbs on average from 30\% to 50\%. On the test dataset, transfer learning improves the success rate from 25\% to 30\%, a still noteworthy 20\% relative improvement. In the case of domain extension, the difference between the success rates of the two models is even larger (Figure \ref{subfig:warm_up_tourist_info}).
This was expected, as the extended target domain contains all slots from the source domain, so no source domain information is lost. The overall relative success rate boost over all user goal portions is on average 112\%, i.e. a move from 40\% to 85\% in absolute terms. On the test set, the difference is even larger: from 22\% to 80\% absolute success rate, a 263\% relative boost. These results show that by transferring knowledge from the source domain, we boost the target domain performance in data-constrained regimes. \subsection{Faster Learning} \label{sec:warmstart} In a second round of experiments, we study the effects of transfer learning both in the absence of, and in combination with, the warm-starting phase. As warm-starting requires additional labeled data, removing it further reduces the amount of labeled data needed. We also show that the two methods are compatible, leading to very good joint results. \begin{figure}[h!] \centering \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=.49\textwidth]{learning_curve_training_data_set_rest_booking.png} \includegraphics[width=.49\textwidth]{learning_curve_testing_data_set_rest_booking.png} \caption{Restaurant Booking with pre-training on Movie Booking domain} \label{subfig:no_warm_up_rest_booking} \end{subfigure}\\% \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=.49\textwidth]{learning_curve_training_data_set_tourist_info.png} \includegraphics[width=.5\textwidth]{learning_curve_testing_data_set_tourist_info.png} \caption{Tourist Info with pre-training on Restaurant Booking domain} \label{subfig:no_warm_up_tourist_info} \end{subfigure} \caption{Average training and testing success rates with 95\% confidence, for 100 runs over the number of epochs, for both models: with and without transfer learning.
The model with transfer learning is not warm-started.} \label{fig:no_warming_up_learning_curve} \end{figure} We report the training and testing learning curves (success rate over the number of training epochs), using the full dataset of 120 training user goals and the test set of 32 user goals. We repeat the same process 100 times and report the results with a 95\% confidence interval. The performance in the first case, domain overlap, is shown in Figure \ref{subfig:no_warm_up_rest_booking}, while that for the other case, domain extension, is shown in Figure \ref{subfig:no_warm_up_tourist_info}. The bot using transfer learning, but no warm-starting, shows better learning performance than the warm-started model without transfer learning. Transfer learning is thus a viable alternative to warm-starting. \begin{figure}[h!] \centering \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=.5\textwidth]{all_cases_learning_curve_training_data_set_rest_booking.png} \includegraphics[width=.49\textwidth]{all_cases_learning_curve_testing_data_set_rest_booking.png} \caption{Restaurant Booking with pre-training on Movie Booking domain} \label{subfig:all_cases_rest_booking} \end{subfigure}\\% \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=.49\textwidth]{all_cases_learning_curve_training_data_set_tourist_info.png} \includegraphics[width=.5\textwidth]{all_cases_learning_curve_testing_data_set_tourist_info.png} \caption{Tourist Info with pre-training on Restaurant Booking domain} \label{subfig:all_cases_tourist_info} \end{subfigure} \caption{Success rates for all model combinations - with and without Transfer Learning (TF), with and without Warm Starting (WS).} \label{fig:all_cases_learning_curve} \end{figure} However, models based on transfer learning exhibit significant variance as learning progresses. This happens because in many experiment runs the success rate remains 0 over all epochs.
In those cases, the agent does not find a good way to learn the policy in the early stages of the training process. As a result, its experience replay buffer is filled mostly with negative experiences, and in the later stages the agent is not able to recover. This makes a combination with warm-starting desirable. For convenience, in Figure \ref{fig:all_cases_learning_curve} we show all possible combinations of the transfer learning and warm-starting techniques. We can see that the model that combines the two techniques performs best by a wide margin. This leads to the conclusion that transfer learning is complementary to warm-starting, and their joint application brings the best outcomes. \chapter{Related Work} \label{chap:related_work} \section{Goal-Oriented (GO) Dialogue Systems} Goal-Oriented (GO) Dialogue Systems have been under development for the past two decades, starting from basic, handcrafted Dialogue Systems. For instance, \cite{larsson2000information} introduced a framework for dialogue management development based on hand-crafted rules. In the same direction, \cite{zue2000juplter} built a hand-crafted Goal-Oriented Chatbot for weather information. Recent efforts to build such systems are generally divided into three lines of research. \subsection{Fully-Supervised Models} The first is to treat them in an end-to-end, fully supervised sequence-to-sequence manner \cite{sutskever2014sequence}. We can thus use the power of deep neural networks, based on the encoder-decoder principle, to infer the latent representation of the dialogue state. However, it is worth noting that these models require a considerable amount of data. The authors of \cite{vinyals2015neural} used standard Recurrent Neural Networks (RNNs) and trained a Goal-Oriented Chatbot in a straightforward sequence-to-sequence fashion.
They benchmarked their findings in an IT helpdesk troubleshooting domain, where customers face computer-related issues and a specialist helps them by conversing and walking through a solution. Due to the inability of recurrent nets to compress very long dependencies, this chatbot lacks strong reasoning power. To overcome the RNN memory limitations, \cite{bordes2016learning} used LSTM cells in combination with explicit memory, known as Memory Networks \cite{sukhbaatar2015end}, to build a Goal-Oriented Chatbot using the bAbI\footnote{https://research.fb.com/downloads/babi/} tasks for restaurant reservation. This chatbot demonstrated better reasoning power, remembering and updating past user preferences. For this reason, this work represents a testbed for probing the shortcomings and strengths of fully-supervised, end-to-end Goal-Oriented Dialogue Systems. \subsection{Reinforcement Learning-based Models} Another branch of research has emerged, focusing on Deep Reinforcement Learning, because the fully-supervised approach is data-intensive. These models have a rather complex structure, since they include many submodules, such as Natural Language Understanding (NLU)~\cite{hakkani2016multi} and Natural Language Generation (NLG)~\cite{wen2015semantically} units, as well as a Dialogue State Tracker (DST). Aligned with this direction, \cite{cuayahuitl2017simpleds} created a simple Dialogue System for a restaurant reservation domain. The system's actions depend solely on the RL-based agent, which performs action selection directly from the raw text of the last user and system responses instead of relying on manual feature engineering. Therefore, this system does not include any language understanding or state tracking units, which in turn is quite constraining. Another simple Question-Answering Chatbot (Q\&A bot) is presented in \cite{dhingra2016end}.
It is an end-to-end Reinforcement Learning Chatbot which helps users search Knowledge Bases (KBs) for movies without composing complicated queries. Extending all of the previous work, \cite{li2017end} went one step further and built a comprehensive Movie Booking Chatbot. It includes a User Simulator \cite{li2016user}, which simulates the user during the training process, Natural Language Understanding (NLU) and Natural Language Generation (NLG) units, as well as a basic Dialogue State Tracker and a Policy Learning module. It is trained in an end-to-end fashion, leveraging Deep Q-Nets (DQN) \cite{mnih2015human} for policy learning. \subsection{Hybrid Models} This line of research combines the Fully-Supervised and Reinforcement Learning-based approaches in order to escape the limitations characteristic of each. In their work, \cite{su2016continuously} described a two-step approach to training a policy for a Goal-Oriented Chatbot. In the first step, the algorithm is trained on a fixed corpus in a supervised way. In the second step, the policy is fine-tuned using an RL-based policy gradient technique \cite{williams1992simple}, in order to explore the dialogue space more efficiently. Similarly, \cite{williams2017hybrid} proposed training a Goal-Oriented Dialogue System in two modes: off-line and on-line. In the off-line mode, the system is trained in a fully-supervised manner, combining an LSTM network with hand-crafted templates to mitigate the data requirements. Afterwards, in the on-line mode, the system learns autonomously by incorporating an RL-based policy gradient approach. \section{Data-Constrained Dialogue Systems} One desired property of Goal-Oriented Chatbots is the ability to switch to new domains without losing any knowledge learned from training on the previous ones. This property matters because of the lack of in-domain data required to train high-quality Goal-Oriented Chatbots.
In this direction, the authors of \cite{gavsic2015distributed} proposed a Gaussian Process-based technique to learn generic dialogue policies, organized in a class hierarchy. These policies can then be adjusted to the use case of the dialogue system with a modest amount of data. On the other hand, \cite{wang2015learning} learned domain-independent dialogue policies by parametrizing the ontology of the domains. In this way, they showed that a policy optimized for a restaurant search domain can be successfully deployed to a laptop sales domain. Last but not least, \cite{lee2017toward} utilized a continual learning technique to smoothly add new knowledge to neural networks that specialize a dialogue policy in an end-to-end, fully-supervised manner. Nevertheless, none of the aforementioned papers tackles the problem of transferring domain knowledge when the dialogue policy is optimized using Deep Reinforcement Learning. In this thesis, we propose such a method, based on the standard Transfer Learning technique~\cite{pan2010survey}, thereby overcoming the limitations on transferring in-domain knowledge in Goal-Oriented Dialogue Systems based on Deep Reinforcement Learning. \chapter{Model of RL-based GO Chatbots} \label{chap:model_rl_based} If we model GO Dialogue Systems as a POMDP and apply Reinforcement Learning techniques as in \cite{zhao2016towards,li2017end}, then the system comprises several components, as shown in Figure \ref{fig:dialogue_systems}. It consists of two independent units: the \textit{User Simulator} on the left side and the \textit{Dialogue Manager} (DM) on the right side. In between are the Natural Language Understanding (NLU) and Natural Language Generation (NLG) units. Our work is based on the model from~\cite{li2017end}, who proposed an end-to-end reinforcement learning approach to building a Goal-Oriented Chatbot in a movie booking domain.
Goal-Oriented bots contain an initial NLU component that is tasked with determining the user's \textit{intent} (e.g. \textit{book a movie ticket}) and its parameters, also known as \textit{slots} (e.g. date: \textit{today}, count: \textit{three people}, time: \textit{7 pm}). The usual practice in RL-based Goal-Oriented Chatbots is to define the user-bot interactions as \textit{semantic frames}. At each point $\mathbf{t}$ in time, given the user utterance $\mathbf{u_{t}}$, the system needs to perform an action $\mathbf{a_{t}}$. A bot action is, for instance, to request a value for an empty slot or to give the final result. The entire dialogue can be reduced to a set of \textit{slot-value} pairs, called \textit{semantic frames}. Consequently, the conversation can be executed on two distinct levels: \begin{enumerate} \item \textbf{Semantic level:} the user sends and receives only semantic frames as messages. \item \textbf{Natural language level:} the user sends and receives natural language sentences, which are reduced to, or derived from, a semantic frame by using the Natural Language Understanding (NLU) and Natural Language Generation (NLG) units respectively~\cite{wen2015semantically,hakkani2016multi}. \end{enumerate} For instance, in the movie booking domain one semantic frame could be defined as: \makebox[\textwidth]{\textit{\{movie\_name: ``Titanic'', number\_of\_people: ``2'', theater, intent=``request''\}, }} which in natural language could be written as:\\ \makebox[\textwidth]{\textit{``In which theater can I book 2 tickets for the movie Titanic?''}} By exchanging this kind of data structure, the user and the system can convey the entire conversation until reaching the goal. \section{User Simulator} The \textit{User Simulator} creates a user-bot conversation, given the semantic frames. Because the model is based on Reinforcement Learning, a dialogue simulation is necessary to successfully train the model \cite{li2016user}.
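To make the semantic-frame exchange concrete, a frame can be sketched as a plain dictionary of slot-value pairs plus an intent. The slot names, values and the \texttt{UNK} placeholder below are illustrative assumptions, not the exact conventions of the system described in this thesis:

```python
# A semantic frame: slot-value pairs plus an intent (names are illustrative).
user_frame = {
    "intent": "request",
    "inform_slots": {"movie_name": "Titanic", "number_of_people": "2"},
    "request_slots": {"theater": "UNK"},  # the user asks the bot to fill this
}

# A possible bot answer on the semantic level: inform the requested slot.
bot_frame = {
    "intent": "inform",
    "inform_slots": {"theater": "Pathe Downtown"},
    "request_slots": {},
}

def unfilled_slots(frame):
    """Return the request slots the other party still has to fill."""
    return [s for s, v in frame["request_slots"].items() if v == "UNK"]

print(unfilled_slots(user_frame))  # → ['theater']
```

A dialogue on the semantic level is then just an alternating exchange of such dictionaries until the user's request slots are all filled.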
From the dataset of available user goals, the User Simulator randomly picks one, which is unknown to the Dialogue Manager. The user goal consists of two different sets of slots: \textit{inform slots} and \textit{request slots}. \begin{itemize} \item \textit{Inform slots} are the slots for which the user knows the value, i.e. they represent the user constraints (e.g. \{movie\_name: ``avengers'', number\_of\_people: ``3'', date: ``tomorrow''\}). \item \textit{Request slots} are the ones for which the user is looking for an answer (e.g. \{ city, theater, start\_time \} ). \end{itemize} Having the user goal as an anchor, the user simulator generates the \textit{user utterances} $\mathbf{u_{t}}$. The initial user utterance, similar to the user goal, consists of the initial inform and request sets of slots. Additionally, it includes a user intent, like \textit{open dialogue} or \textit{request additional info}. The user utterances generated over the course of the conversation follow an agenda-based model~\cite{schatzmann2009hidden}. According to this model, the user has an internal state $\mathbf{s_{u}}$, which consists of a goal $G$ and an agenda $A$. The goal is furthermore split into user constraints $C$ and user requests $R$. In every consecutive time step $\mathbf{t}$, the user simulator creates the user utterance $\mathbf{u_{t}}$ using its current state $\mathbf{s_{u}}$ and the last system action $\mathbf{a_{t}}$. In the end, using the newly generated user utterance $\mathbf{u_{t}}$, it updates its internal state to $\mathbf{s^{\prime}_{u}}$. \section{Natural Language Understanding Unit} The \textit{NLU} unit is responsible for transforming the user utterance into a predefined \textit{semantic frame} according to the system's conventions, i.e. into a format understandable by the system. This includes the tasks of slot filling and intent detection.
For example, the intent could be a \textit{greeting}, like \textit{Hello}, \textit{Hi}, \textit{Hey}, or it could have an \textit{inform} nature, for example \textit{I like Indian food}, where the user is giving some additional information. Depending on the domain of interest, the slots can be very diverse, like the \textit{actor name}, \textit{price}, \textit{start time}, \textit{destination city}, etc. As we can see, the intents and the slots define the closed-domain nature of the Chatbot. The task of slot filling and intent detection is seen as a sequence tagging problem. For this reason, the NLU component is usually implemented as an LSTM-based recurrent neural network with a Conditional Random Field (CRF) layer on top of it. The model presented in \cite{hakkani2016multi} is a sequence-to-sequence model using a bidirectional LSTM net, which fills the slots and predicts the intent at the same time. On the other hand, the model in \cite{liu2016attention} does the same using an attention-based RNN. To achieve such a task, the dataset labels consist of: concatenated B--I--O (Begin, Inside, Outside) slot tags, the intent tag and an additional end-of-string (EOS) tag. As an example, in a restaurant reservation scenario, given the sentence \textit{Are there any French restaurants in Toronto downtown?}, the task is to correctly output, or fill, the following slots: \textit{\{cuisine: French\}} and \textit{\{location: Toronto downtown\}}. Table \ref{table:bio_example} shows how we would correctly tag the previous example. One very effective technique to build better NLU units with less data (based on the \textit{Active Learning} methodologies) is presented in \cite{dimovski2018submodularity}. \begin{table}[h!] \centering \caption{An example of tagging a sentence in B--I--O (Begin, Inside, Outside) format} \label{table:bio_example} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \rowcolor[HTML]{C0C0C0} Are & there & any & French & restaurants & in & Toronto & downtown?
\\ \hline O & O & O & B-Cuisine & O & O & B-Location & I-Location \\ \hline \end{tabular} \end{table} \section{Natural Language Generator Unit} The NLG unit, on the other hand, is the glue between the system and the user. Given the system response as a \textit{semantic frame}, it maps it back to a natural language sentence understandable by the end user. The \textit{NLG} component can be rule-based or model-based. In some scenarios it can be a hybrid model, i.e. a combination of both. A rule-based NLG outputs predefined template sentences for a given \textit{semantic frame}; thus, it is very limited, without any generalization power. For this reason, rule-based NLGs are only used on special occasions. On the other hand, the model-based NLG units have learnable parameters and are usually trained in a sequence-to-sequence fashion. The models presented in \cite{wen2016conditional,wen2015semantically} use an LSTM-decoder with a given \textit{semantic frame} to generate template-like sentences with slot placeholders. Afterwards, a beam search is applied to replace the placeholders with actual values. \section{Dialogue Manager} At the core of the GO Dialogue Systems lies the \textit{Dialogue Manager} (DM), supported by the NLU and NLG units. Additionally, the DM can be connected to an external Knowledge Base (KB) or Data Base (DB), such that it can produce more meaningful answers. The Dialogue Manager consists of the following two components: the \textit{Dialogue State Tracker} (DST) and the \textit{Policy Learning} module, which is the RL agent. The \textit{Dialogue State Tracker} (DST) is a complex and essential component that should correctly infer the belief about the state of the dialogue, given all the history up to that turn. The \textit{Policy Learning} module is responsible for selecting the best action, i.e. the system response to the user utterance, that should lead the user towards achieving the goal in a minimal number of dialogue turns.
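As a minimal illustration of the rule-based, template-driven NLG described in the previous section, one can map a semantic frame to a template with slot placeholders and fill them in. The templates and slot names below are illustrative assumptions, not the ones used in the actual system:

```python
# Minimal rule-based NLG sketch: pick a template by intent and fill slot
# placeholders with the values from the semantic frame.
# Templates and slot names are illustrative, not from the actual system.
TEMPLATES = {
    "inform": "The movie plays at {theater}.",
    "request": "In which {slot} would you like to book?",
}

def generate(frame):
    template = TEMPLATES[frame["intent"]]
    if frame["intent"] == "request":
        # Ask about the first slot the system still needs from the user.
        slot = next(iter(frame["request_slots"]))
        return template.format(slot=slot)
    return template.format(**frame["inform_slots"])

frame = {"intent": "inform",
         "inform_slots": {"theater": "Pathe Downtown"},
         "request_slots": {}}
print(generate(frame))  # → The movie plays at Pathe Downtown.
```

This also makes the limitation visible: every intent needs a hand-written template, which is exactly why model-based NLG units are preferred.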
\subsection{Dialogue State Tracker} The \textit{Dialogue State Tracker} (DST) produces a meaningful state $s_{t}$ of the dialogue up to time $t$. The state $s_{t}$ is a data structure that should depict the state of the conversation to a level of detail that provides all the information necessary for an intelligent agent to easily and reliably select the next action. The tracker takes all observable input up to time $t$, which includes all the user utterances and system actions taken so far, as well as all the results from the \textit{NLU} unit. Additionally, it might include external knowledge provided in a knowledge base or a data base. For example, in a restaurant search scenario, the state might indicate the user's price range and cuisine preferences, as well as what information they are seeking, like a telephone number or an address. Given all of this information, a robust dialogue state tracker outputs a distribution $p\left( s \right)$ over all possible dialogue states. This is due to the fact that the true state is not fully observable from the raw input. Several factors contribute to that: ambiguous or underspecified user utterances, the noise and errors from the \textit{NLU}, changes in the user goal, etc. In order to tackle these challenges, the literature distinguishes three types of state trackers: \textit{rule-based}, \textit{generative models} and \textit{discriminative models}. Very recently, the Dialogue State Tracking Challenge (DSTC) series has started \cite{the-dialog-state-tracking-challenge-series-a-review,henderson:ml-for-dst-review}, a competition aiming to take the state trackers to the next level. Many state-of-the-art dialogue state trackers have emerged from these competitions. \subsubsection{Hand-Crafted DST models} The \textit{hand-crafted} dialogue state trackers are the most basic ones, and they were used in the early dialogue systems.
They infer the state of the dialogue using manually designed and tuned rules, such that the new state $s^{\prime}$ is derived from the last state $s$ using the last user utterance. An example of such a system is the weather information system developed at MIT, called JUPITER \cite{zue2000juplter}. Moreover, in \cite{larsson2000information} hand-crafted rules are used to build a complex dialogue management system. One strong advantage of this kind of state tracker is that it does not require any training data. However, due to the lack of flexibility and the inability to adapt and generalize over many possible states, a data-driven approach is required. \subsubsection{Generative DST models} For this reason, the \textit{generative models} emerged, modeling the dialogue as a dynamic Bayesian network that considers the state $s$ and the user action $u$ as unobservable random variables. In general, given the input vector $x \in \mathbb{R}^{N}$ for some $N \in \mathbb{N}$, and the label $y$, the generative models try to estimate the joint distribution of $x$ and $y$, i.e. $\Pr \left(y, x \right)$. The rest of the probabilities can then be derived using Bayes' rule. The Bayesian net for inferring the new state $s^{\prime}$ is shown in Figure \ref{fig:dbn_dst}. The new state $s^{\prime}$ depends on the previous state $s$ and the current machine action $a^{\prime}$, which in turn depends on the noisy user action $\textbf{\underline{u}}$. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth] {images/dynamic_bayesian_net_dst} \caption{Dynamic Bayesian Net for inferring the state of the dialogue. The state $s^{\prime}$ depends on the previous state $s$ and the system action $a^{\prime}$.
This leads to a new user utterance $u^{\prime}$ (depending both on the state and the system action), which becomes noisy afterwards.} \label{fig:dbn_dst} \end{figure} \subsubsection{Discriminative DST models} Finally, the \textit{discriminative models} for state tracking are the most powerful ones. They directly estimate the conditional probability $\Pr \left(y | x \right)$, where $y$ is the label and $x$ is the underlying data. One very successful example of a discriminative model is the one described in \cite{henderson2014word,Henderson2014d}. Using a word-based approach, the authors successfully scaled the model to work on unseen slots and values. This is done by using the \textit{n-gram} technique on top of the slot-value pairs. Moreover, in order to make the system slot-invariant, \textit{delexicalized} features are used, which means introducing generic symbols. Afterwards, a Recurrent Neural Network is used to discriminate between the dialogue states. \subsection{Policy Learning} The \textit{Policy Learning} module selects the next system actions to drive the user towards the goal in the smallest number of steps. It does that by using Reinforcement Learning. The theory of Reinforcement Learning is motivated by neuroscientific perspectives on animal and human behavior, deeply rooted in nature. By mathematically modeling this behavior, we obtain an intelligent agent acting in an environment and perceiving a state $s$. Upon taking an action $a$, based on the policy learned from past experience, the agent receives a reward $r$ and changes the state of the environment. The role of the dialogue agent is therefore to learn an optimal policy for conducting an efficient and successful dialogue with the user. This is done in the reinforcement learning fashion: by defining final and immediate rewards, such that the agent maximizes the cumulative future reward.
One way to do so is by applying an off-policy method, such as Q-learning \cite{watkins1992q}. The Q-function is the utility of taking an action $a$ when the agent perceives the state $s$, following a policy $\pi = P(a | s)$. The utility is defined as the maximal expected cumulative future reward the agent will receive. Thus, the optimal value of taking an action $a$ in a given state $s$ at a given time point $t$, according to Q-learning, is defined as: \begin{equation}\label{eq:Q_func} Q^{*} \left( s, a\right) = \max_{\pi}\mathbb{E}\left[ r_{t} + \gamma r_{t+1} + \gamma^{2} r_{t + 2} + \cdots | s_{t} = s, a_{t} = a, \pi \right], \end{equation} where $r_{t}, r_{t+1}, \ldots $ are the rewards at each time step, $\gamma \in [0,1]$ is the discount factor, i.e. the relevance of the future rewards, and $\pi = P(a|s)$ is the agent's policy. The optimal action-value (or Q) function obeys an important identity known as the Bellman equation, which states: \begin{equation}\label{eq:Bellman_Eq} Q^{*} \left( s, a\right) = r + \gamma \max_{a^{\prime}} Q^{*} \left( s^{\prime}, a^{\prime}\right). \end{equation} Such an agent would follow a purely greedy strategy and always exploit the same set of actions in order to reach the goal. In practice, we want the agent to generalize well over the state space. For this reason, the agents incorporate different exploration-exploitation strategies. The most popular and quite effective one is the $\epsilon$-greedy strategy, for $\epsilon \in [0, 1]$: with a probability of $\epsilon$ the agent selects a random action, while with a probability of $1 - \epsilon$ it follows the greedy approach.
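The Bellman update and the $\epsilon$-greedy strategy described above can be sketched in tabular form. The tiny deterministic environment below (two states, with action 1 in state 0 leading to the goal) is an illustrative assumption, not part of the thesis:

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of this sketch

# Tabular Q-learning with an epsilon-greedy policy, as described above.
# The tiny deterministic environment below is an illustrative assumption.
ACTIONS = [0, 1]
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1
Q = defaultdict(float)  # maps (state, action) -> value

def step(state, action):
    """Toy environment: action 1 in state 0 leads to the goal state 1."""
    if state == 0 and action == 1:
        return 1, 1.0  # next state, reward
    return 0, 0.0

def choose_action(state):
    # With probability epsilon explore, otherwise act greedily.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(500):  # episodes
    s = 0
    for _ in range(10):  # steps per episode
        a = choose_action(s)
        s_next, r = step(s, a)
        # Bellman-based update towards the target r + gamma * max_a' Q(s', a').
        target = r + GAMMA * max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# After training, the greedy action in state 0 is the rewarded action 1.
```

The same target value, $r + \gamma \max_{a^{\prime}} Q(s^{\prime}, a^{\prime})$, reappears in the DQN objective discussed next, with the table replaced by a neural network.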
Following the Bellman Equation \ref{eq:Bellman_Eq}, the objective function for learning the Q-function is: \begin{equation} \label{eq:objective_func} \mathbb{E}_{s,a,r,s^{\prime}}\left[ \left( \overbrace{ r + \gamma \max_{a^{\prime}} Q\left( s^{\prime}, a^{\prime}\right)}^\text{target value} - \overbrace{ Q\left( s, a\right)}^\text{old value} \right)^{2} \right]. \end{equation} Therefore, in order to find a function approximation of the Q-function, we have to minimize Equation \ref{eq:objective_func}. The recently developed method by a group of researchers at DeepMind successfully applies a deep feed-forward neural network as a function approximator of the Q-function. Detailed information about the Deep Q-learning (DQN) technique and how to build more efficient policies using it will be provided in Chapter \ref{chap:efficient_policy_learning}. \chapter{Conclusion and Future Work} \label{chap:conclusion} In this thesis, we examined in depth the Goal-Oriented Chatbots, especially the Reinforcement Learning-based ones. We show that the \textit{Transfer Learning} technique can be successfully applied to boost the performance of the RL-based Goal-Oriented Chatbots. We do this for two different use cases: $i)$ when the source and the target domain overlap, and $ii)$ when the target domain is an extension of the source domain. We show the advantages of transfer learning in a low-data regime for both cases. When a low number of user goals is available for training in the target domain, transfer learning makes up for the missing data. Even when the whole target domain training data is available, the transfer learning benefits are maintained, with the success rate increasing threefold. We also demonstrate that the transferred knowledge can replace the warm-starting period of the agents, or can be combined with it for best results.
Last but not least, we create and share two datasets for training Goal-Oriented Dialogue Systems in the domains of Restaurant Booking and Tourist Information. Given the promising results we achieved during the work on this thesis, the following directions are worth investigating: \begin{itemize} \item Study the effects of \textit{Transfer Learning} when the Chatbot is working on the natural language level. \item Build better \textit{User Simulators}, since the simulator in this work is hand-crafted and too limited. \item Build better and more robust Dialogue State Trackers. \end{itemize} \chapter{Introduction} Today, we live in the era of Artificial Intelligence (AI), which is penetrating every aspect of our lives. Part of this AI ecosystem are the spoken and text-based Dialogue Systems, whose usage is constantly growing. These systems are quite popular because they have the potential to carry a conversation just like a real human, acting in a truly intelligent manner. Adding to the excitement, the Loebner prize \cite{epstein1992quest} is a competition for text-based Dialogue Systems inspired by Turing's imitation game. Its aim is to stimulate and motivate the creation of truly intelligent conversational machines. Despite this, the competition has not had a winner in any of its previous editions. According to \cite{kurzweil2010singularity}, it is only a matter of time before that happens. Moreover, with the increasing pervasiveness of smart phones and ubiquitous computer systems, the Dialogue Systems are becoming even more attractive. Consequently, they started being used in a plethora of different applications, ranging from trivial chit-chatting to personal assistants. Examples of such systems are the popular Apple's Siri, Google Now and Cortana from Microsoft \cite{strayer2017smartphone}. Therefore, it is of paramount importance to continue the development of these systems and push the boundaries even further.
The text-based Dialogue Systems, colloquially known as Chatbots, are divided into two groups, depending on the nature of the conversation: $i)$ open-domain and $ii)$ closed-domain Chatbots. In the open-domain setting, the conversation can go in any direction, which means the users can have an open conversation with the Chatbot about everything, usually in the format of chit-chatting, with no or minimal functionality. For instance, \cite{serban2016building} used the Movie-DiC \cite{banchs2012movie} corpus of movie dialogues to build a general-purpose Chatbot. Because of their general-coverage nature, the open-domain Chatbots require a huge amount of annotated data, which makes it almost impossible to create one. On the other hand, the closed-domain Dialogue Systems are more practical and easier to implement, because they focus only on a few aspects and are designed to help users achieve predetermined goals in predefined domains. For example, it could be a travel planning task \cite{peng2017composite} or a restaurant table booking dialogue system \cite{wen2016network}, helping users book a flight or a table in a restaurant in the most convenient way, i.e. by conversation. For this reason they are called Goal-Oriented (GO) Chatbots, and they can be grouped together in larger systems such as Amazon Alexa\footnote{https://developer.amazon.com/alexa} to give the impression of general coverage. Each individual component (which in Amazon Alexa can be viewed as a skill of the overarching generalist bot) is closed-domain in nature. \section{Contributions and Thesis Outline} This thesis focuses only on the subset of Goal-Oriented Chatbots modeled as Partially Observable Markov Decision Processes (POMDPs) \cite{young2013pomdp}, which can be trained with the rich set of Reinforcement Learning (RL) algorithms \cite{sutton1998reinforcement}, for instance the Deep Q-Nets (DQN) \cite{mnih2015human}.
The lack of in-domain dialogue data is a key problem for training high-quality RL-based Goal-Oriented Chatbots. We need in-domain labeled dialogues for two reasons: $i)$ to warm-start the Chatbot, which is a standard and widely used technique, and $ii)$ to train the chatbot by simulating a considerable number of different conversations. In this thesis we argue that the domain similarity can be leveraged in a clever way to build efficient GO Chatbots with less data, using the so-called \textit{Transfer Learning} technique. We use the similarity between a \textit{source} and a \textit{target} domain, as many domains, such as restaurant and movie booking, share common information to a large extent. For example, in the restaurant booking scenario, the user might ask the question \textit{``At which restaurant can I book a table for 3 people for tomorrow?''}, while in the movie booking domain the question could be \textit{``In which theater can I book 3 tickets for tomorrow?''}. In both domains, the user includes information about the number of people and the time. We believe this information need not be learnt twice and that a transfer is possible. We successfully combine \textit{Transfer Learning} and RL-based Goal-Oriented Chatbots, and to the best of our knowledge we are the first ones to do so. As a result of the research over the course of this thesis, we wrote a paper \cite{ilievski2018goal}, which is submitted for review at the 27th International Joint Conference on Artificial Intelligence (IJCAI). The contributions of this thesis are the following: \begin{itemize} \item\textbf{Training GO Chatbots with less data}: In data-constrained environments, models trained with \textit{Transfer Learning} achieve better training and testing performances than the ones trained independently. \item\textbf{Better GO Chatbot performance}: Using \textit{Transfer Learning} has a significant positive effect on performance even when all the data from the target domain is available.
\item\textbf{Intuitions on further improvements}: We show that the gains obtained with \textit{Transfer Learning} are complementary to the ones due to warm-starting, and that the two can be successfully combined. \item\textbf{New published datasets}: We publish new datasets for training Goal-Oriented Dialogue Systems, for the restaurant booking and tourist info domains\footnote{The datasets will be published in the camera-ready version}. They are derived from the third Dialogue State Tracking Challenge~\cite{henderson2013dialog}. \end{itemize} After the Related Work chapter (Chapter \ref{chap:related_work}), we first make a general overview of the Dialogue Systems in Chapter \ref{chap:dial_sys}. Then, in Chapter \ref{chap:model_rl_based}, we describe the model of the RL-based GO Chatbots, whose performance relies on robust dialogue state tracking and an efficiently learnt policy, as described in Chapter \ref{chap:efficient_policy_learning}. We conduct our experiments and show the results in Chapter \ref{chap:experiments_results}. Finally, in Chapter \ref{chap:conclusion} we conclude our work and present possible future work. \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} Goal-Oriented (GO) Dialogue Systems, colloquially known as goal-oriented chatbots, help users achieve a predefined goal (e.g. book a movie ticket) within a closed domain. A first step is to understand the user's goal by using natural language understanding techniques. Once the goal is known, the bot must manage a dialogue to achieve that goal, which is conducted with respect to a learnt policy. The success of the dialogue system depends on the quality of the policy, which is in turn reliant on the availability of high-quality training data for the policy learning method, for instance Deep Reinforcement Learning. \\ Due to the domain specificity, the amount of available data is typically too low to allow the training of good dialogue policies.
In this master thesis, we introduce a transfer learning method to mitigate the effects of the low in-domain data availability. Our transfer learning-based approach improves the bot's success rate by $20\%$ in relative terms for distant domains, and more than doubles it for close domains, compared to the model without transfer learning. Moreover, the transfer learning chatbots learn the policy 5 to 10 times faster. Finally, as the transfer learning approach is complementary to additional processing such as warm-starting, we show that their joint application gives the best outcomes. \chapter*{Acknowledgements} \markboth{Acknowledgements}{Acknowledgements} \addcontentsline{toc}{chapter}{Acknowledgements} First of all, I would like to express my gratitude to my supervisor at EPFL, prof. Patrick Thiran, for sharing his valuable ideas and insights with me and the team over the course of this master thesis.\\ I would also like to thank my supervisor at Swisscom, Dr. Claudiu Musat, for motivating, leading and supporting me during my work on this master thesis.\\ I would also like to express my sincere gratitude to my family - my parents and my sister - for supporting me throughout my studies and my life in general. I could not have imagined achieving this without their support.\\ Last but not least, I would like to thank prof. Joseph Sifakis and Dr. Simon Bliduze for giving me the opportunity to work part-time as a Research Scholar at the Rigorous System Design Lab (RiSD), allowing me to support myself during my studies at EPFL.
\bigskip \noindent\textit{Lausanne, 16 March 2018} \hfill Vladimir Ilievski \tableofcontents \addcontentsline{toc}{chapter}{List of figures} \listoffigures \setlength{\parskip}{1em} \mainmatter \include{main/ch1_introduction} \include{main/ch2_related_work} \include{main/ch3_dialogue_systems} \include{main/ch4_rl_go_chatbots} \include{main/ch6_policy_learning_go_chatbots} \include{main/ch7_experiments_results} \include{main/ch8_conclusion} \include{tail/appendix_A} \include{tail/biblio} \end{document} \chapter{Appendix} \label{appendix_a} \section{Standard DQN algorithm} \label{appendix_a1} \begin{algorithm}[h!] \caption{Deep Q-Learning with Experience Replay \cite{mnih2015human}} \label{alg:dqn_algo} \begin{algorithmic}[1] \Procedure{DQN}{N, M, T, $\epsilon$, $\gamma$} \State Initialize replay memory $\mathcal{D}$ to capacity $N$ \State Initialize action-value function $Q$ with random weights $\theta$ \For{$episode$ in $1,M$} \State Initialize sequence $s_{1}={x_{1}}$ and preprocessed sequence $\phi_{1} = \phi(s_{1})$ \For{$t$ in $1,T$} \State With probability $\epsilon$ take a random action $a_{t}$, \State otherwise select $a_{t} = \arg\max_{a}Q^{\star}(\phi(s_{t}), a; \theta)$ \State Execute action $a_{t}$ and observe a reward $r_{t}$ and a new state $x_{t+1}$ \State Set $s_{t+1} = s_{t}, a_{t}, x_{t+1}$ and preprocess $\phi_{t+1} = \phi(x_{t+1})$ \State Store transition $(\phi_{t}, a_{t}, r_{t}, \phi_{t+1})$ in $\mathcal{D}$ \State Sample random minibatch of transitions $(\phi_{j}, a_{j}, r_{j}, \phi_{j+1})$ from $\mathcal{D}$ \State Set $y_{j} = \begin{cases} r_{j} \quad \qquad \qquad \qquad \qquad \qquad \qquad \text{for terminal $\phi_{j+1}$} \\ r_{j} + \gamma \max_{a^{\prime}}Q^{\star}(\phi_{j+1}, a^{\prime}; \theta) \qquad \text{for non-terminal $\phi_{j+1}$} \end{cases}$ \State Perform a gradient descent step on $(y_{j} - Q(\phi_{j}, a_{j};\theta))^{2}$ (Eq.
\ref{eq:loss_function}) \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \section{System Implementation} \label{appenix_a3} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{images/GO_Chatbot_Platform.pdf} \caption{High-level overview of the system for training Goal-Oriented Chatbots} \label{fig:GO_platform} \end{figure} \begin{figure}[h!] \centering \includegraphics[angle=90, origin=c, width=\textwidth, height=170mm]{images/UML_Diagram.pdf} \caption{UML Diagram of the system for training Goal-Oriented Chatbots} \label{fig:uml_diagram} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{images/Sequence_Diagram.pdf} \caption{Sequence Diagram of the system for training Goal-Oriented Chatbots} \label{fig:sequence_diagram} \end{figure}
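The replay memory $\mathcal{D}$ at the heart of Algorithm \ref{alg:dqn_algo} can be sketched without any deep learning framework; only the buffer mechanics are shown, and the dummy transitions are illustrative assumptions (the Q-network update is out of scope here):

```python
import random
from collections import deque

# Sketch of the experience replay mechanism from the DQN algorithm.
# Only the buffer mechanics are shown; the Q-network update is omitted.
class ReplayMemory:
    def __init__(self, capacity):
        # Once full, the oldest transitions are evicted automatically.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, which breaks the correlation between
        # consecutive transitions and stabilizes the gradient updates.
        return random.sample(self.buffer, batch_size)

memory = ReplayMemory(capacity=1000)
for t in range(50):  # fill the buffer with dummy transitions
    memory.store(state=t, action=t % 2, reward=0.0, next_state=t + 1, done=False)

batch = memory.sample(8)  # minibatch fed to the gradient descent step
```

Sampling uniformly from old experience, rather than learning only from the latest transition, is what makes the minibatch in the gradient step approximately i.i.d.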
\section{Introduction} Correlation functions of Wilson loops are the most interesting observables in gauge theories: these are the gauge-invariant quantities needed to understand confinement and various phase transitions. Unfortunately, they are also much more difficult to calculate, even approximately. Therefore, of special interest are particular cases where Wilson loop averages are exactly calculable, and one can hope to develop a formalism capturing and efficiently describing the peculiar properties of these non-local observables. The best known example of such an exactly solvable problem is $3d$ Chern-Simons theory \cite{CS}, which is topological and essentially Gaussian in particular gauges like ${\cal A}_0=0$; this allows one to formulate it in very different dual terms, for instance, as the Reshetikhin-Turaev (RT) lattice theory \cite{RT} on 4-valent graphs, which are the knot/link diagrams in projection from three to two spatial dimensions. As a result, the correlators in simply-connected $3d$ space-time are essentially rational functions with well-controlled denominators, naturally called knot/link polynomials \cite{knotpols,Con}, which have been investigated in knot theory for about a century. One of the spectacular results of the RT approach in its modern Tannaka-Krein version \cite{RTmod} is the possibility of defining and exploiting ``observables'' arising when the Wilson loops are cut into pieces, into Wilson lines. An advantage is that one can then construct the original correlators by ``gluing'' them from much simpler building blocks. The problem is, however, that, in gauge theory, the open loops provide gauge-non-invariant quantities which cannot be ascribed any meaning, either physical or mathematical.
However, a reformulation in RT terms, where everything is automatically gauge invariant, allows one to bypass this difficulty and introduce ``tangle blocks'' \cite{Con,ML,tangcalc1}, which have no clear definition in the original Chern-Simons theory, but can be efficiently used in cut-and-glue procedures for constructing and evaluating link invariants \cite{ML,tangcalc}. The great capability of this approach has already been demonstrated by the development of the arborescent calculus \cite{arbor}, which is currently the most advanced working method for evaluating colored link invariants. The main task of the tangle calculus is, however, more ambitious: it should help to understand the complicated network of non-linear relations between various knot invariants for different links/knots and different representations, and provide a closed, self-contained theory of Wilson loop correlators, which does not explicitly refer to the ``mother'' Chern-Simons model, relevant for a perturbative description of one particular phase of the entire theory. Unfortunately, the method is still limited by the lack of a general view and of clear relations to other approaches. The goal of the present letter is to describe such relations; even in the simplest possible examples, this provides new insights and new results. In particular, we discuss a relation to another framework: the framework of topological strings, dual to Chern-Simons theory \cite{GV,OV}. The main object in this theory is the topological vertex \cite{tv,Aganagic:2003db} (see \cite{IKV,GIKV,AK0,AK} for its refined version, and \cite{Zenk} for related network models, which are arbitrary convolutions of vertices). It is a long-standing problem to express the knot and link invariants in these terms, and, more generally, in terms of arbitrary tangle blocks. The simplest example is provided by the resolved conifold, constructed from just a pair of such vertices.
It is well known \cite{tv,Aganagic:2003db} that, for a particular choice of two non-trivial representations on the external legs, the answer coincides with that for the 2-component Hopf link, while, for an arbitrary choice, one could rather expect the 4-component link $L_{8n8}$. See also \cite{BFM}, where the resolved conifold with four non-trivial representations was considered as a covering contribution to the Hopf link of $SO/Sp$ Chern-Simons theory. We demonstrate in this paper that the actual relation is less trivial: the resolved conifold remains related to the Hopf link, only composite representations get involved. As for $L_{8n8}$, only a piece of its invariant is reproduced in this way, while the entire expression is rather a sum of reduced conifold contributions over a set of representations at the external legs. In fact, $L_{8n8}$ is a cable of the Hopf link, thus it is not a big surprise that these invariants are related in this way; still, this is the first example of a formula for a non-torus knot/link constructed from topological vertices.
Thus, the main claim of the paper is that the resolved conifold with branes on the four external legs $Z_{\mu_1,\mu_2;\lambda_1,\lambda_2}$, which is a function of four representations $\lambda_{1,2}$, $\mu_{1,2}$, describes a special (in a sense, maximal) projection of the colored HOMFLY invariant ${\cal H}^{L_{8n8}}$ of the link $L_{8n8}$ (in accordance with the Thistlethwaite link table \cite{twi}) in the colored space, which ultimately reduces to the HOMFLY invariant of the Hopf link ${\cal H}^{\rm Hopf}$ with the two components colored by the composite representations $(\lambda_1,\lambda_2)$ and $(\mu_1,\mu_2)$: \be \boxed{\boxed{ {\cal G}^{L_{8n8}}_{\mu_1\times\lambda_1\times\mu_2\times\lambda_2} := \hbox{\bf Pr}\left[{\cal H}^{L_{8n8}}_{\mu_1\times\lambda_1\times\mu_2\times\lambda_2}\right]_{\rm max}= \frac{Z_{\mu_1,\mu_2;\lambda_1,\lambda_2}} {Z_{\varnothing,\varnothing;\varnothing, \varnothing}} ={\cal H}^{\rm Hopf}_{(\lambda_1,\lambda_2)\times(\mu_1,\mu_2)} }} \ee Note that it is an open question whether generic knots and links can be expressed through topological vertices within the network model framework. The only examples known so far are the torus knots/links, and their invariants are either constructed from the Chern-Simons $S$-matrix \cite{tv,Aganagic:2003db,AS}, which is the Hopf invariant and is associated with the topological vertex only in this way, or constructed \cite{Klemm} by basically substituting the Adams coefficients into the Rosso-Jones formula \cite{RJ}, with the second ingredient being the eigenvalues of the cut-and-join (or Casimir) operator, which do not have any explicit meaning in topological calculus and are just introduced ``by hand''. Hence, even in the case of torus knots and links, it remains unclear if they can be obtained directly via network configurations.
The specific feature of the Hopf link invariant is that, in this case, the Rosso-Jones formula, involving a Casimir-weighted sum of the Schur functions in various representations, accidentally (?) coincides with just a single Schur function at a peculiarly deformed topological locus, and thus acquires a simple representation theory meaning. The plan of the paper is as follows: in sections 2 and 3, we describe the HOMFLY Hopf invariant. Then, in section 4, the Hopf link is related to the conifold description, which naturally gives rise to the link $L_{8n8}$ in the general case. In fact, it turns out that, from the conifold approach, one obtains not the full colored HOMFLY invariant for the $L_{8n8}$ link, but its projection, which is ultimately the projection to the composite representation. Hence, one needs to consider the Hopf link colored with composite representations. This is done in section 5. After this preliminary work, in section 6, we are able to formulate our main statement: that the conifold consideration through the topological vertex indeed gives rise to the projected $L_{8n8}$ link, aka the Hopf link in composite representations. This statement is proved in sections 7-8. Section 9 describes symmetry properties of the HOMFLY Hopf invariant. At last, in section 10, we discuss relations between the description of the HOMFLY Hopf invariant as the Schur function and the Rosso-Jones formula \cite{RJ}, and a generalization to the superpolynomials. The Appendix contains illustrative examples of the main statement of this paper for the first representations.
\section{Hopf recursion} The simplest relation provided by the tangle calculus of \cite{tangcalc} arises when one connects the open ends of the Hopf tangle \begin{picture}(400,110)(-200,-60) \put(0,0){ \put(0,5){\line(1,0){25}} \put(32,5){\vector(1,0){18}} \put(25,-5){\line(-1,0){25}} \put(32,-5){\vector(1,0){18}} \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \qbezier(14.5,-3)(14.2,0)(14.5,3) \put(-20,12){\mbox{$\lambda_1$}} \put(-20,-18){\mbox{$\lambda_2$}} \put(25,30){\mbox{$\mu$}} \put(25,20){\vector(1,-4){2}} } \end{picture} \noindent and treats it in two different ways: \begin{picture}(400,110)(-200,-60) \put(-100,0){ \put(0,2){\line(1,0){25}} \put(32,2){\vector(1,0){18}} \put(25,-2){\line(-1,0){25}} \put(32,-2){\vector(1,0){18}} \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \put(0,3){ \qbezier(0,-5)(-20,-5)(-20,-20) \qbezier(0,-35)(-20,-35)(-20,-20) \put(50,-35){\vector(-1,0){50}} \qbezier(50,-5)(70,-5)(70,-20) \qbezier(50,-35)(70,-35)(70,-20) } \qbezier(0,2)(-24,2)(-24,-19) \qbezier(0,-36)(-24,-36)(-24,-19) \put(50,-36){\vector(-1,0){50}} \qbezier(50,2)(74,2)(74,-19) \qbezier(50,-36)(74,-36)(74,-19) \put(-30,-2){\mbox{$\lambda$}} \put(25,30){\mbox{$\mu$}} \put(25,20){\vector(1,-4){2}} } \put(100,0){ \put(0,5){\line(1,0){25}} \put(32,5){\vector(1,0){18}} \put(25,-5){\line(-1,0){25}} \put(32,-5){\vector(1,0){18}} \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \qbezier(14.5,-3)(14.2,0)(14.5,3) \qbezier(0,-5)(-20,-5)(-20,-20) \qbezier(0,-35)(-20,-35)(-20,-20) \put(50,-35){\vector(-1,0){50}} \qbezier(50,-5)(70,-5)(70,-20) \qbezier(50,-35)(70,-35)(70,-20) \qbezier(0,5)(-20,5)(-20,20) \qbezier(0,35)(-20,35)(-20,20) \put(50,35){\vector(-1,0){50}} \qbezier(50,5)(70,5)(70,20) \qbezier(50,35)(70,35)(70,20) \put(-17,20){\mbox{$\lambda_1$}} \put(-17,-20){\mbox{$\lambda_2$}} \put(25,25){\mbox{$\mu$}} \put(25,20){\vector(1,-4){2}} } 
\end{picture} \noindent The 3-component link at the right is composite, i.e. its {\it reduced} HOMFLY ``polynomial''\footnote{The word ``polynomial'' should not lead to a confusion here: the colored HOMFLY invariants are, in fact, rational functions of $q$ and $A=q^N$ with simple denominators proportional to quantum dimensions. Hereafter, we call them HOMFLY polynomials in order to use the same term both for links and knots (when the reduced HOMFLY invariant is, indeed, a Laurent polynomial of $A$ and $q$).} is just a product of two, for the two constituent Hopf links. At the left, we treat the same configuration as a single Hopf link colored with the reducible representation $\lambda_1\otimes\lambda_2$, which can be decomposed into irreps; thus, the HOMFLY polynomial is a sum of Hopf polynomials in different representations, taken with the appropriate multiplicities $N^\lambda_{\lambda_1\lambda_2}$ (which are actually all unities if either $\lambda_1$ or $\lambda_2$ is symmetric, or both are rectangular). Therefore, we get a relation between the {\it unreduced} Hopf polynomials \be \boxed{ D_\mu\cdot \!\!\!\! \sum_{\lambda\in \lambda_1\otimes\lambda_2}N^\lambda_{\lambda_1\lambda_2}\cdot {\cal H}^{\rm Hopf}_{\lambda,\mu} \ \ = \ \ {\cal H}^{\rm Hopf}_{\lambda_1,\mu}\cdot{\cal H}^{\rm Hopf}_{\lambda_2,\mu} } \label{HvsHH} \ee where $D_\mu$ is the quantum dimension of the representation $\mu$. This relation is correct for the Hopf polynomial in the standard, or canonical framing \cite{Atiah,MarF,China1,tangcalc}, which we use throughout the paper. It can be used as a recursion providing efficient and explicit formulas for colored Hopf invariants \cite{tangcalc}, which do not involve any functions difficult to deal with, like the Schur polynomials. Its $t$-deformation changes the coefficients in the sum over $\lambda$; still, it seems to persist for the colored superpolynomials available from \cite{DMMSS,Ch,MMSS,GN}.
This is non-trivial, because there is no reason why the Khovanov-Rozansky cohomological calculus, underlying today's theory of superpolynomials, should respect the cut-and-glue procedures of the effective lattice theory that are behind the Reshetikhin-Turaev formalism. Indeed, it is not {\it quite} respected, because the coefficients are deformed, but still it survives, since non-linear relations like (\ref{HvsHH}) continue to exist. \section{Recursion from character decomposition} On the other hand, the topological vertex formalism \cite{tv,Aganagic:2003db,IKV,GIKV,AK,IK} is associated with an expression for the Hopf polynomials through characters, Schur or Macdonald functions, which is not very efficient for describing knot polynomials but is very convenient for gluing via the Cauchy summation formulas. Namely, according to \cite{ML,Mar,GIKV,AK}, \be {\cal H}_{\lambda,\mu}^{\rm Hopf} =q^{2|\lambda||\mu|\over N}\cdot \Sch_\lambda(q^{-\rho})\cdot \Sch_\mu(q^{-\lambda-\rho}) \label{HopfthroughSchur} \ee where $\rho$ is the Weyl vector (half-sum of all positive roots), $\lambda$, $\mu$ are the weights of representations, and $\Sch_R(x_i)$ is the Schur function associated with the Young diagram $R$, i.e. the character of the $U(N)$ group representation associated with $R$ \cite{Sch}. $\Sch_R(x_i)$ is here a symmetric function of the $N$ variables $x_i^{2}$, $i=1,\ldots,N$, associated with the components of the corresponding vectors in the Cartan plane. For instance, $2\rho=(N-1,N-3,\ldots,-N+3,-N+1)=\{N-2i+1\}$. In (\ref{HopfthroughSchur}), we have manifestly taken into account the $U(1)$-factor $q^{2|\lambda||\mu|\over N}$ \cite{Atiah,MarF,China1,tangcalc}. The Hopf polynomial (\ref{HopfthroughSchur}) depends on the variables $q$ and $A=q^N$. The drawback of this presentation is that (\ref{HopfthroughSchur}) is defined for a concrete $N$, and obtaining its $A$-dependence requires an analytic continuation.
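The specialization in (\ref{HopfthroughSchur}) is easy to check for small diagrams: $\Sch_\lambda(q^{-\rho})$, viewed as a symmetric function of the $N$ variables $x_i^2=q^{-(N-2i+1)}$, reproduces the familiar quantum dimensions. A minimal numerical sketch (ours, with the illustrative choice $q=2$, $N=4$ and the Schur polynomials written in power sums by hand):

```python
from fractions import Fraction

q, N = Fraction(2), 4
# x_i^2 = q^{-2 rho_i} = q^{-(N-2i+1)}, i = 1..N
x2 = [q**(-(N - 2*i + 1)) for i in range(1, N + 1)]

def qnum(n):  # quantum number [n] = (q^n - q^{-n})/(q - q^{-1})
    return (q**n - q**(-n)) / (q - q**(-1))

# power sums p_k = sum_i x_i^{2k} and Schur polynomials for small diagrams
p = lambda k: sum(t**k for t in x2)
sch1 = p(1)                    # Sch_[1] = p_1
sch2 = (p(1)**2 + p(2)) / 2    # Sch_[2] = (p_1^2 + p_2)/2

assert sch1 == qnum(N)                           # D_[1] = [N]
assert sch2 == qnum(N) * qnum(N + 1) / qnum(2)   # D_[2] = [N][N+1]/[2]
```

The two assertions state the standard quantum dimensions $D_{[1]}=[N]$ and $D_{[2]}=[N][N+1]/[2]$.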
Instead, one can proceed with a presentation with the number of variables in the Schur function not related to {\it the parameter} $A$, \cite{AK}. To this end, one has to introduce the shifted Weyl vector $2\rho_0=(-1,-3,\ldots)=\{-2i+1\}$, and consider the Schur functions as symmetric functions of two sets of variables: \be {\cal H}_{\lambda,\mu}^{\rm Hopf} =q^{2|\lambda||\mu|\over N}\cdot \Sch_\lambda(A^{-1}q^{-\rho_0},Aq^{\rho_0})\cdot \Sch_\mu(A^{-1}q^{-\lambda-\rho_0},Aq^{\rho_0}) \ee and now the number of variables in the Schur symmetric function need not be restricted to $N$. The Hopf polynomial is, in fact, symmetric under the permutation of $\lambda$ and $\mu$, i.e. \be \Sch_\lambda(q^{-\rho})\cdot \Sch_\mu(q^{-\lambda-\rho}) = \Sch_\mu(q^{-\rho})\cdot \Sch_\lambda(q^{-\mu-\rho}). \label{symrel} \ee Then \be {\cal H}_{\lambda_1,\mu}^{\rm Hopf} \cdot {\cal H}_{\lambda_2,\mu}^{\rm Hopf} =q^{2(|\lambda_1|+|\lambda_2|)|\mu|\over N}\cdot \Sch_\mu(q^{-\rho}) \cdot \Sch_\mu(q^{-\rho})\cdot \Sch_{\lambda_1}(q^{-\mu-\rho})\cdot \Sch_{\lambda_2}(q^{-\mu-\rho}). \ee Since we made a clever choice between the two versions in (\ref{symrel}), the arguments of $\Sch_{\lambda_1}$ and $\Sch_{\lambda_2}$ depend on the same diagram $\mu$, i.e. coincide, and one can use the multiplication rule \be \Sch_{\lambda_1}\cdot \Sch_{\lambda_2} = \sum_{\lambda\in \lambda_1\otimes\lambda_2} N^\lambda_{\lambda_1\lambda_2}\cdot\Sch_\lambda \ee where $N^\lambda_{\lambda_1\lambda_2}$ are the integer-valued Littlewood-Richardson coefficients. This gives: \be {\cal H}_{\lambda_1,\mu}^{\rm Hopf} \cdot {\cal H}_{\lambda_2,\mu}^{\rm Hopf} &&= \sum_{\lambda\in \lambda_1\otimes\lambda_2} N^\lambda_{\lambda_1\lambda_2}\cdot \overbrace{\Sch_\mu(q^{ -\rho})}^{\cdot D_\mu}\cdot \overbrace{q^{2(|\lambda_1|+|\lambda_2|)|\mu|\over N}\cdot\Sch_\mu(q^{ -\rho})\cdot \Sch_\lambda(q^{-\mu-\rho}) }^{ {\cal H}_{\lambda,\mu}^{\rm Hopf}} \CR &&=D_\mu \cdot\!\!\!\!
\sum_{\lambda\in \lambda_1\otimes\lambda_2} N^\lambda_{\lambda_1\lambda_2} \cdot {\cal H}_{\lambda,\mu}^{\rm Hopf} \label{HvsHH2} \ee i.e. reproduces (\ref{HvsHH}). One can also use the time variables $p_k:=\sum_i x_i^{2k}$ so that the quantum dimension is \be D_\mu =\Sch_\mu\{p^*\},\ \ \ \ \ \ \ \ p_k^*:= \frac{A^k-A^{-k}}{q^k-q^{-k}} \ee Similarly, one can consider the time variables for $q^{-\lambda-\rho}$, moreover, one can absorb the $U(1)$-factor (\ref{HopfthroughSchur}) in their definition so that \be {p_k^{*\lambda}} = q^{2|\lambda|k\over N}\Big(p^*_k - A^{-k}(q^k-q^{-k})\sum_{i,j\in\lambda}q^{2k(i-j)}\Big)= q^{2|\lambda|k\over N}\Big(p^*_k + A^{-k}\sum_i q^{(2i-1)k}(q^{-2k\lambda_i}-1)\Big) \label{plambdatimes} \ee where $\lambda_i$ are lengths of lines of the Young diagram $\lambda$, $|\lambda|=\sum_i\lambda_i$ and $A=q^N$. In terms of these time variables \be\label{10} {\cal H}^{\rm Hopf}_{\mu\lambda}=D_\lambda \cdot \Sch_\mu\{p^{*\lambda}\} \ee For instance, \be \frac{{\cal H}_{[1],[1]}^{\rm Hopf}}{D_{[1]}} = p_1^{*[1]} =q^{2\over N}\ \frac{A-A^{-1}(q^2-1+q^{-2})}{q-q^{-1}} \ee If $\lambda$ is the adjoint representation, then the {\it uniform} Hopf polynomial \cite{MMkrM} is associated with the time variables \be p^{*{\rm adj}}_k = (q^{2k}-1+q^{-2k})p_k^* \ee see \cite{tangcalc,MMHopf} for further details. 
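The formulas of this section can be assembled into a quick numerical test of both the symmetry (\ref{symrel}) and the recursion (\ref{HvsHH}). The sketch below (ours; exact rationals at the illustrative point $q=2$, $N=3$) implements (\ref{plambdatimes}) and (\ref{10}) for small diagrams; the common $U(1)$-factor $q^{2|\lambda||\mu|k/N}$ is dropped, since it is the same on both sides of each identity checked:

```python
from fractions import Fraction

q, N = Fraction(2), 3
A = q**N

def pstar(k):  # topological locus p*_k = (A^k - A^{-k})/(q^k - q^{-k})
    return (A**k - A**(-k)) / (q**k - q**(-k))

def pstar_lam(lam, k):
    # p*^lambda_k with the common U(1) prefactor q^{2|lam|k/N} dropped
    return pstar(k) + A**(-k) * sum(
        q**((2*i - 1)*k) * (q**(-2*k*lam[i-1]) - 1)
        for i in range(1, len(lam) + 1))

def sch(lam, pk):  # Schur functions in power sums, small diagrams only
    p1, p2 = pk(1), pk(2)
    return {(): 1, (1,): p1, (2,): (p1**2 + p2)/2, (1, 1): (p1**2 - p2)/2}[lam]

D = lambda lam: sch(lam, pstar)  # quantum dimension D_lam = Sch_lam{p*}
# unreduced Hopf polynomial H_{lam,mu} = D_mu * Sch_lam{p*^mu}
H = lambda lam, mu: D(mu) * sch(lam, lambda k: pstar_lam(mu, k))

# symmetry of the Hopf polynomial under lambda <-> mu
assert H((2,), (1,)) == H((1,), (2,))
# the boxed recursion with lambda_1 = lambda_2 = [1], mu = [1]
assert D((1,)) * (H((2,), (1,)) + H((1, 1), (1,))) == H((1,), (1,)) ** 2
```

Both checks pass exactly; the second is the $\lambda_1=\lambda_2=\mu=[1]$ instance of (\ref{HvsHH}), where the sum runs over $\lambda\in\{[2],[1,1]\}$ with unit multiplicities.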
\section{Conifold description \label{coni}} In topological vertex theory of \cite{tv,Aganagic:2003db,IKV,GIKV,AK}, the Hopf polynomial ${\cal H}_{\lambda,\mu}^{\rm Hopf}$ is associated with the brane pattern, symbolically depicted as \begin{picture}(300,100)(-150,-70) \put(0,0){\line(1,0){30}} \put(0,0){\line(0,1){30}} \put(0,0){\line(-1,-1){30}} \put(-30,-30){\line(-1,0){30}} \put(-30,-30){\line(0,-1){30}} \put(-10,-25){\mbox{$Q $}} \put(-5,15){\line(1,0){10}} \put(-11,19){\mbox{$\mu$}} \put(15,-5){\line(0,1){10}} \put(19,8){\mbox{$\lambda$}} \put(65,-17){\mbox{$\longleftrightarrow$}} \put(150,-10){ \put(0,0){\line(1,0){25}} \put(32,0){\vector(1,0){18}} \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \put(0,5){ \qbezier(0,-5)(-20,-5)(-20,-20) \qbezier(0,-35)(-20,-35)(-20,-20) \put(50,-35){\vector(-1,0){50}} \qbezier(50,-5)(70,-5)(70,-20) \qbezier(50,-35)(70,-35)(70,-20) } \put(-30,-2){\mbox{$\lambda$}} \put(25,30){\mbox{$\mu$}} \put(25,20){\vector(1,-4){2}} } \end{picture} \noindent According to our logic in the present paper, $\lambda$ can actually be a representation from the product of two. In fact, the same can be true for $\mu$. 
In other words, we can consider four branes put on the external legs: \begin{picture}(300,120)(-150,-70) \put(0,0){\line(1,0){40}} \put(0,0){\line(0,1){40}} \put(0,0){\line(-1,-1){30}} \put(-30,-30){\line(-1,0){30}} \put(-30,-30){\line(0,-1){30}} \put(-5,15){\line(1,0){10}} \put(-19,13){\mbox{$\mu_1$}} \put(15,-5){\line(0,1){10}} \put(13,8){\mbox{$\lambda_1$}} \put(-5,25){\line(1,0){10}} \put(-19,27){\mbox{$\mu_2$}} \put(25,-5){\line(0,1){10}} \put(27,8){\mbox{$\lambda_2$}} \put(65,-17){\mbox{$\longleftrightarrow$}} \put(150,-10){ \put(0,2){\line(1,0){20}} \put(32,2){\vector(1,0){18}} \put(20,-2){\line(-1,0){20}} \put(32,-2){\vector(1,0){18}} \qbezier(20,-20)(27,0)(20,20) \qbezier(20,-20)(13,-30)(10,-8) \qbezier(20,20)(13,30)(10,8) \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \put(0,3){ \qbezier(0,-5)(-20,-5)(-20,-20) \qbezier(0,-35)(-20,-35)(-20,-20) \put(50,-35){\vector(-1,0){50}} \qbezier(50,-5)(70,-5)(70,-20) \qbezier(50,-35)(70,-35)(70,-20) } \qbezier(0,2)(-24,2)(-24,-19) \qbezier(0,-36)(-24,-36)(-24,-19) \put(50,-36){\vector(-1,0){50}} \qbezier(50,2)(74,2)(74,-19) \qbezier(50,-36)(74,-36)(74,-19) \put(-30,-2){\mbox{$\lambda_1$}} \put(-15,-15){\mbox{$\lambda_2$}} \put(5,30){\mbox{$\mu_1$}}\put(25,30){\mbox{$\mu_2$}} \put(25,20){\vector(1,-4){2}} \put(20,20){\vector(1,-4){2}} } \end{picture} \noindent and interpret this picture as the double sum \begin{picture}(300,130)(-240,-80) \put(-250,-17){\mbox{$\sum_{\lambda\in\lambda_1\otimes \lambda_2} \sum_{\mu\in \mu_1\otimes\mu_2} \ \ N^{\lambda}_{\lambda_1\lambda_2}\cdot N^\mu_{\mu_1\mu_2}\cdot $}} \qbezier(-70,-70)(-90,-15)(-70,40) \qbezier(200,-70)(220,-15)(200,40) \put(0,0){\line(1,0){30}} \put(0,0){\line(0,1){30}} \put(0,0){\line(-1,-1){30}} \put(-30,-30){\line(-1,0){30}} \put(-30,-30){\line(0,-1){30}} \put(-5,15){\line(1,0){10}} \put(-11,19){\mbox{$\mu$}} \put(15,-5){\line(0,1){10}} \put(19,8){\mbox{$\lambda$}} \put(50,-17){\mbox{$\longleftrightarrow$}} 
\put(120,-10){ \put(0,0){\line(1,0){25}} \put(32,0){\vector(1,0){18}} \qbezier(25,-20)(32,0)(25,20) \qbezier(25,-20)(18,-30)(15,-8) \qbezier(25,20)(18,30)(15,8) \put(0,5){ \qbezier(0,-5)(-20,-5)(-20,-20) \qbezier(0,-35)(-20,-35)(-20,-20) \put(50,-35){\vector(-1,0){50}} \qbezier(50,-5)(70,-5)(70,-20) \qbezier(50,-35)(70,-35)(70,-20) } \put(-30,-2){\mbox{$\lambda$}} \put(25,30){\mbox{$\mu$}} \put(25,20){\vector(1,-4){2}} } \end{picture} \noindent Alternatively, one can drag two of these four branes through to the other external legs in order to get: \begin{picture}(300,120)(-90,-75) \put(0,0){\line(1,0){30}} \put(0,0){\line(0,1){30}} \put(0,0){\line(-1,-1){30}} \put(-30,-30){\line(-1,0){30}} \put(-30,-30){\line(0,-1){30}} \put(-5,15){\line(1,0){10}} \put(-18,19){\mbox{$\mu_1$}} \put(15,-5){\line(0,1){10}} \put(19,8){\mbox{$\lambda_1$}} \put(-35,-45){\line(1,0){10}} \put(-21,-42){\mbox{$\mu_2$}} \put(-45,-35){\line(0,1){10}} \put(-53,-20){\mbox{$\lambda_2$}} \put(-23,-13){\footnotesize\mbox{$S^2$}} \put(65,-17){\mbox{$\longleftrightarrow$}} \put(150,-10){ \put(0,20){\circle{30}} \put(-20,0){\circle{30}} \put(20,0){\circle{30}} \put(0,-20){\circle{30}} \put(10,40){\mbox{$\mu_1$}} \put(40,10){\mbox{$\lambda_1$}} \put(10,-40){\mbox{$\mu_2$}} \put(-48,10){\mbox{$\lambda_2$}} } \put(220,-17){\mbox{$\longleftrightarrow$}} \put(330,0){ \put(-30,-30){\line(1,0){60}} \put(-30,-30){\line(-1,0){30}} \put(0,0){\line(0,1){30}} \put(0,0){\line(0,-1){25}} \put(0,-35){\line(0,-1){30}} \put(0,0){\line(-1,-1){30}} \put(-5,15){\line(1,0){10}} \put(-18,19){\mbox{$\mu_1$}} \put(15,-35){\line(0,1){10}} \put(19,-20){\mbox{$\lambda_1$}} \put(-5,-55){\line(1,0){10}} \put(-18,-51){\mbox{$\mu_2$}} \put(-45,-35){\line(0,1){10}} \put(-53,-20){\mbox{$\lambda_2$}} \put(-23,-13){\footnotesize\mbox{$S^3$}} } \end{picture} \noindent The link on the r.h.s. is the 4-component $L_{8n8}$, and it is obviously the same as the link in the second pictures of the present section. 
The identity reflecting the possibility of multiplying representations on one leg, or, alternatively, the possibility of brane-dragging states that \be \boxed{ {\cal H}^{L_{8n8}}_{\mu_1,\lambda_1,\mu_2,\lambda_2} = \sum_{\stackrel{\lambda\in \lambda_1\otimes\bar\lambda_2}{\mu\in \mu_1\otimes\bar\mu_2}} \ \ N^{\lambda}_{\lambda_1\lambda_2}\cdot N^\mu_{\mu_1\mu_2}\cdot {\cal H}^{\rm Hopf}_{\lambda,\mu} } \label{L8n8} \ee and is immediately clear from the picture: \begin{picture}(300,180)(-120,-135) \qbezier(-40,0)(-40,20)(0,20) \qbezier(40,0)(40,20)(0,20) \qbezier(-40,-10)(-40,-30)(0,-30) \qbezier(40,-10)(40,-30)(0,-30) \put(0,-80){ \qbezier(-40,0)(-40,20)(0,20) \qbezier(40,0)(40,20)(0,20) \qbezier(-40,-10)(-40,-30)(0,-30) \qbezier(40,-10)(40,-30)(0,-30) } \qbezier(-40,-5)(-60,-5)(-60,-45)\qbezier(-40,-85)(-60,-85)(-60,-45) \qbezier(-40,-5)(-20,-5)(-20,-25)\qbezier(-40,-85)(-20,-85)(-20,-65) \qbezier(-19.5,-32)(-19,-45)(-19.5,-58) \qbezier(40,-5)(60,-5)(60,-45)\qbezier(40,-85)(60,-85)(60,-45) \qbezier(40,-5)(20,-5)(20,-25)\qbezier(40,-85)(20,-85)(20,-65) \qbezier(19.5,-32)(19,-45)(19.5,-58) \put(-60,-45){\vector(0,1){2}} \put(-19,-45){\vector(0,-1){2}} \put(0,20){\vector(1,0){2}} \put(0,-30){\vector(-1,0){2}} \put(60,-45){\vector(0,-1){2}} \put(19,-45){\vector(0,1){2}} \put(0,-110){\vector(-1,0){2}} \put(0,-60){\vector(1,0){2}} \put(-75,-20){\mbox{$\lambda_2$}} \put(-20,25){\mbox{$\mu_1$}} \put(65,-20){\mbox{$\lambda_1$}} \put(-20,-120){\mbox{$\mu_2$}} \put(100,-40){\mbox{$=$}} \put(190,0){ \qbezier(-40,-5)(-40,20)(0,20) \qbezier(40,0)(40,20)(0,20) \qbezier(-40,-5)(-40,-30)(0,-30) \qbezier(40,-8)(40,-30)(0,-30) \qbezier(-43,-5)(-43,23)(0,23) \qbezier(43,0)(43,23)(0,23) \qbezier(-43,-5)(-43,-33)(0,-33) \qbezier(43,-8)(43,-33)(0,-33) \qbezier(40,-5)(60,-5)(60,-45)\qbezier(40,-85)(60,-85)(60,-45) \qbezier(40,-5)(20,-5)(20,-25)\qbezier(40,-85)(20,-85)(20,-65) \qbezier(19.5,-35)(19,-45)(20,-65) \qbezier(43,-2)(63,-5)(63,-48)\qbezier(43,-88)(63,-85)(63,-48) 
\qbezier(43,-2)(17,-1)(17,-25)\qbezier(43,-88)(17,-89)(17,-65) \qbezier(19.5,-35)(19,-45)(20,-65) \qbezier(16.5,-35)(16,-45)(17,-65) \put(0,20){\vector(1,0){2}} \put(0,-30){\vector(-1,0){2}} \put(0,23){\vector(-1,0){2}} \put(0,-33){\vector(1,0){2}} \put(60,-45){\vector(0,-1){2}} \put(19,-45){\vector(0,1){2}} \put(63,-45){\vector(0,1){2}} \put(16,-45){\vector(0,-1){2}} \put(47,-45){\mbox{$\lambda_1$}} \put(-10,30){\mbox{$\mu_2$}} \put(67,-45){\mbox{$\lambda_2$}} \put(-10,12){\mbox{$\mu_1$}} } \end{picture} \noindent Note that the apparent cyclic symmetry of the l.h.s. in (\ref{L8n8}) is hidden on its r.h.s. This is an interesting formula, because one can independently calculate both sides by the methods of \cite{tangcalc}. The l.h.s. is a necklace with ${\cal H}^{L_{8n8}}=\Tr \tau^4$, where the lock element $\tau$ can be extracted from the knowledge of twist knots, where $ {\cal H}^{{\rm twist}_k}=\Tr \tau\,\bar T^{2k}$. The r.h.s. involves the Hopf polynomials, which we review in the next section. An even more interesting option is to apply the conifold formulas from \cite{GIKV,AK}, and we do this in sec.\ref{sumfor} below. Surprisingly or not, they do {\it not} reproduce (\ref{L8n8}), but pick up a single item ${\cal H}^{\rm Hopf}_{(\lambda_1,\lambda_2)\times(\mu_1,\mu_2)}$ from the r.h.s. sum. We explain this in more detail in the next sections. \section{Hopf polynomials} Colored HOMFLY polynomials for the Hopf link are in the intersection of application domains of very different approaches: from the Rosso-Jones formula to conifold calculus. Moreover, while the Rosso-Jones formula nicely describes ``universal'' representations, i.e. the representations of $SU(N)$ that do not depend\footnote{This independence of $N$ means that the character is determined by some Young diagram(s) not involving $N$.
The formal definition is related with the notion of universal character, see \cite{Koike}.} on $N$ at large enough $N$, the conifold calculus deals with the composite (or rational, \cite{Koike,Kanno}; or coupled, \cite{Vafa}) representations, which manifestly depend on $N$, the simplest example of these being conjugate representations. First of all, we need to explain what is the composite representation, which is, in a sense, a ``maximal" representation in the product of $R$ and the conjugate $\bar P$ of $P$. The composite representation is the most general finite-dimensional irreducible highest weight representations of $SU(N)$ \cite{Koike,GW,Vafa,Kanno,MarK}, which are associated with the Young diagram obtained by putting $R$ atop of $p_1$ lines of the lengths $N-p_i^{\vee}$ ($p_i^{\vee}$ are length of lines of the transposed Young diagram $P^{\vee}$), i.e. $$(R,P)= \Big[r_1+p_1,\ldots,r_{l_R}+p_1,\underbrace{p_1,\ldots,p_1}_{N-l_{\!_R}-l_{\!_P}}, p_1-p_{_{l_{\!_P}}},p_1-p_{{l_{\!_P}-1}},\ldots,p_1-p_2\Big]$$ or, pictorially, \begin{picture}(300,125)(-90,-30) \put(0,0){\line(0,1){90}} \put(0,0){\line(1,0){250}} \put(50,40){\line(1,0){172}} \put(0,90){\line(1,0){10}} \put(10,90){\line(0,-1){20}} \put(10,70){\line(1,0){20}} \put(30,70){\line(0,-1){10}} \put(30,60){\line(1,0){10}} \put(40,60){\line(0,-1){10}} \put(40,50){\line(1,0){10}} \put(50,50){\line(0,-1){10}} \put(265,2){\mbox{$\vdots$}} \put(265,15){\mbox{$\vdots$}} \put(265,28){\mbox{$\vdots$}} \put(252,0){\mbox{$\ldots$}} \put(253,40){\mbox{$\ldots$}} \put(239,40){\mbox{$\ldots$}} \put(225,40){\mbox{$\ldots$}} \put(222,40){\line(0,-1){10}} \put(222,30){\line(1,0){10}} \put(232,30){\line(0,-1){20}} \put(232,10){\line(1,0){18}} \put(250,0){\line(0,1){10}} \put(0,90){\line(1,0){10}} \put(10,90){\line(0,-1){20}} \put(10,70){\line(1,0){20}} \put(30,70){\line(0,-1){10}} \put(30,60){\line(1,0){10}} \put(40,60){\line(0,-1){10}} \put(40,50){\line(1,0){10}} \put(50,50){\line(0,-1){10}} \put(-60,40){\mbox{$(R,P) \ \ 
=$}} {\footnotesize \put(123,17){\mbox{$ \bar P$}} \put(17,50){\mbox{$R$}} \put(243,22){\mbox{$\check P$}} \qbezier(270,3)(280,20)(270,37) \put(280,18){\mbox{$h_P = l_{P^{\vee}}=p_{_1}$}} \qbezier(5,-5)(132,-20)(260,-5) \put(130,-25){\mbox{$N $}} \qbezier(5,35)(25,25)(45,35) \put(22,20){\mbox{$l_R$}} \qbezier(225,43)(245,52)(265,43) \put(243,52){\mbox{$l_{\!_P}$}} } \put(4,40){\mbox{$\ldots$}} \put(18,40){\mbox{$\ldots$}} \put(32,40){\mbox{$\ldots$}} \end{picture} \noindent where $l_{\!_P}$ is the number of lines in the Young diagram $P$. This $(R,P)$ is the first\footnote{``First'' means here the representation that is associated with the Young diagram obtained just by attaching the diagram $R$ to $\bar P$ atop.} (hence, the word ``maximal'') representation contributing to the product $R\otimes \bar P$. It can be manifestly obtained from the tensor products (i.e. as a projector from $R\otimes \bar P$) by the formula \cite{Koike} \be (R,P)=\sum_{Y,Y_1,Y_2}(-1)^{l_{\!_Y}}N^R_{YY_1}N^{P}_{Y^{\vee}Y_2}\ Y_1\otimes\overline{Y_2} \ee where $^\vee$ denotes the transposition of the Young diagram. The adjoint representation in this notation is ${\rm adj} = ([1],[1])$, while the conjugate of the representation $R$ is $\overline{R} = (\varnothing,R)$. The product \be [m]\otimes \overline{[m]} = \sum_{k=0}^m ([k],[k]) \ee where $(\varnothing,\varnothing) \stackrel{SU(N)}{\cong} \varnothing$, and the other items are diagrams with $2k$ lines, $k$ of length $N-1$ and $k$ of length one. Similarly, \be [1^m]\otimes \overline{[1^m]} = \sum _{k=0}^m ([1^k],[1^k]) \ee where the contributing diagrams have just two lines, of lengths $N-k$ and $k$.
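The row pattern displayed above is straightforward to implement. A small sketch (ours) building the composite Young diagram of $(R,P)$ for $SU(N)$ and checking it on the adjoint and conjugate examples (the $SU(4)$ and $SU(5)$ values are our illustrative checks):

```python
def composite(R, P, N):
    """Rows of the SU(N) Young diagram of the composite representation (R,P),
    following the displayed pattern (a sketch, not the paper's own code)."""
    lR, lP = len(R), len(P)
    p1 = P[0] if P else 0
    assert lR + lP <= N
    rows = [r + p1 for r in R]                         # r_i + p_1
    rows += [p1] * (N - lR - lP)                       # p_1 repeated
    rows += [p1 - P[j] for j in range(lP - 1, 0, -1)]  # p_1-p_{l_P},...,p_1-p_2
    return rows

assert composite([1], [1], 4) == [2, 1, 1]      # adjoint ([1],[1]) = [2,1^{N-2}]
assert composite([], [2], 4) == [2, 2, 2]       # conjugate of [2] in SU(4)
assert composite([1], [2, 1], 5) == [3, 2, 2, 1]
```

The first check reproduces the adjoint diagram $[2,1^{N-2}]$, the second the complement diagram of $\overline{[2]}$ in $SU(4)$.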
The expression for the Hopf polynomial in arbitrary composite representations can again be written in the form \be\label{30} {\cal H}_{ (R,P)\times (T,S)} = D_{(R,P)}\cdot \Sch_{(T,S)}\{p^{_*(R,P)}\} \ee Moreover, the main ingredients in this formula can be presented in a universal form that involves $N$ (if at all) as a simple parameter; however, the explicit expressions for them are far more involved: $p^{_*(R,P)}_k$ implied by (\ref{plambdatimes}) for the composite Young diagram $(R,P)$ are \cite{ML,MMHopf} \be p_k^{*(R,P)} = q^{2{|R|-|P|\over N}k}\left(p_k^*+ \frac{1}{A^k}\cdot \sum_{j=1}^{l_{\!_R}} q^{(2j-1)k}\cdot(q^{-2kr_j}-1) + A^k\cdot \sum_{i=1}^{l_P}q^{(1-2i)k}\cdot(q^{ 2kp_i}-1) \right) \label{compolocus} \ee the quantum dimensions are \cite{MMHopf} \be\label{Dims} D_{(R,P)} = D_{_R}(N-l_{\!_P})\, D_{_P}(N-l_{\!_R})\, \frac{ \prod_{i=1}^{l_{\!_R}}[N-l_{\!P}-i]!\prod_{i'=1}^{l_{\!_P}}[N-l_{\!R}-i']! } {\prod_{i=1}^{l_{\!_R}+l_{\!_P} } [N-i]!} \, \prod_{i=1}^{l_{\!_R}}\prod_{i'=1}^{l_{\!_P}} [N+r_i+p_{i'}+1-i-i'] \ee where $[...]$ denotes quantum numbers, and the corresponding Schur functions are \cite{Koike,Kanno,MMHopf} \be \Sch_{(R,P)}\{p^{*(T,S)}\}= \sum_{\eta\in R\cap P^{\vee} } (-)^{|\eta|}\cdot\Sch_{R/\eta}\{p^{*(T,S)}\} \cdot \Sch_{P/\eta^{\vee}}\{p^{*(T,S)}(A^{-1},q^{-1})\} \label{compoSchur} \ee where $\Sch_{R/\eta}$ denotes the skew Schur function. Note that the ``mirror-reflecting'' substitution $(A,q)\to (A^{-1},q^{-1})$ in (\ref{compoSchur}) could be replaced just by the transposition of Young diagrams, except for the $U(1)$-factor $q^{2{|R|-|P|\over N}k}$, which has to be changed accordingly. \section{Resolved conifold: Hopf link versus $L_{8n8}$} Surprisingly or not, the topological vertex formalism provides expressions which are different from (\ref{L8n8}), but in an interesting way. The four-leg diagrams from section \ref{coni} can also be considered as two glued 3-leg topological vertices with summation over intermediate states.
The topological vertex, which is used in all-genus topological string calculations on toric Calabi-Yau 3-folds, is given by \cite{Aganagic:2003db, Okounkov:2003sp}\footnote{One can make the change $q\to 1/q$ so that the Schur polynomials would contain exponentials $q^{-\rho_0}$ like in the formulas of sec.3. This would require the identification $Q=A^{-2}$, reversing the arrows in the figure below, and interchanging the representations in the composite representation: $(R,P)\to (P,R)$.} \beq C_{\xi\mu\lambda} (q) = q^{\varkappa(\lambda)}\cdot \Sch_{\mu}( q^{\rho_0}) \sum_{\eta} \Sch_{\xi/\eta} (q^{\mu + \rho_0})\cdot \Sch_{\lambda^\vee/\eta} (q^{\mu^\vee+ \rho_0}), \label{TV} \eeq where $\varkappa(\lambda) = 2\sum_{i,j\in\lambda}(j-i)$, and the time variables are \be p_k^{(\mu)} = p_k(q^{\mu+\rho_0}) = \sum_{j=1}^\infty q^{(2\mu_j-2j+1)k} = \frac{1}{q^{k}-q^{-k}} + \sum_{j=1}^\infty q^{(1-2j)k}(q^{2\mu_jk}-1)= \frac{1}{q^{k}-q^{-k}}+(q^k-q^{-k})\sum_{i,j\in\mu} q^{2k(j-i)}, \label{pmu} \ee Note that they are independent of $A$ and different from $p^{*\mu}_k$ in (\ref{plambdatimes}). In topological string theory, the parameter $q$ is associated with the string coupling (genus expansion parameter) $g_s$ by $q= e^{-g_s/2}$. Since $\eta$ in the skew characters should be a sub-diagram of $\lambda$, the sum over $\eta$ is actually finite. Though it is not manifest in \eqref{TV}, $C_{\xi^\vee\mu\lambda^\vee}(q) \cdot q^{-\varkappa(\mu)}$ is symmetric under the cyclic permutation of $(\xi, \mu, \lambda)$.
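Since only finitely many rows of $\mu$ are non-empty, the two expressions for $p_k^{(\mu)}$ in (\ref{pmu}), the infinite exponential sum and the finite sum over the cells of $\mu$, can be compared directly. A short floating-point sketch (ours; the illustrative value $q=2>1$ makes the infinite sum convergent):

```python
q = 2.0  # |q| > 1, so q^{(1-2j)k} -> 0 and the infinite sum converges

def p_inf(mu, k, J=400):
    """Truncation of sum_{j>=1} q^{(2 mu_j - 2j + 1)k} (mu padded with zeros)."""
    mu = list(mu) + [0] * (J - len(mu))
    return sum(q**((2*mu[j-1] - 2*j + 1)*k) for j in range(1, J + 1))

def p_fin(mu, k):
    """Closed form: 1/(q^k-q^{-k}) + (q^k-q^{-k}) sum_{(i,j) in mu} q^{2k(j-i)}."""
    cells = [(i, j) for i, m in enumerate(mu, 1) for j in range(1, m + 1)]
    return 1/(q**k - q**(-k)) + (q**k - q**(-k)) * sum(q**(2*k*(j - i)) for i, j in cells)

for mu in [(), (1,), (2, 1), (3, 3, 1)]:
    for k in (1, 2):
        assert abs(p_inf(mu, k) - p_fin(mu, k)) < 1e-9
```

For $\mu=\varnothing$ both reduce to $1/(q^k-q^{-k})$, the empty-diagram value used repeatedly below.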
\unitlength 2mm \begin{picture}(40,30)(-20,2) \thicklines \put(10,12){\vector(0,-1){5}} \put(10,12){\line(-1,0){5}} \put(10,12){\line(1,1){4}} \put(10,3){\line(0,1){5}} \put(0,12){\vector(1,0){6}} \put(18,20) {\vector(-1,-1){5}} \put(18,20){\line(1,0){5}} \put(28,20){\vector(-1,0){5}} \put(18,30){\line(0,-1){5}} \put(18,20){\vector(0,1){6}} \put(13,17){$\xi$} \put(23,18){$\lambda_1$} \put(5,14){$\lambda_2$} \put(19,25){$\mu_1$} \put(7,7){$\mu_2$} \end{picture} \unitlength 0.35mm \noindent The \lq\lq four-point function\rq\rq\ on the resolved conifold geometry is given by gluing two topological vertices: \beq Z_{\mu_1, \mu_2 ; \lambda_1, \lambda_2} = \sum_\xi (-Q)^{\vert \xi \vert} C_{\xi \mu_1^\vee \lambda_1}(q) C_{\xi^\vee \mu_2^\vee \lambda_2}(q), \label{4pt} \eeq where $Q= e^{t}$ and $t$ is the K\"ahler parameter of the rational curve $\mathbf{P}^1$ represented by the internal edge. The prescription of large $N$ duality tells us that the 't Hooft coupling is $t = N g_s = \frac{2\pi i N}{N+k}$, and we may identify $Q=q^{2N}=A^2$. In the precise formulation, the diagram is equipped with arrows which distinguish between the representations and their transposes. In writing down \eqref{4pt}, we set the clockwise order of edges at each vertex. If the edge is outgoing, we have to use the transpose of the Young diagram attached to the edge. To make the summation over $\xi$ easy, we put $\xi$ in the first position by using the cyclic symmetry of the topological vertex. In the toric diagram, the slope of the edge corresponds to a cycle of the $T^2$ fibration; in particular, the horizontal (vertical) line corresponds to the $(1, 0)$ (respectively, $(0,1)$) cycle. Hence we expect that $\mu_{1,2}$ and $\lambda_{1,2}$ are linked with linking number $\pm 1$, while $\mu_{1}$ ($\lambda_1$) and $\mu_{2}$ ($\lambda_2$) are parallel to each other. Note that the notation is adjusted to the purposes of the present paper, where the cyclic ordering is less important and can be made obscure.
Our main claims in this paper are: \begin{itemize} \item the normalized sum (\ref{4pt}) is equal to \be\label{main} \boxed{ \begin{array}{c} \hat{\cal G}^{L_{8n8}}_{\mu_1\times\lambda_1\times\mu_2\times\lambda_2} = \displaystyle{\frac{Z_{\mu_1,\mu_2;\lambda_1,\lambda_2}} {Z_{\varnothing,\varnothing;\varnothing, \varnothing}}} =(-A)^{|\mu_1|+|\mu_2|}q^{\varkappa(\lambda_1)+ \varkappa(\lambda_2)+\varkappa(\mu_1)+\varkappa(\mu_2)}\cdot D_{(\mu_1,\mu_2)}\times \cr \cr \times\sum_{\sigma,\eta_1,\eta_2}(-A^2)^{|\eta_1|+|\eta_2|-|\sigma|}\cdot \Sch_{\lambda_1^\vee/\eta_1}\{p^{(\mu_1^\vee)}\}\cdot\Sch_{\lambda_2^\vee/\eta_2}\{p^{(\mu_2^\vee)}\} \cdot \Sch_{\eta_1^\vee/\sigma }\{p^{(\mu_2)}\} \cdot\Sch_{\eta_2^\vee/\sigma^\vee}\{p^{(\mu_1)}\} \end{array} } \ee where the sums run over $\eta_1\subset\lambda_1^\vee$, $\eta_2\subset\lambda_2^\vee$ and $\sigma\subset\eta_1\cap\eta_2$. The quantum dimension of the composite representation, $D_{(\mu_1,\mu_2)}$, is given in (\ref{Dims}). \item instead of the full unreduced HOMFLY invariant of the link $L_{8n8}$, which in this case is given by (\ref{L8n8}) and contains many terms: \be {\cal H}^{L_{8n8}}_{\lambda_1\times\mu_1\times\lambda_2\times\mu_2} = \sum_{\stackrel{\lambda\in \lambda_1\otimes\bar\lambda_2}{\mu\in \mu_1\otimes\bar\mu_2}} N^{\lambda}_{\lambda_1\lambda_2}\cdot N^\mu_{\mu_1\mu_2}\cdot {\cal H}^{\rm Hopf}_{\lambda\times\mu} \ee the sum (\ref{main}) provides the contribution of only the maximal representations $\lambda_{\rm max} \in \lambda_1\otimes\bar\lambda_2$ and $\mu_{\rm max} \in \mu_1\otimes\bar\mu_2$ so that \be \boxed{ {\cal G}^{L_{8n8}}_{\lambda_1\times\mu_1\times\lambda_2\times\mu_2} = {\cal H}^{\rm Hopf}_{(\lambda_1,\lambda_2)\times(\mu_1,\mu_2)}=D_{(\mu_1,\mu_2)}\cdot \Sch_{(\lambda_1,\lambda_2)}\{p^{_*(\mu_1,\mu_2)}\} } \label{calG} \ee Here ${\cal G}$ differs from $\hat{\cal G}$ in (\ref{main}) by changing the framing to the standard one and taking into account the $U(1)$-factor: \be\label{factors} {\cal
G}:=(-1)^{|\lambda_1|+|\lambda_2|+|\mu_1|+|\mu_2|}\cdot q^{-C_2(\lambda_1)-C_2(\lambda_2)-C_2(\mu_1)-C_2(\mu_2)}\cdot q^{2(|\lambda_1|-|\lambda_2|)(|\mu_1|-|\mu_2|)\over N}\cdot \hat {\cal G} \ee Here $C_2(R)=\varkappa(R)+N|R|$ is the eigenvalue of the second Casimir operator in the representation $R$, and we took into account that, for the composite representations, $C_2((R,S))=C_2(R)+C_2(S)$, \cite{GW}. \end{itemize} In the next two sections, we explain how to derive formulas (\ref{main}) and (\ref{calG}), and, in the Appendix, how the identity (\ref{calG}) works. \section{Proof of (\ref{main}) \label{sumfor}} Here we present summation formulas that lead to formula (\ref{main}). We start with the simplest example: when a diagram at an external line is empty, say, $\lambda =\varnothing$, then the sum over $\eta$ in (\ref{TV}) is restricted to $\eta=\varnothing$ and \be C_{\xi\mu\varnothing} =\Sch_\mu(q^{\rho_0})\cdot \Sch_\xi(q^{\mu+\rho_0}) \label{lambda0first} \ee Then (\ref{4pt}) implies: \be Z_{\mu_1,\mu_2;\varnothing,\varnothing} \sim \sum_\xi (- Q)^{|\xi|} \cdot\Sch_\xi(q^{\mu_1+\rho_0}) \cdot \Sch_{\xi^{\vee}}(q^{\mu_2 +\rho_0}) = \exp\left(-\sum_k \frac{Q^k \,p_k^{(\mu_1)}p_k^{(\mu_2)}}{k}\right) \ee Substituting (\ref{pmu}), we get for the $\mu$-dependent factor \be \exp\left(-\sum_k \frac{Q^k \,p_k^{(\mu_1)}p_k^{(\mu_2)}}{k}\right) \sim \exp \left\{-\sum_k \frac{Q^k }{k} \left( \sum_{j} \frac{q^{\mu^1_jk}-q^{-\mu^1_jk}}{q^k-q^{-k}}\cdot q^{(\mu^1_j-2j+1)k} +\sum_j \frac{q^{\mu_j^2k }-q^{-\mu^2_jk }}{q^k-q^{-k}}\cdot q^{(\mu^2_j-2j+1)k} +\right.\right.\nn\\ \left.\left. + \sum_{j_1,j_2} (q^{2\mu^1_{j_1}k}-1)(q^{2\mu^2_{j_2}k}-1)q^{-2(j_1+j_2-1)k} \right) \right\} \ee For particular diagrams $\mu_1$ and $\mu_2$, the ratios in this formula become polynomials, and the entire expression is a product of factors $(1-Qq^{2n})$ with some $\mu$-dependent powers $n$.
For example, if $\mu_1=\mu_2=[1]$, there are just two factors: \be \exp\left(-\sum_k \frac{Q^k \,p_k^{([1])}p_k^{([1])}}{k}\right) & \sim & \exp\left\{-\sum_k \frac{Q^k}{k} \Big(\overbrace{2+(q^{2k}-1)^2q^{-2k}}^{q^{2k}+q^{-2k}}\Big)\right\} =\\ &=&(1-Qq^2)(1-Qq^{-2}) = A^2\Big(Aq-(Aq)^{-1}\Big)\Big(Aq^{-1}-A^{-1}q\Big)=A^2(q-q^{-1})^2D_{\rm adj} \nonumber \ee Generally, restoring the $\mu$-independent factor \be \exp\left(-\sum_k {1\over k}\cdot\frac{Q^k }{(q^k-q^{-k})^2}\right)=\prod_{i=1}^\infty (1-Qq^{-2i})^i \ee one obtains\footnote{Alternatively, one can derive this formula using, instead of $p_k^{(\mu)}$, the components of $q^{\mu+\rho_0}=q^{2\mu_j-2j+1}$ in the Cartan plane, (\ref{pmu}). Then, the l.h.s. of (\ref{exp}) is a product \be \exp\left(-\sum_k \frac{Q^k \,p_k^{(\mu_1)}p_k^{(\mu_2)}}{k}\right)=\prod_{i,j\ge 1}\Big(1-Qq^{2(\mu^1_i+\mu_j^2-i-j+1)}\Big) \ee which straightforwardly gives the r.h.s. of (\ref{exp}), see \cite{EC} for further details.} \be\label{exp} \exp\left(-\sum_k \frac{Q^k \,p_k^{(\mu_1)}p_k^{(\mu_2)}}{k}\right) = (-A)^{|\mu_1|+|\mu_2|} q^{\varkappa(\mu_1)+\varkappa(\mu_2)\over 2}h_{\mu_1}h_{\mu_2} D_{(\mu_1,\mu_2)}\prod_{i=1}^\infty (1-Qq^{-2i})^i \ee Here $h_\mu:=\prod_{i,j\in\mu}(q^{h_{i,j}}-q^{-h_{i,j}})$, where $h_{i,j}$ is the length of the hook $(i,j)$, and the quantum dimension $D_{(\mu_1,\mu_2)}$ is given in (\ref{Dims}). Let us now note that \be\label{spec0} \Sch_{\mu}( q^{\rho_0})={q^{\varkappa(\mu)\over 2}\over h_\mu} \ee Hence, we finally obtain \be\label{35} { Z_{\mu_1,\mu_2;\varnothing,\varnothing}\over Z_{\varnothing,\varnothing;\varnothing,\varnothing}}= (-A)^{|\mu_1|+|\mu_2|} q^{\varkappa(\mu_1)+\varkappa(\mu_2)}D_{(\mu_1,\mu_2)} \ee which is a particular case of (\ref{main}) and, being proportional to the quantum dimension of $(\mu_1,\mu_2)$, is consistent with (\ref{calG}) at $\lambda_1=\lambda_2=\varnothing$.
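As a sanity check, the $\mu_1=\mu_2=[1]$ factorization above, $(1-Qq^2)(1-Qq^{-2})=A^2(q-q^{-1})^2[N+1][N-1]$ at $Q=A^2$, can be verified symbolically; the following sketch assumes the \texttt{sympy} library and expresses the quantum numbers $[N+j]$ through $A=q^N$:

```python
# Symbolic check of the mu_1 = mu_2 = [1] example; assumes sympy.
import sympy as sp

A, q = sp.symbols('A q')
Q = A**2                                    # Q = A^2, with A = q^N

def qnum(j):
    # quantum number [N+j], written through A = q^N
    return (A*q**j - 1/(A*q**j)) / (q - 1/q)

lhs = (1 - Q*q**2) * (1 - Q*q**-2)
D_adj = qnum(1) * qnum(-1)                  # D_adj = [N+1][N-1]
rhs = A**2 * (q - 1/q)**2 * D_adj
assert sp.simplify(lhs - rhs) == 0
```

The equality here is exact (no dropped monomial factors), in agreement with the exact form of (\ref{35}) for this case.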
\bigskip When the diagrams $\lambda$ are also non-trivial, the diagrams $\eta$ can be non-trivial as well, and the summation over the intermediate representation $\xi$ can be performed using the Cauchy formula for skew Schur functions \be\label{sumC} &&\sum_{\xi} (-Q)^{|\xi|} \cdot\Sch_{\xi/\eta_1}\{p\}\cdot\Sch_{\xi^\vee/\eta_2}\{p'\}=\\ &=&\exp\left(-\sum_k \frac{Q^kp_kp_k'}{k}\right) \cdot \sum_\sigma (-Q)^{|\eta_1|+|\eta_2|-|\sigma|}\cdot \Sch_{\eta_1^\vee /\sigma}\{p'\}\cdot\Sch_{\eta_2^\vee /\sigma^\vee}\{p\} \nonumber \ee Now formulas (\ref{4pt}), (\ref{TV}), along with (\ref{exp}) and (\ref{sumC}), give rise to (\ref{main}). \section{Proof of (\ref{calG})} Now let us prove that the triple sum (\ref{main}) reduces to the Hopf polynomial in the composite representation in the form (\ref{compoSchur}). To this end, we use the formula \be\label{skew} \Sch_{R/T}\{p^{(1)}+p^{(2)}\}=\sum_P\Sch_{R/P}\{p^{(1)}\}\cdot\Sch_{P/T}\{p^{(2)}\} \ee Let us start studying formula (\ref{main}) with the case of $\lambda_2 = \mu_2 =\varnothing$.
In this case, the sum in (\ref{main}) reduces to \be \sum_{\eta}(-A^2)^{|\eta|}\cdot\Sch_{\lambda_1^\vee/\eta}\{p^{(\mu_1^\vee)}\}\cdot\Sch_{\eta^\vee}\{p^{(\varnothing)}\}&=& \sum_{\eta}A^{2|\eta|}\cdot\Sch_{\lambda_1/\eta^\vee}\{-p^{(\mu_1^\vee)}\}\cdot\Sch_{\eta^\vee}\{p^{(\varnothing)}\}=\nonumber\\ =\sum_{\eta}A^{|\lambda_1|}\cdot\Sch_{\lambda_1/\eta^\vee}\{-A^{-k}p^{(\mu_1^\vee)}_k\}\cdot\Sch_{\eta^\vee} \left\{{A^k\over q^k-q^{-k}}\right\}&=&A^{|\lambda_1|}\cdot\Sch_{\lambda_1}\left\{{A^k\over q^k-q^{-k}}-A^{-k}p^{(\mu_1^\vee)}\right\}=\nonumber\\ =A^{|\lambda_1|}\cdot\Sch_{\lambda_1}\left\{p^*_k + A^{-k}\sum_i q^{(2i-1)k}(q^{-2k\mu^1_i}-1)\right\}&=& q^{-{2|\lambda_1||\mu_1|\over N}}A^{|\lambda_1|}\cdot\Sch_{\lambda_1}\left\{p^{*\mu_1}\right\} \label{40} \ee where we used that \be\label{id} \Sch_{R^\vee/T^\vee}\{p_k\}=(-1)^{|R|+|T|}\Sch_{R/T}\{-p_k\}, \ \ \ \ \ \ p^{(\mu_1^\vee)}_k=(-1)^{k+1}p^{(\mu_1)}_k\Big|_{q\to -1/q} \ee and formula (\ref{skew}) with $T=\varnothing$. Taking into account all the factors in (\ref{main}) and (\ref{factors}), we immediately obtain from (\ref{calG}) formula (\ref{10}) for the Hopf link. 
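The first relation in (\ref{id}) can be checked directly at low levels, where the Schur functions have explicit expressions through the power sums ($\Sch_{[2]}=\frac{1}{2}(p_1^2+p_2)$, $\Sch_{[1,1]}=\frac{1}{2}(p_1^2-p_2)$); a minimal check with \texttt{sympy}, for partitions of size at most $2$ and $T=\varnothing$:

```python
# Check Sch_{R^vee}{p} = (-1)^{|R|} Sch_{R}{-p} for |R| <= 2;
# Schur functions written through the power sums p1, p2. Assumes sympy.
import sympy as sp

p1, p2 = sp.symbols('p1 p2')
S = {(): sp.Integer(1), (1,): p1,
     (2,): (p1**2 + p2)/2, (1, 1): (p1**2 - p2)/2}
transpose = {(): (), (1,): (1,), (2,): (1, 1), (1, 1): (2,)}

for lam, f in S.items():
    lhs = S[transpose[lam]]                          # Sch_{lambda^vee}{p}
    rhs = (-1)**sum(lam) * f.subs({p1: -p1, p2: -p2}, simultaneous=True)
    assert sp.expand(lhs - rhs) == 0
```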
\bigskip In complete analogy, in the general case, using (\ref{skew}) we can evaluate in (\ref{main}) the sum over $\eta_1$ \be &&\hspace{-1cm}\sum_{\eta_1}(-A^2)^{|\eta_1|}\cdot \Sch_{\lambda_1^\vee/\eta_1}\{p^{(\mu_1^\vee)}\}\cdot \Sch_{\eta_1^\vee/\sigma }\{p^{(\mu_2)}\}= A^{|\sigma|}(-A)^{|\lambda_1|}\cdot\sum_{\eta_1} \Sch_{\lambda_1/\eta_1^\vee}\{-A^{-k}p^{(\mu_1^\vee)}_k\}\cdot \Sch_{\eta_1^\vee/\sigma }\{A^kp^{(\mu_2)}_k\}=\nonumber\\ &=&A^{|\sigma|}(-A)^{|\lambda_1|}\cdot \Sch_{\lambda_1/\sigma}\{A^kp^{(\mu_2)}_k-A^{-k}p^{(\mu_1^\vee)}_k\}= q^{-{2(|\lambda_1|-|\sigma|)(|\mu_1|-|\mu_2|)\over N}}A^{|\sigma|}(-A)^{|\lambda_1|}\cdot \Sch_{\lambda_1/\sigma}\{p^{*(\mu_1,\mu_2)}\} \ee and, similarly, we evaluate the sum over $\eta_2$: \be \sum_{\eta_2}(-A^2)^{|\eta_2|}\cdot \Sch_{\lambda_2^\vee/\eta_2}\{p^{(\mu_2^\vee)}\}\cdot \Sch_{\eta_2^\vee/\sigma^\vee }\{p^{(\mu_1)}\}= q^{{2(|\lambda_2|-|\sigma|)(|\mu_1|-|\mu_2|)\over N}}A^{|\sigma|}(-A)^{|\lambda_2|}\cdot \Sch_{\lambda_2/\sigma^\vee}\{p^{*(\mu_1,\mu_2)}(A^{-1},q^{-1})\} \ee Hence, we are left with \be (-A)^{|\lambda_1|+|\lambda_2|}q^{{2(|\lambda_2|-|\lambda_1|)(|\mu_1|-|\mu_2|)\over N}}\sum_\sigma (-1)^{|\sigma|}\cdot\Sch_{\lambda_1/\sigma}\{p^{*(\mu_1,\mu_2)}\}\cdot \Sch_{\lambda_2/\sigma^\vee}\{p^{*(\mu_1,\mu_2)}(A^{-1},q^{-1})\} \ee which is exactly the sum in (\ref{compoSchur}). Now, taking into account all the factors in (\ref{main}) and (\ref{factors}), we obtain (\ref{calG}).
\section{Symmetry properties of Hopf polynomials} The main tools of Hopf calculus in \cite{tangcalc} are the recursion (\ref{HvsHH}), which is a characteristic feature of characters and thus implies (\ref{HopfthroughSchur}), and a peculiar property of the Hopf link, \be {\cal H}^{\rm Hopf}_{R_1\times\bar R_2}(A,q) = {\cal H}^{\rm Hopf}_{R_1\times R_2}(A^{-1},q^{-1}) \label{conjHopf} \ee which is obvious from the picture \begin{picture}(300,100)(-120,-50) \qbezier(-30,0)(-30,30)(0,30) \qbezier(-30,0)(-30,-30)(0,-30) \qbezier(30,0)(30,-30)(0,-30) \qbezier(30,0)(30,20)(22,24) \put(30,0){ \qbezier(-30,0)(-30,30)(0,30) \qbezier(30,0)(30,30)(0,30) \qbezier(30,0)(30,-30)(0,-30) \qbezier(-30,0)(-30,-20)(-22,-24) } \put(0,1){\vector(0,-1){2}} \put(30,-1){\vector(0,1){2}} {\footnotesize \put(3,-5){\mbox{$\bar R_2$}} \put(18,2){\mbox{$R_1$}} } \put(85,-2){\mbox{$=$}} \put(150,0){ \qbezier(-30,0)(-30,30)(0,30) \qbezier(-30,0)(-30,-30)(0,-30) \qbezier(30,0)(30,30)(0,30) \qbezier(30,0)(30,-20)(22,-24) \put(30,0){ \qbezier(-30,0)(-30,-30)(0,-30) \qbezier(30,0)(30,30)(0,30) \qbezier(30,0)(30,-30)(0,-30) \qbezier(-30,0)(-30,20)(-22,24) } \put(0,-1){\vector(0,1){2}} \put(30,-1){\vector(0,1){2}} {\footnotesize \put(3,-5){\mbox{$R_2$}} \put(18,2){\mbox{$R_1$}} } } \end{picture} \noindent supplemented by the property of the ${\cal R}$-matrix that its inversion is equivalent to the inversion of $A$ and $q$. Note that the equality (\ref{conjHopf}) holds specifically for the Hopf link, even though ${\cal R}_{R_1\times\bar R_2}(A,q)\neq {\cal R}_{R_1\times R_2}(A^{-1},q^{-1})$.
Other symmetry properties of the Hopf polynomials in the case of composite representations are \be\label{Id1} {\cal H}^{\rm Hopf}_{(R,P)\times S} = {\cal H}^{\rm Hopf}_{ (P,R)\times S}(A^{-1},q^{-1}) \ee which follows from the relation $\overline{(R,P)}=(P,R)$, and \be\label{LRD} {\cal H}^{\rm Hopf}_{(R,P)\times (S,T)} =(-1)^{|R|+|P|+|S|+|T|}q^{4(|R|-|P|)(|S|-|T|)\over N} {\cal H}^{\rm Hopf}_{ (R^\vee,P^\vee)\times (S^\vee,T^\vee)}(A,q^{-1}) \ee which is a Hopf version of the standard level-rank duality \cite{LRD} \be H_{\{R_i\}}=H_{\{R_i^\vee\}}(A,q^{-1})\nonumber \ee the latter being valid for any HOMFLY invariant, a fact from group representation theory. Formula (\ref{LRD}) follows from (\ref{30}) and (\ref{compoSchur}) using (\ref{compolocus}) and (\ref{id}). From the three identities (\ref{conjHopf})-(\ref{LRD}), it follows that \be {\cal H}^{\rm Hopf}_{(R,P)\times [1]} =(-1)^{|P|+|R|+1}q^{4(|P|-|R|)\over N} {\cal H}^{\rm Hopf}_{ (P^\vee,R^\vee)\times [1]}(A^{-1},q) \label{RPPR} \ee In fact, the change of variable $A \to A^{-1}$ is natural from the viewpoint of the conifold geometry. Under the flop operation on the conifold, which flips the sign of the K\"ahler parameter and hence maps $A \to A^{-1}$, the representation attached to $\lambda_1=[1]$ changes from $\mu_1$ to $\mu_2$; hence the identity. While formulas of the type (\ref{HopfthroughSchur}) and their generalizations to the composite representations \cite{Kanno,MMHopf}, as well as the Rosso-Jones formula, are convenient for general studies of the Hopf link, the most efficient tool for obtaining general explicit formulas is at the moment the tangle calculus of \cite{tangcalc}. Making use of these identities and one additional calculation for ${\cal H}^{\rm Hopf}_{{\rm adj}\times [s]}$, the old knowledge of the Hopf polynomials in two symmetric representations ${\cal H}^{\rm Hopf}_{[r]\times[s]}$ can be extended to the case of one arbitrary diagram and one symmetric.
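At the level of quantum dimensions (i.e. for the unknot), the level-rank duality reduces to $D_R(A,q)=D_{R^\vee}(A,q^{-1})$; a quick symbolic check for $R=[2]$ with \texttt{sympy}, writing $[N+j]$ through $A=q^N$:

```python
# Level-rank duality at the level of quantum dimensions:
# D_[2](A, q) = D_[1,1](A, 1/q). Assumes sympy.
import sympy as sp

A, q = sp.symbols('A q')

def qnum(j):                                # quantum number [N+j], via A = q^N
    return (A*q**j - 1/(A*q**j)) / (q - 1/q)

two = q + 1/q                               # [2], invariant under q -> 1/q
D2  = qnum(0)*qnum(1)/two                   # D_[2]   = [N][N+1]/[2]
D11 = qnum(0)*qnum(-1)/two                  # D_[1,1] = [N][N-1]/[2]
assert sp.simplify(D2 - D11.subs(q, 1/q)) == 0
```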
Explicit formulas for the Hopf polynomials that we use in the Appendix are all taken from \cite{tangcalc}. \section{Rosso-Jones formula and superpolynomials} There is another, essentially different formula for the Hopf polynomial, based on the paper by M. Rosso and V. F. R. Jones \cite{RJ}: \be {\cal H}_{\lambda\times \mu}^{\rm Hopf}=q^{2|\lambda||\mu|\over N} q^{\varkappa_\lambda+\varkappa_\mu} \sum_{\eta\in \lambda\otimes \mu} N_{\lambda\mu}^\eta \cdot q^{-\varkappa_{\eta}}\cdot D_{\eta} \label{RJ} \ee In this paper, we explained how the representation of the Hopf polynomial as a specialization of the character, (\ref{HopfthroughSchur}), is directly related to the topological vertices. However, to the best of our knowledge, a direct relation of either of these two to the Rosso-Jones formula (\ref{RJ}) is not known. All known proofs of their equivalence to this formula proceed through an additional averaging procedure, either in Chern-Simons theory \cite{Lab}, or via a (related) multiple integral representation \cite{BEMT}, or using a scalar product of two characters defined in some other way \cite{EK}. There is, certainly, a derivation based on modular categories \cite{MK}, i.e. basically on the Verlinde formula \cite{Ver}; however, it is not straightforward either. As a particular manifestation of this problem, when generalizing the Rosso-Jones formula to other knots and links, one cannot obtain a counterpart of the character specialization (see, however, \cite{HL}). Thus, we have a diagram of the type \bigskip $$ \begin{array}{ccccc} \hbox{Schur representation (\ref{HopfthroughSchur})}&&\xleftarrow{\hspace*{1cm}}&&\hbox{Averaging procedure}\\ &\nwarrow&&\nearrow &\\ \Bigg\updownarrow&&\boxed{\hbox{Hopf polynomial}}&&\Bigg\downarrow\\ &\swarrow&&\searrow&\\ \hbox{Topological vertex approach}&&&&\hbox{Rosso-Jones formula} \end{array} $$ \bigskip \noindent It would be interesting to restore the missing arrows.
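Still, the two sides of the missing arrow can be compared directly in the simplest case $\lambda=\mu=[1]$: with the $U(1)$-factors omitted on both sides, the Rosso-Jones sum over $\eta\in\{[2],[1,1]\}$ reproduces the character specialization $D_{[1]}\cdot\Sch_{[1]}\{p^{*[1]}\}$ with $p^{*[1]}_1=[N]-A^{-1}(q-q^{-1})$, as follows from the formulas above. A \texttt{sympy} sketch of this check:

```python
# Rosso-Jones vs character specialization for the Hopf link, lambda = mu = [1],
# with U(1)-factors dropped on both sides; assumes sympy, A = q^N.
import sympy as sp

A, q = sp.symbols('A q')

def qnum(j):                                  # quantum number [N+j], via A = q^N
    return (A*q**j - 1/(A*q**j)) / (q - 1/q)

two = q + 1/q                                 # [2]
D2, D11 = qnum(0)*qnum(1)/two, qnum(0)*qnum(-1)/two
# Rosso-Jones sum: eta in {[2],[1,1]}, kappa([2]) = 2, kappa([1,1]) = -2
rj = q**-2 * D2 + q**2 * D11
# character side: D_[1] * Sch_[1]{p*^{[1]}}, p*_1^{[1]} = [N] - (q - 1/q)/A
char = qnum(0) * (qnum(0) - (q - 1/q)/A)
assert sp.simplify(rj - char) == 0
```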
What is important, however, is that the Rosso-Jones formula admits an immediate and simple generalization to any other torus knots and links \cite{RJ}, and also to the superpolynomials \cite{DMMSS}, while the other ingredients of the diagram are easily generalizable only to the superpolynomials \cite{EK,AS,IK}. Indeed, the Rosso-Jones formula for the Hopf superpolynomial looks like (hereafter in this section, we omit the $U(1)$-factor) \be {\cal P}_{\lambda,\mu}^{\rm Hopf} = q^{-\nu_\lambda-\nu_\mu}t^{\nu'_{\lambda}+\nu'_{\mu}} \sum_{\eta\in \lambda\otimes \mu} \mathfrak{N}_{\lambda\mu}^\eta \cdot q^{\nu_\eta}t^{-\nu'_{\eta}} \cdot {\cal M}_\eta \label{supRJHopf} \ee where $\nu_\lambda:=2\sum_i (i-1)\lambda_i$, $\nu'_\lambda:=\nu_{\lambda^\vee}$ so that $\varkappa_\lambda=\nu'_\lambda-\nu_\lambda$, and ${\cal M}_\eta$ is the Macdonald dimension of $\eta$, i.e. the specialization of the Macdonald symmetric function $M_\eta \{q,t|p\}$ at the topological locus (in time variables) $p_k=\mathfrak{p}_k^*$ \cite{DMMSS}: \be {\cal M}_\eta:=M_\eta \{q,t|\mathfrak{p}^*\},\ \ \ \ \ \ \ \mathfrak{p}_k^*={A^k-A^{-k}\over t^k-t^{-k}} \ee where we use the Gothic letters in order to stress the superpolynomial deformation. In particular, the coefficients $\mathfrak{N}_{\lambda\mu}^\eta$ are now defined as the coefficients of the expansion of products of Macdonald polynomials \be M_\lambda \{q,t|p\}\cdot M_\mu \{q,t|p\}=\sum_{\eta\in \lambda\otimes \mu}\mathfrak{N}_{\lambda\mu}^\eta\cdot M_\eta \{q,t|p\} \ee In contrast with the Littlewood-Richardson coefficients, $\mathfrak{N}_{\lambda\mu}^\eta$ are not necessarily integers.
A counterpart of formula (\ref{HopfthroughSchur}) in the superpolynomial form reads \be {\cal P}_{\lambda,\mu}^{\rm Hopf} = M_\lambda(t^{-\rho})\cdot M_\mu(q^{-\lambda}t^{-\rho})={\cal M}_\lambda\cdot M_\mu(q^{-\lambda}t^{-\rho}) \label{supthroughSchur1} \ee These symmetric functions of the components of vectors in the Cartan plane can again be rewritten in terms of the time variables \be \mathfrak{p}_k^{*\lambda} = \mathfrak{p}^*_k - A^{-k}(q^k-q^{-k})\sum_{i,j\in\lambda}t^{k(2i-1)}q^{k(1-2j)}=\mathfrak{p}^*_k + A^{-k}\sum_i t^{(2i-1)k}(q^{-2k\lambda_i}-1) \ee where, as above, the Gothic letter refers to the superpolynomial deformation: \be M_\mu(q^{-\lambda}t^{-\rho})=M_\mu \{q,t|\mathfrak{p}_k^{*\lambda}\} \ee Thus, (\ref{supthroughSchur1}) can be written in the form \be\label{supHopf} {\cal P}_{\lambda,\mu}^{\rm Hopf} ={\cal M}_\lambda\cdot M_\mu \{q,t|\mathfrak{p}_k^{*\lambda}\} \ee The equivalence of the two representations for the Hopf superpolynomial, (\ref{supRJHopf}) and (\ref{supHopf}), is again proved through an intermediate averaging representation \cite{EK,AS,IK}. Now one can repeat, in this superpolynomial case, the machinery developed in the present paper \cite{AKMMM}. \section{Conclusion} In this letter, we considered an elementary example of the tangle calculus of \cite{tangcalc}: the quadratic recursion formula (\ref{HvsHH}) for Hopf polynomials, which immediately follows from the pictorial gluing of the free ends of the Hopf tangle. We explained the relation to the traditional description of Hopf polynomials, which provides an algebraic proof of the recursion identity from the multiplication of Schur functions. This proof is easily lifted to superpolynomials, and the emerging identity suggests that the tangle calculus can be extended to this area, even though the Reshetikhin-Turaev formalism, of which it is a simple corollary, is no longer applicable in this case in any known way.
This adds new evidence to the similarly surprising results of \cite{DMMSS} and \cite{AnoMevoKR} about the survival of the evolution property for the torus superpolynomials and even for the torus Khovanov-Rozansky polynomials at finite $N$. As another application, we described the relation between the link polynomial for $L_{8n8}$ and the Hopf polynomial. To summarize this application: the link $L_{8n8}$, which one could naturally associate with the 4-point toric diagram, is distinguished in knot theory by the existence of two dual descriptions: as a closed necklace made from 4 unknots and as a Hopf link in reducible representations $ (\mu_1\otimes\mu_2)\times(\lambda_1\otimes\lambda_2)$. \begin{picture}(300,180)(-200,-135) \qbezier(-40,0)(-40,20)(0,20) \qbezier(40,0)(40,20)(0,20) \qbezier(-40,-10)(-40,-30)(0,-30) \qbezier(40,-10)(40,-30)(0,-30) \put(0,-80){ \qbezier(-40,0)(-40,20)(0,20) \qbezier(40,0)(40,20)(0,20) \qbezier(-40,-10)(-40,-30)(0,-30) \qbezier(40,-10)(40,-30)(0,-30) } \qbezier(-40,-5)(-60,-5)(-60,-45)\qbezier(-40,-85)(-60,-85)(-60,-45) \qbezier(-40,-5)(-20,-5)(-20,-25)\qbezier(-40,-85)(-20,-85)(-20,-65) \qbezier(-19.5,-32)(-19,-45)(-19.5,-58) \qbezier(40,-5)(60,-5)(60,-45)\qbezier(40,-85)(60,-85)(60,-45) \qbezier(40,-5)(20,-5)(20,-25)\qbezier(40,-85)(20,-85)(20,-65) \qbezier(19.5,-32)(19,-45)(19.5,-58) \put(-60,-45){\vector(0,1){2}} \put(-19,-45){\vector(0,-1){2}} \put(0,20){\vector(1,0){2}} \put(0,-30){\vector(-1,0){2}} \put(60,-45){\vector(0,-1){2}} \put(19,-45){\vector(0,1){2}} \put(0,-110){\vector(-1,0){2}} \put(0,-60){\vector(1,0){2}} \put(-75,-20){\mbox{$\lambda_2$}} \put(-20,25){\mbox{$\mu_1$}} \put(65,-20){\mbox{$\lambda_1$}} \put(-20,-120){\mbox{$\mu_2$}} \end{picture} \noindent The possibility of two different descriptions allows one to double-check the formulas, see \cite{tangcalc}, but here we used only the Hopf-related expressions.
Our claim (\ref{calG}) was that the sum (\ref{4pt}) provides the contribution of the composite representations \be {\cal G}^{L_{8n8}}_{\lambda_1\times\mu_1\times\lambda_2\times\mu_2} = {\cal H}^{\rm Hopf}_{(\mu_1,\mu_2)\times (\lambda_1,\lambda_2)} \label{calG1} \ee rather than the full unreduced HOMFLY polynomial. \bigskip It is an interesting task to extend all the components of our discussion, tangle calculus, character calculus, conifold calculus and their refinement, beyond the Hopf link example. Some parts of this extension are already long-standing problems; however, the interplay between these different approaches seems to provide new tools to finally solve them and to make new steps towards building a powerful and efficient calculational technique. \section*{Acknowledgements} Our work is supported in part by Grants-in-Aid for Scientific Research (\# 17K05275) (H.A.), (\# 15H05738) (H.K.) and JSPS Bilateral Joint Projects (JSPS-RFBR collaboration) ``Topological Field Theories and String Theory: from Topological Recursion to Quantum Toroidal Algebra'' from MEXT, Japan. It is also partly supported by the grant of the Foundation for the Advancement of Theoretical Physics ``BASIS'' (A.Mor.), by RFBR grants 16-01-00291 (A.Mir.) and 16-02-01021 (A.Mor.), and by joint grants 17-51-50051-YaF and 18-51-05015-Arm (A.M.'s). \section*{Appendix. Explicit examples of formula (\ref{calG})} Here we give a series of examples that illustrate how formula (\ref{calG}) works. First of all, we will need a series of specializations of the Schur functions, some of them being an illustration of formula (\ref{spec0}): \beqa \Sch_{[1]} (q^{\rho_0}) &=& p_1(q^{\rho_0}) = (q - q^{-1} )^{-1}, \CR \Sch_{[2]} (q^{\rho_0}) &=& \frac{1}{2} \Big((p_1(q^{\rho_0}))^2 + p_2(q^{\rho_0})\Big) = (q - q^{-1} )^{-2}(1 + q^{-2} )^{-1}, \CR \Sch_{[1^2]} (q^{\rho_0}) &=& \frac{1}{2} \Big((p_1(q^{\rho_0}))^2 - p_2(q^{\rho_0})\Big) = (q - q^{-1} )^{-2}(1 + q^2 )^{-1}.
\eeqa Other answers that we will need in the examples are \beqa \Sch_{[1]} (q^{[1] + \rho_0}) &=& (q - q^{-1} ) + \Sch_{[1]} (q^{\rho_0}), \CR \Sch_{[1]} (q^{[2] + \rho_0}) &=& (1+ q^2) (q - q^{-1} ) + \Sch_{[1]} (q^{\rho_0}), \CR \Sch_{[1]} (q^{[1^2] + \rho_0}) &=& (1+ q^{-2}) (q - q^{-1} ) + \Sch_{[1]} (q^{\rho_0}). \label{2box} \eeqa \subsection*{\underline{$\lambda_1 = \lambda_2 =\varnothing$}} This case was already considered in (\ref{35}). Using formula (\ref{Dims}), we can compare \beqa E([1],[1]) &\sim & \exp\left\{-\sum_k \frac{Q^k}{k} \Big(q^{2k}+q^{-2k}\Big)\right\} =(1-Qq^2)(1-Qq^{-2})=A^2(q-q^{-1})^2[N+1][N-1]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[1] \times\varnothing\times [1] \times\varnothing} &=& [N+1][N-1] = D_{{\rm adj}} = {\cal H}^{\rm Hopf}_{([1],[1])\times\varnothing}, \CR E([2],[1]) &\sim & \exp\left\{-\sum_k \frac{Q^k}{k} \Big(q^{4k}+1+q^{-2k}\Big)\right\} =(1-Qq^4)(1-Q)(1-Qq^{-2})=\nonumber\\ &=&A^3(q-q^{-1})^3[N+2][N][N-1]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[2] \times\varnothing\times [1] \times\varnothing} &=&\frac{1}{[2]} [N+2][N][N-1] =D_{([2],[1])} = {\cal H}^{\rm Hopf}_{([2],[1])\times\varnothing}, \CR E([1,1],[1]) &\sim& \exp\left\{-\sum_k \frac{Q^k}{k} \Big(q^{2k}+1+q^{-4k}\Big)\right\} =(1-Qq^2)(1-Q)(1-Qq^{-4})=\nonumber\\ &=&A^3(q-q^{-1})^3[N+1][N][N-2]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[1,1] \times\varnothing\times [1] \times\varnothing} &=& \frac{1}{[2]} [N+1][N][N-2] = D_{([1,1],[1])} = {\cal H}^{\rm Hopf}_{([1,1],[1])\times\varnothing}, \CR E([2],[2]) &\sim& \exp\left\{-\sum_k \frac{Q^k}{k} \Big(q^{6k}+2+q^{-2k}\Big)\right\} =(1-Qq^6)(1-Q)^2(1-Qq^{-2})=\nonumber\\ &=&A^4(q-q^{-1})^4[N+3][N]^2[N-1]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[2] \times\varnothing\times [2] \times\varnothing} &=& \frac{1}{[2]^2} [N+3] [N]^2 [N-1] = D_{([2],[2])} = {\cal H}^{\rm Hopf}_{([2],[2])\times\varnothing}, \CR E([1,1],[1,1]) &\sim& \exp\left\{-\sum_k \frac{Q^k}{k} 
\Big(q^{2k}+2+q^{-6k}\Big)\right\} =(1-Qq^2)(1-Q)^2(1-Qq^{-6})=\nonumber\\ &=&A^4(q-q^{-1})^4[N+1][N]^2[N-3]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[1,1] \times\varnothing\times [1,1] \times\varnothing} &=& \frac{1}{[2]^2} [N+1] [N]^2 [N-3] = D_{([1,1],[1,1])} = {\cal H}^{\rm Hopf}_{([1,1],[1,1])\times\varnothing}, \CR E([2],[1,1]) &\sim& \exp\left\{-\sum_k \frac{Q^k}{k} \Big(q^{4k}+q^{2k}+q^{-2k}+q^{-4k}\Big)\right\} =(1-Qq^4)(1-Qq^2)(1-Qq^{-2})(1-Qq^{-4})=\nonumber\\ &=&A^4(q-q^{-1})^4[N+2][N+1][N-1][N-2]\nonumber\\ \hbox{with}\ \ \ \ {\cal G}^{L_{8n8}}_{[2] \times\varnothing\times [1,1] \times\varnothing} &=& \frac{1}{[2]^2}[N+2] [N+1] [N-1] [N-2] = D_{([2],[1,1])} = {\cal H}^{\rm Hopf}_{([2],[1,1])\times\varnothing}, \eeqa where we used the notation $\exp\left(-\sum_k \frac{Q^k \,p_k^{(\mu)}p_k^{(\nu)}}{k}\right):=E(\mu,\nu)$. \subsection*{\underline{$ \lambda_1 = [1], \lambda_2 =\varnothing$}} In this case we have \beq Z_{\mu_1, \mu_2 ; [1] ,\varnothing} = \Sch_{\mu_1} (q^{\rho_0})~\Sch_{\mu_2} (q^{\rho_0}) \sum_\xi (-Q)^{\vert \xi \vert} \sum_{\tau} \Sch_{\xi^\vee/\tau}( q^{\mu_1 + \rho_0} )~\Sch_{[1]/\tau}( q^{\mu_1^\vee+ \rho_0} ) ~\Sch_\xi( q^{\mu_2 + \rho_0} ), \eeq where we can write down the summation over $\tau$ explicitly: \beq \sum_{\tau} \Sch_{\xi^\vee/\tau}( q^{\mu_1 + \rho_0} )~\Sch_{[1]/\tau}( q^{\mu_1^\vee+ \rho_0} ) = \Sch_{\xi^\vee}( q^{\mu_1 + \rho_0} )~\Sch_{[1]}( q^{\mu_1^\vee+ \rho_0} ) + \Sch_{\xi^\vee/[1] }( q^{\mu_1 + \rho_0} ). \label{decomp} \eeq Performing the summation over $\xi$ by the Cauchy formula, we obtain \beq Z_{\mu_1, \mu_2 ; [1] ,\varnothing} = Z_{\mu_1, \mu_2 ; \varnothing ,\varnothing} \left( \Sch_{[1]} ( q^{\mu_1^\vee + \rho_0}) - Q \cdot \Sch_{[1]} (q^{\mu_2 + \rho_0}) \right). 
\eeq Now, using (\ref{2box}), we obtain \beqa {\cal G}^{L_{8n8}}_{[1] \times [1] \times [1] \times\varnothing} (A,q) &= &[N+1][N] [N-1] (q^2 -1 + q^{-2}) = {\cal H}^{\rm Hopf}_{([1],[1])\times [1]} \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [1] \times\varnothing} (A,q) &=& q^{{2/ N}}\cdot\frac{ [N+2][N][N-1]}{[2]} \Big( [3][N] - A^{-1} q^{-2} (q - q^{-1})^2 \Big) = {\cal H}^{\rm Hopf}_{([2],[1])\times [1]} \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [2] \times\varnothing} (A,q) &=& \frac{[N+3] [N]^2 [N-1]}{[2]^2} \Big( [3][N]+ [N+2] (q - q^{-1})^2 \Big) = {\cal H}^{\rm Hopf}_{([2],[2])\times [1]} \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [1,1] \times\varnothing} (A,q) &=& \frac{[N+2] [N+1][N] [N-1] [N-2]}{[2]^2} (q^4 - 1 + q^{-2}) = {\cal H}^{\rm Hopf}_{([2],[1,1])\times [1]} \nonumber \eeqa and all other cases of the Young diagrams up to level 2 are obtained from these using formulas (\ref{Id1})-(\ref{RPPR}). \subsection*{\underline{$ \lambda_1 = \lambda_2 = [1]$}} In this case we have \beqa Z_{\mu_1, \mu_2 ; [1], [1] } &=& \Sch_{\mu_1} (q^{\rho_0})~\Sch_{\mu_2} (q^{\rho_0}) \sum_\xi (-Q)^{\vert \xi \vert} \sum_{\tau} \Sch_{\xi^\vee/\tau}( q^{\mu_1 + \rho_0} )~\Sch_{[1]/\tau}( q^{\mu_1^\vee+ \rho_0} ) \CR &&~~ \times \sum_{\sigma} \Sch_{\xi /\sigma}( q^{\mu_2 + \rho_0} )~\Sch_{[1]/\sigma}( q^{\mu_2^\vee+ \rho_0} ). \eeqa Using \eqref{decomp}, we find \beq {\cal G}^{L_{8n8}}_{\mu_1 \times [1] \times \mu_2 \times [1]} = {\cal G}^{L_{8n8}}_{\mu_1 \times \varnothing \times \mu_2 \times \varnothing } \cdot z(\mu_1, \mu_2), \eeq where \beqa z(\mu_1, \mu_2) &:=& \Sch_{[1]} (q^{\mu_1^\vee + \rho_0})~\Sch_{[1]} (q^{\mu_2^\vee + \rho_0}) -Q \Big(1 + \Sch_{[1]} (q^{\mu_1 + \rho_0})~\Sch_{[1]} (q^{\mu_1^\vee + \rho_0}) \CR &&~~ + \Sch_{[1]} (q^{\mu_2 + \rho_0})~\Sch_{[1]} (q^{\mu_2^\vee + \rho_0}) \Big) + Q^2 \cdot \Sch_{[1]} (q^{\mu_1 + \rho_0})~\Sch_{[1]} (q^{\mu_2 + \rho_0}), \eeqa which is symmetric under $\mu_1 \leftrightarrow \mu_2$. 
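The specializations (\ref{2box}) used here are convergent geometric sums, $\Sch_{[1]}(q^{\mu+\rho_0})=\sum_{j\ge 1}q^{2\mu_j-2j+1}$ for $|q|>1$, and can be sanity-checked numerically by truncating the sums (a sketch in Python; the value of $q$ and the truncation length are arbitrary):

```python
# Numeric check of the specializations (2box): Sch_[1](q^{mu+rho_0}) as a
# truncated geometric sum, for |q| > 1.
q = 1.3

def schur1(mu, terms=2000):
    mu = list(mu) + [0] * terms
    return sum(q**(2*mu[j] - 2*(j + 1) + 1) for j in range(terms))

s0 = 1.0 / (q - 1/q)                                    # Sch_[1](q^{rho_0})
assert abs(schur1([]) - s0) < 1e-9
assert abs(schur1([1]) - ((q - 1/q) + s0)) < 1e-9
assert abs(schur1([2]) - ((1 + q**2)*(q - 1/q) + s0)) < 1e-9
assert abs(schur1([1, 1]) - ((1 + q**-2)*(q - 1/q) + s0)) < 1e-9
```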
Thus we obtain \beqa {\cal G}^{L_{8n8}}_{[1] \times [1] \times [1] \times [1]} &=& [N+1][N-1] \Big( -1 + [3]^2[N]^2 \Big) = {\cal H}^{\rm Hopf}_{{\rm adj}\times {\rm adj}} \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [1] \times [1]} &=& {[3]\over [2]}\ [N+2][N+1][N][N-1]\Big((q^3+q^{-3})[N]-[N+1]\Big) = {\cal H}^{\rm Hopf}_{([2],[1])\times {\rm adj}}, \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [2] \times [1]} &=& \frac{[N+3][N+1][N]^2[N-1]}{[2]^2}\Big((q^3+q^{-3})^2[N+1]-2(q^3+q^{-3})[N+2]+[N+3]\Big) = {\cal H}^{\rm Hopf}_{([2],[2])\times {\rm adj}}, \CR {\cal G}^{L_{8n8}}_{[2] \times [1] \times [1,1] \times [1]} &=& \frac{[N+2][N+1][N-1][N-2]}{[2]^2}\Big((q^3+q^{-3})[N]-[N+1]\Big)\Big((q^3+q^{-3})[N]-[N-1]\Big)={\cal H}^{\rm Hopf}_{([2],[1,1])\times {\rm adj}} \eeqa \subsection*{\underline{$ \lambda_1 = [2]~\mathrm{or}~[1,1] , \lambda_2 =\varnothing$}} For $\lambda_1 = [2]$, we have \beq Z_{\mu_1, \mu_2 ; [2] ,\varnothing} = \Sch_{\mu_1} (q^{\rho_0})~\Sch_{\mu_2} (q^{\rho_0}) \sum_\xi (-Q)^{\vert \xi \vert} \sum_{\tau} \Sch_{\xi^\vee/\tau}( q^{\mu_1 + \rho_0} )~\Sch_{[2]/\tau}( q^{\mu_1^\vee+ \rho_0} ) ~\Sch_\xi( q^{\mu_2 + \rho_0} ), \eeq where \beqa &&\sum_{\tau} \Sch_{\xi^\vee/\tau}( q^{\mu_1 + \rho_0} )~\Sch_{[2]/\tau}( q^{\mu_1^\vee+ \rho_0} ) \CR &&~~= \Sch_{\xi^\vee}( q^{\mu_1 + \rho_0} )~\Sch_{[2]}( q^{\mu_1^\vee+ \rho_0} ) + \Sch_{\xi^\vee/[1]}( q^{\mu_1 + \rho_0} )~\Sch_{[1]}( q^{\mu_1^\vee+ \rho_0} ) + \Sch_{\xi^\vee/[2] }( q^{\mu_1 + \rho_0} ). \label{decomp2} \eeqa The Cauchy formula gives \beq {\cal G}^{L_{8n8}}_{\mu_1 \times [2] \times \mu_2 \times \varnothing} = {\cal G}^{L_{8n8}}_{\mu_1 \times \varnothing \times \mu_2 \times \varnothing} \left( \Sch_{[2]} ( q^{\mu_1^\vee + \rho_0}) - Q \cdot \Sch_{[1]} (q^{\mu_1^\vee + \rho_0})~\Sch_{[1]} (q^{\mu_2 + \rho_0}) + Q^2 \cdot \Sch_{[1,1]} (q^{\mu_2 + \rho_0})\right). 
\eeq When $\mu_1 = \mu_2 = [1]$, we obtain \beqa {\cal G}^{L_{8n8}}_{[1] \times [2] \times [1] \times \varnothing} &=& A^{2} [N+1][N-1] \left( \Sch_{[2]} ( q^{[1] + \rho_0}) - Q \cdot\Sch_{[1]} (q^{[1] + \rho_0})^2 + Q^2 \cdot\Sch_{[1,1]} ( q^{[1] + \rho_0}) \right) = \CR &=&\frac{[N+1][N][N-1]}{[2]}\Big((q^3+q^{-3})[N+2]-[N+3]\Big) = {\cal H}^{\rm Hopf}_{{\rm adj}\times [2]} \eeqa On the other hand, if $\lambda_1 = [1,1]$, $\Sch_{[2]}$ and $\Sch_{[1,1]}$ switch places everywhere, and the parallel computation gives \beq A^4 \cdot {\cal G}^{L_{8n8}}_{[1] \times [1,1] \times [1] \times \varnothing} (A, q) = A^4 \cdot {\cal G}^{L_{8n8}}_{[1] \times [2] \times [1] \times \varnothing}(A, q^{-1}) = {\cal H}^{\rm Hopf}_{{\rm adj}\times [1,1]}. \eeq
\section{Introduction} Shape analysis of anatomical structures is of core importance for many tasks in medical imaging, not only as a regularization prior for segmentation tasks, but also as a powerful tool to assess differences between subjects and populations. A fundamental question when operating on shapes is how to find a suitable numerical representation for a given task. Hence, many different types of parameterizations have been proposed in the past, including point distribution models \cite{Cootes1995}, spectral signatures~\cite{Wachinger2015}, spherical harmonics \cite{gerardin2009multidimensional}, medial representations \cite{Gorczowski2007}, and diffeomorphisms \cite{miller2014diffeomorphometry}. Even though these representations have proven their utility for the analysis of shapes in the medical domain, they might not be optimal for a particular task. In recent years, deep networks have had ample success in many medical imaging tasks by learning complex, hierarchical feature representations from images. These representations have been shown to outperform \emph{hand-crafted} features in a variety of medical imaging applications \cite{Litjens2017}. One of the main reasons for the success of these methods is the use of convolutional layers, which take advantage of the shift-invariance properties of images \cite{Bronstein2017}. However, the use of deep networks in medical shape analysis is still largely unexplored, mainly because typical shape representations such as point clouds and meshes do not possess an underlying Euclidean or grid-like structure. In this work, we propose an alternative approach to perform supervised learning on medical shape data. Our method is based on PointNet~\cite{Qi2017}, a deep neural network architecture that operates directly on a point cloud and predicts a label in an end-to-end fashion. 
Point clouds present a raw and simple parameterization that avoids the complexities involved with meshes and that is trivial to obtain given a segmented surface. The network does not require the alignment of point clouds, as a spatial transformer network maps the data to a canonical space before further processing. PointNet has been proposed for object classification, where the category of a single shape is predicted. For many medical applications, however, not just a single anatomical structure is important for the prediction; instead, a simultaneous view of multiple structures is required for a more comprehensive analysis of a subject's anatomy. Hence, we propose the Multi-Structure PointNet (MSPNet), which is able to simultaneously predict a label given the shape of multiple structures. We evaluate MSPNet in two neuroimaging applications: neurodegenerative disease prediction and age regression. \subsection{Related Work} Several shape representations have previously been used for supervised learning tasks. Spherical harmonics for approximating the hippocampal shape have been proposed in \cite{gerardin2009multidimensional}. Shape information has been derived from thickness measurements of the hippocampus from a medial representation~\cite{costafreda2011automated}. Statistical shape models to detect hippocampal shape changes were proposed in \cite{shen2012detecting}. Multi-resolution shape features with non-Euclidean wavelets were employed for the analysis of cortical thickness \cite{Kim2014107}. Medial axis shape representations were used to compare the brain morphology of autistic and normal populations \cite{Gorczowski2007}. Recently, shape representations based on spectral signatures have been introduced to perform age regression and disease prediction~\cite{Wachinger2015,wachinger2016domain}. All the mentioned approaches rely on computing pre-defined shape features. 
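The permutation invariance at the core of PointNet, a shared per-point function followed by a symmetric pooling, can be illustrated by a minimal numpy sketch (the weights and layer sizes here are illustrative, not the trained network):

```python
# Minimal numpy sketch of the PointNet idea: a shared per-point MLP followed by
# symmetric max-pooling, making the output invariant to the point ordering.
# Weights and layer sizes are illustrative, not the trained network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 64))
W2 = rng.normal(size=(64, 128))

def point_features(P):                  # P: (n, 3) point cloud
    h = np.maximum(P @ W1, 0.0)         # shared MLP, applied to every point
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)                # symmetric pooling -> (128,) descriptor

P = rng.normal(size=(100, 3))
perm = rng.permutation(100)
assert np.allclose(point_features(P), point_features(P[perm]))
```

Because the pooling is a maximum over points, reordering the rows of `P` leaves the descriptor unchanged, which is why no point correspondences are needed.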
Alternatively, a variational auto-encoder was proposed to automatically extract features from 3D surfaces, which can in turn be used in a classification task~\cite{Shakeri2016}. However, in contrast to our approach, this is not end-to-end learning, since the variational auto-encoder is not directly linked to the classification task. Consequently, the learned features capture overall variation but are not directly optimized for the given task. In addition, this approach relies on computing point correspondences between meshes and focuses on a single structure, while we simultaneously model multiple structures. \def{\mathbb{R}}{{\mathbb{R}}} \def{MSPNet}{{MSPNet}} \section{Method} We propose a method for multi-structure shape analysis that is divided into two main stages: the extraction of point clouds representing the anatomy of different structures from medical images (section \ref{sec:pointcloud}), and a Multi-Structure PointNet ({MSPNet}) (section \ref{sec:architecture}). Figure~\ref{fig:network} illustrates the architecture of MSPNet, which is based on PointNet~\cite{Qi2017} and extends it to allow the simultaneous processing of multiple structures. \begin{figure*}[t] \centering\includegraphics[width=0.9\textwidth]{figures/network_2.png} \caption{MSPNet architecture. The network consists of one branch per structure (illustrated for three structures), which are fused before the final multilayer perceptron (MLP). Each structure is represented by a point cloud with $n$ points that pass through the transformer networks and multilayer perceptrons of the individual branch. Numbers in brackets are layer sizes. \label{fig:network}} \end{figure*} \noindent \subsection{Point Cloud Extraction} \label{sec:pointcloud} We extract point clouds from T1-weighted MRI images of the brain. We process the images with the FreeSurfer pipeline~\cite{Fischl2012} and obtain segmentations of multiple neuroanatomical regions.
From the resulting segmentations, point clouds are created by uniformly sampling the boundary of each brain structure. After this process, the anatomy of a subject is represented by a collection of $m$ point clouds $S=\{ P_0, P_1, \hdots P_m \}$, where each point cloud represents a structure. A point cloud is defined as a set of $n$ points $P = [\mathbf{p}_0, \mathbf{p}_1, \hdots , \mathbf{p}_n] $, where each point is a vector of Cartesian coordinates $\mathbf{p}_i = (x,y,z)$. \noindent \subsection{MSPNet Architecture}\label{sec:architecture} We aim at finding a network architecture corresponding to a function $f : S \mapsto y$, mapping a collection of shapes described by $S$ to a prediction~$y$. An overview of the network is shown in figure \ref{fig:network}. MSPNet consists of multiple branches, where each branch processes the point cloud of one structure independently. This ensures that an optimal feature representation is learned per structure. At the end, the features of all branches are merged to perform a joint prediction. Each branch can be divided into the following stages: 1) point cloud alignment using a transformation network, 2) feature extraction, 3) feature alignment with a second transformation network, 4) dropout, and 5) prediction. The first three stages of each branch resemble those of a single PointNet architecture. The last two stages are particular to {MSPNet}. \textbf{Point Transformation Network:} In contrast to previous approaches in deep medical shape analysis~\cite{Shakeri2016}, MSPNet does not require point correspondences across shapes, i.e., the $i$-th points of two shapes, $\mathbf{p}_i^1$ and $\mathbf{p}_i^2$, respectively, do not need to represent the same anatomical position. We obtain invariance to rigid transformations in MSPNet by (i) augmenting the training dataset with a random rigid transformation applied to each shape at training time and (ii) introducing a transformation network (T-Net).
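The rigid-transformation augmentation in step (i) can be sketched as follows. This is an illustrative numpy sketch; the helper name and the rotation-sampling scheme (QR decomposition of a Gaussian matrix) are assumptions, not taken from the paper:

```python
import numpy as np

def random_rigid_transform(points, rng=None):
    """Apply a random 3D rotation and translation to an (n, 3) point cloud.

    Hypothetical sketch of the training-time augmentation; the sampling
    scheme is an assumption of this example.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random rotation: orthonormalize a Gaussian matrix via QR.
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:          # enforce a proper rotation (det = +1)
        q[:, 0] *= -1.0
    t = rng.standard_normal(3)        # random translation
    return points @ q.T + t

cloud = np.random.rand(512, 3)        # one structure, n = 512 points
augmented = random_rigid_transform(cloud)
assert augmented.shape == (512, 3)    # same points, new pose
```

Since the transformation is rigid, all pairwise distances within the cloud are preserved, which is exactly the property the T-Net is then trained to undo.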
This network estimates a $3 \times 3$ transformation matrix, which is applied to the input as a first step. One can think of the T-Net as a transformation into a canonical space that roughly aligns point clouds before any processing is done. The T-Net is shown in figure \ref{fig:tnet} and is composed of a multilayer perceptron (MLP), a max pooling operator, and two fully connected layers. \textbf{Feature Extraction:} The transformed points are fed into an MLP with weights shared among points. This MLP layer can be thought of as the feature extraction stage of the network. At this stage, each point has access to the positions of all remaining points of the point cloud, and as the output of the network we obtain a $k$-dimensional feature vector for each point (in our case $k = 64$). Although each point is assigned a single feature vector, in practice each feature vector contains a global signature of the input point cloud. \textbf{Feature Transformation:} A second T-Net is applied to the computed features. This network has the same properties as the first transformation network, but its output corresponds to a $k \times k$ transformation matrix. This matrix has a much higher dimension than the previous spatial transformation, which makes the optimization more challenging. To facilitate the optimization of this larger feature transformation matrix $T$, we constrain it to be close to an orthogonal matrix via $C_\text{reg} = \| I - T T^\top \|^2_F$, similar to~\cite{Qi2017}. The regularization term ensures a more stable convergence of the network. After the features are transformed, they are fed to an MLP layer. \textbf{Dropout and prediction:} Up to this point, the architecture of each branch mirrors that of PointNet. However, the final dropout and prediction stage is particular to {MSPNet}.
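The orthogonality regularizer $C_\text{reg} = \| I - T T^\top \|^2_F$ can be evaluated as in the following minimal numpy sketch (not the authors' implementation; the function name is an assumption):

```python
import numpy as np

def orthogonality_loss(T):
    """Squared Frobenius norm || I - T T^T ||_F^2, penalizing deviation
    of the learned k x k feature transformation from orthogonality."""
    k = T.shape[0]
    residual = np.eye(k) - T @ T.T
    return float(np.sum(residual ** 2))

# An orthogonal matrix incurs no penalty; a generic matrix does.
assert orthogonality_loss(np.eye(64)) == 0.0
```

In training, this scalar would simply be added (with a weight) to the task loss, nudging $T$ towards a rotation-like map that preserves the geometry of the feature space.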
In PointNet, the last stage corresponds to a max-pooling layer performed across the $n$ points, so that the output is a vector whose size corresponds to the feature dimensionality. Instead of performing max-pooling, which leads to a strong shrinkage in feature space, we propose to keep the localized information per point. This increases the network capacity, which may lend itself to overfitting. Hence, we introduce a dropout layer (keep probability = 0.3) for regularization. The main advantage of this design is that more localized information is retained in the network, which we hypothesize may boost its predictive power. Finally, the individual features from each branch are concatenated and fed into a last MLP to perform the prediction. All MLP layers use batch normalization and ReLU activations. As in PointNet, the last MLP contains intermediate dropout layers with a keep probability of 0.7. To facilitate the exposition, we assumed that each structure is described by the same number $n$ of points, but in practice each structure can be represented by a point cloud of a different size. \begin{figure*}[t] \centering\includegraphics[width=0.8\textwidth]{figures/tnet.png} \caption{Transformation network (T-Net) for predicting a transformation matrix that maps a point cloud to a canonical space before processing. A similar network is used to transform the features; the only difference is that the output corresponds to a $64 \times 64$ matrix. \label{fig:tnet}} \end{figure*} \noindent \section{Results} We evaluate the performance of MSPNet in two supervised learning tasks, classification and regression. For the classification task, we aim at using shape descriptors to discriminate between healthy controls (HC) and patients diagnosed with mild cognitive impairment (MCI) or Alzheimer's disease (AD). For the regression task, we perform age estimation of a subject based on shape information.
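The fusion stage described in the architecture section — keep per-point features instead of max-pooling, apply dropout, then concatenate the branches before the final MLP — can be sketched in terms of tensor shapes. This is a hypothetical numpy sketch with no learned weights; the inverted-dropout rescaling used at training time is omitted for brevity:

```python
import numpy as np

def fuse_branches(branch_features, keep_prob=0.3, rng=None):
    """Shape-level sketch of the MSPNet fusion stage (an assumption of
    this example, not the authors' code).

    Each branch yields per-point features of shape (n, k); instead of
    max-pooling over the n points, the features are kept, dropout is
    applied, and all branches are concatenated for the final MLP.
    """
    rng = np.random.default_rng() if rng is None else rng
    fused = []
    for feats in branch_features:                     # feats: (n, k)
        mask = rng.random(feats.shape) < keep_prob    # dropout mask
        fused.append((feats * mask).ravel())          # retain per-point info
    return np.concatenate(fused)                      # input to final MLP

branches = [np.random.rand(512, 64) for _ in range(4)]   # four structures
fused = fuse_branches(branches)
assert fused.shape == (4 * 512 * 64,)
```

The contrast with PointNet is visible in the output size: max-pooling would collapse each branch to a length-$k$ vector, whereas this design keeps an $n \times k$ block per structure.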
In all our experiments, we compare to the standard PointNet architecture and to the spectral shape descriptors in BrainPrint~\cite{Wachinger2015}, which achieved high performance in a competition for Alzheimer's disease classification~\cite{wachinger2016domain}. For PointNet, the multi-structure input corresponds to a concatenation of the point clouds of all structures. We use image data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu)~\cite{Jack2008}. We work with a total of 7,974 images (2,423 HC, 978 AD, and 4,625 MCI). \subsection{AD and MCI Classification on Shape Data} For this experiment, we perform classification based on the shapes of the left and right hippocampi and the left and right lateral ventricles, due to their key importance in Alzheimer's disease~\cite{thompson2004mapping}. Each structure is represented by a point cloud of 512 points. For our experiments, the dataset is split into training, validation, and test sets (75\%, 15\%, 15\%). Splitting is done on a per-subject basis to guarantee that the same subject does not appear in different sets. Table \ref{table:classification} reports the results of the classification experiment, where we report average classification precision, recall, and F1-score. In both classification scenarios, PointNet shows a higher accuracy than BrainPrint, illustrating the potential of learning feature representations. Further, MSPNet showed the best performance, highlighting the benefit of individual feature learning in each branch of the network.
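The reported metrics can be reproduced from predicted labels with a small helper (a generic sketch, not code from the paper):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1-score for a binary task, as reported in
    the classification table (illustrative helper)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# One true positive, one false positive, one false negative:
p, r, f = precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])
assert (p, r, f) == (0.5, 0.5, 0.5)
```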
\begin{table*} \centering \caption{Average precision, recall and F1-score for the mild cognitive impairment and Alzheimer's classification experiments.} \begin{tabular}{ l c c c | c c c } \multicolumn{1}{c}{} &\multicolumn{3}{c}{HC-MCI} & \multicolumn{3}{c}{HC-AD} \\ & \ \ \ Precision& \ \ \ Recall & \ \ \ F1-score & \ \ \ Precision & \ \ \ Recall & \ \ \ F1-score \\ \rowcolor{lightgray} BrainPrint &0.57 & 0.59 & 0.57& 0.76 & 0.77 & 0.78 \\ PointNet & 0.60& 0.61 & 0.59& 0.77 & 0.77 &0.78 \\ \rowcolor{lightgray} {MSPNet} & 0.62 & 0.60 & \textbf{0.61} & 0.78 & 0.79 & \textbf{0.80} \\ \end{tabular} \label{table:classification} \end{table*} \subsection{Age Prediction on Shape Data} For the age estimation task, we perform two different evaluations. In the first, we perform age estimation only on the healthy controls of the ADNI database. For the second evaluation, we also include patients diagnosed with MCI and AD. The evaluations are again done on the same brain structures used for the classification task. The results of these two experiments are summarized in the mean absolute error plots of figure \ref{fig:regression}. For the experiment on HC, MSPNet significantly outperformed BrainPrint (p-value \num{2.69e-09}) and PointNet (p-value 0.03). In the experiment on all subjects, PointNet and MSPNet presented comparable performance, both outperforming BrainPrint (p-value 0.01). \begin{figure}[t!]
\centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/boxplot_healthy.pdf} \end{minipage} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/boxplot_dx.pdf} \end{minipage} \caption{Mean absolute error for the age prediction experiment on healthy subjects (left) and on all subjects, including MCI and AD (right).} \label{fig:regression} \end{figure} \subsection{Visualizing Point Importance} Of key importance for making predictions with shapes is the ability to visualize the part of the anatomy that drives the decision, particularly in the clinical context. In {MSPNet}, we introduce a simple yet effective method to visualize the importance that each point has for the prediction. Our visualization is inspired by the commonly used occlusion method \cite{Grun2016}, which consists of occluding parts of a test image and observing differences in the network response. We apply a similar concept to visualize the response of {MSPNet}. In our case, we assess the importance of each point in the classification task by occluding this point (setting its coordinates to 0) together with its nearest neighbors. The occluded point cloud is then passed through the network, and the response of the output ReLU is compared to that obtained when the full point cloud is evaluated. The difference between these responses can then be assigned as the importance of this particular point. In figure \ref{fig:visualization}, we can observe the result of using this visualization technique for one of the AD test subjects in the HC-AD classification experiment. If a point tends towards the red side of the scale, it indicates that occluding this particular point increases the network's activation of the AD class. This means that the region around this point is used by the network to predict AD. The exact opposite is true for points on the blue side of the scale.
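The occlusion procedure described above can be sketched as follows. In this numpy sketch, `forward` stands in for the trained network's class activation and is an assumption of the example:

```python
import numpy as np

def point_importance(cloud, forward, k=8):
    """Occlusion-based importance: zero each point together with its k
    nearest neighbors and record the change in the network response.

    `forward` maps an (n, 3) cloud to a scalar class activation; here it
    is a placeholder for the trained model.
    """
    base = forward(cloud)
    importance = np.zeros(len(cloud))
    for i, p in enumerate(cloud):
        dists = np.linalg.norm(cloud - p, axis=1)
        neighbors = np.argsort(dists)[:k + 1]   # the point and its k nearest
        occluded = cloud.copy()
        occluded[neighbors] = 0.0               # occlude by zeroing coordinates
        importance[i] = forward(occluded) - base
    return importance

# Toy check with forward = coordinate sum: occluding a point removes its sum.
toy = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
assert np.allclose(point_importance(toy, lambda c: float(c.sum()), k=0),
                   [-1.0, -2.0])
```

Mapping the resulting per-point scores onto a color scale yields visualizations like figure \ref{fig:visualization}.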
White points indicate that the network response was not largely affected by occluding this point. In the particular case of the example in figure \ref{fig:visualization}, the decision of the network to assign this subject an AD label is mainly driven by the left hippocampus. \begin{figure}[!t] \centering \includegraphics[width=0.85\textwidth]{figures/visualization.pdf} \caption{Visualization of point importance for the HC-AD classification task for an AD subject. The figure on the left illustrates the ventricles and hippocampi, while the figures on the right illustrate single structures of the left hemisphere.} \label{fig:visualization} \end{figure} \section{Conclusion} We introduced MSPNet, a deep neural network for shape analysis on multiple brain structures. To the best of our knowledge, this is the first time that a neural network for shape analysis on point clouds has been proposed for medical applications. We have shown that our method achieves high accuracy in both classification and regression tasks compared to shape descriptors based on spectral signatures. This performance is achieved without relying on point correspondences or meshes. MSPNet learns feature representations from multiple structures simultaneously. Finally, we illustrated point-wise importance for the prediction by adapting the occlusion method. \\ \textbf{Acknowledgments.} This work was supported in part by SAP SE and the Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B).
\section{Introduction} A prediction of quantum electrodynamics that remains experimentally unconfirmed to this day is that of nonperturbative electron-positron pair creation in the presence of a strong electric field~\cite{Sauter1931, Heisenberg1936, Hund1941}. Schwinger~\cite{Schwinger1951} gave the pair production rate per unit volume $\mathcal{P}_{e^+e^-}$ (or more properly the rate of vacuum decay~\cite{Cohen2008}) in a constant, homogeneous electric field $E$ in 3+1 dimensions as ($\hbar=c=1$) \begin{equation} \label{eq:constHom} \mathcal{P}_{e^+e^-} = \frac{(qE)^2}{4\pi^3}\sum_{n=1}^{\infty}\frac{1}{n^2} \exp\left(-n\pi \frac{m^2}{qE}\right), \end{equation} where $q$ is the elementary charge and $m$ the mass of the electrons and positrons. The generalization to inhomogeneous and time dependent background fields is far from straightforward, since this is a nonperturbative effect (as is visible from the inverse dependence on $q$ and $E$ in the exponent of~\eqref{eq:constHom}). Apart from the fundamental interest in this effect as a prototypical example of a nonperturbative phenomenon in quantum field theory, a better understanding is also desirable in view of the various experimental initiatives aimed at reaching ultra-high field strengths \footnote{For example the \emph{Extreme Light Infrastructure} project \protect\url{https://eli-laser.eu/}}. It is in general difficult to obtain the pair production probability for multi-dimensional fields. While there has recently been some progress~\cite{Kohlfurst2016, Aleksandrov2017b, Kohlfurst2018, Lv2018, Aleksandrov2018, Kohlfurst2018b} in the direct numerical computation of the exact probability for multi-dimensional fields, we will instead focus on an approach using the worldline path integral. This formulation is an alternative to path integrals over fields for expressing amplitudes in quantum field theories.
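As a quick numerical illustration of Eq.~\eqref{eq:constHom}, the instanton sum converges extremely fast for weak fields, where the $n=1$ term dominates. A sketch in units $\hbar=c=1$ (a simple truncation of the series, not part of the method developed below):

```python
import math

def schwinger_rate(E, m=1.0, q=1.0, nmax=50):
    """Pair production rate per unit volume for a constant, homogeneous
    electric field E, truncating the sum over instanton number n at nmax."""
    prefactor = (q * E) ** 2 / (4 * math.pi ** 3)
    return prefactor * sum(
        math.exp(-n * math.pi * m ** 2 / (q * E)) / n ** 2
        for n in range(1, nmax + 1))
```

For $E = 0.1\,m^2/q$ the $n=1$ term already reproduces the full sum to roughly fourteen significant digits, which is one reason semiclassical (single-instanton) approximations work so well in the weak-field regime.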
The first steps in this direction were pioneered by Fock, who expressed solutions of the Dirac equation via a four dimensional Schr\"odinger-type equation with space and time parameterized by an additional parameter~\cite{Fock1937}. After Nambu emphasized how beneficial this representation would be in the path integral approach~\cite{Nambu1950}, Feynman derived the Klein-Gordon propagator~\cite{Feynman1950} and Dirac propagator~\cite{Feynman1951} in this worldline formulation. In parallel, Schwinger's famous paper~\cite{Schwinger1951} used a similar representation. It is possible to approximate this worldline path integral for inhomogeneous fields numerically using discretization and Monte Carlo methods~\cite{Gies2001, Gies2003, Gies2005, Gies2011}. Although our method is based on discretization as well, we use an instanton approach to compute the integrals instead of statistical sampling. Both Feynman and Schwinger mentioned the four dimensional particle's equations of motion in the classical limit, but the first explicit mention of an instanton approximation to the (Euclidean) worldline path integral was given by Affleck, Alvarez and Manton~\cite{Affleck1982}. They derived the pair production rate for a constant homogeneous background field in a way that is very similar to the method used today. The approach was extended to inhomogeneous fields, including the sub-leading fluctuation prefactor~\cite{Dunne2005,Dunne2006a}. An exact analytic treatment is possible in some simple cases \cite{Dunne2005,Dunne2006a, Ilderton2015}, and analytic approximations allow us to study suitable limiting cases \cite{Schutzhold2008, Gies2016, Schneider2016}, but in general solutions of the instanton equations of motion have to be found numerically. This can be done using, e.g., shooting methods~\cite{Dunne2006}, but the highly nonlinear nature of the equations of motion makes this approach very unstable.
After briefly sketching the semiclassical approximation of the worldline path integral in section~\ref{sec:wli-method}, we introduce a different approach to numerically evaluate the path integral by discretization in sections~\ref{sec:discretization} and~\ref{sec:prefactor}, and a method to trace families of solutions over a range of field parameters in section~\ref{sec:continuation}. Finally, we apply the method to a number of example cases in section~\ref{sec:applications}, some with analytically known results (to assess the accuracy of the approximation) and some new ones that demonstrate the scope of the approach. \section{Worldline instanton method} \label{sec:wli-method} The central object of the method is the effective action $\Gamma_\mathrm{M}$, defined using the vacuum persistence amplitude \begin{equation} e^{i\Gamma_\mathrm{M}} := \braket{0_\mathrm{out}|0_\mathrm{in}}. \end{equation} We take the probability for pair creation to be the complement of the probability that the vacuum remains stable, so \begin{equation} P^{e^+e^-} = 1 - \left|\braket{0_\mathrm{out}|0_\mathrm{in}}\right|^2 = 1-\left|e^{i\Gamma_\mathrm{M}}\right|^2 \approx 2 \Im \Gamma_\mathrm{M}, \end{equation} with the subscript $_\mathrm{M}$ denoting the physical, Minkowskian quantity. We will work with the Euclidean effective action $\Gamma$, related to the Minkowski expression by $\Gamma_\mathrm{M} = i\Gamma$, so $\Im\Gamma_\mathrm{M} = \Re\Gamma$~\cite{Dunne2006a}.
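The last approximation simply expands $1-\mathrm{e}^{-2\Im\Gamma_\mathrm{M}}$ for a small imaginary part; a two-line numerical check (with an arbitrarily chosen value of $\Gamma_\mathrm{M}$, purely illustrative):

```python
import cmath

gamma_M = 1.7 + 1e-3j  # arbitrary effective action with a small imaginary part
P_exact = 1 - abs(cmath.exp(1j * gamma_M)) ** 2  # equals 1 - e^{-2 Im(Gamma_M)}
P_approx = 2 * gamma_M.imag                      # leading-order approximation
```

The relative deviation between the two is of the order of $\Im\Gamma_\mathrm{M}$ itself, so the approximation is excellent whenever pair production is rare.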
The Euclidean worldline path integral for spinor QED reads (see, e.g., \cite{Schubert2001, Schubert2012}) \begin{align} \label{eq:WL-path-integral} \Gamma = &\int_0^\infty\frac{\mathrm{d}{T}}{T} e^{-\frac{m^2}{2} T} \int_{x(T)=x(0)}\mathcal{D} x(\tau) \nonumber\\ &\quad\times \Phi[x] \exp\left(-\int_0^T\mathrm{d}{\tau} \left(\frac{\dot{x}^2}{2} + iq A(x)\cdot \dot{x} \right)\right), \end{align} where $A_\mu$ is the Euclidean potential representing the external electromagnetic field $F_{\mu\nu}$ and $x_\mu(\tau)$ are periodic worldlines in Euclidean space parametrized by the ``proper time'' $\tau$ with $\dot x^\mu=\mathrm{d}x^\mu/\mathrm{d}\tau$. There exist a couple of different representations of the spin factor, see \cite{Schubert2001, Gies2005a}. We will use \begin{equation} \Phi[x] = \frac{1}{2}\tr\mathcal{P} e^{\frac{i}{4}\int_0^T\mathrm{d}{\tau}\sigma_{\mu\nu}qF_{\mu\nu}(x)}, \end{equation} with $\mathcal{P}$ denoting path ordering, $\tr$ the trace over spinor indices and $\sigma_{\mu\nu}$ the commutator of the Dirac matrices \begin{equation} \sigma_{\mu\nu} = \frac{1}{2}\left[\gamma_\mu, \gamma_\nu\right]. \end{equation} For simple fields, the Euclidean potential $A_\mu$ and field tensor $F_{\mu\nu}$ are purely imaginary, so $iA_\mu$ and $iF_{\mu\nu}$ are real. We immediately introduce dimensionless quantities using some reference field strength $E$, which makes a numerical treatment possible and simplifies the following derivation, \begin{equation} \tilde{x}_\mu = x_\mu \frac{qE}{m}, \quad \tilde{F}_{\mu\nu} = F_{\mu\nu}\frac{1}{E}, \quad \tilde{A}_\mu = \frac{qE}{m}\frac{1}{E}A_\mu, \end{equation} and also rescale the integration variable \begin{equation} \tilde{T} = qE T, \quad u = \frac{1}{T}\tau = \frac{qE}{\tilde{T}} \tau, \quad \curvearrowright \pd{u} = \frac{\tilde{T}}{qE}\pd{\tau}. 
\end{equation} We can now exchange the order of integration, \begin{align} \label{eq:worldline:pathIntegralScaled} \Gamma= &\int_{x(1)=x(0)}\mathcal{D}{x}(u) \int_0^\infty\mathrm{d}{\tilde{T}}\,\frac{\Phi[\tilde{x}]}{\tilde{T}}\\ &\quad\times \exp\left(-\frac{m^2}{qE}\left( \frac{\tilde{T}}{2} + \int\limits_0^{1}\mathrm{d}{u} \left( \frac{\dot{\tilde{x}}^2}{2\tilde{T}} + i \dot{\tilde{x}}\cdot \tilde{A} \right) \right)\right),\nonumber \end{align} so we can perform the $\tilde{T}$-integration using Laplace's method. Due to our rescaling, $m^2/qE$ is singled out as the large parameter of the expansion, while all other quantities are of order unity. We obtain the saddle point $\tilde{T}_0 = \sqrt{\int_0^1\mathrm{d}{u}\ \dot{\tilde{x}}^2} =: a[\tilde{x}]$, so including the quadratic fluctuation around the saddle we arrive at the approximation \begin{equation} \label{eq:WL-T-int} \Gamma \approx \int_{x(T)=x(0)}\mathcal{D}{x}(\tau) \sqrt{\frac{2\pi}{a[\tilde{x}]}\frac{qE}{m^2}} \Phi[\tilde{x}] e^{-\frac{m^2}{qE}\mathcal{A}[\tilde{x}]}, \end{equation} with the non-local (due to $a[\tilde{x}]$) action \begin{equation} \mathcal{A}[\tilde{x}] = a[\tilde{x}] + \int\limits_0^1\mathrm{d}{u}\ \dot{\tilde{x}}\cdot i\tilde{A}(\tilde{x}), \end{equation} and the spin factor \begin{equation} \Phi[\tilde{x}] = \frac{1}{2}\tr\mathcal{P}\exp\left({\frac{a[\tilde{x}]}{4} \int_0^1\mathrm{d}{u}\ \sigma_{\mu\nu}i\tilde{F}_{\mu\nu}(\tilde{x})}\right). \end{equation} Note that in \eqref{eq:WL-T-int} we symbolically restored the original parametrization $\mathcal{D}{x}(\tau)$ in the path integral differential; this will be relevant for the discretization in the next section.
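For completeness, the saddle point quoted above follows from elementary differentiation of the $\tilde{T}$-dependent part of the exponent,
\begin{align*}
0 &= \pd{\tilde{T}}\left(\frac{\tilde{T}}{2} + \frac{1}{2\tilde{T}}\int_0^1\mathrm{d}{u}\ \dot{\tilde{x}}^2\right) = \frac{1}{2} - \frac{1}{2\tilde{T}^2}\int_0^1\mathrm{d}{u}\ \dot{\tilde{x}}^2\\
\curvearrowright \tilde{T}_0 &= \sqrt{\int_0^1\mathrm{d}{u}\ \dot{\tilde{x}}^2} = a[\tilde{x}], \qquad \frac{\tilde{T}_0}{2} + \frac{a[\tilde{x}]^2}{2\tilde{T}_0} = a[\tilde{x}],
\end{align*}
so the $\tilde{T}$-dependent terms of the exponent contribute exactly $a[\tilde{x}]$ to the action $\mathcal{A}[\tilde{x}]$.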
Applying Laplace's method to the path integral, we need to find a path $\tilde{x}_\mu(u)$ that satisfies the periodic boundary conditions and extremizes the exponent in~\eqref{eq:WL-T-int}, i.e.\ a solution to the Euler-Lagrange equations (the Lorentz force equation in this case) \begin{equation} \label{eq:instanton-equations-sqrt} \frac{\ddot{\tilde{x}}_\mu}{a[\tilde{x}]} = i\tilde{F}_{\mu\nu}\dot{\tilde{x}}_\nu. \end{equation} Contracting~\eqref{eq:instanton-equations-sqrt} with $\dot{\tilde{x}}_\mu$ we see that (due to the antisymmetry of $\tilde{F}_{\mu\nu}$) $\dot{\tilde{x}}^2 = \text{const.} = a^2$, simplifying the instanton equations of motion to \begin{equation} \label{eq:instanton-equations} \ddot{\tilde{x}}_\mu = ia\tilde{F}_{\mu\nu}\dot{\tilde{x}}_\nu. \end{equation} The prefactor of the Laplace approximation is given by the second variation of the action around the classical solution to~\eqref{eq:instanton-equations}, amounting to an operator determinant. The determinant has to be defined carefully, but we can completely sidestep this complication by instead performing Laplace's method \emph{after} discretization, when we can calculate the fluctuation prefactor by standard methods of linear algebra. \section{Discretization} \label{sec:discretization} We approximate~\eqref{eq:WL-T-int} by discretizing the trajectories $\tilde{x}_\mu(u)$ into $N$ $d$-dimensional points (in general $d = 3+1$, but for simple field configurations it is possible to only consider $d=1+1$ or $d=2+1$ dimensions, so we will keep the dimensionality variable): \begin{equation} \label{eq:trajectory-discretization} \tilde{x}_\mu^l \defeq \tilde{x}_\mu\left(\frac{l}{N}\right), \quad l = 0, 1, \dots, N-1.
\end{equation} The velocity is then approximated using (forward) finite differences \begin{equation} \label{eq:velocity-discretization} \dot{\tilde{x}}_\mu^l \approx \frac{\tilde{x}_\mu^{l+1} - \tilde{x}_\mu^l}{\varepsilon}, \quad\varepsilon = \frac{1}{N}, \end{equation} with the identification $\tilde{x}_\mu^N\equiv \tilde{x}_\mu^0$. Discretizing the path integral requires a normalization factor for each $\tilde{x}_\mu^l$-integral. We could find these factors by performing the integration in the free case; however, this is not necessary: In the derivation of \eqref{eq:WL-path-integral} the path integral arises as an ordinary nonrelativistic transition amplitude, so we can use Feynman's normalization $1/A$ for each integral (see, e.g., \cite{feynman1948}), with \begin{equation} A = \sqrt{\frac{2\pi T_0}{N}} = \sqrt{\frac{2\pi a[x]}{qE N}}. \end{equation} Using this normalization and replacing the $N\times d$ integrations by the dimensionless versions we arrive at the discretized worldline path integral \begin{align} \label{eq:WL-discretized-N} \Gamma \approx &\left(\prod_{k=0}^{N-1}\int\mathrm{d}{^d\tilde{x}^k}\right) \left(\frac{m^2}{qE} \frac{N}{2\pi a[\tilde{x}_\mu^l]}\right)^{Nd/2} \nonumber\\ & \qquad\times\sqrt{\frac{2\pi}{a[\tilde{x}_\mu^l]}\frac{qE}{m^2}} \Phi[\tilde{x}_\mu^l] e^{-\frac{m^2}{qE}\mathcal{A}[\tilde{x}_\mu^l]}. \end{align} As we have now expressed everything in terms of the dimensionless variables, we will from now on drop the tilde. We still need discretized expressions for $a$, $\mathcal{A}$ and $\Phi$, \begin{align} a[x_\mu^l] &\defeq \sqrt{N\sum_{k=0}^{N-1}(x_\nu^{k+1}-x_\nu^k)^2} \\ \mathcal{A}[x_\mu^l] &\defeq a[x_\mu^l] \nonumber\\ \label{eq:gaugeDiscretization} & + \sum_{k=0}^{N-1}\left(\frac{A_\nu(x_\mu^{k+1})+A_\nu(x_\mu^k)}{2}\right) (x_\nu^{k+1}-x_\nu^k), \end{align} with the square brackets denoting dependence on all points instead of a particular choice of indices.
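These discretized expressions are straightforward to implement. The following sketch (an illustration, not the reference implementation; the callable `A` is assumed to return the dimensionless $i\tilde{A}_\nu$ at a point, which is real for the simple fields considered here) evaluates the kinetic term $a$ and the action $\mathcal{A}$ for a closed trajectory:

```python
import numpy as np

def discretized_action(X, A):
    """Discretized worldline action for a closed trajectory X of shape
    (N, d); A(x) returns the (real) dimensionless gauge potential i*A
    at the point x."""
    N = len(X)
    dX = np.roll(X, -1, axis=0) - X            # forward differences, x^N = x^0
    a = np.sqrt(N * np.sum(dX ** 2))           # kinetic term a[x]
    A_vals = np.array([A(x) for x in X])
    A_avg = 0.5 * (np.roll(A_vals, -1, axis=0) + A_vals)  # endpoint average
    return a + np.sum(A_avg * dX)              # a[x] plus discretized gauge term
```

For a constant field in $d=2$ (a unit-circle instanton, traversed so that the gauge term is negative) this reproduces the expected action $\mathcal{A}=\pi$, i.e.\ the $n=1$ Schwinger exponent, to high accuracy already for moderate $N$.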
The form of discretization of the gauge term is not at all obvious: other choices, such as using just $A_\nu(x_\mu^k)$ or $A_\nu(x_\mu^{k+1})$, or evaluating the gauge field between points, $A_\nu((x_\mu^k+x_\mu^{k+1})/2)$, would yield the same classical continuum limit. That does not, however, mean that the resulting propagator is the same; see~\cite{Rabello1995, Stone2000, Gaveau2004, Schulman2005}. The midpoint prescription in~\eqref{eq:gaugeDiscretization} arises when the path integral representation is derived from the vacuum persistence amplitude using the time slicing procedure, see e.g.~\cite{DHoker1996}. Special care has to be taken to define a discretized expression for the spin factor that obeys path ordering. Instead of approximating the integral by summation and taking the exponential, we employ the product representation of the exponential function, which is automatically path ordered (cf. \cite{Gies2002}), \begin{equation} \Phi[x_\lambda^l] \defeq \frac{1}{2}\tr\left[\prod_{k=0}^{N-1} \left(1 + \frac{a[x_\lambda^l]}{4N}\sigma_{\mu\nu}iF_{\mu\nu}(x_\lambda^k)\right) \right]. \end{equation} The finite dimensional integral~\eqref{eq:WL-discretized-N} can now be approximated using Laplace's method as well, by finding an $N\times d$-dimensional vector $\bar{x}_\mu^l$ (a \emph{discrete worldline instanton}) that extremizes the action function $\mathcal{A}[x_\mu^l]$, that is \begin{equation} \left.\dd[\mathcal{A}]{x_\mu^l}\right|_{x_\mu^l=\bar{x}_\mu^l} = 0. \end{equation} To ease notation, we will condense the proper time index $l$ and the spacetime index $\mu$ into a single vector \begin{equation} \vec{X} = (x_1^0, x_2^0, \dots, x_d^0, x_1^1, \dots, x_d^{N-1}), \end{equation} so a discrete instanton $\vec{\bar{X}}$ has the property \begin{equation} \label{eq:discretized-eom} \vec{F}(\vec{\bar{X}})\defeq\left.\nabla \mathcal{A}(\vec{X})\right|_{\vec{\bar{X}}} = \vec{0}.
\end{equation} Equation~\eqref{eq:discretized-eom} describes a system of $N\times d$ nonlinear equations in $N\times d$ unknowns, which can be solved numerically using the Newton-Raphson method or a similar root finding scheme. In this discretized picture, the fluctuation prefactor is readily computed as well, via the determinant of the Hessian of $\mathcal{A}$ \begin{equation} \mat{H}(\vec{\bar{X}}) = (\nabla\otimes\nabla) \left.\mathcal{A}(\vec{X})\right|_{\vec{\bar{X}}}, \end{equation} giving the full semiclassical approximation of the discretized worldline path integral \begin{equation} \label{eq:discrete-unreg} \Gamma \approx \sqrt{\frac{2\pi}{a^\mathrm{cl}}\frac{qE}{m^2}} \left(\frac{N}{a^\mathrm{cl}}\right)^{Nd/2} \frac{\Phi[\vec{\bar{X}}] e^{-\frac{m^2}{qE}\mathcal{A}[\vec{\bar{X}}]}}{\sqrt{\det{\mat{H}[\vec{\bar{X}}]}}}, \end{equation} with $a^\mathrm{cl}\defeq a[\vec{\bar{X}}]$. If the function $\mathcal{A}[\vec{\bar{X}}]$ were entirely well-behaved, we would be done now: we would just need to find solutions of~\eqref{eq:discretized-eom} and plug them into~\eqref{eq:discrete-unreg}. The Gaussian integration resulting in the determinant prefactor, however, is only defined for a positive definite matrix in the exponent, which our Hessian $\mat{H}$ is not. \section{Regularization of the prefactor} \label{sec:prefactor} We have two problems with the Hessian matrix of the action $\mathcal{A}$. One is that of negative eigenvalues of $\mat{H}$. The corresponding direction in the Gaussian integration diverges, and the integral has to be defined by analytic continuation. A single negative mode (which is present for a static electric field) thus turns the determinant negative, and the whole expression~\eqref{eq:discrete-unreg} imaginary. This could seem troubling at first, as the pair production rate is given by the real part of the Euclidean effective action.
For a field that does not depend on time, however, we expect a volume factor from the $x_4$-integration, which has to be purely imaginary to give a real temporal volume factor $V_t = -i V_{x_4}$. A more serious technical issue is that of zero modes. One or more zero eigenvalues of $\mat{H}$ immediately spoil our result, so they have to be removed from the integration in some way. One zero mode that is always present in the worldline path integral is the one corresponding to reparametrization. Due to the periodic boundary conditions we can move every point of the curve along the trajectory without a change in action. We would thus like to separate the integration in this direction (resulting in a ``volume factor'' of the periodicity, which in our rescaled expression is just unity) from the other integrations. We will use the Faddeev-Popov method~\cite{Faddeev1967} to perform this separation. While it is commonly used to remove gauge-equivalent configurations from a gauge theory path integral, it can be applied to this simpler scenario as well. We insert a factor of unity into the path integral in terms of the identity \begin{equation} 1 = \frac{1}{w} \int\!\mathrm{d}{\lambda}\ \delta(\chi(\lambda))\left|\dd{\lambda}\chi(\lambda)\right|, \end{equation} where $\chi(\lambda)$ is some function chosen so that $\chi=0$ fixes the zero mode, $\lambda$ parametrizes the symmetry and $w$ is the number of times $\chi(\lambda)=0$ occurs over the integration interval~\cite{Gordon2015}. The idea is now that the $\lambda$-integration can be performed due to the symmetry of the path integral, resulting in the desired volume factor and a Dirac delta that fixes the corresponding mode.
This is especially elegant for a discrete numerical evaluation of the semiclassical approximation, as we can use an exponential representation of the delta function \begin{equation} \label{eq:delta-exp} \delta(\chi) = \lim_{\varepsilon\to 0} \sqrt{\frac{m^2/qE}{\varepsilon}} \exp\left(-\frac{\pi}{\varepsilon} \frac{m^2}{qE}\chi^2\right), \end{equation} where the Gaussian integration over the zero mode produces a factor of $\sqrt{\varepsilon}$ canceling the prefactor, enabling us to simply set $\varepsilon = 1$. We insert the factor of $m^2/qE$ for convenience, so the action $\mathcal{A}$ in~\eqref{eq:WL-discretized-N} just gets an additional term $\pi \chi^2$. To fix the reparametrization mode, we take (cf. \cite{Gordon2015, Gordon2016, ZinnJustin1996}) \begin{equation} \chi_u(\lambda_u) = \frac{2}{(a^\mathrm{cl})^2} \int_0^1\!\mathrm{d}{u} \ \dot{x}_\nu^{\mathrm{cl}}(u) x_\nu(u + \lambda_u), \end{equation} which is chosen so that \begin{equation} \frac{1}{w} \left|\chi_u'(0)\right| = \frac{1}{2}\frac{2}{(a^\mathrm{cl})^2} \int_0^1\!\mathrm{d}{u}\ \dot{x}_\nu^{\mathrm{cl}}(u)\dot{x}_\nu(u) \overset{x=x^{\mathrm{cl}}}{=} 1, \end{equation} at the saddle point. Due to the translation invariance we can set $\lambda_u=0$ in the integrand, so the $\lambda_u$ integration is equal to one. This means we only need to add the (discretized version of) $\chi_u(0)$ to the action as in~\eqref{eq:delta-exp}, the second derivatives to $\mat{H}$ and a factor of $\sqrt{m^2/qE}$ from~\eqref{eq:delta-exp} to the prefactor, and just proceed as if no zero mode were present. Other zero eigenvalues appear if the electric background field does not depend on all spacetime coordinates. They are of course easier to deal with: we could just omit the corresponding integrals and add a volume factor $\tilde{L}_\mu$ (the tilde is to stress that this is in terms of the dimensionless coordinates) per invariant direction $x_\mu$.
We can, however, treat these just as the reparametrization direction, which simplifies a numerical implementation that supports arbitrary fields. Choosing $\chi_\mu$ to be the average of $x_\mu$ along the trajectory, we obtain the volume $\tilde{L}_\mu$, and again a factor of $\sqrt{m^2/qE}$. To summarize, our final expression for the semiclassical approximation of the effective action is \begin{multline} \label{eq:discrete} \Gamma \approx \frac{V_{N_0}}{m^{-N_0}} \left(\frac{qE}{m^2}\right)^{\frac{N_0}{2}} \sqrt{\frac{2\pi}{a^\mathrm{cl}}} \left(\frac{N}{a^\mathrm{cl}}\right)^{\frac{Nd}{2}}\\ \times\frac{\Phi[\vec{\bar{X}}] e^{-\frac{m^2}{qE}\mathcal{A}[\vec{\bar{X}}]}}{\sqrt{\det{\mat{H}[\vec{\bar{X}}]}}}, \end{multline} where the appropriate terms of $\chi$ and its derivatives have been added to $\mathcal{A}$ and $\mat{H}$, $N_0$ is the number of invariant spacetime directions, and $V_{N_0}$ the corresponding volume factor (with units reinstated). Note that~\eqref{eq:discrete} unambiguously contains the full prefactor including spin effects for an arbitrary background field, without having to resort to limiting cases to determine any normalization constants. In addition, the reference field strength $E$ enters only in the combination $qE/m^2$ in front of the action and in the prefactor, which has two advantages. First, having found an instanton $\vec{\bar{X}}$, we can evaluate~\eqref{eq:discrete} for arbitrary values of $qE/m^2$ without any additional computational effort. Second, the accuracy of the discretization does not depend on the field strength, so there are no numerical instabilities for small $E$. Figure~\ref{fig:consthom} shows how the discretization error scales with the number of points $N$ for a constant, homogeneous electric field. For scalar QED (that is, without the spin factor $\Phi$) the error in the prefactor decreases as $N^{-1}$, as expected for a first-order discretization procedure.
As the first variation of the action vanishes for an instanton, the error of the exponent even decreases as $N^{-2}$. For spinor QED, on the other hand, the error in the prefactor decreases as $N^{-2}$ as well. The reason for this is not obvious, as the only difference is an additional, seemingly independent multiplicative spin factor. \begin{figure} \includegraphics{constAcc} \caption{Accuracy of the method for a constant, homogeneous field. The error of the prefactor decreases as $1/N$ (the first-order discretization error) for scalar QED (upper markers in the bottom plot), the error of the action as $1/N^2$ (because the action has an extremum at that point, upper plot). Interestingly, for spinor QED the error of the prefactor decreases as $1/N^2$ as well (lower markers).} \label{fig:consthom} \end{figure} \section{Numerical continuation} \label{sec:continuation} For most fields we are interested in, there are one or more parameters that we would like to vary, for example the timescale of a pulsed field or the inhomogeneity of a spatially varying field configuration. Let us denote such a parameter $\gamma$. In general we are interested in the full family of instantons $\vec{\bar{X}}(\gamma)$. Methods to numerically map such a solution space are known as \emph{numerical continuation} algorithms~\cite{Allgower2003, Rheinboldt2000}. If we know an instanton for a particular value $\gamma_i$ of the parameter (e.g. the limit $\omega\to 0$ for a time-dependent pulse), we can use it as the starting point for the numerical solution of~\eqref{eq:discretized-eom} for a parameter value $\gamma_{i+1}=\gamma_i+\Delta\gamma$, which is the method used in~\cite{Gould2017}. If we choose a sufficiently small $\Delta\gamma$, we can expect the root finding procedure to quickly converge.
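This seeding strategy can be sketched as follows (a toy illustration, not the production code; `F` plays the role of the discretized equations of motion $\vec{F}=\nabla\mathcal{A}$, `J` that of the Hessian $\mat{H}$, and the scalar test problem in the usage example is purely hypothetical):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Newton-Raphson root finding for F(x) = 0 with Jacobian J."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        step = np.linalg.solve(J(x), -F(x))
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

def natural_continuation(F, J, x_start, gammas):
    """Trace a solution family x(gamma) of F(x, gamma) = 0, using the
    solution found at each gamma as the starting guess for the next."""
    solutions, x = [], x_start
    for g in gammas:
        x = newton(lambda y: F(y, g), lambda y: J(y, g), x)
        solutions.append(x.copy())
    return solutions
```

For instance, continuing $F(x,\gamma)=x^2-\gamma$ from $\gamma=1$ to $\gamma=2$ in small steps tracks the branch $x=\sqrt{\gamma}$.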
This process is called \emph{natural parameter} continuation, because we vary a physical parameter of the problem at hand, instead of introducing an artificial variable to blend between an easy problem and our actual one (e.g. solving $0=\vec{G}(\vec{X}, \gamma) \defeq \gamma\vec{F}(\vec{X}) + (1-\gamma)\vec{F}_0(\vec{X})$). Natural parameter continuation works well if the solutions $\vec{\bar{X}}(\gamma)$ depend on the parameter in a smooth and uniform manner. If, however, the dependence on $\gamma$ varies strongly, it is difficult to choose appropriate step lengths $\Delta\gamma$. For some spatially inhomogeneous fields the instantons even grow infinitely large in some limit $\gamma\to\gamma^\text{crit}$, so we need to take ever smaller steps to reach this value. We could, in principle, adaptively adjust the step length when the root finding for the next parameter value converges poorly, but there is an easier method of choosing the increment $\Delta\gamma$: Natural parameter continuation can be viewed as a \emph{predictor-corrector} scheme, with the ``zeroth-order'' predictor step of just taking the last solution as the starting point for the next parameter, and performing the numerical root finding as a corrector step. We can find a better prediction by taking the $\gamma$ derivative of~\eqref{eq:discretized-eom}, yielding the \emph{Davidenko differential equation}~\cite{Davidenko1953}: \begin{equation} \label{eq:davidenko-ode} \vec{0} = \dd{\gamma}\vec{F}(\vec{\bar{X}}, \gamma) = \mat{H}(\vec{\bar{X}}, \gamma)\cdot\dd[\vec{\bar{X}}]{\gamma} + \pd{\gamma}\vec{F}(\vec{\bar{X}}, \gamma) \end{equation} and thus, provided that $\mat{H}$ is invertible (which it is by our regularization scheme), \begin{equation} \label{eq:davidenko-predict} \dd[\vec{\bar{X}}]{\gamma} = -\left(\mat{H}(\vec{\bar{X}}, \gamma)\right)^{-1} \cdot\left(\pd{\gamma}\vec{F}(\vec{\bar{X}}, \gamma)\right).
\end{equation} We can now use~\eqref{eq:davidenko-predict} in two ways: first, having found an instanton $\vec{\bar{X}}_i$ for a parameter value $\gamma_i$, it tells us in which way the instanton for a slightly different value of $\gamma$ differs from the current one, so we can use it as a much improved predictor in our predictor-corrector scheme, i.e. $\vec{\bar{X}}_{i+1}\approx \vec{\bar{X}}_i + \Delta\gamma\ \mathrm{d}\vec{\bar{X}}/\mathrm{d}\gamma$. In fact, we could directly integrate~\eqref{eq:davidenko-predict} to obtain all solutions. Unfortunately, evaluating the Hessian is costly and we can afford a much larger step size by performing the corrector steps. As a compromise it is possible to perform multiple steps according to~\eqref{eq:davidenko-predict} before starting the root finding routine. Second, we can use the derivative to scale the step $\Delta\gamma$ by instead specifying a maximum (or mean) difference between the points of $\vec{\bar{X}}_i$ and the proposed guess for $\vec{\bar{X}}_{i+1}$, or even a fixed arclength $\Delta s$ of the solution curve in $\mathbb{R}^{N\times d + 1}$, \begin{align} \Delta s&=\sqrt{(\Delta\gamma\ \mathrm{d}\vec{\bar{X}}/\mathrm{d}\gamma)^2+(\Delta \gamma)^2} \nonumber\\ \label{eq:step-length} \Leftrightarrow \Delta\gamma &= \frac{\Delta s}{\sqrt{(\mathrm{d}\vec{\bar{X}}/\mathrm{d}\gamma)^2 + 1}}. \end{align} A situation may be conceivable where it is not possible to parametrize the solution set as $\vec{\bar{X}}(\gamma)$ at all, because such a function would not be single-valued or have infinite slope somewhere.
In this case, we can parametrize both the solution and the parameter $\gamma$ by a new parameter \begin{align} \vec{\bar{Y}}(u) &= (\vec{\bar{X}}(u), \gamma(u))^\intercal \nonumber\\ \label{eq:pseudo-arclength} \Rightarrow 0 &= \dd{u}\vec{\tilde{F}}(\vec{\bar{Y}}) = \mat{\tilde{H}}\cdot \dd[\vec{\bar{Y}}]{u}, \end{align} where $\mat{\tilde{H}}$ is now an $(N d + 1) \times (N d)$ matrix, so \eqref{eq:pseudo-arclength} has to be augmented by an additional condition. This is chosen to be a constraint on the orientation and the ``velocity'' of the flow $1 = \norm{\mathrm{d}\vec{\bar{Y}}/\mathrm{d} u}$, so $\vec{\bar{Y}}(u)$ is parametrized by arclength, hence the name \emph{pseudo-arclength continuation} (\emph{pseudo} because this is only approximately true, as we are taking discrete steps). As long as $\gamma$ is a suitable parameter, this is equivalent to \eqref{eq:step-length}, which is what we will be using in the following. \section{Applications} \label{sec:applications} \begin{figure} \centering \includegraphics{onedimInsts} \caption{Planar instantons for multiple background fields and increasing values of $\gamma_{\omega/k}$. Top: temporal Sauter field $\vec{E} = E \cosh^{-2}(\omega t) \vec{e}_z$, middle: spatial Sauter field $\vec{E} = E \cosh^{-2}(k z) \vec{e}_z$, bottom: spacetime bump profile $\vec{E} = E \cosh^{-2}(\omega t)\cosh^{-2}(k z) \vec{e}_z$ with $k = 3\omega$. The purple trajectories are the limit $\gamma_{\omega/k}\to 0$, blue denotes a decrease in action, red an increase. As is well known, while temporal variation shrinks the instantons and decreases the worldline action (top), spatial inhomogeneity has the opposite effect (middle). As the bottom plot shows, field configurations are possible that both increase and decrease the action in different regimes.} \label{fig:onedim} \end{figure} \begin{figure} \includegraphics{sauterPP} \caption{Imaginary part of the Minkowskian effective action (i.e.\ the pair production rate) for $E=0.033 m^2/q$. 
Top: temporal Sauter pulse, bottom: spatial Sauter profile. Numerical results are given by markers and the analytic expressions~\eqref{eq:tSauterAna} and~\eqref{eq:xSauterAna} by lines. Note the spacing of the markers in the spatial case: the step length decreases to keep the overall arclength $\Delta s$ constant.} \label{fig:sautPP} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{stInst3D} \caption{The same worldline instantons as in the third plot of Figure~\ref{fig:onedim}, but stacked, with the $z$-coordinate given by the parameter $\gamma$. This presentation makes it easier to correlate the instantons' change in shape with the parameter.} \label{fig:st3d} \end{figure} \begin{figure} \includegraphics{bumpPP} \caption{Imaginary part of the Minkowskian effective action for the spacetime bump profile $\vec{E} = E \cosh^{-2}(\omega t)\cosh^{-2}(k z) \vec{e}_z$ with $k = 3\omega$ and $E=0.033m^2/q$. Here (and in the following cases) there are no analytical results to compare with, so we just add the connecting dashed lines as a guide to the eye.} \label{fig:stPP} \end{figure} Let us now apply the method outlined above to some background fields. The strategy in all cases is to start with a limit that is reasonably close to a static, homogeneous field and perform pseudo-arclength continuation to map the solution space for a chosen parameter range. In all figures depicting worldline instantons we color the homogeneous limit (i.e.\ a circular instanton) purple, and color all further instantons according to the change in action (blue for a decrease, red for an increase, so blue means more, and red less, pair production). In all figures that show the full effective action we choose $E=0.033 m^2/q$ for the reference field strength. This is simply the value we already used in earlier works, and it does not influence the quality of the discretization in any way.
We also use $N=500$ points in the discretization, which yields good accuracy while still taking less than thirty seconds to obtain the family of instantons in the cases below, with the exception of the $e$-dipole pulse. \subsection{Temporal Sauter pulse} First, let us consider simple, one-dimensional inhomogeneities where we can compare to analytic results. As an example, we choose the Euclidean four-potential $iA_3 = \tan(\gamma_\omega x_4)/\gamma_\omega$ describing the (physical) field $\vec{E} = E \cosh^{-2}(\omega t) \vec{e}_z$ with the \emph{Keldysh parameter} $\gamma_\omega=m\omega/qE$~\cite{Keldysh1965}. Since the field does not depend on any spatial coordinates, we have $N_0=3$ translational zero modes that need to be held fixed. The analytical worldline instanton result for this field is~\cite{Dunne2006a} \begin{multline} \label{eq:tSauterAna} \frac{\Im\Gamma_{\mathrm{M}}^{\mathrm{Sauter}^\omega}}{V_3} = \frac{(qE)^{3/2}}{2(2\pi)^3} \frac{(1+\gamma_\omega^2)^{5/4}}{\gamma_\omega}\\ \times\exp\left( -\frac{m^2\pi}{qE}\frac{2}{1+\sqrt{1+\gamma_\omega^2}} \right). \end{multline} The first plot in Figure~\ref{fig:onedim} shows a family of instantons in the range $0 < \gamma_\omega < 3.5$, and the top panel in Figure~\ref{fig:sautPP} compares the numerical result~\eqref{eq:discrete} in this parameter range to the analytical expression~\eqref{eq:tSauterAna}, showing near-perfect agreement. \subsection{Spatial Sauter pulse} We can also consider the spatially inhomogeneous profile $iA_4 = \tanh(\gamma_k x_3)/\gamma_k$ describing the (physical) field $\vec{E} = E \cosh^{-2}(k z) \vec{e}_z$ with (the spatial analog of) the Keldysh parameter $\gamma_k=mk/qE$.
The analytical result is related to~\eqref{eq:tSauterAna} by $\gamma_\omega \to i\gamma_k$~\cite{Dunne2006a}, \begin{multline} \label{eq:xSauterAna} \frac{\Im\Gamma_{\mathrm{M}}^{\mathrm{Sauter}^k}}{V_t V_2} = \frac{(qE)^{3/2}}{2(2\pi)^3} \frac{(1-\gamma_k^2)^{5/4}}{\gamma_k}\\ \times\exp\left( -\frac{m^2\pi}{qE}\frac{2}{1+\sqrt{1-\gamma_k^2}} \right), \end{multline} where the instanton is now confined in the $x_3$-direction and we obtain a ``temporal volume factor'' $V_t$ instead. The worldline instantons in this field for the range $0 < \gamma_k < 1$ are depicted in the middle of Figure~\ref{fig:onedim}, and the comparison of the numeric result and the analytic expression~\eqref{eq:xSauterAna} in the bottom panel of Figure~\ref{fig:sautPP}. \subsection{Space-time Sauter pulse} As a simple example of a background that depends on both space and time we choose the product of the preceding profiles with $\gamma := \gamma_\omega = \gamma_k/3$, i.e.\ $iA_3 = \cosh^{-2}(3\gamma x_3)\tan(\gamma x_4)/\gamma$. The resulting worldline instantons in the range $0 < \gamma < 2.5$ are shown in the bottom plot of Figure~\ref{fig:onedim} and the resulting pair production rate in Figure~\ref{fig:stPP}. With the chosen ratio $\gamma_k/\gamma_\omega=3$ the spatial inhomogeneity dominates for small $\gamma$, giving larger instantons, increased action and lower pair production, while above $\gamma \approx 1$ the time dependence takes over and produces smaller instantons, reduced action and increased pair production. \subsection{Multidimensional instantons} In~\cite{Dunne2006} multidimensional instantons were found for background fields that depend on multiple spatial coordinates using the shooting method. We can obtain instantons for these fields using discretization as well.
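The two analytic Sauter rates~\eqref{eq:tSauterAna} and~\eqref{eq:xSauterAna}, and the continuation $\gamma_\omega \to i\gamma_k$ relating them, can be evaluated directly. The following Python sketch (our own check, in units $m=q=1$) compares absolute values, since the overall factor of $i$ coming from $1/\gamma_\omega$ is absorbed in the reinterpreted volume factors $V_3 \to V_t V_2$:

```python
import math, cmath

def sauter_temporal(gamma, E=0.033):
    """Analytic Im(Gamma_M)/V_3 for E cosh^-2(omega t), Eq. (tSauterAna), m = q = 1.
    Accepts complex gamma so the continuation gamma -> i*gamma_k can be tested."""
    pref = E ** 1.5 / (2 * (2 * math.pi) ** 3)
    g2 = 1 + gamma ** 2
    return pref * g2 ** 1.25 / gamma * cmath.exp(-(math.pi / E) * 2 / (1 + cmath.sqrt(g2)))

def sauter_spatial(gamma_k, E=0.033):
    """Analytic Im(Gamma_M)/(V_t V_2) for E cosh^-2(k z), Eq. (xSauterAna), m = q = 1."""
    pref = E ** 1.5 / (2 * (2 * math.pi) ** 3)
    g2 = 1 - gamma_k ** 2
    return pref * g2 ** 1.25 / gamma_k * math.exp(-(math.pi / E) * 2 / (1 + math.sqrt(g2)))

# gamma_omega -> i gamma_k maps the temporal rate onto (-i times) the spatial one
for g in (0.2, 0.5, 0.8):
    t = sauter_temporal(1j * g)
    s = sauter_spatial(g)
    assert abs(abs(t) - s) / s < 1e-9
```

The same two functions reproduce the solid lines in Figure~\ref{fig:sautPP}.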
Consider the potential \begin{equation} \label{eq:dunne3d} iA_4=\frac{1}{\sqrt{2}k}\frac{\tanh(kx_1+kx_2)}{1+(kx_1)^2 + 10(kx_2)^2} \end{equation} from Figure~1 in~\cite{Dunne2006} (with the factor of $1/\sqrt{2}$ added so the peak strength is $1$). This yields three dimensional instantons (in $x_1$-, $x_2$- and $x_4$-direction). Figure~\ref{fig:dunne3d} depicts a family of instantons in a three dimensional plot, while Figure~\ref{fig:dunneProj} shows all two dimensional projections of the same trajectories. The resulting pair production rate is given in Figure~\ref{fig:dunnePP}. \begin{figure} \centering \includegraphics[width=\linewidth]{dunneInst3D} \caption{Worldline instantons for the four-potential~\eqref{eq:dunne3d} from~\cite{Dunne2006}. As before, stronger inhomogeneity stretches the instantons (in a more complicated way than for the one dimensional fields) and increases the action.} \label{fig:dunne3d} \end{figure} \begin{figure} \includegraphics{dunnePP} \caption{Imaginary part of the effective action for the multidimensional field from~\cite{Dunne2006} for $E=0.033m^2/q$. The dashed, connecting line is again a visual aid only.} \label{fig:dunnePP} \end{figure} \begin{figure} \centering \includegraphics{dunneInsts} \caption{The same worldline instantons as in Figure~\ref{fig:dunne3d}, projected onto the coordinate planes.} \label{fig:dunneProj} \end{figure} \subsection{Plane wave plus electric field} \begin{figure} \centering \includegraphics[width=\linewidth]{pwInst3D} \caption{Worldline instantons for the superposition of a static, homogeneous field and a weak, propagating plane wave. The ratio of the plane wave amplitude and the strong field is $10^{-2}$. 
The $x_1$-component of the trajectories is purely imaginary, hence the imaginary part of $x_1$ on the first axis.} \label{fig:pw3d} \end{figure} \begin{figure} \includegraphics{pwPP} \caption{Pair production rate for a constant field ($E_{\mathrm{Strong}}=0.033m^2/q$) with superimposed plane wave ($E_{\mathrm{Weak}}=10^{-2}E_{\mathrm{Strong}}$). The temporal volume factor $V_t$ arises from the number of instantons, one per oscillation of the wave at a fixed spatial point. The dashed line is added as a guide to the eye.} \label{fig:pwPP} \end{figure} \begin{figure} \centering \includegraphics{pwInsts} \caption{The same worldline instantons as in Figure~\ref{fig:pw3d}, projected onto the coordinate planes.} \label{fig:pwProj} \end{figure} In~\cite{Torgrimsson2017b} we already applied the discrete worldline instanton method to calculate the pair creation rate for the superposition of a weak propagating plane wave and a constant field, a variant of \emph{dynamically assisted} pair production~\cite{Schutzhold2008}. Different pulse shapes have been considered for the weak field before~\cite{Linder2015}; however, a plane wave is special in that it cannot produce pairs on its own, so the process is fully nonperturbative for all frequencies. In the case of parallel polarization (the plane wave and the constant field point in the same direction, but perpendicular to the propagation direction) this combination can be represented by the four-potential \begin{equation} \label{eq:pwPot} iA_4 = x_3, \quad iA_3 = i \frac{\varepsilon}{\gamma} \sin\left(\gamma (x_1 - i x_4)\right). \end{equation} The method can handle the perpendicularly polarized case just as well; however, that leads to four-dimensional instantons that are cumbersome to visualize. In contrast to the examples considered before, the field~\eqref{eq:pwPot} leads to complex instantons, in particular purely real $x_3(u)$ and $x_4(u)$ and purely imaginary $x_1(u)$.
A family of instantons is shown in Figures~\ref{fig:pw3d} and~\ref{fig:pwProj}, while the full pair production rate is given in Figure~\ref{fig:pwPP}. \begin{figure} \includegraphics{eDipoleAct} \caption{Top: Instanton action for the Gaussian \emph{e}-dipole \eqref{eq:edipoleE} (markers) compared to the action for a homogeneous field with Gaussian time dependence (line). Bottom: Ratio of the effective action and the locally constant field approximation for $E = 0.033m^2/q$, with a dashed connecting line as a visual aid.} \label{fig:eDipoleActPP} \end{figure} \subsection{E-dipole pulse} An especially interesting, highly non-trivial example is that of an \emph{e}-dipole pulse. It is a solution to Maxwell's equations in vacuum that represents a localized pulse of finite energy~\cite{Gonoskov2012}. It saturates the theoretical upper bound of peak field strength for a given laser power~\cite{Bassett1986} and is thus in a sense the optimal (and at the same time physically viable) configuration to study pair creation~\cite{Gonoskov2013}. Its name stems from the structural similarity to dipole radiation; however, it does not suffer from the strong singularities that a simple non-stationary dipole exhibits at the origin. The electromagnetic field of the \emph{e}-dipole pulse can be given in terms of a driving function $g$ using the vector $\vec{Z}$~\cite{Gonoskov2013} \begin{align} \label{eq:edipoleE} \begin{split} \vec{Z} &= \vec{e}_z \frac{d}{\abs{\vec{r}}}\Big(g(t + \abs{\vec{r}}) - g(t - \abs{\vec{r}})\Big),\\ \vec{E} &= - \nabla\times\left(\nabla\times\vec{Z}\right), \quad \vec{B} = - \nabla\times\dot{\vec{Z}}. \end{split} \end{align} We choose the function \begin{equation} g(t) = \frac{t}{4\omega^2}e^{-\omega^2 t^2}+\frac{\sqrt{\pi}}{8\omega^3}(1+2 \omega^2t^2)\erf(\omega t) \end{equation} and the virtual dipole moment $d = 3E/4$, so that at the origin \begin{equation} \vec{E} \approx E e^{-\omega^2t^2} \vec{e}_z.
\end{equation} We cannot immediately apply the instanton approach to this field since it is not given in terms of a four-potential. It is, however, possible to obtain an expression for the potential in coordinate gauge $A(x)\cdot x = 0$ from the field tensor~\cite{Shifman1980}, \begin{equation} A^\mathrm{M}_\mu(x) = -\int\limits_0^1\!\mathrm{d}{\alpha}\ F^\mathrm{M}_{\mu\nu}(\alpha x)\, \alpha x^\nu. \end{equation} For the field~\eqref{eq:edipoleE} this gives a lengthy expression, which can now be used to obtain worldline instantons. Figure~\ref{fig:eDipoleActPP} shows the result. The top plot compares the instanton action for the \emph{e}-dipole pulse to the action for a field with Gaussian time dependence only. Due to the additional spatial inhomogeneity in the \emph{e}-dipole field the action is slightly larger (and thus pair production slightly lower) than for the purely time dependent pulse. We can also compare the full imaginary part of the effective action with the locally constant field approximation (in the bottom plot of Figure~\ref{fig:eDipoleActPP}), which can be calculated using the saddle point method for $E$ below the critical field strength, giving \begin{equation} \Im \Gamma_{\mathrm{LCFA}}^{e\mathrm{-dipole}} \approx \frac{5\sqrt{5}}{2(2\pi)^3 \gamma^4} \exp\left(-\pi\frac{m^2}{qE}\right). \end{equation} As expected, the worldline instanton result tends to the locally constant field approximation for small values of $\gamma$, while it is exponentially larger for higher $\gamma$. For the parameters considered in~\cite{Gonoskov2013} the adiabaticity is very small, with $\gamma < 10^{-3}$, so the locally constant field approximation is accurate. For high-frequency pulses, however, the pair production rate is higher than the constant-field estimate. \begin{figure} \includegraphics{standingWave} \caption{Top: Instanton action for a transversally polarized standing wave (markers) compared to just the oscillating time dependent field (line).
Bottom: The same comparison for the imaginary part of the effective action with $E=0.033m^2/q$ including the prefactor. The transversal inhomogeneity does not change the exponent at all, but has a small effect on the prefactor.} \label{fig:standingWave} \end{figure} \subsection{Transversal standing wave} Let us now briefly consider a purely transversal inhomogeneity. Two counterpropagating laser beams create a standing wave pattern, i.e.\ $\vec{E} = E\cos(\omega t)\cos(k x)\vec{e}_z$ with $k=\omega$. In~\cite{Lv2018} the authors find that omitting the spatial inhomogeneity leads to qualitatively incorrect results in the high frequency regime. In \cite{Aleksandrov2017b} strong deviations in the momentum spectrum have been found in the homogeneous approximation as well. In the semiclassical regime, and for the total integrated rate, we can now check that approximating the standing wave by an oscillating homogeneous field works well. It is easy to see that the transversal inhomogeneity does not change the instanton and thus the action~\cite{Linder2015}, but the effect on the prefactor is not as obvious. Calculating the full effective action using the discrete instantons shows that while the prefactor does change, the difference from the homogeneous result is small and barely visible; see Figure~\ref{fig:standingWave}. Note, however, that the momentum spectrum could still display noticeable differences between the standing wave and the purely time dependent field. \subsection{Constant electric and magnetic fields} \begin{figure} \includegraphics{magPrefs} \caption{Prefactor divided by $(qE)^2$ for the sum of a constant electric field and a magnetic field with varying $B/E$. The markers show the discrete instanton result, the lines the analytic expressions~\eqref{eq:magAna}.
The results for scalar QED are blue, the results for spinor QED red.} \label{fig:magPrefs} \end{figure} In all examples up to now, the spin factor had only a small impact, apart from the trivial factor of two in the pair production probability. Let us finally consider a simple example where there is a large, qualitative difference between scalar and spinor QED, a parallel superposition of constant electric and magnetic fields of strength $E$ and $B$ respectively. The (first term of the) effective action for this combination is given by (see e.g. \cite{Kim2006} and references therein) \begin{align} \label{eq:magAna} \begin{split} \Gamma_\mathrm{Scalar} &\approx \frac{(qE)^2}{(2\pi)^3} \pi \frac{B}{E} \csch\left(\pi\frac{B}{E}\right)\exp\left(-\pi\frac{m^2}{qE}\right), \\ \Gamma_\mathrm{Spinor} &\approx \frac{2(qE)^2}{(2\pi)^3} \pi \frac{B}{E} \coth\left(\pi\frac{B}{E}\right)\exp\left(-\pi\frac{m^2}{qE}\right). \end{split} \end{align} Figure~\ref{fig:magPrefs} depicts the prefactors of these expressions, i.e.\ the $B/E$ dependence, together with the discrete instanton result, showing perfect agreement. \section{Summary and conclusion} We have introduced a new approach to numerically implement the worldline instanton method for electron-positron pair creation. We use a discretization scheme that turns the infinite-dimensional path integral into a finite dimensional integration that we can then perform using Laplace's method. Crucially, this also means that the fluctuation prefactor is simply given by a finite dimensional determinant that can be computed without the great care that is needed for a properly normalized treatment of the functional determinant. After having implemented the necessary root finding and continuation steps outlined in sections~\ref{sec:discretization}, \ref{sec:prefactor} and \ref{sec:continuation}, full pair production results for arbitrary background fields can be obtained in minutes.
Section~\ref{sec:applications} gives a (by no means exhaustive) sample of such applications. Although we used a frequency or inhomogeneity scale as the continuation parameter in all examples, we could also have chosen a different field parameter like the polarization direction or the ellipticity of the field, or even an entirely synthetic parameter to slowly transition to an especially complicated field configuration. In this paper we have only considered cases for which there is one dominant instanton, which is continuously connected to a circular one in the constant field limit. It would be interesting for future studies to consider cases where there is more than one instanton, and where some of them might have a nontrivial topology. \acknowledgments{We thank Holger Gies and Christian Kohlf\"urst for interesting discussions. G.~T. acknowledges support from the Alexander von Humboldt foundation.}
\section{Introduction}\label{sec1} The aim of this paper is to refine and extend to the more general case of ``large enough'' tree graphs the approach used by \cite{dala17} to prove an oracle inequality for the Fused Lasso estimator, also known as the total variation regularized estimator. As a side result, we will obtain some insight into the irrepresentable condition for such ``large enough'' tree graphs. The main reference of this article is \cite{dala17}, who consider the path graph. We refine and generalize their approach (i.e. their Theorem 3, Proposition 2 and Proposition 3) to the case of more general tree graphs. The main refinements we prove are an oracle theory for the total variation regularized estimators over trees when the first coefficient is not penalized, a proof of an (in principle tight) lower bound for the compatibility constant and, as a consequence of this bound, the substitution in the oracle bound of the minimum of the distances between jumps by their harmonic mean. We elaborate the theory from the particular case of the path graph to the more general case of tree graphs which can be cut into path graphs. The tree graph with one branch is in this context the simplest instance of such more complex tree graphs, which allows us to develop insights into more general cases while retaining a clear overview.
The paper is organized as follows: in Section \ref{sec1} we expose the framework together with a review of the literature on the topic; in Section \ref{sec2} we refine the proof of Theorem 3 of \cite{dala17} and adapt it to the case where one coefficient of the Lasso is left unpenalized: this proof will be a working tool for establishing oracle inequalities for total variation penalized estimators; in Section \ref{sec3} we expose how to easily compute objects related to projections, which are needed for finding explicit bounds on weighted compatibility constants and for determining when the irrepresentable condition is satisfied; in Section \ref{sec4} we present a tight lower bound for the (weighted) compatibility constant for the Fused Lasso and use it with the approach exposed in Section \ref{sec2} to prove an oracle inequality; in Section \ref{sec5} we generalize Section \ref{sec4} to the case of the branched path graph; Section \ref{sec6} presents further extensions to more general tree graphs; Section \ref{sec7} handles the asymptotic pattern recovery properties of the total variation regularized estimator on the (branched) path graph and exposes an extension to more general tree graphs; Section \ref{sec8} concludes the paper. \subsection{General framework} We study total variation regularized estimators on graphs, their oracle properties and their asymptotic pattern recovery properties. For a vector $v\in\mathbb{R}^n$ we write $\norm{v}_1=\sum_{i=1}^n \abs{v_i}$ and $\norm{v}^2_n=\frac{1}{n}\sum_{i=1}^n v_i^2$. Let $\mathcal{G}=(V,E)$ be a graph, where $V$ is the set of vertices and $E$ is the set of edges. Let $n:=\abs{V}$ be its number of vertices and $m:=\abs{E}$ its number of edges. Let the elements of $E$ be denoted by $e(i,j)$, where $i,j\in V$ are the vertices connected by an edge.
Let $D_{\mathcal{G}}\in\mathbb{R}^{m\times n}$ denote the \textbf{incidence matrix} of a graph $\mathcal{G}$, defined as \begin{equation*} (D_e)_k=\begin{cases} -1, & \text{if } k=\min(i,j)\\ +1, & \text{if } k=\max(i,j)\\ 0, & \text{else}, \end{cases} \end{equation*} where $D_e\in\mathbb{R}^n$ is the row of $D_{\mathcal{G}}$ corresponding to the edge $e(i,j)$. Let $f\in\mathbb{R}^n$ be a function defined at each vertex of the graph. The \textbf{total variation} of $f$ over the graph $\mathcal{G}$ is defined as \begin{equation*} \text{TV}_{\mathcal{G}}(f):= \norm{D_{\mathcal{G}}f}_1 =\sum_{e(i,j)\in E} \abs{f_j-f_i} . \end{equation*} Assume we observe the values of a signal $f^0\in\mathbb{R}^n$ contaminated with some Gaussian noise $\epsilon \sim \mathcal{N}_n(0,\sigma^2 \text{I}_n)$, i.e. $Y=f^0+\epsilon$. The \textbf{total variation regularized estimator} $\widehat{f}$ of $f^0$ over the graph $\mathcal{G}$ is defined as \begin{equation*} \widehat{f}:=\arg\min_{f\in\mathbb{R}^n}\left\{\norm{Y-f}^2_n+2\lambda\norm{D_{\mathcal{G}}f}_1 \right\}, \end{equation*} where $\lambda>0$ is a tuning parameter. This is a special case of the generalized Lasso with design matrix $\text{I}_n$ and penalty matrix $D_\mathcal{G}$. Hereafter we suppress the subscript $\mathcal{G}$ in the notation of the incidence matrix of the graph $\mathcal{G}$. In this article, we restrict our attention to tree graphs, i.e. connected graphs with $m=n-1$. For a tree graph we have that $D\in\mathbb{R}^{(n-1)\times n}$ and $\text{rank}(D)=n-1$.
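These definitions are mechanical to implement. The following Python sketch (an illustration with an arbitrary piecewise constant signal, not part of the paper) builds $D$ for the path graph on $n=6$ vertices, evaluates $\text{TV}_{\mathcal{G}}(f)$ and confirms that the incidence matrix of a tree has rank $n-1$:

```python
import numpy as np

def incidence_matrix(edges, n):
    """Incidence matrix D: one row per edge e(i, j), with -1 at min(i, j)
    and +1 at max(i, j); vertices are numbered 1..n."""
    D = np.zeros((len(edges), n))
    for r, (i, j) in enumerate(edges):
        D[r, min(i, j) - 1] = -1.0
        D[r, max(i, j) - 1] = 1.0
    return D

def total_variation(D, f):
    """TV_G(f) = ||D f||_1 = sum of |f_j - f_i| over the edges."""
    return float(np.abs(D @ f).sum())

# path graph on n = 6 vertices, edges (1,2), ..., (5,6)
n = 6
D = incidence_matrix([(i, i + 1) for i in range(1, n)], n)

f = np.array([0.0, 0.0, 1.0, 1.0, 3.0, 3.0])   # piecewise constant, two jumps
assert total_variation(D, f) == 3.0             # |1 - 0| + |3 - 1|
assert np.linalg.matrix_rank(D) == n - 1        # rank n - 1 on a tree
```

The same helper covers branched trees by passing the corresponding edge list.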
In order to manipulate the above problem to obtain an (almost) ordinary Lasso problem, we define $\widetilde{D}$, the \textbf{incidence matrix rooted at vertex $i$}, as \begin{equation*} \widetilde{D}:=\begin{bmatrix} A\\ D \end{bmatrix}\in\mathbb{R}^{n\times n}, \end{equation*} where \begin{equation*} A:=(0,\ldots, 0,\underbrace{1}_{i},0,\ldots,0)\in\mathbb{R}^n. \end{equation*} In the following, we are going to root the incidence matrix at the vertex $i=1$, obtaining in this way a lower triangular matrix with ones on the diagonal, and minus ones as nonzero off-diagonal elements. The square matrix $\widetilde{D}$ is invertible and we denote its inverse by $X:= {\widetilde{D}}^{-1}$. We now perform a change of variables. Let $\beta:=\widetilde{D}f$, then $f=X\beta$. The above problem can be rewritten as \begin{equation*} \widehat{\beta}=\arg\min_{\beta\in\mathbb{R}^n}\left\{\norm{Y-X\beta}_n^2 +2\lambda\sum_{i=2}^n \abs{\beta_i} \right\}, \end{equation*} i.e. an ordinary Lasso problem with $p=n$, where the first coefficient $\beta_1$ is not penalized. Note that, in order to perform this transformation, it is necessary that we restrict ourselves to tree graphs, since we want $\widetilde{D}$ to be invertible. Let $X=(X_1,X_{-1})$, where $X_1\in\mathbb{R}^n$ denotes the first column of $X$ and $X_{-1}\in\mathbb{R}^{n\times (n-1)}$ the remaining $n-1$ columns of $X$. Let $\beta_{-1}\in \mathbb{R}^{n-1}$ be the vector $\beta$ with the first entry removed.
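For the path graph this change of variables is easy to verify numerically. In the Python sketch below (illustrative values only), $\widetilde{D}$ is rooted at vertex 1, its inverse $X$ is the lower triangular matrix of ones, and $\beta=\widetilde{D}f$ collects the first value of the signal followed by its successive jumps:

```python
import numpy as np

n = 6
# incidence matrix of the path graph on n vertices
D = np.zeros((n - 1, n))
for r in range(n - 1):
    D[r, r], D[r, r + 1] = -1.0, 1.0

# root at vertex 1: stack the row A = (1, 0, ..., 0) on top of D
A = np.zeros((1, n)); A[0, 0] = 1.0
D_tilde = np.vstack([A, D])

X = np.linalg.inv(D_tilde)
assert np.allclose(X, np.tril(np.ones((n, n))))   # X = cumulative-sum matrix

# beta = D_tilde f holds (f_1, f_2 - f_1, ..., f_n - f_{n-1}); f = X beta inverts it
f = np.array([2.0, 2.0, 5.0, 5.0, 5.0, 1.0])
beta = D_tilde @ f
assert np.allclose(beta, [2.0, 0.0, 3.0, 0.0, 0.0, -4.0])
assert np.allclose(X @ beta, f)
```

Sparsity of $\beta_{-1}$ thus corresponds exactly to $f$ being piecewise constant along the path.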
After some straightforward calculations, and denoting by $\widetilde{Y}$ and $\widetilde{X}_{-1}$ the column centered versions of $Y$ and $X_{-1}$, it is possible to write \begin{equation*} \widehat{\beta}_{-1}=\arg\min_{\beta_{-1}\in\mathbb{R}^{n-1}}\left\{ \norm{\widetilde{Y}-\widetilde{X}_{-1}\beta_{-1}}^2_n+2\lambda\norm{\beta_{-1}}_1 \right\} \end{equation*} and \begin{equation*} \widehat{\beta}_1=\frac{1}{n} \sum_{i=1}^n \left(Y_i-(X_{-1})_i\widehat{\beta}_{-1}\right), \end{equation*} and both $\widehat{\beta}_{-1}$ and $\widehat{\beta}_1$ depend on $\lambda$. Note that prediction properties of $\widehat{\beta}$, i.e. the properties of $X\widehat{\beta}$, will translate into properties of the estimator $\widehat{f}$, often also called the Edge Lasso estimator. \begin{remark} In the construction of an invertible matrix starting from $D$, it would be possible to choose $A=(1, \ldots, 1)$ as well. Indeed, when we perform the change of variables from $f$ to $\beta$, $\hat{\beta}_{-1}$ estimates the jumps and thus gives information about the relative location of the signal. However, to be able to estimate the absolute location of the signal we either need an estimate of the absolute location of the signal at one point (choice $A:=(0,\ldots, 0,1,0,\ldots,0)$, $\hat{\beta}_{i}=\hat{f}_{i}$, in particular we consider the case $i=1$), or of the ``mean'' location of the signal (choice $A=(1, \ldots, 1)$, $\hat{\beta}_1=\sum_{i=1}^n \hat{f}_i$). \end{remark} \subsection{The path graph and the path graph with one branch} In this article we are interested, besides the more general case of ``large enough'' tree graphs, in the particular cases of $D$ being the incidence matrix of either the path graph or the path graph with one branch. The choice of $A$ makes it easy to calculate the matrix $X$ and gives a nice interpretation of it. Let $P_1$ be the \textbf{path matrix} of the graph $\mathcal{G}$ with reference root the vertex 1.
The matrix $P_1$ is constructed as follows: \begin{equation*} (P_1)_{ij}:= \begin{cases} 1, & \text{ if the vertex $j$ is on the path from vertex 1 to vertex $i$,}\\ 0, &\text{ else.} \end{cases} \end{equation*} \begin{theorem}[Inversion of the rooted incidence matrix]\label{invroot} For a tree graph, the rooted incidence matrix $\widetilde{D}$ is invertible and \begin{equation*} X=\widetilde{D}^{-1}=P_1. \end{equation*} \end{theorem} \begin{proof}[Proof of Theorem \ref{invroot}] For a formal proof we refer to \cite{jaco08} and to \cite{bapa14}. The intuition behind this theorem is to proceed as follows. We have to check that $\text{rank}(\widetilde{D})=n$. One can perform Gaussian elimination on the rooted incidence matrix. Keep the first row as it is and, for row $i$, add up the rows indexed by the vertices belonging to the path going from vertex 1 to vertex $i$. In this way one can obtain an identity matrix and thus $\text{rank}(\widetilde{D})=n$. Similarly one can find the inverse, which obviously corresponds to $P_1$. \end{proof} \begin{example}[Incidence matrix and path matrix with reference vertex 1 for the path graph] Let $\mathcal{G}$ be the path graph with $n=6$ vertices. The incidence matrix is \begin{equation*} D=\begin{pmatrix*}[r] -1&1& & & & \\ &-1&1& & & \\ & &-1&1& & \\ & & &-1&1& \\ & & & &-1&1 \\ \end{pmatrix*} \in\mathbb{R}^{5\times 6} \end{equation*} and the path matrix with reference vertex 1 is \begin{equation*} X=\begin{pmatrix*}[r] 1& & & & & \\ 1&1& & & & \\ 1&1&1& & & \\ 1&1&1&1& & \\ 1&1&1&1&1& \\ 1&1&1&1&1&1\\ \end{pmatrix*}\in\mathbb{R}^{6\times 6}. \end{equation*} \end{example} \begin{example}[Incidence matrix and path matrix with reference vertex 1 for the path graph with a branch] Let $\mathcal{G}$ be the path graph with one branch. The graph has in total $n=n_1+n_2$ vertices. The main branch consists of $n_1$ vertices; the side branch consists of $n_2$ vertices and is attached to vertex number $b<n_1$ of the main branch.
Take $n_1=4$, $n_2=2$ and $b=2$. The incidence matrix is \begin{equation*} D=\begin{pmatrix*}[r] -1&1& & & & \\ &-1&1& & & \\ & &-1&1& & \\ &-1 & & &1& \\ & & & &-1&1 \\ \end{pmatrix*}\in\mathbb{R}^{5\times 6} \end{equation*} and the path matrix with reference vertex 1 is \begin{equation*} X=\begin{pmatrix*}[r] 1& & & & & \\ 1&1& & & & \\ 1&1&1& & & \\ 1&1&1&1& & \\ 1&1& & & 1& \\ 1&1& & &1&1\\ \end{pmatrix*}\in\mathbb{R}^{6\times 6}. \end{equation*} \end{example} \subsubsection{Notation}\label{notation} Here we expose the notational conventions used for handling the (branched) path graph and later branching points with arbitrarily many ($K$) branches. \begin{itemize} \item \textbf{(Branched) path graph} We enumerate the vertices of the (branched) path graph starting from the root $1$, continuing up to the end of the main branch $n_1$ and then continuing from the vertex $n_1+1$ of the side branch attached to vertex $b$ up to the last vertex of the side branch $n=n_1+n_2$. We are going to use two different notations: the first will be used for finding explicit expressions for quantities related to the projection of a column of $X$ onto some subsets of the columns of $X$; the second will be used when calculating the compatibility constant and is based on the decomposition of the (branched) path graph into smaller path graphs. In both notations we let the set $S\subseteq \{2, \ldots , n\} $ be a candidate set of active edges. \bigskip \textbf{First notation} We partition $S$ into three mutually disjoint sets $S_1,S_2,S_3$, where $S_1\subseteq\{2,\ldots, b\}$, $S_2\subseteq\{b+1,\ldots, n_1\}$, $S_3\subseteq\{n_1+1,\ldots, n\}$. We write the sets $S_1,S_2,S_3$ as: \begin{equation*} S_1=: \left\{i_1,\ldots, i_{s_1} \right\}, S_2=: \left\{j_1,\ldots, j_{s_2} \right\}, S_3=: \left\{k_1,\ldots, k_{s_3} \right\}. \end{equation*} Note that $s_i:=\abs{S_i}, i\in \{1,2,3 \}$ and $s:=\abs{S}=s_1+s_2+s_3$.
Let us write $S=\{\xi_1,\ldots, \xi_{s_1+s_2+s_3} \}$. Define \begin{eqnarray*} B & = & \{ \xi_1 -1, \xi_2 -\xi_1, \xi_3 -\xi_2, \ldots, b-\xi_{s_1}+1,\\ & &\xi_{s_1+1}-b-1, \ldots, \xi_{s_1+s_2}-\xi_{s_1+s_2-1}, n_1-\xi_{s_1+s_2}+1, \\ & &\xi_{s_1+s_2+1}-n_1-1, \xi_{s_1+s_2+2} - \xi_{s_1+s_2+1}, \ldots, n-\xi_{s_1+s_2+s_3} +1 \}\\ &=:& \{b_{1},b_2, b_3, \ldots, b_{s_1+1},b_{s_1+2} ,\ldots, b_{s_1+s_2+1},b_{s_1+s_2+2},\\ & & b_{s_1+s_2+3},b_{s_1+s_2+4}, \ldots, b_{s_1+s_2+s_3+3} \}. \end{eqnarray*} Define $b^*:=b_{s_1+1}+b_{s_1+2}+b_{s_1+s_2+3}$. In the case where we consider the path graph we simply take $S=S_1$ (i.e.\ $n=n_1$). \bigskip \textbf{Second notation} (for bounding the compatibility constant). With this second notation we decompose the branched path graph into three smaller path graphs. However, the end of the first one does not necessarily coincide with the point $b$, and the beginning of the other two does not necessarily coincide with the points $b+1$ and $n_1+1$, respectively. Let us write \begin{equation*} S_1=\{d^1_1+1, d^1_1+d^1_2+1, \ldots, d^1_1+d^1_2+\ldots+ d^1_{s_1}+1 \}=S\cap\{1, \ldots, b\}, \end{equation*} and \begin{equation*} S_i=\{p_i+1, p_i+d_2^i+1,p_i+d^i_2+d^i_3+1, \ldots, p_i+d^i_2+d^i_3+\ldots+ d^i_{s_i}+1 \}, i=2,3, \end{equation*} where, using the first notation introduced, $p_2=j_1-1$, $p_3=k_1-1$, $d^2_{s_2+1}=n_1-\xi_{s_1+s_2}+1$ and $d^3_{s_3+1}=n-\xi_{s_1+s_2+s_3}+1$. Note that $b^*=d^1_{s_1+1}+d^2_1+d_1^3=b_{s_1+1}+ b_{s_1+2}+b_{s_1+s_2+3}$. We require, $\forall i$, $d^i_1\geq 2$, $d^i_j\geq 4$, $\forall j\in \{2, \ldots, s_i \}$, and $d^i_{s_i+1}\geq 2$. Let $u^i_j\in\mathbb{N}$ satisfy $2 \le u^i_j\le d^i_j-2$ for $j\in \{2, \ldots, s_i \}$ and $i\in \{1,2,3\}$. The elements $d_{s_1+1}^1,d^2_1,d^3_1$ are only constrained by the fact that they have to be greater than or equal to two; otherwise, for a given $S$, their choice is left free. Moreover note that $\sum_{i=1}^3 \sum_{j=1}^{s_i+1}d_j^i=n$.
We thus end up with three sequences of integers $\{d_j^i\}_{j=1}^{s_i+1}, i\in\{1,2,3\}$. \begin{remark} We can relate part of these sequences to the set $B$ defined in the first notation. Indeed, \begin{itemize} \item $ \{d^1_{i} \}_{i=1}^{s_1}=\{b_{i} \}_{i=1}^{s_1}$; \item $ \{d^2_{i} \}_{i=2}^{s_2+1}=\{b_{i} \}_{i=s_1+3}^{s_1+s_2+2}$; \item $ \{d^3_{i} \}_{i=2}^{s_3+1}=\{b_{i} \}_{i=s_1+s_2+4}^{s_1+s_2+s_3+3}$. \end{itemize} We see that the only place where there might be some discrepancy between the first and the second notation is at $d^1_{s_1+1},d^2_1,d^3_1$, which might be different from $b_{s_1+1},b_{s_1+2},b_{s_1+s_2+3}$. \end{remark} In the case of the path graph we just consider a single one of these path graphs and thus $S=S_1$ and $s=s_1$ and we omit the index $i$. \item \textbf{Branching point with arbitrarily many branches} In Sections \ref{sec3} and \ref{sec6} we are going to consider branching points participating in $K+1$ edges. In these cases we are going to denote by $b_1$ the number of vertices between the ramification point and the last vertex in $S$ in the main branch, with these two extreme vertices included, and by $b_2, \ldots, b_{K+1}$ the number of vertices after the ramification point and before the first vertex in $S$ (or the end of the respective branch). In these more complex cases, for the sake of simplicity, we only consider situations where the first and second notation coincide. We are often going to restrict our attention to \textbf{``large enough''} general tree graphs. These can be seen as tree graphs composed of $g$ path graphs glued together at their \textbf{extremities} with $d^i_j\ge 4, \forall j \in \{1, \ldots, s_i+1 \}, \forall i \in \{1, \ldots, g \}$. The reason for these requirements will become clear in Sections \ref{sec5} and \ref{sec6}.
\end{itemize} \subsection{Review of the literature} While to our knowledge there is no attempt in the literature to analyze the specific properties of the total variation regularized least squares estimator over general branched tree graphs, there is a lot of work in the field of the so-called Fused Lasso estimator. An early analysis of the Fused Lasso estimator can be found in \cite{mamm97-2}. Some other early work is exposed in \cite{tibs05,frie07,tibs11}, where computational aspects are also considered. In the literature we can find two main currents of research, one focusing on the pattern recovery properties (briefly reviewed in Section \ref{sec7}) and the other on the analysis of the mean squared error to prove oracle inequalities. \subsubsection{Minimax rates} In this subsection we expose some results on minimax rates, making use of the notation found in \cite{sadh16}. In particular, let \begin{equation*} \mathcal{T}(C)=\left\{f\in\mathbb{R}^n: \norm{Df}_1\leq C \right\} \end{equation*} be the class of (discrete) functions of bounded total variation on the path graph, where $D$ is its incidence matrix. Assume the linear model with $f^0\in\mathcal{T}(C)$ for some $C>0$ and with iid Gaussian noise with variance $\sigma^2$, $\sigma\in(0,\infty)$. It has been shown in \cite{dono98} that the minimax risk over the class of functions with bounded total variation $\mathcal{R}(\mathcal{T}(C))$ satisfies \begin{equation*} \mathcal{R}(\mathcal{T}(C)):=\inf_{\widehat{f}} \sup_{f^0\in\mathcal{T}(C)}\mathbb{E}[\norm{\widehat{f}-f^0}_n^2]\asymp (C/n)^{2/3}. \end{equation*} \cite{mamm97-2} prove that, if $\lambda\asymp n^{-2/3} C^{1/3}$, then the Fused Lasso estimator achieves the minimax rate within the class $\mathcal{T}(C)$.
\cite{sadh16} also point out that estimators which are linear in the observations cannot achieve the minimax rate within the class of functions of bounded total variation, since they are not able to adapt to the spatially inhomogeneous smoothness of some elements of this class. \subsubsection{Oracle inequalities} We present some recent results that appeared in the papers by \cite{hutt16,dala17,lin17b,gunt17}. In particular we give the rates of the remainder term in the (sharp) oracle inequalities exposed in these papers, which hold with high probability. \begin{itemize} \item \textbf{\cite{hutt16}} obtain a quite general result, in the sense that it applies to any graph $\mathcal{G}$ with incidence matrix $D\in\mathbb{R}^{m\times n}$. In particular for the choice of the tuning parameter $\lambda=\sigma\rho\sqrt{2\log\left({em}/{\delta}\right)}/n, \delta\in(0,\frac{1}{2})$, they obtain the rate $$\mathcal{O}\left(\frac{\abs{S}\rho^2}{n\kappa^2_D(S)}\log\left({em}/{\delta} \right)\right),$$ where, for a set $S\subseteq [m]$, \begin{equation*} \kappa_D(S):=\inf_{f\in\mathbb{R}^n}\frac{\sqrt{\abs{S}}\norm{f}_2}{\norm{(Df)_S}_1}, S\not= \emptyset \end{equation*} is called the \textbf{compatibility factor} and $\rho$ is the largest $\ell^2$-norm of a column of the Moore-Penrose pseudoinverse $D^+=(d^+_1,\ldots,d^+_m)\in\mathbb{R}^{n\times m}$ of the incidence matrix $D$, i.e. $\rho=\max_{j\in[m]}\norm{d^+_j}_2$, and is called the \textbf{inverse scaling factor}. For the path graph, we have $m=n-1$, $\rho\asymp \sqrt{n}$ and, according to Lemma 3 in \cite{hutt16}, $\kappa_D(S)=\Omega\left(1 \right)$, if $\abs{S}\geq 2$.
\item \textbf{\cite{dala17}} obtain that, $\forall S\not=\emptyset$, for $\delta\in(0,\frac{1}{2})$ and the choice of the tuning parameter $\lambda:=2\sigma \sqrt{{2}\log\left({n}/{\delta} \right)/n}$, the remainder term has rate $$\mathcal{O}\left(\frac{s\log n}{W_{\min,S}}+\frac{s \log^2 n}{n} \right),$$ where $S=\left\{i_1,\ldots, i_s \right\}$, $s=\abs{S}$, $W_{\min,S}:=\min_{2\leq j\leq s}\abs{i_j-i_{j-1}}$. \item \textbf{\cite{lin17b}} prove a result similar to the one of \cite{dala17} using a technique that they call the lower interpolant. Their result states that the mean squared error of the Fused Lasso estimator with the choice of the tuning parameter $\lambda=n^{-\frac{3}{4}} W_{\min,S_0}^{\frac{1}{4}}$ has error rate $$ \mathcal{O}\left(\frac{s_0}{n}\left((\log s_0+\log\log n)\log n +\sqrt{\frac{n}{W_{\min,S_0}}}\right)\right). $$ \item\textbf{\cite{gunt17}} consider the sequence of estimators $\{\widehat{f}_\lambda,\lambda\geq 0 \}$, where \begin{equation*} \widehat{f}_\lambda=\arg\min_{f\in\mathbb{R}^n}\left\{\norm{Y-f}^2_2+2\sigma\lambda\norm{Df}_1 \right\}, \end{equation*} and prove that, when the minimum length condition $W_{\min,S_0}\geq \frac{c n}{s_0+1}, c\geq 1, $ is satisfied, then with high probability \begin{equation*} \inf_{\lambda\geq 0} \norm{\widehat{f}_\lambda-f^0}^2_n=\mathcal{O}\left( \frac{s_0+1}{n}\log\left(\frac{ne}{s_0+1} \right) \right). \end{equation*} \end{itemize} \section{Approach for general tree graphs}\label{sec2} The approach we follow is very similar to the one presented in the proof of Theorem 3 of \cite{dala17}. However, we refine their proof by not penalizing the first coefficient of $\beta$ and by adjusting the definition of the compatibility constant accordingly. Note that by not penalizing the first coefficient we allow it to be always active. Given our problem definition, this is the more natural approach.
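Returning for a moment to the quantities of \cite{hutt16} reviewed above: for the path graph the inverse scaling factor $\rho$ can be computed from the Moore-Penrose pseudoinverse of the incidence matrix, and one can check numerically that $\rho=\sqrt{n}/2$ for even $n$, consistent with $\rho\asymp\sqrt{n}$. A small sketch (assuming numpy; $n$ is illustrative):

```python
import numpy as np

n = 100
# Incidence matrix of the path graph with n vertices.
D = np.zeros((n - 1, n))
for j in range(n - 1):
    D[j, j] = -1.0
    D[j, j + 1] = 1.0

# rho = largest l2-norm of a column of the Moore-Penrose pseudoinverse D^+.
D_plus = np.linalg.pinv(D)
rho = np.linalg.norm(D_plus, axis=0).max()

# The j-th column of D^+ is the minimum-norm solution of Dx = e_j, a mean-zero
# step function with squared norm j(n-j)/n; the maximum over j gives rho^2 = n/4,
# i.e. rho = sqrt(100)/2 = 5 here.
print(rho)
```
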
Let $\beta\in\mathbb{R}^n$ be a vector of coefficients and $S\subseteq\{2,\ldots, n \}$ a subset of the indices of $\beta$, called the active set, with $s:=\abs{S}$ its cardinality. \begin{definition}[\textbf{Compatibility constant}]\label{defcc} The compatibility constant $\kappa(S)$ is defined as \begin{equation*} \kappa^2(S):=\min\left\{(s+1)\norm{X\beta}^2_n: \norm{\beta_S}_1-\norm{\beta_{-(\{1\}\cup S)}}_1=1 \right\}. \end{equation*} \end{definition} Let $V_{\{1\}\cup S}$ denote the linear subspace of $\mathbb{R}^n$ spanned by the columns of $X$ with index in $\{1\}\cup S$. Let $\Pi_{\{1\}\cup S}$ be the orthogonal projection matrix onto $V_{\{1\}\cup S}$. We have that $\Pi_{\{1\}\cup S}=X_{\{1\}\cup S}(X_{\{1\}\cup S}'X_{\{1\}\cup S})^{-1}X_{\{1\}\cup S}'$. \begin{definition} The vector $\omega\in\mathbb{R}^n$ is defined as \begin{equation*} \omega_j=\frac{\norm{X_j'(\text{I}-\Pi_{\{1\}\cup S})}_2}{\sqrt{n}},\forall j \in [n]. \end{equation*} \end{definition} \begin{remark} Note that $\omega_{\{1\}\cup S}=0$ and $0 \leq \omega\leq 1$, since for tree graphs the maximum $\ell^2$-norm of a column of $X$ is $\sqrt{n}$. \end{remark} \begin{definition} Take $\gamma>1$. The vector of weights $w\in\mathbb{R}^n$ is defined as \begin{equation*} w_j=1-\frac{\omega_j}{\gamma}, \forall j \in [n]. \end{equation*} \end{definition} \begin{remark} Note that $0\leq w \leq 1$ and that $w_{\{1\}\cup S}=1$. \end{remark} For two vectors $a,b\in\mathbb{R}^k$, $a\odot b:=(a_1b_1,a_2b_2,\ldots,a_kb_k)'$. \begin{definition}[\textbf{Weighted compatibility constant}]\label{defwcc} The weighted compatibility constant $\kappa_w(S)$ is defined as \begin{equation*} \kappa_w^2(S):=\min\left\{(s+1)\norm{X\beta}^2_n: \norm{(w\odot\beta)_S}_1-\norm{(w\odot \beta)_{-(\{1\}\cup S)}}_1=1 \right\}. \end{equation*} \end{definition} \begin{remark} Note that the (weighted) compatibility constant depends on the graph through $X$, which is the path matrix of the graph rooted at vertex 1.
\end{remark} \begin{remark} Note that a key point in our approach is the computation of a lower bound for the compatibility constant over the path graph, which is shown to be tight in some special cases. The concept of a compatibility constant for total variation estimators over graphs is already present in \cite{hutt16}. However, we refer to the (different) definition given in \cite{dala17}, which we slightly modify to adapt it to our problem definition. \end{remark} \begin{theorem}[Oracle inequality for total variation regularized estimators over tree graphs]\label{t21} Fix $\delta\in (0,1)$ and $\gamma>1$. \\ Choose $ \lambda={\gamma\sigma}\sqrt{2\log\left({4(n-s-1)}/{\delta} \right)/n}$. Then, with probability at least $1-\delta$, it holds that \begin{eqnarray*} \norm{\widehat{f}-f^0}_n^2&\leq& \inf_{f\in\mathbb{R}^n } \left\{\norm{f-f^0}_n^2 + 4\lambda\norm{(Df)_{-S}}_1 \right\}\\ &+&\frac{4\sigma^2}{n}\left((s+1)+2\log\left({2}/{\delta}\right)+\frac{\gamma^2 (s+1)}{\kappa^2_w(S)} \log \left({4(n-s-1)}/{\delta} \right) \right). \end{eqnarray*} \end{theorem} \begin{proof}[Proof of Theorem \ref{t21}] See Appendix \ref{appE}. \end{proof} \section{Calculation of projection coefficients and lengths of antiprojections, a local approach}\label{sec3} In this section we present an easy and intuitive way of calculating the (anti-)projections and the related projection coefficients of a column of the path matrix of a tree rooted at vertex 1 onto a subset of the columns of the same matrix. Let this matrix be called $X$. These calculations are motivated by the necessity of finding explicit expressions for the lengths of the antiprojections (for the weighted compatibility constant) and for the projection coefficients (to check for which signal patterns the irrepresentable condition is satisfied). In particular, consider the task of projecting a column $X_j, j\not \in \{1\}\cup S$ onto $X_{\{1\}\cup S}$.
This can be seen as finding the following argmin: $$ \hat{\theta}^j:=\arg\min_{\theta^j\in\mathbb{R}^{s+1}}\norm{X_j-X_{\{1\}\cup S}\theta^j}^2_2. $$ We see that \begin{itemize} \item ${\hat{\theta}^j}'$ corresponds to the $j^{\text{th}}$ row of $X'X_{\{1\}\cup S}(X_{\{1\}\cup S}'X_{\{1\}\cup S})^{-1}$; \item $\norm{X_j-X_{\{1\}\cup S}\hat{\theta}^j}^2_2=n\omega^2_j$. \end{itemize} The direct computation of these quantities can be quite laborious. Here, we show an easier way to compute these projections and we prove that they can be computed ``locally'', i.e. taking into account only some smaller part of the graph. We start by considering the path graph. Then we treat the more general situation of ``large enough'' tree graphs. \subsection{Path graph} Let $j\not\in \{1\}\cup S$ be the index of a column of $X$ that we want to project onto $X_{\{1\}\cup S}$. Define \begin{equation}\label{jm} j^- :=\max\left\{ i<j, i \in \{1\}\cup S \right\}, \end{equation} \begin{equation}\label{jp} j^+ :=\min\left\{ i>j, i \in \{1\}\cup S\cup \{ n+1\} \right\}, \end{equation} and denote their indices inside $ \{1\}\cup S\cup \{ n+1\}=\{i_{1}, \ldots, i_{s+2} \}$ by $l^-$ and $l^+$, i.e. $j^-=i_{l^-}$ and $j^+=i_{l^+}$. We use the convention $X_{n+1}=0\in\mathbb{R}^n$. We are going to show that the projection of $X_j$ onto $X_{\{1\}\cup S}$ is the same as its projection onto $X_{\{j^-\}\cup \{j^+ \}}$. This means that the part of the set $\{1\}\cup S$ not bordering $j$ can be neglected. The intuition behind this insight can be clarified as follows. Projecting $X_j$ onto $X_{\{1\}\cup S}$ amounts to finding the projection coefficients $\hat{\theta}^j$ minimizing the length of the antiprojection. The projection is then $X_{\{1\}\cup S}\hat{\theta}^j$.
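The locality claim just stated is easy to check numerically. A small sketch (assuming numpy; $n$, $S$ and $j$ are illustrative), using the path matrix $X_{ij}=1_{\{i\ge j\}}$:

```python
import numpy as np

n = 10
X = np.tril(np.ones((n, n)))   # path matrix rooted at vertex 1: X_ij = 1{i >= j}

S = [4, 8]                     # active set (1-based); for j = 6: j^- = 4, j^+ = 8
j = 6
cols_full = [1] + S            # {1} u S
cols_local = [4, 8]            # {j^-} u {j^+}

def project(cols, v):
    A = X[:, [c - 1 for c in cols]]
    theta, *_ = np.linalg.lstsq(A, v, rcond=None)
    return A @ theta, theta

v = X[:, j - 1]
p_full, theta_full = project(cols_full, v)
p_local, _ = project(cols_local, v)

# The two projections coincide (locality), the antiprojection length matches
# (j^+ - j)(j - j^-)/(j^+ - j^-) = 2*2/4 = 1, and theta = (0, 1/2, 1/2).
print(np.allclose(p_full, p_local))
print(np.linalg.norm(v - p_full) ** 2)
print(theta_full)
```
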
Since the columns of $X_{\{1\}\cup S}$ can be seen as indicator functions on $[n]$, this projection problem can be interpreted as the problem of finding the least squares approximation to $1_{\{i\ge j\}}$ by using functions in the class $\left\{1_{\{i\ge j^* \}}, j^*\in \{1\}\cup S \right\}$. We now apply a linear transformation in order to obtain an orthogonal design. Note that $\text{I}_{s+1}= \tilde{D}^{(s+1)}X^{(s+1)}$, where $\tilde{D}^{(s+1)}$ is the incidence matrix of a path graph with $s+1$ vertices rooted at vertex 1 and $X^{(s+1)}$ is its inverse, i.e. the corresponding rooted path matrix. We get that \begin{equation*} \min_{\theta^j\in\mathbb{R}^{s+1}}\norm{X_j-X_{\{1\}\cup S}\theta^j}^2_2=\min_{\tau^j\in\mathbb{R}^{s+1}}\norm{X_j-X_{\{1\}\cup S}\tilde{D}^{(s+1)} \tau^j}^2_2, \end{equation*} where $\tau^j=X^{(s+1)}\theta^j$, i.e. the cumulative sum of the components of $\theta^j$, and $X_{\{1\}\cup S}\tilde{D}^{(s+1)}\in\mathbb{R}^{n\times{(s+1)}}$ is a matrix containing as columns the indicator functions $\left\{1_{\{i_l\le i < i_{l+1} \}}, l\in\{1, \ldots, s+1 \} \right\}$, which are pairwise orthogonal. Because of the orthogonality of the design matrix, we can now solve $s+1$ separate optimization problems to find the components of $\hat{\tau}^j$. It is clear that, to minimize the sum of squared residuals (i.e. the length of the antiprojection), $\hat{\tau}^j$ must be s.t. $$ \{\hat{\tau}^j_i \}_{i<l^-}=0 \text{ and } \{\hat{\tau}^j_i \}_{i\ge l^+}=1.$$ It now remains to find $\hat{\tau}^j_{l^-}$ by solving $$ \hat{\tau}^j_{l^-}=\arg\min_{x\in\mathbb{R}}\left\{(j-j^-)x^2+ (j^+-j)(1-x)^2 \right\}= \frac{j^+-j}{j^+-j^-}= 1-\frac{j-j^-}{j^+-j^-}. $$ We see that, to get this projection coefficient, we need to know either $j^+$ and $j^-$ or the length of the constant segment in which $j$ lies together with its position within this segment.
Thus we obtain that $$ \hat{\tau}^j=\begin{pmatrix} 0 \\ \vdots \\ 0 \\ \frac{j^+-j}{j^+-j^-} \\ 1 \\ \vdots \\ 1 \end{pmatrix} \text{ and } \hat{\theta}^j=\begin{pmatrix} 0 \\ \vdots \\ 0 \\ \frac{j^+-j}{j^+-j^-} \\ \frac{j-j^-}{j^+-j^-}\\ 0 \\ \vdots \\ 0 \end{pmatrix}, $$ and have proved the following lemma. \begin{lemma}[Localizing the projections]\label{l61} Let $X$ be the path matrix rooted at vertex 1 of a path graph with $n$ vertices and $S\subseteq \{ 2, \ldots, n \}$. For $j\not\in \{1\}\cup S$ define $j^-$ and $j^+$ as in Equations (\ref{jm}) and (\ref{jp}). Then $$ \min_{\theta^j\in\mathbb{R}^{s+1}}\norm{X_j-X_{\{1\}\cup S}\theta^j}^2_2= \min_{\tilde{\theta}^j\in\mathbb{R}^2}\norm{X_j-X_{\{j^-\}\cup \{ j^+\}}\tilde{\theta}^j}^2_2, $$ i.e. the (lengths of the) (anti-)projections can be computed in a ``local'' way. Moreover, writing $A_{\{1\}\cup S}=\text{I}_n-\Pi_{\{1\}\cup S}$, we have that $$ \norm{A_{\{1\}\cup S}X_j}^2_2= \frac{(j^+-j)(j-j^-)}{(j^+-j^-)}. $$ Furthermore, for $j<i_{s}, j\not \in \{1\}\cup S$, the sum of the entries of $\hat{\theta}^j$ is 1. \end{lemma} \subsection{General branching point}\label{s32} Using arguments similar to the ones above we can now focus on a ramification point of a general tree graph. Let us consider $K$ path graphs attached at the end of one path graph (which we assume to contain the root). The path matrix rooted at the first vertex is $$ X= \begin{pmatrix} X^{(b_1)} & & & \\ 1 & X^{(b_2)} & & \\ \vdots & & \ddots & \\ 1 & & & X^{(b_{K+1})} \end{pmatrix} $$ and we want to find the projections of $X_{-1}$ onto $X_{1}=(1, \ldots, 1)'$. The blocks $X^{(b_i)}, i \in \{1, \ldots, K+1\}$ of the matrix $X$ are $b_i\times b_i$ lower triangular matrices of ones. Let $b^*=\sum_{i=1}^{K+1}b_i$. Let us write $j=1+i, i\in \{1, \ldots, b_1-1 \}$ for the vertices in the stem and $j=\sum_{m=1}^{l}b_m-i, i\in \{1, \ldots, b_l \}, l\in \{2, \ldots, K+1 \}$ for the vertices in the branches. Without loss of generality we can consider only one branch $l\in \{2, \ldots, K+1 \}$.
We now consider two cases: $l=1$ (i.e. $j$ lies in the stem) and $l\not=1$ (i.e. $j$ lies in one of the branches). \begin{itemize} \item First case: $l=1$.\\ We have $$ \hat{\tau}^j_1=\arg\min_{x\in\mathbb{R}}\left\{ix^2+ (b^*-i)(1-x)^2 \right\}= 1-\frac{i}{b^*} $$ and $$ \norm{A_{\{1\}\cup S} X_{j}}^2_2= \frac{i(b^*-i)}{b^*}, 1\le i \le b_1-1. $$ \item Second case: $l\not= 1$.\\ We have $$ \hat{\tau}^j_1=\arg\min_{x\in\mathbb{R}}\left\{i(1-x)^2+ (b^*-i)x^2 \right\}= \frac{i}{b^*} $$ and $$ \norm{A_{\{1\}\cup S} X_{j}}^2_2= \frac{i(b^*-i)}{b^*}, 1\le i \le b_l. $$ \end{itemize} Note that in the last region before the end of one branch, the approximation of the indicator function we implicitly calculate does not have to jump up to one; thus only one coefficient of the respective $\hat{\theta}^j$ will be nonzero and this coefficient will be smaller than one. Now we focus on the case where each of the branches (path graphs) involved in a ramification contains at least one jump (i.e. one element of the set $S$). The lengths of the antiprojections are calculated in the same way as above. According to the arguments presented above, we can consider only the jumps surrounding the ramification point. Let us call them $j_1,j_2, \ldots, j_{K+1}$.
We have to find \begin{eqnarray*} \hat{\theta}^j & = & \arg\min_{\theta^j\in \mathbb{R}^{s+1}}\norm{X_j-X_{\{1\}\cup S}\theta^j}^2_2\\ & = & \arg\min_{\tilde{\theta}^j\in\mathbb{R}^{K+1}} \norm{X_j-X_{\{j_1\}\cup \ldots \cup \{j_{K+1}\}}\tilde{\theta}^j}^2_2\\ & = & \arg\min_{\tilde{\theta}^j\in\mathbb{R}^{K+1}}\norm{X_j-X_{\{j_1\}\cup \ldots \cup \{j_{K+1}\}}D^{\star}X^{\star} \tilde{\theta}^j}^2_2, \end{eqnarray*} where $$ D^{\star}=\begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ \vdots & & \ddots & \\ -1 & & & 1 \\ \end{pmatrix} \in\mathbb{R}^{(K+1)\times (K+1)}\text{ and }X^{\star}=\begin{pmatrix} 1 & & & \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & & & 1 \\ \end{pmatrix} \in\mathbb{R}^{(K+1)\times (K+1)} $$ are respectively the rooted incidence matrix of a star graph with $(K+1)$ vertices and its inverse. Let us write $j=j_1+i, i\in \{1, \ldots, b_1-1 \}$ and $j=j_l-i, i\in \{1, \ldots, b_l \}, l\in \{2, \ldots, K+1 \}$. Now let $$ \hat{\tau}^j=\arg\min_{\tau^j\in \mathbb{R}^{K+1}}\norm{X_j-X_{\{j_1\}\cup \ldots \cup \{j_{K+1}\}}D^{\star}\tau^j}^2_2. $$ We now consider two cases: $l=1$ and $l\not=1$. \begin{itemize} \item First case: $l=1$.\\ We have \begin{align*} \hat{\tau}^j_1 & =1-\frac{i}{b^*}\\ \hat{\tau}^j_l & =1,l\in\{2, \ldots, K+1 \}, \\ \end{align*} which translates into \begin{align*} \hat{\theta}^j_1 & =1-\frac{i}{b^*}\\ \hat{\theta}^j_l & =\frac{i}{b^*},l\in\{2, \ldots, K+1 \}. \\ \end{align*} \item Second case: $l\not=1$.\\ For $l=l'\not= 1$ we have \begin{align*} \hat{\tau}^j_1 & =\frac{i}{b^*}\\ \hat{\tau}^j_l & =0,l\in \{2, \ldots, K+1 \}\setminus \{l'\},\\ \hat{\tau}^j_l & =1,l=l', \\ \end{align*} which translates into \begin{align*} \hat{\theta}^j_1 & =\frac{i}{b^*}\\ \hat{\theta}^j_l & =-\frac{i}{b^*},l\in\{2, \ldots, K+1 \}\setminus \{l'\},\\ \hat{\theta}^j_l & =1-\frac{i}{b^*},l=l'. \\ \end{align*} \end{itemize} \section{Path graph}\label{sec4} \subsection{Compatibility constant} In this section we assume $\mathcal{G}$ to be the path graph with $n$ vertices.
We give two lower bounds for the compatibility constant for the path graph, with and without weights. The proofs are postponed to Appendix \ref{appB}, where we present some elements that allow extension to the branched path graph and to more general tree graphs as well. These bounds are also presented in \cite{vand18}. We use the second notation introduced in Subsection \ref{notation}. \begin{lemma}[Lower bound on the compatibility constant for the path graph, part of Theorem 6.1 in \cite{vand18}]\label{l31} For the path graph it holds that \begin{equation*} \kappa^2(S)\geq \frac{s+1}{n}\frac{1}{K}, \end{equation*} where \begin{equation*} K={1 \over d_1} + \sum_{j=2}^s \left({1 \over u_j}+ {1 \over d_j-u_j} \right) + {1 \over d_{s+1} }. \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{l31}] See Appendix \ref{appB}. \end{proof} \begin{Coro}[The bound can be tight, part of Theorem 6.1 in \cite{vand18}]\label{c32} Assume $d_j$ is even $\forall j \in \{2,\ldots, s \}$. Then we can take $u_j = d_j / 2$. Let us now define $f^{*}\in\mathbb{R}^n$ by \begin{equation*} f^{*}_i=\begin{cases} -{n \over d_1} & i=1 , \ldots , d_1 \cr {2n \over d_2} & i=d_1+1 , \ldots , d_1 + d_2 \cr \vdots & \ \cr (-1)^s {2n \over d_s} & i= \sum_{j=1}^{s-1} d_j +1 , \ldots ,\sum_{j=1}^s d_j \cr (-1)^{s+1} {n \over d_{s+1} } & i= \sum_{j=1}^s d_j +1 , \ldots , n \cr \end{cases} . \end{equation*} Let $\beta^{*}$ be defined by $f^{*}=X\beta^{*}$. Then \begin{equation*} \kappa^2(S)= \frac{s+1}{n }\frac{1}{K} , \end{equation*} where \begin{equation*} K={1 \over d_1} + \sum_{j=2}^{s} {4 \over d_j} + {1 \over d_{s+1} } . \end{equation*} \end{Coro} \begin{proof}[Proof of Corollary \ref{c32}] See Appendix \ref{appB}. \end{proof} \begin{remark} For the compatibility constant we want to find the largest possible lower bound. Thus we have to choose the $u_j$'s s.t. $K$ is minimized.
We look at the first order optimality conditions and notice that they reduce to finding the extrema of $(s-1)$ functions of the type $g(x)=\frac{1}{d-x}+\frac{1}{x}$, $x\in (0,d)$, where $d\in\mathbb{N}$ is fixed. The global minimum of $g$ on $(0,d)$ is achieved at $x=\frac{d}{2}$. Thus, since the $u_j$'s are integers, we cannot obtain the optimal value of $K$ as soon as at least one $d_j$ is odd. \end{remark} \begin{lemma}[Lower bound on the weighted compatibility constant for the path graph, Lemma 9.1 in \cite{vand18}]\label{l33} For the path graph it holds that \begin{equation*} \kappa_w^2(S)\geq \frac{s+1}{n }\frac{1}{(\norm{w}_{\infty}\sqrt{K}+\norm{Dw}_2 )^2} \geq \frac{s+1}{n}\frac{1}{2 (\norm{w}^2_{\infty}K +\norm{Dw}^2_2)}, \end{equation*} where $D$ is the incidence matrix of the path graph. \end{lemma} \begin{proof}[Proof of Lemma \ref{l33}] See Appendix \ref{appB}. \end{proof} \subsection{Oracle inequality} Define the vector \begin{equation*} \Delta:=\left( {d_1}, \lfloor{d_2}/{2}\rfloor,\lceil{d_2}/{2}\rceil, \ldots,\lfloor{d_s}/{2}\rfloor,\lceil{d_s}/{2}\rceil, {d_{s+1}} \right)\in\mathbb{R}^{2s} \end{equation*} and let $\overline{\Delta}_h$ be its harmonic mean. We now want to translate the result of Theorem \ref{t21} to the path graph. To do so we need a lower bound for the weighted compatibility constant, in particular an explicit upper bound for $\sum_{i=2}^n(w_i-w_{i-1})^2$. In this way we obtain the following Corollary. \begin{Coro}[Sharp oracle inequality for the path graph]\label{c31} Assume $d_i\ge 4, \forall i \in \{1, \ldots, s+1 \}$. It holds that \begin{eqnarray*} \norm{\widehat{f}-f^0}^2_n &\leq& \inf_{f\in\mathbb{R}^n} \left\{\norm{f-f^0}^2_n+4\lambda\norm{(Df)_{-S}}_1 \right\}\\ & + & {\frac{8\log(2/\delta)\sigma^2}{n} } + {4\sigma^2}\frac{s+1}{n}\\ & + & 8\sigma^2 \log(4(n-s-1)/\delta)\left( \frac{2\gamma^2 s}{ \bar{\Delta}_h}+5 \frac{s+1}{n}\log\left(\frac{n}{s+1} \right) \right).
\end{eqnarray*} If we choose $f=f^0$ and $S=S_0$ we obtain that \begin{eqnarray*} \norm{\widehat{f}-f^0}^2_n &=& \mathcal{O} (\log(n)s_0 / \bar{\Delta}_h)+ \mathcal{O} (\log(n)\log( n/s_0)s_0/n) . \end{eqnarray*} \begin{proof}[Proof of Corollary \ref{c31}] See Appendix \ref{appB}. \end{proof} \begin{remark} Since the harmonic mean of $\Delta$ is upper bounded by its arithmetic mean, and this upper bound is attained when all the entries of $\Delta$ are the same, the rate in the above bound is at least of order $$ \frac{s\log(n)}{n}\left(s+\log\left(\frac{n}{s} \right) \right). $$ \end{remark} \begin{remark} Our result differs from the one obtained by \cite{dala17} in two points: \begin{itemize} \item We have $\bar{\Delta}_h$, the harmonic mean of the distances between jumps, instead of $\min_j\Delta_j$, the minimum distance between jumps; \item We slightly improve the rate by reducing a $\log(n)$ factor to $\log(n/s)$. This is achieved with a more careful bound on the squares of the consecutive differences of the weights. \end{itemize} \end{remark} \section{Path graph with one branch}\label{sec5} In this section we consider $\mathcal{G}$ to be the path graph with one branch and $n$ vertices. \subsection{Compatibility constant} \begin{lemma}[Lower bound for the compatibility constant for the branched path graph]\label{l41} For the branched path graph it holds that \begin{equation*} \kappa^2(S)\geq \frac{s+1}{n}\frac{1}{K^b}, \end{equation*} where \begin{equation*} K^b=\sum_{i=1}^3\left(\frac{1}{d_1^i}+ \sum_{j=2}^{s_i}\left({1\over u^i_j} +{1\over d^i_j-u^i_j }\right) +\frac{1}{d^i_{s_i+1}} \right). \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{l41}] See Appendix \ref{appC}. \end{proof} \begin{Coro}[The bound can be tight]\label{l43} Assume $d^i_j$ is even $\forall j\in \{2, \ldots, s_i \},i\in \{1,2,3 \}$. One can then choose $u^i_j=d^i_j/2,\forall j\in \{2, \ldots, s_i \},i\in \{1,2,3 \}$.
Moreover, assume that $d^1_{s_1+1}=d^2_1=d^3_1$. Let $f^i, i\in\{1,2,3 \}$ be the restriction of $f$ to the three path graphs of lengths $p_i$. Let us now define ${f^*}^1\in \mathbb{R}^{p_1}$ by \begin{equation*} {f^*}^1_j=\begin{cases} -\frac{n}{d^1_1} & j=1, \ldots, d^1_1\\ \frac{2n}{d^1_2} & j=d^1_1+1, \ldots, d^1_1+d^1_2\\ \vdots & \\ (-1)^{s_1}\frac{2n}{d^1_{s_1}} & j=\sum_{j=1}^{s_1-1}d^1_j+1, \ldots, \sum_{j=1}^{s_1}d^1_j\\ (-1)^{s_1+1}\frac{n}{d^1_{s_1+1}} & j=\sum_{j=1}^{s_1}d^1_j+1, \ldots, p_1\\ \end{cases} \end{equation*} and for $i\in \{2,3\}$ \begin{equation*} {f^*}^i_j=\begin{cases} (-1)^{s_1+1} \frac{n}{d^i_1} & j=1, \ldots, d^i_1\\ (-1)^{s_1+2}\frac{2n}{d^i_2} & j=d^i_1+1, \ldots, d^i_1+d^i_2\\ \vdots & \\ (-1)^{s_1+s_i}\frac{2n}{d^i_{s_i}} & j=\sum_{j=1}^{s_i-1}d^i_j+1, \ldots, \sum_{j=1}^{s_i}d^i_j\\ (-1)^{s_1+s_i+1}\frac{n}{d^i_{s_i+1}} & j=\sum_{j=1}^{s_i}d^i_j+1, \ldots, p_i.\\ \end{cases} \end{equation*} Let $\beta^*$ be defined by $f^*=X\beta^*$. Then \begin{equation*} \kappa^2(S)=\frac{s+1}{n }\frac{1}{K^b}, \end{equation*} where \begin{equation*} K^b=\sum_{i=1}^{3}\left(\frac{1}{d^i_1}+\sum_{j=2}^{s_i} \frac{4}{d^i_{j}} + \frac{1}{d^i_{s_i+1}} \right). \end{equation*} \end{Coro} \begin{proof}[Proof of Corollary \ref{l43}] See Appendix \ref{appC}. \end{proof} Consider the decomposition of the branched path graph into three path graphs, implicitly performed by using the \textbf{second notation} in Section \ref{notation}. Let $D^*$ denote the incidence matrix of the branched path graph, where the entries in the rows corresponding to the edges connecting the three above-mentioned path graphs have been substituted with zeroes.
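The tightness computations above (Corollary \ref{c32} and its branched analogue) can be spot-checked numerically by evaluating the scale-invariant ratio $(s+1)\norm{X\beta}^2_n/(\norm{\beta_S}_1-\norm{\beta_{-(\{1\}\cup S)}}_1)^2$ at the extremal signal $f^*$. A minimal sketch for the unbranched path graph case (assuming numpy; $n=12$, $s=2$, $d=(4,4,4)$ are illustrative):

```python
import numpy as np

n, s = 12, 2
d = [4, 4, 4]                       # segment lengths d_1, d_2, d_3

# Extremal signal f* of the Corollary: -n/d_1, +2n/d_2, -n/d_3 on the segments.
f = np.array([-n / d[0]] * d[0] + [2 * n / d[1]] * d[1] + [-n / d[2]] * d[2])

# beta with f = X beta, X the rooted path matrix: beta_1 = f_1, beta_j = f_j - f_{j-1}.
beta = np.diff(f, prepend=0.0)
S = [4, 8]                          # 0-based positions of the jumps (vertices 5 and 9)

num = (s + 1) * (f @ f) / n         # (s+1) ||X beta||_n^2
den = (sum(abs(beta[i]) for i in S)
       - sum(abs(beta[i]) for i in range(1, n) if i not in S)) ** 2

K = 1 / d[0] + 4 / d[1] + 1 / d[2]  # = 3/2
print(num / den, (s + 1) / (n * K)) # both equal 1/6
```
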
\begin{lemma}[Lower bound on the weighted compatibility constant for the branched path graph]\label{l42} For the branched path graph it holds that \begin{eqnarray*} \kappa^2_w(S)&\geq& \frac{s+1}{n}\frac{1}{(\sqrt{K^b}\norm{w}_{\infty}+ \norm{D^*w}_2)^2} \geq\frac{s+1}{n}\frac{1}{2({K^b}\norm{w}^2_{\infty}+ \norm{D^*w}^2_2)}\\ &\geq& \frac{s+1}{n}\frac{1}{2({K^b}\norm{w}^2_{\infty}+ \norm{Dw}^2_2)} . \end{eqnarray*} \end{lemma} \begin{proof}[Proof of Lemma \ref{l42}] See Appendix \ref{appC}. \end{proof} \subsection{Oracle inequality} As in the case of the path graph, to prove an oracle inequality for the branched path graph we need an explicit lower bound on the weighted compatibility constant to insert into Theorem \ref{t21}. The resulting bound is similar to the one obtained in the proof of Corollary \ref{c31}, with one difference: the region around the branching point $b$ now has to be handled with care. For the branched path graph we define the vectors \begin{equation*} \Delta^i := (d^i_1,\lfloor d^i_2/2 \rfloor ,\lceil d^i_2/2 \rceil, \ldots,\lfloor d^i_{s_i}/2 \rfloor,\lceil d^i_{s_i}/2 \rceil , d^i_{s_i+1})\in \mathbb{R}^{2s_i}, \end{equation*} and $\Delta:=(\Delta^1,\Delta^2,\Delta^3)\in \mathbb{R}^{2s}$. Let $\bar{\Delta}_h$ be the harmonic mean of $\Delta$. \begin{remark} As made clear in the \textbf{second notation} in Section \ref{notation}, we require that all $d^1_{s_1+1},d^2_1,d^3_1\geq 2$, i.e. $b^*=b_{s_1+1}+b_{s_1+2}+b_{s_1+s_2+3}\geq 6$. This means that our approach can handle the case where at most one of the jumps surrounding the bifurcation point occurs directly at the bifurcation point. Note that neither $b_{s_1+1}=0$ nor $b_{s_1+2}=b_{s_1+s_2+3}=0$ is allowed.
\end{remark} We can distinguish the following four cases: \begin{enumerate}[1)] \item $b_{s_1+1},b_{s_1+2},b_{s_1+s_2+3}\ge 2$; \item $b_{s_1+2}=0$ or $b_{s_1+s_2+3}=0$; \item $b_{s_1+1}=1$; \begin{enumerate}[a)] \item $b_{s_1+2}\wedge b_{s_1+s_2+3}=2$; \item $b_{s_1+2}\wedge b_{s_1+s_2+3}\geq 3$; \end{enumerate} \item $b_{s_1+2}=1$ or $b_{s_1+s_2+3}=1$. \end{enumerate} \begin{Coro}[Sharp oracle inequality for the branched path graph]\label{c41} Assume that $d^1_1, d^2_{s_2+1}, d^3_{s_3+1}\ge 4$. It holds that \begin{eqnarray*} \norm{\widehat{f}-f^0}^2_n &\leq& \inf_{f\in\mathbb{R}^n} \left\{\norm{f-f^0}^2_n+4\lambda\norm{(Df)_{-S}}_1 \right\}\\ &+ & {\frac{8\log(2/\delta)\sigma^2}{n} } + {4\sigma^2}\frac{s+1}{n}\\ &+& 8\sigma^2 \log(4(n-s-1)/\delta)\left(\frac{2\gamma^2 s}{\bar{\Delta}_h}+\frac{5(2s+3)}{2n}\log \left(\frac{n+1}{2s+3} \right)+ \frac{\zeta}{n} \right), \end{eqnarray*} where \begin{equation*} \zeta=\begin{cases} 0 &, \text{ Case 1)}\\ b^*/2 &,\text{ Case 2)}\\ 3 &,\text{ Case 3)a)} \\ b^*/4 &,\text{ Case 3)b)}\\ b^*/4 &,\text{ Case 4)} \end{cases}. \end{equation*} If we choose $f=f^0$ and $S=S_0$ we get that \begin{equation*} \norm{\hat{f}-f^0}^2_n=\mathcal{O}(\log (n)s_0 / \bar{\Delta}_h)+\mathcal{O}(\log (n)\log(n/s_0)s_0/n)+ \mathcal{O}(\log (n)\zeta /n). \end{equation*} \end{Coro} \begin{proof}[Proof of Corollary \ref{c41}] See Appendix \ref{appC}. \end{proof} \section{Extension to more general tree graphs}\label{sec6} In this section we consider only situations corresponding to Case 1) of Corollary \ref{c41}. This means that we assume that, even when more than one branch is attached to a ramification point, the edges connecting the branches to the ramification point and the ones immediately following do not carry jumps (i.e. are not elements of the set $S$).
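Since the remainder terms in the oracle inequalities above are driven by the harmonic mean $\bar{\Delta}_h$ of the (half-)segment lengths, here is a small sketch of its computation (plain Python; the segment lengths are illustrative, not taken from the text):

```python
# Build Delta = (d_1, floor(d_2/2), ceil(d_2/2), ..., floor(d_s/2), ceil(d_s/2), d_{s+1})
# for one path graph and compute its harmonic mean, as used in the oracle inequalities.
d = [6, 5, 9, 4]  # illustrative segment lengths d_1, ..., d_{s+1}
delta = [d[0]] + [x for m in d[1:-1] for x in (m // 2, (m + 1) // 2)] + [d[-1]]
harmonic_mean = len(delta) / sum(1.0 / x for x in delta)
print(delta, harmonic_mean)
```

For a branched graph, the vectors $\Delta^i$ of the individual path graphs are simply concatenated before taking the harmonic mean.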
\subsection{Oracle inequality for general tree graphs} With the insights gained in Section \ref{sec3} we can, by simple means, prove an oracle inequality for a general tree graph where the jumps in $S$ are far enough from the branching points, in analogy to Case 1) in Corollary \ref{c41}. Here as well we use the general approach of Theorem \ref{t21}, so we need to handle the weighted compatibility constant with care and find a lower bound for it. We know that, when we are in (the generalization of) Case 1) of Corollary \ref{c41}, to prove bounds for the compatibility constant, the tree graph can be seen as a collection of path graphs glued together at (some of) their \textbf{extremities}. As seen in Section \ref{sec3}, the lengths of the antiprojections for the vertices around ramification points depend on all the branches attached to the ramification point in question. Here, for the sake of simplicity, we assume that $d^i_j\ge 4,\forall j, \forall i$, i.e. there are at least four vertices between consecutive jumps, as well as before the first and after the last jump of each path graph resulting from the decomposition of the tree graph. This is what we call a ``large enough'' tree graph. Indeed, for $d^i_j\ge 4$, we have that $\log(d^i_j)\le 2\log(d^i_j/2)$. Let $\mathcal{G}$ be a tree graph with the properties described above. In particular, it can be decomposed into $g$ path graphs. For each of these path graphs, using the second notation in Subsection \ref{notation}, we define the vectors $$\Delta^i=( d^i_1, \lceil d^i_2/2 \rceil,\lfloor d^i_2/2 \rfloor, \ldots, \lceil d^i_{s_i}/2 \rceil,\lfloor d^i_{s_i}/2 \rfloor, d^i_{s_i+1} )\in \mathbb{R}^{2s_i}, i \in \{1, \ldots, g \} $$ and $$ \abs{\Delta^i}=( \lceil d^i_1/2 \rceil,\lfloor d^i_1/2 \rfloor, \ldots, \lceil d^i_{s_i+1}/2 \rceil,\lfloor d^i_{s_i+1}/2 \rfloor )\in \mathbb{R}^{2s_i+2}, i \in \{1, \ldots, g \}.
$$ Moreover we write $$ \Delta= (\Delta^1, \ldots, \Delta^g)\in \mathbb{R}^{2s} \text{ and } \abs{\Delta}= (\abs{\Delta}^1, \ldots, \abs{\Delta}^g)\in \mathbb{R}^{2(s+g)}. $$ We have that for $\mathcal{G}$, $$ \kappa^2(S)\ge \frac{s+1}{n}\frac{1}{K}, \quad K\le \frac{2s}{\bar{\Delta}_h}, $$ where $\bar{\Delta}_h$ is the harmonic mean of $\Delta$. Moreover, an upper bound for the inverse of the weighted compatibility constant can be computed by upper bounding the squared consecutive pairwise differences of the weights for the $g$ path graphs. We thus get, in analogy to Corollary \ref{c31}, $$\frac{1}{\kappa^2_w(S)}\le \frac{2n}{s+1}\left(\frac{2s}{\bar{\Delta}_h}+ \frac{5}{\gamma^2}\frac{s+g}{n}\log \left(\frac{n}{s+g} \right) \right). $$ We therefore get the following corollary. \begin{Coro}[Oracle inequality for a general tree graph] Let $\mathcal{G}$ be a tree graph, which can be decomposed into $g$ path graphs. Assume that $d^i_j\ge 4, \forall j \in \{ 1, \ldots, s_i+1\}, \forall i \in \{1, \ldots, g \}$. Then \begin{eqnarray*} \norm{\widehat{f}-f^0}^2_n & \leq & \inf_{f\in\mathbb{R}^n} \left\{\norm{f-f^0}^2_n+4\lambda\norm{(Df)_{-S}}_1 \right\}\\ & + & {\frac{8\log(2/\delta)\sigma^2}{n} } + {4\sigma^2}\frac{s+1}{n}\\ & + & 8\sigma^2 \log(4(n-s-1)/\delta)\left( \frac{2\gamma^2s}{ \bar{\Delta}_h}+ 5\frac{(s+g)}{n}\log\left(\frac{n}{s+g} \right) \right). \end{eqnarray*} \end{Coro} \begin{remark} Notice that it is advantageous to choose a decomposition where the path graphs are as large as possible, s.t. $g$ is small and fewer requirements on the $d^i_j$'s are imposed. \end{remark} \begin{remark} This approach is of course not optimal; however, it allows us to prove in a simple way a theoretical guarantee for the Edge Lasso estimator if some (not extremely restrictive) requirements on $\mathcal{G}$ and $S$ are satisfied.
\end{remark} \section{Asymptotic signal pattern recovery: the irrepresentable condition}\label{sec7} \subsection{Review of the literature on pattern recovery}\label{prec} Let $Y=X\beta^0+\epsilon, \epsilon\sim\mathcal{N}_n(0,\sigma^2\text{I}_n)$, where $Y\in \mathbb{R}^n, X\in\mathbb{R}^{n\times p},\beta^0\in \mathbb{R}^p,\epsilon\in\mathbb{R}^n$. Let $S_0:=\left\{j\in[p]:\beta^0_j\not=0 \right\}$ be the active set of $\beta^0$ and $-S_0$ its complement. We are interested in the asymptotic sign recovery properties of the Lasso estimator \begin{equation*} \hat{\beta}:=\arg\min_{\beta\in\mathbb{R}^p}\left\{ \norm{Y-X\beta}^2_n+2\lambda\norm{\beta}_1\right\}. \end{equation*} \begin{definition}[\textbf{Sign recovery}, Definition 1 in \cite{zhao06}] We say that an estimator $\hat{\beta}$ recovers the signs of the true coefficients $\beta^0$ if \begin{equation*} \text{sgn}(\hat{\beta})=\text{sgn}(\beta^0). \end{equation*} We then write \begin{equation*} \hat{\beta}=_s\beta^0. \end{equation*} \end{definition} \begin{definition}[\textbf{Pattern recovery}] We say that an estimator $\hat{f}$ of a signal $f^0$ on a graph $\mathcal{G}$ with incidence matrix $D$ recovers the signal pattern if \begin{equation*} D\hat{f}=_s Df^0. 
\end{equation*} \end{definition} \begin{definition}[\textbf{Strong sign consistency}, Definition 2 in \cite{zhao06}] We say that the Lasso estimator $\hat{\beta}$ is strongly sign consistent if $\exists \lambda=\lambda(n):$ \begin{equation*} \lim_{n\to \infty}\mathbb{P}\left(\hat{\beta}(\lambda)=_s\beta^0 \right)=1. \end{equation*} \end{definition} \begin{definition}[\textbf{Strong irrepresentable condition}, \cite{zhao06}] Without loss of generality we can write \begin{equation*} \beta^0=\begin{pmatrix} \beta^0_{S_0}\\ \beta^0_{-S_0} \end{pmatrix}=\begin{pmatrix} \beta^0_{S_0}\\ 0 \end{pmatrix}=: \begin{pmatrix} \beta^0_{1}\\ \beta^0_{2} \end{pmatrix}, \end{equation*} where 1 and 2 are shorthand notations for $S_0$ and $-S_0$ and \begin{equation*} \hat{\Sigma}:=\frac{X'X}{n}=\begin{pmatrix} \hat{\Sigma}_{11} & \hat{\Sigma}_{12}\\ \hat{\Sigma}_{21} & \hat{\Sigma}_{22} \end{pmatrix}. \end{equation*} Assume $\hat{\Sigma}_{11}$ and $\hat{\Sigma}_{22}$ are invertible. The strong irrepresentable condition is satisfied if $\exists \eta\in (0,1]:$ \begin{equation*} \norm{\hat{\Sigma}_{21}\hat{\Sigma}_{11}^{-1}\text{sgn}(\beta^0_1)}_{\infty}\leq 1-\eta. \end{equation*} \end{definition} \cite{zhao06} prove (in their Theorem 4) that under Gaussian noise the strong irrepresentable condition implies strong sign consistency of the Lasso estimator, if $\exists 0\le c_1<c_2\le 1$ and $C_1>0: s_0=\mathcal{O}(n^{c_1})$ and $n^{\frac{1-c_2}{2}}\min_{j\in S_0} \abs{\beta^0_j}\ge C_1$. For our setup this means that $s_0$ has to grow more slowly than $\mathcal{O}(n)$ and that the magnitude of the smallest nonzero coefficient has to decay (much) more slowly than $\mathcal{O}(n^{-1/2})$. In the literature, considerable attention has been given to the question of whether it is possible to consistently recover the pattern of a piecewise constant signal contaminated with some noise, say Gaussian noise.
In that regard, \cite{qian16} highlight the so-called \textbf{staircase problem}: as soon as there are two consecutive jumps in the same direction in the underlying signal separated by a constant segment, no consistent pattern recovery is possible, since the irrepresentable condition (cf.\ \cite{zhao06}) is violated. Some cures have been proposed to mitigate the staircase problem. \cite{roja15,otte16} suggest modifying the algorithm for computing the Fused Lasso estimator. Their strategy is based on the connection made by \cite{roja14} between the Fused Lasso estimator and a sequence of discrete Brownian Bridges. \cite{owra17} propose instead to normalize the design matrix of the associated Lasso problem, to comply with the irrepresentable condition. Another proposal aimed at complying with the irrepresentable condition is the one by \cite{qian16}, based on the preconditioning of the design matrix with the puffer transformation defined in \cite{jia15}, which results in estimating the jumps of the true signal with the soft-thresholded differences of consecutive observations. \subsection{Approach to pattern recovery for total variation regularized estimators over tree graphs} Let us now consider the case of the Edge Lasso on a tree graph rooted at vertex 1. We saw in Section \ref{sec1} that the problem can be transformed into an ordinary Lasso problem where the first coefficient is not penalized. We start with the following remark. \begin{remark}[The irrepresentable condition when some coefficients are not penalized] Let us consider the Lasso problem where some coefficients are not penalized, i.e. the estimator \begin{equation*} \hat{\beta}:=\arg\min_{\beta\in\mathbb{R}^p}\left\{\norm{Y-X\beta}^2_n+2\lambda\norm{\beta_{-U}}_1 \right\}, \end{equation*} where $U,R,S$ are three subsets partitioning $[p]$. In particular $U$ is the set of unpenalized coefficients, $R$ is the set of truly zero coefficients, and $S$ is the set of truly nonzero (active) coefficients.
We assume the linear model $Y=X\beta^0+\epsilon, \epsilon\sim\mathcal{N}_n(0,\sigma^2\text{I}_n)$. The vector of true coefficients $\beta^0$ can be written as \begin{equation*} \beta^0=\begin{pmatrix} \beta^0_{U} \\ \beta^0_{S}\\ 0 \end{pmatrix}. \end{equation*} Moreover, we write \begin{equation*} \frac{X'X}{n}=:\hat{\Sigma}= \begin{pmatrix} \hat{\Sigma}_{UU} & \hat{\Sigma}_{US} & \hat{\Sigma}_{UR}\\ \hat{\Sigma}_{SU} & \hat{\Sigma}_{SS} & \hat{\Sigma}_{SR}\\ \hat{\Sigma}_{RU} & \hat{\Sigma}_{RS} & \hat{\Sigma}_{RR} \end{pmatrix} . \end{equation*} Assume that $\abs{U}\le n$ and that $\hat{\Sigma}_{UU}, \hat{\Sigma}_{SS}$ and $\hat{\Sigma}_{RR}$ are invertible. We can write the irrepresentable condition as \begin{equation*} \norm{X_R' A_U X_S(X_S'A_U X_S)^{-1}z^0_S}_{\infty}\le 1-\eta, \end{equation*} where $z^0_S=\text{sgn}(\beta^0_S)$, $A_U=\text{I}_n- \Pi_U$ is the antiprojection matrix onto $V_U$, the linear subspace spanned by $X_U$, and $\Pi_{U}:= X_U(X_U'X_U)^{-1}X_U'$ is the orthogonal projection matrix onto $V_U$. Indeed, write $\delta:=\hat{\beta}-\beta^0$. The KKT conditions can be written as \begin{equation}\label{ekkt1} \hat{\Sigma}_{UU}\delta_U+\hat{\Sigma}_{US}\delta_S+\hat{\Sigma}_{UR}\delta_R-\frac{X_U'\epsilon}{n}=0; \end{equation} \begin{equation}\label{ekkt2} \hat{\Sigma}_{SU}\delta_U+\hat{\Sigma}_{SS}\delta_S + \hat{\Sigma}_{SR}\delta_R-\frac{X_S'\epsilon}{n}+\lambda \hat{z}_S=0, \hat{z}_S\in\partial\norm{\hat{\beta}_S}_1; \end{equation} \begin{equation}\label{ekkt3} \hat{\Sigma}_{RU}\delta_U+\hat{\Sigma}_{RS}\delta_S + \hat{\Sigma}_{RR}\delta_R-\frac{X_R'\epsilon}{n}+\lambda \hat{z}_R=0, \hat{z}_R\in\partial\norm{\hat{\beta}_R}_1.
\end{equation} By solving Equation \ref{ekkt1} with respect to $\delta_U$ and inserting the result into Equation \ref{ekkt2}, solving Equation \ref{ekkt2} with respect to $\delta_S$, back-substituting to express both $\delta_U$ and $\delta_S$ as functions of $\delta_R$, and finally inserting these expressions into Equation \ref{ekkt3}, in analogy with the proof proposed by \cite{zhao06}, we find the irrepresentable condition when some coefficients are not penalized, which reads as follows: $\exists \eta>0:$ \begin{equation*} \norm{\left(\hat{\Sigma}_{RS}-\hat{\Sigma}_{RU}\hat{\Sigma}_{UU}^{-1}\hat{\Sigma}_{US} \right)\left(\hat{\Sigma}_{SS}-\hat{\Sigma}_{SU}\hat{\Sigma}_{UU}^{-1}\hat{\Sigma}_{US} \right)^{-1}z^0_S}_{\infty}\le 1-\eta, \end{equation*} where $z^0_S=\text{sgn}(\beta^0_S)$. Note that $\Pi_U=\frac{1}{n}X_U \hat{\Sigma}_{UU}^{-1}X_U'$, from which we obtain the expression given above. \end{remark} Thus, using the notation of the remark above, we let $U=\{1\}$, $S=S_0$ and $R=[n]\setminus (S_0\cup \{1\})$. \begin{lemma}\label{l71} We have that \begin{equation*} \norm{X_R'X_{\{1\}\cup {S_0}}(X_{\{1\}\cup {S_0}}'X_{\{1\}\cup {S_0}})^{-1} z^0_{\{1\}\cup {S_0}}}_{\infty}= \norm{X_R'A_1 X_{ {S_0}}(X_{ {S_0}}'A_1X_{ {S_0}})^{-1} z^0_ {S_0}}_{\infty}. \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{l71}] See Appendix \ref{appA}. \end{proof} This means that for tree graphs the irrepresentable condition can be checked for the ``active set'' $\{1 \} \cup {S_0}$ instead of $ {S_0}$, but then the first column has to be neglected. This fact is also justified in \cite{qian16}, though in a different way than the one we propose. \begin{remark}[The irrepresentable condition for asymptotic pattern recovery of a signal on a graph does not depend on the orientation of the edges of the graph] We assume the linear model $Y=f^0+\epsilon, \epsilon\sim\mathcal{N}_n(0,\sigma^2\text{I}_n)$.
Then the Edge Lasso can be written as \begin{equation*} \hat{f}=\arg\min_{f\in\mathbb{R}^n}\left\{\norm{Y-f}^2_n+2\lambda\norm{(\tilde{I}\tilde{D}f)_{-1}}_1 \right\}, \end{equation*} where \begin{equation*} \tilde{I}\in\mathcal{I}=\left\{\tilde{I}\in\mathbb{R}^{n\times n}, \tilde{I} \text{ diagonal}, \text{diag}(\tilde{I})\in\{1,-1\}^n \right\}. \end{equation*} Define $\beta = \tilde{I}\tilde{D}f$. Then $f=X\tilde{I}\beta$. The linear model assumed becomes $Y=X\tilde{I}\beta^0+\epsilon$ and the estimator \begin{equation*} \hat{\beta}=\arg\min_{\beta\in\mathbb{R}^n}\left\{\norm{Y-X\tilde{I}\beta}^2_n+2\lambda \norm{\beta_{-1}}_1 \right\}, \tilde{I}\in\mathcal{I}. \end{equation*} It is clear that the design matrix is now $X\tilde{I}$. Let us write, without loss of generality, \begin{equation*} \tilde{I}=\begin{pmatrix} \tilde{I}_{\{1\}\cup S_0} & 0 \\ 0 & \tilde{I}_{-(\{1\}\cup S_0)}\end{pmatrix}. \end{equation*} According to Lemma \ref{l71}, we can check whether $\exists \eta \in (0,1]$: \begin{equation*} \norm{\tilde{I}_{-(\{1\}\cup S_0)} X'_{-(\{1\}\cup S_0)} X_{\{1\}\cup S_0}(X'_{\{1\}\cup S_0}X_{\{1\}\cup S_0})^{-1}\tilde{I}_{\{1\}\cup S_0}\tilde{z}^0_{\{1\}\cup S_0}}_{\infty}\le 1-\eta, \end{equation*} where $\tilde{z}^0_{\{1\}\cup S_0}= \begin{pmatrix}0 \\ \tilde{z}^0_{S_0} \end{pmatrix}$ and $\tilde{z}^0_{S_0}=\text{sgn}(\beta^0_{S_0})= \tilde{I}_{S_0}\text{sgn}(\tilde{D}f^0)= \tilde{I}_{S_0}\text{sgn}(\bar{\beta}^0)$, where $\bar{\beta}^0=\tilde{D}f^0$, i.e. the vector of truly nonzero jumps when the root has sign $+1$ and the edges are oriented away from it.
Note that $\tilde{I}_{-(\{1\}\cup S_0)}$ does not change the $\ell^{\infty}$-norm and by inserting the expression for $\tilde{z}^0_{\{1\}\cup S_0}$ we get \begin{equation*} \norm{ \tilde{I}_{-(\{1\}\cup S_0)} X'_{-(\{1\}\cup S_0)} X_{\{1\}\cup S_0}(X'_{\{1\}\cup S_0}X_{\{1\}\cup S_0})^{-1}\tilde{I}_{\{1\}\cup S_0} \begin{pmatrix}0 & \\ & \tilde{I}_{S_0} \end{pmatrix}\begin{pmatrix}0 \\ \bar{z}^0_{S_0} \end{pmatrix}}_{\infty}\le 1-\eta, \forall \tilde{I}\in \mathcal{I}, \end{equation*} where $\bar{z}^0_{S_0}=\text{sgn}(\bar{\beta}^0)$. Since $\tilde{I}_{\{1\}\cup S_0}$ and $\tilde{I}_{S_0}$ cancel out up to signs, this means that it is enough to check that $\exists \eta >0$: \begin{equation*} \norm{ X'_{-(\{1\}\cup S_0)} X_{\{1\}\cup S_0}(X'_{\{1\}\cup S_0}X_{\{1\}\cup S_0})^{-1}\begin{pmatrix}0 \\ \bar{z}^0_{S_0} \end{pmatrix}}_{\infty}\le 1-\eta \end{equation*} to know, for all the orientations of the graph, whether the irrepresentable condition holds. The intuition behind this is that, by choosing the orientation of the edges of the graph, we choose at the same time the sign that the true jumps have across the edges. \end{remark} \subsection{Irrepresentable condition for the path graph} \begin{theorem}[Irrepresentable condition for the transformed Fused Lasso, Theorem 2 in \cite{qian16}]\label{t2qian16} Consider the model for a piecewise constant signal and let $S_0$ denote the set of indices of the jumps in the true signal, i.e. \begin{equation*} S_0=\left\{j: f^0_j\not= f^0_{j-1}, j=2,\cdots, n \right\}=\left\{i_1,\cdots,i_{s_0} \right\}, \end{equation*} with $s_0=\abs{S_0}$ denoting its cardinality. The irrepresentable condition for the Edge Lasso on the path graph holds if and only if one of the following two conditions holds: \begin{itemize} \item The jump points are consecutive,\\ i.e. $s_0=1$ or $\max_{2\leq k\leq s_0}(i_k-i_{k-1})=1$. \item All the jumps between constant signal blocks have alternating signs, i.e. \begin{equation*} (f^0_{i_k}-f^0_{i_{k}-1})(f^0_{i_{k+1}}-f^0_{i_{k+1}-1})<0, k=1,\cdots, s_0-1.
\end{equation*} \end{itemize} \end{theorem} \begin{remark} This fact can also be read off easily from the considerations made in Section \ref{sec3}, in particular in Lemma \ref{l61}. \end{remark} \subsection{Irrepresentable condition for the path graph with one branch} \begin{Coro}[Irrepresentable condition for the branched path graph]\label{l44} Assume $S_0\not=\emptyset$. The irrepresentable condition for the branched path graph is satisfied if and only if one of the following cases holds: \begin{itemize} \item $s_0=n-1 $ or $s_0=1 $; \item $\text{sgn}(\beta^0_{i_{s_1}})= -\text{sgn}(\beta^0_{j_{1}})= -\text{sgn}(\beta^0_{k_{1}})$ and in the subvectors $\beta^0_{1:n_1}$ and $\beta^0_{(b,n_1+1:n)}$ there are no two consecutive nonzero entries of $\beta^0$ with the same sign separated by some zero entry. \end{itemize} Note that: \begin{itemize} \item If $i_{s_1}=b$, then the requirement above is relaxed to $\text{sgn}(\beta^0_{j_{1}})= \text{sgn}(\beta^0_{k_{1}})$; \item If $j_1=b+1$, then the requirement above is relaxed to $\text{sgn}(\beta^0_{i_{s_1}})= -\text{sgn}(\beta^0_{k_{1}})$; \item If $k_{1}=n_1+1$, then the requirement above is relaxed to $\text{sgn}(\beta^0_{i_{s_1}})= -\text{sgn}(\beta^0_{j_{1}})$. \end{itemize} \end{Coro} \begin{proof}[Proof of Corollary \ref{l44}] This is a special case of Theorem \ref{t72} and follows directly from it. \end{proof} \subsection{The irrepresentable condition for general branching points} When the graph $\mathcal{G}$ has a branching point to which arbitrarily many branches are attached, for the irrepresentable condition to be satisfied it is required, in addition to the absence of staircase patterns along the path graphs building $\mathcal{G}$, that the last jump in the path graph containing the branching point has sign $+$ (resp. $-$) and all the jumps in the other path graphs glued to this branching point have sign $-$ (resp. $+$), with respect to the orientation of the edges away from the root.
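The path-graph dichotomy of Theorem \ref{t2qian16} can be verified numerically in the unpenalized-first-coefficient form of Lemma \ref{l71}, with $X$ the lower-triangular matrix of ones and the constant column projected out. The following sketch (illustrative only, not part of the theoretical development; names are our own) evaluates $\norm{X_R' A_1 X_S(X_S'A_1 X_S)^{-1} z^0_S}_{\infty}$ for a staircase signal and for a signal with alternating jumps:

```python
import numpy as np

def path_ic_value(f0):
    """Irrepresentable-condition value for the Edge Lasso on a path graph.

    f0 is the true piecewise constant signal; the condition holds iff the
    returned value is strictly below 1."""
    f0 = np.asarray(f0, dtype=float)
    n = len(f0)
    X = np.tril(np.ones((n, n)))          # f = X beta; column 1 is unpenalized
    A1 = np.eye(n) - np.ones((n, n)) / n  # antiprojection onto the constant column
    beta = np.diff(f0)                    # jumps beta_2, ..., beta_n
    S = [j for j in range(1, n) if beta[j - 1] != 0]
    R = [j for j in range(1, n) if beta[j - 1] == 0]
    zS = np.sign(beta[[j - 1 for j in S]])
    XS, XR = X[:, S], X[:, R]
    return np.max(np.abs(XR.T @ A1 @ XS @ np.linalg.solve(XS.T @ A1 @ XS, zS)))

print(path_ic_value([0, 0, 1, 1, 2]))  # staircase: value 1, condition violated
print(path_ic_value([0, 0, 1, 1, 0]))  # alternating jumps: value 1/2, holds
```

For the staircase signal the maximum is attained (with value exactly 1) at the zero-jump coordinate sitting between the two same-sign jumps, which is precisely the staircase obstruction discussed above.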
For the indices of the $K+1$ jumps surrounding the ramification point we use the same notation as in Subsection \ref{s32}, i.e.\ we denote them by $\{j_1, \ldots, j_{K+1} \}$. \begin{theorem}\label{t72} Consider the Edge Lasso estimator on a general ``large enough'' tree graph. The irrepresentable condition for the corresponding (almost) ordinary Lasso problem is satisfied if and only if for the paths connecting branching points the conditions of Theorem \ref{t2qian16} hold and, for the true signal around any ramification point involving $K+1$ edges, the jump just before it and the jumps right after it have opposite signs. More formally, this last condition reads: \begin{enumerate} \item $\text{sgn}(\beta^0_{j_1})\text{sgn}(\beta^0_{j_l})<0, \forall l \in \left\{l^*\in \{2, \ldots, K+1\}, b_{l^*}\not= 0 \right\}$; \item $\text{sgn}(\beta^0_{j_l})\text{sgn}(\beta^0_{j_{l'}})>0, \forall l, l' \in \left\{l^*\in \{2, \ldots, K+1\}, b_{l^*}\not= 0 \right\}$; \item $b_1-1,b_2, \ldots, b_{K+1}< \frac{2}{K+1} b^*$. \end{enumerate} Note that if $b_1=1$, then the condition $\text{sgn}(\beta^0_{j_1})\text{sgn}(\beta^0_{j_l})<0, \forall l \in \left\{l^*\in \{2, \ldots, K+1\}, b_{l^*}\not= 0 \right\}$ is removed. \end{theorem} \begin{proof}[Proof of Theorem \ref{t72}] See Appendix \ref{appA}. \end{proof} \section{Conclusion}\label{sec8} We refined some details of the approach of \cite{dala17} for proving a sharp oracle inequality for the total variation regularized estimator over the path graph. In particular, we followed an approach where one coefficient is left unpenalized and we gave a proof of a lower bound on the compatibility constant which does not use probabilistic arguments. The key point of this article is that we proved that the approach applied to the path graph can indeed be generalized to a branched graph and further to more general tree graphs.
In particular we found a lower bound on the compatibility constant and we generalized the result concerning the irrepresentable condition obtained for the path graph by \cite{qian16}.
\section{Introduction} \label{SCintro} In discrete convex analysis \cite{Fuj05book,Mdca98,Mdcasiam}, a variety of discrete convex functions are considered. Among others, integrally convex functions, due to Favati--Tardella \cite{FT90}, constitute a common framework for discrete convex functions, and almost all kinds of discrete convex functions are known to be integrally convex. Indeed, separable convex, {\rm L}-convex, ${\rm L}^{\natural}$-convex, {\rm M}-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, and ${\rm M}^{\natural}_{2}$-convex functions are known to be integrally convex \cite{Mdcasiam}. Multimodular functions \cite{Haj85} are also integrally convex, as pointed out in \cite{Mdcaprimer07}. Moreover, BS-convex and UJ-convex functions \cite{Fuj14bisubmdc} are integrally convex. The concept of integral convexity is used in formulating discrete fixed point theorems and has found applications in economics and game theory \cite{IMT05,Mdcaeco16,Yan09fixpt}. A proximity theorem for integrally convex functions has recently been established in \cite{MMTT17proxIC} together with a proximity-scaling algorithm for minimization. Fundamental operations for integrally convex functions such as projection and convolution are investigated in \cite{MM17projcnvl}. In this paper we are concerned with subgradients and biconjugates of integer-valued integrally convex functions. For a function $f: \ZZ\sp{n} \to \RR \cup \{ +\infty \}$ we denote its effective domain as $\domZ f = \{ x \in \ZZ\sp{n} \mid f(x) < +\infty \}$; we always assume that $\domZ f$ is nonempty.
For an integer-valued function $f: \ZZ\sp{n} \to \ZZ \cup \{ +\infty \}$, we define $f\sp{\bullet}: \ZZ\sp{n} \to \ZZ \cup \{ +\infty \}$ by \begin{align} f\sp{\bullet}(p) &= \sup\{ \langle p, x \rangle - f(x) \mid x \in \ZZ\sp{n} \} \qquad ( p \in \ZZ\sp{n}), \label{conjvexZpZ} \end{align} where $\langle p, x \rangle = \sum_{i=1}\sp{n} p_{i} x_{i}$ is the inner product of $p=(p_{1}, p_{2}, \ldots, p_{n})$ and $x=(x_{1}, x_{2}, \allowbreak \ldots, \allowbreak x_{n})$. This function $f\sp{\bullet}$ is referred to as the \kwd{integral conjugate} of $f$. We can apply (\ref{conjvexZpZ}) twice to obtain $f\sp{\bullet\bullet} = (f\sp{\bullet})\sp{\bullet}$, which is called the \kwd{integral biconjugate} of $f$. Concerning conjugacy and biconjugacy it is natural to ask the following questions for a given class of discrete convex functions. \begin{itemize} \item For an integer-valued function $f$ in the class, does the integral conjugate $f\sp{\bullet}$ belong to the same class? If not, how is it characterized? \item For an integer-valued function $f$ in the class, does integral biconjugacy $f\sp{\bullet\bullet} =f$ hold? \end{itemize} These questions are completely settled for separable convex, {\rm L}-convex, ${\rm L}^{\natural}$-convex, {\rm M}-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, and ${\rm M}^{\natural}_{2}$-convex functions; see \cite[Chapter 8]{Mdcasiam}. We may say that they are also settled for multimodular functions via equivalence between ${\rm L}^{\natural}$-convexity and multimodularity pointed out in \cite{Mmult05}. The conjugacy question for BS-convex and UJ-convex functions is addressed in \cite{Fuj14bisubmdc}. For integrally convex functions, the first question about conjugacy is already settled in the negative \cite{MS01rel}. Indeed, there is an example of an integrally convex function whose integral conjugate is not integrally convex; see Remark \ref{RMconjIC} in Section \ref{SCintcnvfn}. 
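Because the supremum in (\ref{conjvexZpZ}) is taken over $\ZZ\sp{n}$, the integral conjugate can be evaluated by brute force whenever $\domZ f$ is a small finite set. The following sketch (illustrative only; the box and the $p$-window are our own choices, with the window wide enough to contain a subgradient at every domain point) computes $f\sp{\bullet}$ and $f\sp{\bullet\bullet}$ for a separable convex, hence integrally convex, function and confirms $f\sp{\bullet\bullet}=f$ on the domain:

```python
import numpy as np
from itertools import product

def integral_conjugate(f, prange):
    """Brute-force integral conjugate: f maps integer tuples to values;
    the sup over Z^n reduces to a max over the finite effective domain."""
    return {p: max(np.dot(p, x) - fx for x, fx in f.items()) for p in prange}

# f(x1, x2) = x1^2 + x2^2 on the box [-2, 2]^2 (f = +infinity outside).
box = list(product(range(-2, 3), repeat=2))
f = {x: x[0] ** 2 + x[1] ** 2 for x in box}

# Conjugate on a p-window wide enough to contain 2x for every x in the box.
fconj = integral_conjugate(f, list(product(range(-4, 5), repeat=2)))
fbiconj = {x: max(np.dot(p, x) - v for p, v in fconj.items()) for x in box}
print(all(fbiconj[x] == f[x] for x in box))  # integral biconjugacy: True
```

Here $p=2x$ is an integral subgradient at $x$, which is why the restricted $p$-window already certifies $f\sp{\bullet\bullet}(x)=f(x)$ at every point of the box.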
The main result of this paper is an affirmative answer to the second question about biconjugacy, which is stated as Theorem~\ref{THbiconjIC} in Section \ref{SCbiconj}. Integral biconjugacy is closely related to integral subgradients. For a point $x \in \domZ f$, the \kwd{integral subdifferential} of $f$ at $x$ is defined as \begin{equation} \label{subgZZdef} \subgZ f(x) = \{ p \in \ZZ\sp{n} \mid f(y) - f(x) \geq \langle p, y - x \rangle \ \mbox{for all } y \in \ZZ\sp{n} \} , \end{equation} and an element of $\subgZ f(x)$ is called an \kwd{integral subgradient} of $f$ at $x$. It is known that $f\sp{\bullet\bullet}(x) = f(x)$ if and only if $\subgZ f(x) \not= \emptyset$; see Lemma \ref{LMbiconjsubg} in Section \ref{SCbiconj}. The condition $\subgZ f(x) \not= \emptyset$ is sometimes referred to as the \kwd{integral subdifferentiability} of $f$ at $x$. Our proof of the integral biconjugacy actually consists in showing the integral subdifferentiability, which is stated as Theorem~\ref{THsubgrIC} in Section \ref{SCsubr}. The significance of the present result can be summarized as follows: \begin{enumerate} \item Our result of integral biconjugacy for integrally convex functions serves as a unified proof of integral biconjugacy for various classes of discrete convex functions, such as {\rm L}-convex, ${\rm L}^{\natural}$-convex, {\rm M}-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, and ${\rm M}^{\natural}_{2}$-convex functions. The existing proofs for these functions are based on conjugacy statements valid for respective function classes, and as such, vary with function classes. Our proof considers integral biconjugacy directly, without involving conjugacy properties that depend on function classes. \item In addition to being a unified proof for known results, our result reveals the new fact that integer-valued BS-convex and UJ-convex functions admit integral subgradients and enjoy integral biconjugacy (Corollaries \ref{COsubgrBSUJ} and \ref{CObiconjBSUJ}).
\item Our results imply that a theory of discrete DC functions can be developed for integrally convex functions. In particular, an analogue of the Toland--Singer duality for integrally convex functions can be established. See Section \ref{SCdcprog} for details. \end{enumerate} This paper is organized as follows. Section~\ref{SCintcnvfn} is a review of relevant results on integrally convex functions. Section~\ref{SCres} presents the main results of this paper, followed by Section~\ref{SCproof} for the proofs. Section~\ref{SCconclrem} concludes the paper with some remarks. \section{Integrally Convex Functions} \label{SCintcnvfn} In this section we summarize fundamental facts about integrally convex functions. The reader is referred to \cite{FT90} and \cite[Section 3.4]{Mdcasiam} for backgrounds. For integer vectors $a \in (\ZZ \cup \{ -\infty \})\sp{n}$ and $b \in (\ZZ \cup \{ +\infty \})\sp{n}$ with $a \leq b$, $[a,b]_{\ZZ}$ denotes the integer interval (box, discrete rectangle) between $a$ and $b$, i.e., $[a,b]_{\ZZ} = \{ x \in \ZZ\sp{n} \mid a \leq x \leq b \}$. For $x \in \RR^{n}$ the integral neighborhood of $x$ is defined as \[ N(x) = \{ z \in \mathbb{Z}^{n} \mid | x_{i} - z_{i} | < 1 \ (i=1,\ldots,n) \}. \] For a function $f: \mathbb{Z}^{n} \to \mathbb{R} \cup \{ +\infty \}$ the local convex extension $\tilde{f}: \RR^{n} \to \RR \cup \{ +\infty \}$ of $f$ is defined as the union of all convex envelopes of $f$ on $N(x)$. That is, \begin{equation} \label{fnconvclosureloc2} \tilde f(x) = \min\{ \sum_{y \in N(x)} \lambda_{y} f(y) \mid \sum_{y \in N(x)} \lambda_{y} y = x, \ (\lambda_{y}) \in \Lambda(x) \} \quad (x \in \RR^{n}) , \end{equation} where $\Lambda(x)$ denotes the set of coefficients for convex combinations indexed by $N(x)$: \[ \Lambda(x) = \{ (\lambda_{y} \mid y \in N(x) ) \mid \sum_{y \in N(x)} \lambda_{y} = 1, \lambda_{y} \geq 0 \ (\forall y \in N(x)) \} . 
\] If $\tilde f$ is convex on $\RR^{n}$, then $f$ is said to be {\em integrally convex} \cite{FT90,Mdcasiam}. A set $S \subseteq \ZZ^{n}$ is said to be integrally convex if the convex hull $\overline{S}$ of $S$ coincides with the union of the convex hulls of $S \cap N(x)$ over $x \in \RR^{n}$, i.e., if, for any $x \in \RR^{n}$, $x \in \overline{S} $ implies $x \in \overline{S \cap N(x)}$. A set $S$ is integrally convex if and only if its indicator function $\delta_{S}: \ZZ\sp{n} \to \{ 0, +\infty \}$ is an integrally convex function, where the indicator function $\delta_{S}$ is defined by $\delta_{S}(x) = \left\{ \begin{array}{ll} 0 & (x \in S) , \\ + \infty & (x \not\in S) . \\ \end{array} \right.$ An integrally convex set $S$ is ``hole-free'' in the sense that \begin{equation} \label{holefree} S = \overline{S} \cap \mathbb{Z}^{n}. \end{equation} In this paper we need the following property of integrally convex sets. \begin{proposition} \label{PRpolyhedICset} The convex hull $\overline{S}$ of an integrally convex set $S \subseteq \ZZ\sp{n}$ is an integer polyhedron. Moreover, for any face $F$ of $\overline{S}$, the smallest affine subspace containing $F$ is given as $\{ x + \sum_{k=1}\sp{h} c_{k} d^{(k)} \mid c_{1}, c_{2}, \ldots, c_{h} \in \RR \}$ for a point $x$ in $F$ and some direction vectors $d^{(k)} \in \{ -1,0,+1 \}\sp{n}$ $(k=1,2,\ldots, h)$. \end{proposition} \begin{proof} The proof is given in Section \ref{SCproofpolyh}. \end{proof} \begin{remark} \rm \label{RMedgedir} The properties mentioned in Proposition \ref{PRpolyhedICset} do not characterize integral convexity of a set. For example, let $S = \{ (0,0,0), \allowbreak (1,0,1), \allowbreak (1,1,-1), (2,1,0) \}$. The convex hull $\overline{S}$ is a parallelogram with edge directions $(1,0,1)$ and $(1,1,-1)$, and hence is an integer polyhedron such that the smallest affine subspace containing each face is spanned by $\{ -1,0,1 \}$-vectors. 
However, $S$ is not integrally convex, since $x = [(1,0,1) + (1,1,-1) ]/2 = (1,1/2,0) \in \overline{S}$, $N(x) = \{ (1,0,0), (1,1,0) \}$, and $S \cap N(x) = \emptyset$. \finbox \end{remark} The effective domain of an integrally convex function is an integrally convex set. Integral convexity of a function can be characterized by a local condition under the assumption that the effective domain is an integrally convex set. \begin{theorem}[\cite{FT90,MMTT17proxIC}] \label{THfavtarProp33} Let $f: \mathbb{Z}^{n} \to \mathbb{R} \cup \{ +\infty \}$ be a function with an integrally convex effective domain. Then the following properties are equivalent: {\rm (a)} $f$ is integrally convex. {\rm (b)} For every $x, y \in \ZZ\sp{n}$ with $\| x - y \|_{\infty} =2$ we have \ \begin{equation} \label{intcnvconddist2} \tilde{f}\, \bigg(\frac{x + y}{2} \bigg) \leq \frac{1}{2} (f(x) + f(y)). \end{equation} \vspace{-1.7\baselineskip} \\ \finbox \end{theorem} A minimizer of an integrally convex function can be characterized by a local minimality condition as follows. \begin{theorem}[\protect{\cite[Proposition 3.1]{FT90}}; see also \protect{\cite[Theorem 3.21]{Mdcasiam}}] \label{THintcnvlocopt} Let $f: \mathbb{Z}^{n} \to \mathbb{R} \cup \{ +\infty \}$ be an integrally convex function and $x^{*} \in \domZ f$. Then $x^{*}$ is a minimizer of $f$ if and only if $f(x^{*}) \leq f(x^{*} + d)$ for all $d \in \{ -1, 0, +1 \}^{n}$. \finbox \end{theorem} \begin{remark} \rm \label{RMintcnvconcept} The concept of integrally convex functions is introduced in \cite{FT90} for functions defined on integer intervals (discrete rectangles). The extension to functions with general integrally convex effective domains is straightforward, which is found in \cite{Mdcasiam}. Theorem~\ref{THfavtarProp33} is proved in \cite[Proposition 3.3]{FT90} when the effective domain is an integer interval and in \cite{MMTT17proxIC} for the general case. 
\finbox \end{remark} \begin{remark} \rm \label{RMconjIC} The integral conjugate of an integrally convex function $f$ is not necessarily integrally convex. This is shown by the following example (\cite[Example 4.15]{MS01rel} with a minor correction). Let $S = \{(1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 1, 0), (0, 0, 0, 1)\}$. This is obviously an integrally convex set, as it is contained in $\{ 0,1 \}\sp{4}$. Accordingly, its indicator function $\delta_{S}: \ZZ^{4} \to \{0, + \infty\}$ is integrally convex. The integral conjugate $g = \delta_{S}^\bullet$ is given by \[ g(p_{1}, p_{2}, p_{3}, p_{4}) = \max\{p_{1} + p_{2}, p_{2} + p_{3}, p_{1} + p_{3}, p_{4}\} \qquad (p \in \ZZ^{4}). \] Let $\tilde g$ be the local convex extension of $g$. For $p =(0,0,0,0)$ and $q=(1,1,1,2)$ we have $(p+q)/2 =(1/2,1/2,1/2,1) = [(1,0,0,1) + (0,1,0,1) + (0,0,1,1) + (1,1,1,1)]/4$ and $\tilde g ((p+q)/2) = [g(1,0,0,1) + g(0,1,0,1) + g(0,0,1,1) + g(1,1,1,1)]/4 = (1+1+1+2)/4 = 5/4$, whereas $(g(p)+ g(q)) /2 = (0+2)/2 = 1$. Thus we have $\tilde g ((p+q)/2) > (g(p)+ g(q)) /2$, violating (\ref{intcnvconddist2}) in Theorem~\ref{THfavtarProp33}. Hence $g$ is not integrally convex. \finbox \end{remark} \section{Results} \label{SCres} \subsection{Integral subgradients} \label{SCsubr} \begin{theorem}[Integral subdifferentiability] \label{THsubgrIC} For an integer-valued integrally convex function $f: \ZZ^{n} \to \ZZ \cup \{ +\infty \}$, we have $\subgZ f(x) \neq \emptyset$ for all $x \in\domZ f$. \end{theorem} \begin{proof} The proof is given in Section \ref{SCproofsubgr}. \end{proof} The following example shows that integral subdifferentiability is not guaranteed without the assumption of integral convexity. 
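Theorem~\ref{THsubgrIC} can be checked by brute force on small instances: when $\domZ f$ is finite, membership $p \in \subgZ f(x)$ in (\ref{subgZZdef}) is a finite system of linear inequalities, so it suffices to scan integer vectors $p$ in a bounded window. The sketch below (illustrative only; the window size is our own choice) verifies integral subdifferentiability at every domain point for a separable convex, hence integrally convex, function:

```python
import numpy as np
from itertools import product

def has_integer_subgradient(f, x, pbound=3):
    """Scan integer vectors p with |p_i| <= pbound for membership in the
    integral subdifferential of f at x; f maps integer tuples to values."""
    fx = f[x]
    for p in product(range(-pbound, pbound + 1), repeat=len(x)):
        if all(fy - fx >= np.dot(p, np.subtract(y, x)) for y, fy in f.items()):
            return True
    return False

# f(x1, x2) = |x1| + 2|x2| on [-2, 2]^2 is separable convex, hence
# integrally convex, so an integer subgradient exists everywhere.
box = list(product(range(-2, 3), repeat=2))
f = {x: abs(x[0]) + 2 * abs(x[1]) for x in box}
print(all(has_integer_subgradient(f, x) for x in box))  # True
```

Running the same scan on the non-integrally-convex function of the example below would return False at the origin, matching the hand computation given there.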
\begin{example}[{\cite[Example 1.1]{Mdca98}}] \rm \label{EXla1} Let $D = \{ (0,0,0), \pm (1,1,0), \pm (0,1,1), \pm (1,0,1) \}$ and $f: \mathbb{Z}^3 \to \mathbb{Z} \cup \{+\infty\}$ be defined by \begin{align*} f(x_{1},x_{2},x_{3}) = \begin{cases} (x_{1}+x_{2}+x_{3})/2 & (x \in D), \\ +\infty & (\textrm{otherwise}). \end{cases} \end{align*} This function can be naturally extended to a convex function on the convex hull $\overline{D}$ of $D$ and $D$ is hole-free in the sense of (\ref{holefree}). However, $D$ is not integrally convex since for $x=(1,1,0)$ and $y=(-1,0,-1)$ we have $(x+y)/2 = (0, 1/2, -1/2)$, $N((0, 1/2, -1/2)) = \{ (0,0,0), (0,1,0), (0,0,-1), (0,1,-1) \}$, and hence $N((0, 1/2, -1/2)) \cap D \allowbreak = \{ (0,0,0) \}$. Therefore, $f$ is not integrally convex. To investigate the integral subgradient of $f$ at the origin, suppose that $p \in \subgZ f(\veczero) \subseteq \mathbb{Z}^3$. Since $f(y) - f(\veczero) \ge \langle p, y \rangle$ for all $y \in D$, we must have \begin{center} \begin{tabular}{ccc} $ 1 \ge p_{1} + p_{2}$, & $ 1 \ge p_{2} + p_{3}$, & $ 1 \ge p_{3} + p_{1}$, \\ $-1 \ge -p_{1} - p_{2}$, & $-1 \ge -p_{2} - p_{3}$, & $-1 \ge -p_{3} - p_{1}$. \end{tabular} \end{center} However, this system admits no integer solution, though it is satisfied by $(p_{1}, p_{2}, p_{3}) = (1/2, 1/2, 1/2)$. Hence $\subgZ f(\veczero) = \emptyset$. \finbox \end{example} \begin{remark} \rm \label{RMsubgNotIntPolyh} Here is a subtle point about the statement of Theorem~\ref{THsubgrIC}. In parallel to the integral subdifferential $\subgZ f(x)$ in (\ref{subgZZdef}), the (real) subdifferential $\subgR f(x)$ is defined by \begin{equation} \label{subgZRdef2} \subgR f(x) = \{ p \in \RR\sp{n} \mid f(y) - f(x) \geq \langle p, y - x \rangle \ \mbox{for all } y \in \ZZ\sp{n} \}. \end{equation} Theorem~\ref{THsubgrIC} states that $\subgR f(x) \cap \ZZ\sp{n} \not= \emptyset$, but it does not claim a stronger statement that $\subgR f(x)$ is an integer polyhedron. 
Indeed, $\subgR f(x)$ is not necessarily an integer polyhedron, as the following example shows. Let $f: \mathbb{Z}^3 \to \mathbb{Z} \cup \{+\infty\}$ be defined by $f(0,0,0)=0$ and $f(1,1,0)=f(0,1,1)=f(1,0,1)=1$, with $\domZ f = \{ (0,0,0), (1,1,0), (0,1,1), (1,0,1) \}$. This function is integrally convex and the subdifferential of $f$ at the origin is given as \[ \subgR f(\veczero) = \{ p= (p_{1}, p_{2}, p_{3}) \in \RR\sp{3} \mid p_{1} + p_{2} \leq 1, p_{2} + p_{3} \leq 1, p_{1} + p_{3} \leq 1 \}, \] which is not an integer polyhedron, having a non-integral vertex at $p=(1/2, 1/2, 1/2)$. In contrast, it is known \cite{Mdcasiam} that $\subgR f(x)$ is an integer polyhedron if $f$ is ${\rm L}^{\natural}$-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, or ${\rm M}^{\natural}_{2}$-convex. \finbox \end{remark} \begin{remark} \rm \label{RMsubgBS} BS-convex and UJ-convex functions are investigated by Fujishige \cite{Fuj14bisubmdc}. For an integer-valued BS-convex function $f$, the subdifferential $\subgR f(x)$ in (\ref{subgZRdef2}) contains a half-integral vector \cite[Theorem 2]{Fuj14bisubmdc}, and it contains an integral vector if the function $f$ arises as the conjugate of a $D$-convex function, which, by definition, is associated with a disconcordant Freudenthal simplicial division $D$ \cite[Theorem 5]{Fuj14bisubmdc}. The function used as an example in Remark \ref{RMsubgNotIntPolyh} is actually a BS-convex function (\cite[Example~3]{Fuj14bisubmdc}), and therefore, $\subgR f(x)$ is not necessarily an integer polyhedron for a BS-convex function $f$. Nevertheless, BS-convex and UJ-convex functions admit integral subgradients, as they are integrally convex. This fact is stated below as a corollary of Theorem~\ref{THsubgrIC}. \finbox \end{remark} \begin{corollary} \label{COsubgrBSUJ} \quad \noindent {\rm (1)} For an integer-valued BS-convex function $f$, we have $\subgZ f(x) \neq \emptyset$ for all $x \in\domZ f$.
\noindent {\rm (2)} For an integer-valued UJ-convex function $f$, we have $\subgZ f(x) \neq \emptyset$ for all $x \in\domZ f$. \finbox \end{corollary} \subsection{Integral biconjugacy} \label{SCbiconj} In this section we establish the integral biconjugacy $f\sp{\bullet\bullet} = f$ for integer-valued integrally convex functions. \begin{lemma}[{\cite[Lemma 4.1]{Mdca98}}] \label{LMbiconjsubg} For each $x \in \domZ f$ we have: \ \ $f\sp{\bullet\bullet}(x) = f(x) \iff \subgZ f(x) \not= \emptyset$. \end{lemma} \begin{proof} By the definitions (\ref{conjvexZpZ}) and (\ref{subgZZdef}) it holds, for $x \in \domZ f$ and $p \in \ZZ\sp{n}$, that \begin{equation} \label{subgconj} p \in \subgZ f(x) \iff f(x) + f\sp{\bullet}(p) = \langle p, x \rangle . \end{equation} If there exists $p \in \subgZ f(x)$, (\ref{subgconj}) implies that $f(x) + f\sp{\bullet}(p) = \langle p, x \rangle$. From this and the definition of $f\sp{\bullet\bullet}(x)$ we obtain $f\sp{\bullet\bullet}(x) \geq \langle p, x \rangle - f\sp{\bullet}(p) = f(x)$, while $ f\sp{\bullet\bullet}(x) \leq f(x)$ is obvious. Conversely, if $f\sp{\bullet\bullet}(x) = f(x)$, there exists $p \in \ZZ\sp{n}$ such that $\langle p, x \rangle - f\sp{\bullet}(p) = f\sp{\bullet\bullet}(x) = f(x)$. This implies $p \in \subgZ f(x)$ by (\ref{subgconj}). \end{proof} \begin{remark} \rm \label{RMfbbf} The desired integral biconjugacy $f\sp{\bullet\bullet} = f$ does not follow immediately from the combination of Theorem~\ref{THsubgrIC} and Lemma \ref{LMbiconjsubg}. Let $f: \ZZ\sp{2} \to \ZZ \cup \{+\infty \}$ be the indicator function of $S = \{ (x_{1},x_{2}) \in \ZZ\sp{2} \mid x_{2} \geq \sqrt{2} x_{1} - 1/2 \}$, which is not an integrally convex set. Then $\subgZ f(x) = \subgR f(x) = \{ (0,0) \} \not= \emptyset$ for all $x \in S = \domZ f$. 
On the other hand, we have $f\sp{\bullet}(0,0) = 0$ and $f\sp{\bullet}(p) =+\infty$ for $p \in \ZZ\sp{2}\setminus \{(0,0)\}$, from which it follows that $f\sp{\bullet\bullet}(x)= 0$ for all $x \in \ZZ\sp{2}$. Thus we have $\domZ f\sp{\bullet\bullet} = \ZZ\sp{2} \not= \domZ f$, and, a fortiori, $f\sp{\bullet\bullet} \not= f$. This example, taken from \cite[Remark 4.1]{Mdca98}, motivates the technical condition (\ref{Fcond2}) below. \finbox \end{remark} For $f: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{+\infty\}$ we consider the following conditions: \begin{align} & \domZ f = {\rm cl}(\overline{\domZ f}) \cap \mathbb{Z}^{n} \neq \emptyset, \label{Fcond1} \\ & {\rm cl}(\overline{\domZ f}) \textrm{ is rationally-polyhedral}, \label{Fcond2} \\ & \subgZ f(x) \neq \emptyset \ \mbox{for all} \ x \in \domZ f , \label{Fcond3} \end{align} where ${\rm cl}(\overline{\domZ f})$ denotes the closure of the convex hull\footnote{ ${\rm cl}(\overline{\domZ f})$ coincides with the closed convex hull of $\domZ f$; see \cite[Section 1.4]{HL01}. } of $\domZ f$, and a closed convex set in $\mathbb{R}^{n}$ is said to be rationally-polyhedral if it is described by a system of finitely many inequalities with rational coefficients. The first condition (\ref{Fcond1}) is natural, the second condition (\ref{Fcond2}) is rather technical, and the third condition (\ref{Fcond3}) is essential. \begin{lemma}[{\cite[Lemma 4.2]{Mdca98}}] \label{LMsubgext} Suppose that $f: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{+\infty\}$ satisfies the conditions {\rm (\ref{Fcond1})}, {\rm (\ref{Fcond2})}, and {\rm (\ref{Fcond3})}.\footnote{In Lemma 4.1 of \cite{Mdca98} an additional condition ``$\partial_\mathbb{R} f(x) = {\rm cl}(\overline{\domZ f})$'' is involved in the definition of $\mathcal{F}_G$ in (4.18). However, we can verify that this condition is not needed.} Then the following hold.
\smallskip \par \noindent {\rm (1)} \ ${\displaystyle \domZ f\sp{\bullet} = \bigcup \{ \subgZ f(x) \mid x \in \domZ f \} \not= \emptyset. }$ \smallskip \noindent {\rm (2)} \ $ \domZ f\sp{\bullet\bullet} = \domZ f$. \smallskip \noindent {\rm (3)} \ $f\sp{\bullet\bullet}(x) = f(x) \qquad (x \in \ZZ\sp{n})$. \smallskip \noindent {\rm (4)} \ For $x \in \domZ f$, $p \in \domZ f\sp{\bullet}:$ \quad $p \in \subgZ f(x) \iff x \in \subgZ f\sp{\bullet}(p)$. \smallskip \noindent {\rm (5)} \ $\subgZ f\sp{\bullet}(p) \not= \emptyset \qquad (p \in \domZ f\sp{\bullet})$. \finbox \end{lemma} \begin{lemma} \label{LMcond123IC} An integer-valued integrally convex function satisfies the conditions {\rm (\ref{Fcond1})}, {\rm (\ref{Fcond2})}, and {\rm (\ref{Fcond3})}. \end{lemma} \begin{proof} Since $S = \domZ f$ is integrally convex, $\overline{S}$ is an integer polyhedron by Proposition \ref{PRpolyhedICset}. In particular, we have ${\rm cl}(\overline{S}) = \overline{S}$. The condition (\ref{Fcond1}) is satisfied by (\ref{holefree}). The property (\ref{Fcond2}) can be shown as follows. By Proposition \ref{PRpolyhedICset}, the smallest affine subspace containing a facet $F$ of $\overline{S}$ is described by a system of equations, say, $A_{F} x = b_{F}$ with the entries of $A_{F}$ belonging to $\{ -1,0,+1 \}$ and $b_{F}$ being an integer vector. This implies the rationality (\ref{Fcond2}). The property (\ref{Fcond3}) is shown in Theorem~\ref{THsubgrIC}. \end{proof} By combining Lemmas \ref{LMsubgext} and \ref{LMcond123IC}, we obtain the following statements about the integral subdifferential, integral conjugate, and integral biconjugate of an integer-valued integrally convex function. \begin{proposition} \label{PRsubgextIC} For an integer-valued integrally convex function $f: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{+\infty\}$, we have the properties {\rm (1)} to {\rm (5)} in Lemma {\rm \ref{LMsubgext}}. 
\finbox \end{proposition} The integral biconjugacy claimed in Proposition \ref{PRsubgextIC} deserves a separate statement as a theorem. \begin{theorem}[Integral biconjugacy] \label{THbiconjIC} For an integer-valued integrally convex function $f: \ZZ^{n} \to \ZZ \cup \{ +\infty \}$ we have $f\sp{\bullet\bullet}(x) =f(x)$ for all $x \in \ZZ\sp{n}$. \finbox \end{theorem} The following example shows that integral biconjugacy is not guaranteed without the assumption of integral convexity. \begin{example}[{\cite[Example 1.1]{Mdca98}}] \rm \label{EXla1cont} In Example \ref{EXla1}, $D = \domZ f$ is not an integrally convex set, and therefore $f$ is not integrally convex. The integral conjugate of $f$ is given as \[ f^{\bullet}(p) = \max \{ 0, |p_{1}+p_{2}-1|, |p_{2}+p_{3}-1|, |p_{3}+p_{1}-1| \} \] and the integral biconjugate is $f^{\bullet\bullet}(x) = \sup_{p \in \mathbb{Z}^3} \{ \langle p, x \rangle - f^{\bullet}(p) \}$. Hence \[ f^{\bullet\bullet}(\veczero) = - \inf_{p \in \mathbb{Z}^3} \max \{ 0, |p_{1}+p_{2}-1|, |p_{2}+p_{3}-1|, |p_{3}+p_{1}-1| \}. \] The infimum equals $1$: the value $0$ would force $p_{1}+p_{2} = p_{2}+p_{3} = p_{3}+p_{1} = 1$, that is, $p = (1/2, 1/2, 1/2) \notin \mathbb{Z}^{3}$, while $p = (1,0,0)$ attains the value $1$. Therefore we have $f^{\bullet\bullet}(\veczero) = -1 \neq 0 = f(\veczero)$. This shows $f^{\bullet\bullet} \neq f$. \finbox \end{example} As special cases of Theorem~\ref{THbiconjIC} we obtain integral biconjugacy for {\rm L}-convex, ${\rm L}^{\natural}$-convex, {\rm M}-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, and ${\rm M}^{\natural}_{2}$-convex functions given in \cite[Theorems 8.12, 8.36, 8.46]{Mdcasiam}. Integral biconjugacy for BS-convex and UJ-convex functions is also obtained as a corollary of Theorem~\ref{THbiconjIC}. \begin{corollary} \label{CObiconjBSUJ} \quad \noindent {\rm (1)} For an integer-valued BS-convex function $f$, we have $f\sp{\bullet\bullet}(x) =f(x)$ for all $x \in \ZZ\sp{n}$. \noindent {\rm (2)} For an integer-valued UJ-convex function $f$, we have $f\sp{\bullet\bullet}(x) =f(x)$ for all $x \in \ZZ\sp{n}$.
\end{corollary} \subsection{Discrete DC programming} \label{SCdcprog} A discrete analogue of the theory of DC functions (difference of two convex functions) and DC programming has recently been proposed in \cite{MM15dcprog} using \Lnat-convex and \Mnat-convex functions. As already noted in \cite[Remark 4.7]{MM15dcprog}, such a theory of discrete DC functions can in fact be developed for functions that satisfy integral biconjugacy and integral subdifferentiability. Our present results, Theorems \ref{THsubgrIC} and \ref{THbiconjIC}, enable us to extend the theory of discrete DC functions to integrally convex functions. In particular, an analogue of the Toland--Singer duality~\cite{Sin79,Tol79} can be established for integrally convex functions as a corollary of our results. \begin{theorem}[Toland--Singer duality] \label{THtolandsinger} Let $g$ and $h$ be integer-valued integrally convex functions.\footnote{As the proof shows, the integral convexity of $g$ is not needed. That is, (\ref{tolandsingerduality}) holds for any $g: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{+\infty\}$, as long as $h: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{+\infty\}$ is integrally convex.} Then \begin{align} \label{tolandsingerduality} \inf_{x \in \mathbb{Z}^{n}} \{ g(x) - h(x) \} = \inf_{p \in \mathbb{Z}^{n}} \{ h^{\bullet}(p) - g^{\bullet}(p) \} . \end{align} \end{theorem} \begin{proof} By integral biconjugacy (Theorem~\ref{THbiconjIC}) of $h$, we can prove (\ref{tolandsingerduality}) as follows: \begin{align*} & \inf_x \{ g(x) - h(x) \} = \inf_x \{ g(x) - h^{\bullet\bullet}(x) \} = \inf_x \{ g(x) - \sup_p \{ \left<p,x\right> - h^{\bullet}(p) \} \} \\ &= \inf_x \inf_p \{ g(x) - \left<p, x\right> + h^{\bullet}(p) \} = \inf_p \{ h^{\bullet}(p) - \sup_x \{ \left<p, x\right> - g(x) \} \} \\ & = \inf_p \{ h^{\bullet}(p) - g^{\bullet}(p) \} .
\end{align*} \vspace{-2\baselineskip} \\ \end{proof} \section{Proofs} \label{SCproof} \subsection{Proof of Proposition \ref{PRpolyhedICset} about the convex hull} \label{SCproofpolyh} We start with a basic fact, which is intuitively obvious. \begin{lemma} \label{LMhullICclosed} The convex hull $\overline{S}$ of an integrally convex set $S$ is a closed set. \end{lemma} \begin{proof} Take any point $x$ in the (topological) closure of $\overline{S}$. There exists a sequence $\{ x_{k} \} \subseteq \overline{S}$ that converges to $x$. We may assume that $N(x) \subseteq N(x_{k})$ holds for all $k$, by passing to the subsequence of $\{ x_{k} \}$ consisting of those terms with $\| x_{k} - x \|_{\infty} < \varepsilon$ for a sufficiently small $\varepsilon >0$. We may further assume that $N(x_{k})$ is identical for all $k$, since there are finitely many possibilities for the set $N(x_{k})$ and we can choose an appropriate subsequence of $\{ x_{k} \}$. Let $N_{*}$ denote this $N(x_{k})$. Since $S$ is integrally convex and $x_{k} \in \overline{S}$, we have $x_{k} \in \overline{S \cap N(x_{k})} = \overline{S \cap N_{*}}$. Here $\overline{S \cap N_{*}}$ is a closed set, since $S \cap N_{*}$ is a finite set. Therefore, $x = \lim_{k} x_{k} \in \overline{S \cap N_{*}} \subseteq \overline{S}$. \end{proof} Let $S \subseteq \ZZ\sp{n}$ be an integrally convex set, and $F$ be a face of its convex hull $\overline{S}$. Let $L_{F}$ denote the linear subspace of $\RR\sp{n}$ such that the smallest affine subspace containing $F$ is represented as $ x + L_{F}$ for a point $x$ in $F$. In the following we prove Proposition \ref{PRpolyhedICset} by showing that (1) $F$ contains an integer point, (2) $L_{F}$ is spanned by vectors in $\{-1,0,+1\}\sp{n}$, and (3) $\overline{S}$ is a polyhedron. \medskip Proof of (1): Take any $x \in F$. By the integral convexity of $S$, we have $x \in \overline{S \cap N(x)}$.
That is, there exist integer points $y\sp{(1)}, y\sp{(2)}, \ldots, y\sp{(m)} \in S \cap N(x)$ and $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{m} > 0$ such that $ x = \sum_{k=1}\sp{m} \lambda_{k}y\sp{(k)}$ and $\sum_{k=1}\sp{m} \lambda_{k} = 1$. Here we have $y\sp{(1)}, y\sp{(2)}, \allowbreak \ldots, \allowbreak y\sp{(m)} \in F$, since $F$ is a face of $\overline{S}$, $x \in F$, and $y\sp{(1)}, y\sp{(2)}, \ldots, y\sp{(m)} \in \overline{S}$. Proof of (2): Fix $x \in F \cap \ZZ\sp{n}$. We shall show that there exist $d^{(1)}, d^{(2)}, \ldots, d^{(h)} \in \{-1,0,+1\}\sp{n}$ such that \begin{equation} \label{prfFAC1} F = (x + \mbox{span}\{ d^{(1)}, d^{(2)},\ldots, d^{(h)} \}) \cap \overline{S}, \end{equation} where $\mbox{span}\{ \cdots \}$ means the subspace spanned by the vectors in the braces. We assume that $F$ is not a singleton, since otherwise (\ref{prfFAC1}) is trivially true. Take any $y \in F \setminus \{ x \}$ and define $z = (1-\varepsilon)x + \varepsilon y$ with a sufficiently small $\varepsilon > 0$ so that $x \in N(z)$. Since $z \in \overline{S}$ and $S$ is integrally convex, there exist $z\sp{(1)}, z\sp{(2)}, \ldots, z\sp{(m)} \in S \cap N(z)$ and $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{m} > 0$ such that $ z = \sum_{k=1}\sp{m} \lambda_{k}z\sp{(k)}$ and $\sum_{k=1}\sp{m} \lambda_{k} = 1$. Here we have $z\sp{(1)}, z\sp{(2)}, \ldots, z\sp{(m)} \in F$, since $F$ is a face of $\overline{S}$, $z \in F$, and $z\sp{(1)}, z\sp{(2)}, \ldots, z\sp{(m)} \in \overline{S}$. It follows from $(1-\varepsilon)x + \varepsilon y = z = \sum_{k=1}\sp{m} \lambda_{k}z\sp{(k)}$ that \[ y = x + \frac{1}{\varepsilon} \sum_{k=1}\sp{m} \lambda_{k}(z\sp{(k)} - x), \] where each direction vector $z\sp{(k)} - x$ belongs to $\{-1,0,+1\}\sp{n}$, since both $z\sp{(k)}$ and $x$ are members of $N(z)$. 
By collecting all the direction vectors $z\sp{(k)} - x$ arising from all choices of $y \in F \setminus \{ x \}$ we obtain a set of vectors $\{ d^{(1)}, d^{(2)}, \ldots, d^{(h)} \} \subseteq \{-1,0,+1\}\sp{n}$ for which (\ref{prfFAC1}) holds. Proof of (3): First suppose that $\overline{S}$ is full-dimensional. For a facet $F$ of $\overline{S}$, the linear subspace $L_{F}$ is a hyperplane of dimension $n-1$, and is described by an (outward) normal vector. The normal vector is perpendicular to $(n-1)$ linearly independent direction vectors generating $L_{F}$ and is uniquely determined under some appropriate normalization of the length. Since the direction vectors are contained in $\{-1,0,+1\}\sp{n}$ by (\ref{prfFAC1}), there are only finitely many possible normal vectors, and hence $\overline{S}$ has only finitely many facets. If $\overline{S}$ is not full-dimensional, we consider normal vectors of its facets contained in the subspace $L_{\overline{S}}$. There are only finitely many such normal vectors, up to scaling. Therefore, $\overline{S}$ is a polyhedron. \subsection{Proof of Theorem~\ref{THsubgrIC} for integral subdifferentiability} \label{SCproofsubgr} Let $f: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{ +\infty \}$ be an integer-valued integrally convex function. For a point $x \in \domZ f$, the subdifferential of $f$ at $x$ is defined as \begin{equation} \label{subgZRdef} \subgR f(x) = \{ p \in \RR\sp{n} \mid f(y) - f(x) \geq \langle p, y - x \rangle \ \mbox{for all } y \in \ZZ\sp{n} \}. \end{equation} The subdifferential $\subgR f(x)$ is nonempty for every $x \in \domZ f$, since an integrally convex function is extensible to a convex function. In the following we prove that $\subgR f(x)$ contains an integer vector, which is the claim of Theorem~\ref{THsubgrIC}. We may assume that $x = \veczero$ and $f(\veczero) = 0$.
In the definition of $\subgR f(\veczero)$ by (\ref{subgZRdef}), it suffices, by Theorem~\ref{THintcnvlocopt}, to consider $y$ in $\{-1,0,+1\}^n$. Therefore, we have \begin{equation} \label{subgZR0def} \subgR f(\veczero) = \{ p \in \RR\sp{n} \mid \sum_{j=1}\sp{n} y_{j} p_{j} \leq f(y) \ \mbox{for all } y \in \{ -1,0,+1 \}\sp{n} \} . \end{equation} We represent the system of inequalities $\sum_{j=1}\sp{n} y_{j} p_{j} \leq f(y)$ for $y$ with $f(y) < +\infty$ in a matrix form as \begin{equation}\label{ineqApb} A p \leq b. \end{equation} Let $I$ denote the row set of $A$ and $A = ( a_{ij} \mid i \in I, j \in \{ 1,2,\ldots, n \})$. We denote the $i$th row vector of $A$ by $a_{i}$ for $i \in I$. The row set $I$ is indexed by $y \in \{ -1,0,+1 \}\sp{n}$ with $f(y) < +\infty$, and $a_{i}$ is equal to the corresponding $y$ for $i \in I$; we have $a_{ij} = y_{j}$ for $j=1,2,\ldots, n$ and $b_{i}= f(a_{i})$. Note that $a_{ij} \in \{ -1,0,+1 \}$ and $a_{i} \in \{ -1,0,+1 \}\sp{n}$ for all $i$ and $j$. We apply the Fourier--Motzkin elimination procedure \cite{Sch86} to the system of inequalities (\ref{ineqApb}) to show the existence of an integer vector satisfying (\ref{ineqApb}). The Fourier--Motzkin elimination for (\ref{ineqApb}) goes as follows. According to the value of the coefficient $a_{i1}$ of the first variable $p_{1}$, we partition $I$ into three disjoint parts $(I_{1}^{+},I_{1}^{0},I_{1}^{-})$ as \begin{align*} I_{1}^{+} &= \{ i \in I \mid a_{i1} = +1 \}, \\ I_{1}^{0} &= \{ i \in I \mid a_{i1} = 0 \}, \\ I_{1}^{-} &= \{ i \in I \mid a_{i1} = -1 \}, \end{align*} and decompose (\ref{ineqApb}) into three parts as \begin{align} a_{i} p \leq b_{i} & \qquad (i \in I_{1}^{+}), \label{ineqApbi+} \\ a_{i} p \leq b_{i} & \qquad (i \in I_{1}^{0}) , \label{ineqApbi0} \\ a_{i} p \leq b_{i} & \qquad (i \in I_{1}^{-}) . 
\label{ineqApbi-} \end{align} For all possible combinations of $i \in I_{1}^{+}$ and $k \in I_{1}^{-}$, we add the inequality for $i$ in (\ref{ineqApbi+}) and the inequality for $k$ in (\ref{ineqApbi-}) to obtain \begin{equation}\label{FMp2pn} (a_{i} + a_{k}) p \leq b_{i} + b_{k} \qquad (i \in I_{1}^{+},\; k \in I_{1}^{-}) . \end{equation} The inequalities in (\ref{FMp2pn}) are free from the variable $p_{1}$, since $a_{i1} + a_{k1}= 0$ for all $i \in I_{1}^{+}$ and $k \in I_{1}^{-}$. For the variable $p_{1}$ we obtain \begin{equation}\label{FMp1} \max_{k \in I_{1}^{-}} \left\{ \sum_{j=2}^n a_{kj}p_{j} - b_{k} \right\} \leq p_{1} \leq \min_{i \in I_{1}^{+}} \left\{ b_{i} - \sum_{j=2}^n a_{ij}p_{j} \right\} \end{equation} from (\ref{ineqApbi+}) and (\ref{ineqApbi-}). It is understood that the maximum over the empty set is equal to $-\infty$ and the minimum over the empty set is equal to $+\infty$. We have thus eliminated $p_{1}$ and obtained a system of inequalities in $(p_{2},\ldots,p_{n})$ consisting of (\ref{ineqApbi0}) and (\ref{FMp2pn}). Once $(p_{2},\ldots,p_{n})$ is found, $p_{1}$ can easily be found from (\ref{FMp1}), if the interval described by (\ref{FMp1}) is nonempty. It is important that the derived system of inequalities in $(p_{1}, p_{2},\ldots,p_{n})$ consisting of (\ref{ineqApbi0}), (\ref{FMp2pn}), and (\ref{FMp1}) is in fact equivalent to the original system consisting of (\ref{ineqApbi+}), (\ref{ineqApbi0}), and (\ref{ineqApbi-}). In particular, $(p_{1}, p_{2},\ldots,p_{n})$ satisfies (\ref{ineqApbi+}), (\ref{ineqApbi0}), and (\ref{ineqApbi-}) if and only if $(p_{2},\ldots,p_{n})$ satisfies (\ref{ineqApbi0}) and (\ref{FMp2pn}), and $p_{1}$ satisfies (\ref{FMp1}). The Fourier--Motzkin elimination applies the above procedure recursively to eliminate variables $p_{1},p_{2}, \ldots,p_{n-1}$. This process results in a single inequality in $p_{n}$ of the form (\ref{FMp1}). 
Then we can determine $(p_{1}, p_{2},\ldots,p_{n})$ in the reverse order $p_{n}, p_{n-1},\ldots,p_{1}$. By virtue of the integral convexity of $f$, a drastic simplification occurs in the elimination process. The inequalities (\ref{FMp2pn}) that are to be added in general are actually redundant and need not be added, which is shown in the following lemma. The lemma implies in particular that $I_{1}^{0}$ is nonempty if $I_{1}^{+}$ and $I_{1}^{-}$ are nonempty. \begin{lemma} \label{LMelimIC} The inequalities in {\rm (\ref{FMp2pn})} are implied by those in {\rm (\ref{ineqApbi0})}. \end{lemma} \begin{proof} In (\ref{FMp2pn}) we have $b_{i} = f(a_{i})$ and $b_{k} = f(a_{k})$, and hence the inequality in (\ref{FMp2pn}) can be rewritten as \begin{equation}\label{FMp2pnvar} \frac{1}{2} ( a_{i}+ a_{k} ) p \leq \frac{1}{2} ( f(a_{i}) +f(a_{k}) ) . \end{equation} By the integral convexity of $f$ there exist $y\sp{(1)}, y\sp{(2)}, \ldots, y\sp{(m)} \in N((a_{i}+a_{k})/2)$ such that \begin{equation}\label{prfFMredun1} \sum_{l=1}\sp{m} \lambda_{l} y\sp{(l)} = \frac{1}{2} ( a_{i} + a_{k} ), \qquad \sum_{l=1}\sp{m} \lambda_{l} f(y\sp{(l)}) \leq \frac{1}{2} ( f(a_{i}) +f(a_{k}) ) , \end{equation} where $\lambda_l \geq 0$ for $l =1,2,\ldots,m$ and $\sum_{l=1}\sp{m} \lambda_l = 1$ (cf., Theorem~\ref{THfavtarProp33}). Since the first component of $(a_{i}+a_{k})/2$ is zero, the first component of each $y\sp{(l)}$ must also be zero, which means that each $y\sp{(l)}$ coincides with $a_{j}$ for some $j=j(l) \in I_{1}^{0}$. Hence we have $y\sp{(l)} p \leq f(y\sp{(l)})$ for $l =1,2,\ldots,m$ by (\ref{ineqApbi0}). Using this and (\ref{prfFMredun1}) we obtain \begin{equation*} \frac{1}{2}(a_{i} + a_{k}) p = \sum_{l=1}\sp{m} \lambda_l y\sp{(l)} p \leq \sum_{l=1}\sp{m} \lambda_l f(y\sp{(l)}) \leq \frac{1}{2} ( f(a_{i}) +f(a_{k}) ) . \end{equation*} The above argument shows that (\ref{FMp2pnvar}) can be derived from the inequalities in (\ref{ineqApbi0}). 
\end{proof} For $j=2,3,\ldots,n$, define \begin{align*} I_{j}^{+} &= \{ i \in I_{j-1}^{0} \mid a_{ij} = +1 \}, \\ I_{j}^{0} &= \{ i \in I_{j-1}^{0} \mid a_{ij} = 0 \}, \\ I_{j}^{-} &= \{ i \in I_{j-1}^{0} \mid a_{ij} = -1 \}. \end{align*} Then the original system (\ref{ineqApb}) is equivalent to \begin{align} \max_{k \in I_{1}^{-}} \left\{ \sum_{j=2}^n a_{kj}p_{j} - b_{k} \right\} \leq & \ p_{1} \leq \min_{i \in I_{1}^{+}} \left\{ b_{i} - \sum_{j=2}^n a_{ij}p_{j} \right\} , \nonumber \\ \max_{k \in I_{2}^{-}} \left\{ \sum_{j=3}^n a_{kj}p_{j} - b_{k} \right\} \leq & \ p_{2} \leq \min_{i \in I_{2}^{+}} \left\{ b_{i} - \sum_{j=3}^n a_{ij}p_{j} \right\} , \nonumber \\ & \ \vdots \label{FMp12n} \\ \max_{k \in I_{n-1}^{-}} \left\{ a_{kn}p_{n} - b_{k} \right\} \leq & \ p_{n-1} \leq \min_{i \in I_{n-1}^{+}} \left\{ b_{i} - a_{in}p_{n} \right\} , \nonumber \\ \max_{k \in I_{n}^{-}} \left\{ - b_{k} \right\} \leq & \ p_{n} \leq \min_{i \in I_{n}^{+}} \left\{ b_{i} \right\} . \nonumber \end{align} Note that the expressions above are valid even when some of the index sets $I_{j}^{+}$ and/or $I_{j}^{-}$ are empty. Since $\subgR f(x)$ is nonempty, there exists a real vector $p$ satisfying the inequalities (\ref{FMp12n}). As for integrality, the last inequality in (\ref{FMp12n}) shows that we can choose an integral $p_{n} \in \ZZ$, since $b_{i} = f(a_{i})$ for $i \in I_{n}^{-} \cup I_{n}^{+}$ and $f$ is integer-valued. Then the next-to-last inequality shows that we can choose an integral $p_{n-1} \in \ZZ$, since $a_{kn}p_{n} - b_{k} \in \ZZ$ for $k \in I_{n-1}^{-}$ and $b_{i} - a_{in}p_{n} \in \ZZ$ for $i \in I_{n-1}^{+}$. Continuing in this way we can see the existence of an integer vector $p \in \ZZ\sp{n}$ satisfying (\ref{FMp12n}). This shows $\subgZ f(x) \not= \emptyset$, completing the proof of Theorem~\ref{THsubgrIC}. 
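As an illustration, the elimination can be traced on the function of Remark~\ref{RMsubgNotIntPolyh}; the following computation is elementary to verify. \begin{example} \rm Consider the integrally convex function $f$ of Remark~\ref{RMsubgNotIntPolyh} and $x = \veczero$. Ignoring the trivial inequality $0 \leq 0$ arising from $y = \veczero$, the system (\ref{ineqApb}) reads \[ p_{1} + p_{2} \leq 1, \qquad p_{2} + p_{3} \leq 1, \qquad p_{1} + p_{3} \leq 1 . \] In eliminating $p_{1}$ we have $I_{1}^{+} = \{ (1,1,0), (1,0,1) \}$ and $I_{1}^{-} = \emptyset$, so no inequalities of the form (\ref{FMp2pn}) arise, and (\ref{FMp1}) reads $p_{1} \leq \min \{ 1 - p_{2}, \, 1 - p_{3} \}$. The remaining inequality $p_{2} + p_{3} \leq 1$ similarly yields $p_{2} \leq 1 - p_{3}$, and $p_{3}$ is unconstrained. Back substitution with the integral choices $p_{3} = 0$, $p_{2} = 1$, and $p_{1} = \min \{ 0, 1 \} = 0$ gives the integral subgradient $p = (0,1,0) \in \subgZ f(\veczero)$, even though $\subgR f(\veczero)$ has the non-integral vertex $(1/2, 1/2, 1/2)$. \finbox \end{example}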
\begin{remark} \rm \label{BoundedSG} Suppose that $\subgR f(x)$ is a bounded polyhedron for an integrally convex function $f: \mathbb{Z}^{n} \to \mathbb{Z} \cup \{ +\infty \}$ and $x \in \domZ f$. The expression (\ref{FMp12n}) shows that there exists an integral vertex of $\subgR f(x)$. Indeed, we can choose the (finite) upper bound in (\ref{FMp12n}) for each $p_i$. It is emphasized, however, that not every vertex is an integral vector. \finbox \end{remark} \section{Concluding Remarks} \label{SCconclrem} The established biconjugacy implies that there is a one-to-one correspondence between the class $\calF_{\rm IC}$ of integer-valued integrally convex functions and the class of their integral conjugates $\calF\sp{\bullet}_{\rm IC} = \{ f\sp{\bullet} \mid f \in \calF_{\rm IC} \}$. By the conjugacy theorems related to L- and M-convex functions (see \cite[Fig.~8.1]{Mdcasiam}), the class $\calF\sp{\bullet}_{\rm IC}$ also contains separable convex, {\rm L}-convex, ${\rm L}^{\natural}$-convex, {\rm M}-convex, ${\rm M}^{\natural}$-convex, ${\rm L}^{\natural}_{2}$-convex, and ${\rm M}^{\natural}_{2}$-convex functions. A direct characterization of $\calF\sp{\bullet}_{\rm IC}$ is an interesting question and is left for the future. It will also be interesting to characterize its subclasses $\calF\sp{\bullet}_{\rm BS} = \{ f\sp{\bullet} \mid \mbox{$f$: integer-valued BS-convex} \}$ and $\calF\sp{\bullet}_{\rm UJ} = \{ f\sp{\bullet} \mid \mbox{$f$: integer-valued UJ-convex} \}$. \ \noindent {\bf Acknowledgement} The authors thank Satoru Fujishige and Hiroshi Hirai for helpful comments. This work was supported by CREST, JST, Grant Number JPMJCR14D2, Japan, and JSPS KAKENHI Grant Numbers 26280004, 16K00023.
\section{Introduction} In 1978, Caccetta and Häggkvist~\cite{CH78} made the following conjecture. \begin{conj}[Caccetta-Häggkvist] \label{conj:CH} For all positive integers $n,r$, every simple $n$-vertex digraph with minimum outdegree at least $r$ contains a directed cycle of length at most $\lceil \frac{n}{r} \rceil$. \end{conj} A digraph $D$ is \emph{simple} if for all $u,v \in V(D)$ there is at most one arc from $u$ to $v$. The Caccetta-Häggkvist conjecture has proven to be a notoriously difficult problem. For example, the case $r=\frac{n}{3}$ has received considerable attention~\cite{CH78, GSS92, bondy97, shen98, HHK07, razborov13, lichiardopol13, HKN17}, but still remains open. See Sullivan~\cite{sullivan06} for a summary of partial results. Although there has been a lot of progress on approximate versions, Conjecture~\ref{conj:CH} is known to hold exactly for only a few values of $r$. The case $r=2$ was actually proved by Caccetta and Häggkvist~\cite{CH78}. \begin{thm}[\cite{CH78}] \label{directed} Every simple $n$-vertex digraph with minimum outdegree at least $2$ contains a directed cycle of length at most $\lceil \frac{n}{2} \rceil$. \end{thm} The case $r=3$ was settled positively by Hamidoune~\cite{hamidoune87}, and $r \in \{4,5\}$ by Hoàng and Reed~\cite{HR87}. Given a graph $G$ and a colouring $c$ of $E(G)$, we say that a subgraph $H$ of $G$ is \emph{rainbow} if no two edges of $H$ are of the same colour. Aharoni (see~\cite{ADH19}) recently proposed the following strengthening of the Caccetta-Häggkvist conjecture. \begin{conj}[Aharoni] \label{conj:aharoni} Let $G$ be a simple $n$-vertex graph and $c$ be a colouring of $E(G)$ with $n$ colours, where each colour class has size at least $r$. Then $(G,c)$ contains a rainbow cycle of length at most $\lceil \frac{n}{r} \rceil$. \end{conj} In fact, we now show that the following weakening of Aharoni's conjecture implies the Caccetta-Häggkvist conjecture. 
\begin{conj} \label{conj:weakaharoni} Let $G$ be a simple $n$-vertex graph and $c$ be a colouring of $E(G)$ with $n$ colours, where each colour class has size at least $r$. Then $(G,c)$ contains a cycle $C$ of length at most $\lceil \frac{n}{r} \rceil$ such that no two incident edges of $C$ are the same colour. \end{conj} \begin{proof}[Proof of Conjecture~\ref{conj:CH}, assuming Conjecture~\ref{conj:weakaharoni}] Let $D$ be a simple digraph of order $n$ and minimum outdegree at least $r$. Let $G$ be the graph obtained from $D$ by forgetting the orientations of all arcs. Let $V(G)=[n]$ and colour $ij \in E(G)$ with colour $i$ if $(i,j) \in E(D)$. Clearly, this colouring uses $n$ colours. Moreover, since $D$ has minimum outdegree at least $r$, each colour class has size at least $r$. Therefore, by Conjecture~\ref{conj:weakaharoni}, $G$ contains a properly edge-coloured cycle $C$ of length at most $\lceil \frac{n}{r} \rceil$. Let $\vec{C}$ be the subdigraph of $D$ corresponding to $C$. We claim that $\vec{C}$ is a directed cycle. If not, then there exist $i,j,k \in V(\vec{C})$ such that $(j,i) \in E(\vec{C})$ and $(j,k) \in E(\vec{C})$. Thus, the two edges of $C$ incident to vertex $j$ are the same colour. This contradicts that $C$ is properly edge-coloured. \end{proof} Our main theorem is that Aharoni's conjecture holds for $r=2$. \begin{thm} \label{thm:main} Let $G$ be a simple $n$-vertex graph and $c$ be a colouring of $E(G)$ with $n$ colours, where each colour class has size at least $2$. Then $(G,c)$ contains a rainbow cycle of length at most $\lceil \frac{n}{2} \rceil$. \end{thm} The rest of the paper is organized as follows. In Section~\ref{sec:proof}, we prove our main theorem. We show that our bound is tight in Section~\ref{sec:tight}, and that there is a sharp increase in the `rainbow girth' as the number of colours decreases from $n$.
In Section~\ref{sec:matroids}, we show that the natural matroid generalization of Theorem~\ref{thm:main} holds for cographic matroids, but fails for binary matroids. We conclude with some open problems in Section~\ref{sec:open}. \section{Proof of the Main Theorem} \label{sec:proof} In this section, we prove Theorem~\ref{thm:main}. Before proceeding, we require some basic definitions. Let $G$ be a graph. A \emph{cut-vertex} of $G$ is a vertex $v$ such that $G-v$ has more connected components than $G$. A \emph{block} of $G$ is a maximal subgraph $H$ such that $H$ has no cut-vertices. A block is \emph{non-trivial} if it has at least three vertices. An \emph{ear-decomposition} of $G$ is a collection of subgraphs $\{H_0, H_1, \dots, H_k\}$ of $G$ satisfying $G=H_0 \cup H_1 \cup \dots \cup H_k$, $H_0$ is a cycle, and $H_i$ is a path such that $|V(H_i)| \geq 2$ and $V(H_i) \cap \bigcup_{j=0}^{i-1} V(H_j)$ is the set of ends of $H_i$ for all $i \in [k]$. It is well-known that a graph is $2$-connected if and only if it has an ear-decomposition. The paths $H_1, \dots, H_k$ are the \emph{ears} of the ear-decomposition. A \emph{theta} is a graph which has an ear-decomposition with exactly one ear. Given an edge-coloured graph $(G,c)$, a \emph{transversal} of $(G,c)$ is a subgraph $H$ of $G$ such that $V(H)=V(G)$ and $E(H)$ contains exactly one edge of each colour. In particular, a transversal is a rainbow subgraph (which may contain isolated vertices). \begin{proof}[Proof of Theorem~\ref{thm:main}] Suppose the theorem is false and let $(G,c)$ be a counterexample with $|E(G)|$ minimum. By minimality, each colour class contains exactly two edges. We claim that $G$ contains a vertex $v$ such that all edges incident to $v$ have different colours (note that an isolated vertex satisfies this vacuously). If not, then at each vertex, there is at least one colour that appears twice. Since there are only $n$ colours, at each vertex there is exactly one colour that appears twice.
For each vertex $v$, let $e_v$ and $f_v$ be the two edges incident to $v$ that have the same colour. For each $v$, we orient $e_v$ and $f_v$ away from $v$ and apply Theorem~\ref{directed} to find a directed cycle of length at most $\lceil \frac{n}{2} \rceil$. This corresponds to a rainbow cycle in $G$, which contradicts that $(G,c)$ is a counterexample. Let $H$ be an arbitrary transversal of $(G,c)$. Since $H$ has $n$ edges and $n$ vertices, it follows that $H$ contains at least one cycle and hence at least one non-trivial block. If $H$ contains two non-trivial blocks, then $H$ contains two rainbow cycles that meet in at most one vertex, and thus a cycle of length at most $\lceil \frac{n}{2} \rceil$. However, this would contradict that $(G,c)$ is a counterexample. Therefore, $H$ contains exactly one non-trivial block $B$. Suppose $B$ has an ear-decomposition with at least two ears. In this case, $|V(B)| \leq n-2$, since an ear-decomposition with at least two ears gives $|E(B)| \geq |V(B)|+2$, while $B$ contains at most $n$ edges. Moreover, $B$ contains a subgraph $B'$ which is either two cycles meeting in at most two vertices, or a subdivision of $K_4$. If the former holds, then $B'$ contains a cycle of length at most $\lfloor \frac{|V(B')|+2}{2} \rfloor \leq \lceil \frac{n}{2} \rceil$. If the latter holds, then $B'$ contains four cycles $C_1, \dots, C_4$ such that $\sum_{i \in [4]} |V(C_i)|=2|V(B')|+4 \leq 2n$. Thus, one of these four cycles has length at most $\lceil \frac{n}{2} \rceil$. Since $(G,c)$ is a counterexample, $B$ has an ear-decomposition with at most one ear. That is, $B$ is a cycle or a theta. It follows that every transversal of $(G,c)$ is either a connected graph with exactly one cycle, or the disjoint union of a tree and a graph containing a theta. \begin{clm} $(G,c)$ contains a rainbow theta. \end{clm} \begin{proof}[Subproof] Since $G$ contains a vertex $v$ such that all edges incident to $v$ have different colours, there is a transversal $H$ of $(G,c)$ such that $v$ is an isolated vertex in $H$.
It follows that the other component of $H$ contains a theta. In particular, $(G,c)$ contains a rainbow theta. \end{proof} The rest of the proof only uses the fact that $(G,c)$ contains a rainbow theta. Let $\theta$ be a rainbow theta in $(G,c)$ with $|V(\theta)|$ minimum. Let $P_1,P_2$ and $P_3$ be paths in $\theta$ such that $\theta=P_1 \cup P_2 \cup P_3$, $V(P_i) \cap V(P_j)=\{x,y\}$ for all $i \neq j$, and $|V(P_1)| \leq |V(P_2)| \leq |V(P_3)|$. \begin{clm} \label{clm:thetabound} $\theta$ contains a cycle of length at most $\lfloor \frac{2|V(\theta)|+2}{3} \rfloor$. Moreover, if $|V(\theta)|=3k+2$, then $\theta$ contains a cycle of length at most $2k+1$, unless $|V(P_1)| = |V(P_2)| = |V(P_3)|$. \end{clm} \begin{proof}[Subproof] $P_1 \cup P_2$ is a cycle of length at most $\lfloor \frac{2|V(\theta)|+2}{3} \rfloor$. Moreover, if $|V(\theta)|=3k+2$, then $P_1 \cup P_2$ is a cycle of length at most $2k+1$, unless $|V(P_1)| = |V(P_2)| = |V(P_3)|$. \end{proof} A \emph{chord} of $\theta$ is an edge $e \in E(G) \setminus E(\theta)$ such that both ends of $e$ are in $V(\theta)$. \begin{clm} \label{clm:chord} $\theta$ has at most two chords. \end{clm} \begin{proof}[Subproof] Note that $|V(\theta)| \leq n-1$, since $\theta$ is rainbow and thus contains at most $n$ edges. Let $e$ be a chord. First suppose $e$ has both endpoints on some $P_i$. Let $C_i$ be the unique cycle in $P_i \cup \{e\}$. Note that $C_i$ is rainbow; otherwise $(\theta \setminus E(C_i)) \cup \{e\}$ contradicts the minimality of $|V(\theta)|$. Therefore, $\theta \cup \{e\}$ contains two rainbow cycles meeting in at most two vertices. One of these two cycles has length at most $\lfloor \frac{|V(\theta)|+2}{2} \rfloor \leq \lfloor \frac{n+1}{2} \rfloor = \lceil \frac{n}{2} \rceil $. By symmetry, we may assume that the ends of $e$ are on $P_1$ and $P_2$. Suppose $e$ is coloured red.
If $P_1 \cup P_2$ does not contain a red edge, then $\theta \cup \{e\}$ contains rainbow cycles $C_1, \dots, C_4$ such that $\sum_{i \in [4]} |V(C_i)|=2|V(\theta)|+4 \leq 2n+2$. Thus, one of these cycles has length at most $\lceil \frac{n}{2} \rceil $. It follows that some edge $e'$ of $P_1 \cup P_2$ is also red. By the minimality of $|V(\theta)|$, this is only possible if $e$ and $e'$ are incident and one end of $e'$ is in $\{x,y\}$. If $\theta$ has at least three chords, then by symmetry and the pigeonhole principle, we may assume there exist chords $e_1$ and $e_2$ such that $e_1'$ and $e_2'$ are both incident to $x$. But now $(\theta \cup \{e_1, e_2\}) \setminus \{e_1', e_2'\}$ contains a rainbow theta with fewer vertices than $\theta$. \end{proof} Since $G \setminus E(\theta)$ contains a transversal, there is a rainbow cycle $C$ that is edge-disjoint from $\theta$. Let $V_1=V(\theta) \setminus V(C)$, $V_2=V(\theta) \cap V(C)$, and $V_3=V(C) \setminus V(\theta)$. Since $(G,c)$ is a counterexample, $|V(C)|=|V_2|+|V_3| \geq \lceil \frac{n}{2} \rceil +1 $. Let $t$ be the number of chords of $\theta$. Note that $t \leq 2$ by Claim~\ref{clm:chord}. Observe that if $a,b \in V_2$ and $ab \in E(C)$, then $ab$ is a chord of $\theta$. Therefore, $C[V_2]$ contains at most $t$ edges. Since each maximal segment of consecutive $V_2$-vertices along $C$ is followed by a vertex of $V_3$, it follows that $|V_2| \leq |V_3|+t$. Therefore, $|V_3| \geq \frac{ \lceil \frac{n}{2} \rceil +(1-t)}{2}$, and \[ |V(\theta)| \leq n-|V_3| \leq n-\frac{ \lceil \frac{n}{2} \rceil +(1-t)}{2} \leq n-\frac{ \lceil \frac{n}{2} \rceil -1}{2}, \] where the last inequality follows since $t \leq 2$. Combining the bound $|V(\theta)| \leq n-\frac{ \lceil \frac{n}{2} \rceil -1}{2}$ with Claim~\ref{clm:thetabound}, we are done unless $n \equiv 2 \pmod 4$ and all the above bounds are tight. In particular, $t=2$, $n=4k+2$, $|V_1|=2k$, $|V_2|=k+2$, $|V_3|=k$.
Moreover, by the second part of Claim~\ref{clm:thetabound}, each of $P_1, P_2$, and $P_3$ contains exactly $k+2$ vertices. Let $e$ be a chord of $\theta$ and $e'$ be the edge of $\theta$ of the same colour as $e$. By the second part of Claim~\ref{clm:thetabound}, $\theta':=(\theta \setminus \{e'\}) \cup \{e\}$ contains a cycle of length at most $2k+1=\frac{n}{2}$, as required. \end{proof} \section{Tightness of the Bound} \label{sec:tight} We now show that our bound is tight, and that there is a dramatic change of behaviour as we decrease the number of colours from $n$. To be precise, define the \emph{rainbow girth} of an edge-coloured graph $(G,c)$, denoted $\operatorname{rg} (G,c)$, to be the length of a shortest rainbow cycle in $(G,c)$. If $(G,c)$ does not contain a rainbow cycle, then $\operatorname{rg} (G,c)=\infty$. Let \[ f(n,t):=\max \{\operatorname{rg} (G,c) : \text{$|V(G)|=n, |E(G)|=2t$, each colour class of $c$ has size $2$}\}. \] \begin{thm} \label{thm:sharp} For all $n \geq 3$ and $t \leq n$, \[ \begin{cases} f(n, t)=\infty & \text{if $t \leq n-2$,} \\ f(n,t)=n-1 &\text{if $t=n-1$,} \\ f(n,t)= \lceil \frac{n}{2} \rceil &\text{if $t=n$}. \end{cases} \] \end{thm} \begin{proof} By Theorem~\ref{thm:main}, $f(n,n) \leq \lceil \frac{n}{2} \rceil$. For the corresponding lower bound, let $G$ be a graph with vertex set $\mathbb{Z} / n \mathbb{Z}$ and edges $i(i+1)$ and $i(i+2)$ for all $i \in V(G)$. Colour both $i(i+1)$ and $i(i+2)$ with colour $i$ for all $i \in V(G)$. See Figure~\ref{fig:tillgraph}. It is easy to check that the shortest rainbow cycle in this graph has length $\lceil \frac{n}{2} \rceil$. \begin{figure}[ht] \centering \includegraphics{TonyGraph.pdf} \caption{The shortest rainbow cycle has length $\lceil \frac{7}{2} \rceil=4$.} \label{fig:tillgraph} \end{figure} We now show $f(n, n-1)=n-1$. For the upper bound, let $G$ be a graph with $|V(G)|=n,|E(G)|=2n-2$, and let $c$ be a colouring of $E(G)$ such that each colour class has size $2$.
Since there are only $n-1$ colours, there is a vertex $v$ of $G$ such that all edges incident to $v$ are coloured differently. Therefore, there is a transversal $H$ of $(G,c)$ such that $v$ is an isolated vertex in $H$. Since $H-v$ contains $n-1$ vertices and $n-1$ edges, $H-v$ contains a cycle of length at most $n-1$. For the corresponding lower bound, let $W_n$ be the wheel graph on $n$ vertices. Let $c$ be a colouring of $E(W_n)$ such that each colour class is a path with two edges, one of which is incident to the hub vertex. See Figure~\ref{fig:badcolouring}. Observe that no rainbow cycle of $(W_n, c)$ can use the hub vertex. Therefore, the shortest rainbow cycle in $(W_n, c)$ has length $n-1$. \begin{figure} \centering \begin{tikzpicture}[scale=2.0,inner sep=2.0pt] \tikzstyle{vtx}=[circle,draw,thick,fill=black!10] \node[vtx] (1) at (0,0) {\tiny $1$}; \node[vtx] (2) at (2,0) {\tiny $2$}; \node[vtx] (3) at (2,2) {\tiny $3$}; \node[vtx] (4) at (0,2) {\tiny $4$}; \node[vtx] (0) at (1,1) {\tiny $0$}; \draw[thick] (1) -- node[fill=white,inner sep=1pt,midway]{\tiny $1$} (2); \draw[thick] (2) -- node[fill=white,inner sep=1pt,midway]{\tiny $2$} (3); \draw[thick] (3) -- node[fill=white,inner sep=1pt,midway]{\tiny $3$} (4); \draw[thick] (4) -- node[fill=white,inner sep=1pt,midway]{\tiny $4$} (1); \draw[thick] (1) -- node[fill=white,inner sep=1pt,midway]{\tiny $1$} (0); \draw[thick] (2) -- node[fill=white,inner sep=1pt,midway]{\tiny $2$} (0); \draw[thick] (3) -- node[fill=white,inner sep=1pt,midway]{\tiny $3$} (0); \draw[thick] (4) -- node[fill=white,inner sep=1pt,midway]{\tiny $4$} (0); \end{tikzpicture} \caption{A colouring $c$ of $E(W_5)$, whose shortest rainbow cycle has length $4$.} \label{fig:badcolouring} \end{figure} By deleting two edges of the same colour from $(W_n, c)$ we obtain a graph on $n$ vertices and $2(n-2)$ edges that does not contain a rainbow cycle. Therefore, $f(n, t)=\infty$, for all $t \leq n-2$.
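The constructions in this proof can be checked by brute force for small $n$. The sketch below (not part of the proof) enumerates rainbow edge subsets of the circulant construction in increasing size; since sizes are tried smallest first, the first rainbow $2$-regular subset found is necessarily a single cycle:

```python
from itertools import combinations

def rainbow_girth_circulant(n):
    """Rainbow girth of the f(n, n) construction: vertex set Z/nZ,
    edges i(i+1) and i(i+2), both coloured i."""
    edges = [(frozenset({i, (i + 1) % n}), i) for i in range(n)] + \
            [(frozenset({i, (i + 2) % n}), i) for i in range(n)]
    for k in range(3, 2 * n + 1):  # candidate cycle lengths, smallest first
        for sub in combinations(edges, k):
            if len({colour for _, colour in sub}) != k:
                continue  # repeated colour: not rainbow
            deg = {}
            for e, _ in sub:
                for v in e:
                    deg[v] = deg.get(v, 0) + 1
            # k edges on exactly k vertices, all of degree 2, form a disjoint
            # union of cycles; any proper sub-cycle would be rainbow and would
            # have been found at a smaller k, so this is a single cycle.
            if len(deg) == k and all(d == 2 for d in deg.values()):
                return k
    return None

print([rainbow_girth_circulant(n) for n in (5, 6, 7)])  # [3, 3, 4]
```

For $n=7$ this agrees with Figure~\ref{fig:tillgraph}: the rainbow girth is $\lceil \frac{7}{2} \rceil = 4$.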
\end{proof} We have determined $f(n,t)$ exactly for all $t \leq n$. What happens for $t > n$? The best general upper bound we can prove follows from a theorem of Bollobás and Szemerédi~\cite{BS02}. To state their result, we need some definitions. The \emph{girth} of a graph $G$, denoted $g(G)$, is the length of a shortest cycle in $G$. Define \[ g(n,k):=\max \{g(G): |V(G)|=n, |E(G)|-|V(G)|=k\}. \] Bollobás and Szemerédi prove the following. \begin{thm}[\cite{BS02}] \label{thm:BS} For all $n \geq 4$ and $k \geq 2$, \[ g(n,k) \leq \frac{2(n+k)}{3k} (\log k + \log \log k +4). \] \end{thm} As a corollary, we obtain the following. \begin{thm} \label{thm:rainbowgirth} For all $n \geq 4$ and $k \geq 2$, \[ f(n,n+k) \leq \frac{2(n+k)}{3k} ( \log k + \log \log k +4). \] \end{thm} \begin{proof} Let $G$ be a simple $n$-vertex graph, with $|E(G)|=2(n+k)$ and $c$ be a colouring of $E(G)$ where each colour class has size $2$. Let $H$ be a transversal of $(G,c)$. Note that $H$ has $n$ vertices and $n+k$ edges. By Theorem~\ref{thm:BS}, $H$ contains a cycle of length at most $\frac{2(n+k)}{3k} (\log k + \log \log k +4)$. Since this cycle is necessarily rainbow, we are done. \end{proof} \section{Matroid Generalizations} \label{sec:matroids} In this section, we consider matroid generalizations of Aharoni's conjecture. For the reader unfamiliar with matroids, we introduce all the necessary definitions now. Note that nothing beyond basic linear algebra will be required. For a more thorough introduction to matroids, we refer the reader to Oxley~\cite{oxley11}. 
A \emph{matroid} is a pair $M=(E, \mathcal C)$ where $E$ is a finite set, called the \emph{ground set} of $M$, and $\mathcal C$ is a collection of subsets of $E$, called \emph{circuits}, satisfying \begin{itemize} \item $\emptyset \notin \mathcal C$, \item if $C'$ is a proper subset of $C \in \mathcal C$, then $C' \notin \mathcal C$, \item if $C_1$ and $C_2$ are distinct members of $\mathcal C$ and $e \in C_1 \cap C_2$, then there exists $C_3 \in \mathcal C$ such that $C_3 \subseteq (C_1 \cup C_2) \setminus \{e\}$. \end{itemize} A set $I \subseteq E$ is \emph{independent} if it does not contain a circuit. The \emph{rank} of $X \subseteq E$ is the size of a largest independent set contained in $X$, and is denoted $r_M(X)$. The \emph{rank} of $M$ is $r(M):=r_M(E)$. A matroid is \emph{simple} if it does not contain any circuits of size $1$ or $2$. We now give examples of all the matroids that appear in this paper. Let $G$ be a graph. We will consider two different matroids with ground set $E(G)$. The circuits of the first matroid are the (edges of) cycles of $G$. This is the \emph{cycle matroid} of $G$, denoted $M(G)$. A matroid is \emph{graphic} if it is isomorphic to the cycle matroid of some graph. The second matroid is the dual of the cycle matroid of $G$. However, we will not define duality, opting instead to define this matroid directly. An \emph{edge-cut} of $G$ is a set of edges $C^*$ such that $G \setminus C^*$ has more connected components than $G$. A \emph{cocycle} is an inclusion-wise minimal edge-cut. The collection of cocycles of $G$ is also a matroid, called the \emph{cocycle matroid} of $G$, and is denoted $M(G)^*$. A matroid is \emph{cographic} if it is isomorphic to the cocycle matroid of some graph. Let $\mathbb F$ be a field. An \emph{$\mathbb F$-matrix} is a matrix with entries in $\mathbb F$. Let $A$ be an $\mathbb F$-matrix whose columns are labelled by a finite set $E$.
The \emph{column matroid} of $A$, denoted $M[A]$, is the matroid with ground set $E$ whose circuits correspond to the minimal (under inclusion) linearly dependent sets of columns of $A$. A matroid is \emph{representable over $\mathbb{F}$} if it is isomorphic to $M[A]$ for some $\mathbb{F}$-matrix $A$. A matroid is \emph{binary} if it is representable over the two-element field, and it is \emph{regular} if it is representable over every field. Finally, for integers $0 \leq k \leq n$, the \emph{uniform matroid} $U_{k,n}$ is the matroid with ground set $[n]$, whose circuits are all the subsets of $[n]$ of size $k+1$. An attractive feature of Aharoni's conjecture, as opposed to the Caccetta-Häggkvist conjecture, is that there is a natural matroid generalization. For example, the following is the matroid analogue of Theorem~\ref{thm:main}. \begin{conj} \label{conj:matroid} Let $M$ be a simple rank-$(n-1)$ matroid and $c$ be a colouring of $E(M)$ with $n$ colours, where each colour class has size at least $2$. Then $M$ contains a rainbow circuit of size at most $\lceil \frac{n}{2} \rceil$. \end{conj} Let $G$ be a simple, connected, $n$-vertex graph, and $r$ be the rank of $M(G)$. Note that $r$ is the number of edges in a spanning tree of $G$, and so $n-1=r$. Moreover, since the circuits of $M(G)$ are the cycles of $G$, Conjecture~\ref{conj:matroid} holds for graphic matroids by Theorem~\ref{thm:main}. Unfortunately, it is easy to see that Conjecture~\ref{conj:matroid} is false, since the uniform matroid $U_{n-1, m}$ does not contain \emph{any} circuits of size less than $n$. On the other hand, we now prove that Conjecture~\ref{conj:matroid} is true for cographic matroids. \begin{thm} \label{thm:cographic} Let $N$ be a simple rank-$(n-1)$ cographic matroid and $c$ be a colouring of $E(N)$ with $n$ colours, where each colour class has size at least $2$. Then $N$ contains a rainbow circuit of size at most $\lceil \frac{n}{2} \rceil$.
\end{thm} \begin{proof} Let $(N,c)$ be a counterexample with $|E(N)|$ minimum. By minimality, each colour class has size exactly $2$. Let $G$ be a graph such that $N=M(G)^*$. Let $G_1, \dots, G_k$ be the connected components of $G$, and $r_i:=r(M(G_i)^*)$ for each $i \in [k]$. Since every cocycle of $N$ is a cocycle in some $N_i:=M(G_i)^*$, it follows that \begin{equation} \label{additiverank} \sum_{i \in [k]} r_i=r(N)=n-1. \end{equation} First suppose there is some $j \in [k]$ such that $|E(G_j)| \geq 2(r_j+1)$. By merging colour classes we may assume that exactly $r_j+1$ colours appear in $E(G_j)$ and each of these colours appears at least twice in $E(G_j)$. By minimality, $G_j$ contains a rainbow cocycle $C^*$ of size at most $\lceil \frac{r_j+1}{2} \rceil \leq \lceil \frac{n}{2} \rceil$. By unmerging colours, $C^*$ is also a rainbow cocycle of $G$, so we are done. By (\ref{additiverank}), such an index $j$ exists unless $k=2$, $|E(G_1)|=2r_1+1$, and $|E(G_2)|=2r_2+1$. By (\ref{additiverank}) and symmetry, we may assume $r_1 \leq \lfloor \frac{n-1}{2} \rfloor$. Since $|E(G_1)|=2r_1+1$ and each colour appears at most twice in $E(G_1)$, there exists a rainbow set $A \subseteq E(G_1)$ such that $|A| = r_1+1$. Since $|A| > r_1$, $A$ contains a cocycle $C^*$ of $G$. Since $C^*$ is rainbow and $|C^*| \leq |A|= r_1+1 \leq \lfloor \frac{n-1}{2} \rfloor+1 = \lceil \frac{n}{2} \rceil$, we are done. Henceforth, we may assume that $G$ is connected. Let $N^*=M(G)$, the cycle matroid of $G$. We use the well-known fact that $r(N)+r(N^*)=|E(G)|=2n$. Therefore, since $N$ has rank $n-1$, $N^*$ has rank $n+1$. Since $G$ is connected, $|V(G)|=n+2$. For each vertex $v \in V(G)$, let $\delta_G(v)$ be the set of edges of $G$ incident to $v$. Since $\delta_G(v)$ is an edge-cut for each $v \in V(G)$ and $N$ is simple, $G$ has minimum degree at least $3$.
Moreover, since there are exactly $n$ colours and $n+2$ vertices, there are at least two distinct vertices $x$ and $y$ of $G$ such that $\delta_G(x)$ and $\delta_G(y)$ are both rainbow. If $\deg_G(x) \leq \lceil \frac{n}{2} \rceil$ or $\deg_G(y) \leq \lceil \frac{n}{2} \rceil$, then $\delta_G(x)$ or $\delta_G(y)$ contains a rainbow cocycle of size at most $\lceil \frac{n}{2} \rceil$. Thus, $\deg_G(x), \deg_G(y) \geq \lceil \frac{n}{2} \rceil+1$. Since $4n=2|E(G)|=\sum_{v \in V(G)} \deg_G(v)$, it follows that $\sum_{v \in V(G) \setminus \{x,y\}} \deg_G(v) \leq 3n-2$. Therefore, some vertex $z \in V(G) \setminus \{x,y\}$ has degree at most $2$, which contradicts that $G$ has minimum degree at least $3$. \end{proof} We finish this section by giving an infinite family of binary matroids for which Conjecture~\ref{conj:matroid} fails. \begin{thm} For each even integer $n \geq 6$, there exists a simple rank-$(n-1)$ binary matroid $M$ on $2n$ elements, and a colouring $c$ of $E(M)$ where each colour class has size $2$, such that all rainbow circuits of $(M,c)$ have size strictly greater than $\frac{n}{2}$. \end{thm} \begin{proof} Let $n \geq 6$ be even. For each $i \in [n-1]$, let $\mathbf{e}_i$ be the $i$th standard basis vector in $\mathbb{F}_2^{n-1}$. Let $\mathbf 0$ and $\mathbf 1$ be the all-zeros and all-ones vectors in $\mathbb{F}_2^{n-1}$, respectively. Let $M$ be the binary matroid represented by the following $2n$ vectors $\mathcal V$. \begin{itemize} \item $\mathbf{e}_i$, for all $i \in [n-1]$; \item $\mathbf{e}_i+ \mathbf{e}_{i+1}$, for all $i \in [n-2]$; \item $\mathbf 1$, $\mathbf{1} + \mathbf{e}_{n-2}$, and $\mathbf{e}_1+ \mathbf{e}_{n-2}$. \end{itemize} Since $\{ \mathbf{e}_i \mid i \in [n-1]\} $ are linearly independent, $M$ has rank $n-1$. Moreover, all vectors in $\mathcal V$ are distinct and non-zero, so $M$ is simple. We now specify the colouring, which is just a pairing of $\mathcal V$.
For each $i \in [n-3]$, we pair $\mathbf{e}_i$ with $\mathbf{e}_i + \mathbf{e}_{i+1}$. Finally, we pair $\mathbf{e}_{n-2}$ with $\mathbf{e}_1+ \mathbf{e}_{n-2}$; $\mathbf{e}_{n-1}$ with $\mathbf{e}_{n-2}+\mathbf{e}_{n-1}$; and $\mathbf 1$ with $\mathbf{1} + \mathbf{e}_{n-2}$. To illustrate, the case $n=6$ is given by the following matrix, where column $i$ and column $6+i$ are the same colour for all $i \in [6]$. \[ \left( \begin{array}{@{}*{12}{c}@{}} \textcolor{red}{1} & \textcolor{blue}{0} & \textcolor{green}{0} & \textcolor{magenta}{0} & \textcolor{purple}{0} & \textcolor{cyan}{1} & \textcolor{red}{1} & \textcolor{blue}{0} & \textcolor{green}{0} & \textcolor{magenta}{1} & \textcolor{purple}{0} & \textcolor{cyan}{1} \\ \textcolor{red}{0} & \textcolor{blue}{1} & \textcolor{green}{0} & \textcolor{magenta}{0} & \textcolor{purple}{0} & \textcolor{cyan}{1} & \textcolor{red}{1} & \textcolor{blue}{1} & \textcolor{green}{0} & \textcolor{magenta}{0} & \textcolor{purple}{0} & \textcolor{cyan}{1} \\ \textcolor{red}{0} & \textcolor{blue}{0} & \textcolor{green}{1} & \textcolor{magenta}{0} & \textcolor{purple}{0} & \textcolor{cyan}{1} & \textcolor{red}{0} & \textcolor{blue}{1} & \textcolor{green}{1} & \textcolor{magenta}{0} & \textcolor{purple}{0} & \textcolor{cyan}{1} \\ \textcolor{red}{0} & \textcolor{blue}{0} & \textcolor{green}{0} & \textcolor{magenta}{1} & \textcolor{purple}{0} & \textcolor{cyan}{1} & \textcolor{red}{0} & \textcolor{blue}{0} & \textcolor{green}{1} & \textcolor{magenta}{1} & \textcolor{purple}{1} & \textcolor{cyan}{0} \\ \textcolor{red}{0} & \textcolor{blue}{0} & \textcolor{green}{0} & \textcolor{magenta}{0} & \textcolor{purple}{1} & \textcolor{cyan}{1} & \textcolor{red}{0} & \textcolor{blue}{0} & \textcolor{green}{0} & \textcolor{magenta}{0} & \textcolor{purple}{1} & \textcolor{cyan}{1} \end{array} \right)\] Note that a subset of $\mathcal V$ is linearly dependent if and only if it sums to $\mathbf 0$. 
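As an independent sanity check of the $n=6$ case, one can verify by brute force that no rainbow subset of the twelve columns of size at most $\frac{n}{2}=3$ sums to $\mathbf 0$ over $\mathbb{F}_2$. The sketch below encodes each column as a bitmask (bit $i$ is coordinate $i+1$; this encoding is ours, not part of the proof), with columns $j$ and $j+6$ sharing colour $j$:

```python
from itertools import combinations

# Columns of the n = 6 matrix over F_2 as bitmasks: e1..e5, the all-ones
# vector, then e1+e2, e2+e3, e3+e4, e1+e4, e4+e5, and 1+e4.
cols = [1, 2, 4, 8, 16, 31, 3, 6, 12, 9, 24, 23]

def min_rainbow_dependent_size():
    """Smallest rainbow subset of columns summing to 0 over F_2
    (equivalently, the size of a smallest rainbow circuit of M)."""
    for k in range(1, 13):
        for idx in combinations(range(12), k):
            if len({i % 6 for i in idx}) != k:
                continue  # two columns of the same colour: not rainbow
            total = 0
            for i in idx:
                total ^= cols[i]  # addition over F_2 is XOR
            if total == 0:
                return k
    return None

print(min_rainbow_dependent_size())  # 4, which exceeds n/2 = 3
```

The minimum $4$ is attained, e.g., by $\{\mathbf e_1,\, \mathbf e_2+\mathbf e_3,\, \mathbf e_4+\mathbf e_5,\, \mathbf 1\}$, whose members sum to $\mathbf 0$ and carry four distinct colours.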
Therefore, it suffices to prove that every rainbow subset of $\mathcal V$ summing to $\mathbf 0$ has size more than $\frac{n}{2}$. Let $\mathcal C \subseteq \mathcal V$ be a rainbow set such that $\sum_{v \in \mathcal C} v = \mathbf 0$. We first consider the case $\mathbf 1 \in \mathcal{C}$ or $\mathbf{1} + \mathbf{e}_{n-2} \in \mathcal{C}$. Since $\mathbf 1$ and $\mathbf{1} + \mathbf{e}_{n-2}$ are the same colour, exactly one of them, which we call $x$, is in $\mathcal C$. Since $\mathbf{e}_{n-1}$ and $\mathbf{e}_{n-2}+\mathbf{e}_{n-1}$ are the only other vectors in $\mathcal{V}$ that are non-zero in their $(n-1)$th coordinate, exactly one of them, which we call $y$, is in $\mathcal{C}$. Let $\mathcal C' = \mathcal C \setminus \{x,y\}$. In all four cases, $\sum_{v \in \mathcal C'} v$ is $\mathbf 1 + \mathbf{e}_{n-1}$ or $\mathbf 1 + \mathbf{e}_{n-1}+\mathbf{e}_{n-2}$. Since $n$ is even, and all vectors in $\mathcal{C}'$ have support at most $2$, $|\mathcal C '| \geq \frac{n}{2}-1$. Therefore, $|\mathcal{C}| > \frac{n}{2}$. The remaining case is $\mathbf 1 \notin \mathcal{C}$ and $\mathbf{1} + \mathbf{e}_{n-2} \notin \mathcal C$. The only other vectors in $\mathcal V$ whose $(n-1)$th coordinate is non-zero are $\mathbf{e}_{n-1}$ and $\mathbf{e}_{n-2}+\mathbf{e}_{n-1}$. Since $\mathbf{e}_{n-1}$ and $\mathbf{e}_{n-2}+\mathbf{e}_{n-1}$ are the same colour, they cannot both be in $\mathcal{C}$. Therefore, neither is in $\mathcal{C}$. Now, since $\mathbf{e}_{n-2}$ and $\mathbf{e}_1 +\mathbf{e}_{n-2}$ are the same colour and the only other vectors in $\mathcal V$ whose $(n-2)$th coordinate is non-zero, we conclude that neither $\mathbf{e}_{n-2}$ nor $\mathbf{e}_1 +\mathbf{e}_{n-2}$ is in $\mathcal C$. Repeating the same argument for each of the pairs $\{\mathbf{e}_i, \mathbf{e}_{i}+\mathbf{e}_{i+1}\}$ for $i=1, \dots, n-3$ (in that order), we conclude that $\mathcal C=\emptyset$, which is a contradiction.
\end{proof} A slight modification of the above construction also yields counterexamples for all odd integers $n \geq 7$. On the other hand, it is fairly easy to show that Conjecture~\ref{conj:matroid} holds for binary matroids when $n \leq 5$ (it is true vacuously when $n \leq 4$). Thus, Conjecture~\ref{conj:matroid} holds for binary matroids if and only if $n \leq 5$. \section{Open Problems} \label{sec:open} Note that in proving Theorem~\ref{thm:rainbowgirth}, we only use one fixed transversal. By considering multiple transversals, we suspect that the bound in Theorem~\ref{thm:rainbowgirth} can be improved. \begin{prob} Determine $f(n,t)$ for $t > n$. \end{prob} Since the Caccetta-Häggkvist conjecture is known to hold for $r \in \{3,4,5\}$, another possible direction is to prove Aharoni's conjecture for $r \in \{3,4,5\}$. \begin{prob} Prove that Conjecture~\ref{conj:aharoni} (or Conjecture~\ref{conj:weakaharoni}) holds for $r \in \{3,4,5\}$. \end{prob} Recall that our proof of Aharoni's conjecture for $r=2$ uses Theorem~\ref{directed} as a blackbox. It would be interesting to find a proof of Theorem~\ref{thm:main} that avoids using Theorem~\ref{directed}. Finally, by Theorems~\ref{thm:main} and~\ref{thm:cographic}, Conjecture~\ref{conj:matroid} holds for both graphic and cographic matroids. Therefore, we suspect there is a proof of Conjecture~\ref{conj:matroid} for regular matroids via Seymour's regular matroid decomposition theorem~\cite{Seymour80}. \begin{conj} \label{conj:regular} Let $M$ be a simple rank-$(n-1)$ regular matroid and $c$ be a colouring of $E(M)$ with $n$ colours, where each colour class has size at least $2$. Then $M$ contains a rainbow circuit of size at most $\lceil \frac{n}{2} \rceil$. \end{conj} \subsection*{Acknowledgements.} We would like to thank Ron Aharoni for bringing Conjecture~\ref{conj:aharoni} to our attention. We also thank Tillmann Miltzow for help in making Figure~\ref{fig:tillgraph}. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} \vspace{-2mm} Accurate building maps play an important role in a wide range of applications, such as urban planning and 3D city modeling. Nowadays, the increasingly available remote sensing (RS) images with very high resolution (VHR), up to half a meter, provide abundant data sources for generating such accurate building maps. However, manually delineating buildings from the huge volume of VHR-RS images is infeasible; hence, there is an urgent demand for automatic approaches to detecting buildings in VHR-RS images. Over the past years, many studies have been devoted to automatic building detection, e.g.,~\cite{zha_use_2003-1,pesaresi_robust_2008,shao_basi:_2014,sirmacek_urban_2009,sirmacek_urban_2010,huang_mbi_2012,saito_multiple_2016}. Among them, one main stream exploits the discriminative properties of buildings in RS images, e.g., from the aspects of spectrum~\cite{zha_use_2003-1}, texture~\cite{pesaresi_robust_2008,shao_basi:_2014} and local structural or morphological features~\cite{sirmacek_urban_2009,sirmacek_urban_2010,huang_mbi_2012}. These methods perform well on detecting buildings in mid-/high-resolution RS images, but their effectiveness drops dramatically on RS images of half-meter resolution, largely because textural and spectral information lacks the discriminative power to distinguish buildings in VHR-RS images. Moreover, most of these approaches are incapable of providing accurate boundaries of buildings, which are particularly desirable for precise building mapping. Another stream of state-of-the-art building detection approaches attempts to detect buildings by learning an off-the-shelf parameterized model, e.g., convolutional neural networks (CNNs), from manually labeled samples~\cite{saito_multiple_2016,zuo_hf-fcn:_2016}.
Despite the high performance of learning-based methods, especially those based on CNNs~\cite{zuo_hf-fcn:_2016}, their performance relies heavily on a considerable amount of well-annotated training samples, and thus they have very limited generalization capability beyond the training domain. This paper presents a new method for accurately detecting buildings in VHR-RS images by computing the geometric saliency of building structures. Our work is inspired by the observation that, in VHR-RS images, buildings are more distinguishable in geometry (both local and global) than in texture or spectrum. More precisely, we first represent VHR-RS images with a mid-level geometrical representation, by exploiting junctions that locally depict anisotropic geometrical structures of images. We then derive the saliency of geometric structures on buildings by considering both the probability of each junction, which measures its saliency relative to its surroundings, and the relationships between junctions. This stage encodes both local and semi-global geometric saliency of buildings in images. Finally, the geometric building index (GBI) of the whole image is computed by integrating the derived geometric saliency. In contrast to existing building indexes, e.g.~\cite{zha_use_2003-1,pesaresi_robust_2008,shao_basi:_2014,sirmacek_urban_2009,sirmacek_urban_2010,huang_mbi_2012}, our method yields fewer redundant non-building areas and can provide accurate contours of buildings, thanks to the geometric saliency computed from a mid-level geometrical representation. As we shall see in Section \ref{sec:experiment}, our method achieves the state-of-the-art performance\footnote{All results are available at \url{http://captain.whu.edu.cn/project/geosay.html}.} on building detection and meanwhile shows promising generalization power across different datasets, especially in comparison with learning-based approaches~\cite{zuo_hf-fcn:_2016}.
\vspace{-3mm} \section{Methodology} \label{sec:index} \begin{figure*}[htb!] \centering \includegraphics[width = 0.95\linewidth]{images/run-example.png} \vspace{-3mm} \caption{A running example on building detection with geometric saliency. } \vspace{-3mm} \label{fig:junc_detail} \end{figure*} \vspace{-3mm} \subsection{A mid-level geometric representation of images} Let $u: \Omega \mapsto \mathbb{Z}_+^L$ denote an $L$-channel VHR-RS image defined on the image grid $\Omega$. For imagery in panchromatic format, all the geometrical information is contained in the single-channel image $u$. For a multi-spectral image $u = \{u_1, u_2, \ldots, u_L\}$, the main geometrical structures of the image can be computed from its $p$-energy image $U = (\sum_{i=1}^L u_i^p)^{1/p}$ or from its first PCA component~\cite{xia-stucture-2010}. In this work, we concentrate on satellite images with (R,G,B) channels, so the analysis of geometrical information is based on the luminance channel, with $p=1$. This work uses a mid-level geometric representation of VHR-RS images~\cite{Xue2017Anisotropic}. For an image $U$, let $ \mathcal{J} $ denote all detected junctions, where each junction $\jmath \in \mathcal{J}$ is encoded as $\jmath : \{\mathbf{p}, \{\theta_i, s_i\}_{i=1}^{N}, \rho \}$. $\mathbf{p} =(x, y) \in \Omega$ is the location of $\jmath$. $\{\theta_i\}_{i=1}^{N}$ and $\{s_i\}_{i=1}^{N}$ are the orientations and lengths of its $N$ branches, respectively. $0 \leq \rho \leq 1$ is the significance of junction $\jmath$, measured by its {\it number of false alarms}. These junctions can be well detected by the anisotropic-scale junction (ASJ) detector. An example of detected junctions is displayed in Fig. \ref{fig:junc_detail}~(b). Observe that the mid-level geometric description $\mathcal{J}$ includes junctions with different numbers of branches, e.g., $L$-, $Y$-, and $X$-junctions with $2, 3$ and $4$ branches, respectively.
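To make the encoding concrete, here is a minimal sketch of a junction record together with the branch-endpoint computation $\mathbf{q}_i = \mathbf{p} + s_i \cdot (\cos \theta_i, \sin \theta_i)^\top$ (the field names are illustrative and do not reflect the ASJ detector's actual output format):

```python
import math
from dataclasses import dataclass

@dataclass
class Junction:
    p: tuple        # location (x, y) on the image grid
    branches: list  # list of (theta_i, s_i): orientation and length pairs
    rho: float      # significance in [0, 1], from the number of false alarms

def branch_endpoint(j, i):
    """Endpoint q_i = p + s_i * (cos theta_i, sin theta_i) of branch i."""
    theta, s = j.branches[i]
    return (j.p[0] + s * math.cos(theta), j.p[1] + s * math.sin(theta))

# A Y-junction (N = 3 branches) at the origin.
j = Junction(p=(0.0, 0.0),
             branches=[(0.0, 1.0), (math.pi / 2, 2.0), (math.pi, 1.5)],
             rho=0.1)
print(branch_endpoint(j, 1))  # approximately (0.0, 2.0)
```

Pairs of such branch endpoints give the $L$-junction centers $\mathbf{c} = \frac{\mathbf{q}_1 + \mathbf{q}_2}{2}$ used in the decomposition described in the text.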
Empirical studies show that, on buildings, junctions with more than $3$ branches are rare; we therefore decompose all junctions into $L$-junctions to develop a building-centric geometric representation. Thus, we rewrite the junction format as $$ \jmath : \{\mathbf{c}, \vec \nu_1, \vec \nu_2, \rho \}, $$ where $\vec \nu_1, \vec \nu_2$ are the two branches of the $L$-junction and $\vec \nu_i = \overrightarrow{\mathbf{p}\mathbf{q}_i}$ with $\mathbf{q}_i = \mathbf{p} + s_i \cdot (\cos \theta_i, \sin \theta_i)^\top$ for $i=1,2$. $\mathbf{c}$ is the center of the L-junction $\jmath$, with $\mathbf{c} = \frac{\mathbf{q}_1 + \mathbf{q}_2}{2}$. The significance $\rho$ is inherited from the original junction. Fig. \ref{fig:junc_detail}~(c) displays all the L-junctions, illustrating their centers with red dots. \vspace{-3mm} \subsection{Computing geometric saliency in VHR-RS images} In order to detect buildings, we need to derive a geometric saliency from the mid-level geometric representation of VHR-RS images, so as to highlight geometric features inside buildings and suppress those outside buildings. To this end, we use the significance of both single geometrical primitives and pairs of junctions. \vspace{1mm} \emph{\bf First-order geometric saliency $\omega^{(1)}$} : For an image $u$, the significance $\rho$ of each junction $\jmath$ detected by the ASJ detector indicates the reliability of the junction $\jmath$ appearing in $u$. The smaller $\rho$ is, the more salient the detected junction. In addition, note that all detected junctions $\mathcal{J}$ can be divided into two subsets, {\it i.e.,} $\mathcal{J} = \mathcal{J}_{B} \cup \mathcal{J}_{\bar B}$, with $\mathcal{J}_B$ inside buildings and $\mathcal{J}_{\bar B}$ outside buildings.
Given a junction $\jmath$ with parameters $\Theta_\jmath \doteq \{\mathbf{c}, \vec \nu_1, \vec \nu_2, \rho \} $, the posterior probability $\mathbb{P}(\jmath \in \mathcal{J}_B \, | \, \Theta_\jmath)$, measuring the possibility that a junction $\jmath$ parameterized by $\Theta_\jmath$ lies inside a building, is derived by $$ \mathbb{P}(\jmath \in \mathcal{J}_B \, | \, \Theta_\jmath) = \frac{\mathbb{P}( \Theta_\jmath \,| \, \mathcal{J}_B ) \mathbb{P}(\mathcal{J}_B)}{\mathbb{P}( \Theta_\jmath \,| \, \mathcal{J}_B ) \mathbb{P}(\mathcal{J}_B) + \mathbb{P}( \Theta_\jmath \,| \, \mathcal{J}_{\bar B} ) \mathbb{P}(\mathcal{J}_{\bar B})}, \label{eq:jbs-jtheta} $$ where the prior probabilities $\mathbb{P}(\mathcal{J}_B)$ and $\mathbb{P}(\mathcal{J}_{\bar B})$ and the likelihoods $\mathbb{P}( \Theta_\jmath \,| \, \mathcal{J}_B )$ and $\mathbb{P}( \Theta_\jmath \,| \, \mathcal{J}_{\bar B})$ can be estimated from a given dataset of buildings, e.g., the Spacenet-65 dataset, as we shall see in Section \ref{sec:experiment}. Thus, the first-order geometric saliency of a junction $\jmath$ can be computed as \begin{align} \omega_\jmath^{(1)} = (1- \rho_\jmath) \cdot \mathbb{P}(\jmath \in \mathcal{J}_B \, | \, \Theta_\jmath). \end{align} \emph{\bf Pairwise geometric saliency $\omega^{(2)}$} : When many junction centers lie close to each other in a region, the probability that a building exists there (the building saliency) is higher. Thus, pair-wise relationships of junctions are useful cues for deriving geometric saliency. In contrast with first-order saliency, pair-wise saliency encodes more global geometric information in images. Here, we use nearest neighbors to compute pair-wise saliency.
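Once the first-order weights are available, the neighbour-based aggregation can be sketched as follows (a simplified illustration, assuming junction centers and first-order saliencies are precomputed; $\tau$ is the neighbourhood radius):

```python
import math

def pairwise_saliency(centers, w1, tau):
    """For each junction, average the exp(-dist / tau)-weighted first-order
    saliency of all junctions whose centers lie within distance tau."""
    w2 = []
    for i, ci in enumerate(centers):
        nbrs = [j for j, cj in enumerate(centers)
                if j != i and math.dist(ci, cj) < tau]
        if not nbrs:
            w2.append(0.0)  # isolated junction: no pairwise support
            continue
        total = sum(math.exp(-math.dist(ci, centers[j]) / tau) * w1[j]
                    for j in nbrs)
        w2.append(total / len(nbrs))
    return w2

# Two nearby junctions reinforce each other; the far one gets no support.
centers = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
w1 = [0.8, 0.6, 0.9]
print(pairwise_saliency(centers, w1, tau=2.0))
```

Junctions with close neighbours thus inherit saliency from them, which is what makes clusters of building corners stand out against isolated background junctions.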
For a junction $\jmath$, its $\tau$-{\it nearest neighbors} ($\tau$-NN), denoted by $\mathcal{N}_\jmath$, are defined as the set of junctions satisfying $$ \| \mathbf{c}_\jmath - \mathbf{c}_{\jmath'} \|_2 < \tau, \, \forall \jmath' \in \mathcal{N}_\jmath, \label{eq:distance-constraint} $$ where $\tau$ represents the minimal length of branches of the junction $\jmath$. An example of the $\tau$-NN graph for junctions is displayed in Fig. \ref{fig:junc_detail}~(c). Thus, the pair-wise geometric saliency of a junction $\jmath$ is defined as below, where $|\mathcal{N}_\jmath|$ denotes the number of neighbors: \begin{equation} \omega_\jmath^{(2)} = \frac{1}{|\mathcal{N}_\jmath|}\sum_{\jmath' \in \mathcal{N}_\jmath} e^{-\tau^{-1} \cdot \| \mathbf{c}_\jmath - \mathbf{c}_{\jmath'} \|_2} \cdot \omega_{\jmath'}^{(1)}. \label{eq:junction-index-add-neighbor} \end{equation} The geometric saliency of a VHR-RS image can thus be computed by summing $\omega_\jmath^{(1)}$ and $\omega_\jmath^{(2)}$ over all junctions; an example is shown in Fig. \ref{fig:junc_detail}~(d). \vspace{-3mm} \subsection{Geometric building index and building detection} \label{sec:gbi} Note that, given an $L$-junction $\jmath : \{\mathbf{c}, \vec \nu_1, \vec \nu_2, \rho \}$, the two branches $\vec \nu_1, \vec \nu_2$ uniquely determine a parallelogram $R_\jmath$. Our {\it geometric building index} (GBI) associates each pixel $\mathbf{p}$ with a saliency measuring the possibility that the pixel belongs to a building, obtained by summing the saliency of all junctions whose parallelograms contain the pixel.
Thus, for a pixel $\mathbf{p} \in \Omega$ in $U$, its corresponding GBI is calculated by: \begin{equation} \textrm{GBI}(\mathbf{p}) = \sum_{\jmath \in \mathcal{J}} \big( \omega_\jmath^{(1)} + \omega_\jmath^{(2)} \big) \cdot \,\mathbbm{1}_{\mathbf{p} \in R_\jmath}, \label{eq:naive-gbi} \end{equation} where $\mathcal{J}$ is the set of junctions detected by the ASJ detector in image $U$, and $\mathbbm{1}_{\mathbf{p} \in R_\jmath}$ is an indicator function, which equals $1$ if the pixel $\mathbf{p}$ is inside the parallelogram $R_\jmath$ of junction $\jmath$ and $0$ otherwise. An illustration of the proposed GBI is shown in Fig. \ref{fig:junc_detail}~(e). For the image shown in Fig. \ref{fig:junc_detail}~(a), we simply threshold the computed GBI with its arithmetic average to finally generate the building map, as shown in Fig. \ref{fig:junc_detail}~(f). \vspace{-2mm} \section{Experiments and Discussions} \label{sec:experiment} This section evaluates the proposed method and compares it with state-of-the-art methods~\cite{shao_basi:_2014,huang_mbi_2012,liu2013perception,zuo_hf-fcn:_2016} on three public datasets that are used for validating building detection algorithms. The datasets are: \begin{itemize} \vspace{-2mm} \item[-] \emph{Spacenet-65 Dataset}\footnote{\url{https://amazonaws-china.com/cn/public-datasets/spacenet/}} consists of $65$ images of $2000 \times 2000$ pixels extracted from WorldView-2 satellite imagery, with a spatial resolution of $0.5$ meter. This dataset covers buildings in both urban and rural areas. \vspace{-2mm} \item[-] \emph{Potsdam Dataset}\footnote{\url{http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-potsdam.html}} contains $214$ images of $2000 \times 2000$ pixels with a spatial resolution of $0.05$ meter. Buildings in Potsdam appear large and dispersed, owing to the very high resolution.
\vspace{-2mm} \item[-] \emph{Massachusetts Buildings Dataset}\footnote{\url{https://www.cs.toronto.edu/~vmnih/data/}} contains $10$ images of $1500 \times 1500$ pixels in the test subset with a spatial resolution of $1$ meter. \vspace{-2mm} \end{itemize} To demonstrate the effectiveness of using geometric saliency in building detection, we compare our method with several state-of-the-art methods, including texture-based BASI~\cite{shao_basi:_2014}, morphology-based MBI~\cite{huang_mbi_2012}, local geometry-based PBI~\cite{liu2013perception} and learning-based HF-FCN~\cite{zuo_hf-fcn:_2016}. Note that BASI and PBI are designed for built-up area detection, while the others aim to detect the accurate shape of buildings. For HF-FCN, we directly use the model provided by the authors. For quantitative evaluation, as in~\cite{zuo_hf-fcn:_2016,automated2013}, the {\it mean Average Precision (mAP)} and {\it F-score} (also known as {\it F-measure}) are employed to measure the accuracy of detection. \vspace{-3mm} \subsection{Results and analysis} All results on the three datasets and detailed comparisons with different methods are available at \url{http://captain.whu.edu.cn/project/geosay.html}. Table \ref{table:map-f-datasets} reports the mAP and F-score of the different building detection methods. Note that the proposed GBI achieves the best performance in both mAP and F-score on the {\it Spacenet-65} and {\it Potsdam} datasets, i.e., in the cases without training. When training samples are available, i.e., on the \emph{Massachusetts} dataset, HF-FCN outperforms all the other methods, since the model is fully trained on that dataset. However, the model severely overfits: it loses most of its effectiveness on the {\it Spacenet-65} and {\it Potsdam} datasets, achieving very low mAP ($0.04$ and $0.03$) and F-score ($0.12$ and $0.10$). This calls into question the generalization capability of learning-based methods.
By contrast, although the prior probabilities of junctions are estimated from the {\it Spacenet-65} dataset, the high performance on both the {\it Potsdam} and {\it Massachusetts} datasets indicates the strong generalization ability of our method. Even under a significant change of resolution (from $0.5$m to $0.05$m), our method still outperforms the others. \begin{table}[htb!] \footnotesize \vspace{-3mm} \caption{Comparisons of different building detection methods in \textit{mAP} and \textit{F-score}. Note that our method outperforms the others in the cases without training.} \vspace{-4mm} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Method}& \multicolumn{2}{c|}{Spacenet-65}&\multicolumn{2}{c|}{Massachusetts}&\multicolumn{2}{c}{Potsdam} \cr\cline{2-7}&mAP&F-score &mAP&F-score &mAP&F-score \cr \hline BASI~\cite{shao_basi:_2014} & 0.34 & 0.44 & 0.32 & 0.40 & 0.34 & 0.44\\ MBI~\cite{huang_mbi_2012} & 0.28 & 0.35 & 0.28 & 0.38 & 0.17 & 0.35\\ PBI~\cite{liu2013perception} & 0.27 & 0.37 & 0.25 & 0.36 & 0.41 & 0.50\\ HF-FCN~\cite{zuo_hf-fcn:_2016} & 0.04 & 0.12 & \textbf{0.57} & \textbf{0.74} & 0.03 & 0.10\\ GBI (ours) & \textbf{0.46} & \textbf{0.52} & 0.37 & 0.44 & \textbf{0.46} & \textbf{0.59}\\ \hline \end{tabular} \end{center} \label{table:map-f-datasets} \vspace{-4mm} \end{table} \begin{figure*}[htb!] \vspace{-3mm} \centering \includegraphics[width = 0.92\linewidth]{images/result2.png} \vspace{-4mm} \caption{Building detection results on sample images. On the first image, which contains many roads and empty areas, our method highlights the buildings, while the compared methods detect many redundant non-building areas. On the second image, BASI performs poorly due to the lack of texture, and MBI produces many failures because of the low contrast between buildings and the background. More results are available at \url{http://captain.whu.edu.cn/project/geosay.html}.
} \label{fig:visual_result} \vspace{-3mm} \end{figure*} Fig.~\ref{fig:visual_result} illustrates building detection results on sample images. The first image shows a case where buildings are distributed dispersively and many non-building objects exist. The two built-up area detection methods, PBI and BASI, extract not only the buildings but also their surroundings, and produce many failures. The texture-based BASI yields numerous false detections in textured regions such as roads and forests, while the local geometry-based PBI yields many false detections around buildings. Other methods such as MBI face a similar problem, confusing rural roads with buildings. These observations suggest that the building indexes defined by these methods are not well suited to describing buildings in VHR-RS images. For the second image, BASI misses many parts of the three highlighted buildings with low-texture roofs, which indicates that texture-based methods are ill-suited to buildings with little texture. MBI detects most of the buildings but fails to extract the whole shape of the central building due to the imbalanced luminance of its roof. By contrast, such cases do not hamper our method, since junctions are located at the corners of buildings regardless of their texture or luminance. \vspace{-4mm} \subsection{Discussions} The proposed GBI is based on geometric saliency in VHR-RS images, requires no annotated training samples, and is capable of preserving the whole geometric shape of buildings with high performance. Such results are promising for mapping buildings in VHR-RS images. One limitation of the GBI is that, in VHR-RS images, some other man-made architectures or objects (e.g., cars) may also exhibit salient geometric structures, which may lead to false detections.
To address these problems, prior information in the images, such as the ratio between object size and image resolution, can be used to suppress false alarms. In addition, it is also of great interest to incorporate different kinds of information to improve the detection accuracy of the position and complete boundaries of buildings. \vspace{-3mm} \section{Conclusion} \label{sec:conclusion} \vspace{-2mm} This paper proposes a geometric saliency-based method for detecting buildings in VHR-RS images. Compared with traditional saliency-based methods, our method measures the geometric saliency of buildings by leveraging meaningful geometric features that are specialized for describing buildings; compared with learning-based methods, ours is fully unsupervised and free of any training. Experiments on three public datasets demonstrate that the proposed method not only achieves a substantial performance improvement, but also generalizes well to data from broad domains. Moreover, the buildings detected by our method have clearer boundaries and fewer redundant cluttered areas than those of existing methods. \stepcounter{section} \renewcommand\refname{\centering \normalsize \thesection. REFERENCES \vspace{-2mm}} \footnotesize \bibliographystyle{IEEEbib}
\section{Introduction} Achieving human-like adaptability remains a daunting challenge for developers of artificial intelligence. In a related effort, researchers in the machine learning literature have tried to understand how to learn quickly from a small number of training examples. The ability to learn a new task quickly from little data is known as meta-learning, and typically two-level learning is employed: initial learning on large data sets representing widely varying tasks, and few-shot learning on small amounts of unseen data to conduct a yet different task. Note that while the original meaning of meta-learning describes the ability to conduct completely different sets of tasks without much new training in between \cite{SchmidhuberML, Schmidhuber93, Schmidhuber97, Thrun}, many researchers have also used the term as they developed classification methods involving disjoint classes of images between initial training and inference. Significant advances have been made recently in this narrow sense of meta-learning, which is also often treated in the same context as few-shot learning. In particular, prior work on combining a neural network with external memory, known as the memory-augmented neural network (MANN) \cite{MANN}, showed notable progress. The Matching Network of \cite{MN} yields decisions based on matching the output of a network driven by a query sample to the output of another network fed by labeled samples; a training method was also introduced there which utilizes only a few examples per class at a time to match train and test conditions for rapid learning. In yet another approach, the long short-term memory (LSTM) of \cite{LSTM} is trained to optimize another learner, which performs the actual classification \cite{Ravi}.
There, the parameters of the learner in few-shot learning are first set to the memory state of the LSTM, and then quickly learned based on the memory update rule of the LSTM, effectively capturing the knowledge from small samples. The model-agnostic meta-learner (MAML) of \cite{MAML} sets up the model for easy fine-tuning so that a small number of gradient-based updates allows the model to yield a good generalization. A method dubbed the simple neural attentive meta-learner (SNAIL) combines an embedding neural network with temporal convolution and soft attention to draw from past experience \cite{SNAIL}. The Prototypical Network of \cite{PN} makes decisions by comparing the query output with the per-class cluster means in the embedding space while the network is learned via many rounds of episodic training. In this paper, we propose a unique meta-learning algorithm utilizing a linear transformation which allows classification in an alternative projection space to achieve improved few-shot learning. During the initial meta-learning phase, a special set of vectors that act as references for classification is learned together with the embedding network. The linear transformer, which can be viewed as a simple null-space projector, zero-forces the errors between the per-class average outputs of the embedding network and the reference vectors in the projection space. The construction of this projection space is episode-specific. The essence of our algorithm is that when we try to match the network outputs representing the images to classify with the references utilized for classification, they do not need to be close in the original embedding space as long as they are conditioned to match well in the projection space. Our meta-learner exhibits competitive performance among existing meta-learners for Omniglot and \textit{mini}ImageNet image classification.
In particular, in 20-way Omniglot experiments, our method gives near-best performance, second only to the SNAIL of \cite{SNAIL} in both 1-shot and 5-shot results. For 1-shot experiments on \textit{mini}ImageNet, our meta-learner is again second only to SNAIL, which yields the best result among all existing methods to our knowledge, except for the task-dependent adaptive metric (TADAM) of \cite{TADAM}, which requires an extra network. For 5-shot testing on \textit{mini}ImageNet, however, our method yields the best accuracy for a given model size, beating both the Prototypical Network of \cite{PN} and the SNAIL method of \cite{SNAIL}, albeit by a small margin against the latter. \section{Meta-Learner with Linear Nulling} \label{section2} \subsection{Model Description} Let us provide a quick conceptual understanding of the proposed algorithm. In processing a given episode of labeled images (support set $\mathcal{S}$) and queries during the initial meta-learning phase, the convolutional neural network (CNN) output average $\bar{\mathbf{g}}_{k}$ is first generated for each label. A support set $\mathcal{S}$ contains pairs of images $\mathbf{x}_{n}$ and matching labels $y_n$. Let $S_{k}$ be the subset of $\mathcal{S}$ with a particular label $k$. See Fig. \ref{fig:system_diagram}. The model also makes use of a special set of vectors $\boldsymbol{\phi}_k$ that are carried over from the previous episode-processing stage (arbitrarily initialized when handling the first episode). The purpose of the $\boldsymbol{\phi}_k$'s will be made clear shortly. Let $\mathbf{\Phi}$ be the matrix collecting the $\boldsymbol{\phi}_k$'s for all labels. A projection space $\mathbf{M}$ is then constructed such that the average CNN output vectors $\bar{\mathbf{g}}_{k}$ and the vectors in $\mathbf{\Phi}$ are aligned closely in this space. The details of constructing $\mathbf{M}$ are given below. Now, each query input is applied to the embedding CNN and the resulting output is collected.
When this is done for all queries (with different labels), both the CNN $f_{\theta}$ and the vectors in $\mathbf{\Phi}$ are adjusted by comparing the query outputs with the reference vectors such that in-class similarities as well as out-of-class differences are maximized collectively in the projection space. This process is repeated for every remaining episode with new classes of images and queries during the initial learning phase. The vectors $\boldsymbol{\phi}_k$ act as references for classification, and they are learned from one episode to the next together with the embedding network. Note that the projection space $\mathbf{M}$ itself is not learned but computed anew in every episode, given the newly obtained averages $\bar{\mathbf{g}}_{k}$ and the reference vectors $\mathbf{\Phi}$ carried over from the previous episode stage. The labels of the vectors in $\mathbf{\Phi}$ do not need to change from one episode to another. $\mathbf{M}$ can be seen as playing a crucial role in providing an efficient form of episode-specific conditioning. By the time the initial meta-learning phase is over, an effective few-shot learning strategy has been built jointly into the embedding network and the reference vector set. Now, as the few-shot learning phase begins, the learned parameters $f_{\theta}$ and $\mathbf{\Phi}$ are fixed. First, a new set of per-class average vectors $\bar{\mathbf{g}}_{k}$ corresponding to the given few-shot images is obtained. Then, a new projection space is formed using these average vectors as well as the reference vectors in $\mathbf{\Phi}$. Next, the network output for the test shot is compared with each of the reference vectors in the projection space for the final classification decision. The Euclidean distance is used as the distance measure and softmax is utilized as the activation function.
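The few-shot classification procedure just described can be sketched as follows. This is a hypothetical NumPy illustration (ours, not the authors' code); the null-space is computed via SVD, and all names are our own. Error vectors $\mathbf{v}_k$ and the distance-based decision follow the definitions in the text.

```python
import numpy as np

def projection_space(g_bar, phi):
    """Episode-specific projection space M = null({v_0, ..., v_{Nc-1}}).

    g_bar: (Nc, D) per-class average embeddings
    phi:   (Nc, D) learned reference vectors (one row per class)
    Returns M with orthonormal columns spanning the null-space.
    """
    n_c = phi.shape[0]
    # v_k = {(Nc-1)*phi_k - sum_{l != k} phi_l} - g_bar_k
    #     =  Nc*phi_k - sum_l phi_l - g_bar_k
    v = n_c * phi - phi.sum(axis=0) - g_bar        # rows are the error vectors
    # right null-space of v via SVD: rows of vt beyond the rank span null(v)
    _, s, vt = np.linalg.svd(v)
    rank = int((s > 1e-10).sum())
    return vt[rank:].T                              # shape (D, D - rank)

def classify(query, phi, m):
    """Nearest reference vector (Euclidean distance) in the projection space."""
    d = np.linalg.norm((phi - query) @ m, axis=1)
    return int(np.argmin(d))                        # argmin distance = argmax softmax(-d)
```

Since `m` has orthonormal columns, every error vector vanishes under `v @ m`, which is exactly the zero-forcing property the text describes; the projection space exists whenever $D > N_c$.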
\begin{figure} \centering \includegraphics[width=120mm]{figure1.pdf} \caption{The proposed meta-learner with linear nulling} \label{fig:system_diagram} \end{figure} \subsection{Construction of Null-Space} The projection space $\mathbf{M}$ is obtained as follows. Consider the training samples labeled $k$ in a given episode. The average CNN outputs $\bar{\mathbf{g}}_{k}$ are obtained for all classes. As mentioned, we choose $\mathbf{M}$ such that $\bar{\mathbf{g}}_{k}$ and the matching reference weight vector $\boldsymbol{\phi}_k$ are aligned closely when projected onto $\mathbf{M}$. At the same time, we also want $\bar{\mathbf{g}}_{k}$ and the non-matching weights $\boldsymbol{\phi}_l$ for all $l\neq k$ to be well-separated in the same projection space. An effective way to achieve this is to maximize \begin{align} \label{eq:delta_rep_key} \Delta_{k} = \big\{(N_{c}-1)\boldsymbol{\phi}_{k} - \sum_{{\scriptstyle l\neq k }}{\boldsymbol{\phi}_{l}} \big\} \mathbf{M}\mathbf{M}^{T} \bar{\mathbf{g}}_{k}^{T} \end{align} which is to say that the two matching vectors $\boldsymbol{\phi}_{k}$ and $\bar{\mathbf{g}}_{k}$ yield a large inner product (i.e., are well-aligned) on $\mathbf{M}$ while at the same time the non-matching vector sum $\sum_{l\neq k}{\boldsymbol{\phi}_{l}}$ and $\bar{\mathbf{g}}_{k}$ have a small inner product. Here $N_c$ is the number of labels. The class references $\boldsymbol{\phi}_{k}$ are not relabeled from one episode to the next. We assume that $\mathbf{M}$ takes the form of a matrix whose columns are orthonormal unit vectors. Defining the difference or error vector $\mathbf{v}_{k} = \{(N_{c}-1)\boldsymbol{\phi}_{k} - \sum_{l\neq k}{\boldsymbol{\phi}_{l}} \} - \bar{\mathbf{g}}_{k}$, maximization of (\ref{eq:delta_rep_key}) is done by taking $\mathbf{M}$ to be a subspace of the orthogonal complement of the space spanned by $\mathbf{v}_{k}$ in $\mathbb{R}^{D}$, where $D$ is the length of $\mathbf{v}_{k}$.
In this way, each error vector is forced to zero when projected onto $\mathbf{M}$. To account for all classes, $\mathbf{M}$ should be a subspace of the null-space of the matrix whose columns are the $N_{c}$ error vectors, i.e., $\mathbf{M} \subseteq \text{null}\big( \{ \mathbf{v}_{0},\cdots,\mathbf{v}_{N_{c}-1} \} \big)$; with such an $\mathbf{M}$, all error vectors vanish under projection. By focusing on the null-space, we essentially discard subspaces that are unnecessary for classification, namely those not orthogonal to the error vectors. Note that the simple choice $\mathbf{M} = \text{null}\big( \{ \mathbf{v}_{0},\cdots,\mathbf{v}_{N_{c}-1} \} \big)$ is a perfectly good solution, which we employ in this paper. If $D$ is larger than $N_{c}$, the projection space $\mathbf{M}$ always exists. Note that because the matching between the network output averages and the reference vectors occurs in the episode-specific projection space, the $\boldsymbol{\phi}_{k}$'s do not need to be relabeled in every episode; the same label sticks to each reference vector throughout the initial learning phase. \iffalse \begin{algorithm} \caption{Initial learning is done by $N_{E}$ training episodes. Each episode $E_{i}$ consists of $N$ (shot, label) pairs. These $N$ shots are composed of $N_{c}$ classes and there are $N_{s}$ shots in each class. $L_{train}$ is the loss for training learnable parameters and $L(\cdot)$ is the softmax cross-entropy loss function.} \label{alg} \textbf{Input}: Training set $E^T = \left\{E_1, ... ,E_{N_E} \right\}$ where $E_i =\left\{(\mathbf{x}_1, y_1),...,(\mathbf{x}_N,y_N) \right\}$ is episode of length $N = N_c \times N_s$, and $y_t \in \left\{0,...,N_c-1\right\}$. $E_i^{(k)}=\left\{(\mathbf{x}_1^{(k)}, y_1^{(k)}),...,(\mathbf{x}_{N_s}^{(k)},y_{N_s}^{(k)}) \right\}$ is the subset of $E_i$ consisting of all $(\mathbf{x}_n, y_n)$ such that $y_n = k$. \begin{algorithmic}[1] \FOR{$i$ in $ \left \{1 , ...
, N_E \right \}$} \STATE $ \mathit{L}_{train} \leftarrow 0$ \FOR{$j$ in $ \left \{1 , ... , N_s -1 \right \}$} \FOR{$k$ in $ \left \{0 , ... , N_c -1 \right \}$} \STATE $S_k \leftarrow \left\{(\mathbf{x}_{n}^{(k)}, y_{n}^{(k)})\right\}$ with $(\mathbf{x}_{n}^{(k)}, y_{n}^{(k)}) \in E_i^{(k)}, n \leq j $ \STATE $\bar{\mathbf{g}}_k \leftarrow {1 \over j} \sum_{(\mathbf{x}_{n}^{(k)}, y_{n}^{(k)} ) \in S_k} {f_{\theta}(\mathbf{x}_{n}) \mathbf{W}^{T}}$ \STATE $\mathbf{v}_{k} \leftarrow \big\{(N_{c}-1)\boldsymbol{\phi}_{k} - \sum_{l\neq k}{\boldsymbol{\phi}_{l}} \big\} - \bar{\mathbf{g}}_{k}$ \ENDFOR \STATE $ \mathbf{M} \leftarrow \text{null} \bigg( \{ \mathbf{v}_{k} \}_{k \in \left \{0 , ... , N_c -1 \right \}} \bigg) $ \FOR{$k$ in $ \left \{0 , ... , N_c -1 \right \}$} \STATE $(\mathbf{x}_{q}, y_{q}) \leftarrow (\mathbf{x}_{n}^{(k)}, y_{n}^{(k)} ) \in E_i^{(k)}, n=j+1 $ \STATE $\mathbf{h}_{q} \leftarrow f_{\theta}(\mathbf{x}_{q}) $ \STATE $\mathbf{g}_{q} \leftarrow \mathbf{h}_{q} \mathbf{W}^{T} $ \STATE $ \mathit{L}_{train} \leftarrow \mathit{L}_{train} + \mathit{L}(\boldsymbol{\Phi}\mathbf{MM}^T\mathbf{g}_{q}^{T}; y_{q})$ \ENDFOR \ENDFOR \STATE Update $ \theta, \mathbf{W}, \boldsymbol{\Phi}$ minimizing $\mathit{L}_{train}$ via Adam optimizer \ENDFOR \end{algorithmic} \end{algorithm} \fi \section {Experiment Results}\label{section3} \subsection{Settings for Omniglot and \textit{mini}ImageNet Few-shot Classification} Omniglot \cite{Omniglot} is a set of images of 1623 handwritten characters from 50 alphabets, with 20 examples for each class. We have used 28$\times$28 downsized grayscale images and introduced class-level data augmentation by random rotation of images in multiples of $90^\circ$, as done in prior works \cite{MANN,PN,MN}. 1200 characters are used for training and testing is done with the remaining characters. \textit{mini}ImageNet is a dataset suggested by Vinyals et al. for few-shot classification of colored images \cite{MN}.
It is a subset of the ILSVRC-12 ImageNet dataset \cite{Imagenet} with 100 classes and 600 images per class. We have used the splits introduced by Ravi and Larochelle \cite{Ravi}. For the experiments, we have used 84$\times$84 downsized color images with a split of 64 training classes, 16 validation classes and 20 test classes. Data augmentation was not employed in the \textit{mini}ImageNet experiment. The Adam optimizer \cite{Adam} with optimized learning-rate decay is used. For a fair comparison, we employ the same embedding CNN widely used in prior works. It is based on four convolutional blocks, each of which consists of a 3$\times$3 2D convolution layer with 64 filters, stride 1 and padding, a batch normalization layer \cite{BN}, a ReLU activation and 2$\times$2 max-pooling, as done in \cite{SNAIL,PN,MN}. For 20-way Omniglot and 5-way \textit{mini}ImageNet classification, our meta-learner is trained with 60-way and 20-way episodes, respectively. In the test phase of 20-way Omniglot and 5-way \textit{mini}ImageNet classification, we have to choose 20 and 5 reference vectors among 60 and 20, respectively. In selecting only a subset of the reference vectors for testing purposes, relabeling is done: for each average network output chosen in arbitrary order, the closest vector among the remaining ones in $\mathbf{\Phi}$ is tagged with the matching label. The closeness measure is the Euclidean distance in our experiments. After choosing the closest reference vectors, the projection space $\mathbf{M}$ is obtained for few-shot classification. The experimental results of our meta-learner in Table \ref{acc_table1} are based on 60-way initial learning for 20-way Omniglot and 20-way initial learning for 5-way \textit{mini}ImageNet classification. \subsection{Results} In Table \ref{acc_table1}, few-shot classification accuracies of our meta-learner with nulling (MLN) are presented.
The performance in the 20-way Omniglot experiment is evaluated by the average accuracy over randomly chosen $1\times10^{4}$ test episodes with 5 query images for each class. The performance in 5-way \textit{mini}ImageNet is evaluated by the average accuracy and a 95\% confidence interval over randomly chosen $3\times10^{4}$ test episodes with 15 query images for each class. When obtaining the projection space, we normalize $\boldsymbol{\Phi}$ and $\bar{\mathbf{g}}_{k}$ by scaling all vectors to unit norm. Once the projection space is found, classification of the query is done by measuring the distance to the reference vectors without normalizing the vectors, since the query output is not normalized. Prior meta-learners including the Matching Network \cite{MN}, MAML \cite{MAML}, the Prototypical Network \cite{PN} and SNAIL \cite{SNAIL} are compared. For 20-way Omniglot classification, MLN shows the second-best performance for both the 1- and 5-shot cases; although not the best, the classification accuracies are fairly close to it. For the 5-shot case of 5-way \textit{mini}ImageNet classification, our meta-learner achieves the best performance. We remark that for \textit{mini}ImageNet, the recently introduced TADAM of \cite{TADAM} actually shows the best performance among all known methods including our MLN. However, we opted not to include this method in the comparison table, as it requires an extra network for task-conditioning and a comparison would thus not be fair. Our meta-learner is perhaps closest in spirit to the Prototypical Network among known prior methods, in that both methods rely on training references via repetitive episode-specific conditioning, with the final query image compared against each of the references during classification.
The key difference is that in the Prototypical Network, the generalization strategy is directly learned in the embedding network, whereas in our method a separate set of reference vectors is maintained which is learned together with the network. Employing a projection space to perform classification is also a unique feature of MLN. \begin{table}[h] \caption{Few-shot classification accuracies for 20-way Omniglot and 5-way \textit{mini}ImageNet} \label{acc_table1} \centering \begin{tabular}{l||cc||cc} \toprule & \multicolumn{2}{c}{\textbf{20-way Omniglot}} & \multicolumn{2}{c}{\textbf{5-way \textit{mini}ImageNet}} \\ \cmidrule{2-5} \textbf{Methods} & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule \textbf{Matching Nets} \cite{MN} & 88.2\% & 97.0\% & 43.56 $\pm$ 0.84\% & 55.31 $\pm$ 0.73\% \\ \textbf{MAML} \cite{MAML} & 95.8\% & 98.9\% & 48.70 $\pm$ 1.84\% & 63.15 $\pm$ 0.91\% \\ \textbf{Prototypical Nets} \cite{PN} & 96.0\% & 98.9\% & 49.42 $\pm$ 0.78\% & 68.20 $\pm$ 0.66\% \\ \textbf{SNAIL} \cite{SNAIL} & 97.64\% & 99.36\% & 55.71 $\pm$ 0.99\% & 68.88 $\pm$ 0.92\% \\ \midrule \textbf{MLN} & 97.33\% & 99.18\% &50.68 $\pm$ 0.11\% & \textbf{69.00} $\pm$ \textbf{0.09}\%\\ \bottomrule \end{tabular} \end{table} \section{Conclusion}\label{section5} In this work, we proposed a meta-learning algorithm aided by a linear transformer that performs null-space projection of the network output. Our algorithm uses linear nulling to shape the classification space where network outputs could be better classified. Our meta-learner achieves the best or near-best performance among known methods in various few-shot image classification tasks for a given network size. \subsubsection*{Acknowledgments} This work is supported by the ICT R\&D program of Institute for Information \& Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) [2016-0-005630031001, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion]. 
\small \bibliographystyle{plain}
\section{Introduction} In this paper we examine whether there is a reduction in the Red Hat Inc. stock volatility after its move from NASDAQ to the New York Stock Exchange (NYSE) on December 12, 2006 \cite{morlanes2009empirical}. We model the dynamics of the volatility by means of non-linear autoregressive models and a machine-learning approach. We mainly focus on three models: the logistic smooth transition autoregressive model (LSTAR), the self-exciting threshold autoregressive model (SETAR), and the neural network non-linear autoregressive model (NNET). NASDAQ and the NYSE are markets in which trading takes place under very different conditions. It therefore seems natural to assume that the Red Hat Inc. stock dynamics undergo a change when markets are switched. We accordingly allow the stock price to consist of two different regimes, or states of the world, with different dynamics in each: one before the market switch and the other after it. Classical linear models do not seem to capture the complexity of this change; non-linear models may be more appropriate \cite{franses2000non}. A popular set of models applied in different regimes are autoregressive (AR) models such as the SETAR and LSTAR models. These models are extensions of linear AR models; they are easily estimated and interpreted using regression methods. We explore a machine-learning approach as an alternative semiparametric method. The use of NNET models has become very popular in the last two decades, owing to their capacity to learn the ``hidden'' relationships in the data without the need to assume a particular parametric model. We confine ourselves to applications of artificial neural networks (ANN) and do not consider other types of machine-learning approaches such as support vector machines and other kernel-based learning methods. \section{Data set} The data set includes 500 observations of daily closing prices of Red Hat Inc. stock.
These daily prices are sourced from the Federal Reserve Bank of St. Louis Economic Data (FRED). A unit-root test, based on the non-linear Perron test, indicates that the time series is non-stationary. We therefore choose to work with the first difference of the logarithmic price. To perform the non-linear Perron test, we first consider a one-time structural break at $T_B$ (December 12, 2006), with $1<T_B<T$. The null hypothesis is a unit root with possibly non-zero drift, permitting a structural change in the level and the growth rate of the price series: \begin{equation} p_t = \mu_1+p_{t-1}+d\,D(TB)_t+(\mu_2-\mu_1)DU_t+e_t \end{equation} where \begin{equation*} D(TB)_t = \begin{cases} 1 & \text{if } t=T_B+1 \\ 0 & \mathrm{otherwise} \end{cases} \quad \mathrm{and}\quad DU_t = \begin{cases} 1 & \text{if } t > T_B \\ 0 & \mathrm{otherwise}, \end{cases} \end{equation*} versus the alternative hypothesis of a trend-stationary model which allows one change in the intercept and one change in the slope of the trend function: \begin{equation} p_t = \mu_1+\beta_1 t+(\mu_2-\mu_1)DU_t+(\beta_2-\beta_1)DT_t+e_t \end{equation} where \begin{equation*} DT_t = \begin{cases} t-T_B & \text{if } t>T_B \\ 0 & \mathrm{otherwise}. \end{cases} \end{equation*} To motivate this particular choice of hypothesis test, we illustrate in Figure \ref{RH_series}(a) the trend of the Red Hat Inc. price series. After detrending the price series, we perform a Phillips-Perron test on the residuals $e_t$. We do not reject the unit-root hypothesis, with $Z$-statistic $-2.8112$ and p-value $0.235$. We therefore use the first differences of the logarithmic price series \begin{equation*} y_t=\log p_{t+1}-\log p_t. \end{equation*} \begin{figure}[H] \begin{center} \subfigure[t][Structural break.]{ \resizebox*{5.8cm}{!}{\includegraphics[scale=0.15]{Rplot.png}}}\hspace{-5pt} \subfigure[t][Logarithmic returns.]{ \resizebox*{5.8cm}{!}{\includegraphics[scale=0.15]{Rplot01.png}}} \caption{Time series plots of Red Hat Inc.
stock price and returns. (a) The trend of the price series shows a jump and a change of growth rate on December 12, 2006. (b) The returns show a possible reduction in fluctuations after the switch of markets on December 12, 2006.} \label{RH_series} \end{center} \end{figure} We construct the realized volatility time series from the log returns with a window of 60 days. The resulting series is smooth and tractable for modelling, and has a clear two-regime structure with a definite structural break at the time of the market switch (see Figure \ref{RH_Volatility}). We considered other window lengths, such as monthly or quarterly (30 and 90 days, respectively). Although these have a clear economic meaning, they do not produce volatility trajectories that are easy to model: the 30-day window produces a series with excessively wild fluctuations, and the 90-day window produces a series too short for meaningful statistics. \begin{figure}[H] \begin{center} \includegraphics[scale=0.35]{Rplot02.png} \caption{Realized volatility with a 60-day window.} \label{RH_Volatility} \end{center} \end{figure} \section{Econometric Methods} We use a non-linear autoregressive time series model in the analysis. Consider a general autoregressive time series model generated by \begin{equation*} X_t=f(X_{t-1},\ldots,X_{t-p}; \theta)+\varepsilon_t \end{equation*} with $f$ a generic function from $\mathbb{R}^p$ to $\mathbb{R}$. The vector $\theta$ denotes a generic vector of parameters governing the shape of $f$, which are estimated on the basis of an observed time series. A classical autoregressive (AR) model is specified by \begin{equation*} X_t=\phi+\phi_0 X_{t-1}+\ldots+\phi_p X_{t-p}+\varepsilon_t. \end{equation*} A self-exciting threshold autoregressive (SETAR) model can be written as: \begin{equation*} X_t = \begin{cases} \phi+\phi_0 X_{t-1}+\ldots+\phi_p X_{t-p}+\varepsilon_t & \text{if } X_{t-1}> c \\ \beta+\beta_0 X_{t-1}+\ldots+\beta_p X_{t-p}+\varepsilon_t & \text{if } X_{t-1}< c.
\end{cases} \end{equation*} A Smooth Transition Autoregressive (STAR) model can be viewed as a generalisation of the SETAR model: it allows the autoregressive parameters to change smoothly, and can be written as \begin{equation} \label{STARmodel} X_t=\phi+\phi_0 X_{t-1}+\ldots+\phi_p X_{t-p}+G(Z_t;\gamma,c)(\beta+\beta_0 X_{t-1}+\ldots+\beta_p X_{t-p})+\varepsilon_{t}. \end{equation} If \begin{equation} \label{Logistic_function} G(Z_t;\gamma,c)=\frac{1}{1+e^{-\gamma(Z_t-c)}},\quad \gamma>0, \end{equation} is the logistic function and $Z_t$ is the threshold variable, the model is called a Logistic Smooth Transition Autoregressive (LSTAR) model. The parameter $c$ can be interpreted as the threshold, while $\gamma$ determines the speed and smoothness of the transition. The exponential form of the model (ESTAR) uses equation (\ref{STARmodel}) with \begin{equation*} \label{Exponential_function} G(Z_t;\gamma,c)=1-e^{-\gamma(Z_t-c)^2},\quad \gamma>0. \end{equation*} In the empirical study, we also use Artificial Neural Network (ANN) models. A neural network model with linear input, $D$ hidden units and activation function $g$ can be written as \begin{equation*} X_t=\beta_0+\sum_{j=1}^D \beta_j g\left(\gamma_{0j}+\sum_{i=1}^m \gamma_{ij}X_{t-i}\right)+\varepsilon_t. \end{equation*} A leading example for the activation function $g$ is the logistic function (\ref{Logistic_function}). Figure 3 illustrates the architecture of a feedforward network with 3 input units, 4 hidden units, 1 output unit and shortcut connections.
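As an illustration, the one-step forecast of the neural network model above can be sketched in a few lines of NumPy. This is a hedged sketch only: the lag values and weights below are arbitrary toy numbers, not estimates from the Red Hat data.

```python
import numpy as np

def logistic(x):
    # Activation function g: the logistic function
    return 1.0 / (1.0 + np.exp(-x))

def ann_forecast(lags, beta0, beta, gamma0, gamma):
    """One-step forecast X_t = beta0 + sum_j beta_j * g(gamma_{0j} + sum_i gamma_{ij} X_{t-i})."""
    hidden = logistic(gamma0 + gamma @ lags)  # D hidden-unit activations
    return beta0 + beta @ hidden

rng = np.random.default_rng(0)
lags = np.array([0.12, 0.10, 0.11])           # X_{t-1}, X_{t-2}, X_{t-3} (m = 3)
beta0, beta = 0.01, rng.normal(size=4)        # output weights (D = 4)
gamma0, gamma = rng.normal(size=4), rng.normal(size=(4, 3))
x_hat = ann_forecast(lags, beta0, beta, gamma0, gamma)
```

In practice the weights would be estimated by minimizing the squared forecast errors; the sketch only shows the forward pass that defines the model.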
\usetikzlibrary{calc} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=2.5cm] \tikzset{normal arrow/.style={draw,->,>=stealth}} \tikzstyle{every pin edge}=[<-,shorten <=1pt] \tikzstyle{neuron}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt] \tikzstyle{input neuron}=[neuron, fill=green!50]; \tikzstyle{output neuron}=[neuron, fill=red!50]; \tikzstyle{hidden neuron}=[neuron, fill=blue!50]; \tikzstyle{annot} = [text width=4em, text centered] \foreach \name / \y in {1,...,3} \node[input neuron,pin={[pin edge={<-,>=stealth}]left: $X_{t-\y}$}] (I-\name) at (0,-\y) {}; \foreach \name / \y in {1,...,4} \path[yshift=0.5cm] node[hidden neuron] (H-\name) at (2.5cm,-\y cm) {}; \node[output neuron,pin={[pin edge={->,>=stealth}]right:$X_t$}, right of=H-3] (O) {}; \foreach \source in {1,...,3} \foreach \dest in {1,...,4} \path[normal arrow](I-\source) edge (H-\dest); \foreach \source in {1,...,4} \path[normal arrow] (H-\source) edge (O); \node[annot,above of=H-1, node distance=1cm] (hl) {Hidden layer}; \node[annot,left of=hl] {Input layer}; \node[annot,right of=hl] {Output layer}; \draw[normal arrow,->] ([yshift=0,xshift=11mm]O.north) .. controls + (0,2) and + (0,0) ..
([yshift= -2 mm, xshift=3 mm] H-1.north) node [pos=0.3,right=10pt] {Feedback error}; \end{tikzpicture} \end{center} \begin{tikzpicture}[ init/.style={draw,circle,inner sep=2pt,font=\Huge,join = by -latex}, squa/.style={ draw, inner sep=2pt, font=\Large, join = by -latex }, start chain=2,node distance=13mm ] \node[on chain=2] (x2) {$X_{t-2}$}; \node[on chain=2,join=by o-latex] {$\gamma_{2j}$}; \node[on chain=2,init] (sigma) {$\displaystyle\Sigma$}; \node[on chain=2,squa,label=above:{\parbox{2cm}{\centering Activation \\ function}}] {$g$}; \node[on chain=2,label=above:Output,join=by -latex] (O) {$X_t$}; \begin{scope}[start chain=1] \node[on chain=1] at (0,1.5cm) (x1) {$X_{t-1}$}; \node[on chain=1,join=by o-latex] (w1) {$\gamma_{1j}$}; \end{scope} \begin{scope}[start chain=3] \node[on chain=3] at (0,-1.5cm) (x3) {$X_{t-3}$}; \node[on chain=3,label=below:Weights,join=by o-latex] (w3) {$\gamma_{3j}$}; \end{scope} \node[label=above:\parbox{2cm}{\centering Bias \\ $\gamma_{0j}$}] at (sigma|-w1) (b) {}; \draw[-latex] (w1) -- (sigma); \draw[-latex] (w3) -- (sigma); \draw[o-latex] (b) -- (sigma); \draw[decorate,decoration={brace,mirror}] (x1.north west) -- node[left=10pt] {Inputs} (x3.south west); \draw [-latex] (O) -- ++(0,-1) |- (w3) node[pos=0.7,below] {Feedback error}; \end{tikzpicture} \caption{A feedforward network with $m=3$ input units, $D=4$ hidden units and 1 output unit.} \end{figure} \section{Empirical Results} \subsection{Nonlinear Time Series Analysis} We perform a Ter\"{a}svirta test to detect the presence of a logistic smooth transition model. The test is based on a Taylor series expansion of the general LSTAR model. We take the third-order Taylor approximation of the logistic function (\ref{Logistic_function}) with respect to $h_t=\gamma(t-c)$, with threshold variable $Z_t=t$, evaluated at $h_t=0$.
The expansion has the form
\begin{equation*} G(t;\gamma,c)\simeq\frac{1}{2}+\frac{h_t}{4}-\frac{h_t^3}{48} \end{equation*}
so that, after absorbing the constant $\tfrac{1}{2}$ into the linear autoregressive coefficients,
\begin{equation*} X_t=\phi+\phi_0 X_{t-1}+\ldots+\phi_p X_{t-p}+ (\beta+\beta_0 X_{t-1}+\ldots+\beta_p X_{t-p})\left(\frac{h_t}{4}-\frac{h_t^3}{48}\right)+\varepsilon_t. \end{equation*}
The first step is to estimate the linear AR(p) portion of the model to determine the order $p$. The AIC selects an order of $p=1$, while the BIC selects $p=0$ (see Table \ref{AIC}). \begin{table}[H] \centering \caption{\small Order selection for volatility. Best five AIC and BIC out of the first 20 lags.} \label{AIC} \begin{tabular}{rrr} \toprule p & AIC & BIC \\ \midrule 0 & -1308.18 & -1810.94\\ 1 & -3125.22 & -1804.89\\ 2 & -3123.25 & -1799.36\\ 3 & -3121.80 & -1793.28\\ 4 & -3119.81 & -1787.21\\ \bottomrule \end{tabular} \end{table} We next select the functional form. Consider two LSTAR models, of order zero and one respectively:
\begin{align*} \text{Model 1: }&X_t=\pi_{10}+\pi_{11}X_{t-1}+G(t)\pi_{20}+\varepsilon_t \\ \text{Model 2: } & X_t =\pi_{10}+\pi_{11}X_{t-1}+G(t)(\pi_{20}+\pi_{21}X_{t-1})+\varepsilon_t \end{align*}
From the Taylor series expansion for a zero-order LSTAR model, we need to regress the residuals from the linear model on the regressors (i.e., a constant and $X_{t-1}$) and on $t$, $t^2$ and $t^3$. The estimated auxiliary regression is:
\begin{equation*} \varepsilon_t=\underset{(4.876\times 10^{-3})}{6.577\times 10^{-3}}+\underset{(0.077)}{0.985}X_{t-1}+\underset{(9.697\times 10^{-5})}{2.128\times 10^{-5}}\,t-\underset{(4.605\times 10^{-7})}{2.619\times 10^{-7}}\,t^2+\underset{(5.918\times10^{-10})}{4.201\times 10^{-10}}\,t^3 \end{equation*}
The F-statistic for the entire regression is 6794; with four numerator and 434 denominator degrees of freedom, the regression is highly significant. However, the p-value of the F-statistic for the null hypothesis that the coefficients on $t$, $t^2$ and $t^3$ are jointly zero is 0.2547. Hence, there is only weak evidence of nonlinear behavior.
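The mechanics of this auxiliary regression can be sketched in NumPy. This is a hedged illustration on synthetic data: the simulated series is hypothetical, and `aux_f_test` implements the standard restricted-versus-unrestricted F statistic rather than reproducing the numbers reported above.

```python
import numpy as np

def ols_rss(X, y):
    # Residual sum of squares from an OLS fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def aux_f_test(x_lag, resid):
    """F test of the joint nullity of t, t^2, t^3 in the zero-order auxiliary regression."""
    n = len(resid)
    t = np.arange(1, n + 1, dtype=float)
    const = np.ones(n)
    X_r = np.column_stack([const, x_lag])                 # restricted model
    X_u = np.column_stack([const, x_lag, t, t**2, t**3])  # unrestricted model
    rss_r, rss_u = ols_rss(X_r, resid), ols_rss(X_u, resid)
    q, df = 3, n - X_u.shape[1]                           # 3 restrictions tested
    return ((rss_r - rss_u) / q) / (rss_u / df)

rng = np.random.default_rng(1)
n = 200
x = np.cumsum(rng.normal(scale=0.1, size=n + 1))  # synthetic near-unit-root series
f = aux_f_test(x[:-1], rng.normal(size=n))        # placeholder "residuals"
```

The statistic would then be compared against an F distribution with 3 and $n-5$ degrees of freedom.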
From the Taylor series expansion for a first-order LSTAR model, we need to regress the residuals from the linear model on the regressors (i.e., a constant and $X_{t-1}$) and on $t$, $t^2$ and $t^3$ multiplied by the regressors. The estimated auxiliary regression is:
\begin{equation*} \varepsilon_t=\underset{(0.010)}{0.004}+\underset{(0.039)}{0.952}X_{t-1}+\underset{(0.131)}{0.920}X_{t-1}\,t-\underset{(6.090\times 10^{-4})}{6.54\times 10^{-6}}X_{t-1}\,t^2+\underset{(4.169\times10^{-9})}{9.471\times 10^{-9}}X_{t-1}\,t^3 \end{equation*}
The F-statistic for the entire regression is 6973; with four numerator and 434 denominator degrees of freedom, the regression is highly significant. Moreover, the F-statistic for the presence of the nonlinear terms $X_{t-1}\,t$, $X_{t-1}\,t^2$ and $X_{t-1}\,t^3$ is 5.14; with three numerator and 434 denominator degrees of freedom, we can conclude that there is STAR behavior. Next, we determine whether LSTAR or ESTAR behavior is the more appropriate. Given that the p-value of the t-statistic on the coefficient of $X_{t-1}\,t$ is 0.00412, we cannot exclude this term from the auxiliary equation. Hence, we can rule out ESTAR behavior in favor of LSTAR behavior. The coefficients of the LSTAR model are estimated by non-linear least squares. The $\gamma$ parameter is estimated by means of a grid search ranging from 1 to 200 with step increment 0.002 and initial value 3.
\begin{equation*} X_t=\underset{(0.004)}{0.007}+\underset{(0.008)}{0.990}X_{t-1}+ \frac{\underset{(0.022)}{-0.114}\,X_{t-1}}{1+\exp\left(\underset{(205.18)}{-9.15}\,(t-\underset{(3.86)}{167.26})\right)}+\varepsilon_t. \end{equation*}
Notice that the estimated standard error of the $\gamma$ parameter is very large and that the coefficient of the first lag in the low regime is close to one.
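Since the LSTAR model is linear in all coefficients once $\gamma$ and $c$ are fixed, the grid search over $\gamma$ can be sketched by concentrating the linear parameters out with OLS. This is a simplified sketch on simulated data under stated assumptions (a fixed threshold $c$ and a coarse grid); the actual estimation above uses non-linear least squares on the Red Hat series.

```python
import numpy as np

def lstar_sse(x, gamma, c):
    """SSE of X_t = a + b X_{t-1} + G(t) d X_{t-1}, with logistic G over time t."""
    y, x_lag = x[1:], x[:-1]
    t = np.arange(1, len(y) + 1, dtype=float)
    z = np.clip(gamma * (t - c), -30.0, 30.0)   # clip exponent to avoid overflow
    G = 1.0 / (1.0 + np.exp(-z))
    X = np.column_stack([np.ones_like(y), x_lag, G * x_lag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def grid_search_gamma(x, c, grid):
    # Pick the gamma with the smallest concentrated sum of squared errors
    sses = [lstar_sse(x, g, c) for g in grid]
    return float(grid[int(np.argmin(sses))])

# Simulate a toy two-regime LSTAR series with a smooth switch at t = 150
rng = np.random.default_rng(2)
n = 300
x = np.empty(n); x[0] = 0.1
G_true = 1.0 / (1.0 + np.exp(-5.0 * (np.arange(1, n, dtype=float) - 150)))
for i in range(1, n):
    x[i] = 0.01 + (0.9 - 0.4 * G_true[i - 1]) * x[i - 1] + rng.normal(scale=0.01)

best = grid_search_gamma(x, c=150, grid=np.arange(0.5, 20.5, 0.5))
```

A finer grid (such as the 0.002 step used above) simply refines the same search.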
To determine whether the two-regime LSTAR is the most appropriate model, we compare several non-linear models against it in terms of the Akaike and Bayesian Information Criteria (AIC and BIC, respectively) and the Mean Absolute Percentage Error (MAPE). A summary of the results of applying the various models to the Red Hat volatility is shown in Table \ref{models}. All models fit the volatility well in terms of MAPE. \begin{table}[ht] \centering \caption{Non-linear models for Red Hat stock volatility} \scriptsize \begin{adjustwidth}{-1cm}{} \begin{tabular}{cccccc} \toprule Model & Intercept & First Lag $X_{t-1}$ & AIC &BIC& MAPE \\ \midrule Linear & 0 & 0.9923$^{***}$& -3129 &-3121 & 3.05 \% \\ LSTAR (2 regimes)& & &-3158&-3133 & 4.38 \% \\ SETAR (3 regimes)& & & -3199 & -3166& 4.52 \% \\ Low regime & 0 & 0.7926$^{***}$ &&\\ Middle regime & 0.1018$^{***}$ & 0.8594$^{***}$ &&\\ High regime & 0.0151$^{***}$& 0.8761$^{***}$&&\\ Threshold & Values & Prop. in Low& Prop. in Middle& Prop. in High&\\ $Z_t=time$& 85 167& 19.36\% & 18.68 \% &61.96\% \\ LSTAR (3 regimes) &&&-3151&-3110& 4.44\% \\ Low regime & 0 & 0.9904$^{***}$ &&&\\ Middle regime &0 & -0.114$^{***}$ &&&\\ High regime & 0.0423$^{***}$& -0.3634$^{***}$&&&\\ Smoothing parameter& $\gamma = 24.69,\ 59.89$ &&&\\ Threshold & Values & Prop. in Low& Prop. in Middle& Prop. in High&\\ $Z_t=time$&167 397& 19.36\% & 18.68 \% &61.96\% &\\ ANN &1-2-1 with 7 weights &&-3119&-3090& 2.98 \%\\ \bottomrule \multicolumn{3}{c}{$^{***}$ Indicates significance at the 0.0001\% level.}\\ \end{tabular} \end{adjustwidth} \label{models} \end{table} The three-regime SETAR model is the best in terms of AIC and BIC, with values of -3199 and -3166 respectively. However, it does slightly worse than the two-regime LSTAR, by about 0.1\% in MAPE. The neural network 1-2-1 with 7 weights achieved the lowest MAPE, at 2.98\%; it performs relatively poorly, however, in terms of the information criteria. Hence, this comparison favors a three-regime SETAR over a two-regime LSTAR.
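The comparison criteria in Table \ref{models} can be computed from a model's residuals. The sketch below uses the common Gaussian-likelihood forms $\mathrm{AIC}=n\log(\mathrm{RSS}/n)+2k$ and $\mathrm{BIC}=n\log(\mathrm{RSS}/n)+k\log n$; software packages may use variants that differ by additive constants, so the numbers here are illustrative only.

```python
import numpy as np

def information_criteria(residuals, k):
    """AIC and BIC for a fitted time series model with k estimated parameters."""
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

def mape(actual, fitted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * float(np.mean(np.abs((actual - fitted) / actual)))

# Toy volatility values and fitted values (hypothetical numbers)
actual = np.array([0.50, 0.52, 0.48, 0.51])
fitted = np.array([0.49, 0.53, 0.47, 0.52])
a, b = information_criteria(actual - fitted, k=2)
m = mape(actual, fitted)
```

Lower AIC/BIC and lower MAPE indicate a better model, as in the comparison above.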
We examine various graphical diagnostics. Some of the results relating to the SETAR model are shown in the following figures. \begin{figure}[H] \begin{center} \subfigure[t][Residuals of the three-regime SETAR.]{ \resizebox*{5.9cm}{!}{\includegraphics[scale=0.15]{SETAR_residulas.eps}}}\hspace{-5pt} \subfigure[t][Autocorrelations of the Red Hat volatility.]{ \resizebox*{5.9cm}{!}{\includegraphics[scale=0.15]{SETAR_ACF.eps}}} \caption{Diagnostics of the fitted three-regime SETAR model.} \label{SETAR_diagnostics} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.4]{SETAR_regime.eps} \caption{Regimes of the fitted three-regime SETAR model.} \label{SETAR_regimes} \end{center} \end{figure} \section{Conclusions} We examine whether there is a reduction in the volatility of the Red Hat Inc. stock after its move from NASDAQ to the New York Stock Exchange (NYSE) on December 12, 2006. We used a variety of non-linear time series models, including self-exciting threshold autoregressive models, logistic smooth transition models and artificial neural networks. The Akaike and Bayesian information criteria and the mean absolute percentage forecasting error were used to compare across models. All models performed well in terms of MAPE, with differences ranging from 0.05\% to 1.5\%. The three-regime SETAR model was clearly the best option in terms of AIC and BIC. The fitted model captures all the features of the data except the jump in the price of the Red Hat stock due to the announcement and change of financial markets. This is reflected in the volatility residuals with four jumps, see Figure 4(a). \bibliographystyle{plain}
\section{Intrinsic Evaluation} \label{sec:intrinsic} We report here on an empirical investigation of the self-normalization properties of NCE language modeling, as compared to the alternative methods described in the previous sections. \subsection{Experimental Settings} \label{subsec:experimental_settings} We investigated the following language modeling methods: \begin{itemize} \item \emph{DEV-LM} - the language model proposed by \newcite{Devlin} (Eq. \ref{Devlinobj}) \item \emph{SM-LM} - a standard softmax language model (DEV-LM with $\alpha=0$) \item \emph{AND-LM} - the light-weight approximation of DEV-LM proposed by \newcite{Andreas_2015} (Eq. \ref{Andobj}) \item \emph{NCE-LM} - an NCE language model (Eq. \ref{eq:nceobjun}) \item \emph{NCE-R-LM} - our light-weight regularized NCE method (Eq. \ref{eq:ncerobj}) \end{itemize} Following \newcite{Devlin}, to make all of the above methods approximately self-normalized at initialization time, we initialized their output bias terms to $b_{w} = -\log|V|$, where $V$ is the word vocabulary. We set the negative sampling parameter for the NCE-based LMs to $k=100$, following \newcite{Zoph2016}, who showed highly competitive performance with NCE LMs trained with this number of samples, and \newcite{Melamud_emnlp}, who used the same value with PMI language models. We note that in early experiments with PMI LMs, which can be viewed as a close variant of NCE LMs, we obtained very similar results for both types of models and therefore did not include PMI LMs in our final investigation. All of the compared methods use a standard LSTM to represent the preceding (left-side) sequence of words as the context vector $\vec{c}$, and a simple word-embedding lookup table to represent the predicted next word as $\vec{w}$. The LSTM hyperparameters and training regimen are similar to those of \newcite{zaremba2014recurrent}, who achieved strong perplexity results compared to other standard LSTM-based neural language models.
Specifically, we used a 2-layer LSTM with a 50\% dropout ratio. During training, we performed truncated back-propagation-through-time, unrolling the LSTM for 20 steps at a time without ever resetting the LSTM state. We trained our models for 20 epochs using Stochastic Gradient Descent (SGD) with a learning rate of 1, decreased by a factor of 1.2 after every epoch starting from epoch 6. We clipped the norms of the gradients to 5 and used a mini-batch size of 20. All models were implemented using the Chainer toolkit \cite{tokuichainer}. We used two popular language modeling datasets in the evaluation. The first dataset, denoted \emph{PTB}, is a version of the Penn Treebank, commonly used to evaluate language models.\footnote{Available from Tomas Mikolov at: \url{http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz}} It consists of 929K/73K/82K training/validation/test words, respectively, and has a 10K word vocabulary. The second dataset, denoted \emph{WIKI}, is WikiText-2, more recently introduced by \newcite{merity2016pointer}. This dataset was extracted from Wikipedia articles and is somewhat larger, with 2,088K/217K/245K train/validation/test tokens, respectively, and a vocabulary size of 33K. To evaluate self-normalization, we look at two metrics: (1) $\mu_z = E(\log(Z_c))$, the mean log value of the normalization term across the contexts in the evaluated dataset; and (2) $\sigma_z = \sigma(\log(Z_c))$, the corresponding standard deviation. The closer these two metrics are to zero, the more self-normalizing the model is considered to be. We note that a model with an observed $|\mu_z| \gg 0$ on a dev set can be `corrected' to a large extent (as we show later) by subtracting this dev $\mu_z$ from the unnormalized scores at test time. However, this is not the case for $\sigma_z$. Therefore, from a practical point of view, we consider $\sigma_z$ to be the more important metric of the two.
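For concreteness, the two metrics can be computed directly from a matrix of unnormalized log-scores via a stable logsumexp. The sketch below uses toy scores corresponding to the bias initialization described above, where all dot products $\vec{w}\cdot\vec{c}$ are zero and $b_w=-\log|V|$, which makes a model exactly self-normalized at the start of training; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def log_Z(scores):
    # log of the normalization term, computed stably (logsumexp over the vocab axis)
    m = scores.max(axis=-1, keepdims=True)
    return (m + np.log(np.exp(scores - m).sum(axis=-1, keepdims=True))).squeeze(-1)

def self_norm_metrics(score_matrix):
    """mu_z and sigma_z over a batch of contexts (rows = contexts, cols = vocab)."""
    logz = log_Z(score_matrix)
    return float(logz.mean()), float(logz.std())

V = 10000
# At initialization, w.c = 0 and b_w = -log|V|, so every unnormalized score is -log(V)
init_scores = np.full((5, V), -np.log(V))
mu_z, sigma_z = self_norm_metrics(init_scores)
```

For a trained model, `score_matrix` would hold $\vec{w}\cdot\vec{c}+b_w$ for every word in the vocabulary and every evaluated context.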
In addition, we also look at the classic perplexity metric, which is considered a standard intrinsic measure of the quality of the model predictions. Importantly, when measuring perplexity, except where noted otherwise, we first perform exact normalization of the models' unnormalized scores by computing the normalization term. \subsection{Results} \begin{table*}[t] \centering \begin{tabular}{| r || c | c | r || c | c | r |} \hline & \multicolumn{3}{| c ||}{\emph{NCE-LM}} & \multicolumn{3}{| c |}{\emph{SM-LM}} \\ \hline d & $\mu_z$ & $\sigma_z$ & perp & $\mu_z$ & $\sigma_z$ & perp \\ \hline \rowcolor[gray]{.9} & \multicolumn{6}{| c |}{PTB validation set} \\ \hline 30 & -0.18 & 0.11 & 267.6 & 2.29 & 0.97 & 243.4 \\ 100 & -0.19 & 0.17 & 150.9 & 3.03 & 1.52 & 145.2 \\ 300 & -0.15 & 0.29 & 100.1 & 3.77 & 1.98 & 97.7 \\ 650 & -0.17 & 0.37 & 87.4 & 4.36 & 2.31 & 87.3 \\ \hline \rowcolor[gray]{.9} & \multicolumn{6}{| c |}{WIKI validation set} \\ \hline 30 & -0.20 & 0.13 & 357.4 & 2.57 & 1.02 & 322.2 \\ 100 & -0.24 & 0.19 & 194.3 & 3.34 & 1.45 & 191.1 \\ 300 & -0.23 & 0.27 & 125.6 & 4.19 & 1.73 & 123.3 \\ 650 & -0.23 & 0.35 & 110.5 & 4.67 & 1.83 & 110.7 \\ \hline \end{tabular} \caption{Self-normalization and perplexity results of NCE-LM against the standard softmax language model, SM-LM. $d$ denotes the size of the compared models (units). } \label{tab:nce_softmax_results} \end{table*} We begin by comparing the results obtained by the two methods that do not include any explicit self-normalization component in their objectives, namely NCE-LM and the standard softmax SM-LM. Table~\ref{tab:nce_softmax_results} shows that, consistent with previous works, NCE-LM is approximately self-normalized, as is apparent from the relatively low $|\mu_z|$ and $\sigma_z$ values. On the other hand, SM-LM, as expected, is much less self-normalized. In terms of perplexity, we see that SM-LM performs a little better than NCE-LM when model dimensionality is low, but the gap closes entirely at $d=650$.
Curiously, while perplexity improves with higher dimensionality, the quality of NCE-LM's self-normalization, as evident particularly in $\sigma_z$, actually degrades. This is surprising, as we would expect stronger models with more parameters to approximate the true $p(w|c)$ more closely and hence be more self-normalized. A similar behavior was recorded for SM-LM. We investigate this further in Section \ref{sec:analysis}. We also measured model test run-times, running on a single Tesla K20 GPU. We compared run-times for normalized scores, produced by applying exact normalization, versus unnormalized scores. For both SM-LM and NCE-LM, which perform the same operations at test time, we get $\sim$9 seconds for normalized scores vs. $\sim$8 seconds for unnormalized ones on the PTB validation set. Run-times on the roughly three-times larger WIKI validation set are $\sim$38 seconds for normalized and $\sim$24 seconds for unnormalized scores. The run-time of the unnormalized models thus seems to scale linearly with the size of the dataset, whereas the normalized run-time scales super-linearly, arguably since it depends heavily on the vocabulary size, which is greater for WIKI than for PTB. With typical vocabulary sizes reaching far beyond WIKI's 33K word types, this reinforces the computational motivation for self-normalized language models.
\begin{table*}[t] \centering \begin{tabular}{| r || c | c | r || c | c | r || c | c | r |} \hline & \multicolumn{9}{| c |}{\emph{DEV-LM}} \\ \hline \rowcolor[gray]{.9} & \multicolumn{3}{| c ||}{$\alpha = 0.1$} & \multicolumn{3}{| c ||}{$\alpha = 1.0$} & \multicolumn{3}{| c |}{$\alpha = 10.0$} \\ \hline d & $\mu_z$ & $\sigma_z$ & perp & $\mu_z$ & $\sigma_z$ & perp & $\mu_z$ & $\sigma_z$ & perp \\ \hline \rowcolor[gray]{.9} & \multicolumn{9}{| c |}{PTB validation set} \\ \hline 30 & -0.12 & 0.21 & 242.6 & -0.16 & 0.09 & 250.9 & -0.13 & 0.060 & 307.2 \\ 100 & -0.10 & 0.28 & 143.3 & -0.17 & 0.11 & 149.5 & -0.12 & 0.058 & 182.0 \\ 300 & -0.09 & 0.36 & 96.3 & -0.14 & 0.14 & 100.8 & -0.16 & 0.054 & 121.3 \\ 650 & -0.14 & 0.43 & 85.0 & \textbf{-0.17} & \textbf{0.18} & \textbf{86.3} & -0.11 & 0.071 & 99.5 \\ \hline \rowcolor[gray]{.9} & \multicolumn{9}{| c |}{WIKI validation set} \\ \hline 30 & -0.10 & 0.23 & 334.1 & -0.17 & 0.08 & 338.7 & -0.15 & 0.055 & 389.0 \\ 100 & -0.13 & 0.28 & 189.4 & -0.22 & 0.13 & 191.1 & -0.15 & 0.071 & 228.3 \\ 300 & -0.15 & 0.34 & 121.9 & -0.20 & 0.17 & 125.7 & -0.13 & 0.081 & 143.6 \\ 650 & -0.23 & 0.42 & 109.1 & \textbf{-0.23} & \textbf{0.20} & \textbf{110.0} & -0.12 & 0.089 & 116.9 \\ \hline \end{tabular} \caption{Self-normalization and perplexity results of the self-normalizing DEV-LM for different values of the normalization factor $\alpha$. $d$ denotes the size of the compared models (units). } \label{tab:devlin_results} \end{table*} Next, Table \ref{tab:devlin_results} compares the self-normalization and perplexity performance of DEV-LM for varied values of the constant $\alpha$ on the validation sets. As could be expected, the larger the value of $\alpha$, the better the self-normalization becomes, reaching very good self-normalization for $\alpha=10.0$. On the other hand, the improvement in self-normalization seems to come at the expense of perplexity.
This is particularly true for the smaller models, but is still evident even for $d=650$. Interestingly, as with NCE-LM, we see that $\sigma_z$ grows (i.e., self-normalization becomes worse) with the size of the model, and is negatively correlated with the improvement in perplexity. \begin{table*}[t] \centering \begin{tabular}{| r || c | c | r || c | c | r |} \hline & \multicolumn{3}{| c ||}{NCE-R-LM} & \multicolumn{3}{| c |}{AND-LM} \\ \hline $\alpha$ & $\mu_z$ & $\sigma_z$ & perp & $\mu_z$ & $\sigma_z$ & perp \\ \hline \rowcolor[gray]{.9} & \multicolumn{6}{| c |}{PTB validation set} \\ \hline 0.1 & -0.19 & 0.34 & 87.1 & 6.14 & 0.56 & 117.5 \\ 1.0 & -0.21 & 0.27 & 87.2 & \textbf{0.45} & \textbf{0.25} & \textbf{119.4} \\ 10.0 & \textbf{-0.19} & \textbf{0.17} & \textbf{89.8} & -0.037 & 0.079 & 143.7 \\ 100.0 & -0.089 & 0.086 & 112.6 & -0.024 & 0.030 & 209.5\\ \hline \rowcolor[gray]{.9} & \multicolumn{6}{| c |}{WIKI validation set} \\ \hline 0.1 & -0.23 & 0.33 & 111.1 & \textbf{4.85} & \textbf{0.72} & \textbf{201.5} \\ 1.0 & -0.24 & 0.28 & 107.5 & 1.02 & 0.001 & 1481.3\\ 10.0 & \textbf{-0.22} & \textbf{0.19} & \textbf{110.8} & 0.41 & 0.12 & 33323.1 \\ 100.0 & -0.12 & 0.099 & 131.5 & 0.413 & 0.000 & 33278.0\\ \hline \end{tabular} \caption{Self-normalization and perplexity results of NCE-R-LM and AND-LM for different values of $\alpha$, with $d$ = 650 and $\gamma$ = 0.1. } \label{tab:andreas_results} \end{table*} Finally, in Table \ref{tab:andreas_results}, we compare AND-LM against our proposed NCE-R-LM, using a sampling rate of $\gamma = 0.1$ to avoid computing $Z_c$ most of the time, and varied values of $\alpha$. As can be seen, AND-LM exhibits relatively poor performance. In particular, to make the model converge when trained on the WIKI dataset, we had to follow the heuristic suggested by \newcite{Chen2016StrategiesFT}, applying the conversion $x \rightarrow 10 \tanh(x/5)$ to all of AND-LM's unnormalized scores.
In contrast, we see that NCE-R-LM is able to use the explicit regularization to improve self-normalization at the cost of a relatively small degradation in perplexity. \begin{table*}[t] \centering \begin{tabular}{| l || c | c |r |r || c | c | r | r |} \hline & \multicolumn{4}{| c ||}{\emph{PTB-test}} & \multicolumn{4}{| c |}{\emph{WIKI-test} } \\ \hline \rowcolor[gray]{.9} & $\mu_z$ & $\sigma_z$ & perp & u-perp & $\mu_z$ & $\sigma_z$ & perp & u-perp\\ \hline DEV-LM & -0.001 & 0.17 & 83.1 & 83.0 & 0.002 & 0.20 & 104.1 & 104.2 \\ NCE-R-LM & 0.002 & 0.17 & 85.9 & 86.0 & -0.003 & 0.19 & 105.0 & 104.7\\ NCE-LM & -0.004 & 0.35 & 83.7 & 83.4 & 0.003 & 0.36 & 104.3 & 104.6 \\ AND-LM & 0.001 & 0.30 & 114.9 & 115.0 & 0.018 & 0.74 & 185.7 & 189.1 \\ \hline \end{tabular} \caption{Self-normalization and perplexity results on test sets for `shifted' models with $d=650$. `u-perp' denotes unnormalized perplexity. } \label{tab:test_results} \end{table*} Switching to the test-set evaluation, we propose a simple technique to center the $\log(Z)$ values of a self-normalizing model's scores around zero. Let $\mu_{z}^{valid}$ be $E(\log(Z))$ observed on the validation set at train time. The probability estimates of the `shifted' model at test time are $\log p(w|c) = \vec{w} \cdot \vec{c} +b_w - \mu_{z}^{valid}$. Table \ref{tab:test_results} shows the results that we get when evaluating the shifted versions of DEV-LM, NCE-R-LM, NCE-LM and AND-LM with $d~=~650$. For each compared model, we chose the $\alpha$ value that showed the best self-normalization performance without sacrificing significant perplexity performance. Specifically, we used $\alpha_{\textsc{DEV-LM}}=1.0$ and $\alpha_{\textsc{NCE-R-LM}}=10.0$ for both PTB and WIKI datasets, and then $\alpha_{\textsc{AND-LM}}=1.0$ and $\alpha_{\textsc{AND-LM}}=0.1$ for the PTB and WIKI datasets, respectively. 
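The shifting correction can be sketched as follows. This is a hedged illustration: the synthetic score matrices stand in for model outputs (rows are contexts, columns are vocabulary words), and `shift_scores` simply subtracts the validation-set mean $\mu_z^{valid}$ from every unnormalized log-score at test time.

```python
import numpy as np

def logsumexp(a, axis=-1):
    # Stable log of the normalization term per context
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis)

def shift_scores(test_scores, dev_scores):
    """Center log Z around zero using the mean observed on a validation set."""
    mu_valid = float(logsumexp(dev_scores).mean())
    return test_scores - mu_valid

rng = np.random.default_rng(3)
dev = rng.normal(loc=-9.0, scale=0.1, size=(50, 1000))   # toy unnormalized scores
test = rng.normal(loc=-9.0, scale=0.1, size=(50, 1000))
shifted = shift_scores(test, dev)
mu_after = float(logsumexp(shifted).mean())               # mean log Z after shifting
```

Note that the shift affects only $\mu_z$; the spread $\sigma_z$ of the normalization terms is unchanged, which is why $\sigma_z$ is the harder metric to improve.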
Following \newcite{Oualil}, in addition to perplexity, we also report `unnormalized perplexity', which is computed with the unnormalized model scores. When the unnormalized perplexity measure is close to the real perplexity, this suggests that the unnormalized scores are in fact nearly normalized. As can be seen, with the shifting method, all models achieve a near perfect (zero) $\mu_z$ value, and their unnormalized perplexities are almost identical to their respective real perplexities. Also, with the exception of AND-LM, the perplexities of all models are nearly identical. Finally, the standard deviation of the normalization term of DEV-LM and NCE-R-LM is notably lower than that of NCE-LM and AND-LM. DEV-LM and NCE-R-LM perform very similarly in all respects. However, we note that NCE-R-LM has the advantage that during training it performs only sparse computations of the costly normalization term, and therefore its training time depends much less on the size of the vocabulary. \begin{table*}[t] \centering \begin{tabular}{| r || c | c || c |c |} \hline & \multicolumn{2}{| c ||}{\emph{PTB-validation}} & \multicolumn{2}{| c |}{\emph{WIKI-validation} } \\ \hline \rowcolor[gray]{.9} d & NCE-LM & DEV-LM ($\alpha=1.0$) & NCE-LM & DEV-LM ($\alpha=1.0$) \\ \hline 30 & -0.33 & -0.27 & -0.50 & -0.26 \\ 100 & -0.29 & -0.29 & -0.53 & -0.49 \\ 300 & -0.46 & -0.41 & -0.56 & -0.63 \\ 650 & -0.50 & -0.45 & -0.53 & -0.64 \\ \hline \end{tabular} \caption{Pearson's correlation between $H_c$ (entropy) and $\log(Z_c)$ on samples from the validation sets. } \label{tab:entropy_correlation} \vspace{10 pt} \end{table*} \begin{figure*}[h!]
\centering $ \begin{array}{ccc} \includegraphics[width=5.0cm]{pics/model_nce_d650_e20_e20_debug_entropy_norm_nicer.png} & \hspace{0.0cm} \includegraphics[width=5.0cm]{pics/model_nce_d100_e20_e20_debug_entropy_norm_nicer.png} & \hspace{0.0cm} \includegraphics[width=5.0cm]{pics/model_nce_d30_e20_validation_output_debug_entropy_norm_nicer.png} \end{array} $ \caption{A 2-dimensional histogram of the normalization term of a predicted distribution as a function of its entropy, as measured over a sample from NCE-LM predictions on the WIKI validation set. Brighter colors denote denser areas. } \label{fig:wiki_analysis} \end{figure*} \subsection{Analysis} \label{sec:analysis} The entropy of the distributions predicted by a language model is a measure of how uncertain it is regarding the identity of the predicted word. Low-entropy distributions are concentrated around a few possible words, while high-entropy ones are much more spread out. To analyze the self-normalization properties of NCE-LM and DEV-LM more carefully, we computed the Pearson's correlation between the entropy of a predicted distribution, $H_c =-\sum_v p(v|c)\log p(v|c)$, and its normalization term, $\log(Z_c)$. As can be seen in Table \ref{tab:entropy_correlation}, a regularity appears to exist, where the value of $\log(Z_c)$ is negatively correlated with entropy. Furthermore, it seems that, to an extent, the correlation is stronger for larger models. To further illustrate this regularity, Figure~\ref{fig:wiki_analysis} shows a 2-dimensional histogram of a sample of distributions predicted by NCE-LM. We can see there that particularly low-entropy distributions can be associated with very high values of $\log(Z_c)$, deviating substantially from the self-normalization objective of $\log(Z_c)=0$. Examples of contexts with such low-entropy distributions are: ``During the American Civil [War]'' and ``The United [States]'', where the actual word following the preceding context appears in brackets.
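The correlation analysis can be reproduced from raw model scores. The sketch below uses synthetic score vectors whose per-context spread varies, which mimics the regularity described above (sharper distributions have larger normalization terms); this synthetic setup is an assumption for illustration, not our models' actual output.

```python
import numpy as np

def entropy_and_logz(scores):
    """Per-context entropy H_c and normalization term log(Z_c) from raw scores."""
    m = scores.max(axis=1, keepdims=True)
    log_z = m.squeeze(1) + np.log(np.exp(scores - m).sum(axis=1))  # logsumexp
    log_p = scores - log_z[:, None]                                # log softmax
    H = -(np.exp(log_p) * log_p).sum(axis=1)                       # entropy in nats
    return H, log_z

rng = np.random.default_rng(4)
# 200 contexts over a 500-word vocab; each row has its own score spread
scales = rng.uniform(0.5, 3.0, size=(200, 1))
scores = rng.normal(scale=scales, size=(200, 500))
H, log_z = entropy_and_logz(scores)
r = float(np.corrcoef(H, log_z)[0, 1])  # Pearson's correlation
```

With this setup, sharper (low-entropy) rows also have larger $\log(Z_c)$, so `r` comes out negative, matching the sign of the correlations in Table \ref{tab:entropy_correlation}.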
This phenomenon is less evident for smaller models, which tend to produce fewer low-entropy predictions. We hypothesize that the above observations could be a contributing factor to our earlier finding that larger models have a larger variance in their normalization terms, though it seems to account for only some of that at best. Furthermore, we hope that this regularity could be exploited to improve self-normalization algorithms in the future. \section{Sentence Completion Challenge} \label{sec:mscc} In Section \ref{sec:intrinsic}, we have seen that there may be trade-offs between the perplexity, self-normalization and run-time complexity of language models. While the language modeling method should ultimately be optimized for each downstream task individually, we follow \newcite{Mnih2012} and use the Microsoft Sentence Completion Challenge (MSCC) \cite{zweig2011microsoft} as an example use case. The MSCC includes 1,040 items. Each item is a sentence with one word replaced by a gap, and the challenge is to identify the word, out of five choices, that is most meaningful and coherent as the gap-filler. The MSCC includes a learning corpus of approximately 50 million words. To use this corpus for training our language models, we split it into sentences, shuffled the sentence order and considered all words with frequency less than 10 as unknown, yielding a vocabulary of about 50K word types. We used the same settings described in Section \ref{subsec:experimental_settings} to train the language models, except that due to the larger size of the data, we ran fewer training iterations.\footnote{We started with a learning rate of 1 and reduced it by a factor of 2 after each iteration, beginning with the very first one.} Finally, as the gap-filler, we choose the word that maximizes the score of the entire sentence, where a sentence score is the sum of its words' scores.
For a normalized language model, this score can be interpreted as the estimated log-likelihood of the sentence. \begin{table}[t] \centering \begin{tabular}{| l || c | c | c | c || c | c | c | c |} \hline & \multicolumn{4}{| c ||}{2 training iterations} & \multicolumn{4}{| c |}{5 training iterations } \\ \hline \rowcolor[gray]{.9} & acc-n & $\Delta$acc & perp & $\sigma_z$ & acc-n & $\Delta$acc & perp & $\sigma_z$\\ \hline DEV-LM & 47.6 & +0.4 & 75.3 & 0.10 & 51.9 & -0.7 & 67.5 & 0.10 \\ NCE-R-LM & 46.3 & -0.5 & 78.3 & 0.11 & 51.0 & -0.2 & 70.2 & 0.10\\ NCE-LM & 47.6 & -0.4 & 75.3 & 0.17 & 51.6 & +1.1 & 67.1 & 0.14 \\ SM-LM & 47.0 & \textbf{-2.0} & 73.4 & 1.15 & 51.0 & \textbf{-2.3} & 66.3 & 1.19 \\ \hline \end{tabular} \caption{Microsoft Sentence Completion Challenge (MSCC) results for models with $d=650$ that were trained with 2 and 5 iterations. `acc-n' denotes the accuracy obtained when language model scores are precisely normalized. `$\Delta$acc' denotes the delta in accuracy when unnormalized scores are used instead. `perp' and $\sigma_z$ denote the mean perplexity and the standard deviation of $\log(Z_c)$ recorded for the 1,040 answer sentences.} \label{tab:mscc_results} \end{table} The results of the MSCC experiment appear in Table \ref{tab:mscc_results}. Accuracy is the standard evaluation metric for this benchmark (simply the proportion of questions answered correctly). We report this metric when performing the costly test-time score normalization, and then the delta observed when using unnormalized scores instead. First, we note that given the same number of training iterations, all methods achieved fairly similar normalized-score accuracies, as well as perplexity values. At the same time, we do see a notable improvement in both accuracies and perplexities when more training iterations are performed. Next, with the exception of SM-LM, all of the compared models exhibit good self-normalization properties, as is evident from the low $\sigma_z$ values.
There does not seem to be a meaningful accuracy performance hit when using unnormalized scores for these models, suggesting that this level of self-normalization is adequate for the MSCC task. Finally, as expected, SM-LM exhibits worse self-normalization properties. However, somewhat surprisingly, even in this case, we see a relatively small (though more noticeable) hit in accuracy. This suggests that in some use cases, the level of the language model's self-normalization may have a relatively low impact on the performance of a downstream task. \section{Introduction} \blfootnote{ % % % % % \hspace{-0.65cm} This work is licensed under a Creative Commons Attribution 4.0 International License. License details: \url{http://creativecommons.org/licenses/by/4.0/} } The ability of statistical language models (LMs) to estimate the probability of a word given a context of preceding words plays an important role in many NLP tasks, such as speech recognition and machine translation. Recurrent Neural Network (RNN) language models have recently become the method of choice, having outperformed traditional $n$-gram LMs across a range of tasks \cite{jozefowicz2016exploring}. Unfortunately, however, they suffer from scalability issues incurred by the computation of the softmax normalization term, which is required to guarantee proper probability predictions. The cost of this computation is linearly proportional to the size of the word vocabulary and has a significant impact on both training and testing run-times. Several methods have been proposed to cope with this scaling issue by replacing the softmax with a more computationally efficient component at train time.\footnote{Alleviating this problem using sub-word representations is a parallel line of research not discussed here.} These include importance sampling \cite{Bengio2003}, hierarchical softmax \cite{Mnihnips}, BlackOut \cite{ji2016blackout} and Noise Contrastive Estimation (NCE) \cite{Gutmann2012}.
NCE has been applied to train neural LMs with large vocabularies \cite{Mnih2012} and more recently was also successfully used to train LSTM-RNN LMs \cite{Vaswani2013,Chen2015,Zoph2016}, achieving near state-of-the-art performance on language modeling tasks \cite{jozefowicz2016exploring,Chen2016StrategiesFT}. All the above works focused on reducing the complexity at train time. However, at test time, the assumption was that one still needs to compute the costly softmax normalization term to obtain a normalized score that serves as an estimate of the probability of a word. \emph{Self-normalization} was recently proposed to address the test time complexity. A self-normalized discriminative model is trained to produce near-normalized scores in the sense that the sum over the scores of all words is approximately one. If this approximation is close enough, the assumption is that the costly exact normalization can be waived at test time without significantly sacrificing prediction accuracy \cite{Devlin}. Two main approaches were proposed to train self-normalizing models. Regularized softmax self-normalization is based on using softmax for training and explicitly encouraging the normalization term of the softmax to be as close to one as possible, thus making its computation redundant at test time \cite{Devlin,Andreas_2015,Chen2016StrategiesFT}. The alternative approach is based on NCE. The original formulation of NCE included a parametrized normalization term $Z_c$ for every context $c$. However, the first work that applied NCE to language modeling \cite{Mnih2012} discovered empirically that fixing $Z_c$ to a constant did not affect the performance. More recent studies \cite{Vaswani2013,Zoph2016,Chen2015,Oualil} empirically found that models trained using NCE with a fixed $Z_c$ exhibit self-normalization at test time. This behavior is facilitated by inherent self-normalization properties of NCE LMs that we analyze in this work.
The main contribution of this study is in providing a first comprehensive investigation of self-normalizing language models. This includes a theoretical analysis of the inherent self-normalizing properties of NCE language models, followed by an extensive empirical evaluation of NCE against softmax-based self-normalizing methods. Our results suggest that regularized softmax models perform competitively as long as we are only interested in low test time complexity. However, when train time is also a factor, NCE has a notable advantage. Furthermore, we find, somewhat surprisingly, that larger models that achieve better perplexities tend to have worse self-normalization properties, and perform further analysis in an attempt to better understand this behavior. Finally, we show that downstream tasks may not all be as sensitive to self-normalization as might be expected. The rest of this paper is organized as follows. In sections \ref{sec:section1} and \ref{sec:selfnormprop}, we provide theoretical background and analysis of NCE language modeling that justifies its inherent self-normalizing properties. In Section~\ref{sec:explicit}, we review the alternative regularized softmax-based self-normalization methods and introduce a novel regularized NCE hybrid approach. In Section \ref{sec:intrinsic}, we report on an empirical intrinsic investigation of all the methods above, and finally, in Section \ref{sec:mscc}, we evaluate the compared methods on the Microsoft's Sentence Completion Challenge and compare these results with the intrinsic measures of perplexity and self-normalization. \section{NCE as a Matrix Factorization} \label{sec:section1} In this section, we review the NCE algorithm for language modeling \cite{Gutmann2012,Mnih2012} and focus on its interpretation as a matrix factorization procedure. This analysis is analogous to the one proposed by \newcite{Melamud_emnlp} for their PMI language model. 
Let $p(w|c)$ be the probability of a word $w$ given a preceding context $c$, and let $p(w)$ be the word unigram distribution. Assume the distribution $p(w|c)$ has the following parametric form: \begin{equation} p_{nce}(w|c) = \frac{1}{Z_c} \exp( m(w,c) ) \label{loglinearm} \end{equation} such that $m(w,c) = \vec{w} \cdot \vec {c}+b_w$, where $\vec{w}$ and $\vec{c}$ are $d$-dimensional vector representations of the word $w$ and its context $c$, and $Z_c$ is a normalization term. We can use a simple lookup table for the word representation $\vec{w}$, and a recurrent neural network (RNN) model to obtain a low dimensional representation of the entire preceding context $\vec{c}$. Given a text corpus~$D$, the NCE objective function is: \begin{equation} S(m)=\sum_{w,c \in D} \Big[ \log \sigma ( m(w,c) -\log (p(w) k) ) \label{eq:nceobjun} \end{equation} $$+\sum_{i=1}^k \log (1- \sigma( m(w_i,c) - \log (p(w_i) k)))\Big]$$ such that $w,c$ go over all the word-context co-occurrences in the learning corpus $D$ and $w_1,...,w_k$ are `noise' samples drawn from the word unigram distribution. $\sigma$ denotes the sigmoid function. Let $\mbox{pce}(w,c) = \log p(w|c)$ be the Pointwise Conditional Entropy (PCE) matrix, which is the true log probability we are trying to estimate. \newcite{Gutmann2012} proved that $S(m) \le S(\mbox{pce})$ for every matrix $m$. The rank of the matrix $m$ is at most $d+1$. Thus, the NCE training goal is finding the best low-rank decomposition of the PCE matrix in the sense that it minimizes the difference $S(\mbox{pce})-S(m)$. Following \newcite{melamud2017acl}, we can explicitly write this difference as a Kullback-Leibler~(KL) divergence. The NCE derivation was originally based on sampling $w$ and $c$ either from the joint distribution or from the product of marginals according to a binary r.v. denoted by $z$. For every matrix~$m$, the conditional distribution of $z$ given $w$ and $c$ is: $$ p_m(z\!=\!1|w,c) = \sigma(m(w,c)-\log(kp(w))). 
$$ The difference between the NCE score at the PCE matrix and the NCE score at a given matrix $m$ can be written as: \begin{equation} S(\mbox{pce})- S(m) = \mbox{KL} ( p_{\mbox{pce}}(z|w,c) || p_m(z|w,c)) \label{factcrit} \end{equation} $$ =\sum_{w,c} p(w,c) \sum_{z=0,1} p_{\mbox{pce}}(z|w,c) \log \frac{p_{\mbox{pce}}(z|w,c)}{p_m(z|w,c)}. $$ This view of NCE as a matrix factorization instead of a distribution estimation makes the normalization factor redundant during training, thereby justifying the heuristic of setting $Z_c=1$ used by \newcite{Mnih2012}. The crux of the matrix decomposition view of NCE is that although the normalization term is not explicitly included here, the optimal low-dimensional model attempts to approximate the true conditional probabilities, which are normalized, and therefore we expect that it will be almost self-normalized. Indeed, in the next section we provide formal guarantees for this. \section{The NCE Self-Normalization Property} \label{sec:selfnormprop} We now address the test time efficiency of language models, which is the focus of this study. As is the case with other language models, at test time, when we use the low-dimensional matrix learned by NCE to compute the conditional probability $p_{nce}(w|c)$ (\ref{loglinearm}), we need to compute the normalization factor to obtain a valid distribution: \begin{equation} Z_c = \sum_w \exp (m(w,c)) = \sum_w \exp (\vec{w} \cdot \vec{c} +b_w ). \end{equation} Unfortunately, this computation of $Z_c$ is often very expensive due to the typically large vocabulary size. However, as we next show, for NCE language models this computation may be avoided not only at train time, but also at test time due to self-normalization. A matrix $m$ is called self-normalized if $\sum_w \exp(m(w,c))=1$ for every~$c$. The full-rank optimal LM obtained from the PCE matrix $\mbox{pce}(w,c) = \log p(w|c)$ is clearly self-normalized: $$ Z_c = \sum_w \exp( \mbox{pce}(w,c) ) = \sum_w p(w|c)=1.
$$ The NCE algorithm seeks the best low-rank unnormalized matrix approximation of the PCE matrix. Hence, we can assume that the matrix $m$ is close to the PCE matrix and therefore defines a LM that should also be close to self-normalized: \begin{equation} \sum_w \exp( m(w,c) ) \approx \sum_w \exp( \mbox{pce}(w,c)) = 1. \label{self_pce} \end{equation} We next formally show that if the matrix $m$ is close to the PCE matrix then the NCE model defined by $m$ is approximately self-normalized. {\bf Theorem 1:} Assume that for a given context $c$ there is an $\epsilon>0$ such that $$\log \sum_{w \in V} p(w|c) \exp ( |m(w,c)- \log p(w|c)|) \le \epsilon.$$ Let $Z_c= \sum_w \exp(m(w,c))$ be the normalization factor. Then $|\log Z_c | \le \epsilon.$ {\bf Proof:} $$ \log Z_c = \log \sum_w \exp ( \vec{w} \cdot \vec{c} +b_w ) $$ $$ = \log \sum_w ( p (w|c) \exp (m(w,c)- \log p(w|c) )) $$ \begin{equation} \le \log \sum_{w} p(w|c) \exp ( |m(w,c)- \log p(w|c)|) \le \epsilon. \label{eqap} \end{equation} The concavity of the log function implies that: \begin{equation} -\log Z_c \le -\sum_w p(w|c) ( m(w,c) - \log p(w|c)) \label{eqan} \end{equation} $$ = \sum_w p(w|c) ( -(m(w,c) - \log p(w|c))) $$ The convexity of the exp function implies that: $$ \le \log \sum_w p(w|c) \exp ( -(m(w,c)- \log p(w|c))) $$ $$ \le \log \sum_w p(w|c) \exp ( |m(w,c) - \log p(w|c)|) \le \epsilon $$ Combining Eqs.\ (\ref{eqap}) and (\ref{eqan}), we finally obtain that $|\log Z_c | \le \epsilon.$ \hspace{1cm} $\Box $ We can also state a global version of Theorem 1; its proof is similar.
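Theorem 1 is also easy to check numerically. The following sketch (an illustration written for this text, with assumed toy values, not part of our experiments) draws a random conditional distribution $p(w|c)$ for a single context, perturbs its log-probabilities to obtain a model row $m(w,c)$ close to the PCE matrix, and verifies that $|\log Z_c|$ is bounded by the $\epsilon$ of the theorem:

```python
import math
import random

random.seed(0)
V = 1000  # toy vocabulary size (assumed value)

# A random "true" conditional distribution p(.|c) for one fixed context c.
p = [random.random() for _ in range(V)]
total = sum(p)
p = [x / total for x in p]

# A model row m(w, c) = log p(w|c) + noise, i.e., close to the PCE matrix.
m = [math.log(pw) + random.uniform(-0.1, 0.1) for pw in p]

# The epsilon of Theorem 1: log sum_w p(w|c) exp(|m(w,c) - log p(w|c)|).
eps = math.log(sum(pw * math.exp(abs(mw - math.log(pw)))
                   for pw, mw in zip(p, m)))

# The (log) normalization term of the model.
log_Z = math.log(sum(math.exp(mw) for mw in m))

assert abs(log_Z) <= eps + 1e-12  # the bound of Theorem 1
```

Since the perturbation is bounded by $0.1$, the computed $\epsilon$ is at most $0.1$, so the model is guaranteed to be nearly self-normalized for this context.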
{\bf Theorem 2:} Assume there is an $\epsilon>0$ such that $$\log \sum_{w,c} p(w,c) \exp ( |m(w,c)- \log p(w|c)|) \le \epsilon.$$ Then $|\sum_c p(c) \log Z_c | \le \epsilon.$ \section{Explicit Self-normalization} \label{sec:explicit} In this section, we review the two recently proposed language modeling methods that achieve self-normalization via explicit regularization, and then borrow from them to derive a novel regularized version of NCE. The standard language modeling learning method, which is based on a softmax output layer, is not self-normalized. To encourage its self-normalization, \newcite{Devlin} proposed to add to its training objective function an explicit penalty for deviating from self-normalization: \begin{equation} S_{Dev}=\sum_{w,c \in D} \Big[ (\vec{w} \cdot \vec {c} +b_w-\log Z_c) - \alpha (\log Z_c)^2 \Big] \label{Devlinobj} \end{equation} where $Z_c=\sum_{v \in V} \exp (\vec{v} \cdot \vec {c}+b_v)$ and $\alpha$ is a constant. The drawback of this approach is that at train time one still needs to compute the costly $Z_c$ explicitly. \newcite{Andreas_2015} proposed a more computationally efficient approximation of~(\ref{Devlinobj}) that eliminates $Z_c$ in the first term and computes the second term only on a sampled subset $D'$ of the corpus $D$: \begin{equation} S_{And}=\sum_{w,c \in D} (\vec{w} \cdot \vec {c} +b_w) - \frac{\alpha}{\gamma} \sum_{c \in D'} (\log Z_c)^2 \label{Andobj} \end{equation} where $\gamma < 1$ is an additional constant that determines the sampling rate, i.e., $|D'| = \gamma|D|$. They also provided analysis that justifies computing $Z_c$ only on a subset of the corpus by showing that if a given LM is exactly self-normalized on a dense set of contexts (i.e., each context $c$ is close to a context $c'$ s.t. $\log Z_{c'}=0$) then $E|\log Z_c |$ is small.
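The sampled regularization term $\frac{\alpha}{\gamma} \sum_{c \in D'} (\log Z_c)^2$ of Eq.~(\ref{Andobj}) can be sketched as follows. This is a minimal illustration written for this text; `logits_fn` is a hypothetical stand-in for the network's unnormalized output scores $\vec{w} \cdot \vec{c}+b_w$ over the vocabulary.

```python
import math
import random

def sampled_selfnorm_penalty(contexts, logits_fn, alpha, gamma, rng=random):
    """Sampled self-normalization penalty
    (alpha / gamma) * sum_{c in D'} (log Z_c)^2,
    where D' keeps each context with probability gamma and
    logits_fn(c) returns the unnormalized scores over the vocabulary."""
    penalty = 0.0
    for c in contexts:
        if rng.random() >= gamma:
            continue  # context not sampled into D'
        log_Z = math.log(sum(math.exp(s) for s in logits_fn(c)))
        penalty += log_Z ** 2
    return (alpha / gamma) * penalty
```

For an exactly self-normalized model ($Z_c=1$ for every $c$) the penalty vanishes regardless of which contexts are sampled, which is what makes the subsampling harmless.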
Inspired by this work, we propose a regularized variant of the NCE objective function~(\ref{eq:nceobjun}): \begin{equation} S_{nce-r}(m) =S_{nce}(m) - \frac{\alpha}{\gamma} \sum_{c \in D'} (\log Z_c)^2 \label{eq:ncerobj} \end{equation} This formulation allows us to further encourage the NCE self-normalization, still without incurring the cost of computing $Z_c$ for every word in the learning corpus. \input{experiments} \section{Conclusions} We reviewed and analyzed the two alternative approaches for self-normalization of language models, namely, using Noise Contrastive Estimation (NCE), which is inherently self-normalized, versus adding explicit self-normalizing regularization to a softmax objective function. Our empirical investigation compared these approaches, and by extending NCE language modeling with a lightweight explicit self-normalization, we also introduced a hybrid model that achieved both good self-normalization and perplexity performance, as well as little dependence of train time on the size of the vocabulary. To put our intrinsic evaluation results in perspective, we used the Sentence Completion Challenge as an example use case. The results suggest that it would be wise to test the sensitivity of the downstream task to self-normalization, in order to choose the most appropriate method. Finally, further analysis revealed unexpected correlations between self-normalization and perplexity performance, as well as between the partition function of self-normalized predictions and the entropy of the respective distribution. We hope that these insights will be useful for improving self-normalizing models in future work.
\section{Introduction} In the statistical-mechanical theory of liquids, the Kirkwood-Buff (KB) integral plays a distinguished role \cite{KB51,BN06,BNS09}. It has the form \begin{equation} \label{1} \II[F(r)]\equiv \int_0^\infty \dd r\,F(r), \end{equation} where, in the case of three-dimensional (3D) systems, $F(r)=4\pi r^2 h(r)$, $h(r)$ being the (total) pair correlation function \cite{BH76,HM06,S16}. The KB integral is the zero-wave-number limit ($q\to 0$) of the Fourier transform of $h(r)$, defined as \begin{align} \label{hq} \widetilde{h}(q)=&\int \dd^d\mathbf{r}\, e^{-i\mathbf{q}\cdot\mathbf{r}}h(r)\nn =&(2\pi)^{d/2}\int_0^\infty \dd r\, r^{d-1}\frac{J_{d/2-1}(qr)}{(q r)^{d/2-1}}h(r), \end{align} where $d$ is the number of spatial dimensions and $J_\nu(x)$ is the Bessel function of the first kind. Therefore, $\widetilde{h}(q)$ has the structure of Eq.\ \eqref{1}, this time (again for 3D systems, $d=3$) with $F(r)=({4\pi}/{q})r\sin(qr)h(r)$. Apart from the limit $q\to 0$, the physical importance of $\widetilde{h}(q)$ lies in its direct relation to the static structure factor \cite{HM06,S16}, namely \begin{equation} \label{Sq} S(q)=1+\rho \widetilde{h}(q), \end{equation} where $\rho$ is the number density of the fluid. If $h(r)$ is obtained from computer simulations or from numerical solutions of integral-equation theories, its knowledge is limited to a finite range $r<L$, so that the conventional method consists in estimating the KB integral or the structure factor by a truncated integral, i.e., \begin{equation} \label{0} \II[F(r)]\simeq \int_0^L \dd r\,F(r). \end{equation} On the other hand, the correlation function $h(r)$ is usually \emph{oscillatory}, which generally makes the convergence of the estimate \eqref{0} rather slow. It is then highly desirable to devise alternative approximate methods to estimate improper integrals of the form $\II[F(r)]$ that, while relying upon the knowledge of $F(r)$ for $r<L$ only, are much more efficient than Eq.\ \eqref{0}. 
The general problem of computing highly oscillatory integrals has given rise to a large body of work by applied mathematicians, as summed up in a recent monograph \cite{DHI17}. A method recently proposed in the physics literature \cite{KSBKVS13,KV18} consists in approximating Eq.\ \eqref{1} by a finite-size integral of the form \begin{equation} \label{2} \II_L[F(r)]\equiv \int_0^L \dd r\,F(r) W(r/L), \end{equation} with an appropriate \emph{weight} function $W(x)\neq 1$. Of course, the computational problem described above is not limited to KB integrals and structure factors but extends, with different physical interpretations of the isotropic oscillatory function $F(r)$, to other branches of physics where improper integrals of the form \eqref{1} are relevant. In those other more general cases, $r$ need not represent a spatial variable but may instead be, for instance, a wave number or a time variable. In Ref.\ \cite{KV18}, Kr\"uger and Vlugt proposed a simple, practical, and accurate general prescription to approximate an improper integral of the form \eqref{1} by the finite-size integral \eqref{2}, where the weight function $W(x)$ is given by \begin{equation} \label{3} W_3^{(2)}(x)=1 - \frac{23x^3}{8} + \frac{3 x^4}{4} + \frac{9 x^5}{8}. \end{equation} More specifically, \begin{equation} \label{3b} \II[F(r)]= \II_L[F(r)]+{O}(L^{-3}). \end{equation} Let me rephrase and summarize the two main steps leading to the derivation of Eqs.\ \eqref{3} and \eqref{3b}. First, it is tacitly assumed that $\II[F(r)]$ comes from the 3D volume integral \begin{equation} \label{4} \II[F(r)]=\int \frac{\dd^3\mathbf{r}}{4\pi r^2} {F(r)}, \end{equation} after passing to spherical coordinates and integrating over the angular variables.
Next, use is made of the authors' proof that \begin{equation} \label{5} \int \frac{\dd^3\mathbf{r}}{4\pi r^2}{F(r)}y_3(r/L)=\II[F(r)]-\frac{3}{2L}\II[F(r)r]+{O}(L^{-3}), \end{equation} where $4\pi y_3(x)$ is the intersection volume of two spheres of unit diameter separated by a distance $x$, i.e., \begin{equation} \label{6} y_3(x)=\left(1-\frac{3x}{2}+\frac{x^3}{2}\right)\Theta(1-x). \end{equation} Actually, the proof in Ref.\ \cite{KV18} extends Eq.\ \eqref{5} to non-spherical shapes, in which case the function $y_3(x)$ depends on the particular shape, $L=6V/A$ ($V$ and $A$ being the volume and surface area, respectively), and, in general, ${O}(L^{-3})\to {O}(L^{-2})$. On the other hand, a spherical shape, and hence Eq.\ \eqref{6}, is needed for the derivation of Eq.\ \eqref{3} as \begin{equation} \label{7} W_3^{(2)}(x)=y_3(x)\left(1+\frac{3x}{2}+\frac{9x^2}{4}\right). \end{equation} \begin{table}[htb] \caption{Coefficient $a_d$ and function $y_d(x)$ for dimensionalities $1\leq d\leq 9$. The Heaviside function $\Theta(1-x)$ is omitted for clarity.} \label{table:1} \begin{ruledtabular} \begin{tabular}{lll} $d$ & $a_d$&$y_d(x)$\\ \hline $1$&$1$&$1-x$\\ \vvss $2$&$\displaystyle{\frac{4}{\pi}}$&$\displaystyle{\frac{2}{\pi}\left(\cos^{-1}x-{x}\sqrt{1-x^2}\right)}$\\ \vvss $3$&$\displaystyle{\frac{3}{2}}$&$\displaystyle{(1-x)^{2}\left(1+\frac{x}{2}\right)}$\\ \vvss $4$&$\displaystyle{\frac{16}{3\pi}}$&$\displaystyle{\frac{2}{\pi}\left[\cos^{-1}x-\frac{x}{3}\sqrt{1-x^2}(5-2x^2)\right]}$\\ \vvss $5$&$\displaystyle{\frac{15}{8}}$&$\displaystyle{(1-x)^{3}\left(1+\frac{9x}{8}+\frac{3x^2}{8}\right)}$\\ \vvss $6$&$\displaystyle{\frac{32}{5\pi}}$&$\displaystyle{\frac{2}{\pi}\left[\cos^{-1}x-\frac{x}{15}\sqrt{1-x^2}(33-26x^2+8x^4)\right]}$\\ \vvss $7$&$\displaystyle{\frac{35}{16}}$&$ \displaystyle{(1-x)^{4}\left(1 + \frac{29 x}{16} + \frac{5 x^2}{4} + \frac{5 x^3}{16}\right)}$\\ \vvss
$8$&$\displaystyle{\frac{256}{35\pi}}$&$\displaystyle{\frac{2}{\pi}\Big[\cos^{-1}x-\frac{x}{105}\sqrt{1-x^2}(279-326x^2}$\\ &&$\displaystyle{+200x^4-48x^6)\Big]}$\\ \vvss $9$&$\displaystyle{\frac{315}{128}}$&$ \displaystyle{(1-x)^{5}\left(1 + \frac{325 x}{128} + \frac{345 x^2}{128} + \frac{175 x^3}{128}+ \frac{35 x^4}{128}\right)}$\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[htb] \includegraphics[width=8cm]{W.eps} \caption{Plot of the weight functions $W_d^{(k)}(x)$ for embedding dimensionalities $d=3$, $5$, $7$, and $9$ and indices $k=2$ and $3$.\label{fig:W}} \end{figure} In their paper \cite{KV18}, Kr\"uger and Vlugt motivate the result posed by Eqs.\ \eqref{3} and \eqref{3b} as a useful way to estimate 3D KB integrals, in which case $F(r)=4\pi r^2 h(r)$. On the other hand, as said before, the result is not restricted \emph{a priori} to 3D KB integrals, i.e., $F(r)$ can be in principle any function such that the (formally one-dimensional) integral $\II[F(r)]$ converges. It is then tempting to wonder how the procedure summarized above would be generalized by freely assuming that the function $F(r)$ is embedded in a $d$-dimensional space and rewriting $\II[F(r)]$ as a $d$-dimensional volume integral. The main goal of this paper is to perform such an extension and, additionally, show that a choice $d\neq 3$ allows one to obtain alternative weight functions $W(x)$ that are generally more efficient than Eq.\ \eqref{3}, even in the case of 3D KB integrals and structure factors. The organization of the remainder of this paper is as follows. Section \ref{sec2} presents the extension to a generic dimensionality of the scheme devised in Ref.\ \cite{KV18}. This is followed in Sec.\ \ref{sec3} by a discussion on the application of the generalized method to the numerical or computational evaluation of KB integrals and static structure factors. Finally, the main conclusions of the paper are summarized in Sec.\ \ref{sec4}. 
\section{Embedding in a $d$-dimensional space} \label{sec2} Let us assume that the isotropic function $F(r)$ is embedded in a vector space of $d$ dimensions. In such a case, the counterpart of Eq.\ \eqref{4} is \begin{equation} \label{8} \II[F(r)]=\int \frac{\dd^d\mathbf{r}}{\Omega_d r^{d-1}}F(r), \end{equation} where $\Omega_d={2\pi^{d/2}}/{\Gamma(d/2)}$ is the total solid angle in $d$ dimensions. Following essentially the same steps as in Ref.\ \cite{KV18} to derive Eq.\ \eqref{5}, it is possible to generalize it as \begin{equation} \label{10} \int \frac{\dd^d\mathbf{r}}{\Omega_d r^{d-1}}{F(r)}y_d(r/L)=\II[F(r)]-\frac{a_d}{L}\II[F(r)r]+{O}(L^{-3}), \end{equation} where \begin{equation} \label{11} a_d\equiv\frac{2\pi^{-1/2}\Gamma(1+d/2)}{\Gamma(1/2+d/2)} \end{equation} and $\Omega_d y_d(x)$ is the intersection volume of two $d$-dimensional spheres of unit diameter separated by a distance $x$, so that $y_d(0)=1$ and $y_d(x)=0$ if $x\geq 1$. This quantity appears, for instance, in the context of the virial expansion of the pair correlation function \cite{S16}. Three equivalent representations of $y_d(x)$ are \begin{align} \label{yd} y_d(x)=&I_{1-x^2}\left(1/2+d/2,{1}/{2}\right)\nn &=1-a_d\int_0^x \dd t\,(1-t^2)^{(d-1)/2}\nn &=a_d\int_x^1 \dd t\,(1-t^2)^{(d-1)/2}, \end{align} where $I_z(a,b)=B_z(a,b)/B(a,b)$ is the {regularized} incomplete beta function \cite{AS72,OLBC10}. If $d=\text{odd}$, $y_d(x)-1$ is an odd polynomial of degree $d$ \cite{BC87,TS03,TS06}, namely \begin{equation} \label{12} y_d(x)=1-a_d\sum_{j=0}^{(d-1)/2}c_{j,d} x^{2j+1}, \end{equation} where \begin{equation} c_{j,d}\equiv \frac{(-1)^j\Gamma(1/2+d/2)}{(2j+1)j!\Gamma(1/2+d/2-j)}. \end{equation} If $d=\text{even}$, Eq.\ \eqref{12}, with the upper summation limit $(d-1)/2$ replaced by $\infty$, gives the power series expansion of $y_d(x)$.
In such a case ($d=\text{even}$), $y_d(x)$ can be more conveniently expressed as \begin{equation} y_d(x)=\frac{2}{\pi}\left[\cos^{-1}x-{x}\sqrt{1-x^2}P_{d/2-1}(x^2)\right], \end{equation} where \begin{equation} P_m(z)=\sum_{j=0}^m p_{j,m} z^j \end{equation} is a polynomial of degree $m$ with coefficients given by $p_{0,m}=\frac{\pi}{2}a_{2m+2}-1$ and the recurrence relation ($j\geq 1$) \begin{equation} p_{j,m}=\frac{1}{2j+1}\left[2jp_{j-1,m}+\frac{\pi}{2}\frac{a_{2m+2}(-1)^j(m+1)!}{j!(m+1-j)!}\right]. \end{equation} Regardless of whether $d$ is even or odd, it can be seen from Eq.\ \eqref{yd} that $y_d(x)\sim (1-x)^{(d+1)/2}$ in the region $x\lesssim 1$. The explicit values of $a_d$ and expressions of $y_d(x)$ for embedding dimensionalities $1\leq d\leq 9$ are given in Table \ref{table:1}. Henceforth, for simplicity, only the cases $d=\text{odd}$ will be explicitly shown because the corresponding functions $y_d(x)$ are just polynomials. However, it can be checked that the results for $d=\text{even}$ follow patterns similar to those for $d=\text{odd}$. In fact, the results obtained with dimensionalities $d$ and $d+1$ become closer and closer as $d$ increases. The replacement $F(r)\to F(r) r^n$ in Eq.\ \eqref{10} yields (provided the integrals exist) \begin{align} \label{13} \II[F(r)r^n]=&\int \frac{\dd^d\mathbf{r}}{\Omega_d r^{d-1}}F(r)r^ny_d(r/L)\nn &+\frac{a_d}{L} \II[F(r)r^{n+1}]+{O}(L^{-3}). \end{align} Recursive application of Eq.\ \eqref{13} in Eq.\ \eqref{10} up to $n=k$ gives \begin{equation} \label{14} \II[F(r)]= \II_{L,d}^{(k)}[F(r)]+{O}(L^{-3}), \end{equation} where \begin{equation} \label{15} \II_{L,d}^{(k)}[F(r)]\equiv \int_0^L \dd r\,F(r) W_{d}^{(k)}(r/L), \end{equation} \begin{equation} \label{16} W_{d}^{(k)}(x)\equiv y_d(x)\sum_{n=0}^k (a_dx)^n. \end{equation} Equations \eqref{14}, \eqref{15}, and \eqref{16} generalize Eqs.\ \eqref{3b}, \eqref{2}, and \eqref{3}, respectively, which correspond to the particular choices $d=3$ and $k=2$. 
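As a numerical illustration (a sketch written for this text, using simple quadrature in place of the closed forms; the function names are chosen here for convenience), the quantities $a_d$, $y_d(x)$, and $W_d^{(k)}(x)$ of Eqs.\ \eqref{11}, \eqref{yd}, and \eqref{16} can be evaluated directly and checked against Eq.\ \eqref{3} and Table \ref{table:1}:

```python
import math

def a_d(d):
    # a_d = 2 pi^(-1/2) Gamma(1 + d/2) / Gamma(1/2 + d/2)
    return (2.0 / math.sqrt(math.pi)
            * math.gamma(1.0 + d / 2.0) / math.gamma(0.5 + d / 2.0))

def y_d(d, x, n=4000):
    # Overlap function from the integral representation
    # y_d(x) = a_d int_x^1 dt (1 - t^2)^((d-1)/2), with y_d(x) = 0 for x >= 1,
    # evaluated by the composite trapezoidal rule.
    if x >= 1.0:
        return 0.0
    h = (1.0 - x) / n
    f = [(1.0 - (x + i * h) ** 2) ** ((d - 1) / 2.0) for i in range(n + 1)]
    return a_d(d) * h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

def W_dk(d, k, x):
    # Weight function W_d^(k)(x) = y_d(x) * sum_{n=0}^k (a_d x)^n
    return y_d(d, x) * sum((a_d(d) * x) ** n for n in range(k + 1))

# Checks against Table 1 and the Kruger-Vlugt weight W_3^(2):
assert abs(a_d(3) - 3.0 / 2.0) < 1e-12
assert abs(a_d(5) - 15.0 / 8.0) < 1e-12
x = 0.5
assert abs(y_d(3, x) - (1 - x) ** 2 * (1 + x / 2)) < 1e-6
W32 = 1 - 23 * x**3 / 8 + 3 * x**4 / 4 + 9 * x**5 / 8
assert abs(W_dk(3, 2, x) - W32) < 1e-6
```

The same routine works for even $d$, since the integral representation of $y_d(x)$ does not require the polynomial closed forms.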
Incidentally, the choice $d=1$ with $k=2$ leads to $W_1^{(2)}(x)=1-x^3$, which is the weight function proposed in Ref.\ \cite{KSBKVS13} by a different method. Figure \ref{fig:W} shows the weight functions \eqref{16} with $d=3$, $5$, $7$, and $9$, and $k=2$ and $3$. All of them have a similar qualitative shape, but, due to the behavior $y_d(x)\sim (1-x)^{(d+1)/2}$, the curves have a flatter shape near $x=1$ as $d$ increases. While the functions $W_d^{(2)}(x)$ decay monotonically with increasing $x$, the functions $W_d^{(3)}(x)$ present a practically unobservable maximum at $0.09<x<0.10$. This maximum, however, becomes more noticeable as $k$ increases (not shown). In fact, $W_d^{(k)}(x)\to y_d(x)(1-a_d x)^{-1}$ in the limit $k\to\infty$, so that it artificially diverges at $x=a_d^{-1}<1$. In practice, it turns out that the results obtained with $k\geq 4$ are generally worse than those obtained with $k=3$ (not shown). Because of this, in what follows only the cases $k=2$ and $k=3$ will be explicitly considered. Notice that \begin{equation} W_d^{(k)}(x)=1+{O}(x^{3}),\quad k\geq 2, \end{equation} so that the influence of the choice of $d$ and $k\geq 2$ on $W_d^{(k)}(r/L)$ is of ${O}(L^{-3})$, i.e., of the same order as the terms neglected in Eq.\ \eqref{14}. On the other hand, from a practical point of view, the error $\left|\II_{L,d}^{(k)}[F(r)]-\II[F(r)]\right|$ can be minimized by an appropriate choice of the embedding dimensionality $d$ and of the index $k$ for a given function $F(r)$ and a given cutoff distance $L$. \begin{figure}[htb] \includegraphics[width=8cm]{Error_vs_L.eps} \caption{Plot of the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $L$, where $F(r)=4\pi r^2 h(r)$ and $h(r)$ is given by Eq.\ \eqref{h(r)}. Panels (a) and (b) correspond to $\chi=2$ and $\chi=20$, respectively.
Only the values corresponding to $L=\text{integer}$ are shown.\label{fig:Error_vs_L}} \end{figure} \begin{figure}[htb] \includegraphics[width=8cm]{Error_vs_chi.eps} \caption{Plot of the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $\chi$, where $F(r)=4\pi r^2 h(r)$ and $h(r)$ is given by Eq.\ \eqref{h(r)}. Panels (a) and (b) correspond to $L=5$ and $L=20$, respectively. \label{fig:Error_vs_chi} } \end{figure} \begin{figure}[htb] \includegraphics[width=8cm]{Error_vs_L_PY.eps} \caption{Plot of the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $L$, where $F(r)=4\pi r^2 h(r)$ and $h(r)$ is the exact solution of the PY integral equation for hard spheres. Panels (a) and (b) correspond to $\phi=0.2$ and $\phi=0.5$, respectively. Only the values corresponding to $L=\text{integer}$ are shown.\label{fig:Error_vs_L_PY}} \end{figure} \begin{figure}[htb] \includegraphics[width=8cm]{Error_vs_eta_PY.eps} \caption{Plot of the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $\phi$, where $F(r)=4\pi r^2 h(r)$ and $h(r)$ is the exact solution of the PY integral equation for hard spheres. Panels (a) and (b) correspond to $L=5$ and $L=10$, respectively. \label{fig:Error_vs_eta_PY}} \end{figure} \begin{figure}[htb] \includegraphics[width=8cm]{1D.eps} \caption{Plot of the $\II_{L,d}^{(k)}[F(r)]$ versus $L^{-3}$ ($L\geq 5$), where $F(r)=2 h(r)$ and $h(r)$ is the exact pair correlation function for a 1D system of hard rods at a packing fraction $\phi=0.8$. The inset is a magnification of the small framed region ($L\geq 10$) in the main figure. \label{fig:1D}} \end{figure} \begin{figure}[htb] \includegraphics[width=8cm]{Error_vs_q_PY.eps} \caption{Plot of the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $q$, where $F(r)=(4\pi/q) r\sin(qr) h(r)$ and $h(r)$ is the exact solution of the PY integral equation for hard spheres at a packing fraction $\phi=0.5$. 
Panels (a) and (b) correspond to $L=5$ and $L=10$, respectively. \label{fig:Error_vs_q_PY}} \end{figure} \section{Discussion} \label{sec3} One might reasonably argue that the choice of the embedding dimensionality $d$ in the approximation \eqref{14} must be dictated by the dimensionality of the physical problem underlying the evaluation of the (one-dimensional) integral $\II[F(r)]$. According to this reasoning, if the physical problem consists in the computation of the 3D KB integral, i.e., $F(r)=4\pi r^2 h(r)$, or of the 3D structure factor, i.e., $F(r)=(4\pi/q) r\sin(qr) h(r)$, then one should take $d=3$. On the other hand, from a strict mathematical point of view, the integral one wants to approximate by application of Eq.\ \eqref{14} is blind to the physical origin of the problem, so one can always assume that $F(r)$ is embedded in a higher-dimensional space. \subsection{Three-dimensional Kirkwood-Buff integrals} To further elaborate on the previous point, let us take $F(r)=4\pi r^2 h(r)$ and consider the same model 3D pair correlation function as given by Eq.\ (25) of Ref.\ \cite{KV18}, namely \begin{equation} \label{h(r)} h(r)= \begin{cases} -1,&\displaystyle{r<\frac{19}{20}},\\ \displaystyle{\frac{3\cos\left[2\pi\left(r-\frac{21}{20}\right)\right]}{2r}e^{-(r-1)/\chi}},&r>\displaystyle{\frac{19}{20}}, \end{cases} \end{equation} where $\chi$ represents the correlation length. As said before, the discussion is restricted to odd dimensionalities $d=3$, $5$, $7$, and $9$, and to indices $k=2$ and $3$. Figures \ref{fig:Error_vs_L}(a) and \ref{fig:Error_vs_L}(b) show the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $L$ for $\chi=2$ and $\chi=20$, respectively. Although not shown, in the case $\chi=20$ one can check that the error presents rapid oscillations as a function of $L$, except for the combinations $(d,k)=(7,2)$, $(9,2)$, and $(9,3)$. 
To make the general picture cleaner, only integer values of $L$ are considered in Fig.\ \ref{fig:Error_vs_L}. We observe that an appropriate choice of $(d,k)$ can significantly reduce the error. In contrast to what is inferred from Ref.\ \cite{KV18}, the cases with $k=3$ generally perform better than with $k=2$. On the other hand, the optimal dimensionality $d$ depends on the correlation length: it is $d=3$ for $\chi=2$ and $d=7$ for $\chi=20$. Interestingly, when even values of $d$ are included, the best choices are $d=2$ (outperforming $d=3$) and $d=6$ (outperforming $d=7$) for $\chi=2$ and $\chi=20$, respectively (not shown). To investigate the influence of the correlation length $\chi$, Figures \ref{fig:Error_vs_chi}(a) and \ref{fig:Error_vs_chi}(b) show the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $\chi$ for $L=5$ and $L=20$, respectively. The best behavior is obtained with $d=3$ if $L=5$ and with $d=7$ if $L=20$, in both cases with $k=3$. Although not shown, it turns out that $d=6$ outperforms $d=7$ if $L=20$. As a second (and more realistic) illustrative example, let us take the exact solution of the Percus-Yevick (PY) integral equation for 3D hard spheres \cite{W63,T63,W64,AL66,S16,LNP_book_note_13_10}, which is exactly known for any packing fraction $\phi$. The results are displayed in Figs.\ \ref{fig:Error_vs_L_PY} and \ref{fig:Error_vs_eta_PY}. Again, the choices with $k=3$ are typically more accurate than with $k=2$. Also, as happened in the case of Eq.\ \eqref{h(r)}, the optimal choice of $d$ depends on the range of $h(r)$: while $d=3$ is appropriate for $\phi=0.2$, $d=7$ is preferable for $\phi=0.5$. When even dimensionalities are included (not shown), the best results are again obtained with $d=2$ and $d=6$ for $\phi=0.2$ and $\phi=0.5$, respectively.
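The weighted-cutoff estimates discussed above are easy to reproduce numerically. The following sketch assumes that the estimator of Eq.\ \eqref{14} takes the form $\II_{L,d}^{(k)}[F(r)]=\int_0^L F(r)\,W_d^{(k)}(r/L)\,dr$ (the precise form is defined earlier in the paper) and uses the explicit weight function $W_7^{(3)}$ quoted in the Conclusion, applied to the model $h(r)$ of Eq.\ \eqref{h(r)} with $\chi=2$; the reference value is a brute-force integration with a very large cutoff.

```python
import math

CHI = 2.0  # correlation length chi in the model h(r)

def h(r):
    # Model pair correlation function of Eq. (h(r)) (Eq. (25) of Ref. KV18)
    if r < 19/20:
        return -1.0
    return 3*math.cos(2*math.pi*(r - 21/20))/(2*r)*math.exp(-(r - 1)/CHI)

def F(r):
    # Integrand of the 3D Kirkwood-Buff integral
    return 4*math.pi*r*r*h(r)

def W73(x):
    # Weight function W_7^{(3)}(x) (d = 7, k = 3) quoted in the Conclusion
    return ((1 - x)**4*(1 + 35*x/16)*(1 + 1225*x*x/256)
            *(1 + 29*x/16 + 5*x*x/4 + 5*x**3/16))

def simpson(f, a, b, n):
    # Composite Simpson rule with an even number n of panels
    step = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i*step)*(4 if i % 2 else 2)
    return s*step/3

def estimate(L):
    # Assumed form of the estimator: integral of F(r) W(r/L) on [0, L],
    # split at r = 19/20 where h(r) jumps
    g = lambda r: F(r)*W73(r/L)
    return simpson(g, 0.0, 19/20, 2000) + simpson(g, 19/20, L, 4000)

# Reference value: bare integral with a cutoff far beyond the correlation length
reference = simpson(F, 0.0, 19/20, 2000) + simpson(F, 19/20, 200.0, 200000)
print(reference, estimate(20.0))
```

Since $W_7^{(3)}(0)=1$ and $W_7^{(3)}(1)=0$, the weight leaves the small-$r$ region essentially untouched and smoothly suppresses the truncated oscillatory tail.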
\subsection{One-dimensional Kirkwood-Buff integrals} In the case of more general functions $F(r)$ where the sought integral $\II[F(r)]$ is not known, the optimal choice of the embedding dimensionality $d$ and the index $k$ can be estimated by plotting $\II_{L,d}^{(k)}[F(r)]$ versus $L^{-3}$ for several combinations of $(d,k)$ and selecting the one with the smoothest variation allowing for an easy extrapolation to $L^{-3}\to 0$. To illustrate this method, let us now consider the one-dimensional (1D) KB integral of hard rods (Tonks gas). In that case, $F(r)=2 h(r)$ is exactly known \cite{T36,SZK53,LZ71,HC04,S07,S16,FS17,LNP_book_note_13_08}, but we can pretend that the associated KB integral $\II[F(r)]$ is unknown. Figure \ref{fig:1D} shows the integrals $\II_{L,d}^{(k)}[F(r)]$ versus $L^{-3}$ at a packing fraction $\phi=0.8$. In all the cases, the integrals $\II_{L,d}^{(k)}[F(r)]$ are seen to converge to the exact value $\II[F(r)]=\phi-2=-1.2$. In general, the amplitudes of the oscillations are smaller with $k=2$ than with $k=3$ and decrease as the embedding dimensionality $d$ increases. On the other hand, the slopes of the lines around which the oscillations take place are smaller with $k=3$ than with $k=2$ and decrease as $d$ decreases. Thus, the optimal choice of $(d,k)$ would depend on the accessible region of $L$: if $L\sim 5$, $(d,k)=(9,3)$ seems to be a good choice for the extrapolation to $L^{-3}\to 0$, while $(d,k)=(7,3)$ seems preferable if $L\sim 10$. \subsection{Three-dimensional structure factors} As shown by Eqs.\ \eqref{hq} and \eqref{Sq}, a relevant physical quantity directly related to integrals of the form \eqref{1} is the static structure factor of a liquid. In the case of 3D systems, $\widetilde{h}(q)=\II[F(r)]$ with $F(r)=(4\pi/q)r\sin (qr)h(r)$, which reduces to the KB integral in the limit $q\to 0$. If $q\neq 0$, the oscillations of $F(r)$ are not only due to $h(r)$ but also to the term $\sin(qr)$. 
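As a simple numerical illustration of this structure-factor setup, the sketch below evaluates the weighted estimate of $\widetilde{h}(q)$ at $q=2$, using the model $h(r)$ of Eq.\ \eqref{h(r)} with $\chi=2$ as a stand-in for the PY solution (whose closed form is lengthy). The form of the estimator, $\II_{L,d}^{(k)}[F(r)]=\int_0^L F(r)\,W_d^{(k)}(r/L)\,dr$, is an assumption here, and the weight used is the $W_7^{(3)}$ quoted in the Conclusion; the reference value is a brute-force integration with a very large cutoff.

```python
import math

CHI = 2.0  # correlation length chi in the model h(r)
Q = 2.0    # wave number q at which the estimate of h~(q) is evaluated

def h(r):
    # Model pair correlation function of Eq. (h(r)), standing in for the PY solution
    if r < 19/20:
        return -1.0
    return 3*math.cos(2*math.pi*(r - 21/20))/(2*r)*math.exp(-(r - 1)/CHI)

def F(r):
    # Structure-factor integrand F(r) = (4*pi/q) r sin(qr) h(r)
    return 4*math.pi/Q*r*math.sin(Q*r)*h(r)

def W73(x):
    # d = 7, k = 3 weight function quoted in the Conclusion
    return ((1 - x)**4*(1 + 35*x/16)*(1 + 1225*x*x/256)
            *(1 + 29*x/16 + 5*x*x/4 + 5*x**3/16))

def simpson(f, a, b, n):
    # Composite Simpson rule with an even number n of panels
    step = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i*step)*(4 if i % 2 else 2)
    return s*step/3

def estimate(L):
    # Assumed estimator: integral of F(r) W(r/L) on [0, L], split at the jump of h
    g = lambda r: F(r)*W73(r/L)
    return simpson(g, 0.0, 19/20, 2000) + simpson(g, 19/20, L, 4000)

reference = simpson(F, 0.0, 19/20, 2000) + simpson(F, 19/20, 200.0, 200000)
print(reference, estimate(10.0))
```

Note that for $q\neq 0$ the integrand oscillates with the combined frequencies $q\pm 2\pi$, so the weighted tail suppression operates exactly as in the KB case.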
Therefore, the optimization of the numerical or computational estimate of $\widetilde{h}(q)$ when $h(r)$ is known only for $r<L$ is again an extremely important goal. Let us take once more the exact solution of the PY integral equation for hard spheres \cite{W63,T63,W64,AL66,S16,LNP_book_note_13_10} as a physically motivated benchmark to assess the performance of the approximations $\II[F(r)]\simeq \II_{L,d}^{(k)}[F(r)]$, this time as functions of the wave number $q$. Figure \ref{fig:Error_vs_q_PY} shows the relative error $\left|\II_{L,d}^{(k)}[F(r)]/\II[F(r)]-1\right|$ versus $q$, where $F(r)=(4\pi/q) r\sin(qr) h(r)$ and $h(r)$ is the PY pair correlation function at a packing fraction $\phi=0.5$. The oscillations in the $q$-dependence of the relative error are typically smaller with $k=2$ than with $k=3$, and tend to smooth out as $d$ increases. As happened for the KB integrals (see Figs.\ \ref{fig:Error_vs_L}--\ref{fig:1D}), the error is generally smaller if $k=3$ than if $k=2$. As for the influence of the embedding dimensionality $d$, we see in Fig.\ \ref{fig:Error_vs_q_PY} that the best general estimates are obtained with $d=7$ and $d=3$ for a cutoff value $L=5$ and $L=10$, respectively, this time outperforming $d=6$ and $d=2$ (not shown). \section{Conclusion} \label{sec4} In summary, the generalization to any embedding dimensionality $d$ and any index $k$ of the weight function $W_3^{(2)}(x)$, Eq.\ \eqref{3}, proposed in Ref.\ \cite{KV18} can significantly improve the cutoff estimate $\II_{L,d}^{(k)}[F(r)]$ of the improper integral $\II[F(r)]$ of an oscillatory function, even if $\II[F(r)]$ represents a KB integral corresponding to a 3D or 1D pair correlation function. In the cases of KB integrals and structure factors, the results reported here show that an optimal choice of the index is $k=3$. 
As for the embedding dimensionality, its optimal value tends to increase as the correlation length increases, i.e., as the error due to the finite cutoff distance $L$ grows. As a practical compromise between simplicity and accuracy, a recommended weight function seems to be the one corresponding to $d=7$ and $k=3$, namely \begin{align} W_7^{(3)}(x)=&(1-x)^4\left(1+\frac{35x}{16}\right)\left(1+\frac{1225x^2}{256}\right)\nn &\times\left(1+\frac{29x}{16}+\frac{5x^2}{4}+\frac{5x^3}{16}\right). \end{align} It must be noted that any method based on Eq.\ \eqref{2} with a weight function $0< W(x)<1$ ceases to be valid if the integrand $F(r)$ is not asymptotically an oscillatory function; if the magnitude of $F(r)$ decays monotonically, then the bare truncated integral \eqref{0} itself represents a better estimate than Eq.\ \eqref{2}. Thus, in the case of interaction potentials with an attractive tail, Eq.\ \eqref{2} must be discarded in the computational evaluation of KB integrals below the so-called Fisher-Widom line \cite{FW69,EHHPS93,VRL95,B96,DE00,TCV03,HRYS18}, where $h(r)$ decays monotonically. On the other hand, even in that case, Eq.\ \eqref{2} may be useful for the evaluation of the structure factor for moderate wave numbers. Finally, let me point out that, while the examples addressed in this paper have been related to liquid state physics, given the ubiquitous appearance of integrals involving oscillatory integrands and semi-infinite intervals in many fields of physics, one would expect that the results presented here will be useful for other physical problems as well. \begin{acknowledgments} The author is indebted to Mariano L\'opez de Haro for insightful discussions about the topic of this paper and to Arieh Iserles for calling Ref.\ \cite{DHI17} to his attention.
Financial support from the Spanish Agencia Estatal de Investigaci\'on through Grant No.\ FIS2016-76359-P and the Junta de Extremadura (Spain) through Grant No.\ GR18079, both partially financed by Fondo Europeo de Desarrollo Regional funds, is gratefully acknowledged. \end{acknowledgments}
\section{acknowledgements} I've benefited from numerous useful conversations whilst this work was done. I'd like in particular to thank Vassili Gorbounov, Ian Grojnowski, Owen Gwilliam and Nitu Kitchloo. Further I'd like to thank MPIM Bonn and all the staff there for providing a wonderful working environment. \section{introduction} In the paper \cite{MSV}, the authors introduce a novel infinite dimensional extension of the complex of differential forms associated to a smooth $\mathbb{C}$-variety $X$. They call it the \emph{Chiral de Rham Complex}, and denote it $\Omega^{ch}_{X,dR}$. This complex has the structure of a differential graded vertex algebra with differential $d^{ch}_{dR}$. It is endowed with an additional \emph{conformal} grading by non-negative integers, such that the weight $0$ subspace agrees with the classical de Rham complex. This sheaf of vertex algebras can be viewed as a non-linear analogue of the $bc$-$\beta\gamma$ system, familiar to physicists. The authors construct $\Omega^{ch}_{X,dR}$ via an etale gluing procedure after writing it down for $\mathbb{A}^{d}$, in which case it is simply a tensor product of $bc$-$\beta\gamma$ systems. Note that an alternative construction in terms of $D$-modules on the algebraic loop space of $X$ was given by Kapranov and Vasserot in \cite{KV}. \\ \\ As well as $\Omega^{ch}_{X,dR}$, one may define analogues of the sheaf of polyvectors and the Hodge complex (the de Rham complex with deleted differential) on $X$. We denote these respectively by $\Theta^{ch}_{X}$ and $\Omega^{ch}_{X}$. $\Theta^{ch}_{X}$ is a differential graded vertex algebra (dgVA henceforth) and it acts on $\Omega^{ch}_{X}$. Further this module admits an extra differential coming from $d^{ch}_{dR}$. This is simply to say that there is an enhancement of the usual calculus associated to $X$ to a semi-infinite version (we use this term only suggestively) thereof.
Assuming for simplicity that $X$ is affine with $\mathcal{O}(X)=A$, the action of polyvectors on differential forms can be obtained from the Hochschild (co)homology package associated to $A$, as can the de Rham differential. This is simply the famous HKR isomorphism. Crucially, this package exists for arbitrary associative (differential graded) $\mathbb{C}$-algebras. According to Kontsevich (\cite{Ko}) we should view it as a non-commutative version of Hodge theory. It is the goal of this note to provide some simple examples in which this nc-Hodge package can be enhanced to a semi-infinite version of such. \\ \\ If $(X,f)$ is a Landau-Ginzburg model, then there is attached a $\mathbb{Z}/2$-graded dg-category of matrix factorizations, $\mathbf{MF}(X,f)$. The nc-Hodge package associated to this is by now well known to correspond to twisting the usual complexes of forms or polyvectors by the action of the element $df$. As such, and owing to beautiful work of Sabbah (\cite{Sa}), it is closely related to vanishing cycles cohomology of the pair $(X,f)$. We'll see that in this case a chiral enhancement exists, and moreover that we can say a fair deal about it. In particular we'll describe the representation theory of the vertex algebra we construct as well as proving a finiteness result allowing us to construct an enhancement of the vanishing cycles Euler characteristic, $\chi_{van}(f)\in\mathbb{Z}$, to a refined version $\chi^{ch}_{van}(f)\in\mathbb{Z}[[q]].$ \section{Recollections} \subsection{Basics of vertex algebra theory}We include here some background on the theory of vertex algebras, largely in order to fix some notation. Our main references throughout will be the books of Kac, \cite{Kac}, and of Ben-Zvi and Frenkel, \cite{BZF}. \begin{definition} A differential graded vertex algebra or dgVA is a tuple $(V,\partial_{V},T,\Omega,Y)$ where: \begin{itemize} \item $V$ a dg- vector space, with cohomological differential $\partial_{V}$.
\item $T$ an endomorphism of $V$, referred to as the \emph{infinitesimal translation}. Note that $T$ is assumed an endomorphism of $V$ considered as a dg- vector space, so that it is of cohomological degree $0$ and $[T,\partial_{V}]=0$.\item $\Omega$ a cycle of cohomological degree $0$, referred to as the \emph{vacuum vector} of $V$. \item $Y:V\otimes V\longrightarrow V((z))$ a multiplication map. We will package this as associating to any $a\in V$ a \emph{field} $a(z)\in End(V)[[z^{-1},z]]$. We can think of this as a family of endomorphisms, $a_{(n)}$, so that $a(z)=\sum_{n}a_{(n)}z^{n}$. The reader is cautioned that this differs from standard indexing conventions.\end{itemize} These data are further assumed to satisfy the following conditions; \begin{itemize} \item The field associated to the vacuum vector is the identity. \item $T$ kills the vacuum vector, i.e. $T(\Omega)=0$. \item For all $v\in V$, $v(z)\Omega = v+\mathcal{O}(z)$.\item For all vectors $a$ we have $[T,a(z)]=\partial_{z}a(z)$. \item For all $a,b\in V$, the fields $a(z)$ and $b(w)$ are mutually \emph{local}. That is to say for $N\gg 0$ we have $(z-w)^{N}[a(z),b(w)]=0$.\item The differential $\partial_{V}$ acts as a \emph{derivation} of $V$ in the sense that we have $[\partial_{V},a(z)]=(\partial_{V}a)(z)$ for all vectors $a$. \end{itemize} \end{definition} The vertex algebras with which we will deal in this note will all be equipped with an additional grading by non-negative integers. We will refer to such gradings as \emph{conformal}. \begin{definition} Let $V$ be a vertex algebra. A grading $V=\bigoplus_{q\geq 0}V^{(q)}$ is said to be a \emph{conformal} grading if the following conditions are satisfied;\begin{itemize}\item The vacuum vector $\Omega$ is of degree $0$. \item The infinitesimal translation $T$ is of degree $1$.
\item If $a\in V^{(q)}$ then the endomorphism $a_{(n)}$ is of degree $q+n$.\end{itemize}\end{definition} We will not include the definition of a module for a dgVA here, beyond to say that it is a dg- vector space $M$ equipped with an action map $V\otimes M\longrightarrow M((z))$ satisfying conditions guaranteeing for example that $V$ is naturally a module over itself. There is also a notion of a conformally graded module for a conformally graded vertex algebra. We refer the reader to \cite{BZF} for further details. \subsection{Examples of Vertex Algebras} We now sketch some basic examples of dgVAs. We let $\mathcal{H}$ be the infinite dimensional Heisenberg algebra generated by elements $\{x_{i},y_{j}\}_{i,j\in\mathbb{Z}}$ with commutation relations $$[y_{i},x_{j}]=\delta_{i+j,0}.$$ There is an abelian Lie subalgebra $\mathcal{H}^{+}$ generated by elements $\{x_{<0},y_{\leq 0}\}$. Let us denote by $V$ the induction of the trivial $\mathcal{H}^{+}$-module to all of $\mathcal{H}$. It is well known that this induced module obtains the structure of a vertex algebra. Note that the underlying vector space of this vertex algebra is isomorphic to $$\mathrm{Sym}_{\mathbb{C}}\{x_{\geq0},y_{> 0}\}=\mathbb{C}\big[x_{i},y_{j+1}\big]_{i,j\geq0}.$$ We refer the reader to \cite{BZF} for an elucidation of the fields. There is an odd version of this. Consider the infinite dimensional Clifford algebra $Cl$ generated by elements $\{\phi_{i}\}_{i\in\mathbb{Z}}$ of cohomological degree $+1$ and elements $\{\psi_{i}\}_{i\in\mathbb{Z}}$ of degree $-1$ with the (super-) commutation relations $$[\psi_{i},\phi_{j}]=\delta_{i+j,0}.$$ As in the even case there is an abelian sub-algebra generated by $\{\phi_{<0},\psi_{\leq 0}\}$ and we can induce the trivial module for this to all of $Cl$. We denote the resulting module $\bigwedge$ and recall that it also admits the structure of a dgVA. We further define $V_{d}:=V^{\otimes d}$ and $\bigwedge_{d}:=\bigwedge^{\otimes d}$.
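The induced-module structure just described can be made concrete in a small computer experiment. The sketch below is a toy realization only (no fields and no normal ordering, hence not the vertex algebra structure itself): it realizes the underlying space $\mathbb{C}[x_{i},y_{j+1}]_{i,j\geq 0}$, lets the creation modes act by multiplication, and lets $y_{-j}$ act as $\partial/\partial x_{j}$ and $x_{-j}$ as $-\partial/\partial y_{j}$ (a labelling convention chosen here so as to be consistent with $[y_{i},x_{j}]=\delta_{i+j,0}$), then verifies the Heisenberg relations on sample states.

```python
# States of the Fock space C[x_i, y_{j+1}] are dicts:
# monomial (a sorted tuple of generators such as ('x', 1)) -> coefficient.

def add_to(out, mono, coeff):
    out[mono] = out.get(mono, 0) + coeff
    if out[mono] == 0:
        del out[mono]

def multiply(gen, state):
    # Creation mode: multiply every monomial by the generator gen.
    out = {}
    for mono, c in state.items():
        add_to(out, tuple(sorted(mono + (gen,))), c)
    return out

def derive(gen, state, sign=1):
    # Annihilation mode: act as sign * d/d(gen).
    out = {}
    for mono, c in state.items():
        mult = mono.count(gen)
        if mult:
            reduced = list(mono)
            reduced.remove(gen)
            add_to(out, tuple(reduced), sign * c * mult)
    return out

def x_mode(i):
    # x_i creates for i >= 0; x_{-j} (j > 0) acts as -d/dy_j.
    if i >= 0:
        return lambda s: multiply(('x', i), s)
    return lambda s: derive(('y', -i), s, sign=-1)

def y_mode(i):
    # y_i creates for i >= 1; y_{-j} (j >= 0) acts as d/dx_j.
    if i >= 1:
        return lambda s: multiply(('y', i), s)
    return lambda s: derive(('x', -i), s, sign=1)

def commutator(A, B, state):
    # [A, B] applied to a state; inputs are never mutated.
    result = A(B(state))
    for mono, c in B(A(state)).items():
        add_to(result, mono, -c)
    return result

vacuum = {(): 1}
# Sample state x_0 * x_1^2 * y_2 applied to the vacuum:
psi = {(('x', 0), ('x', 1), ('x', 1), ('y', 2)): 1}

# [y_{-1}, x_1] = 1, [y_{-1}, x_2] = 0, [y_0, x_0] = 1 on these states:
print(commutator(y_mode(-1), x_mode(1), psi))
print(commutator(y_mode(-1), x_mode(2), psi))
print(commutator(y_mode(0), x_mode(0), vacuum))
```

The choice of signs is forced: since $\partial/\partial y_{j}$ satisfies $[y_{j},\partial/\partial y_{j}]=-1$, the mode $x_{-j}$ must act as $-\partial/\partial y_{j}$ to give $[y_{j},x_{-j}]=1$.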
Finally we define $\bigwedge^{!}$ in a manner analogous to $\bigwedge$, except we induce from the subalgebra $\{\phi_{\leq0},\psi_{<0}\}$ \begin{definition} \begin{itemize}\item We define the dgVA $\Omega^{ch}_{\mathbb{A}^{d}}$ to be the tensor product algebra $V_{d}\bigotimes\bigwedge_{d}$. This is referred to as the sheaf of chiral differential forms on the affine $d$-space $\mathbb{A}^{d}$. \item We define the dgVA $\Theta^{ch}_{\mathbb{A}^{d}}$ to be the tensor product algebra $V_{d}\bigotimes\bigwedge^{!}_{d}$. This is referred to as the sheaf of polyvector fields on the affine $d$-space $\mathbb{A}^{d}$.\item The chiral de Rham differential, $d^{ch}_{dR,\mathbb{A}^{d}}$ is the derivation of $\Omega^{ch}_{\mathbb{A}^{d}}$ defined in operatorial terms as $\sum_{i\in\mathbb{Z}}\sum_{j=1,...,d}y^{j}_{i}\phi^{j}_{-i}.$ This can be checked to be a derivation of the vertex algebra (\cite{MSV}). The resulting dgVA is denoted $\Omega^{ch}_{dR,\mathbb{A}^{d}}.$ \end{itemize}\end{definition} The following theorem underlies the construction of the chiral de Rham complex associated to an arbitrary smooth variety $X$. Before stating it we note that the above three vertex algebras can be completed $(x^{1}_{0},...,x^{d}_{0})$-adically, obtaining $\widehat{\Omega}^{ch}_{\mathbb{A}^{d}}$ etc. We think of these as vertex algebras of the formal $d$-disc $\Delta_{d}$. We let $G_{d}$ be the pro-unipotent group of automorphisms of this formal scheme. The following theorem is proven in \cite{MSV}. \begin{theorem} The action of $G_{d}$ extends naturally to an action on the vertex algebras $\widehat{\Omega}^{ch}_{\mathbb{A}^{d}}$, $\widehat{\Theta}^{ch}_{\mathbb{A}^{d}}$ and $\widehat{\Omega}^{ch}_{dR,\mathbb{A}^{d}}$.\end{theorem} \begin{proof} Consult the original paper \cite{MSV}. 
\end{proof} As observed in \cite{MSV}, this theorem allows us to globalize the constructions from a disc $\Delta_{d}$ to any smooth $d$-dimensional variety via the method of \emph{Gelfand}-\emph{Kazhdan} \emph{formal geometry}. The resulting sheaves of vertex algebras will be denoted $\Omega^{ch}_{X}$ etc. There is an evident conformal grading on each of them, with $x_{i}$ of weight $i$ and so on. \subsection{Hodge Theory of Landau-Ginzburg Models} We will now briefly recall the basics of Hodge theory for a Landau-Ginzburg model. Note that according to work of numerous authors (we mention here A. Efimov, \cite{Efimov}, and A. Preygel, \cite{Prey}), this is equivalent to the nc-Hodge theory of the matrix factorization category associated to $(X,f)$. A word of caution: there are more sophisticated notions of Hodge theory for LG-models that we will not be touching on in this note, as it seems unlikely that such things are obtainable from nc-Hodge theory. The interested reader is referred to the numerous brilliant works of Sabbah and Mochizuki on \emph{exponential mixed Hodge modules}. \begin{definition} For an LG-model $(X,f)$ we define; \begin{itemize}\item $\Theta_{f}:=\Big(\bigwedge^{-*}\Theta_{X},\iota_{df}\Big)$ where $\Theta_{X}$ denotes the sheaf of polyvector fields on $X$ and $\iota$ denotes contraction. $\Theta_{f}$ will be referred to as the sheaf of polyvectors on $(X,f)$.\item $\Omega_{f}:=\Big(\bigwedge^{+*}\Omega_{X},df\wedge\Big).$ This will be referred to as the sheaf of differential forms on $(X,f)$. \item $\Omega_{dR,f}:=\Big(\bigwedge^{+*}\Omega_{X},d_{dR}+df\wedge\Big).$ This will be referred to as the de Rham complex on $(X,f)$.\end{itemize}\end{definition} \begin{remark} We note here that $\Theta_{f}$ is a (commutative) differential graded algebra with a module $\Omega_{f}$, and further that this module admits an extra differential. Note also that $\Omega_{f}$ \emph{is not} a differential graded algebra unlike in the case of vanishing potential.
\end{remark}\begin{example}An instructive simple example is obtained by taking $(X,f)$ to be $(\mathbb{A}^{1},z^{d+1})$. Here $\Theta_{f}\cong\mathbb{C}[z]/z^{d}$, the sheaf of differential forms is isomorphic to $\big(z^{d}:\mathbb{C}[z]\longrightarrow\mathbb{C}[z]\big)$ and the natural Hodge-de Rham spectral sequence degenerates at the first page. Further, the Euler characteristic of the de Rham complex is $-d$, which is $-1$ times the vanishing cycles Euler characteristic of $f$, in agreement with the theorem of Sabbah (\cite{Sa}).\end{example} \begin{remark} In the sequel we will freely use the language of \emph{derived schemes} as it will simplify some proofs. This doesn't require anything like the full force of Derived Algebraic Geometry so should not cause the reader great difficulties. With this in mind let us now note that the sheaf of commutative differential graded algebras $\Theta_{f}$ is the sheaf of functions on the \emph{derived critical locus} of $f$, denoted here by $T^{*}_{df}[-1]X$. These objects have received a good deal of study recently owing to their status as local Darboux models for $-1$-symplectic varieties (cf citations). \end{remark} \section{The Vertex Algebra associated to a Landau-Ginzburg Model}\subsection{BRST Reductions of Vertex Algebras}In this section we will introduce the basic objects of study of this note. They will be constructed from known objects via a general construction in the theory of vertex algebras, known as \emph{BRST reduction}. This is a procedure for modifying a given dgVA by deforming the differential. \\ \\ Now, let $a\in V$ be a vector, which we assume to be a cycle of cohomological degree $1$. \\ \\We see immediately from the relation $[T,a(z)]=\partial_{z}a(z)$ that $$[T,a_{(-1)}]=Res_{z=0}(\partial_{z}a(z))=0.$$ It follows further from \emph{associativity of the operator product expansion} (cf. \cite{Kac}) that $$[a_{(-1)},b(z)]=(a_{(-1)}b)(z)$$ for all vectors $b$.
This is to say that $a_{(-1)}$ acts as a derivation of $V$. We observe that this derivation has cohomological degree $0$ as $a$ was assumed to have cohomological degree $1$. Putting all of this together we deduce the following \begin{lemma} If $a\in V$ is as above and further if $a_{(-1)}^{2}=0$, then we have a dgVA with underlying dg- vector space $(V,\partial_{V}+a_{(-1)})$ and vacuum vector and fields unchanged from those of $V$. Further, if $V$ admits a conformal grading with $a\in V^{(1)}$, then the associated vertex algebra inherits this grading. \end{lemma}\begin{proof} Nothing remains to be checked. \end{proof} \begin{definition} Let $a\in V$ be as in the lemma above; then the vertex algebra so constructed is referred to as the \emph{BRST reduction} of $V$ by $a$. It is denoted $V^{BRST}_{a}$.\end{definition} \begin{example} Let $V=\Omega^{ch}_{\mathbb{A}^{d}}$. Further let $$a:=\sum_{j=1,...,d}y^{j}_{1}\phi^{j}_{0}\in\Omega^{ch}_{\mathbb{A}^{d}}.$$ This satisfies all the requisite conditions and thus we can perform BRST reduction. It is easy to see that we obtain $\Omega^{ch}_{dR,\mathbb{A}^{d}}$.\end{example} \subsection{The Construction} Let $(X,f)$ be our Landau-Ginzburg model. We'll construct a conformally graded dgVA $\Theta^{ch}_{f}$ with conformal weight $0$ subspace isomorphic to $\Theta_{f}$ via BRST reduction. Note that there is an element $df\in\Theta^{ch}_{X}$. In etale local coordinates we have $$df=\sum_{j}\partial_{j}f(x^{1}_{0},...,x^{d}_{0})\phi^{j}_{1}.$$ Observe that this is of cohomological degree $+1$, as well as conformal degree $+1$. Finally we can check that we have $\Big(Res_{z=0}(df)(z)\Big)^2=0$.
It follows that the pair $(\Theta^{ch}_{X},df)$ satisfies all the conditions needed to perform BRST reduction and we can thus define; \begin{definition} The conformally graded dgVA $\Theta^{ch}_{f}$ is by definition the BRST reduction $\Theta^{ch,BRST}_{X,df}.$ \end{definition} Unsurprisingly, there are analogues of $\Omega_{f}$ and $\Omega_{dR,f}$ as well. We package the following definitions into a trivial lemma; \begin{lemma} Let $\Omega^{ch}_{f}$ denote the dg- vector space $(\Omega^{ch}_{X},\partial^{ch}_{f}:=(df)_{0})$, where we note that we are now taking the $0$-mode of $df\in\Omega^{ch}_{X}$. Then $\Omega^{ch}_{f}$ admits a natural structure of a conformally graded vertex module for the vertex algebra $\Theta^{ch}_{f}$. Further, this module admits an extra differential $d^{ch}_{dR}$, which is simply to say that $[\partial^{ch}_{f},d^{ch}_{dR}]=0$. \end{lemma} \begin{proof}This is a simple computation in local coordinates.\end{proof} Of course, $(\Theta^{ch}_{f},\Omega^{ch}_{f},\Omega^{ch}_{dR,f})$ is the desired semi-infinite enhancement of the nc-Hodge theory associated to $(X,f)$. We mention here that during the writing of this note we learned from V. Gorbounov of the lovely paper \cite{Go} in which a similar construction is studied (in more detail) in a particular case. \begin{remark} Observe that $\Theta^{ch}_{f}$ has a holomorphic (negative modes all act trivially) sub-algebra, locally generated by the $x$ and $\psi$ variables. This sub-algebra has a very elegant description as the algebra of functions on the space of arcs into the derived scheme $T^{*}_{df}[-1]X$. In fact, if we adopt the language of \emph{chiral differential operators} $\mathcal{D}^{ch}$ (cf.
\cite{MSV}), we should be able to (make sense of and) prove an isomorphism $\mathcal{D}^{ch}_{T^{*}_{df}[-1]X}\cong\Theta^{ch}_{f}.$ This heuristic, together with a result of Malikov and Schechtman, \cite{MS}, suggests that we should be able to identify a suitably nice category of representations of this vertex algebra. \end{remark} \subsection{Conformal Vertex Modules Associated to the Landau-Ginzburg Model} Recall that $\Theta^{ch}_{f}$ admits a conformal grading. We wish to study the category of $\mathbb{Z}_{\geq 0}$-graded modules with respect to this conformal grading; note that we will also refer to the grading on the module as conformal. The answer is remarkably simple as we shall see below. \begin{lemma} Let $\mathcal{M}_{+ve}(\Theta^{ch}_{f})$ denote the category of conformally graded modules for the vertex algebra $\Theta^{ch}_{f}$ and $\mathcal{D}^{l}(T^{*}_{df}[-1]X)$ the category of left $D$-modules on the derived critical locus. Then there is a naturally defined equivalence of categories $$\mathcal{M}_{+ve}(\Theta^{ch}_{f})\longrightarrow\mathcal{D}^{l}(T^{*}_{df}[-1]X).$$\end{lemma}\begin{proof} We break the proof into simple steps; the main input is essentially just Kashiwara's Lemma. \begin{enumerate} \item Here we construct the functor which will realize the equivalence in the case of $(X,f)=(\mathbb{A}^{1},0)$. Now a module, $\mathcal{M}$, for the vertex algebra $\Theta^{ch}_{\mathbb{A}^{1}}$ admits an action by the infinite dimensional Lie algebra generated by the elements $\{x_{i},y_{i},\phi_{i},\psi_{i}\}_{i\in\mathbb{Z}}$ with the evident commutation relations. (Abusing notation slightly, we think of these as modes of elements of the vertex algebra, so for example $y_{0}$ is identified with the residue of $y_{1}\in\Theta^{ch}_{\mathbb{A}^{1}}$.) \\ \\In particular the elements $\{x_{0},y_{0},\phi_{0},\psi_{0}\}$ generate the algebra $Diff(T^{*}[-1]\mathbb{A}^{1})$ of differential operators on the odd cotangent bundle of $\mathbb{A}^{1}$.
We define the functor $$(-)^{!}:\mathcal{M}_{+ve}(\Theta^{ch}_{\mathbb{A}^{1}})\longrightarrow\mathcal{D}^{l}(T^{*}[-1]\mathbb{A}^{1}),$$ by taking the singular vectors with respect to the negative modes. That is to say $$\mathcal{M}^{!}:=\{m\in\mathcal{M}:(x_{i},y_{i},\phi_{i},\psi_{i})m=0,\forall i<0\}.$$ Negative modes commute with $0$-modes so $\mathcal{M}^{!}$ is indeed a module for differential operators on $T^{*}[-1]\mathbb{A}^{1}$. \item We can now observe compatibility with taking products of varieties as well as etale gluings. This allows us to globalize the construction to a functor $$(-)^{!}:\mathcal{M}_{+ve}(\Theta^{ch}_{X})\longrightarrow\mathcal{D}^{l}(T^{*}[-1]X).$$ Consideration of the differentials shows that this in fact induces a functor on the twists of both sides, i.e. $$(-)^{!}:\mathcal{M}_{+ve}(\Theta^{ch}_{f})\longrightarrow\mathcal{D}^{l}(T^{*}_{df}[-1]X).$$\item We show that the functor constructed in (1) above is an equivalence by explicitly constructing the inverse as an induced module. For a $D$-module $\mathcal{N}$ on $T^{*}[-1]\mathbb{A}^{1}$, write $\iota(\mathcal{N}):=\mathcal{N}[x_{i},y_{i},\phi_{i},\psi_{i}]_{i>0}$. Observe that this is naturally a vertex module over $\Theta^{ch}_{\mathbb{A}^{1}}$. Further, for $\mathcal{M}\in\mathcal{M}_{+ve}(\Theta^{ch}_{\mathbb{A}^{1}})$, there is a natural map $$\epsilon:\iota(\mathcal{M}^{!})\longrightarrow\mathcal{M}.$$ Injectivity is obvious enough, just as in Kashiwara's lemma. For surjectivity, let us take $m\in\mathcal{M}$ and assume it is of conformal weight $q$.\\ \\ Consider now $\epsilon$ to be a map of modules for $\mathbb{C}[x_{i},y_{i},\phi_{i},\psi_{i}]_{i\in[-q,q]\setminus\{0\}}$. Kashiwara's lemma implies we can find finitely many $(x_{i},y_{i},\phi_{i},\psi_{i})_{i\in[-q,0)}$-torsion elements of $\mathcal{M}$ whose span under the action of $\mathbb{C}[x_{i},y_{i},\phi_{i},\psi_{i}]_{i\in(0,q]}$ contains $m$.
These elements must all manifestly be of conformal weight at most $q$. As such they are necessarily killed by all elements of conformal weight less than $-q$, and thus they are singular vectors for all negative modes, and we have proven surjectivity of $\epsilon$. \item Using the gluing construction of the functor $(-)^{!}$ and the fact that it is compatible with differentials on both sides in the presence of a potential $f$, we conclude that the functor induces the desired equivalence.\end{enumerate}\end{proof}\begin{remark}\begin{itemize}\item The bounded below derived category of $\mathcal{D}^{l}(T^{*}_{df}[-1]X)$ is equivalent to the bounded below derived category of $D$-modules on $X$ with topological support in $\mathbf{crit}(f)$, which by definition is equivalent to $D$-modules on $\mathbf{crit}(f)$. \item Note that we can define weak equivalences of vertex modules in such a way that they are reflected by the functor $(-)^{!}$. We take the localization with respect to these and denote the resulting category $d\mathcal{M}_{+ve}(\Theta^{ch}_{f})$.\end{itemize}\end{remark} The above remarks allow us to deduce the following theorem, \begin{tcolorbox}\begin{theorem} There is a naturally defined equivalence of dg-categories $$d\mathcal{M}_{+ve}(\Theta^{ch}_{f})\longrightarrow\mathcal{D}^{l}(\mathbf{crit}(f)).$$ In particular if $f$ has an isolated singularity then $d\mathcal{M}_{+ve}(\Theta^{ch}_{f})$ is just the category of vector spaces. \end{theorem}\end{tcolorbox} \subsection{Finiteness Properties} In the presence of an interesting potential $f$, we cannot of course expect the variety $X$ to be proper. Properness of $X$ can be understood as a finiteness property for its category of perfect complexes, and taking this non-commutative point of view one obtains the correct notion of properness for a Landau-Ginzburg model. Such a definition ought to be equivalent to homological properness of $\mathbf{MF}(X,f)$ in the sense of Kontsevich, \cite{Ko}.
\begin{definition}We say that $(X,f)$ is proper if the critical locus $\mathbf{crit}(f)$ is proper. \end{definition} We would like to know that in this case the cohomology of the associated vertex algebra is finite as well, at least in the conformally graded sense. We prove this below. \begin{tcolorbox}\begin{theorem} Let $(X,f)$ be a proper Landau-Ginzburg model. Then for every fixed conformal weight $j$, the weight $j$ component of the total hypercohomology of the sheaf of vertex algebras, $\Theta^{ch}_{f}$, is finite dimensional. That is to say for all $j$ we have $$\dim_{\mathbb{C}}\mathbb{H}^{*}\Big(X,\Theta^{ch,(j)}_{f}\Big)<\infty.$$\end{theorem}\end{tcolorbox}\begin{proof} We will introduce an increasing filtration $\mathcal{F}$ of the sheaf $\Theta^{ch,(j)}_{f}$. It is a mild modification of the filtration introduced in \cite{MSV}. We define it first for $\mathbb{A}^{1}$. Recall from above the meaning of the symbols $\{x,\psi,\phi,y\}$ so that we can make the identification of vector spaces $$\Theta^{ch}_{\mathbb{A}^{1}}=\mathbb{C}[x_{i},y_{i+1},\psi_{i},\phi_{i+1}]_{i\geq 0}.$$ We now stipulate the following ordering, $$\{x_{0},\psi_{0}\}<x_{1}<x_{2}<...<\psi_{1}<\psi_{2}<...<\phi_{1}<\phi_{2}<...<y_{1}<y_{2}<...$$ We extend this to all of $\Theta^{ch}_{\mathbb{A}^{1}}$ lexicographically. This is of course not exhaustive but is easily seen to be exhaustive on each fixed conformal weight component. Now observe the following simple facts, \begin{itemize}\item The filtration $\mathcal{F}$ is compatible with the vertex algebra structure on $\Theta^{ch}_{\mathbb{A}^{1}}.$\item The $0$-th associated graded is the algebra of polyvectors on $\mathbb{A}^{1}$. \item For all $j>0$, $Gr^{j}_{\mathcal{F}}\Theta^{ch}_{\mathbb{A}^{1}}$ is a perfect module over $Gr^{0}_{\mathcal{F}}\Theta^{ch}_{\mathbb{A}^{1}}.$\end{itemize} As in \cite{MSV} we observe compatibility with coordinate transformations. Thus, we produce a filtration $\mathcal{F}_{X}$ on each $\Theta^{ch}_{X}$.
Again, as in \cite{MSV}, we note that the associated graded $Gr^{>0}_{\mathcal{F}_{X}}\Theta^{ch}_{X}$ is a quasi-coherent sheaf on the scheme $X$. Reducing to the case of $X=\mathbb{A}^{1}$ we see that, for all $j>0$, $Gr^{j}_{\mathcal{F}_{X}}\Theta^{ch}_{X}$ is a direct sum of locally free sheaves in various cohomological degrees. We denote the resulting perfect complex on $X$ by $\mathcal{V}_{j}$. \\ \\ Note that so far we have not mentioned the potential $f$. As before, let $\partial^{ch}_{f}$ denote the differential on $\Theta^{ch}_{f}$. Observe that $\partial^{ch}_{f}$ takes $\psi$ variables to $x$ ones and $y$ ones to $\phi$ ones. Further, it is of conformal weight $0$. These two facts together imply that $\mathcal{F}_{X}$ induces a well-defined filtration, $\mathcal{F}_{f}$, on $\Theta^{ch}_{f}$. Observe that $$Gr^{0}_{\mathcal{F}_{X}}\Theta^{ch}_{X}\cong\mathcal{O}_{T^{*}[-1]X},$$ the sheaf of functions on the $-1$-shifted cotangent bundle. Examining the differential, we see that it acts as contraction by $df$ on $\mathcal{O}_{T^{*}[-1]X}$, so that $$Gr^{0}_{\mathcal{F}_{f}}(\Theta^{ch}_{f})\cong\mathcal{O}_{T^{*}_{df}[-1]X}.$$ Crucially we also note that for all $j>0$, $Gr^{j}_{\mathcal{F}_{X}}\Theta^{ch}_{X}$ is isomorphic (as a sheaf over $T^{*}_{df}[-1]X$) to $\pi^{*}\mathcal{V}_{j}$, where $$\pi: T^{*}_{df}[-1]X\longrightarrow X$$ is the natural map. In particular each $Gr^{j}\Theta^{ch}_{f}$ is a perfect sheaf on the derived critical locus $T^{*}_{df}[-1]X$. 
\\ \\ In order to complete the proof, it of course suffices (by the spectral sequence associated to a filtered complex) to prove that $$\dim_{\mathbb{C}}\mathbb{H}^{*}\Big(X,Gr\Theta^{ch,(j)}_{f}\Big)<\infty.$$ With this in mind we need only recall the following basic fact from derived algebraic geometry: \begin{itemize}\item Suppose $\mathfrak{X}$ is a derived scheme with proper classical truncation $\mathfrak{X}^{cl}$, and suppose further that the total cohomology sheaf $\bigoplus\mathcal{H}^{-j}(\mathcal{O}_{\mathfrak{X}})[j]$ is coherent over $\mathfrak{X}^{cl}$. Then for any perfect sheaf $\mathfrak{F}$ on $\mathfrak{X}$, the total cohomology $\mathbb{H}^{*}(\mathfrak{X},\mathfrak{F})$ is finite dimensional.\end{itemize} In order to prove this one can use induction on the Postnikov tower of $\mathfrak{X}$, noting that the base case is a classical theorem of Serre. The condition that $\mathbf{crit}(f)$ is proper allows us to set $\mathfrak{X}:=T^{*}_{df}[-1]X$ and $\mathfrak{F}:=Gr^{j}\Theta^{ch}_{f}$, and we deduce immediately from this fact the finiteness of $$\dim_{\mathbb{C}}\mathbb{H}^{*}\Big(X,Gr\Theta^{ch,(j)}_{f}\Big),$$ and thence the theorem.\end{proof} We observe now the following: \begin{corollary} For a proper LG-model $(X,f)$, the total hypercohomology in each fixed conformal weight of $\Omega^{ch}_{f}$ is also finite. In particular we can produce a well-defined $q$-graded Euler characteristic, defined as follows: \begin{tcolorbox}$$\chi^{ch}_{van}(f):=\sum_{j}\chi\Big(\mathbb{H}^{*}(X,\Omega^{ch,(j)}_{f})\Big)q^{j}\in\mathbb{Z}[[q]].$$\end{tcolorbox}\end{corollary} \begin{example} Let us take $(X,f)=(\mathbb{A}^{1},z^{d+1})$. We can then consider the associated \emph{bi}-\emph{graded} complex $\Omega^{ch}_{f}$ by considering the standard $\mathbb{G}_{m}$-action on $\mathbb{A}^{1}$. Computing the bi-graded Euler characteristic is then easy; note that we write $z$ for the extra grading variable. 
As is standard in the literature on $q$-series we write $$\theta_{q}(z)=\prod_{n\geq0}(1-q^{n}z)(1-q^{n+1}z^{-1}).$$ We then compute the bigraded Euler characteristic to be simply $$\theta_{q}\big(z^{-d}\big)\theta_{q}(z)^{-1}=-z^{-d}\theta_{q}(z^{d})\theta_{q}(z)^{-1}.$$ Note that in the $q\longrightarrow0$ limit only the $n=0$ factors survive, so this produces $-z^{-d}(1-z^{d})/(1-z)=-z^{-d}(1+z+...+z^{d-1})$, which is consistent with $\{z^{d-1}dz,...,dz\}$ forming a basis of the first twisted de Rham cohomology group coupled with the vanishing of all other twisted de Rham cohomology groups.\end{example} \section{Some Vague Speculation} The formalism of Hochschild (co)-homology produces, for a dg-category $\mathbf{C}$, a commutative differential graded algebra $H^{*}(\mathbf{C})$, of \emph{Hochschild cochains}, with a distinguished module $H_{*}(\mathbf{C})$ of \emph{Hochschild chains}, which module admits an additional circle action. We'll assume familiarity with this formalism. We believe that under suitable conditions on the category $\mathbf{C}$, it should be possible to produce an enhancement of this structure to a semi-infinite analogue. \\ \\ We will state this as a conjecture, although we will be very imprecise. \begin{conjecture} Under suitable conditions on the dg-category $\mathbf{C}$, there exists a dg vertex algebra $H^{*}_{ch}(\mathbf{C})$ with a graded module $H^{ch}_{*}(\mathbf{C})$, which admits an extra $S^{1}$-action. Further it holds that: \begin{enumerate} \item The conformal weight $0$ limit reproduces the usual Hochschild package.\item This reproduces the usual chiral de Rham package when applied to $\mathbf{Perf}(X)$.\item It reproduces the above constructed chiral de Rham package of the LG-model $(X,f)$ in the case of $\mathbf{MF}(X,f)$.\item For a proper dg-category the associated cohomology groups are finite in the conformally graded sense. 
\item The inclusion of the $0$-weight subspace of the derived $S^{1}$-invariants, $$H_{*}(\mathbf{C})^{S^{1}}\longrightarrow H_{*}(\mathbf{C})^{ch,S^{1}},$$ is a weak equivalence.\end{enumerate}\end{conjecture} \begin{remark} We should remark that in the case of $\mathbf{Perf}$, (5) is proven in the original paper \cite{MSV}, where it reduces to a very easy statement in the case of $\mathbb{A}^{1}$. It is easy to deduce (5) in the case of $\mathbf{MF}(X,f)$ from this. \end{remark}\subsection{Further Examples} We include here a brief sketch of how this can be done when $\mathbf{C}$ is the category of representations of a finite dimensional Lie algebra $\mathfrak{g}$.\\ \\ We denote the underlying vector space of $\mathfrak{g}$ by $V$. Further we choose a basis $\{x^{1},...,x^{d}\}$ of $V$, with $c^{k}_{ij}$ denoting the structure constants with respect to this basis, so by definition we have $[x^{i},x^{j}]=c^{k}_{ij}x^{k}$; note that we will be assuming the summation convention in this subsection. The Hochschild cohomology of $U\mathfrak{g}$ is computed as Lie algebra cohomology with coefficients in the module $(U\mathfrak{g})^{ad}$. This can equivalently be described as Lie algebra cohomology of $\mathfrak{g}$ with coefficients in the module $S(\mathfrak{g}^{ad})$. We can restate this as the following lemma: \begin{lemma} The Hochschild cohomology of $U\mathfrak{g}$ is equivalent as a cdga to $$\Big(\mathbb{C}[x^{1},...,x^{d},\psi^{1},...,\psi^{d}],\partial:=c^{k}_{ij}x^{k}\psi^{j}\partial_{x^{i}}-\frac{1}{2}c^{i}_{jk}\psi^{j}\psi^{k}\partial_{\psi^{i}}\Big),$$ where the $x$ variables are of cohomological degree $0$ and the $\psi$ ones of cohomological degree $1$.\end{lemma} We'll construct $H^{*}_{ch}(U\mathfrak{g})$ as a BRST reduction of $\Theta^{ch}_{V^{*}}$, the vertex algebra of chiral polyvector fields on the affine space dual to the underlying vector space of $\mathfrak{g}$. Considering the above lemma, it is pretty obvious how to do this. 
\begin{tcolorbox}\begin{definition}Define the complex $H^{*}_{ch}(U\mathfrak{g})$ as the BRST reduction $\Theta^{ch,BRST}_{V^{*},\omega_{\mathfrak{g}}}$. Here $$\omega_{\mathfrak{g}}:=c^{k}_{ij}x^{k}_{0}\psi^{j}_{0}y^{i}_{1}-\frac{1}{2}c^{i}_{jk}\psi^{j}_{0}\psi^{k}_{0}\phi^{i}_{1},$$ which we note satisfies all the requirements needed to perform the BRST reduction. \end{definition}\end{tcolorbox} The reader can check that similar formulas define a module, $H^{ch}_{*}(U\mathfrak{g})$, for $H^{*}_{ch}(U\mathfrak{g})$. The definition of the $S^{1}$ action is unchanged from the case of trivial Lie bracket.\begin{theorem} $(H^{*}_{ch}(U\mathfrak{g}),H^{ch}_{*}(U\mathfrak{g}))$ satisfies the conditions of the conjecture. \end{theorem} \begin{proof} There is very little to check. We mention only that (5) can be proven by reducing to the case of an abelian $\mathfrak{g}$ after endowing the objects in question with the evident PBW filtrations. \end{proof} \begin{remark} We expect such enhancements to exist in the case of perfect complexes on an orbifold; further, such objects ought to be related to the orbifold chiral de Rham complex of Frenkel and Szczesny, \cite{FS}. We wish to return to these questions in the future.\end{remark}
\section{Introduction}\label{sec:Introduction} \paragraph{} Questionnaires are a valuable tool in modern psychological research, but to be useful they have to be validated in a number of ways. The usual questionnaire validation pipeline involves tests for internal consistency, construct validity and reliability (\cite{furr2017psychometrics}). Our research is centered on the second issue, construct validity. Construct validity shows how well the questionnaire generalizes: to what degree the measures that the questionnaire provides can be applied to estimate the characteristics of the underlying psychological model. \paragraph{} Construct validity is categorized into convergent and discriminant validity (\cite{campbell1959convergent}): \begin{enumerate} \item Convergent validity - "Are two theoretically related ways of estimating the same characteristic really related?"; \item Discriminant validity - "Are two theoretically unrelated ways of estimating the same characteristic really unrelated?"; \end{enumerate} Both types of validity are important for investigating the construct validity of a questionnaire. Convergent and discriminant validation ensure that the questionnaire does precisely what it is made to do: provide a way to test a psychological model experimentally. \paragraph{} In this study we present a novel approach to construct validity evaluation. It is based on using a neural network to predict the characteristics of a well-established questionnaire from the items of the questionnaire under investigation. Direct prediction lets us evaluate convergent validity, and interpretation of the trained weights lets us evaluate discriminant validity. \paragraph{} The Five Factor Model of personality traits (the Big Five, also known as the OCEAN or CANOE model) is currently among the most widely used personality trait models. 
Questionnaires based on its factors (Neuroticism, Extraversion, Openness to experience, Agreeableness and Conscientiousness) are widely used in scientific and industrial applications that require personality diagnosis. The major drawback of existing questionnaires is their size: it ranges from 44 items (BFI (\cite{Joh99})) to 240 items (\cite{CM92}). Size makes research more difficult, especially in Internet-based settings, where the lack of outside control (usually provided by researchers in offline settings) and participants' preference to skip tedious tasks encourage random responding or quitting. \paragraph{} A number of studies (\cite{Cre12}, \cite{Gun15}) suggest 10-item personality questionnaires as brief diagnostic tools, since they show satisfactory psychometric performance. In this study we choose to use a Russian adaptation of the TIPI questionnaire (\cite{Gos03}) due to TIPI's cross-cultural generalizability, which is shown by its set of international adaptations (\cite{Her}, \cite{Muc07}, \cite{Hof08}, \cite{Osh12}, \cite{Rom12}, \cite{Ren13}, \cite{Lag14}, \cite{Chi15} etc.). \paragraph{} Currently there are two competing Russian adaptations of the TIPI questionnaire, KOBT (\cite{KT16}) and TIPI-RU (\cite{sergeeva2016translation}). Their performance in convergent validity is close (TIPI-RU performs slightly better in Extraversion, Agreeableness and Emotional stability, KOBT in Openness and Conscientiousness). For the current study TIPI-RU is considered the better alternative: TIPI-RU data are freely and openly available on GitHub (\cite{TIPIDATA}), so they can be used to supplement our own sample. We use TIPI-RU as an example application of our novel convergent validity evaluation method. \section{Materials and methods}\label{sec:MatMeth} \subsection{TIPI-RU questionnaire} \paragraph{} The TIPI-RU is a translated and validated version of the TIPI questionnaire (\cite{Gos03}). The translation can be found in appendix 1 of (\cite{sergeeva2016translation}). 
The questionnaire consists of 10 questions (below denoted as $TIPI_n$ where $n$ ranges from 1 to 10). The actual Big Five characteristics are computed from the answers according to the following formulae: \begin{equation} \label{tipi-e} E = 0.5 (TIPI_1 + reverse(TIPI_6)) \end{equation} \begin{equation} A = 0.5 (TIPI_7 + reverse(TIPI_2)) \end{equation} \begin{equation} C = 0.5 (TIPI_3 + reverse(TIPI_8)) \end{equation} \begin{equation} ES = 0.5 (TIPI_9 + reverse(TIPI_4)) \end{equation} \begin{equation} \label{tipi-o} O = 0.5 (TIPI_5 + reverse(TIPI_{10})) \end{equation} where $reverse$ means taking the opposite value on the Likert scale (7 becomes 1, 6 becomes 2 etc.) and the Big Five characteristics are denoted (here and in all plots) by the first letters of their corresponding names (Extraversion, Agreeableness, Conscientiousness, Emotional stability and Openness). \subsection{Dataset} We use an extended dataset of 457 observations that includes the one composed by Sergeeva, Kirillov and Dzhumagulova (218 observations). The 218 original observations are freely available at (\cite{TIPIDATA}); we collected the other 239 by surveying Russian students who did not participate in the research of Sergeeva's group. \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{genderage.png} \caption{Gender-Age distribution} \label{fig:fig0} \end{figure} The age groups in figure \ref{fig:fig0} are: 1 (10 - 19 years), 2 (20 - 29 years), 3 (30 - 39 years), 4 (40 - 49 years) and 5 (50 - 59 years). \subsection{Convergent validity test via neural network} \paragraph{} To check the convergent validity of TIPI-RU via a neural network we use the following scheme: \begin{enumerate} \item We use 5PFQ as a template and assume that its characteristics (extraversion, emotional stability etc.) 
can be measured more simply via the TIPI-RU questionnaire; \item If this is correct, we can fit a neural network to predict 5PFQ characteristics from answers to TIPI-RU questions; \item For our approach to be successful we have to address the following issues: \begin{itemize} \item Ensure that the network is learning something, preferably a certain mapping from TIPI-RU answers to 5PFQ characteristics; \item Ensure that even this small network overfits - this shows that there is a genuinely strong connection between inputs and outputs; \item Ensure that the results of the trained network differ from the results of networks trained on random permutations of the labels. \end{itemize} \item Evaluate the model's performance via quality measures. \end{enumerate} \paragraph{} Sergeeva, Kirillov and Dzhumagulova used, among other methods, a path model to confirm convergent validity. The neural network approach is conceptually much simpler: the path model is based on fitting five different linear regression models, each predicting a 5PFQ characteristic from the answers to the questions that construct its TIPI-RU counterpart, but with a neural network we can predict all five 5PFQ characteristics from all 10 TIPI-RU questions with a single model. If the network can do this, then TIPI-RU and 5PFQ converge, which demonstrates the convergent validity of TIPI-RU. The process of TIPI-RU computation can be viewed as the application of a single-hidden-layer neural network with the following structure (for simplicity, we assume that the Likert reverses of the answers to questions 2, 4, 6, 8 and 10 are taken before the network is applied): \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{TIPIasNN.png} \caption{TIPI computation as neural network. 
Only non-zero weights are shown.} \label{fig:fig1} \end{figure} \paragraph{} SUM here denotes the output of a summing neuron with a linear activation function; for simplicity, this toy network has no bias, so the following holds: \begin{equation} SUM = W \cdot X \end{equation} where $W$ is the vector of weights and $X$ is the vector of inputs to this particular neuron. \paragraph{} The same operation is performed at the output layer neurons. The network is fully connected, but all edges that do not contribute to the TIPI-RU scales are set to zero. They are not shown in figure \ref{fig:fig1}. \paragraph{} This particular configuration is very hard to reach by gradient descent. It is not impossible, but very improbable, for a network to converge there. However, the network can converge to a different weight set that allows non-zero edges connecting output characteristics to questions that are not included in them. We do not need to follow the template set by figure \ref{fig:fig1} and can use an arbitrary neural network. Any network that is reasonably small should be enough. \paragraph{} To ensure that overfitting during training can be attributed to strong connections between inputs and outputs rather than to the model's capacity, we keep the model as small as possible. There is a trade-off between susceptibility to overfitting and the ability to fit anything useful: the number of parameters should be large enough to recover a dependency between inputs and outputs, but small enough that the network finds it hard to memorize every observation in the training and validation sets. Usually the penalty on model size is considered an auxiliary form of regularization, and primarily other methods like L1/L2 regularization are used. In this case we find it convenient to use model size as the only regularizer. 
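The fixed-weight computation just described can be sketched in a few lines of Python. This is our illustration, not part of the original implementation; the function names and the Likert range assumption (1-7) are ours, while the item indices and 0.5 weights follow equations \ref{tipi-e}-\ref{tipi-o}:

```python
# Sketch (illustrative): TIPI-RU scoring as a fixed, sparse linear map,
# mirroring the hand-crafted network of figure 1.

def reverse(a):
    """Reverse a 7-point Likert answer (7 -> 1, 6 -> 2, ...)."""
    return 8 - a

def tipi_scores(answers):
    """answers: list of 10 Likert answers (TIPI_1 ... TIPI_10, 1-based in the paper)."""
    a = list(answers)
    # Reverse-keyed items 2, 4, 6, 8, 10 (0-based indices 1, 3, 5, 7, 9),
    # as assumed to happen before the network is applied.
    for i in (1, 3, 5, 7, 9):
        a[i] = reverse(a[i])
    # Each scale is 0.5 * (direct item + reversed item): a sparse linear map
    # in which all cross-scale weights are zero.
    pairs = {"E": (0, 5), "A": (6, 1), "C": (2, 7), "ES": (8, 3), "O": (4, 9)}
    return {k: 0.5 * (a[i] + a[j]) for k, (i, j) in pairs.items()}

print(tipi_scores([7, 1, 7, 1, 7, 1, 7, 1, 7, 1]))  # all five scales equal 7.0
```

A trained network whose weights deviate from this sparse pattern by acquiring large cross-scale edges would be evidence against discriminant validity, which is exactly what the weight visualization below looks for.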
\paragraph{} In the current study we use the following network configuration: \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{tipinet.png} \caption{Actual network that learns the TIPI-5PFQ connection} \label{fig:fig2} \end{figure} \paragraph{} It is a very small convolutional network that consists of one 2D convolutional layer and two reshaping operations. The first reshape turns the incoming TIPI items into ``pictures'' that are 1 in height, 2 in width and have five channels. The second reshape turns the (1,1,5) output of the convolutional layer into just 5 answers. \paragraph{} The network has 50 parameters - exactly two thirds of the presumed TIPI network shown in figure \ref{fig:fig1}, which has 75. A simple way to see whether the network is learning something real, and not by happenstance, is to destroy any real structure present in the data, fit the network on the damaged set, and compare the distribution of results with that of networks trained on unharmed data. To do so, we perform an investigation following this scheme: \begin{enumerate} \item Train 100 networks on the TIPI-RU dataset - this provides the distribution of MSEs on unharmed data; \item Shuffle the 5PFQ characteristics corresponding to the TIPI-RU answers at random, then train a network to predict the shuffled 5PFQs from the same TIPI-RU answers; \item Repeat step 2 one hundred times; \item Check whether the two samples of error measures come from the same distribution by a two-sample Kolmogorov-Smirnov test. \end{enumerate} \paragraph{} A difference between the two distributions shows that there is a true dependency between answers to TIPI questions and 5PFQ characteristics, one that is completely destroyed by shuffling the labels. The network is implemented using Keras (\cite{chollet2015keras}) with Tensorflow (\cite{tensorflow2015-whitepaper}) as backend. All plots are made with Matplotlib (\cite{hunter2007matplotlib}). 
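Step 4 of the scheme above compares the two error samples with the two-sample Kolmogorov-Smirnov test. In practice the SciPy implementation is used; the bare statistic behind it can be sketched in pure Python (function name is ours, for illustration only):

```python
# Sketch: the two-sample Kolmogorov-Smirnov statistic, i.e. the maximum
# distance between the empirical CDFs of two samples. A large value
# suggests the samples come from different distributions.

def ks_statistic(xs, ys):
    xs, ys = sorted(xs), sorted(ys)
    points = sorted(set(xs + ys))  # candidate locations of the maximum gap

    def ecdf(sample, t):
        # Fraction of the sample at or below t.
        return sum(1 for v in sample if v <= t) / len(sample)

    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in points)

# Identical samples give 0; fully separated samples give 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([1, 2, 3], [10, 11, 12]))  # 1.0
```

The test's p-value additionally accounts for sample sizes; for that, the SciPy routine cited in the paper should be used.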
\paragraph{} We make the reasonable assumption that the best predictions of 5PFQ that a generalizable (not overfitted) model can make from the TIPI-RU data are the TIPI-RU values themselves. So we can find the best MSE possible by computing the MSE between the 5PFQ characteristics and the TIPI characteristics, both divided by their corresponding maxima. A network that gets below that threshold is overfitting. \subsection{Neural network performance measures} \paragraph{} This work uses the classical loss function for regression: mean squared error (MSE), \begin{equation} MSE(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^2 \end{equation} where $y_{i}$ is the real value and $\hat{y}_{i}$ the value predicted by the model. \paragraph{} We choose MSE as the loss function instead of mean absolute error because it punishes large deviations from the real value more than small ones. But we use MAE as an additional performance measure: \begin{equation} MAE(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \end{equation} \subsection{Discriminant validity via interpretation of trained weights} \paragraph{} Neural network interpretation is a complex task, and many approaches to it currently exist; for a thorough review see (\cite{montavon2017methods}). In this particular case, however, the interpretation becomes very simple. \paragraph{} According to the definition of convolution (\cite{goodfellow2016deep}), the output of a single convolution is as follows: \begin{equation} Z = \sum_{i=1,j=1}^{n,m} X_{i,j} \times W_{i,j}, \end{equation} where \(W\) are the weights of the single convolutional filter of interest and \(X\) is a single window from the input image. \paragraph{} The resulting output of a convolution filter (without activation function) is constructed by applying this operation to the whole input example via a sliding window. The important implication is that, after training, the weights of the convolutional layer will mimic the structure of their input. 
A set of CNN interpretation methods is based on this property, but in our simple case we need only visualize the weights of each neuron to observe the captured structure. A structure similar to figure \ref{fig:fig1} and equations \ref{tipi-e}-\ref{tipi-o} indicates a high level of discriminant validity. \section{Results}\label{sec:Results} The network reaches the minimal possible MSE around the 60th epoch of training. The minimal possible MSE equals 0.05. The validation set is a randomly chosen 40\% of the whole dataset: \begin{figure}[H] \centering \includegraphics[width=\textwidth]{histories.png} \caption{First sixty epochs of training on real labels. Horizontal line denotes minimal MSE possible. Black line is mean of 100 repetitions, grey area is error region.} \label{fig:fig3} \end{figure} \paragraph{} After reaching the minimal possible MSE the network overfits and drives the MSE to 0. During training, the distribution of MSEs and MAEs from models trained on reshuffled labels diverges from that of the models trained on correct labels, as shown on figure \ref{fig:fig4}: \begin{figure}[H] \centering \includegraphics[width=\textwidth]{divergence.png} \caption{Divergence of correct and permuted label distributions: \textbf{a}. start of training, \textbf{b.} 125th epoch, \textbf{c.} end of training. Dark grey - correct models, light grey - models trained on permuted labels.} \label{fig:fig4} \end{figure} \paragraph{} Initially both kinds of models are indistinguishable. But as training proceeds, the correct models drive towards zero error while the permuted ones do not, and at the end the two distributions are no longer the same. The full animation is freely available on YouTube (\cite{Divergence}). This is a visual, qualitative way to check whether there is a strong connection between inputs and outputs. The quantitative one that we use is the two-sample Kolmogorov-Smirnov test. For computation we use the SciPy KS-test implementation (\cite{jones2014scipy}). 
\begin{figure}[H] \centering \includegraphics[width=\textwidth]{pvalues.png} \caption{Two-sample KS-test p-values (natural logarithms of them) for MAE and MSE as a function of training time. Black arrows denote different thresholds.} \label{fig:fig5} \end{figure} \paragraph{} The question the KS-test answers is "Were these two samples drawn from the same distribution?". The null hypothesis is that they were, so in our case the p-values should fall below the threshold. The differences, as shown on figure \ref{fig:fig5}, quickly grow enough to pass even the strictest threshold. Weight visualization, as presented in figure \ref{fig:fig6}, shows a structure similar to figure \ref{fig:fig1} and equations \ref{tipi-e}-\ref{tipi-o}: \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{interpretation.png} \caption{Evolution of weights during training (average over 100 runs): \textbf{a}. start of training, \textbf{b.} 70th epoch, \textbf{c.} end of training. Single image represents a single convolutional neuron. Black - large positive weight, white - large (in absolute value) negative weight, shades of grey - weights close to 0.} \label{fig:fig6} \end{figure} \paragraph{} It even captures the sign reversal in Agreeableness (second row). Also, looking closely at the Openness neuron (fifth row), it is obvious that Openness lacks discriminant validity: many other items are highlighted as strongly as the valid column. This inconsistency of Openness was already described by Sergeeva et al. and by other researchers. The neural network method for validity testing thus converges with more conventional approaches despite being mostly qualitative. \section{Conclusion}\label{sec:Conclusion} \paragraph{} In this study we have introduced a novel qualitative method for evaluating the construct validity of a personality questionnaire using a neural network-based approach. 
The method is easy to implement with modern deep learning frameworks and is more interpretable than traditional methods like path models or correlation matrices, since it answers a much simpler question: "Is there a learnable connection between inputs (answers to the questionnaire) and outputs (characteristics of a questionnaire made to measure the same constructs)?". An obvious drawback of our method is that no simple way exists to compare its performance to a path model. \paragraph{} Based on our core findings we can recommend using TIPI-RU as a brief method for measuring personality in non-clinical settings, such as internet assessment of personality measures, as it passes the neural network test. Future studies will involve using the neural network test to evaluate other questionnaires and exploring the limits of its applicability. \paragraph{} All procedures involving human participants performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. \newpage \printbibliography \newpage \end{document}
\section{Introduction} Weighted ensemble is a Monte Carlo method based on stratification and resampling, originally designed to solve problems in computational chemistry~\cite{bhatt2010steady, chong2017path,jeremy,costaouec2013analysis,darve2013computing, donovan2013efficient,huber1996weighted, rojnuckarin1998brownian, rojnuckarin2000bimolecular,suarez, zhang2007efficient, zhang2010weighted,zuckerman,zwier2015westpa}. Weighted ensemble currently has a substantial user base; see~\cite{westpa} for software and a more complete list of publications. In general terms, the method consists of periodically resampling from an ensemble of weighted replicas of a Markov process. In each of a number of {\em bins}, a certain number of replicas is maintained according to a prescribed replica {\em allocation}. The weights are adjusted so that the weighted ensemble has the correct distribution~\cite{zhang2010weighted}. In the context of rare event or small probability calculations, weighted ensemble can significantly outperform direct Monte Carlo, or independent replicas; the performance gain comes from allocating more particles to important or rare regions of state space. In this sense, weighted ensemble can be understood as an importance sampling method. We assume the underlying Markov process is expensive to simulate, so that optimizing variance versus cost is critical. In applications, the Markov process is usually a high dimensional drift diffusion, such as Langevin molecular dynamics~\cite{stoltz2010free}, or a continuous time Markov chain representing reaction network dynamics~\cite{Kurtz}. We focus on the computation of the average of a given function or {\em observable} with respect to the unique steady state of the Markov process, though many of our ideas could also be applied in a finite time setting. 
One of the most common applications of weighted ensemble is the computation of a {\em mean first passage time}, or the mean time for a Markov process to go from an initial state to some target set. The mean first passage time is an important quantity in physical and chemical processes, but it can be prohibitively large to compute using direct Monte Carlo simulation~\cite{adhikari2019computational,hofmann2003mean,metzler2014first}. In weighted ensemble, the mean first passage time to a target set is reformulated, via the Hill relation~\cite{hill2005free}, as the inverse of the steady state flux into the target. Here, the observable is the characteristic or indicator function of the target set, and the flux is the steady-state average of this observable. This steady state can sometimes be accessed on time scales much smaller than the mean first passage time~\cite{adhikari2019computational,jeremy2,danonline}. On the other hand, when the mean first passage time is very large, the corresponding steady state flux is very small and needs to be estimated with substantial precision, which is why importance sampling is needed. The steady state in this case, and in most weighted ensemble applications, is {\em not} known up to a normalization factor. As a result, many standard Monte Carlo methods, including those based on Metropolis-Hastings, do not apply. In this article we consider weighted ensemble parameter optimization for steady state averages. Our work here expands on related results in~\cite{aristoff2016analysis} for weighted ensemble optimization on finite time horizons. For a given Markov chain, weighted ensemble is completely characterized by the choice of resampling times, bins, and replica allocation. {\em In this article we discuss how to choose the bins and replica allocation}. We also argue that the frequency of resampling times is mainly limited by processor interaction cost and not variance. 
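As a point of reference for the parameter discussion below, one bin-based selection (resampling) step can be sketched as follows. This is our illustrative sketch, not the authors' implementation: it uses a simple multinomial selection within each bin, whereas practical weighted ensemble codes typically use lower-variance residual schemes. Within each bin, the prescribed number of children is selected with probability proportional to weight, and each child inherits an equal share of the bin's total weight, so total weight is conserved and the selection is unbiased:

```python
import random

# Sketch (illustrative): one weighted-ensemble resampling step.
# positions/weights are per-replica lists, bins[i] is the bin of replica i,
# and alloc[b] is the prescribed number of children in bin b.

def resample(positions, weights, bins, alloc):
    new_pos, new_wts = [], []
    for b, n_children in alloc.items():
        idx = [i for i, bi in enumerate(bins) if bi == b]
        if not idx:
            continue  # empty bins contribute nothing
        bin_weight = sum(weights[i] for i in idx)
        # Multinomial selection proportional to weight within the bin.
        chosen = random.choices(idx, weights=[weights[i] for i in idx],
                                k=n_children)
        for i in chosen:
            new_pos.append(positions[i])
            new_wts.append(bin_weight / n_children)  # equal weight per child
    return new_pos, new_wts

pos, wts = resample([0.1, 0.2, 0.9], [0.5, 0.3, 0.2], [0, 0, 1], {0: 4, 1: 2})
print(len(pos), round(sum(wts), 12))  # 6 replicas; total weight preserved
```

The expected post-selection weight carried by each parent equals its pre-selection weight, which is the sense in which the method is statistically exact regardless of the bins and allocation; those choices affect only the variance.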
We use a first-principles, finite replica analysis based on Doob decomposing the weighted ensemble variance. Our earlier work~\cite{aristoff2016analysis} is based on this same mathematical technique, but the methods described there are not suitable for long-time computations or bin optimization~\cite{aristoff2019ergodic}. As is usual in importance sampling, our parameter optimizations require estimates of the very variance terms we want to minimize. However, because {\em weighted ensemble is statistically exact no matter the parameters}~\cite{aristoff2019ergodic,zhang2010weighted}, these estimates can be crude. The choice of parameters affects the variance, not the mean; we only require parameters that are good enough to beat direct Monte Carlo. From the point of view of applications, similar particle methods employing stratification include Exact Milestoning~\cite{bello2015exact}, Non-Equilibrium Umbrella Sampling~\cite{warmflash2007umbrella,dinner2016trajectory}, Transition Interface Sampling~\cite{van2003novel}, Trajectory Tilting~\cite{vanden2009exact}, and Boxed Molecular Dynamics~\cite{glowacki2011boxed}. There are related methods based on sampling {\em reactive paths}, or paths going directly from a source to a target, in the context of the mean first passage time problem just cited. Such methods include Forward Flux Sampling~\cite{allen2006forward} and Adaptive Multilevel Splitting~\cite{brehier_AMS,brehier_AMS2,tony_AMS}. These methods differ from weighted ensemble in that they estimate the mean first passage time directly from reactive paths rather than from steady state and the Hill relation. In contrast with many of these methods, weighted ensemble is simple enough to allow for a relatively straightforward non-asymptotic variance analysis based on Doob decomposition~\cite{doobbook}. From a mathematical viewpoint, weighted ensemble is simply a resampling-based evolutionary algorithm. 
In this sense it resembles particle filters, sequential importance sampling, and sequential Monte Carlo. For a review of sequential Monte Carlo, see the textbook~\cite{del2004feynman}, the articles~\cite{del2014particle,del2005genealogical} or the compilation~\cite{doucet2001sequential}. There is some recent work on optimizing the Gibbs-Boltzmann input potential functions in sequential Monte Carlo~\cite{balesdent2013optimisation, chraibi2018optimal,jacquemart2016tuning,webber1,wouters2016rare} (see also~\cite{del2005genealogical}). We emphasize that weighted ensemble is different from most sequential Monte Carlo methods, as it relies on a bin-based resampling mechanism rather than a globally defined fitness function like a Gibbs-Boltzmann potential. In particular, sequential Monte Carlo is more commonly used to sample rare events on finite time horizons, and may not be appropriate for very long time computations of the sort considered here, as explained in~\cite{aristoff2019ergodic}. To our knowledge, the binning and particle allocation strategies we derive here are new. A similar allocation strategy for weighted ensemble on finite time horizons was proposed in~\cite{aristoff2016analysis}. Our allocation strategy, which minimizes {\em mutation variance} -- the variance corresponding to evolution of the replicas -- extends ideas from~\cite{aristoff2016analysis} to our steady-state setup. We draw an analogy between minimizing mutation variance and minimizing sensitivity in an Appendix. In most weighted ensemble simulations, and in~\cite{aristoff2016analysis}, bins are chosen in an ad hoc way. We show below, however, that choosing bins carefully is important, particularly if relatively few can be afforded. We propose a new binning strategy based on minimizing {\em selection variance} -- the variance associated with resampling from the replicas -- in which weighted ensemble bins are aggregated from a collection of smaller {\em microbins}~\cite{jeremy}. 
Formulas for the mutation and selection variance are derived in a companion paper~\cite{aristoff2019ergodic} that proves an ergodic theorem for weighted ensemble time averages. These variance formulas, which we use to define our allocation and bin optimizations, involve the Markov kernel $K$ that describes the evolution of the underlying Markov process between resampling times, as well as the solution, $h$, of a certain Poisson equation. We propose estimating $K$ and $h$ with techniques from Markov state modeling~\cite{husic2018markov,pande2010everything, sarich2010approximation, schutte2013metastability}. In this formulation, the microbins correspond to the Markov states, and a microbin-to-microbin transition matrix defines the approximations of $K$ and $h$. This article is organized as follows. In Section~\ref{sec:alg}, we introduce notation, give an overview of weighted ensemble, and describe the parameters we wish to optimize. In Section~\ref{sec:notation}, we present weighted ensemble in precise detail (Algorithm~\ref{alg1}), reproduce the aforementioned variance formulas (Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var}) from our companion paper~\cite{aristoff2019ergodic}, and describe the solution, $h$, to a certain Poisson equation arising from these formulas (equations~\eqref{h} and~\eqref{h2}). In Section~\ref{sec:var_min}, we introduce novel optimization problems (equations~\eqref{opt_allocation} and~\eqref{opt_bins}) for choosing the bins and particle allocation. The resulting allocation in~\eqref{Ntu} can be seen as a version of equation (6.4) in~\cite{aristoff2016analysis}, modified for steady state and our new microbin setup. These optimizations are idealized in the sense that they involve $K$ and $h$, which cannot be computed exactly. Thus in Section~\ref{sec:params}, we propose using microbins and Markov state modeling to estimate their solutions (Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}). 
In Section~\ref{sec:numerics}, we test our methods with a simple numerical example (Figures~\ref{fig_V_h_pi}-\ref{fig_optimization_data}). Concluding remarks and suggestions for future work are in Section~\ref{sec:remarks}. In the Appendix, we describe residual resampling~\cite{douc2005comparison}, a common resampling technique, and draw an analogy between sensitivity and our mutation variance minimization strategy. \section{Algorithm}\label{sec:alg} A weighted ensemble consists of a collection of replicas, or {\em particles}, belonging to a common state space, with associated positive scalar weights. The particles repeatedly undergo resampling and evolution steps. Using a genealogical analogy, we refer to particles before resampling as {\em parents}, and just after resampling as {\em children}. A child is initially just a copy of its parent, though it evolves independently of other children of the same parent. The total weight remains constant in time, which is important for the stability of long-time averages~\cite{aristoff2019ergodic} (and is a critical difference between the optimization algorithm described in~\cite{aristoff2016analysis} and the one outlined here). Between resampling steps, the particles evolve independently via the same Markov kernel $K$. The initial parents can be arbitrary, though their weights must sum to $1$; see Algorithm~\ref{alg_initialize} for a description of an initialization step tailored to steady-state calculations. The number $N_{\textup{init}}$ of initial particles can be larger than the number $N$ of weighted ensemble particles after the first resampling step. For the resampling or {\em selection} step, we require a collection of {\em bins}, denoted ${\mathcal B}$, and a {\em particle allocation} $N_t(u)_{t \ge 0}^{u \in {\mathcal B}}$, where $N_t(u)$ is the (nonnegative integer) number of children in bin $u \in {\mathcal B}$ at time $t$. 
We will assume the bins form a partition of state space (though in general they can be any grouping of the particles~\cite{aristoff2019ergodic}). The total number of children is always $N = \sum_{u \in {\mathcal B}} N_t(u)$. {\em Both the bins and particle allocation are user-chosen parameters, and to a large extent this article concerns how to pick these parameters.} In general, the bins and particle allocation can be time dependent and adaptive. For simpler presentation, however, we assume that the bins are based on a fixed partition of state space. After selection, the weight of each child in bin $u \in {\mathcal B}$ is $\omega_t(u)/N_t(u)$, where $\omega_t(u)$ is the sum of the weights of all the parents in bin $u$. By construction, this preserves the total weight $\sum_{u \in {\mathcal B}} \omega_t(u) = 1$. In every occupied bin~$u$, the $N_t(u) \ge 1$ children are selected from the parents in bin $u$ with probability proportional to the parents' weights. (Here, we mean bin $u$ is occupied if $\omega_t(u)>0$. In unoccupied bins, where $\omega_t(u) = 0$, we set $N_t(u) = 0$.) In the evolution or {\em mutation} step, the time $t$ advances, and all the children independently evolve one step according to a fixed Markov kernel $K$, keeping their weights from the selection step, and becoming the next parents. In practice, $K$ corresponds to the underlying process evaluated at {\em resampling times} $\Delta t$. That is, $K$ is a $\Delta t$-{\em skeleton} of the underlying Markov process~\cite{meyn}. For the mathematical analysis below, this will only be important when we consider varying the resampling times, and in particular the $\Delta t \to 0$ limit. (We think of this underlying process as being continuous in time, though of course time discretization is usually required for simulations. Weighted ensemble is used {\em on top} of an integrator of the underlying process; in particular weighted ensemble does not handle the time discretization. 
We will not be concerned with the unavoidable error resulting from time discretization.) {\em We assume that $K$ is uniformly geometrically ergodic~\cite{doucbook}} with respect to a stationary distribution, or steady state, $\mu$. Recall we are interested in steady-state averages. Thus for a given bounded real-valued function or {\em observable} $f$ on state space, we estimate $\int f\,d\mu$ at each time by evaluating the weighted sum of $f$ on the current collection of parent particles. We summarize weighted ensemble as follows: \vskip2pt \begin{itemize}[leftmargin=20pt] \item In the {selection step} at time $t$, inside each bin $u$, we resample $N_t(u)$ children from the parents in $u$, according to the distribution defined by their weights. After selection, all the children in bin $u$ have the same weight $\omega_t(u)/N_t(u)$, where $\omega_t(u)$ is the total weight in bin $u$ before and after selection. \item In each {mutation step}, the children evolve independently according to the Markov kernel $K$. After evolution, these children become the new parents. \item The weighted ensemble evolves by repeated selection and then mutation steps. The time $t$ advances after a single pair of selection and mutation steps. \end{itemize} \vskip2pt See Algorithm~\ref{alg1} for a detailed description of weighted ensemble. An important property of weighted ensemble is that it is unbiased no matter the choice of parameters: at time $t$ the weighted particles have the same distribution as a Markov chain evolving according to $K$. See Theorem~\ref{thm_unbiased}. With bad parameters, however, weighted ensemble can suffer from large variance, even worse than direct Monte Carlo. As there is no free lunch, choosing parameters cleverly requires some information about $K$, either gleaned from prior simulations or obtained adaptively during weighted ensemble simulations. We will assume we have a collection of {\em microbins}, which we use to gain information about $K$.
The microbins, like the weighted ensemble bins, form a partition of state space, and each bin will be a union of microbins. We use the term microbins because the microbins may be smaller than the actual weighted ensemble bins. We discuss the reasoning behind this distinction in Section~\ref{sec:params}; see Remark~\ref{rmk_bins}. \vskip5pt \section{Mathematical notation and algorithm}\label{sec:notation} We write $\xi_t^1,\ldots,\xi_t^{N}$ for the parents at time $t$ and $\omega_t^1,\ldots,\omega_t^{N}$ for their weights. Their children are denoted $\hat{\xi}_t^1,\ldots,\hat{\xi}_t^{N}$ with weights $\hat{\omega}_t^1,\ldots,\hat{\omega}_t^{N}$. Thus, weighted ensemble advances in time as follows: \begin{align*} &\{\xi_t^i\}^{i=1,\ldots,N} \xrightarrow{\textup{selection}} \{{\hat \xi}_t^i\}^{i=1,\ldots,N} \xrightarrow{\textup{mutation}} \{{\xi}_{t+1}^i\}^{i=1,\ldots,N},\\ &\{\omega_t^i\}^{i=1,\ldots,N} \xrightarrow{\textup{selection}} \{{\hat \omega}_t^i\}^{i=1,\ldots,{N}} \xrightarrow{\textup{mutation}} \{{\omega}_{t+1}^i\}^{i=1,\ldots,N}. \end{align*} The particles belong to a common standard Borel state space~\cite{durrett2019probability}. This state space is divided into a finite collection ${\mathcal B}$ of disjoint subsets (throughout we only consider measurable sets and functions). We define the {\em bin weights} at time $t$ as \begin{equation*} \omega_t(u) = \sum_{i:\xi_t^i \in u} \omega_t^i, \qquad u \in {\mathcal B}, \end{equation*} where the empty sum is zero (so an unoccupied bin $u$ has $\omega_t(u) = 0$). For the parent $\xi_t^i$ of $\hat{\xi}_t^j$, we write $\text{par}(\hat{\xi}_t^j) = \xi_t^i$. A child is just a copy of its parent: $$\text{par}(\hat{\xi}_t^j) = \xi_t^i \text{ implies }\hat{\xi}_t^j = \xi_t^i.$$ {\em Each child has a {\em unique} parent}. Setting the number of children of each parent completely defines the children, as the choices of the children's indices (the $j$'s in $\hat{\xi}_t^j$) do not matter.
The number of children of $\xi_t^i$ will be written $C_t^i$: \begin{equation}\label{children} C_t^i = \#\{j: \textup{par}(\hat{\xi}_t^j) = \xi_t^i\}, \end{equation} where $\#S =$ number of elements in a set $S$. Recall that $N_t(u)$ is the number of children in bin $u$ at time $t$. We require that there is at least one child in each occupied bin, no children in unoccupied bins, and $N$ total children at each time $t$. Thus, \begin{equation}\label{Ntu_cond} N_t(u) \ge 1 \text{ if } \omega_t(u)>0, \qquad N_t(u)= 0 \text{ if } \omega_t(u) = 0, \qquad \sum_{u \in {\mathcal B}}N_t(u) = N. \end{equation} We write ${\mathcal F}_t$ for the $\sigma$-algebra generated by the weighted ensemble up to, but not including, the $t$-th selection step. We will assume the particle allocation is known before selection. Similarly, we write $\hat{\mathcal F}_t$ for the $\sigma$-algebra generated by weighted ensemble up to and including the $t$-th selection step. In detail, \begin{align*} {\mathcal F}_t &= \sigma\left((\xi_s^i, \omega_s^i)_{0 \le s \le t}^{i=1,\ldots,N},N_s(u)_{0 \le s \le t}^{u \in {\mathcal B}},(\hat{\xi}_s^i,\hat{\omega}_s^i)_{0 \le s \le t-1}^{i=1,\ldots,N},(C_s^i)_{0 \le s \le t-1}^{i=1,\ldots,N}\right)\\ \hat{\mathcal F}_t &= \sigma\left((\xi_s^i, \omega_s^i)_{0 \le s \le t}^{i=1,\ldots,N},N_s(u)_{0 \le s \le t}^{u \in {\mathcal B}},(\hat{\xi}_s^i,\hat{\omega}_s^i)_{0 \le s \le t}^{i=1,\ldots,N},(C_s^i)_{0 \le s \le t}^{i=1,\ldots,N}\right). \end{align*} \begin{algorithm}\caption{Weighted ensemble } Pick initial parents and weights $(\xi_0^i,\omega_0^i)^{i=1,\ldots,N_{\textup{init}}}$ with $\sum_{i=1}^{N_{\textup{init}}} \omega_0^i = 1$, choose a collection ${\mathcal B}$ of bins, and define a final time $T$. 
Then for $t \ge 0$, iterate the following: \vskip5pt \begin{itemize}[leftmargin=20pt] \item {\em (Selection step)} Each parent $\xi_t^i$ is assigned a number $C_t^i$ of children, as follows: \vskip5pt In each occupied bin $u \in {\mathcal B}$, conditional on ${\mathcal F}_t$, let $(C_t^i)^{i:\textup{bin}(\xi_t^i) = u}$ be $N_t(u)$ samples from the distribution $\{\omega_t^i/\omega_t(u):\,\xi_t^i \in u\}$, where $N_t(u)^{u \in {\mathcal B}}$ satisfies~\eqref{Ntu_cond}. The children $(\hat{\xi}_t^i)^{i=1,\ldots,N}$ are defined by~\eqref{children}, with weights \begin{equation}\label{omegati} {\hat \omega}_t^i = \frac{\omega_t(u)}{N_t(u)}, \qquad \textup{if }{\hat \xi}_t^i \in u. \end{equation} Selections in distinct bins are conditionally independent. \vskip2pt \item {\em (Mutation step)} Each child $\hat{\xi}_t^i$ independently evolves one time step. Thus: \vskip5pt Conditionally on $\hat{\mathcal F}_t$, the children $(\hat{\xi}_t^i)^{i=1,\ldots,N}$ evolve independently according to the Markov kernel $K$, becoming the next parents $({\xi}_{t+1}^i)^{i=1,\ldots,N}$, with weights \begin{equation}\label{weight_mut} \omega_{t+1}^i = {\hat \omega}_t^i, \qquad i=1,\ldots,N. \end{equation} Then time advances, $t \leftarrow t+1$. Stop if $t = T$, else return to the selection~step. \end{itemize} \vskip5pt Algorithm~\ref{alg_opt_allocation} outlines an optimization for the allocation $N_t(u)^{u \in {\mathcal B}}$, and a procedure for choosing the bins ${\mathcal B}$ is in Algorithm~\ref{alg_opt_bins}. For the initialization, see Algorithm~\ref{alg_initialize}. \label{alg1} \end{algorithm} In Algorithm~\ref{alg1}, we do not explicitly say how we sample the $N_t(u)$ children in each bin $u$. Our framework below allows for any unbiased resampling scheme. We give a selection variance formula that assumes residual resampling; see Lemma~\ref{lem_sel_var} and Algorithm~\ref{alg_residual}. 
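A compact numerical sketch of Algorithm~\ref{alg1} may help fix ideas. The two-state kernel, the bins-equal-to-states binning, and the equal-split allocation below are all illustrative assumptions (the paper's allocation rule is derived later); the selection step uses residual resampling within each bin, and the running average is the ergodic estimator discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_resample(w, n):
    """Residual resampling: floor(n*w_i) guaranteed children for parent i,
    then a multinomial draw of the leftover children from the residuals."""
    expected = n * w / w.sum()
    copies = np.floor(expected).astype(int)
    resid = expected - copies              # the residuals delta_t^i of the text
    r = n - copies.sum()
    if r > 0:
        copies += rng.multinomial(r, resid / resid.sum())
    return copies

def we_step(states, weights, P, alloc):
    """One selection + mutation step of weighted ensemble, with bins = states."""
    new_states, new_weights = [], []
    for u, n_u in alloc.items():           # occupied bins only
        idx = np.where(states == u)[0]
        wu = weights[idx].sum()            # bin weight omega_t(u)
        for i, c in zip(idx, residual_resample(weights[idx], n_u)):
            for _ in range(c):             # mutation: each child takes one K-step
                new_states.append(rng.choice(len(P), p=P[states[i]]))
                new_weights.append(wu / n_u)   # children share the bin weight
    return np.array(new_states), np.array(new_weights)

def even_alloc(states, N):
    """Split the N children evenly among occupied bins (illustrative choice)."""
    occ = sorted(set(states.tolist()))
    base, rem = divmod(N, len(occ))
    return {u: base + (1 if k < rem else 0) for k, u in enumerate(occ)}

# Two-state kernel with stationary distribution (2/3, 1/3); f = indicator of state 1.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
N, T = 20, 2000
states, weights = np.zeros(N, dtype=int), np.full(N, 1.0 / N)
theta = 0.0
for t in range(T):
    theta += weights[states == 1].sum() / T        # running average theta_T
    states, weights = we_step(states, weights, P, even_alloc(states, N))
print(theta)  # close to the exact stationary average 1/3
```

Note that the total weight remains exactly $1$ at every step, since each bin's children carry weight $\omega_t(u)/N_t(u)$ and there are $N_t(u)$ of them.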
Residual resampling has performance on par with other standard resampling methods like systematic and stratified resampling~\cite{douc2005comparison}. See~\cite{webber2} for more details on resampling in the context of sequential Monte Carlo. \subsection{Ergodic averages} We are interested in using Algorithm~\ref{alg1} to estimate $$\theta_T \approx \int f\,d\mu$$ where we recall $\mu$ is the stationary distribution of the Markov kernel $K$, and \begin{equation}\label{theta_T} \theta_T = \frac{1}{T}\sum_{t=0}^{T-1} \sum_{i=1}^N \omega_t^i f(\xi_t^i). \end{equation} Note that $\theta_T$ is simply the running average of $f$ over the weighted ensemble up to time $T-1$. In particular,~\eqref{theta_T} is {\em not} a time average over ancestral lines that survive up to time $T-1$, but rather it is an average over the weighted ensemble at each time $0 \le t \le T-1$. Our time averages~\eqref{theta_T} require no replica storage and should have smaller variances than averages over surviving ancestral lines~\cite{aristoff2019ergodic}. \subsection{Consistency results} The next results, from a companion article~\cite{aristoff2019ergodic}, show that weighted ensemble is {unbiased} and that weighted ensemble time averages converge. The latter does not in general follow from the former, as standard unbiased particle methods such as sequential Monte Carlo can have variance explosion~\cite{aristoff2019ergodic}. (The proofs in~\cite{aristoff2019ergodic} have $N_{\textup{init}} = N$, but they are easily modified for $N_{\textup{init}} \ne N$.) \begin{theorem}[From~\cite{aristoff2019ergodic}]\label{thm_unbiased} In Algorithm~\ref{alg1}, suppose that the initial particles and weights are distributed as $\nu$, in the sense that \begin{equation}\label{initialization} {\mathbb E}\left[\sum_{i=1}^{N_{\textup{init}} }\omega_0^i g(\xi_0^i)\right] = \int g\,d\nu \end{equation} for all real-valued bounded functions $g$ on state space. 
Let $(\xi_t)_{t \ge 0}$ be a Markov chain with kernel $K$ and initial distribution $\xi_0 \sim \nu$. Then for any time $T > 0$, \begin{equation*} {\mathbb E}\left[\sum_{i=1}^{N} \omega_T^i g(\xi_T^i)\right] = {\mathbb E}[g(\xi_T)] \end{equation*} for all real-valued bounded functions $g$ on state space. \end{theorem} Recall we assume $K$ is uniformly geometrically ergodic~\cite{doucbook}. \begin{theorem}[From~\cite{aristoff2019ergodic}] \label{thm_ergodic} Weighted ensemble is ergodic in the following sense: $$\lim_{T \to \infty} \theta_T \stackrel{a.s.}{=} \int f\,d\mu.$$ \end{theorem} Theorem~\ref{thm_unbiased} does not use ergodicity of $K$, though Theorem~\ref{thm_ergodic} obviously does. \subsection{Variance analysis}\label{sec:variance} We will make use of the following analysis from~\cite{aristoff2019ergodic} concerning the weighted ensemble variance. Define the Doob martingales~\cite{doob1940regularity} \begin{equation}\label{doob_mart} D_t = {\mathbb E}[\theta_T|{\mathcal F}_t], \qquad \hat{D}_t = {\mathbb E}[\theta_T|\hat{\mathcal F}_t]. \end{equation} The Doob decomposition in Theorem~\ref{thm_doob} below filters the variance through the $\sigma$-algebras ${\mathcal F}_t$ and $\hat{\mathcal F}_t$. It is a way to decompose the variance into contributions from the initial condition and each time step. This type of telescopic variance decomposition is standard in sequential Monte Carlo, although it is usually applied at the level of measures on state space, which corresponds to the infinite particle limit $N \to \infty$~\cite{del2004feynman}. We use the finite $N$ formula directly to minimize variance, building on ideas in~\cite{aristoff2016analysis}. 
\begin{theorem}[From~\cite{aristoff2019ergodic}]\label{thm_doob} By Doob decomposition, \begin{align} \theta_T^2 - {\mathbb E}[\theta_T]^2 + R_T &= \left(D_0 - {\mathbb E}[\theta_T]\right)^2 \label{var0} \\ &\quad+ \sum_{t=1}^{T-1}{\mathbb E}\left[\left.\left(D_t-\hat{D}_{t-1}\right)^2\right|\hat{\mathcal F}_{t-1}\right] \label{var1}\\ &\qquad + \sum_{t=1}^{T-1}{\mathbb E}\left[\left.\left(\hat{D}_{t-1}-{D}_{t-1}\right)^2\right|{\mathcal F}_{t-1}\right], \label{var2} \end{align} where $R_T$ is mean-zero, ${\mathbb E}[R_T] = 0$. \end{theorem} The terms on the right-hand side of~\eqref{var0},\eqref{var1}, and~\eqref{var2} yield the contributions to the variance of $\theta_T$ from, respectively, the initial condition, mutation steps, and selection steps of Algorithm~\ref{alg1}. We thus refer to the summands of~\eqref{var1} and~\eqref{var2} as the {\em mutation variance} and {\em selection variance} of weighted ensemble. Below, define \begin{equation}\label{ht} h_{t} = \sum_{s=0}^{T-t-1} K^s f, \end{equation} and for any function $g$ and probability distribution $\eta$ on state space, let \begin{equation}\label{var} \textup{Var}_\eta g := \int g^2(\xi)\eta(d\xi) - \left(\int g(\xi)\eta(d\xi)\right)^2. \end{equation} Above and below, the dependence of $D_t$, ${\hat D}_t$ and $h_t$ on $T$ is suppressed. \begin{lemma}[From~\cite{aristoff2019ergodic}]\label{lem_mut_var} The mutation variance at time $t$ is \begin{align}\label{eq_mutvar} {\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|\hat{\mathcal F}_{t}\right] = \frac{1}{T^2}\sum_{i=1}^N \left(\hat{\omega}_t^i\right)^2\textup{Var}_{K(\hat{\xi}_t^i,\cdot)}h_{t+1}. 
\end{align} \end{lemma} To formulate the selection variance, we define \begin{equation*} \delta_t^i = \frac{N_t(u)\omega_t^i}{\omega_t(u)} - \left\lfloor\frac{N_t(u)\omega_t^i}{\omega_t(u)} \right\rfloor \qquad \text{if }\xi_t^i \in u, \qquad \delta_t(u) = \sum_{i: \xi_t^i \in u} \delta_t^i, \end{equation*} where $\lfloor x\rfloor$ is the floor function, or the greatest integer less than or equal to $x$. In Lemma~\ref{lem_sel_var}, we assume that the $(C_t^j)^{j=1,\ldots,N}$ in Algorithm~\ref{alg1} are obtained using residual resampling. See~\cite{douc2005comparison,webber2} or Algorithm~\ref{alg_residual} in the Appendix below for a description of residual resampling. \begin{lemma}[From~\cite{aristoff2019ergodic}]\label{lem_sel_var} The selection variance at time $t$ is \begin{align}\begin{split}\label{sel_varnew} &{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] = \frac{1}{T^2} \sum_{u\in {\mathcal B}} \left(\frac{\omega_t(u)}{N_t(u)}\right)^2\delta_t(u)\textup{Var}_{\eta_t(u)}Kh_{t+1},\\ & \qquad \eta_t(u) := \sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}\delta_{\xi_t^i} \end{split} \end{align} \end{lemma} By definition~\eqref{var}, the variances in Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var} rewrite as \begin{align*} &\textup{Var}_{K(\hat{\xi}_t^i,\cdot)}h_{t+1} = Kh_{t+1}^2(\hat{\xi}_t^i) - (Kh_{t+1}(\hat{\xi}_t^i))^2,\\ &\textup{Var}_{\eta_t(u)}Kh_{t+1} = \sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}(Kh_{t+1}(\xi_t^i))^2- \left( \sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}Kh_{t+1}(\xi_t^i)\right)^2. \end{align*} \subsection{Poisson equation} Because we are interested in large time horizons $T$, below we will consider the mutation and selection variances in the limit $T \to \infty$. 
We will see that for any probability distribution $\eta$ on state space, \begin{align*} \lim_{T \to \infty} \textup{Var}_\eta h_{t+1} &= \textup{Var}_\eta h\\ \lim_{T \to \infty} \textup{Var}_\eta Kh_{t+1} &= \textup{Var}_\eta Kh, \end{align*} where $h$ is the solution to the Poisson equation~\cite{lelievre2016partial,nummelin} \begin{equation}\label{h} ({Id}-K)h = f - \int f \,d\mu, \qquad \int h\,d\mu = 0, \end{equation} where $Id = $ the identity kernel. Existence and uniqueness of the solution $h$ easily follow from uniform geometric ergodicity of the Markov kernel $K$. Indeed, if $(\xi_t)_{t \ge 0}$ is a Markov chain with kernel $K$, then we can write \begin{align}\begin{split}\label{h2} h(\xi) &= \sum_{t=0}^\infty \left(K^tf(\xi) - \int f\,d\mu\right)\\ &= \sum_{t=0}^\infty \left({\mathbb E}[f(\xi_t)|\xi_0 = \xi]- {\mathbb E}[f(\xi_t)|\xi_0 \sim \mu]\right), \end{split} \end{align} where $\xi_0 \sim \mu$ indicates $\xi_0$ is initially distributed according to the steady state $\mu$ of $K$. Uniform geometric ergodicity and the Weierstrass $M$-test show that the sums in~\eqref{h2} converge absolutely and uniformly. As a consequence, $h$ in~\eqref{h2} solves~\eqref{h}. Interpreting~\eqref{h2}, $h(\xi)$ is the mean discrepancy of a time average of $f(\xi_t)$ starting at $\xi_0 = \xi$ with a time average of $f(\xi_t)$ starting at steady state $\xi_0 \sim \mu$. This discrepancy in the time averages has been normalized so that it has a nontrivial limit, and in particular does not vanish, as time goes to infinity. The Poisson solution $h$ is critical for understanding and estimating the weighted ensemble variance. Besides, $h$ can be used to identify model features, such as sets that are {\em metastable} for the underlying Markov chain defined by $K$, as well as narrow pathways between these sets. Metastable sets are, roughly speaking, regions of state space in which the Markov chain tends to become trapped. 
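When the state space is finite, both $\mu$ and the Poisson solution $h$ in~\eqref{h} can be obtained by direct linear algebra, appending the normalization $\sum_u \mu(u) = 1$ and the centering $\mu(h) = 0$ to the singular systems. A minimal sketch, where the two-state kernel and observable are illustrative:

```python
import numpy as np

def poisson_solution(K, f):
    """Return (mu, h) with mu K = mu and (I - K) h = f - mu(f), mu(h) = 0."""
    n = K.shape[0]
    # Stationary distribution: mu (I - K) = 0 with sum(mu) = 1.
    A = np.vstack([K.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    mu = np.linalg.lstsq(A, b, rcond=None)[0]
    # (I - K) kills constants; append the centering condition mu(h) = 0.
    B = np.vstack([np.eye(n) - K, mu])
    c = np.append(f - mu @ f, 0.0)
    h = np.linalg.lstsq(B, c, rcond=None)[0]
    return mu, h

# Illustrative two-state kernel and observable f = indicator of state 1.
K = np.array([[0.9, 0.1], [0.2, 0.8]])
f = np.array([0.0, 1.0])
mu, h = poisson_solution(K, f)
print(mu)  # approximately [2/3, 1/3]
print(h)   # approximately [-10/9, 20/9]
```

In the Markov state modeling approach described below, the microbin-to-microbin transition matrix plays the role of $K$ here.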
To understand the behavior of $h$, we define {\em metastable sets} more precisely. A region $R$ in state space is metastable for $K$ if $(\xi_t)_{t \ge 0}$ tends to equilibrate in $R$ much faster than it escapes from $R$. The rate of equilibration in $R$ can be understood in terms of the {\em quasistationary distribution}~\cite{QSD} in $R$. See {\em e.g.}~\cite{lelievre2012two} for more discussion on metastability. The Poisson solution $h$ tends to be nearly constant in regions that are metastable for $K$. This is because the mean discrepancy in a time average of $f(\xi_t)$ over two copies of $(\xi_t)_{t \ge 0}$ with different starting points in the same metastable set $R$ is small: both copies tend to reach the same quasistationary distribution in $R$ before escaping from $R$. In the regions between metastable sets, however, $h$ tends to have large variance. If $f$ is a characteristic or indicator function, this variance tends to be larger the closer these regions are to the support of $f$ (the set where $f = 1$). More generally, the variance of $h$ is larger near regions $R$ where the stationary average $\int_R f\,d\mu$ of $f$ is large. See Figure~\ref{fig_V_h_pi} for an illustration of these features. \section{Minimizing the variance}\label{sec:var_min} Our strategy for minimizing the variance is based on {\em choosing the particle allocation to minimize mutation variance} and {\em picking the bins to mitigate selection variance}. Minimizing mutation variance is closely connected with minimizing a certain sensitivity; see the Appendix below. Both strategies require some coarse estimates of $K$ and $h$. We propose using ideas from Markov state modeling to construct {\em microbins} from which we estimate $K$ and $h$. The microbins can be significantly smaller than the weighted ensemble bins, as we discuss below. \subsection{Resampling times}\label{sec:resample} Recall that weighted ensemble is fully characterized by the choice of resampling times, bins, and particle allocation.
Though we focus on the latter two here, we briefly comment on the former. The resampling times are implicit in our framework. We assume here that $K = K_{\Delta t}$ is a $\Delta t$-skeleton of an underlying Markov process, or a sequence of values of the underlying process at time intervals $\Delta t$. In this setup, $\Delta t$ is a fixed resampling time, and we are ignoring the time discretization. (Actually, the resampling times need not be fixed -- they can be any times at which the underlying process has the strong Markov property~\cite{aristoff2016analysis}. In practice, the underlying process must be discretized in time, and weighted ensemble is used with the discretized process.) Suppose that the underlying Markov process is one of those mentioned in the Introduction: either Langevin dynamics or a reaction network modeled by a continuous-time Markov chain on a finite state space. Suppose moreover that the microbins in continuous state space are domains with piecewise smooth boundaries (for instance, Voronoi regions), and that the bins are unions of microbins. Then the underlying process does not cross between distinct microbins, or between distinct bins, infinitely often in finite time. As a result, weighted ensemble should not degenerate in the limit as $\Delta t \to 0$, as we now show. Consider the variance from selection in Lemma~\ref{lem_sel_var}. By~\eqref{omegati}, the weights of the children in each bin $u \in {\mathcal B}$ are all equal to $\omega_t(u)/N_t(u)$ after the selection step. If $\Delta t$ is very small, then almost none of the children move to different microbins or bins in the mutation step. If exactly zero of the children change bins, then for residual resampling in the selection step at the next time $t$, provided the allocation $N_t(u)^{u \in {\mathcal B}}$ has not changed, we have $\delta_t^i = 0$ for all $i=1,\ldots,N$.
(Note that with our optimal allocation strategy in Algorithm~\ref{alg_opt_allocation}, if we avoid unnecessary resampling of $R_t(u)^{u \in {\mathcal B}}$, then $N_t(u)^{u \in {\mathcal B}}$ does not change unless particles move between microbins.) Thus from Lemma~\ref{lem_sel_var} there is {zero selection variance}, and in fact no resampling occurs in the selection step. Provided particles do not cross bins or microbins infinitely often in finite time, and the allocation only changes when particles move between microbins, this suggests there is {\em no variance blowup when $\Delta t \to 0$}. We expect then that the frequency $\Delta t$ of resampling should be driven not by variance cost but by computational cost, {\em e.g.} processor communication cost. \subsection{Minimizing mutation variance} The mutation variance depends on the choice of weighted ensemble bins as well as the particle allocation at each time $t$. In this section we focus on the particle allocation for an arbitrary choice ${\mathcal B}$ of bins. To understand this relationship between the allocation and mutation variance, following ideas from~\cite{aristoff2016analysis}, {\em we look at the mutation variance visible {before} selection}. It is so named because, unlike the mutation variance in Lemma~\ref{lem_mut_var}, it is a function of quantities $\omega_t(u)$, $N_t(u)$, $(\omega_t^i,\xi_t^i)^{i=1,\ldots,N}$ that are known at time $t$ before selection. \begin{proposition}\label{prop_vis_var} The mutation variance visible before selection satisfies \begin{equation*} \lim_{T \to \infty} T^2{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] = \sum_{u \in {\mathcal B}}\frac{\omega_t(u)}{N_t(u)} \sum_{i: {\xi}_t^i \in u}\omega_t^i \textup{Var}_{K(\xi_t^i,\cdot)} h, \end{equation*} where $h$ is defined in~\eqref{h}. 
\end{proposition} \begin{proof} By definition of the selection step (see Algorithm~\ref{alg1}), \begin{equation}\label{betati} {\mathbb E}[C_t^i|{\mathcal F}_t] = \frac{N_t(u)\omega_t^i}{\omega_t(u)}. \end{equation} From Lemma~\ref{lem_mut_var}, \begin{align}\begin{split}\label{vis_mut_var} &T^2{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] = {\mathbb E}\left[\left.\sum_{i=1}^N (\hat{\omega}_t^i)^2 \textup{Var}_{K(\hat{\xi}_t^i,\cdot)}h_{t+1}\right|{\mathcal F}_t\right] \\ &= \sum_{u \in {\mathcal B}}\left(\frac{\omega_t(u)}{N_t(u)}\right)^2 {\mathbb E}\left[\left.\sum_{i: \hat{\xi}_t^i \in u} \textup{Var}_{K(\hat{\xi}_t^i,\cdot)}h_{t+1}\right|{\mathcal F}_t\right] \qquad \text{(using }\eqref{omegati}\text{)}\\ &= \sum_{u \in {\mathcal B}}\left(\frac{\omega_t(u)}{N_t(u)}\right)^2 \sum_{i: {\xi}_t^i \in u}{\mathbb E}\left[\left.\sum_{j: \textup{par}(\hat{\xi}_t^j) = \xi_t^i} [Kh_{t+1}^2(\hat{\xi}_t^j) - (Kh_{t+1}(\hat{\xi}_t^j))^2]\right|{\mathcal F}_t\right] \\ &= \sum_{u \in {\mathcal B}}\frac{\omega_t(u)}{N_t(u)} \sum_{i: {\xi}_t^i \in u}\omega_t^i[Kh_{t+1}^2({\xi}_t^i) - (Kh_{t+1}({\xi}_t^i))^2] \qquad \text{(using }\eqref{betati}\text{)}. \end{split} \end{align} In light of~\eqref{ht} and~\eqref{h2}, and using the fact that $$\textup{Var}_{K(\xi_t^i,\cdot)}h = Kh^2({\xi}_t^i) - (Kh({\xi}_t^i))^2,$$ we get the result from letting $T \to \infty$. \end{proof} We let $T \to \infty$ since we are interested in long-time averages. The simpler formulas that result, as they involve $h$ instead of $h_t$, allow for strategies to estimate fixed optimal bins {before beginning weighted ensemble simulations}, which we discuss more below. 
Thus for minimizing the limiting mutation variance, we consider the following optimization: \begin{align}\begin{split}\label{opt_allocation} &\text{minimize } \sum_{u \in {\mathcal B}}\frac{\omega_t(u)}{N_t(u)} \sum_{i: {\xi}_t^i \in u}\omega_t^i\textup{Var}_{K(\xi_t^i,\cdot)}h,\\ &\qquad \text{ over all choices of }N_t(u) \in {\mathbb R}^+ \text{ such that }\sum_{u \in {\mathcal B}} N_t(u) = N, \end{split} \end{align} where the bins ${\mathcal B}$ are fixed, and we temporarily allow the allocation $N_t(u)^{u \in {\mathcal B}}$ to be noninteger. A Lagrange multiplier calculation shows that the solution to~\eqref{opt_allocation} is \begin{equation}\label{Ntu} N_t(u) = \frac{N\sqrt{\omega_t(u)\sum_{i:\xi_t^i \in u} \omega_t^i \textup{Var}_{K(\xi_t^i,\cdot)}h}}{\sum_{v \in {\mathcal B}}\sqrt{\omega_t(v)\sum_{i:\xi_t^i \in v} \omega_t^i \textup{Var}_{K(\xi_t^i,\cdot)}h}}, \end{equation} provided the denominator above is nonzero. {\em Note this solution is idealized, as $N_t(u)$ must always be an integer, and $h$ and $K$ are not known exactly.} Our choice of the particle allocation will be based on~\eqref{Ntu}; we explain a practical implementation in Algorithm~\ref{alg_opt_allocation}. Notice that at each time $t$ we only minimize one term, the summand in~\eqref{var1} corresponding to the mutation variance at time $t$, in the Doob decomposition in Theorem~\ref{thm_doob}. Later, when we optimize bins, we will minimize the summand in~\eqref{var2} corresponding to the selection variance at time $t$. In particular, we only minimize the mutation and selection variances at the current time, and not the sum of these variances over all times. In the $T \to \infty$ limit, we expect that weighted ensemble reaches a steady state, provided the bin choice and allocation strategy ({\em e.g.} from Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}) do not change over time.
If the weighted ensemble indeed reaches a steady state, then the mutation and selection variances become stationary in $t$, making it reasonable to minimize them only at the current time. (Under appropriate conditions, the variances in~\eqref{var1}-\eqref{var2} should also become {independent} over time $t$ in the $N \to \infty$ limit. See~\cite{del2004feynman} for related results in the context of sequential Monte Carlo.) Note the term $\textup{Var}_{K(\xi_t^i,\cdot)}h = Kh^2(\xi_t^i) - (Kh(\xi_t^i))^2$ appearing in~\eqref{Ntu}. As discussed in Section~\ref{sec:notation}, {\em this variance tends to be large in regions between metastable sets}. The optimal allocation~\eqref{Ntu} favors putting children in such regions, increasing the likelihood that their descendants will visit both the adjacent metastable sets. See the Appendix for a connection between minimizing mutation variance and minimizing the sensitivity of the stationary distribution $\mu$ to perturbations of $K$. \subsection{Mitigating selection variance}\label{sec:mit_sel} We begin by observing that if bins are small, then so is the selection variance. In particular, if each bin is a single point in state space, then Lemma~\ref{lem_sel_var} shows that the selection variance is {\em zero}. One way, then, to get small selection variance is to have a lot of bins. When simulations of the underlying Markov chain are very expensive, however, we cannot afford a large number of bins; see Remark~\ref{rmk_bins} in Section~\ref{sec:params} below. As a result the bins are not so small, and we investigate the selection variance to decide how to construct them. 
\begin{lemma}\label{lem_sel_var2} The selection variance at time $t\ge 1$ satisfies \begin{align}\begin{split}\label{sel_var2} &\lim_{T \to \infty}T^2{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] = \sum_{u \in {\mathcal B}} \left(\frac{\omega_t(u)}{N_t(u)}\right)^2\delta_t(u)\textup{Var}_{\eta_t(u)}Kh, \end{split} \end{align} where $\eta_t(u)$ is defined in Lemma~\ref{lem_sel_var} and $h$ is defined in~\eqref{h}. \end{lemma} \begin{proof} From Lemma~\ref{lem_sel_var},~\eqref{ht} and~\eqref{h2}, letting $T \to \infty$ gives the result. \end{proof} We choose to mitigate selection variance by choosing bins so that~\eqref{sel_var2} is small. This likely requires some sort of search in bin space. For simplicity, we assume that the bins do not change in time and are chosen at the start of Algorithm~\ref{alg1}. Of course it is possible to update bins adaptively, at a frequency that depends on how the cost of bin searches compares to that of particle evolution. We will minimize an {\em agnostic} variant of~\eqref{sel_var2}, for which we make no assumptions about $N_t(u)$, $\omega_t(u)$ and $\delta_t(u)$. This allows us to optimize fixed bins using a time-independent objective function, without taking the particle allocation into account. Our agnostic optimization also is not specific to residual resampling. Indeed, though the precise formula for the selection variance depends on the resampling method, the selection variance should always contain terms of the form $\text{Var}_\eta Kh$, where $\eta$ are probability distributions in the individual weighted ensemble bins; see~\cite{aristoff2019ergodic}. It is exactly these terms that we choose to minimize. 
Thus for our agnostic selection variance minimization, we let $$\eta_u^{\textup{unif}}(d\xi) = \frac{\mathbbm{1}_{\xi \in u}\,d\xi}{\int \mathbbm{1}_{\xi \in u}\,d\xi}$$ be the uniform distribution in bin $u \in {\mathcal B}$, and consider the following problem: \begin{align}\begin{split}\label{opt_bins} &\text{minimize } \sum_{u \in {\mathcal B}}\text{Var}_{\eta_u^{\textup{unif}}} Kh \\ & \qquad \text{over choices of } {\mathcal B,} \text{ subject to the constraint } \#{\mathcal B} = M. \end{split} \end{align} Like~\eqref{opt_allocation}, {\em this is idealized because we cannot directly access $K$ or $h$.} We describe a practical implementation of~\eqref{opt_bins} in Algorithm~\ref{alg_opt_bins}. Informally, solutions to~\eqref{opt_bins} are characterized by the property that, inside each individual bin $u$, the value of $Kh$ does not change very much. Our choice of bins is based on~\eqref{opt_bins}. Here $M$ is the desired total number of bins. Our agnostic perspective leads us to use the uniform distribution $\eta_u^{\textup{unif}}$ in each bin $u$. When bins are formed from combinations of a fixed collection of microbins,~\eqref{opt_bins} is a discrete optimization problem that usually lacks a closed form solution. Algorithm~\ref{alg_opt_bins} below solves a discrete version of~\eqref{opt_bins} by simulated annealing. Because of the similarity of~\eqref{opt_bins} with the $k$-means problem~\cite{kmeans}, we expect that there are more efficient methods, but this is not the focus of the present work. \section{Microbins and parameter choice}\label{sec:params} To approximate the solutions to~\eqref{opt_allocation} and~\eqref{opt_bins}, we use {\em microbins} to gain information about $K$ and $h$. The collection ${\mathcal M}{\mathcal B}$ of microbins is a finite partition of state space that refines the weighted ensemble bins ${\mathcal B}$, in the sense that every element of ${\mathcal B}$ is a union of elements of ${\mathcal M}{\mathcal B}$. 
Thus each bin is composed of a number of microbins, and each microbin is inside exactly one bin. The idea is to use exploratory simulations, over short time horizons, to approximate $K$ and $h$ by observing transitions between microbins. In more detail, we estimate the probability to transition from microbin $p \in {\mathcal M}{\mathcal B}$ to microbin $q \in {\mathcal M}{\mathcal B}$ by a matrix $\tilde{K} = (\tilde{K}_{pq})$, \begin{equation}\label{tildeK} \tilde{K}_{pq} \approx \dfrac{ \iint \nu(d\xi)K(\xi,d\xi')\mathbbm{1}_{\xi \in p,\,\xi' \in q}}{\int \nu(d\xi)\mathbbm{1}_{\xi \in p}}, \end{equation} and we estimate $f$ on microbin $p \in {\mathcal M}{\mathcal B}$ by a vector $\tilde{f} = (\tilde{f}_p)$, \begin{equation}\label{tildef} \tilde{f}_p \approx \frac{\int f(\xi)\mathbbm{1}_{\xi \in p}\,\nu(d\xi)}{\int \mathbbm{1}_{\xi \in p}\nu(d\xi)}. \end{equation} Here $\nu$ is some convenient measure, for instance an empirical measure obtained from preliminary weighted ensemble simulations. This strategy echoes work in the Markov state model community~\cite{husic2018markov,pande2010everything, sarich2010approximation,schutte2013metastability}. For small enough microbins, we could replace $\nu$ in~\eqref{tildeK} and~\eqref{tildef} with any other measure without much changing the values of the estimates on the right-hand sides of~\eqref{tildeK} and~\eqref{tildef}. Moreover, if $f$ is the characteristic function of a microbin, then $\nu$ could be replaced with any measure supported in microbin $p$ {\em without changing at all} the value of the right-hand side of~\eqref{tildef}. This is the case for the mean first passage time problem mentioned in the Introduction, and fleshed out in the numerical example in Section~\ref{sec:numerics}: there, $f$ is the characteristic function of the target set, which can be chosen to be a microbin.
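As a concrete illustration of the estimates~\eqref{tildeK} and~\eqref{tildef}, the following Python sketch works in one dimension with uniform microbins on $[0,1)$. It is our own illustration, not part of the method statement: the function names, the `step` mutation kernel, and the choice of $\nu$ as a finite set of sample points `xs` are all assumptions.

```python
import numpy as np

def estimate_K_f(xs, step, f, n_micro, rng):
    """Sketch of the estimates of tilde{K} and tilde{f}: states live in
    [0, 1), microbin p is [p/n_micro, (p+1)/n_micro), `step(x, rng)` performs
    one mutation step, and the points `xs` play the role of samples from nu.
    Assumes every microbin is visited by at least one sample."""
    micro = lambda x: min(int(x * n_micro), n_micro - 1)
    counts = np.zeros((n_micro, n_micro))  # observed transitions p -> q
    f_sum = np.zeros(n_micro)              # accumulated values of f per microbin
    visits = np.zeros(n_micro)             # number of samples per microbin
    for x in xs:
        p = micro(x)
        counts[p, micro(step(x, rng))] += 1.0
        f_sum[p] += f(x)
        visits[p] += 1.0
    K_tilde = counts / visits[:, None]     # row-normalize: each row sums to 1
    f_tilde = f_sum / visits
    return K_tilde, f_tilde
```

Row-normalizing the transition counts is exactly the empirical version of~\eqref{tildeK}, with $\nu$ the empirical measure of the sample points.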
\begin{algorithm}\caption{Optimizing the particle allocation} Given the particles and weights $(\xi_0^i,\omega_0^i)^{i=1,\ldots,N_{\textup{init}}}$ at $t = 0$, or $(\xi_t^i,\omega_t^i)^{i=1,\ldots,N}$ at $t\ge 1$: \begin{itemize}[leftmargin=20pt] \item Define the following approximate solution to~\eqref{opt_allocation}: \begin{equation}\label{star} \tilde{N}_t(u) = \frac{N\sqrt{\omega_t(u) \sum_{i:\xi_t^i \in u} \omega_t^i [{\tilde K}{\tilde h}^2 - ({\tilde K}\tilde{h})^2]_{p(\xi_t^i)}}}{\sum_{u \in {\mathcal B}}\sqrt{\omega_t(u) \sum_{i:\xi_t^i \in u} \omega_t^i [{\tilde K}{\tilde h}^2 - ({\tilde K}\tilde{h})^2]_{p(\xi_t^i)}}}, \end{equation} where $p(\xi_t^i) \in {\mathcal M}{\mathcal B}$ is the microbin containing $\xi_t^i$. \item Let $\tilde{N}$ count the occupied bins, \begin{equation*} \tilde{N} = \sum_{u \in {\mathcal B}} \mathbbm{1}_{\omega_t(u)>0}. \end{equation*} \item Let $R_t(u)^{u \in {\mathcal B}}$ be $N-{\tilde N}$ samples from the distribution $\{\tilde{N}_t(u)/N: u \in {\mathcal B}\}$. \item In Algorithm~\ref{alg1}, define the particle allocation as \begin{equation*} N_t(u) = {\mathbbm{1}}_{\omega_t(u)>0} + R_t(u). \end{equation*} \end{itemize} If the denominator of~\eqref{star} is $0$, we set $N_t(u) = \#\{i:\xi_t^i \in u\}$. \label{alg_opt_allocation} \end{algorithm} With $\tilde{f}$ and the microbin-to-microbin transition matrix $\tilde{K}$ in hand, we can obtain an approximate solution $\tilde{h}$ to the Poisson equation~\eqref{h}, simply by replacing $K$, $f$ and $\mu$ in that equation with, respectively, $\tilde{K}$, $\tilde{f}$ and the stationary distribution $\tilde{\mu}$ of $\tilde{K}$ (we assume $\tilde{K}$ is aperiodic and irreducible). 
That is, $\tilde{h}$ solves \begin{equation}\label{tildeh} (\tilde{I} - \tilde{K})\tilde{h} = \tilde{f} - \tilde{f}^T\tilde{\mu}\tilde{\mathbbm{1}}, \qquad {\tilde h}^T\tilde{\mu} = 0, \end{equation} where $\tilde{I}$ and $\tilde{\mathbbm{1}}$ are the identity matrix and all ones column vector of the appropriate sizes, and $\tilde{f}$, $\tilde{\mu}$ and $\tilde{h}$ are column vectors. Then we can approximate the solutions to~\eqref{opt_allocation} and~\eqref{opt_bins} by simply replacing $K$ and $h$ in those optimization problems with $\tilde{K}$ and $\tilde{h}$. See Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins} for details. We can also use $\tilde{\mu}$ to initialize weighted ensemble; see Algorithm~\ref{alg_initialize}. We have in mind that the microbins are constructed using ideas from Markov state modeling~\cite{husic2018markov,pande2010everything, sarich2010approximation,schutte2013metastability}. In this setup, the microbins are simply the Markov states. These could be determined from a clustering analysis ({\em e.g.}, using $k$-means~\cite{kmeans}) from preliminary weighted ensemble simulations with short time horizons. The resulting Markov state model can be crude: it will be used only for choosing parameters, and weighted ensemble is exact no matter the parameters. Indeed if our Markov state model was very refined, it could be used directly to estimate $\int f \,d\mu$. In practice, we expect our crude model could estimate $\int f \,d\mu$ with significant bias. In our formulation, a bad set of parameters may lead to large variance, but there is never any bias. In short, the Markov state model should be good enough to pick reliable weighted ensemble parameters, but not necessarily good enough to accurately estimate $\int f \,d\mu$. 
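In code, the linear system~\eqref{tildeh} and the square-root allocation rule reduce to a small linear-algebra computation. The sketch below is our own illustration (the function names are hypothetical): it computes $\tilde{\mu}$ and $\tilde{h}$ from a row-stochastic $\tilde{K}$ and a vector $\tilde{f}$, then evaluates the square-root allocation at the level of bins.

```python
import numpy as np

def poisson_solution(K, f):
    """Solve (I - K) h = f - (f^T mu) 1 with h^T mu = 0, where mu is the
    stationary distribution of the row-stochastic matrix K, assumed
    irreducible and aperiodic."""
    n = K.shape[0]
    # stationary distribution: left eigenvector of K for eigenvalue 1
    w, V = np.linalg.eig(K.T)
    mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    mu = mu / mu.sum()
    # (I - K) is singular; append the normalization h^T mu = 0 as an extra row
    A = np.vstack([np.eye(n) - K, mu])
    b = np.append(f - f @ mu, 0.0)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu, h

def sqrt_allocation(N, bin_weights, bin_var_terms):
    """Square-root rule: bin_var_terms[u] stands for the within-bin sum
    of omega_t^i * Var_{K(xi_t^i, .)} h appearing under the square root."""
    s = np.sqrt(np.asarray(bin_weights) * np.asarray(bin_var_terms))
    return N * s / s.sum()
```

For instance, two bins of equal weight whose variance terms differ by a factor of four receive allocations in a two-to-one ratio, reflecting the square root in the rule.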
\begin{remark}\label{rmk_bins} We distinguish microbins from bins because {the number of weighted ensemble bins is limited by the number of particles we can afford to simulate over the time horizon needed to reach $\mu$}. As an extreme case, suppose we have many more bins than particles, so that all bins contain $0$ or $1$ particles at almost every time. Then, because Algorithm~\ref{alg1} requires at least one child per occupied bin, parents almost never have more than one child, and we recover direct Monte Carlo. This condition, that the collection of parents in a given bin must have at least one child, is essential for the stability of long-time calculations~\cite{aristoff2019ergodic}. As a result, using too many bins leads to poor weighted ensemble performance. A very rough rule of thumb is that the number $M$ of bins should not be too much larger than the number $N$ of particles. \end{remark} \begin{algorithm}\caption{Optimizing the bins} Choose an initial collection ${\mathcal B}$ of bins. Define an objective function on bin space, \begin{equation}\label{objective} {\mathcal O}({\mathcal B}') = \sum_{u \in {\mathcal B}'} \text{Var} (\tilde{K}\tilde{h}|_u), \end{equation} where $\tilde{K}\tilde{h}|_u$ is the restriction of $\tilde{K}\tilde{h}$ to $\{p \in {\mathcal M}{\mathcal B}: p \subseteq u\}$, and $\text{Var} (\tilde{K}\tilde{h}|_u)$ is the usual vector population variance. Choose an annealing parameter $\alpha>0$, set ${\mathcal B}_{opt} = {\mathcal B}$, and iterate the following for a user-prescribed number of steps: \vskip5pt \begin{itemize} \item[1.] Perturb ${\mathcal B}$ to get new bins ${\mathcal B}'$ (say, by moving a microbin from one bin to another). \item[2.] With probability $\min\{1,\exp[\alpha({\mathcal O}(\mathcal{B}) - {\mathcal O}(\mathcal{B'}))]\}$, set ${\mathcal B} = {\mathcal B}'$. \item[3.] If ${\mathcal O}({\mathcal B}) < {\mathcal O}({\mathcal B}_{opt})$, then update ${\mathcal B}_{opt} = {\mathcal B}$. Return to Step 1.
\end{itemize} \vskip5pt Once the bin search is complete, the output is ${\mathcal B} = {\mathcal B}_{opt}$. \label{alg_opt_bins} \end{algorithm} The number of weighted ensemble bins is limited by the number $N$ of particles. (In the references in the Introduction, $N$ is usually on the order of $10^2$ to $10^3$.) The number of microbins, on the other hand, is limited primarily by the cost of the sampling that produces $\tilde{h}$ and $\tilde{K}$. The microbins could be computed by post-processing exploratory weighted ensemble data generated using larger bins. The {quality} of the microbin-to-microbin transition matrix $\tilde{K}$ depends on the microbins and the number of particles used in these exploratory simulations. But these exploratory simulations, compared to our steady-state weighted ensemble simulations, could use more particles as their time horizons can be much shorter. As a result, the number of microbins can be much greater than the number of bins. In Algorithm~\ref{alg_opt_bins}, we could enforce an additional condition that the bins must be connected regions in state space. We do this in our implementation of Algorithm~\ref{alg_opt_bins} in Section~\ref{sec:numerics} below. Traditionally, bins are connected, not-too-elongated regions, {\em e.g.} Voronoi regions. Bins are chosen this way because resampling in bins with distant particles can lead to a large variance in the weighted ensemble. However, {\em since we are employing weighted ensemble only to compute $\int f\,d\mu$ for a single observable $f$}, a large variance in the full ensemble can be tolerated so long as the variance associated with estimating $\int f\,d\mu$ is still small. This could be achieved with disconnected or elongated bins, or even a non-spatial assignment of particles to bins (based {\em e.g.} on the values of $\tilde{K}\tilde{h}$ on the microbins containing the particles). We leave a more complete investigation to future work.
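To make the annealing loop of Algorithm~\ref{alg_opt_bins} concrete, suppose (as in our implementation in Section~\ref{sec:numerics}) that bins are restricted to contiguous runs of microbins. The following Python sketch is our own illustration: the function name and the single-boundary proposal move are illustrative choices, not part of the algorithm statement.

```python
import numpy as np

def anneal_bins(Kh, M, alpha, n_iter, rng):
    """Minimize sum_u Var(Kh|_u) over partitions of len(Kh) microbins into M
    contiguous bins, by simulated annealing.  `Kh` is the vector tilde{K}
    tilde{h} on the microbins; returns the M - 1 interior boundary indices."""
    n = len(Kh)
    def objective(bounds):
        edges = [0] + list(bounds) + [n]
        return sum(np.var(Kh[a:b]) for a, b in zip(edges, edges[1:]))
    bounds = sorted(rng.choice(range(1, n), size=M - 1, replace=False))
    best, best_obj = list(bounds), objective(bounds)
    cur_obj = best_obj
    for _ in range(n_iter):
        # propose: nudge one interior boundary by one microbin
        prop = list(bounds)
        j = rng.integers(M - 1)
        prop[j] += rng.choice([-1, 1])
        lo = prop[j - 1] if j > 0 else 0
        hi = prop[j + 1] if j < M - 2 else n
        if not (lo < prop[j] < hi):
            continue  # reject proposals that empty or disorder a bin
        new_obj = objective(prop)
        # Metropolis acceptance with annealing parameter alpha
        if new_obj <= cur_obj or rng.random() < np.exp(alpha * (cur_obj - new_obj)):
            bounds, cur_obj = prop, new_obj
            if cur_obj < best_obj:
                best, best_obj = list(bounds), cur_obj
    return best
```

On a piecewise-constant profile of $\tilde{K}\tilde{h}$, the search places the bin boundary at the jump, since that partition makes each within-bin variance zero.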
\begin{algorithm}\caption{Initializing weighted ensemble} After the preliminary simulations that produce a collection of particles and weights $(\xi_0^i,\omega^i)^{i=1,\ldots,N_{\textup{init}}}$, together with approximations $\tilde{K}$, $\tilde{\mu}$, $\tilde{f}$ and $\tilde{h}$ of $K$, $\mu$, $f$, and $h$: \vskip5pt \begin{itemize}[leftmargin=20pt] \item Adjust the weights of the particles in each microbin $p$ according to $\tilde{\mu}_p$: \begin{equation*} \omega_0^i = \omega^i \frac{\tilde{\mu}_p}{\sum_{j:\xi_0^j \in p}\omega^j}, \qquad \text{if }\xi_0^i \in p. \end{equation*} There should be at least one initial particle in each microbin, $$\sum_{j:\xi_0^j \in p}\omega^j>0, \qquad p \in {\mathcal M}{\mathcal B}.$$ \item Proceed to the selection step of Algorithm~\ref{alg1} at time $t = 0$, with the initial particles $\xi_0^1,\ldots,\xi_0^{N_{\textup{init}}}$ having the adjusted weights $\omega_0^1,\ldots,\omega_0^{N_{\textup{init}}}$. \end{itemize} \vskip5pt The number, $N_{\textup{init}}$, of initial particles can be much greater than the number, $N$, of particles in the weighted ensemble simulations of Algorithm~\ref{alg1}. \label{alg_initialize} \end{algorithm} \subsection{Initialization} Note that the steady state $\tilde{\mu}$ of $\tilde{K}$ can be used to precondition the weighted ensemble simulations, so that they start closer to the true steady state. This basically amounts to adjusting the weights of the initial particles so that they match $\tilde{\mu}$. This is called {\em reweighting} in the weighted ensemble literature~\cite{bhatt2010steady,jeremy,suarez,zuckerman}. One way to do this is the following. Take initial particles and weights $(\xi_0^i,\omega^i)^{i=1,\ldots,N_{\textup{init}}}$ from the preliminary simulations that define $\tilde{K}$, $\tilde{\mu}$, $\tilde{f}$ and $\tilde{h}$. 
These initial particles can be a large subsample from these simulations; in particular we can have an initial number of particles $N_{\textup{init}}$ much greater than the number $N$ of particles in the weighted ensemble simulations. (The large number can be obtained by sampling at different times along the preliminary simulation particle ancestral lines.) {\em We require that there is at least one initial particle in each microbin.} The weights $(\omega^i)^{i=1,\ldots,N_{\textup{init}}}$ of these particles are adjusted using $\tilde{\mu}$ to get new weights $(\omega_0^i)^{i=1,\ldots,N_{\textup{init}}}$, such that the total adjusted weight in each microbin matches the value of $\tilde{\mu}$ on the same microbin. Then these $N_{\textup{init}}\gg N$ initial particles are fed into the first ($t = 0$) selection step of Algorithm~\ref{alg1}. This selection step prunes the number of particles to a manageable number, $N$, for the rest of the main weighted ensemble simulation. See Algorithm~\ref{alg_initialize} for a precise description of this initialization. \subsection{Gain over naive parameter choices} The gain from optimizing parameters comes from the larger number of particles that optimized weighted ensemble puts in important regions of state space, compared to a naive parameter choice or direct Monte Carlo. {\em These important regions are exactly the ones identified by $h$}; roughly speaking they are regions $R$ where the variance of $h$ is large. A rule of thumb is that the variance can decrease by a factor of up to \begin{equation}\label{gain} \frac{\text{average }\#\text{ of particles in }R \text{ with optimized parameters}}{\text{average }\#\text{ of particles in }R \text{ with naive method}}.
\end{equation} To see why, consider the mutation variance from Proposition~\ref{prop_vis_var}, \begin{align}\begin{split}\label{vis_mut_var2} \lim_{T \to \infty} T^2{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] = \sum_{u \in {\mathcal B}}\frac{\omega_t(u)}{N_t(u)} \sum_{i: {\xi}_t^i \in u}\omega_t^i\textup{Var}_{K(\xi_t^i,\cdot)}h. \end{split} \end{align} The contribution to this mutation variance at time $t$ from bin $u$ is \begin{equation*} \frac{\omega_t(u)}{N_t(u)} \sum_{i: {\xi}_t^i \in u}\omega_t^i\textup{Var}_{K(\xi_t^i,\cdot)}h. \end{equation*} Increasing $N_t(u)$ by some factor decreases the mutation variance from bin $u$ at time~$t$ by the same factor. The mutation variance can be reduced by a factor of almost~\eqref{gain} if $N_t(u)$ is increased by the factor~\eqref{gain} in the bins $u$ where $\textup{Var}_{K(\xi_t^i,\cdot)}h$ is large, and if $\textup{Var}_{K(\xi_t^i,\cdot)}h$ is small enough in the other bins that decreasing the allocation in those bins does not significantly increase the mutation variance. Of course the variance formulas in Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var} can, in principle, more precisely describe the gain, although it is difficult to accurately estimate the values of these variances a priori outside of the $N \to \infty$ limit. Since we focus on relatively small $N$, we do not pursue this analytic direction. Instead we numerically illustrate the improvement from optimizing parameters in Figures~\ref{fig_vary_bins} and~\ref{fig_optimization_data}. \subsection{Adaptive methods} We have proposed handling the optimizations~\eqref{opt_allocation} and~\eqref{opt_bins} by approximating $K$ and $h$ with ${\tilde{K}}$ and $\tilde{h}$, where the latter are built by observing microbin-to-microbin transitions and solving the appropriate Poisson problem.
Since $K$ and $h$ are fixed in time and we are doing long-time calculations, it is natural to estimate them adaptively, for instance via stochastic approximation~\cite{kushner2003stochastic}. Thus both~\eqref{opt_allocation} and~\eqref{opt_bins} could be solved adaptively, at least in principle. Depending on the number of microbins, it may be relatively cheap to compute $h$ compared to the cost of evolving particles. If this is the case, it is natural to solve~\eqref{opt_allocation} on the fly. We could also perform bin searches intermittently, depending on their cost. \section{Numerical illustration}\label{sec:numerics} In this section we illustrate the optimizations in Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}, for a simple example of a mean first passage time computation. Consider {\em overdamped Langevin dynamics} \begin{equation}\label{ovd_lang} dX_t = -V'(X_t)\,dt + \sqrt{2\beta^{-1}}dW_t, \end{equation} where $\beta = 5$, $(W_t)_{t \ge 0}$ is a standard Brownian motion, and the potential energy is \begin{equation*} V(x) = \begin{cases} 5(x-7/12)^2 + 0.15\cos(240\pi x), & x < 7/12\\ -1 - \cos(12\pi x) + 0.15 \cos(240 \pi x), & x \ge 7/12\end{cases}. \end{equation*} See Figure~\ref{fig_V_h_pi}. We impose reflecting boundary conditions on the interval $[0,1]$. We will estimate the mean first passage time of $(X_t)_{t \ge 0}$ from $1/2$ to $[119/120,1]$ using the Hill relation~\cite{aristoff2016analysis,hill2005free}. The Hill relation reformulates the mean first passage time as a steady-state average, as we explain below. \begin{figure} \includegraphics[width=13cm] {fig1.eps} \vskip-10pt \caption{Using Algorithm~\ref{alg_opt_bins} to compute the weighted ensemble bins ${\mathcal B}$ when the number of bins is $M = 4$. We use $10^6$ iterations of Algorithm~\ref{alg_opt_bins} with $\alpha = 10^5$. Top left: Potential energy $V$ and bin boundaries when $M = 4$. 
Top right: The vector ${\tilde K}{\tilde h}$ defining the objective function in Algorithm~\ref{alg_opt_bins}, where $\tilde{h}$ is the approximate Poisson solution. Note that ${\tilde K}{\tilde h}$ is nearly constant on each superbasin. Bottom left: (square root of) the vector $\tilde{K}\tilde{h}^2 - (\tilde{K}\tilde{h})^2$ involved in the mutation variance optimization in Algorithm~\ref{alg_opt_allocation}. Bottom right: Approximate steady state distribution $\tilde{\mu}$. All plots have been cropped at $x>0.4$, where the values of $\tilde{K}\tilde{h}^2 - (\tilde{K}\tilde{h})^2$ and $\tilde{\mu}$ are negligible and ${\tilde K}{\tilde h}$ is nearly constant.} \label{fig_V_h_pi} \end{figure} We choose $120$ uniformly spaced microbins between $0$ and $1$, $${\mathcal M}{\mathcal B} = \{[(p-1)/120,p/120]:p=1,\ldots,120\}.$$ (They are not actually disjoint, since adjacent microbins share endpoints, but their interiors do not overlap, so this is unimportant.) The microbins correspond to the {\em basins of attraction of $V$}. A basin of attraction of $V$ is a set of initial conditions $x(0)$ for which $dx(t)/dt = - V'(x(t))$ has a unique long-time limit. The microbins comprise $3$ larger {\em metastable sets}, defined in Section~\ref{sec:notation} above. These larger metastable sets, each composed of many smaller basins of attraction, will be called {\em superbasins}. {\em The microbins do not need to be basins of attraction: they only need to be sufficiently ``small'' to give useful estimates of $K$ and $h$.} We choose microbins in this way to illustrate the qualitative features we expect from Markov state modeling, where the Markov states (or our microbins) are often basins of attraction. In this case, the bins might be clusters of Markov states corresponding to superbasins. \begin{figure} \includegraphics[width=13cm] {fig2.eps} \vskip-10pt \caption{Increasing the value of $M$ in Algorithm~\ref{alg_opt_bins}.
Plotted are weighted ensemble bins ${\mathcal B}$ computed using Algorithm~\ref{alg_opt_bins} with $M = 4,8,12,16$. For each value of $M$, we use $10^6$ iterations of Algorithm~\ref{alg_opt_bins}, with $\alpha$ tuned between $10^5$ and $10^6$. Note that with increasing $M$, additional bins are initially devoted to resolving the energy barrier between the two rightmost superbasins. Since the observable $f$ is the rightmost microbin, this is the most important energy barrier for the bins to resolve. Note that the multiple adjacent small bins for $M = 16$ correspond to the steepest gradients of $\tilde{K}\tilde{h}$ in the top right of Figure~\ref{fig_V_h_pi}. All plots have been cropped at $x>0.4$.} \label{fig_bin_boundaries} \end{figure} \begin{figure} \centering \includegraphics[width=13cm] {fig3.eps} \vskip-10pt \caption{Varying the number $M$ of weighted ensemble bins in Algorithm~\ref{alg_opt_bins}. Left: Weighted ensemble running means $\theta_T$ vs. $T$ for Algorithm~\ref{alg1} with the optimal allocation and binning of Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}, when $M = 4,8,12,16$. Values shown are averages over $10^5$ independent trials. Error bars are $\sigma_T/\sqrt{10^5}$, where $\sigma_T$ are the empirical standard deviations. Right: Scaled empirical standard deviations $\sqrt{T}\times \sigma_T$ vs. $T$ in the same setup. We use $N = 40$ particles, and the bins correspond exactly to the ones in Figure~\ref{fig_bin_boundaries}. More bins is not always better, since with too many bins we return to the direct Monte Carlo regime; see Remark~\ref{rmk_bins}.} \label{fig_vary_bins} \end{figure} The kernel $K$ is defined as follows. First, let $K_{\delta t}$ be an Euler-Maruyama time discretization of~\eqref{ovd_lang} with time step $\delta t = 2 \times 10^{-5}$. 
We introduce a {\em sink} at the target state $[119/120,1]$ that recycles at a {\em source} $x = 1/2$ via \begin{equation*} \bar{K}_{\delta t}(x,dy) = \begin{cases} K_{\delta t}(\frac{1}{2},dy), & x \in [119/120,1] \\ K_{\delta t}(x,dy), & x \notin [119/120,1]\end{cases}. \end{equation*} Then we define $K$ as a $\Delta t$-skeleton of $\bar{K}_{\delta t}$ where $\Delta t = 10\,\delta t$, \begin{equation}\label{between} K = \bar{K}_{\delta t}^{10}. \end{equation} This just means we take $10$ Euler-Maruyama~\cite{kloeden} time steps in the mutation step of Algorithm~\ref{alg1}. The Hill relation~\cite{aristoff2016analysis} shows that, if $(\bar{X}_n)_{n \ge 0}$ is a Markov chain with kernel either ${K}_{\delta t}$ or $\bar{K}_{\delta t}$ and $\bar{\tau} = \inf\{n\ge 0:\bar{X}_n \in [119/120,1]\}$, then \begin{equation*} {\mathbb E}[\bar{\tau}|\bar{X}_0 = 1/2] = \frac{1}{\mu([119/120,1])}, \end{equation*} where $\mu$ is the stationary distribution of $K$. Thus if $\tau = \inf\{t>0:X_t \in [119/120,1]\}$, \begin{equation*} {\mathbb E}[\tau|X_0 = 1/2] \approx \frac{ \delta t}{\mu([119/120,1])}. \end{equation*} By construction $\mu([119/120,1])$ is small (on the order of $10^{-7}$), so it must be estimated with substantial precision to resolve the mean first passage time. We will estimate the mean first passage time from $x = 1/2$ to $x \in [119/120,1]$, the latter being the target state and rightmost microbin. Thus, we define $f$ as the characteristic function of this microbin, $f = \mathbbm{1}_{[119/120,1]}$, so that $$\theta_T \approx \int f\,d\mu = \mu([119/120,1]).$$ Weighted ensemble then estimates the mean first passage time via \begin{equation}\label{MFPT2} \textup{Mean first passage time } = {\mathbb E}[\tau|X_0 = 1/2] \approx \frac{\delta t}{\theta_T}. \end{equation} We illustrate Algorithm~\ref{alg1} combined with Algorithms~\ref{alg_opt_allocation}-\ref{alg_opt_bins} in Figures~\ref{fig_V_h_pi}-\ref{fig_optimization_data}.
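For concreteness, a single mutation step of the recycled chain $\bar{K}_{\delta t}$ can be sketched as follows. This is our own illustration: the reflecting boundary is imposed here by simple clipping, which is one of several possible discretizations of reflection and is not specified by the text above.

```python
import numpy as np

BETA, DT = 5.0, 2e-5            # inverse temperature and Euler-Maruyama step
SOURCE, SINK = 0.5, 119.0 / 120.0

def V_prime(x):
    """Derivative of the two-piece potential V."""
    rough = -0.15 * 240.0 * np.pi * np.sin(240.0 * np.pi * x)
    if x < 7.0 / 12.0:
        return 10.0 * (x - 7.0 / 12.0) + rough
    return 12.0 * np.pi * np.sin(12.0 * np.pi * x) + rough

def step(x, rng):
    """One step of bar{K}_dt: recycle the sink to the source, then take an
    Euler-Maruyama step with the reflecting boundary on [0, 1] via clipping."""
    if x >= SINK:
        x = SOURCE
    y = x - V_prime(x) * DT + np.sqrt(2.0 * DT / BETA) * rng.standard_normal()
    return min(max(y, 0.0), 1.0)
```

Iterating `step` ten times then corresponds to one application of $K = \bar{K}_{\delta t}^{10}$ in~\eqref{between}, i.e. one mutation step of Algorithm~\ref{alg1}.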
For Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}, to construct $\tilde{K}$, $\tilde \mu$, $\tilde{f}$ and $\tilde{h}$ as in Section~\ref{sec:params}, we compute the matrix $\tilde{K}$ using $10^4$ trajectories starting from each microbin's midpoint, and we define $\tilde{f}_{120} = 1$, $\tilde{f}_p = 0$ for $1 \le p \le 119$. The $p$th row of $\tilde{K}$, and the $p$th entries of $\tilde \mu$, $\tilde{f}$ and $\tilde{h}$, correspond to the microbin $[(p-1)/120,p/120]$. In Algorithm~\ref{alg_opt_bins}, to simplify visualization we enforce a condition that the bins must be connected regions. We initialize all our simulations with Algorithm~\ref{alg_initialize} where $N_{\textup{init}} = 120$, $\xi_0^i = (i-1/2)/120$, and $\omega^i = 1/120$. In Figure~\ref{fig_V_h_pi}, we show the bins resulting from Algorithm~\ref{alg_opt_bins}, and plot the terms $\tilde{K}\tilde{h}$ and $\tilde{K}\tilde{h}^2 - (\tilde{K}\tilde{h})^2$ appearing in the optimizations in Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins}, along with the approximate steady state $\tilde{\mu}$. Note that the bins resolve the superbasins of $V$. In Figure~\ref{fig_bin_boundaries}, we explore what happens when the number $M$ of bins increases in Algorithm~\ref{alg_opt_bins}, finding that the bins begin to resolve the regions between superbasins, favoring regions closer to the support of $f$. As $M$ increases, the optimal allocation leads to particles having more children when they are near the dividing surfaces between the superbasins; to see why, compare the particle allocation in Algorithm~\ref{alg_opt_allocation} with the bottom left of Figure~\ref{fig_V_h_pi}. In Figure~\ref{fig_vary_bins}, we illustrate weighted ensemble with the optimized allocation and binning of Algorithms~\ref{alg_opt_allocation} and~\ref{alg_opt_bins} when the number of bins increases from $M = 4$ to $M = 16$.
Observe that $M = 4$ bins is not enough to resolve the regions between the superbasins, but we still get a substantial gain over direct Monte Carlo (compare with Figure~\ref{fig_optimization_data}). With $M = 16$ bins we resolve the regions between superbasins, further reducing the variance. The bins we use are exactly the ones in Figure~\ref{fig_bin_boundaries}. \begin{figure} \includegraphics[width=13cm] {fig4.eps} \vskip-10pt \caption{Comparison of direct Monte Carlo with weighted ensemble, where the bins and/or allocation are optimized, $M = 4$, and $N = 40$. Left: Weighted ensemble running means $\theta_T$ vs. $T$ for Algorithm~\ref{alg1} with the indicated choices for allocation and binning. Values shown are averages over $10^5$ independent trials. Error bars are $\sigma_T/\sqrt{10^5}$, where $\sigma_T$ are the empirical standard deviations. Right: Scaled empirical standard deviations $\sqrt{T}\times \sigma_T$ vs. $T$ in the same setup. By optimized bins, we mean the weighted ensemble bins ${\mathcal B}$ are chosen using Algorithm~\ref{alg_opt_bins} with $M = 4$. These optimized bins are exactly the ones plotted in top left of Figure~\ref{fig_bin_boundaries}. Optimized allocation means the particle allocation follows Algorithm~\ref{alg_opt_allocation}. Uniform bins means that the bins are uniformly spaced on $[0,1]$, while uniform allocation means the particles are distributed uniformly among the occupied bins.} \label{fig_optimization_data} \end{figure} In Figure~\ref{fig_optimization_data}, we compare weighted ensemble with direct Monte Carlo when $M = 4$. Direct Monte Carlo can be seen as a version of Algorithm~\ref{alg1} where each parent always has exactly one child, $C_t^i = 1$ for all $t,i$. For weighted ensemble, we consider optimizing either or both of the bins and the allocation. 
When we do not optimize the bins, we consider {\em uniform bins} ${\mathcal B} = \{[0,40/120],[40/120,80/120],[80/120,1]\}.$ When we do not optimize the allocation, we consider {\em uniform allocation}, where we distribute particles evenly among the occupied bins, $N_t(u) \approx N/\#\{u\in {\mathcal B}:\omega_t(u)>0\}$. Notice the order-of-magnitude reduction in standard deviation compared with direct Monte Carlo for this relatively small number of bins. In Figure~\ref{fig_MFPT}, to illustrate the Hill relation, we plot the weighted ensemble estimates of the mean first passage time from our data in Figures~\ref{fig_vary_bins} and~\ref{fig_optimization_data} against the (numerically) exact value. The weighted ensemble estimates at small $T$ tend to exhibit a ``bias'' because the weighted ensemble has not yet reached steady state. As $T$ grows, this bias vanishes and the weighted ensemble estimates converge to the true mean first passage time. In the simple example in this section, we can directly compute the mean first passage time. For complicated biological problems, however, the first passage time can be so large that it is difficult to directly sample even once. In spite of this, the Hill relation reformulation can lead to useful estimates on much shorter time scales than the mean first passage time \cite{adhikari2019computational,jeremy2,danonline}. Indeed this is the case in Figure~\ref{fig_MFPT}, where we get accurate estimates in a time horizon orders of magnitude smaller than the mean first passage time. This speedup can be attributed in part to the initialization in Algorithm~\ref{alg_initialize}. In general, there can also be substantial speedup from the Hill relation itself, independently of the initial condition~\cite{danonline}. Optimizing the bins and allocation together has the best performance.
We expect that, as in this numerical example, when the number $M$ of bins is relatively small, optimizing the bins can be more important than optimizing the allocation. Optimizing only the allocation may lead to a less dramatic gain when the bins poorly resolve the landscape defined by $h$. On the other hand, for a large enough number $M$ of bins it may be sufficient just to optimize the allocation. We emphasize that {\em more bins is not always better}, since for a fixed number $N$ of particles, with too many bins we end up recovering direct Monte Carlo. See Remark~\ref{rmk_bins} above. \begin{figure} \includegraphics[width=13cm] {fig5.eps} \vskip-10pt \caption{Illustration of the Hill relation for estimating the mean first passage time via~\eqref{MFPT2}. Left: the same data as in Figure~\ref{fig_vary_bins}, but $\theta_T$ is inverted and multiplied by $\delta t$ to estimate the mean first passage time, and error bars are adjusted accordingly. Right: the same data as in Figure~\ref{fig_optimization_data}, but $\theta_T$ is inverted and multiplied by $\delta t$ to estimate the mean first passage time, and error bars are adjusted accordingly. So that the $x$ and $y$ axes have the same units, we consider the ``physical time'' on the $x$-axis, defined as the total number, $T$, of selection steps of Algorithm~\ref{alg1} multiplied by the time, $10\delta t$, between selection steps (see~\eqref{between}). In both plots, the ``exact'' value of the mean first passage time is obtained from $10^5$ independent samples of the first passage time $\bar{\tau}$. As in Figures~\ref{fig_vary_bins} and~\ref{fig_optimization_data}, optimizing the bins and allocation has the best performance, and $M = 16$ bins is better than $M=4,8,12$ bins, though more bins is not always better; see Remark~\ref{rmk_bins}. 
} \label{fig_MFPT} \end{figure} \section{Remarks and future work}\label{sec:remarks} This work presents new procedures for choosing the bins and particle allocation for weighted ensemble, building in part on ideas in~\cite{aristoff2016analysis}. The bins and particle allocation, together with the resampling times, completely characterize the method. Though we do not try to optimize the latter, we argue that there is no significant variance cost associated with taking small resampling times. Optimizing weighted ensemble is worthwhile when the optimized parameter choices lead to significantly more particles in important regions of state space, compared to a naive method. The corresponding gain, represented by the rule of thumb~\eqref{gain}, can be expected to grow as the dimension increases. This is because in high dimension the pathways between metastable sets are narrow compared to the vast state space (see~\cite{vanden2005transition,weinan2010transition} and the references in the Introduction). Though our interest is in steady state calculations, many of our ideas could just as well be applied to a finite time setup. Our practical interest in steady state arises from the computation of mean first passage times via the Hill relation. On the mathematical side, general importance sampling particle methods with the unbiased property of Theorem~\ref{thm_unbiased} are often not appropriate for steady state sampling. (By general methods, we mean methods for {\em nonreversible} Markov chains, or Markov chains with a steady state that is {\em not known up to a normalization factor}~\cite{lelievre2016partial}.) Indeed as we explain in our companion article~\cite{aristoff2019ergodic}, standard sequential Monte Carlo methods can suffer from variance explosion at large times, even for carefully designed resampling steps. The present article could thus be an important advance in that direction. We conclude by discussing some open problems.
Applying the algorithms in this article to complex, high-dimensional problems will require substantial effort and modification of the software in~\cite{westpa}. The aim of this article is to lay the groundwork for such applications. On a more theoretical note, there remain some questions regarding parameter optimization. We could consider adaptive/non-spatial bins instead of fixed bins as in Algorithm~\ref{alg_opt_bins}, for instance bins chosen via $k$-means on the values of $Kh$ of the $N$ particles. Also, we choose the number $M$ of weighted ensemble bins and the number $N$ of particles {\em a priori}. It remains open how to pick the best value of $M$ for a given number, $N$, of particles and fixed computational resources. Using only the variance analysis above, a straightforward answer to this question can probably only be obtained in the large $N$ asymptotic regime. More analysis is then needed, since we generally have small $N$. We could also optimize over both $N$ and $M$, or over $N$, $M$ and a total number $S$ of independent simulations, subject to a fixed computational budget. We leave these questions to future work. \section*{Acknowledgements} D. Aristoff thanks Peter Christman, Tony Leli{\`e}vre, Josselin Garnier, Matthias Rousset, Gabriel Stoltz, and Robert J. Webber for helpful suggestions and discussions. D. Aristoff and D.M. Zuckerman especially thank Jeremy Copperman and Gideon Simpson for many ongoing discussions related to this work. This work was supported by the National Science Foundation via the awards DMS-1818726 and DMS-1522398 (for author D. Aristoff) and by the National Institutes of Health via the award GM115805 (for author D.M. Zuckerman). \section{Appendix}\label{sec:appendix} In this appendix, we describe residual resampling, and we draw a connection between mutation variance minimization and sensitivity.
\subsection{Residual resampling} Recall that in Algorithm~\ref{alg1} we assumed residual resampling is used to get the number of children $(C_t^j)^{j=1,\ldots,N}$ of each particle $\xi_t^j$ at each time $t$. We could also use residual resampling to compute $R_t(u)^{u \in {\mathcal B}}$ in Algorithm~\ref{alg_opt_allocation}. For the reader's convenience, we describe residual resampling in Algorithm~\ref{alg_residual}. Recall $\lfloor x \rfloor$ is the floor function (the greatest integer $\le x$). \begin{algorithm}\caption{Residual resampling} To generate $n$ samples $\{N_i:i \in {\mathcal I}\}$ from a distribution $\{d_i: i \in {\mathcal I}\}$ with $\sum_{i \in {\mathcal I}}d_i = 1$: \begin{itemize}[leftmargin=20pt] \item Define $\delta_i = nd_i - \lfloor nd_i \rfloor$ and let $\delta = \sum_{i \in {\mathcal I}}\delta_i$. \item Sample $\{R_i: i \in {\mathcal I}\}$ from the multinomial distribution with $\delta$ trials and event probabilities $\delta_i/\delta$, $i \in {\mathcal I}$. In detail, \begin{align*} &{\mathbb P}(R_i = r_i, \,i\in {\mathcal I}) = \mathbbm{1}_{\sum_{i \in {\mathcal I}} r_i = \delta} \frac{\delta!}{\prod_{i \in {\mathcal I}} r_i!}\prod_{i \in {\mathcal I}} (\delta_i/\delta)^{r_i}. \end{align*} \item Define $ N_i = \lfloor nd_i\rfloor + R_i$, $i \in {\mathcal I}$. \end{itemize} \label{alg_residual} \end{algorithm} \subsection{Intuition behind mutation variance minimization}\label{sec:int_mut} The goal of this section is to give some intuition for the strategy~\eqref{Ntu}, which chooses the particle allocation to minimize mutation variance, by introducing a connection with sensitivity of the stationary distribution $\mu$ to perturbations of $K$. {\em In this section we make the simplifying assumption that state space is finite, and each point of state space is a microbin.} Thus $K = (K_{pq})_{p,q \in {\mathcal M}{\mathcal B}}$ is a finite stochastic matrix. 
The Poisson solution $h$ defined in~\eqref{h} satisfies $(I-K)h = f - (f^T \mu)\mathbbm{1}$ and $h^T \mu = 0$, or equivalently \begin{equation}\label{poiss} h = \sum_{s=0}^\infty K^s \bar{f}, \qquad \bar{f} = f - (f^T \mu)\mathbbm{1}. \end{equation} where $h = (h_p)_{p \in {\mathcal M}{\mathcal B}}$, $f = (f_p)_{p \in {\mathcal M}{\mathcal B}}$, $\bar{f} = (\bar{f}_p)_{p \in {\mathcal M}{\mathcal B}}$, and $\mu = (\mu_p)_{p \in {\mathcal M}{\mathcal B}}$ are column vectors, and $\mathbbm{1} = \sum_{p \in {\mathcal M}{\mathcal B}} e_p$ is the all ones column vector. Here, $I$ is the identity matrix, and $(e_p)_{p \in {\mathcal M}{\mathcal B}}$ is the column vector with $1$ in the $p$th entry and $0$'s elsewhere. We write $\mu(Q)$ for the stationary distribution of an irreducible stochastic matrix $Q = (Q_{pq})_{p,q \in {\mathcal M}{\mathcal B}}$. More precisely, $\mu(Q)$ denotes a continuously differentiable extension of $Q \mapsto \mu(Q)$ to an open neighborhood of the space of $\#{\mathcal M}{\mathcal B}\times \#{\mathcal M}{\mathcal B}$ irreducible stochastic matrices which satisfies $\mu(Q)^T Q = \mu(Q)^T$ and $\mu(Q)^T \mathbbm{1} = 1$ whenever $Q\mathbbm{1} = \mathbbm{1}$. See~\cite{thiede2015sharp} for details and a proof of existence of this extension. Abusing notation, we still write $\mu$ with no matrix argument to denote the stationary distribution $\mu(K)$ of $K$. \begin{theorem}\label{thm_pert} Let $\lambda(u)>0$ be such that $\sum_{u\in {\mathcal B}}\lambda(u) = 1$. For each $u \in {\mathcal B}$, let $\nu(u) = (\nu(u)_p)_{p \in {\mathcal M}{\mathcal B}}$ be a vector satisfying $\nu(u)_p \ge 0$ for $p \in {\mathcal M}{\mathcal B}$, $\nu(u)_p = 0$ for $p \notin u$, and $\sum_{p \in u}\nu(u)_p = 1$. Let $A(u)$ be a random matrix with the distribution \begin{equation}\label{dist_A} {\mathbb P}\left(A(u) = e_p e_q^T- e_pe_p^T K\right) = \nu(u)_pK_{pq}, \qquad \text{if }p \in u, \,q \in {\mathcal M}{\mathcal B}. 
\end{equation} Let $A(u)^{(n)}$ be independent copies of $A(u)$, for $u \in {\mathcal B}$ and $n = 1,2,\ldots$. Define \begin{equation}\label{BN} B^{(N)} = \sqrt{N}\sum_{u \in {\mathcal B}} \frac{1}{\lfloor N \lambda(u)\rfloor}\sum_{n=1}^{\lfloor N\lambda(u)\rfloor} A(u)^{(n)}. \end{equation} Then \begin{align}\begin{split}\label{sensitivity} &\lim_{N \to \infty} {\mathbb E}\left[\left(\left.\frac{d}{d\epsilon}\mu(K+\epsilon B^{(N)})^T\right|_{\epsilon = 0}f \right)^2\right] \\ &\qquad= \sum_{u \in {\mathcal B}}\lambda(u)^{-1}\sum_{p \in u} \nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]. \end{split} \end{align} \end{theorem} We now interpret Theorem~\ref{thm_pert} from the point of view of particle allocation. Note that $A(u)$ is a centered sample of $K$ obtained by picking a microbin $p \in u$ according to the distribution $\nu(u)$, and then simulating a transition from $p$ via $K$. By a centered sample, we mean that we adjust $A(u)$ by subtracting its mean, so that $A(u)$ has mean zero. The mean of $[(d/d\epsilon)\mu(K+\epsilon A(u))^T|_{\epsilon = 0}f]^2$ measures the sensitivity of $\int f\,d\mu = f^T\mu$ to sampling from bin $u$ according to the distribution $\nu(u)$. Similarly, the mean of $[(d/d\epsilon)\mu(K+\epsilon B^{(N)})^T|_{\epsilon = 0}f]^2$ measures the sensitivity of $\int f\,d\mu$ corresponding to sampling from each bin $u \in {\mathcal B}$ exactly $\lfloor N\lambda(u)\rfloor$ times according to the distributions $\nu(u)$. Appropriately normalizing the latter in the limit $N \to \infty$ leads to~\eqref{sensitivity}. The sensitivity in~\eqref{sensitivity} is minimized over $\lambda(u)^{u \in {\mathcal B}}$ when \begin{equation}\label{Ntu2} N\lambda(u) = \dfrac{N\sqrt{\sum_{p \in u} \nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]}}{\sum_{u \in {\mathcal B}}\sqrt{\sum_{p \in u} \nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]}}.
\end{equation} In light of the discussion above, we think of $N\lambda(u)^{u \in {\mathcal B}}$ in~\eqref{Ntu2} as a particle allocation. Note that this resembles our formula~\eqref{Ntu} for the optimal weighted ensemble particle allocation. We now try to make a connection between~\eqref{Ntu} and~\eqref{Ntu2}. We first consider a simple case. Suppose the bins are equal to the microbins, ${\mathcal M}{\mathcal B} = {\mathcal B}$. Since we assume here that every point in state space is a microbin, this means that every point of state space is also a bin. In particular the distributions $\nu(u)$ are trivial: $\nu(u)_p =1$ whenever $p \in {\mathcal M}{\mathcal B}$ with $p = u$. To make~\eqref{Ntu2} agree with~\eqref{Ntu}, we need $\mu_p = \omega_t(u)$ when $p =u$. Of course this equality does not hold. It is true that $\mu_p \approx \omega_t(u)$ when $p=u$ is a reasonable approximation, {\em but only in the asymptotic where $N,t \to \infty$.} To see why this is so, note that the unbiased property and ergodicity show that $\lim_{t \to \infty} {\mathbb E}[\omega_t(u)] = \mu_p$ when $p=u$; provided an appropriate law of large numbers also holds, $\lim_{t \to \infty} \lim_{N \to \infty} \omega_t(u) = \mu_p$. Recall, though, that we are interested in relatively small $N$, due to the high cost of particle evolution. For the general case, where ${\mathcal M}{\mathcal B} \ne {\mathcal B}$ and bins contain multiple points in state space, we see no direct connection between the allocation formulas~\eqref{Ntu2} and~\eqref{Ntu}. Indeed, to make an analogy between this sensitivity calculation and weighted ensemble, we should have $\nu(u)_p = \sum_{i:\xi_t^i \in p}\omega_t^i /\omega_t(u)$. Or in other words, we should consider perturbations that correspond to sampling from the bins according to particle weights, in accordance with Algorithm~\ref{alg1}. 
But putting $\nu(u)_p = \sum_{i:\xi_t^i \in p}\omega_t^i /\omega_t(u)$ in~\eqref{Ntu2} gives something quite different from~\eqref{Ntu}. So while the sensitivity minimization formula~\eqref{Ntu2} is qualitatively similar to our mutation variance minimization formula~\eqref{Ntu}, the two are not actually the same. \subsection{Proof of Theorem~\ref{thm_pert}} \label{sec:appendix_proof} We begin by noting the following. \begin{equation}\label{b_poisson} \text{If }v^T \mathbbm{1} = 0, \quad \text{then}\quad g^T(I-K) = v^T, \,\,g^T \mathbbm{1} = 0\quad \Longleftrightarrow \quad g^T =\sum_{s=0}^\infty v^TK^s. \end{equation} Like~\eqref{poiss}, this follows from ergodicity of $K$. See for instance~\cite{golub1986using}. \begin{lemma}\label{lem1} Suppose $A$ is a matrix with $A\mathbbm{1} = 0$. Then \begin{equation*} \left.\frac{d}{d\epsilon}\mu(K + \epsilon A)\right|_{\epsilon = 0}f = \mu^T A h. \end{equation*} \end{lemma} \begin{proof} Since $\mu(K+\epsilon A)^T(K+\epsilon A) = \mu(K+\epsilon A)^T$, we have \begin{align*} 0 = \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T(I-K-\epsilon A)\right|_{\epsilon = 0} = \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T\right|_{\epsilon = 0}(I-K) - \mu^T A. \end{align*} Thus \begin{equation}\label{above_1} \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T\right|_{\epsilon = 0}(I-K) =\mu^TA. \end{equation} Moreover since $\mu(Q)^T\mathbbm{1} = 1$ for any stochastic matrix $Q$, \begin{equation}\label{above_2} 0 = \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T\mathbbm{1}\right|_{\epsilon = 0}\\ = \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T\right|_{\epsilon = 0}\mathbbm{1}. 
\end{equation} Now by~\eqref{poiss},~\eqref{b_poisson},~\eqref{above_1}, and~\eqref{above_2}, \begin{align*} \left.\frac{d}{d\epsilon}\mu(K+\epsilon A)^T\right|_{\epsilon = 0}f &= \sum_{s=0}^\infty \mu^TA K^sf \\ &= \mu^T A \sum_{s=0}^\infty K^s\bar{f} = \mu^T A h, \end{align*} where the last line above uses $AK^s \mathbbm{1} = A\mathbbm{1} = 0$ to replace $f$ with $\bar{f} = f - \mu^T f\mathbbm{1}$. \end{proof} \begin{lemma}\label{lem2} Suppose $(A^{(n)})^{n=1,\ldots,N}$ are matrices such that {(i)} $(A^{(n)})^{n=1,\ldots,N}$ are independent over $n$, {(ii)} $A^{(n)}\mathbbm{1} = 0$ for all $n$, and {(iii)} ${\mathbb E}[A^{(n)}] = 0$ for all $n$. Then \begin{align*} {\mathbb E}\left[\left(\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon \sum_{n=1}^N A^{(n)}\right)^T\right|_{\epsilon = 0}f\right)^2\right] = \sum_{n=1}^N {\mathbb E}\left[\left(\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A^{(n)}\right)^T \right|_{\epsilon = 0}f\right)^2\right]. \end{align*} \end{lemma} \begin{proof} Since $\mu$ is continuously differentiable, \begin{equation}\label{disp1} \frac{d}{d\epsilon}\left.\mu\left(K + \epsilon \sum_{n=1}^N A^{(n)}\right)^T\right|_{\epsilon = 0}f = \sum_{n=1}^N \frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A^{(n)}\right)^T \right|_{\epsilon = 0}f. \end{equation} By {\em (ii)-(iii)} and Lemma~\ref{lem1}, $ {\mathbb E}\left[\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A^{(n)}\right)^T \right|_{\epsilon = 0}f\right] = 0$ for all $n$. So by {\em (i)}, \begin{equation}\label{disp2} {\mathbb E}\left[\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A^{(n)}\right)^T \right|_{\epsilon = 0}f\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A^{(m)}\right)^T \right|_{\epsilon = 0}f\right] = 0, \qquad \text{if }n \ne m. \end{equation} The result follows by combining~\eqref{disp1} and~\eqref{disp2}. \end{proof} \begin{lemma}\label{lem3} Let $A(u)$ be a random matrix with the distribution~\eqref{dist_A}. 
Then \begin{equation*} {\mathbb E}\left[\left(\left.\frac{d}{d\epsilon}{\mu}(K + \epsilon A(u))^T\right|_{\epsilon = 0}f\right)^2\right] = \sum_{p \in u} \nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]. \end{equation*} \end{lemma} \begin{proof} Note that $A(u)\mathbbm{1} = ( e_p e_q^T-e_pe_p^T K)\mathbbm{1} = 0$ since $K\mathbbm{1} = \mathbbm{1}$. So by Lemma~\ref{lem1}, \begin{align*} \left.\frac{d}{d\epsilon}\mu\left(K+\epsilon A(u)\right)\right|_{\epsilon = 0}^T f = \mu^T A(u) h. \end{align*} From this and~\eqref{dist_A}, \begin{align*} {\mathbb E}\left[\left(\frac{d}{d\epsilon}\left.{\mu}(K + \epsilon A(u))^T\right|_{\epsilon = 0}f\right)^2\right] &= \sum_{p \in u} \sum_{q \in {\mathcal M}{\mathcal B}}\nu(u)_pK_{pq}\left[\mu^T(e_p e_q^T-e_p e_p^T K)h\right]^2 \\ &= \sum_{p \in u}\nu(u)_p\mu_p^2\sum_{q \in {\mathcal M}{\mathcal B}}K_{pq}\left[(e_q^T-e_p^T K)h\right]^2 \\ &= \sum_{p \in u}\nu(u)_p\mu_p^2\sum_{q \in {\mathcal M}{\mathcal B}} K_{pq} \left[(Kh)_p^2 + h_q^2 - 2(Kh)_p h_q\right] \\ &= \sum_{p \in u}\nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]. \end{align*} \end{proof} We are now ready to prove Theorem~\ref{thm_pert}. \begin{proof}[Proof of Theorem~\ref{thm_pert}] By Lemmas~\ref{lem2} and~\ref{lem3}, \begin{align}\begin{split}\label{lim} &{\mathbb E}\left[\left(\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon B^{(N)}\right)^T \right|_{\epsilon = 0}f\right)^2\right] \\ &= N\sum_{u \in {\mathcal B}}\frac{1}{\lfloor N\lambda(u)\rfloor^2}\sum_{n=1}^{\lfloor N \lambda(u)\rfloor}{\mathbb E}\left[\left(\frac{d}{d\epsilon}\left.\mu\left(K + \epsilon A(u)^{(n)}\right)^T \right|_{\epsilon = 0}f\right)^2 \right] \\ &= \sum_{u \in {\mathcal B}}\frac{N}{\lfloor N\lambda(u)\rfloor}\sum_{p \in u}\nu(u)_p\mu_p^2\left[(Kh^2)_p - (Kh)_p^2\right]. \end{split} \end{align} The result follows by letting $N \to \infty$. \end{proof}
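The identities used above are easy to check numerically. The following sketch is ours (the chain, the observable $f$, the bins, and the distributions $\nu(u)$ are all arbitrary illustrative choices): it builds a small random stochastic matrix, solves the Poisson equation~\eqref{poiss}, verifies the identity of Lemma~\ref{lem3}, and evaluates the allocation fractions from~\eqref{Ntu2}.

```python
import numpy as np

# Toy check of Lemma 3 on a random 5-state chain; everything here is
# illustrative, only the identities being checked come from the text.
rng = np.random.default_rng(1)
n = 5
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)           # a random stochastic matrix
f = rng.random(n)                           # an arbitrary observable

# stationary distribution mu: left eigenvector of K for eigenvalue 1
vals, vecs = np.linalg.eig(K.T)
mu = np.real(vecs[:, np.argmax(np.real(vals))])
mu /= mu.sum()

# Poisson solution: (I - K) h = f - (f^T mu) 1 with h^T mu = 0 (eq. (poiss))
fbar = f - (f @ mu)
h = np.linalg.lstsq(np.eye(n) - K, fbar, rcond=None)[0]
h -= h @ mu                                 # fix the additive constant
Kh, Kh2 = K @ h, K @ h**2

# Lemma 3: E[(mu^T A(u) h)^2] as an explicit finite sum over (p, q),
# using mu^T (e_p e_q^T - e_p e_p^T K) h = mu_p (h_q - (Kh)_p)
u = [0, 1, 2]
nu = {0: 0.2, 1: 0.5, 2: 0.3}
lhs = sum(nu[p] * K[p, q] * (mu[p] * (h[q] - Kh[p]))**2
          for p in u for q in range(n))
rhs = sum(nu[p] * mu[p]**2 * (Kh2[p] - Kh[p]**2) for p in u)
assert np.isclose(lhs, rhs)

# allocation (Ntu2) for bins {0,1,2} and {3,4}, with arbitrary nu(u)
bins = [({0: 0.2, 1: 0.5, 2: 0.3}, [0, 1, 2]), ({3: 0.6, 4: 0.4}, [3, 4])]
terms = np.array([np.sqrt(sum(nv[p] * mu[p]**2 * (Kh2[p] - Kh[p]**2)
                              for p in ps)) for nv, ps in bins])
lam = terms / terms.sum()                   # fractions N*lambda(u) / N
```

The variance terms $(Kh^2)_p - (Kh)_p^2$ are nonnegative by Jensen's inequality, so the square roots in the allocation are always well defined.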
\part*{\centering \large Appendices}\label{app} \section{\normalsize Accuracy of finite difference schemes}\label{Accuracy} In this section, we discuss the accuracy of the numerical methods employed to calculate the various skyrmionic solutions with different topological charge and morphology presented in this work. We compared the results of energy minimization obtained by our method with the second-order finite difference schemes implemented in most open-source software for micromagnetic simulations, such as MuMax3~\cite{MuMax3}. The high-accuracy numerical scheme used in our work is essential for the study of a number of aspects: the stability of the solutions close to blow-up, the energy of skyrmions with extremely large topological charge, etc. Moreover, we provide additional calculations with very high accuracy, reducing the relative error in energy down to 10$^{-6}$. These calculations can be taken as benchmarks and compared with the outputs provided by other methods. Such high accuracy can be achieved only for axisymmetric solutions~\cite{Bogdanov_99}, where the problem can be reduced to an ordinary differential equation. An example of an axisymmetric solution is depicted in Fig.~\ref{sup_Fig_7pi}(a). Furthermore, we show a non-axisymmetric solution of comparable size with a more complex morphology in Fig.~\ref{sup_Fig_7pi}(b). The results obtained with our precise method for axisymmetric solutions can be taken as a reference to define a threshold for more general solutions with lower symmetries. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{sup_fig_7pi_compressed.pdf} \caption{\small Example of two solitons of nearly identical size but different morphology and topological charge: axisymmetric $7\pi$-vortex~\cite{Bogdanov_99} with $Q\!=\!-1$ (a) and skyrmion with $Q\!=\!-12$ (b). Both solutions correspond to the case $u\!=\!0$, $h\!=\!0.65$. Both images are given in the same scale.
The color code is identical to that used in Figs.~1-3 in the main text. } \label{sup_Fig_7pi} \end{figure} The Hamiltonian (1) in the main text can be rewritten in dimensionless units: \begin{equation} \mathcal{E} = \frac{E}{E_0}=\int \Big( \frac{1}{2}\sum_{i}\left( \mathbf{\nabla} n_i \right)^2 + 2\pi\,w(\mathbf{n}) + 4\pi^2\,u\, (1-{n_z}^2) + 4\pi^2\,h\,(1-n_z) \Big)\mathrm{d}\mathtt{x}\mathrm{d}\mathtt{y}, \label{Ham_sup} \end{equation} where $\mathtt{x}=x/L_\mathrm{D}$, $\mathtt{y}=y/L_\mathrm{D}$, and $E_{0}$ and $L_\text{D}$ are defined in the main text. In the case of axisymmetric solitons~\cite{Bogdanov_89,Bogdanov_99}, the solution of the problem~(\ref{Ham_sup}) can be reduced to a second-order nonlinear non-autonomous ordinary differential equation: \begin{equation} \underbrace{ \frac{d^2\theta}{d\rho^2} + \frac{1}{\rho}\frac{d\theta}{d\rho} - \frac{1}{\rho^2}\sin(\theta)\cos(\theta) }_\mathrm{exchange} + \underbrace{ \frac{4\pi}{\rho}\sin(\theta)^2 }_\mathrm{DMI} - \underbrace{ 4 \pi^2 u \sin(2\theta) }_\mathrm{uniax.\,anis.} - \underbrace{ 4 \pi^2 h \sin(\theta) }_\mathrm{Zeeman} = 0, \label{DE_sup} \end{equation} where $\theta$ is the polar angle of the magnetization vector, i.e. $n_z = \cos(\theta)$, and $\rho$ is the radial coordinate. The soliton solutions of equation~(\ref{DE_sup}) are exponentially localized~\cite{BKY}, even in the absence of the DMI contribution~\cite{Kovalev}. The true asymptotic behaviour of such solutions is that of a Macdonald function~\cite{Voronov,Leonov_NJP}: \begin{equation} \theta(\rho) \sim \frac{1}{\sqrt{\rho}} \exp\left( -2\pi\sqrt{2u + h}\,\rho \right) \quad\mathrm{for}\,\,\,\rho\rightarrow\infty. \label{asymp} \end{equation} This exponential decay of the solution renders the error introduced by the finite-size domain simulations negligible. The discretization scheme plays a major role in achieving the required high accuracy.
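As a side note, the decay rate in~(\ref{asymp}) can be read off by linearizing~(\ref{DE_sup}) for small $\theta$; this short derivation is added here for completeness and is not part of the original text.

```latex
% Linearize (DE_sup) for theta -> 0: sin(theta)cos(theta) -> theta,
% sin(2 theta) -> 2 theta, sin(theta) -> theta, while the DMI term is
% O(theta^2) and drops out:
\begin{equation*}
\frac{d^2\theta}{d\rho^2} + \frac{1}{\rho}\frac{d\theta}{d\rho}
- \frac{\theta}{\rho^2} - 4\pi^2\left(2u+h\right)\theta = 0 .
\end{equation*}
% This is the modified Bessel equation of order 1 in z = kappa*rho with
% kappa = 2*pi*sqrt(2u + h); the decaying solution is K_1(kappa*rho), and
% K_1(z) ~ sqrt(pi/(2z)) exp(-z) reproduces (asymp).
```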
We are particularly interested in the behaviour of the error with respect to the size and morphology of the skyrmion texture. A higher accuracy can be achieved for the one-dimensional (1D) problem~(\ref{DE_sup}), since the solution $\theta$ depends only on $\rho$. Contrary to the 2D case, the solution of such a 1D problem does not require considerable computational effort, and high-accuracy results can be obtained on very dense meshes with inter-node distance $\sim\!0.001$ (about 1000 nodes per $L_\mathrm{D}$). Such high-accuracy solutions can then be used as benchmarks to verify the accuracy of other methods. Equation~(\ref{DE_sup}) can be solved numerically by means of explicit integration relying on the Runge-Kutta method~\cite{Bogdanov_99}. Assuming $\theta(0)\!=\!k\pi$, where $k$ is an integer, the proper value of the parameter $\theta^\prime(0)\!=\!d\theta/d\rho |_{\rho=0}$ can be found by the shooting method. Thus, the solution is uniquely ``encoded'' in the single number $\theta^\prime(0)$, and any overshooting/undershooting leads to distortions of the $\theta(\rho)$ profile, which is expected to decrease monotonically to zero as $\rho\!\rightarrow\!\infty$. This property of the explicit integration method can be used to verify the correctness of the results obtained with other methods. In particular, we found the solution of the 1D problem~(\ref{DE_sup}) by the \textbf{unconstrained} Nonlinear Conjugate Gradient (NCG) minimization method for the corresponding Hamiltonian: \begin{equation} \mathcal{E}_\mathrm{1D} = 2\pi\int\displaylimits_0^\infty \left( \frac{1}{2}\left(\frac{d\theta}{d\rho}\right)^2 + \frac{1}{2\rho^2}\sin(\theta)^2 + 2\pi\frac{d\theta}{d\rho} + \frac{\pi}{\rho}\sin(2\theta) + 4\pi^2 u \sin(\theta)^2 + 4\pi^2 h (1 - \cos(\theta)) \right)\rho\,\mathrm{d}\rho. \end{equation} A very large simulation domain, $0\!\leq\!\rho\!\leq\!10$, and a very small inter-node distance, $\Delta\rho\!=\!0.005$, were used.
The values at the boundaries, $\theta(0)$ and $\theta(10)$, were fixed to $k\pi$ and $0$, respectively. A finite-difference scheme of the fourth order of accuracy was designed assuming that $\theta_i$ and its spatial derivative $\theta^\prime_i$ at each node $i$ are independent variables. According to our estimates, the relative error in the energies marked in bold in Table~\ref{tab1} does not exceed $10^{-6}$. For additional verification, we used the value $\theta^\prime_{i=0}$ (the value in the first node, for which $\rho\!=\!0$) from the found solution as an initial input value $\theta^\prime(0)$ for the integration with the fourth-order Runge-Kutta method (RK4). We used the integration step $\Delta\rho\!=\!10^{-7}$ and found during the subsequent shooting procedure that at least the first six significant digits of $\theta^\prime_{i=0}$ are accurate. All calculations for the 1D problem were carried out in double-precision format (64 bits) for floating-point operations. The contributions of the exchange and Dzyaloshinskii-Moriya interaction terms at each $(i,j)$-th node with coordinates $(\mathtt{x},\mathtt{y})\!=\!(i\,\Delta{s},j\,\Delta{s})$ in the 2D mesh are approximated using the values of the unit vector field in eight neighboring nodes with $(\mathtt{x}\!\pm\!\Delta{s},\mathtt{y}\!\pm\!\Delta{s})$ and $(\mathtt{x}\!\pm\!2\Delta{s},\mathtt{y}\!\pm\!2\Delta{s})$, where $\Delta{s}$ is the inter-node distance. The corresponding energy contributions represent the products of the $\mathbf{n}$-vector projections at each node and its eight neighboring nodes multiplied by specific factors, see for instance Ref.~\cite{Buhrandt13} and the Supplementary Materials in Ref.~\cite{Milde}. For testing purposes, we have also implemented in our code the conventional second-order finite difference scheme.
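The RK4 shooting cross-check described above can be sketched as follows. This is our own illustration with much coarser parameters than those quoted in the text: the integration step, the starting radius, the integration range, and the bisection bracket are all choices made here, and the classification of a shot as over- or undershooting follows the monotone-decay property of the profile.

```python
from math import sin, cos, pi

# Shooting sketch for eq. (DE_sup) with u = 0, h = 0.65 and theta(0) = pi.
u, h = 0.0, 0.65

def rhs(rho, th, dth):
    """theta'' from (DE_sup), solved for the second derivative."""
    return (-dth / rho + sin(th) * cos(th) / rho**2
            - 4 * pi * sin(th)**2 / rho
            + 4 * pi**2 * u * sin(2 * th)
            + 4 * pi**2 * h * sin(th))

def classify(a, drho=5e-4, rho_max=2.0):
    """RK4 from theta ~= pi - a*rho0, theta' = -a at a small rho0.
    'over': theta crosses zero; 'under': theta turns back upward."""
    rho, th, dth = 2e-3, pi - a * 2e-3, -a
    while rho < rho_max:
        k1t = dth;                  k1d = rhs(rho, th, dth)
        k2t = dth + drho/2 * k1d;   k2d = rhs(rho + drho/2, th + drho/2 * k1t, k2t)
        k3t = dth + drho/2 * k2d;   k3d = rhs(rho + drho/2, th + drho/2 * k2t, k3t)
        k4t = dth + drho * k3d;     k4d = rhs(rho + drho, th + drho * k3t, k4t)
        th += drho/6 * (k1t + 2*k2t + 2*k3t + k4t)
        dth += drho/6 * (k1d + 2*k2d + 2*k3d + k4d)
        rho += drho
        if th < 0:
            return "over"
        if dth > 0:
            return "under"
    return "under"

lo, hi = 3.0, 6.0                    # bracket for a = -theta'(0)
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if classify(mid) == "over" else (mid, hi)
print(-0.5 * (lo + hi))              # compare theta'(0) = -4.553561 in Table 1
```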
For direct energy minimization, we used the \textbf{constrained} NCG algorithm, where the constraint $\mathbf{n}^2\!=\!1$ is naturally satisfied owing to the use of an atlas for the manifold corresponding to the space of the order parameter. The manifold itself represents a two-dimensional sphere $\mathbb{S}_{\mathrm{spin}}^2$, while the atlas is composed of two coordinate charts, each of which corresponds to a stereographic projection from one of the two poles of the sphere. Here, we refer to this advanced numerical scheme as ``Atlas''. Conceptually, such a scheme is similar to the idea of describing the macrospin in the frame of stereographic projections with the ability to switch between projections from both poles, presented in Ref.~\cite{Horley}. The key feature of the ``Atlas'' scheme is that each individual spin is defined in one of the two coordinate charts independently of the other spins. A more detailed description of the method and of the criteria for switching between charts for individual spins can be found in the Supplementary Materials of Ref.~\cite{Rybakov_15}. Note that most of the floating-point operations in our code have been implemented in single-precision format (32 bits). This allows us to reach high performance on GPUs. In Table~\ref{tab1}, we present a comparison of the results obtained with MuMax3, where a second-order finite-difference scheme is implemented~\cite{MuMax3} (the script is provided in an ancillary file \href{https://arxiv.org/src/1806.00782v2/anc/mumax3-script.mx3}{\underline{mumax3-script.mx3}}), different implementations of the Atlas method with the second-order scheme (see ``Atlas\,$\backslash$\,2'' columns with $\Delta{s}\!=\!1/52$ and a denser mesh with $\Delta{s}\!=\!1/104$), the fourth-order finite-difference scheme (see ``Atlas\,$\backslash$\,4'' column), and the one-dimensional approach (see ``1D\,$\backslash$\,4'' column).
It is seen that the method chosen in the current work provides the best accuracy, with an error lower than $0.04\%$ both for $Q\!=\!-1$ skyrmions and for more complex textures. In contrast, the second-order finite-difference scheme widely used in micromagnetic software shows a significant error of about $8\%$ for the same relatively dense mesh. Doubling the mesh density in each dimension reduces this error only by a factor of four, as expected for a second-order scheme. A simple estimate suggests that the mesh density would have to be increased about fourteen times more in each dimension to provide an accuracy comparable to that of our fourth-order scheme. Finally, it is worth mentioning that, for textures with a characteristic size of $\sim\!10L_\mathrm{D}$ (see for instance Fig.~\ref{sup_Fig_7pi}), the second-order discretization scheme on meshes with $\Delta{s}\!\sim\!0.01L_\mathrm{D}$ yields an absolute error in the energy higher than the value of $E_0$. Therefore, for a quantitative analysis of such and larger textures, the second-order discretization scheme becomes unreliable.
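The convergence rates quoted above are easy to reproduce on a model problem. The sketch below is ours: it applies the standard five-point fourth-order stencil, which uses the same $\pm\Delta{s}$, $\pm2\Delta{s}$ neighbours as the scheme described in the text (the specific 2D factors of the actual code are given in Refs.~\cite{Buhrandt13,Milde}), and checks that halving the grid spacing reduces the error by about $2^4$.

```python
import numpy as np

def d2_fourth_order(f, ds):
    """Fourth-order central difference for f'' on a periodic grid, using
    neighbours at +-ds and +-2*ds:
    f''_i ~= (-f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2}) / (12 ds^2)
    """
    return (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f
            + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12 * ds**2)

def max_error(n):
    """Max error of the stencil for f = sin on [0, 2*pi) with n nodes."""
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.abs(d2_fourth_order(np.sin(x), x[1] - x[0]) + np.sin(x)).max()

e1, e2 = max_error(64), max_error(128)
print(e1 / e2)    # close to 2**4 = 16 for a fourth-order scheme
```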
\begin{table} \centering \caption{\small The energies of several solitons calculated by different methods.} \label{tab1} \resizebox{\columnwidth}{!}{ \begin{tabular}{ |c|r|c|c|c|c|c|c|c|c| } \hline \multicolumn{4}{|c}{} & \multicolumn{5}{|c|}{Energy, $\mathcal{E}$} & \multicolumn{1}{c|}{} \\ \hline texture type & $Q$ & $u$ & $h$ & MuMax3\,$\backslash$\,2 & Atlas\,$\backslash$\,2 & Atlas\,$\backslash$\,2 & Atlas\,$\backslash$\,4 & 1D\,$\backslash$\,4 & $\theta^\prime(0)$ \\ & & & & $\Delta{s}=1/52$ & $1/52$ & $1/104$ & $1/52$ & $1/200$ & \\ \hline axisymmetric skyrmion & -1 & 0 & 0.65 & -3.522 & -3.527 & -3.556 & -3.565 & \textbf{-3.56497} & -4.553561 \\ \hline $2\pi$-vortex (skyrmionium) & 0 & 0 & 0.65 & 6.476 & 6.472 & 6.322 & 6.274 & \textbf{6.27244} & -2.424450 \\ \hline $7\pi$-vortex & -1 & 0 & 0.65 & 55.45 & 55.46 & 53.63 & 53.02 & \textbf{53.0071} & -8.847806 \\ \hline skyrmion & +1 & 0 & 0.65 & 17.85 & 17.79 & 17.59 & 17.52 & not applicable & not applicable\\ \hline skyrmion & -3 & 0 & 0.65 & 6.466 & 6.462 & 5.808 & 5.595 & not applicable & not applicable \\ \hline skyrmion & -12 & 0 & 0.65 & -12.54 & -12.54 & -13.91 & -14.37 & not applicable & not applicable \\ \hline axisymmetric skyrmion & -1 & 0.65 & 0.3 & 0.673 & 0.673 & 0.6341 & 0.6220 & \textbf{0.621763} & -3.012659 \\ \hline axisymmetric skyrmion & -1 & 0.65 & 0.4 & 2.666 & 2.666 & 2.628 & 2.617 & \textbf{2.61621} & -4.561866 \\ \hline \end{tabular} } \end{table} \section{\normalsize Real-time simulations}\label{Video} One of the key features implemented in our code is the graphical user interface with an interactive regime, allowing \textit{in situ} control of the magnetic configurations as well as an easy way to construct a large variety of initial states. In particular, in the interactive regime, one can flip the spins inside a certain area under the mouse pointer.
This option provides an efficient approach for the construction of complex initial configurations composed of domains with the magnetization pointing either up or down. After a certain number of iterations of the energy minimization routine, the initial configuration converges to a nearby energy minimum. Besides the calculation of standard termination criteria~\cite{Gill_textbook}, one can also perform an \textit{in situ} examination of the stability by introducing small excitations and perturbations to the simulated spin texture. In order to emphasize the isomorphism of systems with different Lifshitz invariants, we prepared three distinct movies illustrating the cases of C$_{\mathrm{nv}}$ (\href{http://www.youtube.com/watch?v=LOiDfXhGalw}{\underline{movie~1}}), D$_{2\mathrm{d}}$ (\href{http://www.youtube.com/watch?v=qo75nEE0N7Q}{\underline{movie~2}}) and D$_{\mathrm{n}}$ (\href{http://www.youtube.com/watch?v=Nf2Nd7KduAk}{\underline{movie~3}}) symmetries. \section{\normalsize Big and extremely big skyrmions}\label{Big} In the case of $|Q|\!\gg\!1$, the kernel of the skyrmion (its major internal part) consists of tightly packed cores representing $\pi$-vortices. The shell of such heavy skyrmions, which represents a $\pi$- or $2\pi$-domain wall for positive and negative $Q$, respectively, occupies a relatively small area along the outer perimeter, see Fig.~\ref{sup_Fig_zoo}. As the number of cores, $N_\textrm{cores}$, increases, the structure of the skyrmion kernel becomes more regular, while the area occupied by the kernel increases proportionally to $N_\textrm{cores}$. As a result, the energy of the skyrmion kernel tends to be proportional to $N_\textrm{cores}$, while the boundary contribution is proportional to the perimeter of the skyrmion and thus to $\sqrt{N_\textrm{cores}}$.
Thus, the asymptotic behaviour of the energy of the skyrmions with increasing $|Q|$ should have the following form: \begin{equation} \frac{E_\textrm{aspt}}{E_0}= \begin{cases} \alpha_{(-)}|Q| + \beta_{(-)}\sqrt{|Q|} \quad &(Q\ll{-1}),\\ \alpha_{(+)}(Q+1) + \beta_{(+)}\sqrt{Q+1} \quad &(Q\gg{1}), \end{cases} \label{aspt} \end{equation} where $\alpha_{(\pm)}$, $\beta_{(\pm)}$ are constants that depend only on $u$ and $h$. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{sup_fig_big_zoo_compressed.pdf} \caption{\small Morphology of stable chiral skyrmions with high topological charges in the case of a magnetic field applied perpendicular to the plane, $h\!=\!0.65$, and zero magnetocrystalline anisotropy, $u\!=\!0$. Note that the scale differs between panels. } \label{sup_Fig_zoo} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{sup_fig_big_energy.pdf} \caption{\small The energy of skyrmions, $E$, as a function of topological charge $Q$ for: $Q\!\ll\!-1$ (a), $-25\!\leq\!Q\!\leq\!25$ (b) and $Q\!\gg\!1$ (c). The solid curves are fits by Eq.~(\ref{aspt}) to the points marked with empty circles in (a) and (c). The dashed lines in (a) and (c) are linear fits to the same points. The solid circles in (a), (c) and in the corresponding insets were not taken into account in the fitting process; they are shown to illustrate the high quality of the fit obtained with Eq.~(\ref{aspt}) and the deviation of $E(Q)$ at $|Q|\!\gg\!1$ from a linear dependence. } \label{sup_Fig_E_on_Q} \end{figure} For a careful verification of~(\ref{aspt}), we first calculated ten skyrmions (five for negative $Q$ and five for positive $Q$) with relatively high topological charges in the range $100\!\leq\!|Q|\!\leq\!300$ (see empty circles in Figs.~\ref{sup_Fig_E_on_Q}(a) and \ref{sup_Fig_E_on_Q}(c)).
Then, assuming that such values of $|Q|$ are sufficiently large for the energies to lie close to the asymptote, we fitted them with~(\ref{aspt}) and obtained the following fitting parameters: $\alpha_{(-)}=-3.552$, $\beta_{(-)}=7.663$, $\alpha_{(+)}=9.885$, $\beta_{(+)}=2.308$. The dependencies corresponding to~(\ref{aspt}) are represented as solid curves in Figs.~\ref{sup_Fig_E_on_Q}(a)-\ref{sup_Fig_E_on_Q}(c). Finally, in order to verify the expected asymptotic behavior, we calculated the energies of skyrmions with extremely high $|Q|$. As seen from Figs.~\ref{sup_Fig_E_on_Q}(a) and \ref{sup_Fig_E_on_Q}(c), the agreement is excellent. To emphasize the deviation of $E(Q)$ from a linear dependence, we also fitted the same points with the linear form $E\!\approx\!c_1|Q|\!+\!c_2$, see the dashed lines in Figs.~\ref{sup_Fig_E_on_Q}(a) and \ref{sup_Fig_E_on_Q}(c). Despite the fact that for large $|Q|$ the cores form a triangular lattice, the corresponding unit cell differs from that of the skyrmion lattice phase, also known as a \textit{skyrmion crystal}. In particular, the inter-skyrmion distance of an equilibrium skyrmion lattice differs from the inter-core distance found in the kernels of big skyrmions. In the case of an equilibrium skyrmion lattice, the particles are packed in such a way that the average energy density is minimized, while the number of particles is assumed to be unlimited in an infinite space. In contrast, the packing in the kernel of a big skyrmion minimizes the total energy for a fixed number of cores inside a domain of limited size. In the case of negative $Q$, as $|Q|$ increases, the pressure inside the sack decreases together with the curvature of the shell. Thus, the stress of the internal lattice should tend to zero as $Q\!\rightarrow\!-\infty$.
In this limiting case, the lattice can be regarded, to a first approximation, as a set of individual non-interacting $Q\!=\!-1$ skyrmions, which implies that $\alpha_{(-)}$ equals $E_{Q=-1}/E_0$. Our calculation gives $E_{Q=-1}/E_0\!=\!-3.565$ [Table~\ref{tab1}]. The corresponding discrepancy is only $0.4\%$, mostly due to the fact that the coefficients of the asymptote were obtained at finite values of $|Q|$. For the case of uniaxial anisotropy ($u\!=\!0.65$, $h\!=\!0.3$), following the same procedure, we found $\alpha_{(-)}=0.627$; the corresponding energy is $E_{Q=-1}/E_0\!=\!0.622$ [Table~\ref{tab1}]. \section{\normalsize Skyrmions in lattice models}\label{Lattice} \subsection{\normalsize Chiral ferromagnet} \begin{figure}[ht] \minipage{0.45\textwidth} \centering \includegraphics[width=1.0\textwidth]{sup_fig_latt_1.pdf} \caption{\small Two energetically equivalent states for a skyrmion with $Q\!=\!-2$ on a square lattice for the case of zero magnetocrystalline anisotropy, $K_u\!=\!0$, $\mu_\mathrm{s}B_\mathrm{ext}\!=\!0.25$. } \label{sup_Fig_D1} \endminipage\hfill \minipage{0.45\textwidth} \centering \includegraphics[width=1.0\textwidth]{sup_fig_latt_2.pdf} \caption{\small Two energetically equivalent states for a skyrmion with $Q\!=\!1$ on a square lattice for the case of no external field, $K_u\!=\!0.45$, $\mu_\mathrm{s}B_\mathrm{ext}\!=\!0$. } \label{sup_Fig_D2} \endminipage \end{figure} The results presented in this work, which employ a high-accuracy method for the quantitative analysis of continuum solutions, remain valid in the discrete limit of classical spins on a lattice. In addition, our results are validated by the discrete approach for systems where the continuum model (1) is unsuitable.
For illustration, we consider a standard spin lattice model of a chiral magnet~\cite{Sergienko,Han}: \begin{equation} \mathcal{H}\!= - J \sum_{\left\langle ij\right\rangle, i>j } \mathbf{n}_i \cdot \mathbf{n}_j - \sum_{\left\langle ij\right\rangle, i>j } \mathbf{D}_{ij} \cdot [\mathbf{n}_i\! \times\! \mathbf{n}_j] - K_\mathrm{u} \sum_{i} {n}^2_{i,\mathrm{z}} - \mu_\mathrm{s} \mathbf{B}_\mathrm{ext}\!\cdot\!\sum_{i}\mathbf{n}_i, \label{Ham_latt} \end{equation} where $J$ is the exchange coupling constant and $\mu_\mathrm{s}$ is the magnetic moment of each spin. The unit vector $\mathbf{n}_i$ defines the orientation of the spin at site $i$. The notation $\left\langle ij\right\rangle$, $i\!>\!j$ denotes that the summation runs over each nearest-neighbour pair once. We assume that each Dzyaloshinskii-Moriya pseudovector $\mathbf{D}_{ij}$ is perpendicular to the bond between sites $i$ and $j$ and lies in the $(xy)$-plane. The modulus $D\!=\!|\mathbf{D}_{ij}|$ is assumed to be the same for all interacting pairs of spins. For definiteness, we consider the case of a 2D square lattice with lattice constant $a$; however, the results presented below remain valid for other lattice symmetries as well. The dominant interaction in the system is the ferromagnetic exchange, $0\!<\!D\!<\!J$. In the absence of uniaxial anisotropy and external magnetic field ($K_\mathrm{u}\!=\!0$, $B_\mathrm{ext}\!=\!0$), the ground state of~(\ref{Ham_latt}) is a spin spiral with period $L\!=\!2\pi{a}/\mathrm{arctan}(D/J)$~\cite{Han}. For $J\!\gg\!D$ (and therefore $L\!\gg\!a$), the continuum limit (1) can be considered a valid approximation to the lattice Hamiltonian~(\ref{Ham_latt}) with $\mathcal{A}\!=\!J/(2a)$, $\mathcal{D}\!=\!D/a^2$; the corresponding helix period is $L_\mathrm{D}\!=\!2\pi{a}J/D$. However, for $J\!\gtrsim\!D$ the continuum approach (1), representing a second-order Taylor expansion of the lattice model~(\ref{Ham_latt}), becomes invalid.
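The mismatch between the lattice period $L$ and its continuum estimate $L_\mathrm{D}$ can be checked directly from the two formulas above; the sketch below uses nothing beyond them (the function names are ours). Since $\arctan(x)\!<\!x$ for $x\!>\!0$, the continuum expression always underestimates the lattice period.

```python
import math

def lattice_period(J, D, a=1.0):
    """Spin-spiral period of the lattice model: L = 2*pi*a / arctan(D/J)."""
    return 2 * math.pi * a / math.atan(D / J)

def continuum_period(J, D, a=1.0):
    """Helix period in the continuum limit: L_D = 2*pi*a*J / D."""
    return 2 * math.pi * a * J / D

J, D = 1.0, 0.6  # the ratio used in our simulations
L = lattice_period(J, D)
L_D = continuum_period(J, D)
# Relative underestimate of the true lattice period by the continuum formula
rel = (L - L_D) / L
print(f"L = {L:.3f} a, L_D = {L_D:.3f} a, underestimate = {rel:.1%}")
```

For $D\!=\!0.6J$ this yields a relative underestimate of roughly $10\%$, consistent with the discussion in the text.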
For example, for $D\!=\!0.6J$ the continuum expression underestimates the period by about $10\%$. Thus, for such ratios between $J$ and $D$, lattice effects are relevant. For our simulations, we used $J\!=\!1.0$, $D\!=\!0.6$. An important feature of skyrmions in the lattice model is the discrete degeneracy of the solutions, meaning that certain in-plane directions are preferable for the alignment of the texture. In Figures~\ref{sup_Fig_D1} and~\ref{sup_Fig_D2}, we illustrate two possible skyrmion configurations for $Q\!=\!-2$ and $Q\!=\!1$, respectively. The degree of degeneracy depends on the symmetry of the crystal lattice and on the morphology of the skyrmion spin texture. Note that the continuum model (1) is spatially isotropic, so the energy of a skyrmion does not depend on the orientation of its texture. For the calculation of the topological charge on a discrete lattice, we used the approach suggested in Ref.~\cite{Berg}. \subsection{\normalsize Chiral antiferromagnet} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{sup_fig_latt_anti_compressed.pdf} \caption{\small The emergence of antiferromagnetic skyrmions with different topological charges as a result of full energy minimization starting from a random spin distribution. (a) The initial, entirely random spin distribution with zero net magnetization, (b) the spin configuration after complete minimization and (c) a zoomed perspective view of the area marked by the dashed square in (b). Calculations have been performed on a domain of $256\!\times\!256$ spins with periodic boundary conditions in the plane. The texture with $Q\!=\!0$ (antiferromagnetic skyrmionium) has been recently discussed in~\cite{Bhukta}. } \label{sup_Fig_anti} \end{figure} The spin-orbit interaction in antiferromagnets plays a similar role in the stabilization mechanism of skyrmions as in ferromagnetic media.
Furthermore, the most realistic cases of a two-sublattice chiral antiferromagnet can be described within the same effective model as a chiral ferromagnet~\cite{AntiferroMain, Bogdanov_PRB_2002}. For the simulation of antiferromagnets, we used the spin lattice Hamiltonian~(\ref{Ham_latt}) with $J\!=\!-1$, $D\!=\!0.4$, $K_u\!=\!0.22$ and $B_\mathrm{ext}\!=\!0$. In the corresponding phase diagram of a 2D chiral antiferromagnet, these parameters belong to the domain of robust stability of antiferromagnetic skyrmions~\cite{Bessarab_anti}. For an antiferromagnet in the absence of an external magnetic field, the net magnetization vanishes. Therefore, it seems natural to use a configuration that meets this criterion as the initial state. Random spin distributions represent an inexhaustible set of initial states with zero net magnetization. It turns out that such a simple initial guess regularly leads to the appearance of skyrmions with various $Q$ after direct energy minimization, see Fig.~\ref{sup_Fig_anti} and \href{http://www.youtube.com/watch?v=xK_BSP8GX3c}{\underline{movie~4}}. The topological charge of an antiferromagnetic skyrmion can be calculated for either of the two sublattices, taking into account its polarity (the net magnetization of the sublattice). Because of the opposite polarities of the sublattices, the \textit{superimposed} or \textit{combined} winding number in this case always vanishes, $-Q\!+\!Q\!=\!0$. An important consequence of this vanishing winding number is the cancellation of the so-called Magnus force -- the force acting on a topological magnetic soliton interacting with a spin-polarized electric current~\cite{Zhang_anti,Barker_anti}. Note that the cancellation of the Magnus force is expected for any antiferromagnetic skyrmion irrespective of its topological charge $Q$. This feature of the topological charge in an antiferromagnet can be illustrated using Fig.~\ref{sup_Fig_anti} as follows.
Let us consider the skyrmion with $Q\!=\!+3$ and the three skyrmions with $Q\!=\!-1$ nearby. The total topological charge of these four textures is zero, which means that they can be merged and subsequently transformed into the ground state while preserving continuity in each of the sublattices. \part*{\centering \normalsize Acknowledgments} The authors thank Cyrill~B. Muratov, Egor Babaev, Stavros Komineas, Juba Bouaziz and Christof Melcher for useful discussions of the results. The work of F.\,N.\,R. was supported by the Swedish Research Council Grant No. 642-2013-7837, by the G\"{o}ran Gustafsson Foundation for Research in Natural Sciences and Medicine and by the ``Roland Gustafssons Stiftelse f\"{o}r teoretisk fysik''. The work of N.\,S.\,K. was supported by Deutsche Forschungsgemeinschaft (DFG) via SPP 2137 ``Skyrmionics'' Grant No. KI 2078/1-1. \part*{\centering \normalsize References} \renewcommand{\section}[2]{}
\section{Introduction} Conventional Web archives preserve publicly available content from the live Web. Some Web archives allow users to submit URIs to be individually preserved or used as seeds for an archival crawl. However, some content on the live Web may be inaccessible (e.g., beyond a crawler's capability compared to a live Web browser) or inappropriate (e.g., requiring a specific user's credentials) for these crawlers and systems to preserve. For this reason, and enabled by the recent influx of personal Web archiving tools such as WARCreate, WAIL, and Webrecorder.io, individuals are preserving live Web content, and personal Web archives are proliferating \cite{marshall-rethinking}. Personal and private captures, or mementos, of the Web, particularly those preserving content that requires authentication on the live Web, have potential privacy ramifications if shared or made publicly replayable after being preserved \cite{Marshall:2012:IAS:2232817.2232819}. Given these privacy issues, strategically regulating access to personal and private mementos would allow individuals to preserve, replay, and collaborate in personal Web archiving endeavors. Adding personal Web archives with privacy considerations to the aggregate view of the ``Web as it was'' will provide a more comprehensive picture of the Web while mitigating privacy violations.
This work has four primary contributions to Web archiving: \begin{flushleft} \textbf{Archival Query Precedence and Short-circuiting}: Allow querying of individual archives or subsets of an aggregated set in a defined order, with the series halting if a condition is met (Section~\ref{sec:precedence}).\\ \textbf{TimeMap/Link Enrichment}: Provide additional, more descriptive attributes for \mbox{URI-Ms} for more efficient querying and interaction (Section~\ref{sec:additionalTimeMapAttributes}).\\ \textbf{Multi-dimensional user-driven content negotiation of archives}: Increase user involvement in requests for \mbox{URI-Ms} in both temporal and other dimensions (Sections~\ref{sec:mma} and \ref{sec:negotitationDimensions}).\\ \textbf{Public/Private Web Archive Aggregation}: Introduce additional special handling of access to private Web archives for Memento aggregation using OAuth (Section~\ref{sec:auth}). \end{flushleft} \begin{figure} \captionsetup[subfigure]{justification=centering} \centering \subfloat[Local Archive capture of \url{facebook.com}][Local Archive capture\\ of \url{facebook.com}]{ \includegraphics[width=0.45\linewidth]{fb_me} \label{fig:fb_me} }\hspace{1.0em} \subfloat[Internet Archive capture of \url{facebook.com}][Internet Archive capture\\of \url{facebook.com}]{ \includegraphics[width=0.45\linewidth]{fb_ia} \label{fig:fb_ia} }\\ \subfloat[Private content on the live Web that is extremely time-sensitive to preserve for future access.]{ \includegraphics[width=1.0\linewidth]{hii-180-annotated} \label{fig:hii} } \caption{Personalized and private Web pages.} \label{fig:fb} \end{figure} \subsection{Solutions Beyond Institutions} Personal Web archives may contain captures with personally identifiable information, such as a time-sensitive statement verification Web page (Figure~\ref{fig:hii}) or a user's \texttt{facebook.com} feed (Figure~\ref{fig:fb_me}).
A user may want to selectively share their \url{facebook.com} mementos \cite{1555440} but wish to also regulate access to them \cite{Marshall:2014:AAF:2740769.2740772}. Without the ability to authenticate as a user on the live Web, many public Web archives simply preserve the \url{facebook.com} login page (Figure~\ref{fig:fb_ia}). Both captures are representative of \texttt{facebook.com}, and they may have even been captured at the same time. Users may be hesitant to share their mementos of \url{facebook.com} (or other personal or private Web pages) without a mechanism to ensure that the Web page as the user experienced it is faithfully captured and that access to those captures can be regulated. As a counterpoint, an individual's personal Web archive is more susceptible to disappearing without an institution's backing. Maintaining backups of archived content is unwieldy, requires diligence or automation, and is still at the mercy of hardware failures. While distributed propagation of the captures to other places may ameliorate this issue, another privacy issue remains: distributed content may be sensitive and must be handled differently at the level of access. To observe the more representative picture of ``what I saw on the Web of the past'' inclusive of private Web archive captures, we could give precedence to private Web archives over public Web archives when aggregating. For example, temporally aggregating my friends' captures (potentially residing in multiple private Web archives) with preserved Facebook login pages (Figure~\ref{fig:fb_ia}) from public Web archives (which are rightly not responsible for preserving my Facebook feed) may not be desirable. Instead, a user may want to instruct the aggregator to only aggregate mementos from archives with certain characteristics, e.g., a set of private Web archives, and only if no personal captures are found, look to the public Web archives for captures (Figure~\ref{fig:precedence}).
This sort of runtime specification of archival query precedence does not currently exist. Today's Memento aggregators can only query a static set of public Web archives specified at configuration time (Figure~\ref{fig:ma}). Because more personal and private, non-institutionally backed Web archives are being created, and these archives may contain sensitive data that cannot be shared without special handling, more work must be done to address the impermanence of personal Web archives with consideration for their contents. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{prpu12} \caption{Archival precedence using the private-first then public Web archive querying model (Pr$^+$Pu$^+$).} \label{fig:precedence} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{lanl} \caption{Conventional Memento aggregators query a set of public Web archives in an equally-weighted querying model without precedence or short-circuiting.} \label{fig:ma} \end{figure} \subsection{Enrichment of Archival Aggregates} We provide amendments to the semantics of Memento TimeMaps to encourage aggregation of mementos from more archives while still allowing for the distinction between conventional and enriched captures with additional metadata. We introduce additional \textit{mementities} (a portmanteau of ``Memento'' and ``entity'')\footnote{Used for distinction from the term ``entity'' as defined in the now-deprecated RFC 2616 describing HTTP/1.1.} for accessing various types of Web archives. The use of mementities could enable negotiation in additional dimensions beyond time, systematic aggregation of private captures, regulated access control to Web archives that may contain personal or private mementos, etc.
\begin{figure*}[t] \begin{lstlisting}
!context ["https://oduwsdl.github.io/contexts/memento"]
!id {"uri": "http://localhost:1208/timemap/cdxj/http://facebook.com"}
!keys ["memento_datetime_YYYYMMDDhhmmss"]
!meta {"original_uri": "http://facebook.com"}
!meta {"timegate_uri": "http://localhost:1208/timegate/http://facebook.com"}
!meta {"timemap_uri": {"link_format": "http://localhost:1208/timemap/link/http://facebook.com", "json_format": "http://localhost:1208/timemap/json/http://facebook.com", "cdxj_format": "http://localhost:1208/timemap/cdxj/http://facebook.com"}}
19981212013921 {"uri": "http://archive.is/19981212013921/http://facebook.com/", "rel": "first memento", "datetime": "Sat, 12 Dec 1998 01:39:21 GMT"}
19981212013921 {"uri": "http://web.archive.org/web/19981212013921/http://facebook.com/", "rel": "memento", "datetime": "Sat, 12 Dec 1998 01:39:21 GMT"}
19981212024839 {"uri": "http://web.archive.org/web/19981212024839/http://www.facebook.com/", "rel": "memento", "datetime": "Sat, 12 Dec 1998 02:48:39 GMT"}
...
20170330231113 {"uri": "http://web.archive.org/web/20170330231113/http://www.facebook.com/", "rel": "memento", "datetime": "Thu, 30 Mar 2017 23:11:13 GMT"}
20170331013527 {"uri": "http://web.archive.org/web/20170331013527/https://www.facebook.com/", "rel": "last memento", "datetime": "Fri, 31 Mar 2017 01:35:27 GMT"}
\end{lstlisting} \caption{An abbreviated CDXJ TimeMap from MemGator for \url{facebook.com}.} \label{fig:cdxj} \end{figure*} In this work (Section~\ref{sec:additionalTimeMapAttributes}) we introduce three new types of attributes for richer TimeMaps: \textbf{content-based attributes} based on data obtained when a \mbox{URI-M} is dereferenced, \textbf{derived attributes} requiring further analysis beyond dereferencing but useful for evaluating capture quality, and \textbf{access attributes} that guide users and software as to the requirements needed to dereference mementos in private and personal Web archives as well as other archives with access restrictions.
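As a minimal illustration of how the CDXJ serialization in Figure~\ref{fig:cdxj} can be consumed, the sketch below parses TimeMap lines into metadata and memento entries. The helper name and the treatment of metadata keys are our own assumptions for illustration, not part of the CDXJ specification.

```python
import json

def parse_cdxj_timemap(lines):
    """Parse CDXJ TimeMap lines into (metadata, mementos).

    Lines beginning with '!' carry metadata; every other line consists of a
    sortable key (here a 14-digit Memento-Datetime) followed by a JSON block.
    """
    meta, mementos = {}, []
    for line in lines:
        line = line.strip()
        if not line or line == "...":  # skip blanks and elision markers
            continue
        key, _, payload = line.partition(" ")
        if key.startswith("!"):
            meta.setdefault(key.lstrip("!"), []).append(json.loads(payload))
        else:
            entry = json.loads(payload)
            entry["key"] = key
            mementos.append(entry)
    return meta, mementos

# Two lines taken from the figure above
sample = [
    '!meta {"original_uri": "http://facebook.com"}',
    '19981212013921 {"uri": "http://archive.is/19981212013921/http://facebook.com/", '
    '"rel": "first memento", "datetime": "Sat, 12 Dec 1998 01:39:21 GMT"}',
]
meta, mementos = parse_cdxj_timemap(sample)
print(mementos[0]["uri"])
```

Because the JSON payload is opaque to the line splitter, the proposed content-based, derived, and access attributes can be added to each entry without changing the parsing logic.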
Through this TimeMap enrichment, a user will be able to be selective in the set of archived URIs (\mbox{URI-Ms}) returned in a TimeMap through a set of attributes beyond time and original URI (\mbox{URI-R}). This will allow the user to interact with the Memento aggregator to specify a custom subset and/or supplement the existing supporting archives in the aggregated result returned. Conventional Memento aggregation \cite{sanderson-global} (Figure~\ref{fig:ma}) is accomplished by a user requesting captures of a \mbox{URI-R} from a remote endpoint. The software receiving the request relays it to a set of Web archives with which it is configured. Once the Web archives return responses containing their captures, the aggregator temporally sorts the \mbox{URI-Ms}, adds additional Memento metadata, and returns the aggregated TimeMap to the user. In this work we also describe a cascading hierarchical relationship between the aggregators and the mementities involved in aggregation and negotiation. We introduce a ``Meta-Aggregation'' concept (Section~\ref{sec:mma}) to allow for a recursive relationship of one aggregator onto another, ``building up'' an aggregate result, potentially including supplemental information in the aggregation. Section~\ref{sec:archivalSelection_mma} describes multiple scenarios where supplementing the memento aggregation using a meta-aggregator mementity would be useful. This hierarchy may also include other mementities, such as one to regulate access to private Web archives (identified with a \mbox{URI-P}) and another, a StarGate, to allow for selective negotiation (Section~\ref{sec:stargate}), e.g., allowing the client to request that only results from private Web archives are returned from an aggregator.
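The conventional aggregation flow just described (relay, collect, temporally sort, annotate) can be sketched without the HTTP machinery; the per-archive responses below are hypothetical stand-ins for what real archives would return for a single \mbox{URI-R}:

```python
from datetime import datetime

FMT = "%a, %d %b %Y %H:%M:%S GMT"  # datetime format used in TimeMaps

def aggregate_timemaps(archive_results):
    """Temporally sort URI-Ms from several archives and add the first/last
    Memento relations, mirroring the conventional aggregation flow."""
    mementos = sorted(
        (m for results in archive_results for m in results),
        key=lambda m: datetime.strptime(m["datetime"], FMT),
    )
    for m in mementos:
        m["rel"] = "memento"
    if len(mementos) == 1:
        mementos[0]["rel"] = "first last memento"
    elif mementos:
        mementos[0]["rel"] = "first memento"
        mementos[-1]["rel"] = "last memento"
    return mementos

# Hypothetical responses from two archives (URI-Ms from Figure captions above)
archive_a = [{"uri": "http://web.archive.org/web/19981212024839/http://www.facebook.com/",
              "datetime": "Sat, 12 Dec 1998 02:48:39 GMT"}]
archive_b = [{"uri": "http://archive.is/19981212013921/http://facebook.com/",
              "datetime": "Sat, 12 Dec 1998 01:39:21 GMT"}]
timemap = aggregate_timemaps([archive_a, archive_b])
```

A real aggregator would issue the archive requests concurrently and serialize the result as Link, JSON, or CDXJ; the enrichment proposed in this work amounts to attaching further attributes to each memento dictionary before serialization.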
\section{Background and Related Work} \label{sec:bg} \label{sec:relatedWork} \subsection{Archiving and Linked Data} The Memento Framework \cite{rfc7089} provides the constructs to interact with Web archives in the temporal dimension. An archival capture (memento) is identified by a \mbox{URI-M}. Memento aggregation allows identifiers for mementos (\mbox{URI-Ms}) from multiple Web archives to be temporally sorted using the parameters of the original live Web URI (\mbox{URI-R}) and a timestamp of the capture (Memento-Datetime). Memento TimeMaps contain a listing of \mbox{URI-Ms} for mementos of an original resource on the live Web. TimeMaps also include contextual information such as the \mbox{URI-R} that the TimeMap represents, URIs for a Memento mementity to handle temporal content negotiation (a TimeGate, i.e., \mbox{URI-Gs}), and identifiers for other TimeMaps (\mbox{URI-Ts}). Memento TimeMaps conventionally follow and extend the Web Linking specification \cite{rfc8288}. The syntax of the Link format applies both to information expressed in HTTP headers and to information supplied in a TimeMap listing. Because of this, only a limited set of attributes about \mbox{URI-Ms} is allowed within a TimeMap, including \texttt{rel} and \texttt{datetime}. Additional information about a \mbox{URI-M} would be useful if present in a TimeMap. For example, knowing the HTTP status code of the dereferenced \mbox{URI-M} would reduce the amount of time needed to determine unique captures in the archive \cite{kelly-jcdl2017}. Extending TimeMaps may also provide the facility for the integration of private and public Web archives. Alam et al. \cite{salam-cdxj} defined the CDXJ format, an extension of the conventional CDX \cite{cdx} archival indexing format, as an extensible means of associating additional attributes with \mbox{URI-Ms}.
CDX files serve as indexes for Web archive files and contain many fields, such as the MIME type, status code, and content digest of the memento, which are not present in TimeMaps. MemGator \cite{memgator} is an open source Memento aggregator that supports CDXJ TimeMaps (example in Figure~\ref{fig:cdxj}) along with conventional Link-formatted and JSON-formatted TimeMaps. In this work, we adapt the MemGator code to handle additional HTTP request parameters supplied by a client and to produce TimeMaps with the additional proposed attributes. While the dimensions of \mbox{URI-R} and datetime are sufficient for the aggregation of public Web archives, additional parameters are required to express the need for privacy considerations or further steps to be executed to dereference the \mbox{URI-M}. Beyond the ability to express the distinction between private and public mementos, it may not make sense to request public Web archive captures from an aggregator under a variety of conditions. Examples where expressions to distinguish captures are needed include isolation by URI (explicit exclusion of public captures from results) and archival precedence (e.g., only checking for captures of facebook.com in public Web archives when none exist in my own). The `profile' link relation type \cite{rfc6906} (discussed in Section~\ref{sec:precedence}) provides a standard set of semantics for processing a resource representation. We leverage and extend these semantics to allow a user to request mementos with certain properties from Web archives. In earlier work, we developed WARCreate \cite{kelly-jcdl12}, a Google Chrome browser extension, to allow a user to capture content from their browser, even pages behind authentication, into the standard Web ARChive (WARC) format. We also re-packaged institutional-grade Web archiving tools in WAIL \cite{berlin-wail}, a native desktop application, to allow individuals to preserve, replay, and retain complete control of their captures.
More recently, the Webrecorder.io service allows similar capabilities, including allowing the user to capture content behind authentication. But unlike WARCreate and WAIL, Webrecorder relies on the user's credentials being proxied through the service, a potentially undesirable feature with privacy ramifications. \subsection{Privacy and Security} The Snowden Archive-in-a-Box project \cite{snowden} is an autonomous version of the Snowden Digital Surveillance Archive. The project uses a Raspberry Pi single-board computer along with other hardware and a data set containing files leaked by Edward Snowden to allow browsing of the files without a user being surveilled. This use case highlights access as being the problematic factor beyond the base case of the content being sensitive. When aggregating captures from a Snowden archive with captures from other archives, requesters may wish to prevent requests from propagating to other archives via the aggregator (by specifying a privateOnly profile) or to only consult other archives when no results are returned from their instance (request precedence, both discussed in Section~\ref{sec:precedence}). While little research has been performed on the aggregation of private and public captures, multiple surveys have been performed by a variety of researchers on users' perspectives on private Web archives. In particular, Marshall and Shipman \cite{Marshall:2012:IAS:2232817.2232819} surveyed Web users on potential efforts by institutions to preserve their private Web contents. They particularly highlighted the need for exploration of who retains control of access to private content once it is preserved and made available. The OAuth 2.0 Authorization Framework \cite{rfc6749} (usage discussed in Section~\ref{sec:accessAttributes}) provides a model for tokenization that we apply for persistent access to private Web archives.
Using this model requires a secondary authorization server, implemented in this research as an additional mementity, to decouple the authentication burden from the archive. This model, however, requires the archive to be aware of this additional mementity acting as a gateway to the archive. Cushman and Kreymer \cite{ilya-hacker} performed an extensive review of the security of Web archives in the context of both preservation and replay. Through technical examples of how an attacker might capture private resources, they provided approaches for mitigating each sort of attack. In related work, Brunelle et al. \cite{brunelle-dlib2016} described issues with private Web archiving on an organizational scale. Through analyzing the results of an archival crawler instance, they identified content that should not have been accessible to the crawler, which required wholesale removal of the WARCs containing the information for lack of a method of selective removal. \subsection{Memento and HTTP Mechanics} Rosenthal \cite{dshr-aggregators} emphasized that temporal order may not be optimal for TimeMaps returned from Memento aggregators. He stated that aggregators need to develop ways of estimating the usefulness of preserved content and conveying these estimates to readers. In a different work, Rosenthal \cite{dshr-importance} described the behavior of aggregators returning ``Soft 403s'' consisting of captures of login pages when the user likely expected the content that was originally behind authentication. Rosenthal \cite{dshr-importance} also described a ``hints list'' that an aggregator might provide based on its own experience of requesting content from archives. In the same work, Rosenthal alluded to a hypothetical mechanism by which the aggregator filters content like login pages from the results and redirects a user to a version of the TimeMap containing only captures that are not a login page. Jones et al.
\cite{sjones-raw1, preferHeader-wsdlBlog} discussed obtaining the ``raw mementos'' consisting of un-rewritten links in captures in a systematic way using the HTTP Link response header. By utilizing the HTTP Prefer request header \cite{rfc7240}, a user would be able to obtain a version of the memento as it appeared at the time of capture instead of a version with relative links rewritten by the archive to point back within the archive rather than to the live Web. An archive, in response and to confirm compliance with the request, would return the requested original version of the memento along with the HTTP Preference-Applied response header. The HTTP Prefer header \cite{rfc7240} provides an explicit means for a client to express preferences for optional aspects of an HTTP request. Van de Sompel et al. \cite{preferHeader-wsdlBlog} highlighted that the Prefer header could be used by Web archives to allow clients to request the unaltered or un-rewritten content. Rosenthal \cite{preferHeader-dshr} echoed Van de Sompel et al. by suggesting a list of transformations (screenshot, altered-dom, etc.) for a memento via a new HTTP header. This work focuses on the transformation of TimeMaps, not the mementos themselves. The rewriting problem in previous work pertains to the replay of \mbox{URI-Ms}, whereas what we provide is more expressive metadata about mementos, available prior to dereferencing \mbox{URI-Ms} and intended to mitigate issues with dereferencing them. A goal of this work is to further involve the client in the aggregation process. Interaction with the aggregators through these sorts of mechanisms will be a first step in accomplishing this. Fielding and Reschke \cite{rfc7231} defined proactive and reactive content negotiation as, respectively, negotiation determined by the server as a best guess based on metadata, and a model involving communication and selection among representations.
The latter may be accomplished in a variety of ways, inclusive of the HTTP \texttt{300 Multiple Choices} and \texttt{406 Not Acceptable} status codes as well as the less commonly implemented HTTP Alternates header and Transparent Content Negotiation \cite{rfc2295}. As we anticipate generating derivative TimeMaps consisting of any number of permutations of additional attributes applied to the mementos, it would be useful to enumerate these variants and allow users to choose the one they prefer using these status codes and HTTP transaction patterns. In previous work \cite{kelly-msthesis}, we highlighted an issue of URI-collision in the realm of personal Web archives wherein (for example) both a login page and the authenticated content of a live Web application may reside at the same \mbox{URI-R} (Figure~\ref{fig:fb}). We \cite{kelly-dlib2013} extended this work by identifying personalized representations of mementos and providing a mechanism to navigate between additional dimensions beyond time. As personal Web archives proliferate and are at some point aggregated into multi-archive TimeMaps (cf. a TimeMap from, and containing only listings from, the archive itself), it would be useful to distinguish \mbox{URI-Ms} that represent personalized mementos, mementos that were originally behind authentication, and mementos in personal Web archives that require additional considerations and mechanisms to access. \section{Archive Query Precedence and Short-Circuiting} \label{sec:precedence} Private Web archives have an inherent characteristic whereby exposing metadata about an archive's contents could be sufficient to identify those contents.
For example, a private archive responding with a TimeMap containing \mbox{URI-Ms} for captures of my online bank statement would reveal where I am banking as well as where I am preserving personal banking information. To mitigate the unnecessary revelation of potentially personal information, a client who has set up a Memento aggregator with access to their private Web archive may wish to have requests sent to public Web archives only if no results are returned from their private Web archive. Figure~\ref{fig:precedence} illustrates requests being first sent to the private archives and then to public Web archives. It may also be desirable, however, to allow this type of behavior to functionally coexist with conventional pipelined asynchronous archive querying. As with the Snowden Archive-in-a-Box example in Section~\ref{sec:bg}, checking for the existence of captures of this content in other archives may imply interest in or association with the subject matter, in some cases itself being revealing or even incriminating. To maintain privacy, a Memento aggregator with access to the Snowden archive would require special handling of requests for resources that might be contained in that archive. For example, a user may want requests for a certain \mbox{URI-R} to be sent only to the Snowden archive or their own personal Web archive, and not to other public or private Web archives. We propose two initial approaches to accomplish this: explicit specification by a client at the time of request and analysis of mementos with a potentially personalized representation. For the latter, we \cite{kelly-dlib2013} identified three methods for identifying personalized representations. One method that was proposed but not investigated further (we opted for one of the others) was to specify additional environment variables when selecting a representation of a resource. The downside, we mentioned, was the requirement of a specialized client.
The specialized ``client'' in this case may be the mementity responsible for determining the degree of personalization of the representation, i.e., the StarGate. When aggregating and replaying a \mbox{URI-R} over time from a set of archives consisting of captures from both public and private Web archives, it may be desirable to first check for private captures prior to requesting \mbox{URI-Ms} from public Web archives (Figure~\ref{fig:precedence}). For example, in aggregating \mbox{URI-Ms} for \texttt{facebook.com} that include mementos of my news feed from my private archive and unauthenticated login pages from institutional public Web archives (Figure~\ref{fig:fb}), the latter is less useful in observing how the page has changed over time. To maintain the relevancy of the desired sort of representation, we may want to check for the existence of captures from private Web archives \textit{first} and then, only if none are present, resort to requesting the captures consisting of a login page. This model of precedence (request priority) and short-circuiting (stop requesting captures if a condition is met) via Memento aggregators does not currently exist but could be critical for users to express what they expect from an aggregator beyond simply mementos for a \mbox{URI-R}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{mmaPuPr} \caption{PrivateOnly and PublicOnly aggregation in an MMA.} \label{fig:mma_prpu} \end{figure} In the basic model below, we express various access precedence models (henceforth \textit{profiles}) based on a boolean categorization of private and public Web archives. In each profile, order is significant and thus a simple regular expression can be used, where $P_u$ symbolizes a public Web archive endpoint, $P_r$ a private Web archive endpoint, and the ``+'' superscript indicates one or more consecutive instances.
\vspace{2.3em} \begin{equation}\label{eqn:noArchives}noArchives \rightarrow \varnothing \rightarrow \{\}\end{equation} \begin{equation}\label{eqn:publicOnly}publicOnly \rightarrow {P_u}^+\end{equation} \begin{equation}\label{eqn:privateOnly}privateOnly \rightarrow {P_r}^+\end{equation} \begin{equation}\label{eqn:privateFirst}privateFirst \rightarrow {P_r}^+{P_u}^+\end{equation} \begin{equation}\label{eqn:publicFirst}publicFirst \rightarrow {P_u}^+{P_r}^+\end{equation} The basic profiles pair with the syntax of the \texttt{profile} relation type \cite{rfc6906}, allowing clients to request resulting TimeMaps containing \mbox{URI-Ms} from a subset of the archives that the Memento mementity queries (Figure~\ref{fig:mma_prpu}). The preliminary scheme for short-circuiting subsequent requests is also boolean; e.g., when the \texttt{privateFirst} profile (Equation~\ref{eqn:privateFirst}) is specified by the client, requests should only be made to public Web archives when no identifiers for captures are returned from private archives. This model also assumes, for simplicity, that the sets $P_u$ and $P_r$ are disjoint ($P_u \cap P_r = \varnothing$), which may not be the case in reality. For Web archives that contain both private and public captures, an approach toward achieving mutual exclusivity could be to separate the private and public \mbox{URI-Rs} into an abstraction of separate collections.
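The precedence profiles above, including the \texttt{privateFirst} short-circuit, can be sketched as follows. This is a minimal illustration, not the prototype's implementation; the \texttt{fetch} callable and archive identifiers are hypothetical placeholders.

```python
# Sketch of the access precedence profiles with privateFirst
# short-circuiting. Archive identifiers and the fetch function are
# hypothetical placeholders, not part of any existing aggregator API.

def aggregate(uri_r, private, public, profile, fetch):
    """Return URI-Ms for uri_r per the requested access precedence profile."""
    if profile == "noArchives":
        return []
    if profile == "publicOnly":
        ordered = list(public)
    elif profile == "privateOnly":
        ordered = list(private)
    elif profile == "privateFirst":
        ordered = list(private)  # public archives consulted only if needed
    elif profile == "publicFirst":
        ordered = list(public) + list(private)
    else:
        raise ValueError(f"unknown profile: {profile}")

    results = [m for archive in ordered for m in fetch(archive, uri_r)]
    # Short-circuit: fall through to public archives only when the
    # private archives returned no capture identifiers.
    if profile == "privateFirst" and not results:
        results = [m for archive in public for m in fetch(archive, uri_r)]
    return results
```

Under \texttt{privateFirst}, a query that yields no private captures proceeds to the public archives, while a non-empty private result halts further querying, so the \mbox{URI-R} is never revealed to public endpoints.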
For example, as discussed earlier, the UK Web Archive contains captures from its legal deposit with restricted off-site access; that is, a user cannot access the mementos unless physically on location at the library (Figure~\ref{fig:bl_451_curl}). \begin{figure} \begin{mdframed}[leftmargin=1em,rightmargin=1em] \begin{flushleft}{\ttfamily\scriptsize\color{black} \$ curl -I https://www.webarchive.org.uk/wayback/archive/*/http://www.example.org\\ \textbf{\color{red}HTTP/1.1 451 Unavailable For Legal Reasons}\\ Date: Wed, 25 Oct 2017 04:39:35 GMT\\ Server: Apache-Coyote/1.1\\ Content-Type: text/html;charset=utf-8\\ Transfer-Encoding: chunked\\ Set-Cookie: JSESSIONID=823BD09DF8DD489087763640A8150023; Path=\/; HttpOnly\\ Content-Language: en\\ } \end{flushleft} \end{mdframed} \caption{Accessing a \mbox{URI-M} at UKWA using curl returns an HTTP 451 status code.} \label{fig:bl_451_curl} \end{figure} \section{Additional TimeMap Attributes} \label{sec:additionalTimeMapAttributes} Aggregating private and public mementos requires the ability to distinguish captures that require special handling. To accomplish this, and to provide the ability for TimeMaps to be more descriptive of the \mbox{URI-Ms} they contain, we extend the TimeMap syntax and semantics to allow additional attributes. In this work we introduce three new classes of attributes for richer TimeMaps: \textbf{content-based attributes} based on data obtained when dereferencing, \textbf{derived attributes} requiring further analysis beyond dereferencing but useful for evaluating capture quality, and \textbf{access attributes} that guide users and software as to the requirements needed to dereference mementos in private, personal, and access-restricted archives. In this work we focus primarily on the access attributes but define the other classes of attributes for future extensibility.
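As a rough illustration of how a client might consume such enriched records, the following sketch parses one extended CDXJ line and buckets its extra attributes into the three classes. The attribute names (\texttt{status\_code}, \texttt{damage}, \texttt{access}) follow the hypothetical example record explored later in this work, not an established standard.

```python
import json

# Sketch: parse an extended CDXJ TimeMap line and bucket non-core
# attributes into the three classes (content-based, derived, access).
# The attribute vocabulary here is illustrative, not standardized.

CONTENT_BASED = {"status_code", "content_type", "last_modified"}
DERIVED = {"damage"}
ACCESS = {"access"}

def parse_cdxj_line(line):
    """Split a CDXJ record into its sort key (datetime) and JSON payload."""
    key, _, payload = line.partition(" ")
    return key, json.loads(payload)

def classify(attrs):
    """Group non-core attributes by class for downstream handling."""
    buckets = {"content-based": {}, "derived": {}, "access": {}}
    for name, value in attrs.items():
        if name in CONTENT_BASED:
            buckets["content-based"][name] = value
        elif name in DERIVED:
            buckets["derived"][name] = value
        elif name in ACCESS:
            buckets["access"][name] = value
    return buckets

line = ('19981212013921 {"uri": "http://localhost:8080/20101116060516/'
        'http://facebook.com/", "status_code": 200, "damage": 0.24}')
key, record = parse_cdxj_line(line)
buckets = classify(record)
```

A client aware of only the core Memento fields can simply ignore the buckets it does not understand, which is what makes the extension backward-compatible.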
\subsection{Content-based Attributes} \label{sec:contentBasedAttributes} In previous work \cite{kelly-jcdl2017}, we highlighted that nearly 85\% of the \mbox{URI-Ms} in a TimeMap for \texttt{google.com} produce HTTP redirects when dereferenced. Determining how many mementos actually exist in an archive for a \mbox{URI-R} is thereby impossible from a TimeMap alone. Enriching a TimeMap with information about the dereferenced capture would improve methods for determining how well (in quantity) a \mbox{URI-R} has been captured. HTTP data obtained when dereferencing a \mbox{URI-M}, such as the status code \cite{rfc7231}, content type \cite{rfc7231}, and Last-Modified \cite{ainsworth-hypertext2015}, would be useful. For some of these attributes (like the aforementioned), the data may exist in the archival indexes, typically formatted as CDX. However, while many Web archives expose a Memento endpoint, few make these additional content-based attributes about the captures available through a CDX server\footnote{Internet Archive does currently expose a CDX endpoint with limited fields at \protect\url{http://web.archive.org/cdx/search/cdx?url=example.com}}. Thus, once these attributes are discovered by dereferencing, they may be retained and expressed with the assumption that they are an accurate account of the archival record. \subsection{Derived Attributes} \label{sec:derivedAttributes} Other attributes of a memento may require calculation to obtain, which can be computationally and temporally expensive when performed at archival or even \mbox{URI-R} scale. This section briefly describes one such example of a derived attribute: Memento Damage calculation. Adding the ability for this derived attribute to be present in a TimeMap would allow for more efficient evaluation of memento quality. Brunelle et al. \cite{brunelle-ijdl2015-damage} developed a metric for determining the quality of a capture (cf.
quantity in Section~\ref{sec:contentBasedAttributes}) when dereferencing a \mbox{URI-M}, with a particular focus on the quantitative significance of missing embedded resources. Determining ``Memento Damage'' requires calculation beyond simple counting, as not all resources are equally weighted, particularly when absent. Having this information calculated and present in a TimeMap would allow a user to select the best or most complete \mbox{URI-M} without needing to iterate through all \mbox{URI-Ms}. \subsection{Access Attributes} \label{sec:accessAttributes} An impetus for this research is integrating private and personal Web archives through aggregation via Memento TimeMaps. CDXJ allows additional attributes to be specified and considered when a \mbox{URI-M} is dereferenced. Figure~\ref{fig:cdxj_new} shows how a token may be stored in an enriched CDXJ TimeMap where the authentication procedure is discoverable at runtime. We utilize OAuth 2.0 \cite{rfc6749} for authorization when dereferencing \mbox{URI-Ms}, with this field providing a token for persistent access to private mementos. Access control may be needed in cases where private and personal Web archives are aggregated with public Web archives via TimeMaps. An authentication procedure and subsequent tokenization will allow persistent access using a token derived from authenticating. Figure~\ref{fig:cdxj_new} shows a token being attributed on a per-\mbox{URI-M} basis, though a single token may be applied to all \mbox{URI-Ms} returned from an archive. In this preliminary model, the responsibility for attributing the token to an individual memento or set of mementos may lie in either the archive itself or the aggregator. \subsection{Sources of Derivatives} CDXJ allows metadata fields (lines beginning with \texttt{!meta}) about the TimeMap to precede the listing of captures.
Figure~\ref{fig:cdxj} contains metadata fields within a CDXJ TimeMap that are typically also found in a Link-formatted TimeMap, e.g., the \mbox{URI-R} for the original resource, TimeGates, and other related TimeMaps. With the introduction of derived attributes (Section~\ref{sec:derivedAttributes}), it is critical not just to give context as to the semantics of new attributes like ``damage'' but also to provide guidance in generating their values. Figure~\ref{fig:cdxj_new} provides an example where a derived attribute requiring calculation (memento damage \cite{brunelle-ijdl2015-damage}) and an access attribute are defined for guidance within the TimeMap. Definitions in the \texttt{!context} utilize a URI to associate the semantics of an attribute for a memento. We are currently exploring further syntax for more expressive attributes in CDXJ TimeMaps. \begin{figure} \begin{lstlisting} !context ["https://oduwsdl.github.io/contexts/memento", "https://oduwsdl.github.io/contexts/damage", "https://oduwsdl.github.io/contexts/access"] !id { "uri": "http://localhost:1208/timemap/cdxj/http://facebook.com"} !meta {"...": "..."} 19981212013921 { "uri": "http://localhost:8080/20101116060516/http://facebook.com/", "rel": "memento", "datetime": "Tue, 16 Nov 2010 06:05:16 GMT", "status_code": 200, "damage": 0.24, "access": { "type": "Blake2b", "token": "c6ed419e74907d220c6647ef0a3a88a41..." } } \end{lstlisting} \caption{An amended CDXJ record for a private capture of \texttt{facebook.com}. Line breaks added for readability.} \label{fig:cdxj_new} \end{figure} \section{Memento Meta-Aggregator} \label{sec:mma} In this work we extend the role of a Memento aggregator to possess additional capabilities when interacting with Web archive users, Web archives, other Memento aggregators, and other mementities.
MemGator \cite{memgator} allows a user to host a Memento aggregator at a location of their choosing (inclusive of the user's local machine) and to configure a set of Web archives to query when starting the software. While conventional, remotely located aggregators assume that all Web archives queried are publicly accessible, a locally hosted, customizable aggregator may interface with archives that have restricted access. For example, a MemGator instance may request mementos from a Web archive that is only accessible on the user's local area network or co-hosted on the machine on which the aggregator resides. Such non-public archives, however, are treated and aggregated agnostically, without further consideration of their holdings. In this section we describe a mementity that accounts for the shortcomings of conventional Memento aggregators while also extending their standard functionality and Memento interfaces. A Memento Meta-Aggregator (MMA) serves as a functional superset of a conventional Memento Aggregator (MA). A conventional MA provides access through identifiers to mementos (\mbox{URI-Ms}), TimeGates (\mbox{URI-Gs}), and TimeMaps (\mbox{URI-Ts}) from a set of Web archives. An MMA provides the ability to both supplement (Figure~\ref{fig:mmaHierarchy}) and selectively filter the results returned from an MA with \mbox{URI-Ms} from additional Web archives, at the request of the user or as configured with the MMA (Figure~\ref{fig:mmaScenario}). In a proof-of-concept of this work, we build upon the open source MemGator to introduce these additional roles, outside the scope of the Memento aggregator mementity type, to define the MMA mementity type. Figure~\ref{fig:mmaHierarchy} describes a sample hierarchical relationship of mementities consisting of MMAs, MAs, and Web archives (WAs).
When MA$_1$ receives a request for \mbox{URI-Ms} for a \mbox{URI-R}, for instance, the request is relayed to WA$_1$, WA$_2$, and WA$_3$ for the sets of mementos \{$a_1m_1, a_1m_2$\}, \{$a_2m_1, a_2m_2, a_2m_3$\}, and \{$a_3m_1, a_3m_2$\}, respectively. MA$_1$ is then responsible for combining and temporally sorting the \mbox{URI-Ms} and then returning the aggregated TimeMap to the requesting user (or mementity). The temporal ordering within an archive corresponds to the second index ($m$) for convenience in the figure; however, this ordering may not hold between archives. For example, $a_2m_2$ is older than $a_3m_1$ per the temporal ordering diagram on the right side of the figure. The ordering of the mementos contained within the configured archives, as requested from various mementities, is displayed in the bottom portion of the figure. This figure also shows examples of an MMA obtaining results from multiple MAs (e.g., MMA$_\alpha$ from MA$_1$ and MA$_2$) and even MMAs referring to other MMAs for their results when queried (e.g., MMA$_\gamma$ referring to MA$_1$, WA$_5$, and MMA$_\beta$, with the latter referring to WA$_7$ and WA$_8$). The configuration of MMA$_\beta$ is similar to the relationship of MMA$_{Carol}$ to MMA$_{Alice}$ in Figure~\ref{fig:mmaScenario}, where a user may configure an MMA to both refer to a custom set of sources for results as well as reuse the in-place selective filtering of those sources. In this case, MMA$_{Carol}$ would inherit the restriction of MMA$_{Alice}$ of not sending requests for mementos of \url{http://alicesembarrassingphotos.net/vacation.html} to Bob's archive.
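The combine-and-sort step an MA or MMA performs can be sketched as below. The record layout (a 14-digit datetime string paired with a \mbox{URI-M}) is a simplification for illustration; deduplication here corresponds to the ``UNIQUE'' consolidation discussed later in this section.

```python
# Sketch: merge URI-M listings from several sources by Memento-Datetime
# and drop duplicate URI-Ms. The (datetime, uri_m) tuple layout is an
# illustrative simplification of a TimeMap record.

from datetime import datetime

def merge_timemaps(*sources):
    """Merge per-archive (datetime, uri_m) listings into one sorted list."""
    seen = set()
    merged = []
    for record in sorted(
        (r for source in sources for r in source),
        key=lambda r: datetime.strptime(r[0], "%Y%m%d%H%M%S"),
    ):
        if record[1] not in seen:  # consolidate duplicate URI-Ms
            seen.add(record[1])
            merged.append(record)
    return merged

wa1 = [("20101116060516", "http://archive1/m1")]
wa2 = [("20091101000000", "http://archive2/m1"),
       ("20111231235959", "http://archive2/m2")]
combined = merge_timemaps(wa1, wa2)
```

Note that sorting is global across sources, which is exactly why the within-archive index ordering in the figure need not hold between archives.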
\begin{figure} \footnotesize \begin{mdframed} \centering \begin{tabular}{c c c c c c} \multicolumn{2}{c}{A = Alice's archive} & \multicolumn{2}{c}{B = Bob's archive} & \multicolumn{2}{c}{C = Carol's archive} \\ \multicolumn{3}{c}{I = Internet Archive} & \multicolumn{3}{c}{R = \mbox{URI-R} } \\ \multicolumn{6}{c}{MMA$_X$ = Set of archives sourced for \textit{X}'s MMA for R}\\ \multicolumn{6}{c}{MA = Memento aggregator at \url{mementoweb.org}} \end{tabular} \end{mdframed} \begin{tabular}{l l} \text{MMA$_{Alice}$=} & $\begin{cases} \{A,B,C\},&\text{``facebook.com''} \in \text{R}\\ \{A,C\},&\text{``alicesembarrassingphotos.net/vacation.html''} \in \text{R}\\ \{A,B,C,I\},& \text{\textit{otherwise}} \end{cases}$\\ \text{MMA$_{Bob}$=} & $\begin{cases} \{B,A\} \end{cases}$\\ \text{MMA$_{Carol}$=}& $\begin{cases} \{C\},&\text{``carolsembarrassingphotos.net''} \in \text{R}\\ \{\text{MMA$_{Alice},MA$}\},& \text{\textit{otherwise}} \end{cases}$ \end{tabular} \caption{A Memento Meta-Aggregator is configured to perform selective aggregation.} \label{fig:mmaScenario} \end{figure} The other Web archives whose results are aggregated with those from an MA may be public non-aggregated Memento-compliant Web archives or private Web archives. We note that a conventional MA is not required to be present to use an MMA, because the aggregation of a static set of public Web archives may be performed by an MMA in a black-box manner, as if the MMA were identically configured with the same archives as the MA. An MMA can be configured to return an aggregated TimeMap based on a set of Web archives for which it has been configured, or it may be provided a set of archives to query upon request from a client. This abstraction provides a level of extensibility to current Memento aggregators for which the additional functionality may not be appropriate, scalable, or interoperable; providing an on-demand set of archives to query is particularly useful in the context of personal Web archiving.
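One way an MMA can accept an on-demand set of archives is by supplementing its configured list with endpoints supplied in the request, as our prototype does with the fabricated \texttt{X-More-Archives} header described below. The following is a minimal sketch of such a handler; the configured endpoint URL is a placeholder.

```python
# Sketch: supplement an MMA's configured archive list with archives
# supplied by the client at request time. The X-More-Archives header
# name follows the fabricated header used in this work's prototype;
# the configured endpoint is a hypothetical placeholder.

CONFIGURED_ARCHIVES = [
    "http://archive.example/timemap/*/",
]

def archives_for_request(headers):
    """Combine configured archives with any client-supplied endpoints."""
    extra = headers.get("X-More-Archives", "")
    supplied = [a.strip() for a in extra.split(",") if a.strip()]
    # Preserve configured order; append client-supplied archives after,
    # skipping any the MMA already queries.
    return CONFIGURED_ARCHIVES + [a for a in supplied
                                  if a not in CONFIGURED_ARCHIVES]

request_headers = {
    "X-More-Archives": "http://myLocalWebArchive/myCollection/timemap/*/"}
queried = archives_for_request(request_headers)
```

Appending rather than replacing preserves the MMA's core configuration while letting a client extend the query set for a single request.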
\renewcommand\theadalign{tc} \begin{figure*}\footnotesize \begin{mdframed} \centering \begin{tabular}{ c | c } \includegraphics[width=0.55\linewidth]{mma_multiarchive_tree2.png} & \specialcell[b]{\begin{tabular}{l l} \hline A$_{1\text{...}n}$ & Archive $1$ of $n$ \\ MA$_{1\text{...}n}$ & Memento Aggregator $1$ of $n$ \\ MMA$_{\alpha\text{...}\omega}$ & Memento Meta-Aggregator $1$ of $n$ (denoted using Greek) \\ \blackcircle{3pt}\ a$_x$m$_y$ & Memento of index $y$ from archive of index $x$ \\ \hline \end{tabular} \\ \vspace{2.0em} \\ \includegraphics[width=0.4\linewidth]{mma_multiarchive2.png} \vspace{4.0em} } \end{tabular}\\ \begin{tabularx}{\textwidth}{R p{0.2em} L p{0.2em} L}\hline \textbf{Mementity} & $ \rightarrow $ & \textbf{Abstracted Holdings} & $ \rightarrow $ & \textbf{Memento Holdings} \\ \hline \textit{MA}_1 & & \{A_1, A_2, A_3\} & & \{a_1m_1, a_2m_1, a_2m_2, a_3m_1, a_1m_2, a_2m_3, a_3m_2\} \\ \textit{MA}_2 & & \{A_4, A_5\} & & \{a_4m_1, a_4m_2, a_5m_1, a_5m_2\} \\ \textit{MMA}_\alpha & & \{\textit{MA}_1, \textit{MA}_2, A_6\}\ \rightarrow\ \{A_1, A_2, A_3, A_4, A_5, A_6\} & & \{a_4m_1, a_1m_1, a_6m_1, a_2m_1, a_2m_2, a_3m_1, a_4m_2, a_1m_2, a_2m_3, a_5m_1, a_6m_2, a_3m_2, a_5m_2\} \\ \textit{MMA}_\beta & & \{A_7, A_8\} & & \{a_7m_1, a_8m_1, a_8m_2, a_7m_2\} \\ \textit{MMA}_\gamma & & \makecell[l]{\{\textit{MA}_1, A_5, \textit{MMA}_\beta\}\ \rightarrow\ \{A_1, A_2, A_3, A_5, A_7, A_8\}} & & \{a_1m_1, a_2m_1, a_7m_1, a_2m_2, a_3m_1, a_1m_2, a_8m_1, a_2m_3, a_5m_1, a_8m_2, a_3m_2, a_5m_2, a_7m_2\} \\ \end{tabularx} \end{mdframed} \caption{Memento Meta-Aggregators may aggregate \mbox{URI-Ms} from archives, Memento aggregators, and other MMAs equivalently.
Shown is an example of temporally sorted captures as served from an MMA in a variety of permutations in a potentially ad hoc hierarchy.} \label{fig:mmaHierarchy} \end{figure*} User-driven specification of aggregation parameters is particularly important for accessing personal Web archives using a Memento aggregator. If a user requests a TimeMap from a conventional Memento aggregator (Figure~\ref{fig:ma}), the aggregator will request the \mbox{URI-Ms} from each archive with which the aggregator is configured to communicate. A user may wish to customize, prioritize, or give precedence to the archives queried (as described in Section~\ref{sec:precedence}). If a user hosts an aggregator themselves, the aggregator would need to be reconfigured to prevent requests for certain \mbox{URI-Rs} from propagating to certain archives on the basis of \mbox{URI-R}-archive pairs. Though this may become unwieldy, the following example illustrates how configuring an MMA with a core ruleset, prior to considering further user-driven specification, is useful when aggregating personal and public Web archives. \subsection{MMA Archive Selection} \label{sec:archivalSelection_mma} Figure~\ref{fig:mmaScenario} abstracts the following scenario to show how an MMA can perform selective aggregation. Alice archives Web pages she views in her browser using WARCreate \cite{kelly-jcdl12} and replays them using her local Wayback instance within WAIL \cite{berlin-wail}. Bob, Alice's acquaintance, and Carol, Alice's sister, each do the same for their own captures. Alice sets up an MMA (MMA$_{Alice}$) that is configured to request captures from her archive (A), Bob's archive (B), Carol's archive (C), and the Internet Archive (I).
For some \mbox{URI-Rs}, like \url{facebook.com}, it may not make sense to aggregate Alice, Bob, and Carol's captures with those from the Internet Archive, so she can specify a rule of only aggregating mementos from \{A, B, C\} when those \mbox{URI-Rs} are requested\footnote{Note that MMAs do not protect the contents of an archive from being viewed; that is handled by the mementity described in Section~\ref{sec:auth}.}. For other \mbox{URI-Rs}, like \url{alicesembarrassingphotos.net}, Alice may want to avoid exposing to Bob and the Internet Archive the fact that she is looking for certain old captures, but still aggregate captures from Carol's archive, with whom she does not mind sharing the \mbox{URI-Rs} requested. She does this by creating another rule to only aggregate from archives \{A,C\} in those cases. By controlling the MMA, Alice can both pre-configure the set of potential archives queried and provide the ability for her, Bob, or Carol to selectively aggregate from the set of archives when requesting captures for a \mbox{URI-R}. Were Bob uncomfortable with his aggregation requests going to Carol's archive when he used Alice's MMA, he may set up his own MMA (MMA$_{Bob}$) to request captures from only his and Alice's archives, without a \mbox{URI-R} filtering scheme like Alice's MMA. Carol also sets up an MMA (MMA$_{Carol}$) that defaults to using Alice's MMA and the \url{mementoweb.org} MA except when requesting \mbox{URI-Rs} from \url{carolsembarrassingphotos.net}. As an endpoint, MMAs may aggregate and request access to captures in private Web archives using a token-based authorization model (e.g., using OAuth \cite{rfc6749} as described further in Section~\ref{sec:auth}). The query may subsequently be routed to the applicable Web archive (private or public) after authentication has been established.
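Alice's selective-aggregation ruleset from the scenario above can be sketched as an ordered rule list mapping \mbox{URI-R} patterns to permitted archive subsets. The archive labels mirror Figure~\ref{fig:mmaScenario}; the substring matching is an illustrative simplification of however an MMA might actually match \mbox{URI-Rs}.

```python
# Sketch of MMA_Alice's selective aggregation: a URI-R is matched
# against an ordered list of rules, each mapping a pattern to the
# subset of archives that may be queried. Substring matching is a
# simplification for illustration.

DEFAULT = ["A", "B", "C", "I"]  # Alice, Bob, Carol, Internet Archive

RULES = [
    ("facebook.com", ["A", "B", "C"]),
    ("alicesembarrassingphotos.net/vacation.html", ["A", "C"]),
]

def archives_for(uri_r, rules=RULES, default=DEFAULT):
    """Select which archives the MMA may query for this URI-R."""
    for pattern, archives in rules:
        if pattern in uri_r:
            return archives  # first matching rule wins
    return default
```

Because the first matching rule wins, a more specific rule placed earlier in the list takes precedence over broader ones, which keeps the ruleset predictable as it grows.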
MMAs may query other MMAs with the expectation that the results returned will be consistent with those from an MA, with additional indicators for content beyond the scope of an MA (e.g., a flag for content from a non-aggregated or public archive). In the scenario above, Carol may want additional archives aggregated beyond the default case in Figure~\ref{fig:mmaScenario}, so she can utilize the ruleset of Alice's MMA as well as add filtering rules of her own. The filtering that an MMA performs may not be (and more likely is not) exposed to clients or other MMAs that look to it as a source for \mbox{URI-Ms}. Note that in the case of Carol's MMA, there exists a redundancy in that both Alice's MMA and the \url{mementoweb.org} MA will request \mbox{URI-Ms} from IA. While Carol's MMA may perform an operation to consolidate duplicates (i.e., a ``UNIQUE'' operation), time may still be wasted waiting for all archival sources to respond to requests from Carol's MMA. Carol may also want to consult some archives only if none, too few, or some other quantifier or qualifier of results exists from an initial set or series of archives. For advanced querying of this sort, a separate mementity exists and is described in Section~\ref{sec:stargate}. \subsection{User-driven Archival Specification} As in the scenario described above, a user may wish to include additional archives in the aggregation process or specify the exclusion of \mbox{URI-Ms} from specific archives at the time of the request. The MMA mementity type allows a user to be more descriptive about the results they would like returned compared to a conventional Memento aggregator, where only a \mbox{URI-R} and a datetime are specified. Introducing a separate mementity, instead of assigning additional roles to the existing Memento aggregator concept, provides extensibility while retaining the semantic responsibilities of conventional aggregators and reusing existing infrastructure.
For a user to express additional archives to be aggregated at run time, the client and recipient must cooperate by communicating through the same ``protocol''. We accomplish this using a fabricated \texttt{X-More-Archives} HTTP request header, which is consumed by a modified MemGator (serving as an MMA) to supplement the list of archives to be queried (see the curl command below). Additional attributes may be specified to the MMA, for instance, if the newly supplied archive requires special handling. \begin{lstlisting} curl -H "X-More-Archives: http://myLocalWebArchive/myCollection/timemap/*/" "http://mmaHost/timemap/json/http://www.themaneater.com" \end{lstlisting} \section{StarGate} \label{sec:stargate} Memento TimeGates perform content negotiation in the dimension of datetime (through the Accept-Datetime header) for a \mbox{URI-R} and issue an HTTP 302 response redirecting to the appropriate \mbox{URI-M}. This work introduces negotiation with a mementity that serves as an extension to a TimeGate: a StarGate\footnote{``Star'' here refers to the common syntax for a wildcard (*).}. A StarGate extends the functionality of a TimeGate with additional content negotiation in other dimensions, such as those described in Section~\ref{sec:additionalTimeMapAttributes}. This broadens archival negotiation beyond the temporal dimension into a range of others. A StarGate also acts as an endpoint to enrich a TimeMap with additional attributes about \mbox{URI-Ms}. \subsection{Negotiation in Other Dimensions} \label{sec:negotitationDimensions} The Prefer HTTP header \cite{rfc7240} provides a basis for content negotiation in other dimensions. Inclusion of the Prefer header requires declaring the preference in the Vary header of an HTTP response \cite{rfc7240}. Though the specification defines a registry of preferences (of which \texttt{return=minimal} and \texttt{return=representation} are a part), Van de Sompel et al.
\cite{preferHeader-wsdlBlog} proposed extending the definition with \texttt{Prefer} values of \texttt{original-content}, \texttt{original-links}, and \texttt{original-headers}. \subsection{Authentication and Authorization through Negotiation} \label{sec:auth} Figure~\ref{fig:cdxj_new} is an example CDXJ with the \texttt{access} attributes of \texttt{type} and \texttt{token}, which specify, for a memento, a previously established authentication and authorization procedure with a retained token for access persistence. In this initial work, we use an OAuth 2.0 procedure to establish these attributes, but the representation is extensible and not coupled to the procedure dynamics. \begin{figure} {\small \begin{enumerate} \item User requests captures for \mbox{URI-R} from MMA \item MMA requests \mbox{URI-R} from Public Web Archives $Pu_{1...n}$ and Private Web Archive $Pr_1$\begin{itemize} \item $Pu_{1...n}$ each return a respective set of \mbox{URI-Ms} $\{\{M_1\}, \{M_2\}, ... \{M_n\}\}$ to MMA \item $Pr_{1}$ returns an HTTP 401 and an identifier for an authentication mementity (\mbox{URI-P})\end{itemize} \item MMA returns HTTP 401, \mbox{URI-P}, and $Pr_1$ identifier to User \item User sends credentials and \mbox{URI-R} to \mbox{URI-P} \item mementity at \mbox{URI-P} returns a token to User \item User requests \mbox{URI-R} again from MMA with token and $Pr_1$ identifier \item MMA requests \mbox{URI-R} from $Pr_1$ along with token\begin{itemize} \item $Pr_1$ returns the set of \mbox{URI-Ms} $\{M_{Pr}\}$ to MMA after potentially consulting mementity at \mbox{URI-P} for validity \end{itemize} \item MMA sorts and transforms $\{\{M_1\}, \{M_2\}, ... 
\{M_n\},\{M_{Pr}\}\}$ into a TimeMap for \mbox{URI-R} \item MMA returns TimeMap to User \end{enumerate} } \caption{Abstraction of the authentication to private Web archives follows a flow similar to OAuth 2.0.} \label{fig:pwaa_mma_auth} \end{figure} Figure~\ref{fig:pwaa_mma_auth} describes the interaction flow of authentication and authorization to a private Web archive. This flow follows the model described by OAuth 2.0, wherein the archive from which a capture is being requested takes on the roles of the resource owner and resource server (a fundamental pattern described in the specification), an MMA or user takes on the role of the client, and a mementity at \mbox{URI-P} (an identifier for an authentication mementity) takes on the role of the authorization server. \subsection{Endpoint for Archival Aggregate Enrichment} \label{sec:enrichment} StarGates are responsible for receiving data about \mbox{URI-Ms} and enriching any subsequent TimeMaps. In typical usage of Memento, the TimeGate mementity is not aware of the state of the resources it identifies. Two different approaches can be used to specify additional attributes for \mbox{URI-Ms} in a TimeMap: server-driven and client-driven enrichment. In server-driven enrichment, the StarGate (or some other server-based mementity) accesses the \mbox{URI-M} for a \mbox{URI-R} in the archive and attempts to acquire the content-based attributes described in Section~\ref{sec:contentBasedAttributes}. These attributes are then retained by the server-based mementity and added inline to the TimeMap when the respective original \mbox{URI-R} is requested by a client. Client-driven enrichment involves further interaction between a StarGate and the client accessing a \mbox{URI-M}. A StarGate may provide a \mbox{URI-M} that acts as a proxy to the \mbox{URI-M} that would be requested in conventional Memento usage.
By acting as a proxy, a StarGate may set up further communication between the client and StarGate when a \mbox{URI-M} is accessed. This may be accomplished using conventional JavaScript callbacks or a runtime-executed service worker (similar to the approach for rerouting at replay time by Alam et al.~\cite{alam-serviceWorker}). The client-side approach also allows for distribution of the computation procedure for derived attributes (Section~\ref{sec:derivedAttributes}) with the StarGate acting as an endpoint for the result. As a safeguard to prevent malicious or miscalculated data from being served in a TimeMap, a StarGate may use a consensus model to ensure the accuracy of the result prior to associating it with a \mbox{URI-M}. As an example, a StarGate may change the \mbox{URI-M} in Figure~\ref{fig:cdxj} from \url{http://web.archive.org/web/19981212013921/http://facebook.com/} to \url{http://stargatehost/calculate/http://web.archive.org/web/19981212013921/http://facebook.com/}. Upon a client accessing the latter URI, the StarGate returns a page to the client with callback information, a key to associate the calculation procedure, and a redirect to the former \mbox{URI-M} with additional embedded JavaScript. For clients that do not support service workers or JavaScript (e.g., curl), accessing the \texttt{stargatehost} URI will provide the same experience the client would receive if accessing the non-proxied \mbox{URI-M}. \section{Future Work and Conclusions} We developed initial prototypes of the MMA\footnote{\url{https://github.com/machawk1/gogator}} (extending MemGator \cite{memgator}) and StarGate\footnote{\url{https://github.com/machawk1/stargate}} as well as integrated a prototype of Mink \cite{kelly2014mink} with the capabilities to interact with other mementities through the dynamics described in this work.
Through the implementation of the concepts, mementities, and dynamics described here, users with private Web archives may aggregate their captures with both other private and public Web archives to get a better picture of the Web as it was without compromising the information contained in their captures. In the future, we anticipate exploring additional attributes to associate with \mbox{URI-Ms} as classified and described in Section~\ref{sec:additionalTimeMapAttributes}. We also anticipate exploring the temporal and spatial ramifications of further supplementing TimeMaps and Link response headers, as well as the ramifications for caching and for efficient querying and aggregation of the TimeMaps by users and mementities alike. We plan on further exploring archival query precedence models (Section~\ref{sec:precedence}) to consider attributes of archives and their contained mementos in dimensions beyond public-private, as well as more complex asynchronous models and short-circuiting techniques. In this work we laid the foundation for aggregating private and public Web archives. We introduced conceptual Web archiving Memento entities (mementities) to facilitate a hierarchical approach toward aggregation and provided extensible means and methods for further aggregating these captures for a better picture of the Web of the past. We introduced and explored archival precedence and the short-circuiting of requests to archival aggregators, allowing users to query individual archives or subsets of archives and to halt querying if and when a condition is met. We provided the syntax and semantics for enriching Memento TimeMaps with additional attributes, making them more expressive, particularly as required when aggregating private mementos. We introduced a model to integrate conventional live Web authentication methods with an additional mementity (StarGate) for systematic access control to private captures, both from an individual user as well as from other mementities.
\begin{flushleft} \textbf{Acknowledgements.} This work is supported by the NEH grant HK-50181-14 and IMLS grant RE-33-16-0107-16. We would also like to thank Scott Ainsworth, Sawood Alam, and Shawn Jones for preliminary reviews. Some icons adapted from the work of Agatha Krych and licensed CC-BY-SA 4.0\footnote{\href{https://github.com/machawk1/jcdl2018-artwork}{https://github.com/machawk1/jcdl2018-artwork}}. \end{flushleft} \bibliographystyle{ACM-Reference-Format}
\section{Conclusions and Future Work} \label{sec:conclusions} \vspace{-0.1cm} Assuming a matrix normal distribution on a reduced latent output space, we introduced an efficient and scalable multi-task Gaussian process regression approach to learning complex associations between external covariates and high-dimensional neuroimaging data. Our experiments on an fMRI dataset demonstrate the superiority of the proposed approach over other single-task and multi-task alternatives in terms of computational time complexity. This superiority was achieved without compromising the regression performance, and even with higher sensitivity to abnormal samples in the normative modeling paradigm. Our methodological contribution advances current practice in normative modeling from single-voxel modeling to multi-voxel structural learning. For future work, we will consider enriching the proposed approach by embedding more biologically meaningful basis functions~\cite{huertas2017bayesian}, structural modeling of non-stationary noise, and applying our method to clinical cohorts. \vspace{-0.35cm} \section{Introduction} \label{sec:introduction} Understanding the underlying biological mechanisms of psychiatric disorders constitutes a significant step toward developing more effective and individualized treatments (\emph{i.e.}, \emph{precision medicine}~\cite{Mirnezami2012preparing}). Recent advances in neuroimaging and machine learning provide an exceptional opportunity to employ brain-derived biological measures for this purpose. Since the symptoms and biological underpinnings of mental diseases are known to be highly heterogeneous, data-driven approaches play an important role in stratifying clinical groups into more homogeneous subgroups. Currently, off-the-shelf clustering algorithms are the most predominant approaches for stratifying clinical cohorts.
However, the high dimensionality and complexity of the data, besides the use of heuristics to find optimal clustering solutions, negatively affect the reproducibility and reliability of the resulting clusters~\cite{marquand2016beyond}. Normative modeling~\cite{marquand2016understanding} offers an alternative approach to modeling biological variations within clinical cohorts without needing to assume cleanly separable clusters or cohorts. This approach is applicable to most types of neuroimaging data, such as structural/functional magnetic resonance imaging (s/fMRI). Normative modeling employs Gaussian process regression (GPR)~\cite{williams1996gaussian} to predict neuroimaging data on the basis of clinical and/or behavioral covariates. GPR, and in general Bayesian inference, can be seen as an indispensable part of normative modeling, as it provides coherent estimates of predictive confidence. These measures of predictive uncertainty are important for quantifying centiles of variation in a population~\cite{marquand2016understanding}. GPR also provides the possibility to accommodate both linear and nonlinear relationships between clinical covariates and neuroimaging data. The variant of GPR originally employed for normative modeling aims to model only a single output variable. Thus, in normative modeling one must independently train a separate GPR model for each unit of measurement (\emph{e.g.}, for each voxel in a mass-univariate fashion). Such a simplification ignores the possibility of modeling, and capitalizing on, the existing spatial structure in the output space. However, GPR can be extended to perform a joint prediction across multiple outputs in order to account for correlations between variables in neuroimaging data (for example, different voxels in fMRI data). Boyle and Frean~\cite{boyle2005multiple} proposed to employ convolutional processes to express each output as the convolution between a smoothing kernel and a latent function.
This idea was later adopted by Bonilla \emph{et al.}~\cite{bonilla2008multi} to extend classical single-task GPR (STGPR) to multi-task GPR (MTGPR) by coupling a set of latent functions with a shared GP prior in order to directly induce correlation between output variables (tasks). They proposed to disentangle the full cross-covariance matrix into the Kronecker product of the sample (in input space) and task (in output space) covariance matrices. This technique provides the possibility to model both across-sample and across-task variations. Despite its effectiveness in modeling structures in data, MTGPR comes with extra computational overhead in time and space, especially when dealing with high-dimensional neuroimaging data. We briefly review recent efforts toward alleviating these computational burdens. \vspace{-0.25cm} \subsection{Toward Efficient and Scalable MTGPR} \label{subsec:related_work} For $N$ samples and $T$ tasks, the time and space complexity of MTGPR are $\mathcal{O}(N^3 T^3)$ and $\mathcal{O}(N^2 T^2)$, respectively. These high computational demands (compared to STGPR with $\mathcal{O}(N^3 T)$ and $\mathcal{O}(N^2 T)$) are mainly due to the need for computing the inverse cross-covariance matrix in the learning and inference phases. In the neuroimaging problems that we consider, both can be relatively high: $N$ is generally on the order of $10^2-10^4$ and $T$ is on the order of $10^4-10^5$ or even higher. Therefore, improving the computational efficiency of MTGPR is crucial for certain problems, and several approaches have been proposed for this in the machine learning literature~\cite{quinonero2007approximation,alvarez2011computationally}. Here we briefly review two main directions to address the computational tractability issue of MTGPR. In the first set of approaches, approximation techniques are used to improve estimation efficiency.
Bonilla \emph{et al.}~\cite{bonilla2008multi} made one of the earliest efforts in this direction, proposing to use the Nystr\"{o}m approximation on $M$ inducing inputs~\cite{quinonero2007approximation} out of $N$ samples, in combination with probabilistic principal component analysis, in order to approximate reduced $M$-rank and $P$-rank sample and task covariance matrices, respectively. Their approximation reduces the time complexity of hyperparameter learning to $\mathcal{O}(N T M^2 P^2)$. Elsewhere, Alvarez and Lawrence~\cite{alvarez2009sparse} proposed a sparse approximation of MTGPR, assuming conditional independence of each output variable from all others given the input process. This assumption, besides using $M$ out of $N$ input samples as inducing inputs, reduces the computational complexity of MTGPR to $\mathcal{O}(N^3 T+ N T M^2)$ in time and $\mathcal{O}(N^2 T+N T M)$ in storage, which for $N=M$ is the same as a set of $T$ independent STGPRs. Alvarez \emph{et al.}~\cite{alvarez2010efficient} extended their previous work by developing the concept of an inducing function rather than an inducing input. Their new approach, so-called variational inducing kernels, achieves a time complexity of $\mathcal{O}(N T M^2)$. The second set of approaches utilizes properties of the Kronecker product~\cite{loan2000ubiquitous} to reduce the time and space complexity of computing the exact (not approximated) inverse covariance matrix. Stegle \emph{et al.}~\cite{stegle2011efficient} proposed to use these properties in combination with the eigenvalue decomposition of the input and task covariance matrices for efficient parameter estimation and likelihood evaluation/optimization in MTGPR. In this method, the joint covariance matrix is defined as a Kronecker product between the input and task covariance matrices. This approach reduces the time and space complexity of MTGPR to $\mathcal{O}(N^3+T^3)$ and $\mathcal{O}(N^2+T^2)$, respectively.
To also account for structured noise, Rakitsch \emph{et al.}~\cite{rakitsch2013all} extended this method by using two separate Kronecker products for the signal and noise. Importantly, this provides a significant reduction in computational complexity while using all samples (\emph{i.e.}, not just inducing inputs), and it is exact in the sense that it does not require any approximations or relaxing assumptions. \vspace{-0.5cm} \subsubsection{Our contribution:} In spite of all the aforementioned efforts, applications of MTGPR in encoding neuroimaging data from a set of clinically relevant covariates have remained very limited, mainly due to the high dimensionality of the output space (\emph{i.e.}, very large $T$). Our main contribution in this text addresses this problem and extends MTGPR to the normative modeling of neuroimaging data. To this end, we use a combination of a low-rank approximation of the task covariance matrix with algebraic properties of the Kronecker product in order to reduce the computational complexity of MTGPR. Furthermore, on a public fMRI dataset, we show that: 1) our method makes MTGPR possible on very high-dimensional output spaces; 2) it enables us to model both across-space and across-subject variations, hence providing more sensitivity for the resulting normative model in novelty detection. \vspace{-0.5cm} \section{Methods} \label{sec:methods} \vspace{-0.25cm} \subsection{Notation} \label{subsec:notation} Boldface capital letters, $\mathbf{A}$, and capital letters, $A$, are used to denote matrices and scalars, respectively. We denote the column vector that results from stacking the columns of a matrix $\mathbf{A} \in \mathbb{R}^{N \times T}$ by $vec(\mathbf{A}) \in \mathbb{R}^{N T}$. In the remaining text, we use $\otimes$ and $\odot$ to denote the Kronecker and element-wise matrix products, respectively.
We denote an identity matrix by $\mathbf{I}$; and the determinant, diagonal elements, and trace of a matrix $\mathbf{A}$ by $\left | \mathbf{A} \right |$, $diag(\mathbf{A})$, and $Tr[\mathbf{A}]$, respectively. \vspace{-0.5cm} \subsection{Scalable Multi-Task Gaussian Process Regression} \label{subsec:SMTGP} Let $\mathbf{X} \in \mathbb{R}^{N \times F}$ be the input matrix with $N$ samples and $F$ covariates. Let $\mathbf{Y} \in \mathbb{R}^{N \times T}$ represent a matrix of response variables with $N$ samples and $T$ tasks (here, neuroimaging data with $T$ voxels). The multi-task Kronecker Gaussian process model (MT-Kronprod)~\cite{stegle2011efficient} is defined as: \vspace{-0.15cm} \small \begin{eqnarray} \label{eq:MT-kronprod} p(\mathbf{Y} \mid \mathbf{D},\mathbf{R},\sigma^2) = \mathcal{N}(\mathbf{Y} \mid \mathbf{0}, \mathbf{D} \otimes \mathbf{R} + \sigma^2 \mathbf{I}) \quad , \end{eqnarray} \normalsize \noindent where $\mathbf{D} \in \mathbb{R}^{T \times T}$ and $\mathbf{R} \in \mathbb{R}^{N \times N}$ are respectively the task and sample covariance matrices (here, modeling correlations across voxels and samples separately). Despite its effectiveness in modeling both sample and task variations, the application of MT-Kronprod is limited when dealing with very large output spaces, such as neuroimaging data, mainly due to the high computational complexity of the matrix diagonalization operations in the optimization and inference phases. We propose to address this problem by using a low-rank approximation of $\mathbf{D}$. Let $\Phi: \mathbf{Y} \to \mathbf{Z}$ be an orthogonal linear transformation, \emph{e.g.}, principal component analysis (PCA), that transforms $\mathbf{Y}$ to a reduced latent space $\mathbf{Z} \in \mathbb{R}^{N \times P}$, where $P < T$, and we have $\mathbf{Z} = \Phi(\mathbf{Y}) = \mathbf{Y}\mathbf{B}$. Here, the columns of $\mathbf{B} \in \mathbb{R}^{T \times P}$ represent a set of $P$ orthogonal basis functions.
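As a concrete illustration of this projection, the basis $\mathbf{B}$ can be taken as the top-$P$ right singular vectors of the centered data matrix. The following is a minimal numpy sketch on synthetic data; the sizes and variable names are illustrative assumptions, not taken from the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, P = 50, 200, 10        # samples, tasks (voxels), number of retained bases

Y = rng.standard_normal((N, T))
Y -= Y.mean(axis=0)          # center so the principal directions are PCA components

# Columns of B are the top-P right singular vectors of Y (PCA loading vectors).
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
B = Vt[:P].T                 # T x P with orthonormal columns

Z = Y @ B                    # N x P reduced latent representation

assert np.allclose(B.T @ B, np.eye(P))   # orthogonality, i.e., B^T B = I
```

The final assertion checks the orthogonality $\mathbf{B}^\top\mathbf{B}=\mathbf{I}$ that the efficient formulation relies on.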
Assuming a zero-mean matrix normal distribution for $\mathbf{Z}$, by factorizing its rows and columns we have: \small \begin{eqnarray} \label{eq:MND_Z} p(\mathbf{Z} \mid \mathbf{C},\mathbf{R})= \mathcal{MN}(\mathbf{0},\mathbf{C} \otimes \mathbf{R})=\frac{\exp(-\frac{1}{2}Tr[\mathbf{C}^{-1}\mathbf{B}^\top\mathbf{Y}^\top\mathbf{R}^{-1}\mathbf{Y}\mathbf{B}])}{\sqrt{(2\pi)^{NP}\left | \mathbf{C} \right |^N \left | \mathbf{R} \right |^P}} \quad , \end{eqnarray} \normalsize \noindent where $\mathbf{C} \in \mathbb{R}^{P \times P}$ and $\mathbf{R} \in \mathbb{R}^{N \times N}$ are the column and row covariance matrices of $\mathbf{Z}$. Using the invariance of the trace under cyclic permutations, the noise-free multivariate normal distribution of $\mathbf{Y}$ can be approximated from Eq.~\ref{eq:MND_Z}: \small \begin{eqnarray} \label{eq:MND_Y} p(\mathbf{Y} \mid \mathbf{D},\mathbf{R}) \approx p(\mathbf{Y} \mid \mathbf{C},\mathbf{B},\mathbf{R}) = \frac{\exp(-\frac{1}{2}Tr[\mathbf{B}\mathbf{C}^{-1}\mathbf{B}^\top \mathbf{Y}^\top\mathbf{R}^{-1}\mathbf{Y}])}{\sqrt{(2\pi)^{NT}\left | \mathbf{BCB}^\top \right |^N \left | \mathbf{R} \right |^T}} \quad , \end{eqnarray} \normalsize \noindent where $\mathbf{D}$ is approximated by $\mathbf{B}\mathbf{C}\mathbf{B}^\top$. Our scalable multi-task Gaussian process regression (S-MTGPR) model is then derived by marginalizing over noisy samples: \small \begin{eqnarray} \label{eq:kronecker_GP} p(\mathbf{Y} \mid \mathbf{D},\mathbf{R}, \sigma^2) \approx p(\mathbf{Y} \mid \mathbf{C},\mathbf{B}, \mathbf{R}, \sigma^2) = \mathcal{N}(\mathbf{Y} \mid \mathbf{0}, \mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I}) \quad .
\end{eqnarray} \normalsize \vspace{-1cm} \subsubsection{Predictive Distribution:} \label{subsubsec:prediction} Following the standard GPR framework~\cite{williams1996gaussian} and setting $\tilde{\mathbf{D}}=\mathbf{BCB}^\top$, the mean and variance of the predictive distribution of unseen samples, \emph{i.e.}, $p(vec(\mathbf{Y})^* \mid vec(\mathbf{M^*}), \mathbf{V}^*)$, can be computed as follows: \small \begin{subequations} \label{eq:predictive_distribution} \begin{align} & vec(\mathbf{M^*}) = (\tilde{\mathbf{D}} \otimes \mathbf{R}^*)(\tilde{\mathbf{D}} \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}), \\ & \mathbf{V}^* = (\tilde{\mathbf{D}} \otimes \mathbf{R}^{**})-(\tilde{\mathbf{D}} \otimes \mathbf{R}^*)(\tilde{\mathbf{D}} \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1}(\tilde{\mathbf{D}} \otimes \mathbf{R}^{* \top}), \end{align} \end{subequations} \normalsize \noindent where $\mathbf{R}^{**} \in \mathbb{R}^{N^* \times N^*}$ is the covariance matrix of the $N^*$ test samples, and $\mathbf{R}^* \in \mathbb{R}^{N^* \times N}$ is the cross-covariance matrix between the test and training samples. \subsubsection{Efficient Prediction and Optimization:} \label{subsubsec:optimization} For efficient prediction and fast optimization of the log-likelihood, we extend the approach proposed in~\cite{stegle2011efficient,rakitsch2013all} by exploiting properties of the Kronecker product and the eigenvalue decomposition for diagonalizing the covariance matrices.
Then the predictive mean and variance can be efficiently computed by: \small \begin{subequations} \label{eq:efficient_prediction} \begin{align} \mathbf{M}^* &= \mathbf{R}^* \mathbf{U_R} \mathbf{\tilde Y} \mathbf{U^\top_C} \mathbf{C}\mathbf{B}^\top, \\ \mathbf{V}^* &= (\tilde{\mathbf{D}} \otimes \mathbf{R}^{**})-(\mathbf{BCU_C} \otimes \mathbf{R}^* \mathbf{U_R})\tilde{\mathbf{K}}^{-1}(\mathbf{U_C^\top CB^\top} \otimes \mathbf{U_R^\top R}^{* \top}), \end{align} \end{subequations} \normalsize \noindent where $\mathbf{C=U_CS_CU_C^\top}$ and $\mathbf{R=U_RS_RU_R^\top}$ are the eigenvalue decompositions of the covariance matrices, $\tilde{\mathbf{K}} = \mathbf{S_C} \otimes \mathbf{S_R} + \sigma^2 \mathbf{I}$, and $vec(\mathbf{\tilde Y})=diag(\tilde{\mathbf{K}}^{-1}) \odot vec(\mathbf{U_R^\top Y B U_C})$.\footnote{\scriptsize See supplementary materials for more descriptive derivations of all equations.\normalsize} Based on our assumption of the orthogonality of the components in $\mathbf{B}$, we set $\mathbf{B}^{-1}=\mathbf{B}^\top$ and $\mathbf{B}^\top \mathbf{B}=\mathbf{I}$. Note that in the new parsimonious formulation, the heavy time and space complexities of computing the inverse kernel matrix are reduced to those of inverting a diagonal matrix, \emph{i.e.}, taking the reciprocals of the diagonal elements of $\tilde{\mathbf{K}}$. For the predictive variance, explicit computation of the Kronecker product is still necessary, but this can easily be overcome by computing the predictions in mini-batches. For the log marginal likelihood of Eq.~\ref{eq:kronecker_GP}, we have: \vspace{-0.15cm} \small \begin{eqnarray} \label{eq:LML} \mathcal{L}=-\frac{N \times T}{2} \ln(2\pi)-\frac{1}{2}\ln\left | \tilde{\mathbf{K}} \right | - \frac{1}{2} vec(\mathbf{U_R^\top YBU_C})^\top vec(\mathbf{\tilde Y}) \quad .
\end{eqnarray} \normalsize The proposed S-MTGPR model has three sets of parameters plus one hyperparameter: 1) the reduced task covariance matrix parameters $\Theta_{\mathbf{C}}$, 2) the input covariance matrix parameters $\Theta_{\mathbf{R}}$, 3) the noise variance $\sigma^2$, parametrized on $\Theta_{\sigma^2}$, and 4) $P$, which decides the number of components in $\mathbf{B}$. While the latter should be decided by means of model selection, the first three sets are optimized by maximizing $\mathcal{L}$. \vspace{-0.4cm} \subsubsection{Computational Complexity:} \label{subsubsec:complexity} The time complexity of the proposed method is $\mathcal{O}(N^2 T + N T^2 + N^3 + P^3)$. The first two terms are related to the matrix multiplications in computing the squared term in Eq.~\ref{eq:LML}. The last two terms belong to the eigenvalue decompositions of $\mathbf{R}$ and $\mathbf{C}$. The $P^3$ term can be excluded because $P \leq \min(N,T)$ always. Thus, for $N>T$ and $N<T$ the time complexity is reduced to $\mathcal{O}(N^3)$ and $\mathcal{O}(N T^2)$, respectively. Hence, when $N>T$ or $N<T<N^2$, our approach is analytically even faster than the baseline STGPR approach applied independently to each output variable in a mass-univariate fashion. For $N \ll T$, our method is faster than other Kronecker-based MTGPRs by a factor of $T/N$. Such an improvement not only facilitates the application of MTGPR to neuroimaging data but also provides the possibility of accounting for the existing spatial structures across different brain regions. In comparison to the related work, the proposed method provides a substantial speed improvement, especially when dealing with a large number of tasks, while, unlike the approximation approaches, fully using the potential of all available samples.
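As a numerical sanity check on the formulation above, the efficient predictive mean of Eq.~\ref{eq:efficient_prediction} can be compared against the naive form of Eq.~\ref{eq:predictive_distribution} on a problem small enough to build the full $NT \times NT$ covariance explicitly. The numpy sketch below uses synthetic sizes and randomly generated covariances (illustrative assumptions only, not the paper's experimental setup); $vec(\cdot)$ corresponds to column-major flattening (\texttt{order='F'}).

```python
import numpy as np

rng = np.random.default_rng(1)
N, Ns, T, P = 8, 5, 12, 4      # train samples, test samples, tasks, latent bases
sigma2 = 0.5

def random_spd(n):
    """Random symmetric positive-definite matrix (stand-in for a kernel matrix)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

R = random_spd(N)                                   # sample covariance
C = random_spd(P)                                   # reduced task covariance
Rstar = rng.standard_normal((Ns, N))                # test/train cross-covariance
B, _ = np.linalg.qr(rng.standard_normal((T, P)))    # T x P basis with B^T B = I
Y = rng.standard_normal((N, T))

# Naive mean: (D~ (x) R*) (D~ (x) R + sigma^2 I)^{-1} vec(Y)
Dt = B @ C @ B.T
K = np.kron(Dt, R) + sigma2 * np.eye(N * T)
m = np.kron(Dt, Rstar) @ np.linalg.solve(K, Y.flatten(order="F"))
M_naive = m.reshape(Ns, T, order="F")

# Efficient mean: M* = R* U_R Y~ U_C^T C B^T, inverting only the diagonal K~
S_C, U_C = np.linalg.eigh(C)
S_R, U_R = np.linalg.eigh(R)
Kt = np.outer(S_R, S_C) + sigma2          # diagonal of K~, laid out as an N x P grid
Yt = (U_R.T @ Y @ B @ U_C) / Kt           # elementwise product with K~^{-1}
M_eff = Rstar @ U_R @ Yt @ U_C.T @ C @ B.T

assert np.allclose(M_naive, M_eff)
```

Because $\mathbf{B}$ has orthonormal columns, the rank-deficient directions of $\mathbf{BCB}^\top$ contribute nothing to the mean, so the two computations agree to numerical precision while the efficient path only ever inverts the diagonal $\tilde{\mathbf{K}}$.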
\section{Experiments and Results} \label{sec:results} \vspace{-0.25cm} \subsection{Experimental Materials and Setup} \label{subsec:materials_setups} In our experiments, we use a public fMRI dataset collected for reconstructing visual stimuli (black and white letters and symbols) from fMRI data~\cite{miyawaki2008visual}. In this dataset, fMRI responses were measured while $10 \times 10$ checkerboard patch images were presented to subjects according to a blocked design. The checkerboard patches constituted random (1320 trials) and geometrically meaningful patterns (720 trials). We use the preprocessed data available in the Nilearn package~\cite{abraham2014machine}, wherein the fMRI data are detrended and masked for the occipital lobe (5438 voxels).\footnote{\scriptsize See \url{http://nilearn.github.io/auto_examples/02_decoding/plot_miyawaki_reconstruction.html}. \normalsize} Whilst our approach is quite general, we demonstrate S-MTGPR by simulating normative modeling for novelty detection. Therefore, we aim to predict the masked fMRI 3D-volume from the presented visual stimuli in an encoding setting. To this end, we randomly selected 600 random-pattern trials for training the encoding model. The model then learns to represent this reference or normative class such that anomalous or abnormal samples can be detected and characterized. The non-random patterns (720 trials) and the remaining random patterns (720 trials) are used for evaluating the encoding model and testing anomaly-detection performance, achieved by fitting a generalized extreme value distribution to the most deviating voxels. In our experiments, we use PCA to transform the fMRI data in the training set from the voxel space to $\mathbf{Z}$, and the resulting $P=10,25,50,100,250,500,1000$ PCA components are used as the basis matrix $\mathbf{B}$ in the optimization and inference.
We benchmark the proposed method against the STGPR (\emph{i.e.}, mass-univariate) and MT-Kronprod models in terms of their runtime, regression performance, and the quality of the resulting normative models. In all models, we use a summation of linear, squared exponential, and diagonal isotropic covariance functions for the sample and task covariance matrices in order to accommodate both linear and non-linear relationships. In all cases, we use an isotropic Gaussian likelihood function. This likelihood function has different functionality in the STGPR versus MTGPR settings. In STGPR, it is defined independently for each voxel, thus it handles heteroscedastic, \emph{i.e.}, spatially varying, noise; in MTGPR, a single noise parameter is shared across all voxels, hence it merely considers homoscedastic, \emph{i.e.}, spatially stationary, noise. The truncated Newton algorithm is used for optimizing the parameters. Table~\ref{tab:hyperparameters} summarizes the time complexity and the number of parameters of the three benchmarked methods in our experiments. \begin{table}[t] \centering \caption{Three benchmarked methods in our experiments.} \vspace{-0.3cm} \label{tab:hyperparameters} \resizebox{0.995\textwidth}{!}{\begin{tabular}{|c|ccc|} \hline \textbf{Method} & \textbf{\begin{tabular}[c]{@{}c@{}}Time\\ Complexity\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}No.\\ Parameters\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Parameter \\ Description\end{tabular}} \\ \hline \textbf{STGPR} & $\mathcal{O}(N^3 T)$ & 21752 & \begin{tabular}[c]{@{}c@{}}1 for linear and 2 for squared exponential kernels, 1 for Gaussian likelihood; \\ multiplied by the number of tasks (5438).\end{tabular} \\ \hline \textbf{MT-Kronprod} & $\mathcal{O}(T^3)$ & 9 & \begin{tabular}[c]{@{}c@{}}1 for linear, 2 for squared exponential, and 1 for diagonal isotropic kernels; multiplied \\ by 2 (for sample and task covariance functions); plus 1 for Gaussian likelihood.\end{tabular} \\ \hline \textbf{S-MTGPR} & $\mathcal{O}(N T^2)$ & 10 & \begin{tabular}[c]{@{}c@{}}Same as MT-Kronprod, plus 1 hyperparameter for the number of PCA bases.\end{tabular} \\ \hline \end{tabular}} \vspace{-0.75cm} \end{table} We use the coefficient of determination ($R^2$) to evaluate the variance explained by the regression models. In normative modeling, the top 5\% of values in the normative probability maps are used to fit the generalized extreme value distribution (see~\cite{marquand2016understanding}). To evaluate the resulting normative models, we employ the area under the curve (AUC) to measure the performance of each model in distinguishing normal samples (here, random patterns) from abnormal samples (here, non-random patterns). All the steps (random sampling, modeling, and evaluation) are repeated 10 times in order to estimate the mean and standard deviation of the runtime, $R^2$, and AUC. All experiments are performed on a system with an Intel\textsuperscript \textregistered Xeon\textsuperscript \textregistered E5-1620 0 @3.60GHz CPU and 16GB of RAM.\footnote{\scriptsize The experimental codes are available at~\url{https://github.com/smkia/MTNorm}.\normalsize} \vspace{-0.4cm} \subsection{Results and Discussion} \label{subsec:results_discussion} Fig.~\ref{fig:Comparison_bar_plots} compares the runtime, $R^2$, and AUC of STGPR and MT-Kronprod with those of S-MTGPR for different numbers of bases.
As illustrated in Fig.~\ref{fig:Comparison_bar_plots}(a), S-MTGPR is faster than the other approaches: the total runtime of MT-Kronprod (3 days) and STGPR (6 hours) can be reduced to 16 minutes for $P=25$. This difference is even more pronounced in the case of the optimization time, where S-MTGPR is at least (for $P=1000$) 33 and 89 times faster than STGPR and MT-Kronprod, respectively. The multi-task approaches are slower than STGPR in the prediction phase, mainly due to the mini-batch implementation of the prediction variance computation (to avoid memory overflow). Fig.~\ref{fig:Comparison_bar_plots}(b) shows that this computational efficiency is achieved without penalty to the regression performance; for certain numbers of bases, S-MTGPR shows equivalent or even better $R^2$ than STGPR and MT-Kronprod. Furthermore, Fig.~\ref{fig:Comparison_bar_plots}(c) demonstrates that multi-task learning, by considering spatial structures, generally provides a more accurate normative model of fMRI data, in that it more accurately detects samples that were derived from a different distribution than those used to train the model. This fact is well reflected in the higher AUC values for S-MTGPR at $P=25,100,250,500,1000$. It is worthwhile to emphasize that these improvements are achieved while reducing the degrees of freedom of the normative model from 21752 for STGPR to 10 for S-MTGPR (see Table~\ref{tab:hyperparameters}). \begin{figure}[t!]
\centering \includegraphics[width=0.995\textwidth]{Figures/Comparison_bar_plots.pdf} \vspace{-0.75cm} \caption{Comparison between S-MTGPR, STGPR, and MT-Kronprod in terms of: a) optimization and prediction runtime, b) average regression performance ($R^2$) across all voxels, and c) AUC in abnormal sample detection using normative modeling.} \label{fig:Comparison_bar_plots} \vspace{-0.6cm} \end{figure} \section*{Supplementary Materials} \label{sec:supplementary} Throughout the supplementary materials we use the same notation introduced in the main text. \subsection*{Useful Equations} \label{subsec:useful} For $\mathbf{A} \in \mathbb{R}^{M \times N}$, $\mathbf{B} \in \mathbb{R}^{P \times Q}$, and $\mathbf{C}$, $\mathbf{D}$ (with appropriate size) we have: \begin{enumerate} \item $\mathbf{A = U_A S_A U_A^\top}$ is the eigenvalue decomposition of $\mathbf{A}$, \item $\mathbf{(ACB)^{-1}=B^{-1}C^{-1}A^{-1}}$, \item $\mathbf{(A \otimes B)(C \otimes D) = AC \otimes BD}$, \item $\mathbf{(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}}$, \item the eigenvalue decomposition of $\mathbf{A \otimes B + I}$ is: \\ $\mathbf{(U_A \otimes U_B)(S_A \otimes S_B + I)(U_A^\top \otimes U_B^\top)}$, \item $(\mathbf{A \otimes B}) vec(\mathbf{C}) = vec(\mathbf{BCA}^{\top})$, \item $\ln \left | \mathbf{AC} \right | = \ln (\left | \mathbf{A} \right | \left | \mathbf{C} \right |) = \ln \left | \mathbf{A} \right | + \ln \left | \mathbf{C} \right |$, \item for $\mathbf{C} \in \mathbb{R}^{N \times N}$, $\frac{\mathrm{d}}{\mathrm{d} x} \ln \left | \mathbf{C} \right | = Tr[\mathbf{C}^{-1} \frac{\mathrm{d} \mathbf{C}}{\mathrm{d} x}]$, \item $Tr[\mathbf{ACBD}]=Tr[\mathbf{CBDA}]=Tr[\mathbf{BDAC}]=Tr[\mathbf{DACB}]$. 
\end{enumerate} \subsection*{Efficient Mean Prediction} \label{subsec:mean_prediction} Eq.~\ref{eq:efficient_prediction}(a) is derived from Eq.~\ref{eq:predictive_distribution}(a) as follows: \small \begin{eqnarray*} \label{eq:eff_mean_pred} \begin{split} vec(\mathbf{M^*}) & = (\mathbf{BCB^\top} \otimes \mathbf{R}^*)(\mathbf{BCB^\top} \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^*) (\mathbf{B U_C S_C U_C^\top B^\top} \otimes \mathbf{U_R S_R U_R}^\top + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^*) [(\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})(\mathbf{U_C^\top B^\top \otimes U_R^\top})]^{-1} vec(\mathbf{Y}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^*) (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) vec(\mathbf{Y}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^*) (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}vec(\mathbf{U_R^\top Y B U_C}) \\ & = (\mathbf{BC \underbrace{\mathbf{B^\top B}}_I U_C} \otimes \mathbf{R^* U_R}) \underbrace{diag[(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}] \odot vec(\mathbf{U_R^\top Y B U_C})}_{vec({\tilde{\mathbf{Y}})}} \\ & = \mathbf{R^* U_R \tilde{\mathbf{Y}} \mathbf{U_C^\top C B^\top}} \quad . 
\end{split} \end{eqnarray*} \normalsize \subsection*{Efficient Variance Prediction} \label{subsec:variance_prediction} Eq.~\ref{eq:efficient_prediction}(b) is derived from Eq.~\ref{eq:predictive_distribution}(b) as follows: \small \begin{eqnarray*} \label{eq:eff_var_pred} \begin{split} \mathbf{V}^* & = (\mathbf{BCB}^\top \otimes \mathbf{R}^{**})-(\mathbf{BCB}^\top \otimes \mathbf{R}^*)\underbrace{(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1}}_{\mathbf{K}^{-1}}(\mathbf{BCB}^\top \otimes \mathbf{R}^{* \top}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^{**})-(\mathbf{BCB}^\top \otimes \mathbf{R}^*)(\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1} \\ & (\mathbf{U_C^\top B^\top \otimes U_R^\top}) (\mathbf{BCB}^\top \otimes \mathbf{R}^{* \top}) \\ & = (\mathbf{BCB}^\top \otimes \mathbf{R}^{**})-(\mathbf{BC U_C} \otimes \mathbf{R^* U_R})\underbrace{(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}}_{\tilde{\mathbf{K}}^{-1} }(\mathbf{U_C^\top C B^\top \otimes U_R^\top} \mathbf{R}^{* \top}) \quad .
\end{split} \end{eqnarray*} \normalsize \subsection*{Efficient Log Marginal Likelihood Evaluation} \label{subsec:LML_evaluation} Eq.~\ref{eq:LML} is derived as follows: \small \begin{eqnarray*} \label{eq:eff_LML} \begin{split} \mathcal{L} & = -\frac{N \times T}{2} \ln(2\pi)-\frac{1}{2}\ln\left | \mathbf{K} \right | - \frac{1}{2} vec(\mathbf{Y})^\top \mathbf{K}^{-1} vec(\mathbf{Y}) \\ & = -\frac{N \times T}{2} \ln(2\pi) - \frac{1}{2}\ln\left | \mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I} \right | - \frac{1}{2} vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = -\frac{N \times T}{2} \ln(2\pi)-\frac{1}{2}\ln\left | (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})(\mathbf{U_C^\top B^\top \otimes U_R^\top}) \right | \\ & - \frac{1}{2} vec(\mathbf{Y})^\top (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})vec(\mathbf{Y}) \\ & = -\frac{N \times T}{2} \ln(2\pi)-\frac{1}{2} \underbrace{\ln\left | (\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B U_C \otimes U_R}) \right |}_{\ln \left | \mathbf{I}\right |=0} - \frac{1}{2} \ln\left |(\mathbf{S_C \otimes S_R + \sigma^2 I})\right | \\ & - \frac{1}{2} vec(\mathbf{U_R^\top Y B U_C})^\top \underbrace{diag[(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}] \odot vec(\mathbf{U_R^\top Y B U_C})}_{vec(\tilde{\mathbf{Y})}} \\ & = -\frac{N \times T}{2} \ln(2\pi) - \frac{1}{2} \ln \underbrace{\left |(\mathbf{S_C \otimes S_R + \sigma^2 I})\right |}_{\left | \tilde{\mathbf{K}} \right |} - \frac{1}{2} vec(\mathbf{U_R^\top Y B U_C})^\top vec(\tilde{\mathbf{Y}}) \quad . 
\end{split} \end{eqnarray*} \normalsize \subsection*{Derivatives of $\mathcal{L}$ with Respect to Parameters} \label{subsec:gradients} In the optimization process, the derivatives of $\mathcal{L}$ with respect to $\theta_{\mathbf{C}} \in \Theta_{\mathbf{C}}$, $\theta_{\mathbf{R}} \in \Theta_{\mathbf{R}}$, and $\theta_{\sigma^2} \in \Theta_{\sigma^2}$ can be efficiently computed as follows: \subsubsection*{Gradients of $\mathcal{L}$ with Respect to $\theta_\mathbf{C}$:} \label{subsubsec:gradient_c} \scriptsize \begin{eqnarray*} \label{eq:derivatives_c} \begin{aligned} \frac{\partial \mathcal{L}}{\partial \theta_\mathbf{C}} = & -\frac{1}{2}diag(\tilde{\mathbf{K}}^{-1})^\top[diag(\mathbf{U_C^\top}\frac{\partial \mathbf{C}}{\partial \theta_\mathbf{C}}\mathbf{U_C}) \otimes diag(\mathbf{S_R})] +\frac{1}{2} vec(\mathbf{\tilde Y})^\top vec(\mathbf{S_R \tilde Y U_C^\top \frac{\partial \mathbf{C}}{\partial \theta_\mathbf{C}}U_C}), \end{aligned} \end{eqnarray*} \normalsize \noindent where the determinant term of the above equation is derived by computing the derivative of $\ln \left | \mathbf{K} \right |$: \small \begin{eqnarray*} \label{eq:determinant_c} \begin{split} \frac{\partial \ln \left | \mathbf{K} \right |}{\partial \theta_\mathbf{C}} & = \frac{\partial}{\partial \theta_\mathbf{C}} [\ln\left | \mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I} \right |] = Tr[(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} \frac{\partial}{\partial \theta_\mathbf{C}}(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})] \\ & = Tr[(\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B \frac{\partial C}{\partial \theta_C} B^\top} \otimes \mathbf{R})] \\ & = Tr[(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B \frac{\partial C}{\partial \theta_C} B^\top} \otimes \mathbf{R})(\mathbf{B U_C \otimes U_R})] \\ & = Tr[\mathbf{\tilde 
K}^{-1}(\mathbf{U_C^\top B^\top B \frac{\partial C}{\partial \theta_C} B^\top B U_C \otimes U_R^\top R U_R})] = Tr[\mathbf{\tilde K}^{-1}(\mathbf{U_C^\top \frac{\partial C}{\partial \theta_C} U_C \otimes S_R})] \\ & = diag(\mathbf{\tilde K}^{-1})^\top [diag(\mathbf{U_C^\top \frac{\partial C}{\partial \theta_C} U_C)} \otimes diag(\mathbf{S_R})] \quad , \end{split} \end{eqnarray*} \normalsize \noindent and for the squared term we have: \small \begin{eqnarray*} \label{eq:squared_c} \begin{split} & \frac{\partial}{\partial \theta_\mathbf{C}} [vec(\mathbf{Y})^\top \mathbf{K}^{-1} vec(\mathbf{Y})] = \frac{\partial}{\partial \theta_\mathbf{C}} [vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y})] \\ & = - vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} [\frac{\partial}{\partial \theta_\mathbf{C}}(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})] (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = - vec(\mathbf{Y})^\top (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) (\mathbf{B\frac{\partial C }{\partial \theta_C} B^\top} \otimes \mathbf{R}) \\ & (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) vec(\mathbf{Y}) \\ & = - [vec(\mathbf{U_R^\top Y B U_C})^\top \odot diag(\tilde{\mathbf{K}}^{-1})] (\mathbf{U_C^\top B^\top B \frac{\partial C}{\partial \theta_C} B^\top B U_C \otimes U_R^\top R U_R}) \\ & [\underbrace{diag(\tilde{\mathbf{K}}^{-1}) \odot vec(\mathbf{U_R^\top Y B U_C})}_{vec(\mathbf{\tilde Y})}] = - vec(\mathbf{\tilde Y})^\top vec(\mathbf{S_R \tilde{Y} U_C^\top \frac{\partial C}{\partial \theta_C} U_C}) \quad . 
\end{split} \end{eqnarray*} \normalsize \subsubsection*{Gradients of $\mathcal{L}$ with Respect to $\theta_\mathbf{R}$:} \label{subsubsec:gradient_r} \scriptsize \begin{eqnarray} \label{eq:derivatives_r} \begin{aligned} \frac{\partial \mathcal{L}}{\partial \theta_\mathbf{R}} = & -\frac{1}{2}diag(\tilde{\mathbf{K}}^{-1})^\top[diag(\mathbf{S_C}) \otimes diag(\mathbf{U_R^\top}\frac{\partial \mathbf{R}}{\partial \theta_\mathbf{R}}\mathbf{U_R})] +\frac{1}{2} vec(\mathbf{\tilde Y})^\top vec(\mathbf{U_R^\top \frac{\partial \mathbf{R}}{\partial \theta_\mathbf{R}}U_R \tilde Y S_C}), \end{aligned} \end{eqnarray} \normalsize \noindent where the determinant term of the above equation is derived by computing the derivative of $\ln \left | \mathbf{K} \right |$: \small \begin{eqnarray*} \label{eq:determinant_r} \begin{split} \frac{\partial \ln \left | \mathbf{K} \right |}{\partial \theta_\mathbf{R}} & = \frac{\partial}{\partial \theta_\mathbf{R}} [\ln\left | \mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I} \right |] = Tr[(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} \frac{\partial}{\partial \theta_\mathbf{R}}(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})] \\ & = Tr[(\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B C B^\top} \otimes \mathbf{\frac{\partial R}{\partial \theta_R}})] \\ & = Tr[(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B C B^\top} \otimes \mathbf{\frac{\partial R}{\partial \theta_R}})(\mathbf{B U_C \otimes U_R})] \\ & = Tr[\mathbf{\tilde K}^{-1}(\mathbf{U_C^\top B^\top B C B^\top B U_C \otimes U_R^\top \frac{\partial R}{\partial \theta_R} U_R})] = Tr[\mathbf{\tilde K}^{-1}(\mathbf{S_C \otimes U_R^\top \frac{\partial R}{\partial \theta_R} U_R})] \\ & = diag(\mathbf{\tilde K}^{-1})^\top [diag(\mathbf{S_C}) \otimes diag(\mathbf{ U_R^\top \frac{\partial R}{\partial \theta_R} U_R})] \quad , \end{split}
\end{eqnarray*} \normalsize \noindent and for the squared term we have: \small \begin{eqnarray*} \label{eq:squared_r} \begin{split} & \frac{\partial}{\partial \theta_\mathbf{R}} [vec(\mathbf{Y})^\top \mathbf{K}^{-1} vec(\mathbf{Y})] = \frac{\partial}{\partial \theta_\mathbf{R}} [vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y})] \\ & = - vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} [\frac{\partial}{\partial \theta_\mathbf{R}}(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})] (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = - vec(\mathbf{Y})^\top (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) (\mathbf{BCB^\top \otimes \frac{\partial R}{\partial \theta_R}}) \\ & (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) vec(\mathbf{Y}) \\ & = - [vec(\mathbf{U_R^\top Y B U_C})^\top \odot diag(\tilde{\mathbf{K}}^{-1})] (\mathbf{U_C^\top B^\top BCB^\top B U_C \otimes U_R^\top \frac{\partial R}{\partial \theta_R} U_R}) \\ & [\underbrace{diag(\tilde{\mathbf{K}}^{-1}) \odot vec(\mathbf{U_R^\top Y B U_C})}_{vec(\mathbf{\tilde Y})}] = - vec(\mathbf{\tilde Y})^\top vec(\mathbf{U_R^\top \frac{\partial R}{\partial \theta_R} U_R \tilde{Y} S_C}) \quad . 
\end{split} \end{eqnarray*} \normalsize \subsubsection*{Gradients of $\mathcal{L}$ with Respect to $\theta_{\sigma^2}$:} \label{subsubsec:gradient_s} \small \begin{eqnarray} \label{eq:derivatives_s} \begin{aligned} \frac{\partial \mathcal{L}}{\partial \theta_{\sigma^2}} = & -\frac{1}{2} \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} [Tr[\tilde{\mathbf{K}}^{-1}] - vec(\mathbf{\tilde Y})^\top vec(\mathbf{\tilde Y})] \quad , \end{aligned} \end{eqnarray} \normalsize \noindent where the determinant term of the above equation is derived by computing the derivative of $\ln \left | \mathbf{K} \right |$: \small \begin{eqnarray*} \label{eq:determinant_s} \begin{split} \frac{\partial \ln \left | \mathbf{K} \right |}{\partial \theta_{\sigma^2}} & = \frac{\partial}{\partial \theta_{\sigma^2}} [\ln\left | \mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I} \right |] = Tr[(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}}] \\ & = \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}}Tr[(\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})] \\ & = \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} Tr[(\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top})(\mathbf{B U_C \otimes U_R})] \\ & = \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} Tr[\mathbf{\tilde K}^{-1}(\mathbf{U_C^\top B^\top B U_C \otimes U_R^\top U_R})] = \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} Tr[\mathbf{\tilde K}^{-1}] \quad , \end{split} \end{eqnarray*} \normalsize \noindent and for the squared term we have: \small \begin{eqnarray*} \label{eq:squared_s} \begin{split} & \frac{\partial}{\partial \theta_{\sigma^2}} [vec(\mathbf{Y})^\top \mathbf{K}^{-1} vec(\mathbf{Y})] = \frac{\partial}{\partial \theta_{\sigma^2}} [vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y})] \\ & = -
vec(\mathbf{Y})^\top (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} [\frac{\partial}{\partial \theta_{\sigma^2}}(\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})] (\mathbf{BCB}^\top \otimes \mathbf{R} + \sigma^2 \mathbf{I})^{-1} vec(\mathbf{Y}) \\ & = - vec(\mathbf{Y})^\top (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} \\ & (\mathbf{B U_C \otimes U_R}) (\mathbf{S_C \otimes S_R + \sigma^2 I})^{-1}(\mathbf{U_C^\top B^\top \otimes U_R^\top}) vec(\mathbf{Y}) \\ & = - \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} [vec(\mathbf{U_R^\top Y B U_C})^\top \odot diag(\tilde{\mathbf{K}}^{-1})] (\mathbf{U_C^\top B^\top B U_C \otimes U_R^\top U_R}) \\ & [\underbrace{diag(\tilde{\mathbf{K}}^{-1}) \odot vec(\mathbf{U_R^\top Y B U_C})}_{vec(\mathbf{\tilde Y})}] = - \frac{\partial \sigma^2}{\partial \theta_{\sigma^2}} vec(\mathbf{\tilde Y})^\top vec(\mathbf{\tilde Y}) \quad . \end{split} \end{eqnarray*} \normalsize \subsection*{Normative Modeling} \label{subsec:normative_modeling} Let $\hat{y}_{ij}$ and $\sigma^2_{ij}$ be the prediction mean and variance of the $i$th test sample at the $j$th voxel. Further, let $\sigma^2_{nj}$ be the variance of the noise that is estimated by GPR at the $j$th voxel. Then the normative probability map (NPM) for the $i$th sample at $j$th voxel is defined as follows: \small \begin{eqnarray*} \label{eq:NPM} NPM_{ij}=\frac{y_{ij}-\hat{y}_{ij}}{\sqrt{\sigma^2_{ij}+\sigma^2_{nj}}} \quad , \end{eqnarray*} \normalsize \noindent where $y_{ij}$ is the true output. Having computed NPMs for all samples and brain locations, the abnormality index of each sample can be computed by fitting a generalized extreme value distribution (GEVD). We fit GEVD on the distribution of robust means of top 5\% voxels (in absolute value) across all NPMs. 
The resulting distribution is used to compute the probability of each sample being abnormal.
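The efficient mean-prediction identity derived above is easy to check numerically. The sketch below (our own illustrative NumPy reconstruction, not the authors' released code; all sizes and variable names are ours) compares the naive computation of $vec(\mathbf{M}^*)$ against the form $\mathbf{R^* U_R \tilde{Y} U_C^\top C B^\top}$, assuming $\mathbf{B}$ has orthonormal columns ($\mathbf{B^\top B = I}$), $\mathbf{C}$ and $\mathbf{R}$ are symmetric positive definite, and $\mathbf{R}^* = \mathbf{R}$ (prediction at the training inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, L = 6, 5, 3                  # samples, tasks, latent dimensions (toy sizes)

def random_spd(n):
    """Random symmetric positive-definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

C, R = random_spd(L), random_spd(N)
B, _ = np.linalg.qr(rng.standard_normal((T, L)))   # orthonormal columns: B^T B = I
Y = rng.standard_normal((N, T))
sigma2 = 0.5
R_star = R                                         # predict at the training inputs

vec = lambda M: M.flatten(order="F")               # column-stacking vec(.)

# Naive prediction: one dense solve with the full NT-by-NT matrix K.
K = np.kron(B @ C @ B.T, R) + sigma2 * np.eye(N * T)
m_naive = np.kron(B @ C @ B.T, R_star) @ np.linalg.solve(K, vec(Y))

# Efficient prediction: two small eigendecompositions and elementwise scaling.
s_C, U_C = np.linalg.eigh(C)
s_R, U_R = np.linalg.eigh(R)
K_tilde = np.outer(s_R, s_C) + sigma2              # diag(S_C x S_R + sigma^2 I), N-by-L
Y_tilde = (U_R.T @ Y @ B @ U_C) / K_tilde
M_star = R_star @ U_R @ Y_tilde @ U_C.T @ C @ B.T  # R* U_R Ytilde U_C^T C B^T

assert np.allclose(m_naive, vec(M_star))
```

Only the two small eigendecompositions of $\mathbf{C}$ and $\mathbf{R}$ are needed, instead of a solve with the full $NT \times NT$ matrix $\mathbf{K}$.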
\section{Introduction} \subsection{Motivations} The scientific community agrees that, in the future, the intelligent activation of demand response (DR) will contribute to a reliable power system and to price stability on power markets. Actuating DR requires solving an optimization problem that maximizes an economic objective, which typically results in a welfare maximization problem (WMP), in which the unweighted sum of the economic costs of a group of agents is minimized. A very similar, and perhaps more studied, problem is the optimal power flow (OPF) problem. The OPF is usually solved in a centralized way by an independent system operator (ISO) in order to minimize the generation cost of a group of distributed power plants subject to the underlying grid constraints. As the number of generators increases, the problem can become computationally expensive. Furthermore, retrieving all the generator-specific parameters can become impractical for the ISO. For these reasons, several decentralized formulations of the OPF exist \cite{Molzahn2017}, which can speed up the computation by exploiting parallelization among the different units. Moreover, solving the problem in a decentralized way allows the generators to keep most of their information and parameters private, increasing privacy and lowering cyber-security concerns. The main difference between the OPF and the DR setting is that the latter involves the participation of self-serving agents, which cannot be trusted a priori by the ISO. This implies that if an agent finds it profitable (in terms of its own economic utility), it will solve an optimization problem different from the one provided by the ISO. For this reason, some aspects of DR formulations are better described through a game-theoretic framework.
\subsection{Background and previous work} In this setting, we must consider that agents can adopt a strategy $s_i(\theta_i)$, which can in general differ from the one suggested by the ISO, based on their private information (or type), denoted as $\theta_i$, and on their beliefs about the strategies of the other prosumers. The well-known Vickrey-Clarke-Groves (VCG) mechanism \cite{Makowski1987,Clarke1971,Vickrey1961} belongs to the class of strategy-proof mechanisms and presents other useful theoretical properties, among which being weakly budget-balanced. However, to achieve this it requires a value redistribution among agents in the form of monetary taxation, such that the tax applied to agent $i$ is directly or indirectly independent of its own actions. This implies that $N$ additional optimization problems must be solved, each of which is computed without considering a given agent. This makes the computational cost quadratic in $N$. Furthermore, VCG mechanisms are typically centralized and, as such, do not preserve the privacy of the agents. For example, in \cite{Poolla2017} a VCG mechanism for virtual inertia is considered, in which bidders send their bidding curves to a center, which solves $N$ independent optimization problems. Since the VCG mechanism guarantees that the best bidding strategy is to bid truthfully, the bidders send their true cost curves $c_i(x_i,\lambda_i)$ to the center. Note, however, that if an agent's system has constraints, $c_i(x_i,\lambda_i)$ must represent them. This means that the center must know all the agents' constraint sets $\mathcal{X}_i$ in order to solve the VCG problem. This unfavorable computational cost makes the VCG mechanism impractical for combinatorial auctions \cite{Conitzer2006} and for problems with a large number of users and nontrivial objective functions.
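To make the cost of the VCG computation concrete, the toy sketch below (entirely our own example; the quadratic cost curves and their closed-form welfare solution are assumptions, not taken from the cited works) computes Clarke-pivot payments for agents sharing a fixed demand. One welfare problem is solved per excluded agent, which is the source of the unfavorable scaling in $N$:

```python
# Welfare problem: split a fixed demand d among N agents with private
# quadratic cost curves c_i(x) = a_i x^2.  The KKT conditions give the
# closed form x_i* = (1/a_i) / sum_j (1/a_j) * d.

def min_total_cost(a, d):
    """Minimum of sum_i a_i x_i^2 subject to sum_i x_i = d."""
    return d * d / sum(1.0 / ai for ai in a)

def allocation(a, d):
    s = sum(1.0 / ai for ai in a)
    return [(1.0 / ai) / s * d for ai in a]

def vcg_payments(a, d):
    """Clarke-pivot payment to each agent: what the others would pay without
    it, minus what they pay with it.  Requires one extra welfare problem per
    agent, i.e. N + 1 solves in total."""
    x = allocation(a, d)
    total = min_total_cost(a, d)
    pay = []
    for i, ai in enumerate(a):
        without_i = min_total_cost(a[:i] + a[i + 1:], d)
        others_with_i = total - ai * x[i] ** 2
        pay.append(without_i - others_with_i)
    return pay

a, d = [1.0, 2.0, 4.0], 7.0
x, pay = allocation(a, d), vcg_payments(a, d)
assert abs(sum(x) - d) < 1e-9
# Each agent's payment covers at least its own incurred cost.
assert all(p >= ai * xi ** 2 - 1e-9 for p, ai, xi in zip(pay, a, x))
```

In this convex toy setting the payments also satisfy individual rationality, since removing an agent can never lower the minimum cost of the others.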
Despite this and other aspects that make it impractical in some cases \cite{Rothkopf2007}, VCG mechanisms have been extensively studied, since they are the only general-purpose incentive-compatible mechanisms that maximize social welfare \cite{Tardos2007}. In order to preserve the agents' privacy, it is possible to derive a distributed formulation of VCG using primal-dual decomposition algorithms. Note that distributing the mechanism aggravates the scalability problem of VCG, since the overall computation must now also account for communication delays. A second effect of adopting a decentralized formulation is that we can no longer guarantee strategyproofness. This is known as the cost of decentralization \cite{Petcu2008}, which leads to a weaker notion of incentive compatibility, namely the ex-post Nash equilibrium (EPNE). Although weaker than a dominant-strategy equilibrium, an ex-post Nash equilibrium does not require agents to model the strategies or types of other agents through belief functions, as is done with the Bayes-Nash equilibrium \cite{Narahari2014}. Following this concept, guidelines for distributed implementations of VCG mechanisms are derived in \cite{Parkes2004}. In \cite{Petcu2008} a distributed VCG mechanism is presented that reuses part of the computation done in each subproblem. More recently, \cite{Tanaka2018} proposed a distributed VCG implementation based on dual decomposition and applied the concept of a multistage mechanism, in which a different mechanism is applied at each primal-dual update. Also in this case, the proposed algorithm scales quadratically with the number of agents. Another line of research, which started with the seminal work of Rosen on n-person non-cooperative games \cite{Rosen1965}, adopts non-VCG mechanisms to reach an EPNE \cite{Kim2013a,Gharesifard2016}. This involves accepting a loss in terms of efficiency \cite{Yang2010}, with the benefit of better scalability with respect to the number of agents.
In this paper we propose a method to guarantee the participation constraint, also known as individual rationality (IR): all the prosumers must have a positive return from participating in the proposed energy market with respect to the base case. We ensure IR by allowing a coordinator to limit the Lagrangian multipliers associated with the coupling constraints. The rest of the paper is structured as follows: in Section~\ref{s:prob} the specific problem we address is formulated, and we show that its associated game mapping is monotone, which is a condition for the uniqueness of the variational GNE; in Section~\ref{s:algo} we propose a new algorithm for reaching the GNE, based on the alternating direction method of multipliers (ADMM); in Section~\ref{s:ana} we compare the convergence of this algorithm with that of a recently proposed \cite{Belgioioso2018} preconditioned forward-backward (pFB) algorithm for distributed Nash equilibrium seeking. \section{Problem formulation}\label{s:prob} In this work we are interested in a more general problem than the OPF. In particular, we consider the case in which a group of agents that produce and/or consume energy (prosumers from now on) can sell their aggregated flexibility to third parties, for example to a DSO through demand response programs, or to balance responsible parties. The mathematical formulation of this problem is known as the sharing problem: \begin{equation}\label{eq:sharing} \begin{aligned} \argmin{x \in \mathcal{X}} & e(x) + \sum_{i=1}^{N} c_i(x_i)\\ s.t. & \quad A x \leq b \end{aligned} \end{equation} where $\mathcal{X} = \prod_{i=1}^N \mathcal{X}_i$ is the Cartesian product of the prosumers' feasible sets, $e(x)$ is a system-level objective, the $c_i(x_i)$ are the costs of the individual prosumers, the linear constraints $A x \leq b$ are affine coupling constraints among the prosumers, and $x = [x_1^T,\dots,x_N^T] = [x_i]_{i=1}^N$ is the vector of the concatenated actions of all the prosumers.
Here the affine coupling constraints encode grid constraints, limiting voltage and power in a subset of selected nodes of the grid in which the agents are located. This is made possible by the linearized formulation of the power flow equations \cite{Molzahn2017,Almasalma2017}, whose coefficients can be estimated using phasor measurement units \cite{Mugnier2016} or even smart meter data \cite{Weckx2015}. The advantage of considering coupling constraints instead of agent-level constraints on voltage and power is that the former approach can reach better solutions in terms of total welfare. As anticipated in the introduction, we are interested in decomposing problem \eqref{eq:sharing} among the self-interested prosumers in such a way that the induced game presents only one variational GNE, and in the algorithms leading to such an equilibrium. Since the equilibrium is unique, rational agents will converge to this ex-post GNE. This is equivalent to assuming that the agents believe their own influence on the prices broadcast by the sequence of mechanisms proposed by the algorithm to be negligible, i.e., that they are price takers. A reasonable way to turn the centralized problem \eqref{eq:sharing} into a non-cooperative game is to reward each prosumer with a share of the system-level objective $e(x)$, based on the amount of energy it produces or consumes during a given period of time: \begin{equation}\label{eq:utilities} v(x_i,x_{-i}) = c_i(x_i) + \frac{\vert x_i\vert}{\sum_{i=1}^N \vert x_i \vert} e(x) \end{equation} However, this repartition rule would result in a non-linear and non-convex game.
As a first approximation, we can replace this repartition rule with coefficients that are fixed during each horizon, based on a moving average: \begin{equation} v(x_i,x_{-i}) = c_i(x_i) + \alpha_i e(x) \end{equation} where \begin{equation} \alpha_i = \frac{\sum_{k=t-\tau}^t\vert x_{i,k}\vert}{\sum_{k=t-\tau}^t\sum_{i=1}^N \vert x_{i,k} \vert} \end{equation} Note that the game $\mathcal{G}(s_i(x),v_i(x))$ induced by the value functions in \eqref{eq:utilities} is an aggregative game \cite{Jensen2010}, in which each prosumer influences the other prosumers' values only through the aggregated actions. The induced game can be described as the set of optimization problems \eqref{GNE}, in which each prosumer minimizes its own value function $v(x_i, x_{-i})$, together with the associated KKT conditions \eqref{GNEKKT}. \begin{equation}\label{GNE} \begin{cases} \minimize{x_i \in \mathcal{X}_i} v(x_i, x_{-i}) \\ s.t. \quad Ax\leq b \end{cases} \forall i \in N \end{equation} \begin{equation}\label{GNEKKT} KKT(i) = \begin{cases} 0\in \partial_{x_i} v_i(x_i,\mathrm{x}_{-i}) + \mathrm{N}_{\mathcal{X}_i} + A_i^T\lambda_i \\ 0 \leq \lambda_i \perp -(Ax-b) \geq 0 \end{cases} \end{equation} where $A^T = \left[A_i^T\right]_{i=1}^N $ and $\mathrm{N}_{\mathcal{X}_i}$ is the normal cone operator. Before introducing the algorithms that can be used to solve \eqref{GNE}, we discuss some properties of the proposed objective function. It is known that a sufficient condition for the existence and uniqueness of a NE of an $n$-person non-cooperative game is that the system-level objective function $\sigma(x) = \sum_{i=1}^N v_i(x_i)$ is diagonally strictly convex \cite{Rosen1965}.
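The moving-average repartition coefficients $\alpha_i$ above are straightforward to compute from metered history; a minimal sketch (with made-up power profiles of our own choosing) is:

```python
def repartition_coefficients(history, tau):
    """alpha_i = sum_{k=t-tau}^{t} |x_{i,k}| / sum_{k} sum_{i} |x_{i,k}|.

    `history[i]` holds the past powers of prosumer i, most recent last; only
    the last tau + 1 samples enter the moving average."""
    windows = [h[-(tau + 1):] for h in history]
    per_agent = [sum(abs(p) for p in w) for w in windows]
    total = sum(per_agent)
    return [s / total for s in per_agent]

# Two prosumers: a net consumer and a net producer.  Only the absolute
# energy moved matters, not its sign.
history = [[1.0, 2.0, 1.0], [-1.0, -2.0, -1.0]]
alphas = repartition_coefficients(history, tau=2)
assert abs(sum(alphas) - 1.0) < 1e-12 and all(a >= 0.0 for a in alphas)
assert alphas == [0.5, 0.5]       # symmetric profiles give equal shares
```

By construction the coefficients are non-negative and sum to one, which is exactly what the monotonicity argument below requires.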
In the case of affine coupling constraints, the authors of \cite{Belgioioso2017} and \cite{Paccagnan2016} have shown that the game has a unique variational GNE if the pseudogradient of $\sigma(x)$, $\mathcal{F}:\rm I\!R^{NT} \rightrightarrows \rm I\!R^{NT}$, $\mathcal{F}(x) = \left[ \partial_{x_i} v_i(x_i,x_{-i}) \right]_{i=1}^N$, also known as the game mapping, is strictly monotone. Furthermore, the equilibrium can be reached by making each agent pay $\lambda^T A_i x_i$, that is, by letting the value function of each agent coincide with the integral of the first row of the KKT conditions in \eqref{GNEKKT}, $\tilde{v}_i = v_i(x_i,x_{-i}) + \lambda^T A_i x_i$. In this case, it has been shown that the agents reach a variational GNE with a unique Lagrangian multiplier $\lambda$. Note that the game mapping differs from the gradient of $\sigma(x)$, since its components are the partial derivatives of the value of the $i$th agent with respect to its own actions only. We now show that the game map generated by the agents' values defined in \eqref{eq:utilities} inherits monotonicity from the convexity of $e(x)$. \begin{theorem} Let $e(x): \rm I\!R^{NT} \rightarrow \rm I\!R$ be a (strictly/strongly) convex function and let the costs of the agents $c_i(x_i): \rm I\!R^{T} \rightarrow \bar{\rm I\!R}$ be convex functions. Then any repartition $\left[\alpha_i\right]_{i=1}^N$ of $e(x)$ among the agents such that: \begin{flalign} \nonumber 1) \quad &v_i(x_i,x_{-i}) = \alpha_i e(x) + c_i(x_i) &&\\ \nonumber 2) \quad & \alpha_i \geq 0 \qquad \forall i \in \{N \} \end{flalign} generates a (strictly/strongly) monotone game map $\mathcal{F}: \rm I\!R^{NT} \rightrightarrows \rm I\!R^{NT}$. \end{theorem} \begin{proof} $\mathcal{F} = \left[ \partial_{x_i} v_i(x_i,x_{-i})\right]_{i=1}^N$ can be seen as the sum of two operators: $\mathcal{E} =\left[ \partial_{x_i} \alpha_i e(x_i,x_{-i})\right]_{i=1}^N $ and $\mathcal{C} = \left[ \partial_{x_i} c_i(x_i)\right]_{i=1}^N$.
Due to the separability of $\mathcal{C}$, it coincides with the gradient of $\sigma(x) = \sum_{i=1}^N c_i(x_i)$. Due to the convexity of $\sigma(x)$, $\mathcal{C}$ is a monotone map, since the gradient of a convex function is monotone (Theorem 1 in \cite{Minty1963}). By the same reasoning, $\nabla_x e(x)$ is a monotone map due to the convexity of $e(x)$. From the definition of monotonicity, $\left<x-y \vert \nabla_x e(x)-\nabla_y e(y)\right> \geq 0 \quad \forall \ (x,y)$. Additionally, since any convex function must be convex along any path, we can state this component-wise: $(x_i-y_i)(\partial_{x_i}e(x) -\partial_{y_i}e(y))\geq 0 \quad \forall \ (i \in \{N\},x,y)$. Since we defined all the $\alpha_i$ as non-negative, $(x_i-y_i)\alpha_i(\partial_{x_i}e(x) -\partial_{y_i}e(y))\geq 0 \quad \forall \ (i \in \{N\},x,y)$. Thus $\left<x-y \vert \mathcal{E}(x)-\mathcal{E}(y)\right> \geq 0 \quad \forall \ (x,y)$, and $\mathcal{F}$ is monotone, being the sum of two monotone operators. \end{proof} \section{Algorithms for GNE seeking}\label{s:algo} As demonstrated in \cite{Paccagnan2016}, asymmetric projection algorithms \cite{Facchinei2015} can be used to reach a GNE of an aggregative game with quadratic utilities. Recently, the same algorithm has been rigorously derived by modeling the GNE problem as a monotone inclusion \cite{Belgioioso2018}, showing that it coincides with a preconditioned forward-backward (pFB) method (Algorithm~\ref{alg:1}), which is a special case of the Banach-Picard iteration \cite{Bauschke2011} of two operators whose sum is the set-valued mapping associated with the KKT conditions in \eqref{GNEKKT}.
\begin{algorithm} \caption{pFB}\label{alg:1} \begin{algorithmic} \State $x^{k+1} = \boldsymbol{\Pi}_{\mathcal{X}} \left[x^{k} -\alpha (\mathcal{F}(x^{k})+A^T\lambda^k) \right]$ \State $\lambda^{k+1} = \boldsymbol{\Pi}_{\rm I\!R^+} \left[\lambda^{k} +\beta (2Ax^{k+1}-Ax^{k}-b) \right]$ \end{algorithmic} \end{algorithm} We compare Algorithm~\ref{alg:1} with a trivial modification of the ADMM algorithm \cite{Boyd2010}, whose convergence rate and properties have been extensively studied in the literature. For clarity of exposition, we start by considering the version of problem \eqref{eq:sharing} without coupling constraints. This can be solved in a centralized way through ADMM, applying the procedure in \cite{Boyd2010} \S 7.3, which results in the following parallelized formulation: \begin{algorithm} \caption{ADMM}\label{alg:2} \begin{algorithmic} \State \begin{align*} x_i^{k+1} &= \argmin{x_i \in \mathcal{X}_i} c_i(x_i) +\frac{\alpha_i}{2\rho} \Vert (S x^k -y^k)/N \\ &-x_i^k +x_i +\lambda^k \Vert_2^2\\ & +\frac{1}{2\rho} \Vert (A x^k -y^k)/N -A_ix_i^k +x_i +\lambda_a^k \Vert_2^2 \label{eq:agent_min}\numberthis \\ y^{k+1} &= \argmin{y} e(y) + \frac{1}{2\rho} \Vert y -S x^{k+1} -\lambda^k \Vert \numberthis \label{eq:center_min} \\ \lambda^{k+1} &= \lambda^{k} +S x^{k+1} -y^{k+1}\label{eq:lambda_center} \numberthis \\ y_a^{k+1} &= \argmin{y} \mathcal{I}_{\mathcal{X}_a} + \frac{1}{2\rho} \Vert y_a -Ax^{k+1} -\lambda^k \Vert \numberthis \label{eq:y_a_min} \\ \lambda_a^{k+1} &= \lambda_a^{k} +A x^{k+1} -y_a^{k+1}\label{eq:lambda_a} \numberthis \end{align*} \end{algorithmic} \end{algorithm} where the only difference from the centralized algorithm is the $\alpha_i$ coefficient in the $x_i$ update.
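To illustrate the pFB iteration, the NumPy sketch below runs it on a toy two-prosumer aggregative game of our own construction (quadratic local costs, a convex shared term, box sets $\mathcal{X}_i$ so that the projection $\boldsymbol{\Pi}_{\mathcal{X}}$ reduces to a clip, and a single coupling constraint $x_1 + x_2 \leq b$); none of these numbers come from the paper:

```python
import numpy as np

# Toy aggregative game: v_i(x) = c_i x_i^2 - p_i x_i + alpha_i * 0.5 (x_1 + x_2)^2,
# local sets X_i = [0, 1], one shared affine constraint x_1 + x_2 <= b.
c = np.array([1.0, 2.0])
p = np.array([3.0, 3.0])
alpha = np.array([0.6, 0.4])
A = np.array([[1.0, 1.0]])
b = np.array([0.5])

def F(x):
    """Game mapping: [d v_i / d x_i]_i."""
    return 2.0 * c * x - p + alpha * x.sum()

x, lam = np.zeros(2), np.zeros(1)
step_x, step_l = 0.1, 0.1              # primal and dual step sizes
for _ in range(3000):
    x_new = np.clip(x - step_x * (F(x) + A.T @ lam), 0.0, 1.0)
    lam = np.maximum(0.0, lam + step_l * (2.0 * A @ x_new - A @ x - b))
    x = x_new

# The coupling constraint is active at the variational GNE, priced by lam.
assert abs((A @ x)[0] - b[0]) < 1e-3
assert lam[0] > 0.0
```

At the fixed point the shared constraint is active and the single multiplier $\lambda$ acts as the common price charged to each agent through $A_i^T \lambda$.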
At convergence we can write the KKT conditions: \begin{subnumcases}{} \partial_{x_i} c_i(x_i^*) + \alpha_i \frac{\lambda^*}{\rho} +A_i^T\frac{\lambda_a^*}{\rho} +\mathrm{N}_{\mathcal{X}_i} = 0 \qquad \forall i \in N &\label{KKT1} \\ \partial_{y} e(y^*) -\frac{\lambda^*}{\rho} = 0& \label{KKT2} \\ y^* = Sx^*& \label{KKT3}\\ 0 \leq \lambda_a^* \perp -(Ax^*-b) \geq 0 \label{KKT4} \end{subnumcases} We can find $\lambda^*$ from \eqref{KKT2} and substitute it into \eqref{KKT1}: \begin{equation} \partial_{x_i} c_i(x_i^*) + \alpha_i \partial_{y} e(y^*) +A_i^T\frac{\lambda_a^*}{\rho} + \mathrm{N}_{\mathcal{X}_i}= 0 \end{equation} Then, using \eqref{KKT3} and recalling that $S$ is the summation matrix, we obtain: \begin{equation} \partial_{x_i} c_i(x_i^*) + \alpha_i \partial_{x_i} e(x^*) +A_i^T\frac{\lambda_a^*}{\rho} + \mathrm{N}_{\mathcal{X}_i} = 0 \end{equation} which, together with \eqref{KKT4}, is equivalent to the KKT conditions \eqref{GNEKKT} of the game \eqref{GNE} when $v_i = c_i(x_i) + \alpha_i e(x)$. \subsection{Pricing and individual rationality} In this paper we only consider the case in which the function $e(x)$ is the surplus that the agent community obtains when paying for energy at the point of common coupling with the electrical grid: \begin{equation}\label{eq:surplus} e(x) = c\left(\sum_{i=1}^{N} x_i\right)-\sum_{i=1}^{N} c(x_i) \end{equation} where $x_i \in \rm I\!R^{T}$ is the vector of total powers of the $i$th agent and $c(\cdot)$ is the energy cost function, defined as: \begin{equation}\label{eq:cost_fun} c(z_t) = \begin{cases} p_{b,t} z_t , & \text{if } \quad z_t \geq 0 \\ p_{s,t} z_t , & \text{otherwise} \end{cases} \end{equation} where $p_{b,t}$ and $p_{s,t}$ are the buying and selling tariffs, respectively, at time $t$. In order to induce the agents to follow the proposed mechanism, we must ensure that the energy tariff they pay when participating in the market is always lower than the one they pay in the base case.
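The cost function \eqref{eq:cost_fun} and the surplus \eqref{eq:surplus} can be evaluated directly; in the sketch below (tariffs and power profiles are made up) the final check reflects that, with $p_{b,t} \geq p_{s,t}$, netting production against consumption inside the community can only lower the total bill, so the surplus term acts as a reward:

```python
# Energy cost c(z_t) with buy/sell tariffs, and community surplus
# e(x) = c(sum_i x_i) - sum_i c(x_i).  All numbers are illustrative.

def cost(z, p_buy, p_sell):
    """Total cost of the power profile z over the horizon."""
    return sum(pb * zt if zt >= 0 else ps * zt
               for zt, pb, ps in zip(z, p_buy, p_sell))

def surplus(profiles, p_buy, p_sell):
    T = len(p_buy)
    aggregate = [sum(x[t] for x in profiles) for t in range(T)]
    return cost(aggregate, p_buy, p_sell) - sum(
        cost(x, p_buy, p_sell) for x in profiles)

p_buy, p_sell = [0.25, 0.25], [0.10, 0.10]   # buying tariff >= selling tariff
profiles = [[1.0, -0.5], [-1.0, 2.0]]        # a consumer/producer mix
e = surplus(profiles, p_buy, p_sell)
# Aggregation nets opposite-sign powers before billing, so the community
# never pays more than the sum of individual bills.
assert e <= 0.0
```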
This is always true when we are not taking into account grid constraints, since $e(x)$ as defined in \eqref{eq:surplus} is always non-negative when $p_{b,t} \geq p_{s,t}$, as is usual in energy tariffs. However, if the agents are located in a grid with large voltage oscillations, the Lagrangian dual variables (which we can interpret as punishment prices) could be such that the cost paid by the agents is higher than $\alpha_i e(x)$. To ensure individual rationality (IR), we encode it directly in the optimization scheme. At each iteration, for each time step in the horizon, we increment the multipliers only if the following condition holds: \begin{equation} \label{the_condition} \alpha_i e(x^k_t) + A_i^T\lambda^k_t \leq 0 \qquad \forall i \in N, \quad \forall t \in T \end{equation} where a negative value means that the prosumer is gaining a reward. This can, of course, make it impossible to satisfy the coupling constraints. We can give the following straightforward economic interpretation to this mechanism: each agent would opt out of the game as soon as the energy tariffs become unfavorable with respect to the existing one. Condition \eqref{the_condition} prevents this from happening. In the presence of poor power quality, the DSO could provide favorable energy tariffs to prosumers participating in the mechanism, ensuring that condition \eqref{the_condition} is met with high probability. \subsection{Prosumers problem formulation}\label{problem_form} In this paper, each prosumer's flexibility is modeled using an electric battery. Although simple, the model we use is not simplistic, and we briefly describe it in this subsection. Since the effect of charging or discharging on the state of charge is not symmetric, due to the efficiencies, the problem is usually formulated as a mixed-integer linear program (MILP), introducing binary decision variables and bilinear constraints to prevent the simultaneous charging and discharging of the battery.
Furthermore, the objective function of the agents is non-differentiable at $P_m = 0$ and is mathematically described by the maximum operator. In order to speed up the computations, we reformulated all the control problems as quadratic programs. We start by noting that both the ADMM and the pFB formulations can be described by the following optimization problem: \begin{equation}\label{eq:prox_prob} \argmin{x_i \in \mathcal{X}_i} \ f(x_i,x_{-i}) +\frac{1}{2\rho}\Vert D x_i -r^k \Vert_2^2 \end{equation} where $r^k$ is a reference signal and $D = I_{T} \otimes [1,-1] \in \rm I\!R^{T \times 2T}$ performs the sum of the charging and discharging operations with appropriate signs. Here, with abuse of notation, we redefine the vector $x_i\in \rm I\!R^{2T}$ as the vector stacking the charging and discharging powers, such that $x_i = [P_{in,t};P_{out,t}]_{t=1}^T$, where $P_{in,t}$ and $P_{out,t}$ are the charging and discharging powers of the battery. For the ADMM formulation, it is easy to see that problem \eqref{eq:prox_prob} can be used to solve \eqref{eq:agent_min}.
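The structure of $D$ is just a Kronecker product, which a one-line NumPy check makes concrete. The horizon $T=3$, the interleaved $[P_{in,t}, P_{out,t}]$ ordering implied by $I_T \otimes [1,-1]$, and the sample values are illustrative.

```python
import numpy as np

T = 3  # illustrative horizon
# D = I_T kron [1, -1] sums charging/discharging powers with signs
D = np.kron(np.eye(T), np.array([[1.0, -1.0]]))
print(D.shape)        # (3, 6), i.e. T x 2T

# x_i stacks (P_in, P_out) per time step
x = np.array([2.0, 1.0, 3.0, 0.0, 0.0, 4.0])
print(D @ x)          # net battery power per step: [ 1.  3. -4.]
```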
We can still use \eqref{eq:prox_prob} for solving the pFB formulation, recalling that a projected-gradient step is equivalent to a quadratic optimization problem of the form: \begin{equation}\label{eq:asy_prob} \argmin{x_i \in \mathcal{X}_i} \ \left(\mathcal{F}_i(x_i^{k})+A_i^T\lambda^k\right)^T x_i +\frac{1}{2\rho}\Vert x_i -x_i^k \Vert_2^2 \end{equation} The battery is modeled as a discrete-time linear system, with the state of charge denoted by $s$: \begin{equation} s_{i,t+1} = A_{s,i} s_{i,t} + B_{s,i} x_{i,t} \end{equation} We can eliminate the state from the optimization problem using the standard batch formulation: \begin{equation} s_i = \Lambda s_{i,0} + \Gamma x_i \end{equation} where $x_i \in \rm I\!R^{2T \times 1}$ is the control vector for the whole time horizon $T$ and $\Lambda \in \rm I\!R^{T \times 1}, \Gamma \in \rm I\!R^{T \times 2T}$ are the batch matrices. We can now describe the set $\mathcal{X}_i$ through the linear constraints $A_{c_i} x_i \leq b_{c_i}$, defined as: \begin{equation} A_{c_i} = \begin{bmatrix} -I \\ I \\ -\Gamma \\ \Gamma \end{bmatrix} \quad b_{c_i} =\begin{bmatrix} -x_{min} \\ x_{max} \\ -e_{min} + \Lambda e_0 \\ e_{max} -\Lambda e_0 \end{bmatrix} \end{equation} where $x_{min}, x_{max} \in \rm I\!R^{2T}$ and $e_{min}, e_{max} \in \rm I\!R^{T}$ are the power and energy box constraints, while $I$ is the identity matrix of appropriate dimensions. Now we can reformulate the non-differentiable cost function \eqref{eq:cost_fun} as a linear function, so that we can reuse it in both the ADMM and pFB formulations. We start by noting that, since $p_{b,t} \geq p_{s,t}$, the piecewise definition of the cost function in \eqref{eq:cost_fun} can be equivalently formulated using the max operator. In turn, the max operator can be replaced by the sum of an auxiliary variable $y$ and appropriate inequality constraints. We augment our decision variable such that $\tilde{x} = [x^T,y^T]^T$.
Now the minimization of \eqref{eq:cost_fun} is equivalent to the following optimization problem: \begin{equation}\label{eq:selfish_problem_2} \begin{aligned} \min_{\tilde{x}} & \ l^T \tilde{x} \\ s.t.:\ & \tilde{A} \tilde{x} \leq \tilde{b}\\ \end{aligned} \end{equation} where $\tilde{A} = [A_{c,i};A_y]$ and $\tilde{b} = [b_{c,i};b_y]$, and \begin{equation}\label{eq:aug_matrices} A_y = \begin{bmatrix} D \circ P_b & - I_T \\ D \circ P_s & -I_T\end{bmatrix} \quad b_y =\begin{bmatrix} -p_b P_m \\ -p_s P_m \end{bmatrix} \end{equation} where the entries of the $t$th row of $P_b, P_s \in \rm I\!R^{T \times 2T}$ are equal to the buying and selling prices at time $t$, respectively. The effect of the matrices in \eqref{eq:aug_matrices} is that the new auxiliary variable $y$ is an upper bound on the cost function \eqref{eq:cost_fun}. Since we require the cost to be minimized, $y$ coincides with the cost function $c(\cdot)$ at optimality. We can now use $A_y$ and $b_y$ in both the ADMM and pFB formulations. While in the former $l^T \tilde{x}_i$ replaces $f(x_i,x_{-i})$ in \eqref{eq:prox_prob}, in the latter the prosumers' energy costs are considered as part of the pseudogradient: $F_i = l + \partial_{x_i} e(x)$. This formulation spares us from introducing binary variables for the charging and discharging powers, since optimal solutions of \eqref{eq:selfish_problem_2} never charge and discharge the battery simultaneously. This is not true for the ADMM formulation, in which a quadratic penalty on the signed sum of $P_{in,t}$ and $P_{out,t}$ with respect to a reference signal $r^k$ is present. When the reference signal $r^k$ is negative, the battery is incentivized not only to charge itself, but to consume as much energy as possible. Due to the round-trip efficiency, this results in a simultaneous charge and discharge.
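The identity underlying this reformulation — that, for $p_b \geq p_s$, the piecewise cost equals the maximum of the two linear pieces, which the epigraph variable $y$ then tracks at optimality — is easy to check numerically. The tariff values below are illustrative.

```python
import numpy as np

p_b, p_s = 0.30, 0.10            # illustrative tariffs with p_b >= p_s
z = np.linspace(-2.0, 2.0, 101)  # net power at the metering point

# Piecewise cost c(z): buy at p_b when z >= 0, sell at p_s otherwise
cost = np.where(z >= 0.0, p_b * z, p_s * z)

# Smallest y satisfying the epigraph constraints y >= p_b z, y >= p_s z
y = np.maximum(p_b * z, p_s * z)

print(np.allclose(y, cost))      # True: y coincides with c(z)
```

This argument relies on the objective in $y$ being a pure minimization of the bound; the quadratic reference-tracking penalty in the ADMM subproblem perturbs it, which is why the simultaneous charge and discharge noted above can appear there.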
To avoid this behavior, $f(\cdot)$ can be augmented with a linear term that penalizes the battery discharging operation when $r$ is negative: $\tilde{f}(\tilde{x}_i) = f(\tilde{x}_i) + l_p^T \tilde{x}_i $, where $l_p \in \rm I\!R^{1 \times 3T}$ has nonzero entries, all equal to a penalty weight, only where $r<0$. \section{Numerical analysis}\label{s:ana} We test both the ADMM and the pFB algorithms and compare their performance to a centralized solution. The only difference between the ADMM and the centralized formulation is the $\alpha_i$ coefficients in equation \eqref{eq:agent_min}, which are not present in the centralized solution. Since the system-level objective function $e(x)$, as defined in \eqref{eq:surplus}, is not differentiable at 0 and is neither strictly nor strongly convex, the convergence of pFB is not guaranteed. To obtain a fair comparison, we replaced the system-level cost function \eqref{eq:cost_fun} with a continuously differentiable function, defined by means of its derivative: \begin{equation} \nabla_{z_t} \tilde{c}(z) = (p_{b,t}-p_{s,t})\frac{ \tanh(k z_{t})+1}{2} + p_{s,t} \end{equation} where $k$ regulates the steepness of the function at $z_t=0$. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{time_series} \caption{Time series example, N = 10. Blue: forecasted profiles. Red: constraints. Grays: solutions of the centralized and decentralized approaches. Top: state of charge for each battery. Middle: power profiles. Bottom: voltage profiles.} \label{fig:timeseries} \end{figure} In our simulations $k=10$, which provides a reasonable steepness for all possible values of the power aggregate, since all computations are in per-unit and the aggregate power constraint is $Sx \in \left[-1.1, 1.1\right]$. We stress that this approximation is only used for the system-level objective, and not for the prosumers' objective functions, where the cost \eqref{eq:cost_fun} is modeled as described in subsection \ref{problem_form}.
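The smoothed marginal price interpolates between the selling and buying tariffs, and its limiting behavior is easy to verify. The tariff values and steepness below are illustrative, not the simulation parameters.

```python
import numpy as np

p_b, p_s, k = 0.30, 0.10, 10.0   # illustrative tariffs and steepness

def grad_c_smooth(z):
    """Smoothed marginal price: tends to p_s for z << 0, to p_b for
    z >> 0, and equals the average tariff at z = 0."""
    return (p_b - p_s) * (np.tanh(k * z) + 1.0) / 2.0 + p_s

print(grad_c_smooth(-5.0))   # ~ p_s = 0.1
print(grad_c_smooth(0.0))    # (p_b + p_s) / 2 = 0.2
print(grad_c_smooth(5.0))    # ~ p_b = 0.3
```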
In order to compare the algorithms fairly, we used an equal step size $\rho$, fixed to 0.1. The power profiles of each prosumer are randomly chosen from a yearly dataset of real residential electrical consumption. Each prosumer is equipped with a PV field, with a nominal power uniformly distributed between 2 and 10 times its daily energy consumption. Furthermore, each prosumer is provided with an electric battery with size equal to the expected daily energy exceeding its consumption. Figure \ref{fig:timeseries} shows the optimized time series from a single case. In the upper panel, the batteries' states of charge (SOC) are shown. Since the SOC is the time integral of the optimization variables ($P_{in}, P_{out}$), it is clear that the ADMM and the pFB converged to exactly the same solution. The middle panel shows the forecasted aggregated power profile and the optimized one. Note that both the ADMM and pFB solutions are not far from the centralized solution, while differences are more evident in terms of the single prosumers' SOC. The last panel shows voltage profiles at the point of common coupling. Figure \ref{fig:convergence} shows the convergence of the two algorithms in terms of the game objective function $\sigma(x)$. We ran a total of 50 simulations, each of which includes 10 prosumers with power profiles and battery sizes randomly chosen, as explained above. For each simulation $s$, we retrieve the best optimal value of $\sigma(x)$, $p_{best}^s$, defined as: \begin{equation} p_{best}^s = \min \ \{p^{*s}_{ADMM},p^{*s}_{pFB}\} \end{equation} where $p^{*s}_{ADMM},p^{*s}_{pFB}$ are the solutions of the two algorithms after 200 iterations (after which the relative change in $\sigma(x)$ for all the simulations was smaller than $10^{-5}$). The thick lines show the median, while the shaded patches contain half of the simulations. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{shadows_2} \caption{Normalized optimal value $p^*$.
The thick lines denote the median, while the shaded areas mark the $25\%$ and $75\%$ quantiles. } \label{fig:convergence} \end{figure} \section{Conclusions} We have proposed a method to enforce IR while reaching an EPGNE in a distributed way. The method and the related algorithm have been tested and compared with pFB, a state-of-the-art algorithm for GNE seeking. The simulations show that the proposed algorithm reaches the same solutions as pFB, while exhibiting faster convergence in most cases. In future research, we will further investigate the advantages of the proposed methodology. \bibliographystyle{IEEEtran}
\section{Introduction \label{introduction}} We report a measurement of the effective leptonic weak mixing angle (\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace) using the forward-backward asymmetry (\AFB) in Drell--Yan $\qqbar\to\ell^+\ell^-$ events, where $\ell$ stands for muon ($\mu$) or electron (\Pe). The analysis is based on data from the CMS experiment at the CERN LHC. At leading order (LO), lepton pairs are produced through the annihilation of a quark with its antiquark into a \PZ boson or a virtual photon: $\qqbar\to \cPZ/\gamma\to \ell^+\ell^-$. For a given dilepton invariant mass $m_{\ell\ell}$, the differential cross section at LO can be expressed at the parton level as \begin{equation} \label{eq:crosssection} \frac{\rd\sigma}{\rd(\cos\theta^{*})} \propto 1 + \cos^{2} \theta^{*} + A_{4} \cos\theta^{*}, \end{equation} where the $(1 + \cos^{2} \theta^{*})$ term arises from the spin-1 nature of the exchanged boson, and the $\cos\theta^{*}$ term originates from interference between vector and axial-vector contributions. The definition of \AFB is based on the angle $\theta^*$ of the negative lepton ($\ell^-$) in the Collins--Soper~\cite{Collins} frame of the dilepton system: \begin{equation} \label{eq:afb} \AFB=\frac{3}{8}A_4 =\frac{\sigma_\ensuremath{\mathrm{F}}\xspace-\sigma_\ensuremath{\mathrm{B}}\xspace}{\sigma_\ensuremath{\mathrm{F}}\xspace+\sigma_\ensuremath{\mathrm{B}}\xspace}, \end{equation} where $\sigma_\ensuremath{\mathrm{F}}\xspace$ and $\sigma_\ensuremath{\mathrm{B}}\xspace$ are, respectively, the cross sections in the forward ($\ensuremath{\cos\theta^{*}}\xspace>0$) and backward ($\ensuremath{\cos\theta^{*}}\xspace<0$) hemispheres. In this frame, $\theta^*$ is the angle of the $\ell^-$ relative to the axis that bisects the angle between the direction of the quark and the reversed direction of the antiquark.
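The relation $\AFB = \frac{3}{8}A_4$ follows from integrating Eq.~(\ref{eq:crosssection}) over the two hemispheres: up to a common normalization, $\sigma_F - \sigma_B = A_4$ and $\sigma_F + \sigma_B = 8/3$. A short numerical check, with an illustrative value of $A_4$:

```python
import numpy as np

A4 = 0.12                       # illustrative value of the A4 coefficient
c = np.linspace(-1.0, 1.0, 200001)
dc = c[1] - c[0]
w = 1.0 + c**2 + A4 * c         # LO angular distribution, up to a constant

sigma_F = w[c > 0].sum() * dc   # forward hemisphere, cos(theta*) > 0
sigma_B = w[c < 0].sum() * dc   # backward hemisphere, cos(theta*) < 0
afb = (sigma_F - sigma_B) / (sigma_F + sigma_B)
print(afb, 3.0 * A4 / 8.0)      # both close to 0.045
```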
In proton-proton ($\Pp\Pp$) collisions, the direction of the quark is more likely to be in the direction of the Lorentz boost of the dilepton. Therefore, \ensuremath{\cos\theta^{*}}\xspace can be calculated using the following variables in the laboratory frame: \begin{equation} \ensuremath{\cos\theta^{*}}\xspace=\frac{2(P_1^+P_2^- - P_1^-P_2^+)}{\sqrt{\ensuremath{m_{\ell\ell}}\xspace^2(\ensuremath{m_{\ell\ell}}\xspace^2+\ensuremath{p_{\mathrm{T},\ell\ell}}\xspace^2)}}\,\frac{\ensuremath{p_{z,\ell\ell}}\xspace}{\abs{\ensuremath{p_{z,\ell\ell}}\xspace}}, \end{equation} where \ensuremath{m_{\ell\ell}}\xspace, \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace, and \ensuremath{p_{z,\ell\ell}}\xspace are the mass, transverse momentum, and longitudinal momentum, respectively, of the dilepton system, and the $P_i^\pm$ are defined in terms of the energies~($E_i$) and longitudinal momenta~($p_{z,i}$) of the negatively and positively charged leptons as $P_i^\pm=(E_i\pm p_{z,i})/\sqrt{2}$~\cite{Collins}. A non-zero \AFB value in dilepton events arises from the vector and axial-vector couplings of electroweak bosons to fermions. At LO, these respective couplings of \PZ bosons to fermions (f) can be expressed as: \begin{align} v_\ensuremath{\mathrm{f}}\xspace&= T_3^\ensuremath{\mathrm{f}}\xspace-2Q_\ensuremath{\mathrm{f}}\xspace\sin^2\theta_\PW, \\ a_\ensuremath{\mathrm{f}}\xspace&= T_3^\ensuremath{\mathrm{f}}\xspace, \end{align} where $Q_\ensuremath{\mathrm{f}}\xspace$ and $T_3^\ensuremath{\mathrm{f}}\xspace$ are the charge and the third component of the weak isospin of the fermion, respectively, and $\sin^2\theta_\PW$ refers to the weak mixing angle, which is related to the masses of the \PW\ and \PZ bosons through the relation $\swsq=1-m_\PW^2/m_\PZ^2$. Electroweak (EW) radiative corrections affect these LO relations.
In the improved Born approximation~\cite{Bardin,Zfitter}, some of the higher-order corrections are absorbed into an effective mixing angle. The effective weak mixing angle is based on the relation $v_\ensuremath{\mathrm{f}}\xspace/a_\ensuremath{\mathrm{f}}\xspace=1-4\abs{Q_\ensuremath{\mathrm{f}}\xspace}\sin^2\theta_{\ensuremath{\text{eff}}\xspace}^\ensuremath{\mathrm{f}}\xspace$, with $\sin^2\theta_\ensuremath{\text{eff}}\xspace^\ensuremath{\mathrm{f}}\xspace=\kappa_\ensuremath{\mathrm{f}}\xspace \sin^2\theta_\PW$, where the flavor-dependent $\kappa_\ensuremath{\mathrm{f}}\xspace$ is determined through EW corrections. The \AFB for dilepton events is sensitive primarily to \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. We measure \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace by fitting the mass and rapidity $(\ensuremath{y_{\ell\ell}}\xspace)$ dependence of the observed \AFB in dilepton events to standard model (SM) predictions as a function of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. The most precise previous measurements of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace were performed by the combined LEP and SLD experiments~\cite{ALEPH:2005ab}. There is, however, a known discrepancy of about 3 standard deviations between the two most precise individual determinations, obtained from the $b$-quark forward-backward asymmetry at LEP and the left-right asymmetry at SLD. Other measurements of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace have also been reported by the Tevatron and LHC experiments~\cite{Abazov:2008xq,Abazov:2011ws,Chatrchyan:2011ya,Aaltonen:2013wcp,Aaltonen:2014loa,Abazov:2014jti,Aad:2015uau,Aaij:2015lka,Aaltonen:2016nuy,Abazov:2017gpw,Aaltonen:2018dxj}.
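The laboratory-frame formula for \ensuremath{\cos\theta^{*}}\xspace quoted above can be implemented directly. The lepton kinematics below are an illustrative massless configuration with zero dilepton $\pt$, for which the Collins--Soper angle reduces exactly to the dilepton rest-frame polar angle.

```python
import math

def cos_theta_cs(E1, pz1, E2, pz2, mll, ptll, pzll):
    """Collins-Soper cos(theta*) from lab-frame quantities;
    lepton 1 is the negatively charged one."""
    P1p, P1m = (E1 + pz1) / math.sqrt(2.0), (E1 - pz1) / math.sqrt(2.0)
    P2p, P2m = (E2 + pz2) / math.sqrt(2.0), (E2 - pz2) / math.sqrt(2.0)
    num = 2.0 * (P1p * P2m - P1m * P2p)
    den = math.sqrt(mll**2 * (mll**2 + ptll**2))
    return num / den * math.copysign(1.0, pzll)

# Illustrative massless leptons, back to back in pT, boosted along z:
# l-: (px, py, pz) = (10, 0, 40),  l+: (px, py, pz) = (-10, 0, 20)
E1, pz1 = math.hypot(10.0, 40.0), 40.0
E2, pz2 = math.hypot(10.0, 20.0), 20.0
pzll = pz1 + pz2                       # dilepton pT is zero here
mll = math.sqrt((E1 + E2)**2 - pzll**2)
print(cos_theta_cs(E1, pz1, E2, pz2, mll, 0.0, pzll))   # ~0.3145
```

For this zero-$\pt$ configuration the result can be cross-checked against an explicit longitudinal boost to the dilepton rest frame.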
Using the LO expressions for the \PZ boson, virtual photon exchange, and their interference, the ``true'' $\AFB$ (\ie, using the quark direction in the definition of \ensuremath{\cos\theta^{*}}\xspace) can be evaluated as \ifthenelse{\boolean{cms@external}}{ \begin{multline} \label{eq:trueafb} \AFB^{\text{true}}(\ensuremath{m_{\ell\ell}}\xspace)= 6 a_\ell a_\cPq (8v_\ell v_\cPq - Q_\cPq KD_m)\\ \times [16(v_\ell^2+a_\ell^2)(v_\cPq^2+a_\cPq^2)-8 v_\ell v_\cPq Q_\cPq KD_m\\ + Q_\cPq^2K^2(D_m^2+\Gamma^2_\PZ/m_\PZ^2)]^{-1}, \end{multline} }{ \begin{equation} \label{eq:trueafb} \AFB^{\text{true}}(\ensuremath{m_{\ell\ell}}\xspace)=\frac{6 a_\ell a_\cPq (8v_\ell v_\cPq - Q_\cPq KD_m)}{16(v_\ell^2+a_\ell^2)(v_\cPq^2+a_\cPq^2)-8 v_\ell v_\cPq Q_\cPq KD_m+ Q_\cPq^2K^2(D_m^2+\Gamma^2_\PZ/m_\PZ^2)}, \end{equation} } where the subscript \cPq\ refers to the participating quark, $K=8\sqrt{2}\pi \alpha/G_\ensuremath{\mathrm{F}}\xspace m_\PZ^2$, $D_m=1-m_\PZ^2/\ensuremath{m_{\ell\ell}}\xspace^2$, $\alpha$ is the electromagnetic coupling, $G_\ensuremath{\mathrm{F}}\xspace$ is the Fermi constant, and $\Gamma_\PZ$ is the full decay width of the \PZ boson. A strong dependence of \AFB on \ensuremath{m_{\ell\ell}}\xspace originates from the interference between the vector and axial-vector contributions. The \AFB is negative at small \ensuremath{m_{\ell\ell}}\xspace and positive at large values, crossing $\AFB=0$ slightly below the \PZ boson peak. In collisions of hadrons, \AFB is sensitive to parton distribution functions (PDFs) for two reasons. First, the different couplings of \cPqu- and \cPqd-type quarks to EW bosons generate different \AFB values in the corresponding production channels, which means that the average depends on the relative contributions of \cPqu- and \cPqd-type quarks to the total cross section.
Second, the definition of \AFB in $\Pp\Pp$ collisions is based on the sign of \ensuremath{y_{\ell\ell}}\xspace, which relies on the fact that on average the dilepton pairs are Lorentz-boosted in the quark direction. Therefore, a non-zero average \AFB originates only from valence-quark production channels and is diluted by events where the antiquark carries a larger momentum than the quark. A dependence of the ``true'' and diluted \AFB on dilepton mass for different \qqbar production channels and their sum is shown in Fig.~\ref{figure:typicalAFB}. \begin{figure*}[!htbp] \centering \includegraphics[width=0.32\textwidth]{Figure_001-a.pdf} \includegraphics[width=0.32\textwidth]{Figure_001-b.pdf} \includegraphics[width=0.32\textwidth]{Figure_001-c.pdf} \caption{ The dependence of $\AFB$ on $\ensuremath{m_{\ell\ell}}\xspace$ in dimuon events generated using \PYTHIA~8.212~\cite{PYTHIA8} and the LO NNPDF3.0~\cite{NNPDF30} PDFs for dimuon rapidities of $\ensuremath{\abs{y_{\ell\ell}}}\xspace<2.4$. The distributions for the total production (\qqbar) and the different channels are given on the left, overlaid with results based on Eq.~(\ref{eq:trueafb}), using the definition of $\AFB^\text{true}(\ensuremath{m_{\ell\ell}}\xspace)$ for the known quark direction. The middle panel gives the diluted \AFB using instead the direction of the dilepton boost, and the right panel shows the diluted \AFB in $\ensuremath{\abs{y_{\ell\ell}}}\xspace$ bins of 0.4 for all channels. \label{figure:typicalAFB} } \end{figure*} The dilution of \AFB depends strongly on \ensuremath{y_{\ell\ell}}\xspace, as shown in Fig.~\ref{figure:typicalAFB}. At zero rapidity, the quark and antiquark carry equal momenta, and the dilution is maximal, resulting in $\AFB=0$. The \AFB is measured in 12 bins of dilepton mass, covering the range $60<\ensuremath{m_{\ell\ell}}\xspace<120\GeV$, and 6 $\ensuremath{\abs{y_{\ell\ell}}}\xspace$ bins of equal size for $\ensuremath{\abs{y_{\ell\ell}}}\xspace<2.4$. 
The boundaries in the dilepton mass are at: 60, 70, 78, 84, 87, 89, 91, 93, 95, 98, 104, 112, and 120\GeV. The mass bins are chosen such that near $m_\PZ$ the bin widths are larger than the mass resolution in any of the ranges of \ensuremath{y_{\ell\ell}}\xspace. The bins at smaller and larger masses are chosen such that all mass bins contain enough events to perform a meaningful independent measurement. The weak dependence of \AFB on \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace is included in the SM predictions. The uncertainty originating from modeling of \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace is very small and included in the theoretical estimates. \section{The CMS detector} The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. A silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections, reside within the solenoid volume. Forward calorimeters extend the pseudorapidity $\eta$ coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector can be found in Ref.~\cite{Chatrchyan:2008zzk}. Muons are measured in the range $\abs{\eta} < 2.4$, using detection planes based on the drift-tube, cathode-strip chamber, or resistive-plate chamber technologies. Matching muons to tracks measured in the silicon tracker provides a relative transverse momentum resolution for muons with $20 <\pt < 100\GeV$ of 1.3--2.0\% in the barrel, and less than 6\% in the endcaps. The \pt resolution in the barrel is smaller than 10\% for muons with \pt up to 1\TeV~\cite{Chatrchyan:2012xi}.
The electromagnetic calorimeter consists of 75\,848 lead tungstate crystals that provide a coverage of $\abs{\eta} < 1.48 $ in the barrel region and $1.48 < \abs{\eta} < 3.00$ in the two endcap regions. Preshower detectors consisting of two planes of silicon sensors, interleaved with a total of 3~radiation lengths of lead, are located in front of each endcap detector. The electron momentum is obtained by combining the energy measurement in the ECAL with that in the tracker. The momentum resolution for electrons with $\pt \approx 45\GeV$ from $\Z \to \Pe \Pe$ decays ranges from 1.7\% for nonshowering electrons in the barrel region, to 4.5\% for showering electrons in the endcaps~\cite{Khachatryan:2015hwa}. Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}. The first level, consisting of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of about 100\unit{kHz} within a time interval of less than 4\mus. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, which reduces the event rate to about 1\unit{kHz} before data storage. \section{Data and simulated events} The measurement is based on $\Pp\Pp$ collisions at $\sqrt{s}=8\TeV$ recorded by the CMS experiment in 2012, corresponding to integrated luminosities of 18.8 and 19.6\fbinv for muon and electron channels, respectively. Candidates for the dimuon channel are collected using an isolated single-muon trigger with a \pt threshold of 24\GeV and $\ensuremath{\abs{\eta}}\xspace<2.4$. At the beginning of data taking, the muon trigger was restricted to $\ensuremath{\abs{\eta}}\xspace<2.1$. We do not use these events, and the integrated luminosity in the dimuon analysis is therefore somewhat smaller than for dielectrons.
Background contamination is reduced by applying identification and isolation criteria to the reconstructed muons. First, muon tracks are required to be reconstructed independently in the inner tracker and in the outer muon detectors. A global fit to the momentum, including both tracker and muon detector hits, must have a fitted $\chi^2/\text{dof}<10$, where dof stands for the degrees of freedom. Muon tracks are required to pass within a transverse distance of $0.2\cm$ from the primary vertex, defined as the $\Pp\Pp$ vertex with the largest $\sum \pt^2$ of its associated tracks. Muon candidates are rejected if the scalar-\pt sum of all tracks within a cone of $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}=0.3$ around the muon is larger than 10\% of the \pt of the muon (this is referred to as track isolation, with $\phi$ being the azimuth in radians). The track isolation requirement is insensitive to contributions from additional soft $\Pp\Pp$ interactions (pileup). An event is selected when there are at least two isolated muons, with the leading muon (\ie, the one with largest \pt) having $\pt>25\GeV$, and the next-to-leading muon having $\pt>15\GeV$. At least one muon with $\pt>25\GeV$ is required to trigger the event. For the Drell--Yan signal, the two leptons are required to have opposite sign (OS). {\tolerance=1200 Dielectron candidates are collected using a single-electron trigger with a \pt threshold of 27\GeV and $\ensuremath{\abs{\eta}}\xspace<2.5$. Variables pertaining to the energy distribution in electromagnetic showers and to impact parameters of inner tracks are used to separate prompt electrons from electrons originating from photon conversions in detector material. The jet background from SM events produced through quantum chromodynamics (QCD) is referred to as multijet production. 
A particle-flow (PF) event reconstruction algorithm is used to identify different particle types (photons, electrons, muons, and charged and neutral hadrons)~\cite{CMS-PRF-14-001}. The scalar-\pt sum of all PF particles in a cone of $\Delta R<0.3$ around the electron direction is required to be less than 15\% of the electron \pt, which reduces the background from hadrons in multijet events that are reconstructed incorrectly as electrons. This sum is corrected for contributions from pileup~\cite{Khachatryan:2015hwa}. The electron momentum is evaluated by combining the energy in the ECAL with the momentum in the tracker. To ensure good reconstruction, the coverage is restricted to $\ensuremath{\abs{\eta}}\xspace<2.4$, excluding the transition region of $1.44<\ensuremath{\abs{\eta}}\xspace<1.57$ between the ECAL barrel and endcap detectors, as electron reconstruction in this region is not optimal. Dielectron candidates are selected when at least two OS electrons pass all quality requirements. The leading and next-to-leading electrons must have $\pt>30$ and $20\GeV$, respectively, with the triggering electron always required to have $\pt>30\GeV$. \par} A total of about 8.2 million dimuon and 4.9 million dielectron candidate events are selected for further analysis. The number of dielectron events is smaller because of the higher \pt thresholds and more stringent selection criteria implemented in electron selections. The \ensuremath{\cPZ/\gamma\to \MM}\xspace and \ensuremath{\cPZ/\gamma\to \EE}\xspace data include small (${<}1\%$) background contaminations that originate from \ensuremath{\cPZ/\gamma\to \TT}\xspace, \ttbar, single top quark, and diboson (\PW\PW, \PW\cPZ, and \cPZ\cPZ) events, as well as multijet and \PW$+$jets events. Contributions from these backgrounds are subtracted from data as described below. Contamination from photon-induced background near the \PZ boson peak is negligible~\cite{Bourilkov:2016qum}.
Monte Carlo (MC) simulation is used to model signal and background processes. The signal as well as the single-boson and top quark backgrounds are based on next-to-leading order (NLO) matrix elements implemented in the \POWHEG~v1 event generator~\cite{POWHEG0,POWHEG1,POWHEG2,POWHEG3} using the CT10~\cite{CTEQ:1007} PDFs. The generator is interfaced to \PYTHIA~6.426~\cite{PYTHIA6} using the Z2*~\cite{Chatrchyan:2013gfi,Khachatryan:2015pea} underlying event tune, which generates the parton showering, the hadronization, and the electromagnetic final-state radiation (FSR). The background events from $\tau$ lepton decays are simulated with \TAUOLA 2.7~\cite{TAUOLA}. Diboson and multijet background events are generated with \PYTHIA 6 using the CTEQ6L1 PDFs~\cite{CTEQ6L}. Simulated minimum-bias events are superimposed on the hard-interaction events to model the effects from pileup. The detector response to all particles is simulated through \GEANTfour~\cite{GEANT4}, and all final-state objects are reconstructed using the same algorithms used for data. \section{Corrections and backgrounds \label{section:corrections}} The MC simulations are corrected to improve the modeling of the data. First, weight factors are applied to all simulated events to match the pileup distribution in data, which consists of roughly 20 interactions per crossing. These weights are based on the measured instantaneous luminosity and the total inelastic cross section that provides a good description of the average number of reconstructed vertices. The total lepton-selection efficiency is factorized into the product of reconstruction, identification, isolation, and trigger efficiencies, with each component measured in samples of \ensuremath{\cPZ/\gamma\to \ell^+\ell^-}\xspace events through a ``tag-and-probe'' method~\cite{Chatrchyan:2012xi,Khachatryan:2015hwa}, in bins of lepton \pt and $\eta$. 
A charge-dependent efficiency in the muon triggering and reconstruction was observed in previous CMS measurements~\cite{Khachatryan:2016pev}. In the muon channel, all efficiencies are therefore determined separately for positively and negatively charged muons. The same procedures are used for data as for the simulated events, and scale factors are extracted to match the simulated event-selection efficiencies to those in the data. The lepton momentum is calibrated using \ensuremath{\cPZ/\gamma\to \ell^+\ell^-}\xspace events~\cite{Bodek:2012id}. The dominant sources of the mismeasurement of muon momentum originate from the mismodeling of tracker alignment and of the magnetic field. The correction parameters are obtained in bins of muon $\eta$ and $\phi$. First, the average $1/\pt$ values of the reconstructed muon curvature in data and simulation are corrected to the corresponding values calculated for MC generated muons. Then, using MC simulation, the resolution in the reconstructed muon momentum is parametrized as a function of the muon \pt in bins of muon $\ensuremath{\abs{\eta}}\xspace$ and the number of tracker hits used in the reconstruction. Next, the correction parameters of the muon momentum scale are fine-tuned by matching the average dimuon mass in each bin of muon charge, $\eta$, and $\phi$ to their reference values. At this point, the ``reference'' distributions, which are based on the generated muons, are smeared by the reconstruction resolution derived in the previous step. Finally, the scale factors for the muon momentum resolution, in bins of muon $\ensuremath{\abs{\eta}}\xspace$, are determined by fitting the ``reference'' dimuon mass distribution to data. A similar procedure is followed for electrons to reduce the small residual difference between the data and MC simulation. Unlike for muons, the measured electron energy is dominated by the calorimeter, and the corrections are extracted identically for electrons and positrons. 
The electron energy-scale parameters are fine-tuned by correcting the average dielectron mass in each bin of electron $\eta$ and $\phi$ to the corresponding ``reference'' values. Here, the ``reference'' distributions are based on the generated electrons (post FSR), combined with the FSR photons in a cone, and smeared by the reconstructed energy resolution. The EW and top quark backgrounds are estimated using MC simulations based on the cross sections calculated at next-to-next-to-leading order in QCD~\cite{FEWZ, toppp} and normalized to the integrated luminosity. We use cross sections calculated at NLO for the diboson backgrounds. The multijet background in dimuon events, dominated by muons from heavy-flavor hadron decays, is evaluated using same-sign (SS) dimuon events. A small EW and top quark contamination is evaluated using MC simulation and subtracted from the SS sample. The distributions are then scaled by roughly a factor of 2, estimated from simulated events, to obtain the multijet contamination in the signal opposite-sign (OS) dimuon sample. The multijet background in the dielectron analysis is evaluated using the SS sample in combination with the $\Pe\mu$ events to subtract the contribution from the OS events caused by the misidentification of charge. The distributions used to estimate the background from jets misidentified as leptons (that include the multijet and \PW+jet events) are obtained from the SS $\Pe\mu$ sample. These distributions are used to fit the dielectron mass distribution in the SS events in each $\ensuremath{y_{\ell\ell}}\xspace$ bin to extract the normalization of this background.
\begin{figure*}[htbp] \centering \includegraphics[width=\cmsFigWidth]{Figure_002-a.pdf} \includegraphics[width=\cmsFigWidth]{Figure_002-b.pdf} \includegraphics[width=\cmsFigWidth]{Figure_002-c.pdf} \includegraphics[width=\cmsFigWidth]{Figure_002-d.pdf} \includegraphics[width=\cmsFigWidth]{Figure_002-e.pdf} \includegraphics[width=\cmsFigWidth]{Figure_002-f.pdf} \caption{ Dimuon (left) and dielectron (right) mass distributions in three representative bins in rapidity: $\ensuremath{\abs{y_{\ell\ell}}}\xspace<0.4$ (upper), $0.8<\ensuremath{\abs{y_{\ell\ell}}}\xspace<1.2$ (middle), and $1.6<\ensuremath{\abs{y_{\ell\ell}}}\xspace<2.0$ (lower). \label{figure:mll} } \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=\cmsFigWidth]{Figure_003-a.pdf} \includegraphics[width=\cmsFigWidth]{Figure_003-b.pdf} \includegraphics[width=\cmsFigWidth]{Figure_003-c.pdf} \includegraphics[width=\cmsFigWidth]{Figure_003-d.pdf} \includegraphics[width=\cmsFigWidth]{Figure_003-e.pdf} \includegraphics[width=\cmsFigWidth]{Figure_003-f.pdf} \caption{ The muon (left) and electron (right) \ensuremath{\cos\theta^{*}}\xspace distributions in three representative bins in rapidity: $\ensuremath{\abs{y_{\ell\ell}}}\xspace<0.4$ (upper), $0.8<\ensuremath{\abs{y_{\ell\ell}}}\xspace<1.2$ (middle), and $1.6<\ensuremath{\abs{y_{\ell\ell}}}\xspace<2.0$ (lower). The small contributions from backgrounds are included in the predictions. \label{figure:mcs} } \end{figure*} The dilepton mass and \ensuremath{\cos\theta^{*}}\xspace distributions in three of the six rapidity bins are shown in Figs.~\ref{figure:mll} and \ref{figure:mcs}, respectively. The figures include lepton momentum and efficiency corrections, background samples normalized as described above, and the signal normalized to the total expected number of events in the data.
\section{Weighted \texorpdfstring{\AFB}{Lg} measurement} As introduced in Section \ref{introduction}, the LO angular distribution of dilepton events has a $(1+\cos^2\theta^*)$ term that arises from the spin-1 nature of the exchanged boson and a \ensuremath{\cos\theta^{*}}\xspace term that originates from the interference between the vector and axial-vector contributions. However, there is also a $(1-3\cos^2\theta^*)$ NLO term that originates from the \pt of the interacting partons~\cite{angular}. Each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin of the dilepton pair at NLO therefore has an angular distribution in \ensuremath{\cos\theta^{*}}\xspace that follows the form~\cite{angular}: \begin{equation} \label{eq:dsigmadcs} \frac{1}{\sigma}\frac{\rd\sigma}{\rd\ensuremath{\cos\theta^{*}}\xspace} = \frac{3}{8}\Big[1+\cos^2\theta^*+\frac{A_0}{2}(1-3\cos^2\theta^*) + A_4\ensuremath{\cos\theta^{*}}\xspace\Big]. \end{equation} The $\AFB$ value in each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin is calculated using the ``angular event weighting'' method, described in Ref.~\cite{Bodek:2010qg}, in which each event, with its \ensuremath{\cos\theta^{*}}\xspace value denoted as ``$c$'', enters the denominator ($D$) and numerator ($N$) weights through: \begin{gather} w_\ensuremath{\mathrm{D}}\xspace=\frac{1}{2}\frac{c^2}{(1+c^2+h)^3}, \\ w_\ensuremath{\mathrm{N}}\xspace=\frac{1}{2}\frac{\abs{c}}{(1+c^2+h)^2}, \end{gather} where $h=0.5A_0(1-3c^2)$. Here, as a baseline, we use the \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace-averaged $A_0$ value of about $0.1$ in each measurement $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin, as predicted by the signal MC simulation.
Using the weighted sums $N$ and $D$ for forward ($\ensuremath{\cos\theta^{*}}\xspace>0$) and backward ($\ensuremath{\cos\theta^{*}}\xspace<0$) events, we obtain \begin{gather} D_\ensuremath{\mathrm{F}}\xspace=\sum_{c>0}w_\ensuremath{\mathrm{D}}\xspace, \quad D_\ensuremath{\mathrm{B}}\xspace=\sum_{c<0}w_\ensuremath{\mathrm{D}}\xspace, \\ N_\ensuremath{\mathrm{F}}\xspace=\sum_{c>0}w_\ensuremath{\mathrm{N}}\xspace, \quad N_\ensuremath{\mathrm{B}}\xspace=\sum_{c<0}w_\ensuremath{\mathrm{N}}\xspace, \end{gather} from which the weighted \AFB of Eq.~(\ref{eq:afb}) can be written as: \begin{equation} \AFB=\frac{3}{8}\frac{N_\ensuremath{\mathrm{F}}\xspace-N_\ensuremath{\mathrm{B}}\xspace}{D_\ensuremath{\mathrm{F}}\xspace+D_\ensuremath{\mathrm{B}}\xspace} \label{eq:weightedafb}. \end{equation} The statistical uncertainty in this weighted \AFB value takes into account correlations among the numerator and denominator sums. For data, the background contributions to the event-weighted sums are subtracted before calculating \AFB. In the full phase space, the values of the weighted and the nominal \AFB, calculated as an asymmetry between the total event counts in the forward and backward hemispheres, are the same. Since the acceptances of the forward and backward events are equal for the same values of $\abs{\ensuremath{\cos\theta^{*}}\xspace}$, the fiducial values of the event-weighted \AFB are also the same as in the full phase space, while the nominal \AFB values are smaller because of the limited acceptance at large \ensuremath{\cos\theta^{*}}\xspace. This feature makes the event-weighted \AFB less sensitive than the nominal \AFB to the specific modeling of the acceptance. In addition, because the event-weighted \AFB exploits the full distribution in \ensuremath{\cos\theta^{*}}\xspace, as opposed to only its sign as in the nominal \AFB, it provides a smaller statistical uncertainty.
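The event-weighting computation above reduces to a few lines of code. The following Python sketch is purely illustrative and is not the analysis code; the uniform toy sample and the constant $A_0 = 0.1$ are assumptions for demonstration only.

```python
import random

def weighted_afb(cos_theta, a0=0.1):
    """Event-weighted AFB from a list of cos(theta*) values (``c``)."""
    nf = nb = df = db = 0.0
    for c in cos_theta:
        h = 0.5 * a0 * (1.0 - 3.0 * c * c)
        wd = 0.5 * c * c / (1.0 + c * c + h) ** 3    # denominator weight w_D
        wn = 0.5 * abs(c) / (1.0 + c * c + h) ** 2   # numerator weight w_N
        if c > 0:                                    # forward event
            nf += wn; df += wd
        else:                                        # backward event
            nb += wn; db += wd
    return 3.0 / 8.0 * (nf - nb) / (df + db)

# Toy sample: a symmetric cos(theta*) distribution should give AFB near zero.
random.seed(1)
sample = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
```

Because the weights depend only on $|c|$ and $c^2$, mirroring the sample ($c \to -c$) flips the sign of the numerator while leaving the denominator unchanged, so the weighted asymmetry is exactly antisymmetric by construction.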
\section{Extraction of \texorpdfstring{\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace}{Lg} \label{section:sineff}} {\tolerance=1600 We extract \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace by fitting the \AFB$(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ distribution in data with the theoretical predictions. The default signal distributions are based on the \POWHEG~v2 event generator using the NNPDF3.0 PDFs~\cite{NNPDF30}. The \POWHEG generator is interfaced with \PYTHIA~8~\cite{PYTHIA8} and the CUETP8M1~\cite{Khachatryan:2015pea} underlying event tune to provide parton showering and hadronization, including electromagnetic FSR. The dependence on \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace, on the renormalization and factorization scales, and on the PDFs is modeled through the \POWHEG MC generator that provides matrix-element-based, event-by-event weights for each change in these parameters. The distributions are modified to different values of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace by weighting each event in the full simulation by the ratio of \ensuremath{\cos\theta^{*}}\xspace distributions obtained with the modified and default configurations in each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin. The uncertainties in the simulation of the detector have a small effect because \AFB is extracted through the angular event-weighting technique that is insensitive to efficiency and acceptance. \par} \begin{figure*}[htbp] \centering \includegraphics[width=\cmsFigWidthBig]{Figure_004-a.pdf} \includegraphics[width=\cmsFigWidthBig]{Figure_004-b.pdf} \caption{ Comparison between data and best-fit \AFB distributions in the dimuon (upper) and dielectron (lower) channels. The best-fit \AFB value in each bin is obtained via linear interpolation between two neighboring templates. Here, the templates are based on the central prediction of the NLO NNPDF3.0 PDFs. 
The error bars represent the statistical uncertainties in the data. \label{figure:fit} } \end{figure*} {\tolerance=1200 Table~\ref{table:staterrors} summarizes the statistical uncertainty in the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the muon and electron channels and in their combination. Comparisons between the data and best-fit distributions are shown in Fig.~\ref{figure:fit}. The statistical uncertainties are evaluated through the bootstrapping technique~\cite{efron1979}, and take account of correlations among the measured \AFB, lepton selection efficiencies, and calibration coefficients introduced through the repeated use of the same dilepton events. We generate 400 pseudo-experiments that provide an accurate estimate of the statistical uncertainties and correlations. In each pseudo-experiment, every event in the data is replicated $n$ times, where $n$ is a random number sampled from a Poisson distribution with a mean of unity. All steps of the analysis, including extraction of muon selection efficiencies, calibration coefficients, and a measurement of \AFB, are performed for each pseudo-experiment. The statistical uncertainties in electron-selection efficiencies and calibration coefficients, which have no charge dependence, are small and are evaluated separately. \par} \begin{table}[htbp] \centering \topcaption{ Summary of statistical uncertainties in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. The statistical uncertainties in the lepton-selection efficiency and in the calibration coefficients in data are included in the estimates. 
\label{table:staterrors} } \begin{tabular}{ l c } Channel & Statistical uncertainty \\ \hline Muons & 0.00044 \\ Electrons & 0.00060 \\ [\cmsTabSkip] Combined & 0.00036 \\ \end{tabular} \end{table} \section{Experimental systematic uncertainties} The experimental sources of systematic uncertainty reflect the statistical uncertainties in the simulated events, the corrections to the lepton-selection efficiency and to the lepton-momentum scale and resolution, the background subtraction, and the modeling of pileup. For electrons, the selection efficiencies, which have no dependence on charge, cancel to first order, since we are using the angular event-weighting technique. \subsection{Statistical uncertainties in MC simulated events} {\tolerance=1200 To reduce the statistical uncertainties associated with the limited number of events in the signal MC samples, which include simulation of detector response and lepton reconstruction, the generated \ensuremath{\cos\theta^{*}}\xspace distributions in each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin within the acceptance of the detector are reweighted to much larger MC samples, generated without simulating detector response or lepton reconstruction. This makes the fluctuations in the generated \ensuremath{\cos\theta^{*}}\xspace distributions negligible, and therefore the statistical uncertainties in the reconstructed \AFB values become dominated by fluctuations in the simulated detector response and lepton reconstruction. These uncertainties are evaluated in both the dimuon and dielectron channels using the bootstrapping method~\cite{efron1979} described in Section~\ref{section:sineff}, by reweighting the generated \ensuremath{\cos\theta^{*}}\xspace distributions in each of the bootstrap samples. The total statistical uncertainties in the simulated events also include contributions from uncertainties in the measured lepton-selection efficiencies and calibration coefficients.
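The Poisson bootstrap used for these uncertainty estimates (each event replicated $n$ times, with $n$ drawn from a Poisson distribution of mean unity) can be sketched as follows. This is an illustrative stand-in, not the analysis chain: the real procedure re-derives efficiencies and calibration coefficients for every pseudo-experiment, whereas here the `observable` function is a hypothetical placeholder.

```python
import math
import random

def poisson1(rng):
    """Sample n ~ Poisson(mean=1) via Knuth's multiplication method."""
    limit, n, prod = math.exp(-1.0), 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return n

def bootstrap_uncertainty(events, observable, n_pseudo=400, seed=7):
    """Standard deviation of `observable` over Poisson-bootstrap replicas."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_pseudo):
        replica = []
        for ev in events:
            replica.extend([ev] * poisson1(rng))  # replicate each event n times
        values.append(observable(replica))
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
```

For a simple observable such as the sample mean, the bootstrap spread reproduces the familiar standard error $s/\sqrt{N}$, which provides a quick sanity check of the machinery.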
\par} \subsection{Lepton selection efficiencies} Several sources of uncertainty are considered in the measurement of the efficiencies. The statistical uncertainties in the lepton-selection efficiencies, evaluated through studies of pseudo-experiments, are included in the combined statistical uncertainty of the measured \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. {\tolerance=800 Combined scale factors for muon reconstruction, identification, and isolation efficiencies are changed by 0.5\%, and trigger-selection efficiency scale factors by 0.2\%, coherently for all bins for both positive and negative lepton charges. These take into account uncertainties associated with the tag-and-probe method, and are evaluated by changing signal and background models for dimuon mass distributions, levels of backgrounds, the dimuon mass range, and binning used in the fits. These uncertainties are considered fully correlated between the two charges, and therefore have a negligible impact on the measurement of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. In addition, we assign the difference between the offline efficiencies obtained by fitting the dimuon mass distributions to extract the signal yields, and those found using a simple counting method, as additional systematic uncertainties. The total systematic uncertainty in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace originating from the muon selection efficiency is $\pm$0.00005. In a similar way as for muons, the scale factors for electron reconstruction, identification, and trigger-selection efficiencies are changed coherently within their uncertainties in all $(\pt,\eta)$ bins, and the corresponding changes in the resulting \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace are assigned as systematic uncertainties. The total uncertainty in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace originating from all electron efficiency-related systematic sources is $\pm$0.00004.
\par} \subsection{Lepton momentum calibration} The statistical uncertainties in the parameters used to calibrate lepton momentum, described in Section~\ref{section:corrections}, are included in the combined statistical uncertainty. The theoretical uncertainties, discussed in Section~\ref{section:theory}, are also propagated to the reference distributions used to extract the coefficients in the lepton momentum calibration. {\tolerance=1200 When evaluating the average dimuon masses to extract the $(\eta,\phi)$ dependent corrections, the dimuon mass window is restricted to $86<m_{\mu\mu}<96\GeV$. This range of $\pm5\GeV$ centered at 91\GeV is changed from $\pm2.5$ to $\pm10\GeV$ in steps of 0.5\GeV, and the full calibration sequence is repeated each time. Similarly, a dimuon mass window of $\pm10$ (\ie, 81--101)\GeV, used in the dimuon fits to obtain the resolution-correction factors, is changed from $\pm5$ to $\pm25\GeV$ in steps of 1\GeV. For each of these modifications, the maximum deviation in the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace relative to the nominal configuration is taken as a systematic uncertainty. The total experimental systematic uncertainty in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace originating from the muon-momentum calibration, evaluated by adding individual uncertainties in quadrature, is $\pm$0.00008. The effects due to PDF uncertainties in the calibration coefficients are found to be negligible. When the value of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace used to generate the reference distributions for the muon-momentum calibration is varied over a range of $\Delta\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=0.02000$, the extracted result changes by at most $\pm$0.00008 through the induced changes in the muon-calibration parameters.
Since the uncertainty in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace is much smaller than $\pm0.02000$, we conclude that this effect is negligible. \par} Similarly, the windows in the dielectron invariant mass used to extract the electron momentum-correction factors are changed to estimate the corresponding systematic uncertainty. We also consider additional independent sources of systematic uncertainty from the modeling of pileup, background estimation, and bias in the dielectron mass-fitting procedure. The effect of the EW corrections on the extracted electron energy-calibration coefficients is estimated by modifying the reference dielectron mass distributions through the weight factors obtained with \textsc{zgrad}~\cite{Baur}. All these systematic uncertainties are found to be rather small. The dominant uncertainty originates from the full corrections to the electron energy resolution, which improve the agreement between data and simulated dielectron mass distributions. The total systematic uncertainty in the extracted value of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace due to both the electron energy scale and resolution is $\pm$0.00019. \subsection{Background} The systematic uncertainties in the estimated background are evaluated as follows. The normalizations of the top quark and $\ensuremath{\cPZ/\gamma\to \TT}\xspace$ backgrounds are changed respectively by 10 and 20\%, covering the maximum deviations between the data and simulation observed in the $\Pe\mu$ control region. The uncertainty in the multijet and \PW+jets background is estimated by changing them by $\pm$100\%. Changing the diboson background prediction by 100\% produces a negligible change in the result (${<}0.00001$). Changing all EW and top quark backgrounds by the uncertainty in the integrated luminosity of 2.6\%~\cite{CMS-PAS-LUM-13-001} also produces a negligible change in the result (${<}0.00001$).
The total systematic uncertainty in the measured \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace from the uncertainty in the background estimation is $\pm0.00003$ and $\pm0.00005$ in the dimuon and dielectron channels, respectively. \subsection{Pileup} To take into account the uncertainty originating from differences in pileup between data and simulation, we change the total inelastic cross section by $\pm$5\%, and recompute the expected pileup distribution in data. The analysis is repeated and the difference relative to the central value is taken as the systematic uncertainty. These uncertainties are respectively $\pm0.00003$ and $\pm0.00002$ in the dimuon and dielectron channels. All the above systematic uncertainties are summarized in Table~\ref{table:expsystematics}. \section{Theoretical systematic uncertainties \label{section:theory}} We investigate sources of systematic uncertainty in modeling the MC templates. For each change in the model, we rederive the reference distributions described in Section~\ref{section:corrections} to adjust the lepton momentum calibration coefficients. As a baseline, the signal MC events are weighted to match the \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace distribution in each $\ensuremath{\abs{y_{\ell\ell}}}\xspace$ bin in the data. The difference relative to the result obtained without applying the weight factors, which is $0.00003$ in both channels, is assigned as a systematic uncertainty associated with the modeling of \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace. \begin{table}[!h] \centering \topcaption{ Summary of experimental systematic uncertainties in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. 
} \label{table:expsystematics} \begin{tabular}{ l c c } Source & Muons & Electrons \\ \hline Size of MC event sample & 0.00015 & 0.00033 \\ Lepton selection efficiency & 0.00005 & 0.00004 \\ Lepton momentum calibration & 0.00008 & 0.00019 \\ Background subtraction & 0.00003 & 0.00005 \\ Modeling of pileup & 0.00003 & 0.00002 \\ [\cmsTabSkip] Total & 0.00018 & 0.00039 \\ \end{tabular} \end{table} The renormalization and factorization scales, $\mu_\ensuremath{\mathrm{R}}\xspace$ and $\mu_\ensuremath{\mathrm{F}}\xspace$, are each changed independently by a factor of 2, up and down, such that their ratio is within $0.5<\mu_\ensuremath{\mathrm{R}}\xspace/\mu_\ensuremath{\mathrm{F}}\xspace<2.0$. The maximum deviation among these six variants relative to the nominal choice (excluding the two opposite changes) is assigned as a systematic uncertainty associated with the missing higher-order QCD correction terms. {\tolerance=1200 In addition, we use a multi-scale improved NLO (\textsc{MiNLO}~\cite{MiNLO}) calculation for the \cPZ+1 jet partonic final state (henceforth referred to as ``\cPZ+j''), interfaced with \PYTHIA~8 for parton showering, FSR, and hadronization, to assess the uncertainty from the missing higher-order QCD terms and modeling of the angular coefficients. The \textsc{MiNLO} \cPZ+j process has NLO accuracy for both \cPZ+0 and \cPZ+1 jet events, which provides a better description of the dependence of the angular coefficients on \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace. \par} Systematic uncertainties in modeling electromagnetic FSR are estimated by comparing results obtained with distributions based on \PYTHIA~8 and \PHOTOS~2.15~\cite{Photos1,Photos2,Photos3} for the modeling of FSR. 
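The six-point scale-variation envelope described above (independent factor-of-2 variations of $\mu_\mathrm{R}$ and $\mu_\mathrm{F}$, keeping $0.5\le\mu_\mathrm{R}/\mu_\mathrm{F}\le 2$ and dropping the two opposite variations) can be made explicit. In this illustrative sketch, `predict` is a hypothetical stand-in for any prediction evaluated at the scale factors; it is not part of the analysis code.

```python
# The six allowed (muR, muF) scale variants around the nominal (1, 1):
# the opposite variations (2, 0.5) and (0.5, 2) are excluded by the ratio cut.
VARIANTS = [
    (kr, kf)
    for kr in (0.5, 1.0, 2.0)
    for kf in (0.5, 1.0, 2.0)
    if (kr, kf) != (1.0, 1.0) and 0.5 <= kr / kf <= 2.0
]

def scale_uncertainty(predict):
    """Maximum |predict(kr, kf) - nominal| over the six-point envelope."""
    nominal = predict(1.0, 1.0)
    return max(abs(predict(kr, kf) - nominal) for kr, kf in VARIANTS)
```

For a prediction that is linear in $\ln\mu_\mathrm{R}$ and $\ln\mu_\mathrm{F}$, the maximum deviation is attained at the coherent $(2,2)$ or $(0.5,0.5)$ points, as the ratio cut intends.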
Electroweak effects from the differences between the \cPqu\ and \cPqd\ quark and the leptonic effective mixing angles are estimated by changing $\sin^2\theta_\ensuremath{\text{eff}}\xspace^\cPqu$ and $\sin^2\theta_\ensuremath{\text{eff}}\xspace^\cPqd$ by 0.0001 and 0.0002~\cite{Baur}, respectively, relative to \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. The \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace extracted using the corresponding distributions is shifted by 0.00001. The underlying event tune parameters~\cite{Khachatryan:2015pea} are changed by their uncertainties, and \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace is again extracted using the corresponding distributions. The maximum difference from the default tune is taken as the corresponding uncertainty. The systematic uncertainties from these and all the above sources are summarized in Table~\ref{table:theorysystematics}. We also separately study the modeling of the $A_0$ angular coefficient, which is included in the definition of \AFB. As a baseline, the \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace-averaged $A_0$ value in each measurement $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin is used in the definition of the weighted \AFB. Several other options are studied: (i) the LO expression $A_0=\ensuremath{p_{\mathrm{T},\ell\ell}}\xspace^2/(\ensuremath{p_{\mathrm{T},\ell\ell}}\xspace^2+\ensuremath{m_{\ell\ell}}\xspace^2)$, (ii) the $\ensuremath{p_{\mathrm{T},\ell\ell}}\xspace$-dependent $A_0$ in each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin as predicted in the baseline NLO \POWHEG simulation, (iii) the $\ensuremath{p_{\mathrm{T},\ell\ell}}\xspace$-dependent $A_0$ predicted in the \textsc{MiNLO} \cPZ+j \POWHEG generator, and (iv) $A_0$ set to 0. The same definition is used for data and simulation, and the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace is identical within $\pm$0.00002 of the default.
In addition, we weight the $\abs{\ensuremath{\cos\theta^{*}}\xspace}$ distribution from the \textsc{MiNLO} \cPZ+j MC sample to match the dependence of $A_0$ on \ensuremath{p_{\mathrm{T},\ell\ell}}\xspace in each $(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ bin to the corresponding values of the baseline MC simulation. The change in the resulting \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace is also negligible. \begin{table}[!htbp] \centering \topcaption{ \label{table:theorysystematics} Summary of the theoretical uncertainties for the dimuon and dielectron channels, as discussed in the text. } \label{table:systheorypdf} \begin{tabular}{ l c c } Modeling parameter & Muons & Electrons \\ \hline Dilepton \pt reweighting & 0.00003 & 0.00003 \\ $\mu_\ensuremath{\mathrm{R}}\xspace$ and $\mu_\ensuremath{\mathrm{F}}\xspace$ scales & 0.00011 & 0.00013 \\ \POWHEG \textsc{MiNLO} \cPZ+j \vs \PZ at NLO & 0.00009 & 0.00009 \\ FSR model (\PHOTOS \vs\ \PYTHIA~8) & 0.00003 & 0.00005 \\ Underlying event & 0.00003 & 0.00004 \\ Electroweak $\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace$ \vs $\sin^2\theta^{\cPqu, \cPqd}_\ensuremath{\text{eff}}\xspace$ & 0.00001 & 0.00001 \\ [\cmsTabSkip] Total & 0.00015 & 0.00017 \\ \end{tabular} \end{table} \section{Uncertainties in the PDFs\label{section:pdf}} The observed \AFB values depend on the size of the dilution effect, as well as on the relative contributions from \cPqu\ and \cPqd\ valence quarks to the total dilepton production cross section. The uncertainties in the PDFs translate into sizable changes in the observed \AFB values. However, changes in PDFs affect the $\AFB(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace)$ distribution differently from changes in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. Changes in PDFs produce large changes in \AFB when the absolute values of \AFB are large, \ie, at large and small dilepton mass values.
In contrast, the effects of changes in \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace are largest near the \PZ boson peak, and are significantly smaller at high and low masses. Because of this behavior, which is illustrated in Fig.~\ref{figure:pdftheory}, we apply a Bayesian $\chi^2$ reweighting method to constrain the PDFs~\cite{Giele:1998gw,Sato:2013ika,Bodek:2016olg}, and thereby reduce their uncertainties in the extracted value of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{Figure_005-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_005-b.pdf} \caption{ Distribution in \AFB as a function of dilepton mass, integrated over rapidity (\cmsLeft), and in six rapidity bins (\cmsRight) for $\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=0.23120$ in \POWHEG. The solid lines in the bottom panel correspond to six variations of $\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace$ around the central value: $\pm0.00040$, $\pm0.00080$, and $\pm0.00120$. The dashed lines refer to the \AFB predictions for 100 NNPDF3.0 replicas. The shaded bands illustrate the standard deviation in the NNPDF3.0 replicas. \label{figure:pdftheory} } \end{figure} As a baseline, we use the NLO NNPDF3.0 PDFs. In the Bayesian $\chi^2$ reweighting method, PDF replicas that offer good descriptions of the observed \AFB distribution are assigned large weights, and those that poorly describe the \AFB are given small weights.
Each weight factor is based on the best-fit $\chi^2_{\ensuremath{\text{min}}\xspace,i}$ value obtained by fitting the \AFB(\ensuremath{m_{\ell\ell}}\xspace,\ensuremath{y_{\ell\ell}}\xspace) distribution with a given PDF replica $i$: \begin{equation} \label{eq:bayesweight} w_i = \frac{\re^{-\frac{\chi^2_{\ensuremath{\text{min}}\xspace,i}}{2}}}{\frac{1}{N}\sum_{i=1}^N \re^{-\frac{\chi^2_{\ensuremath{\text{min}}\xspace,i}}{2}}}, \end{equation} where $N$ is the number of replicas in a set of PDFs. The final result is then calculated as a weighted average over the replicas: $\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=\sum_{i=1}^{N} w_i s_i/N$, where $s_i$ is the best-fit \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace value obtained for the $i^\text{th}$ replica. Figure~\ref{figure:comination:scatter} shows a scatter plot of the $\chi^2_{\ensuremath{\text{min}}\xspace}$ \vs the best-fit \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace value for the 100 NNPDF3.0 replicas for the $\mu\mu$ and $\Pe\Pe$ samples, and for the combined dimuon and dielectron results. All sources of statistical and experimental systematic uncertainties are included in $72{\times}72$ covariance matrices for the data and template \AFB distributions. The $\chi^2(s)$ is defined as: \begin{equation} \chi^2(s)= (\boldsymbol{D}-\boldsymbol{T}(s))^{T}\boldsymbol{V}^{-1}(\boldsymbol{D}-\boldsymbol{T}(s)), \end{equation} where $\boldsymbol{D}$ represents the measured \AFB values for data in 72 bins, $\boldsymbol{T}(s)$ denotes the theoretical predictions for \AFB as a function of $s$, or $\ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace$, and $\boldsymbol{V}$ represents the sum of the covariance matrices for the data and templates. As illustrated in these figures, the extreme PDF replicas from either side are disfavored by both the dimuon and dielectron data.
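Given the best-fit $(\chi^2_{\text{min},i}, s_i)$ pairs per replica, the reweighting and the effective number of replicas take only a few lines. The sketch below is illustrative rather than the analysis implementation, and the replica values used to exercise it are synthetic.

```python
import math

def bayesian_reweight(chi2_min, s_best):
    """Weighted average of best-fit values s_i, with w_i ~ exp(-chi2_min/2)."""
    n = len(chi2_min)
    # Subtract the smallest chi2 before exponentiating for numerical stability;
    # the constant factor cancels in the normalization.
    c0 = min(chi2_min)
    raw = [math.exp(-0.5 * (c - c0)) for c in chi2_min]
    norm = sum(raw) / n                       # (1/N) * sum of exp(-chi2/2)
    w = [r / norm for r in raw]               # weights, summing to N
    s_avg = sum(wi * si for wi, si in zip(w, s_best)) / n
    n_eff = n * n / sum(wi * wi for wi in w)  # effective number of replicas
    return s_avg, n_eff
```

With equal $\chi^2$ values the weights are all unity, the average reduces to the unweighted mean, and $n_\text{eff}=N$; a strongly disfavored replica receives negligible weight and lowers $n_\text{eff}$.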
For each of the NNPDF3.0 replicas, the muon and electron results are combined using their respective best-fit $\chi^2$ values, \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace, and their fitted statistical and experimental systematic uncertainties. Figure~\ref{figure:comination:result} shows the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the muon and electron decay channels and their combination, with and without constraining the uncertainties in the PDFs. The corresponding numerical values are also listed in Table~\ref{table:combination}. After Bayesian $\chi^2$ reweighting, the PDF uncertainties are reduced by about a factor of 2. It should be noted that the Bayesian $\chi^2$ reweighting technique works well when the replicas span the optimal value on both sides. In addition, the effective number of replicas after $\chi^2$ reweighting, $n_\ensuremath{\text{eff}}\xspace=N^2/\sum_{i=1}^{N}w_i^2$, should be large enough to give a reasonable estimate of the average value and its standard deviation. There are 39 effective replicas after the $\chi^2$ reweighting. Including the corresponding statistical uncertainty of 0.00005, the total PDF uncertainty becomes 0.00031. As a cross-check, we perform the analysis with the corresponding set of 1000 NNPDF3.0 replicas in the dimuon channel, and find good consistency between the two results. \begin{figure*}[!htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_006-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_006-b.pdf} \includegraphics[width=0.48\textwidth]{Figure_006-c.pdf} \caption{ The upper panel in each figure shows a scatter plot of $\chi^2_\ensuremath{\text{min}}\xspace$ \vs the best-fit \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace for 100 NNPDF replicas in the muon channel (upper left), electron channel (upper right), and their combination (below).
The corresponding lower panels show the projected distributions in the best-fit \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace for the nominal (open circles) and weighted (solid circles) replicas. \label{figure:comination:scatter} } \end{figure*} \begin{figure}[!htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_007-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_007-b.pdf} \caption{ The extracted values of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the muon and electron channels, and their combination. The horizontal bars include statistical, experimental, and PDF uncertainties. The PDF uncertainties are obtained both without (\cmsLeft) and with (\cmsRight) using the Bayesian $\chi^2$ weighting. \label{figure:comination:result} } \end{figure} We have also studied the PDFs represented by Hessian eigenvectors using the CT10~\cite{CTEQ:1007}, CT14~\cite{CT14}, and MMHT2014~\cite{MMHT2014} PDFs in an analysis performed in the dimuon channel. First, we generate the replica predictions ($i$) for each observable $O$ for the Hessian eigensets ($k$): \begin{equation} O_i = O_0 + \frac{1}{2} \sum_{k=0}^{n-1} (O_{2k+1}-O_{2k+2})R_{ik}, \end{equation} where $n$ is the number of eigenvector axes, and the $R_{ik}$ are random numbers sampled from the normal distribution with a mean of 0 and a standard deviation of unity. Then, the same technique is applied as used in the NNPDF analysis. The results of the fits for these PDFs are summarized in Fig.~\ref{figure:pdffit}. After Bayesian $\chi^2$ reweighting the central predictions for all PDFs are closer to each other, and the corresponding uncertainties are significantly reduced. The result using CT14 is within about 1/3 of the PDF uncertainty of the NNPDF3.0 result in the muon channel, whereas the MMHT2014 set yields a smaller \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace value by about one standard deviation. Some of these differences can be reduced by adding more data (\eg,
including the electron channel, which is not considered in this check). Some can be attributed to the residual differences in the valence and sea quark distributions, which are not fully constrained using the \AFB distributions alone. For example, we find that the NLO NNPDF3.0 PDF set yields a very good description of the published 8 TeV CMS muon charge asymmetry ($\chi^2$ of 4.6 for 11 dof). In contrast, the $\chi^2$ values with the CT14 and MMHT2014 PDF sets are 21.3 and 21.4, respectively. We also constructed a combined set from the same number of replicas of NNPDF3.0, CT14, and MMHT2014 PDFs, and after including the data from the \PW\ charge asymmetry in the PDF reweighting, we find the combined weighted average in the dimuon channel differs from the NNPDF3.0 result by only 0.00009, and the standard deviation only increases from 0.00032 to 0.00036. Consequently, for our quoted results we use only the NNPDF3.0 PDF set, which is used in both dimuon and dielectron analyses. As an additional test, for the case of Hessian PDFs (including the Hessian NNPDF3.0~\cite{NNPDF30hes}) we perform a simultaneous $\chi^2$ fit for \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace and all PDF nuisance parameters representing the variations for each eigenvector. As expected for Gaussian distributions, we obtain the same central values and total uncertainties as those extracted from Bayesian reweighting of the corresponding set of replicas. \begin{table}[!htbp] \centering \topcaption{ The central value and the PDF uncertainty in the measured \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the muon and electron channels, and their combination, obtained without and with constraining PDFs using Bayesian $\chi^2$ reweighting.
} \label{table:combination} \begin{tabular}{ l c c } Channel & Not constraining PDFs & Constraining PDFs \\ \hline Muons & $0.23125\pm0.00054$ & $0.23125\pm0.00032$ \\ Electrons & $0.23054\pm0.00064$ & $0.23056\pm0.00045$ \\ [\cmsTabSkip] Combined & $0.23102\pm0.00057$ & $0.23101\pm0.00030$ \\ \end{tabular} \end{table} Finally, as a cross-check, we also repeat the measurement using different mass windows for extracting \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace, and for constraining the PDFs. Specifically, we first use the central five bins, corresponding to the dimuon mass range of $84<m_{\mu\mu}<95\GeV$, to extract \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace. Then, we use predictions based on the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the lower three $(60 < m_{\mu\mu} <84\GeV)$ and the higher four $(95<m_{\mu\mu}<120\GeV)$ dimuon mass bins, to constrain the PDFs. We find that the statistical uncertainty increases by only about 10\%, and the PDF uncertainty increases by only about 6\% relative to the uncertainties obtained when using the full mass range to extract the \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace and simultaneously constrain the PDFs. The test thereby confirms that the PDF uncertainties are constrained mainly by the high- and low-mass bins, and that we obtain consistent results with these two approaches. \begin{figure}[!htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_008-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_008-b.pdf} \caption{ Extracted values of \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace from the dimuon data for different sets of PDFs with the nominal (\cmsLeft) and $\chi^2$-reweighted (\cmsRight) replicas. The horizontal error bars include contributions from statistical, experimental, and PDF uncertainties. 
\label{figure:pdffit} } \end{figure} \section{Summary} The effective leptonic mixing angle, \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace, has been extracted from measurements of the mass and rapidity dependence of the forward-backward asymmetries \AFB in Drell--Yan $\mu\mu$ and $\Pe\Pe$ production. As a baseline model, we use the \POWHEG event generator for the inclusive $\Pp\Pp\to\PZ/\gamma\to\ell\ell$ process at leading electroweak order, where the weak mixing angle is interpreted through the improved Born approximation as the effective angle incorporating higher-order corrections. With more data and new analysis techniques, including precise lepton-momentum calibration, angular event weighting, and additional constraints on PDFs, the statistical and systematic uncertainties are significantly reduced relative to previous CMS measurements. The combined result from the dielectron and dimuon channels is: \ifthenelse{\boolean{cms@external}}{ \begin{multline} \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=0.23101 \pm 0.00036\stat \pm 0.00018\syst\\ \pm 0.00016\thy \pm 0.00031\,(\text{PDF}), \end{multline} }{ \begin{equation} \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=0.23101 \pm 0.00036\stat \pm 0.00018\syst \pm 0.00016\thy \pm 0.00031\,(\text{PDF}), \end{equation} } or summing the uncertainties in quadrature, \begin{equation} \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace=0.23101\pm0.00053. \end{equation} A comparison of the extracted \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace with previous results from LEP, SLC, Tevatron, and LHC, shown in Fig.~\ref{figure:result}, indicates consistency with the mean of the most precise LEP and SLD results, as well as with the other measurements. 
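As an illustration of the Bayesian $\chi^2$ reweighting used above, the following minimal numerical sketch (not the analysis code; the Giele--Keller form of the weights, the variable names, and the toy inputs are assumptions for the sketch) computes replica weights, the weighted average, and the effective replica number $n_\text{eff}=N^2/\sum_i w_i^2$ as defined in the text:

```python
import numpy as np

def chi2_weights(chi2, n_data):
    # Giele-Keller style Bayesian weights (assumed form):
    # w_i ~ chi2_i^{(n_data-1)/2} * exp(-chi2_i / 2), computed in log
    # space to avoid overflow, then normalized so that sum(w) = N.
    logw = 0.5 * (n_data - 1) * np.log(chi2) - 0.5 * chi2
    logw -= logw.max()
    w = np.exp(logw)
    return w * len(w) / w.sum()

def reweighted(values, w):
    # Weighted mean, weighted standard deviation, and the effective
    # replica number n_eff = N^2 / sum(w_i^2).
    mean = np.average(values, weights=w)
    std = np.sqrt(np.average((values - mean) ** 2, weights=w))
    n_eff = len(w) ** 2 / np.sum(w ** 2)
    return mean, std, n_eff
```

By the Cauchy--Schwarz inequality $n_\text{eff}\le N$, and strongly peaked weights drive it well below $N$, which is why a sufficiently large $n_\text{eff}$ is needed for a reliable weighted average.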
\begin{figure*}[!htbp] \centering \includegraphics[width=0.8\textwidth]{Figure_009.pdf} \caption{ Comparison of the measured \ensuremath{\sin^2\theta^{\ell}_{\text{eff}}}\xspace in the muon and electron channels and their combination, with previous LEP, SLD, Tevatron, and LHC measurements. The shaded band corresponds to the combination of the LEP and SLD measurements. \label{figure:result} } \end{figure*} \begin{acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR and RAEP (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI and FEDER (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE 
and NSF (USA). \hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the "Excellence of Science - EOS" - be.h project n. 30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
\section{Introduction} \subsection{Problem setup.} Let $X \in \Rnr$ be a fixed but unknown matrix with ${\rm rank}(X)=r$. Our aim is to recover $X$ from given quadratic measurements, i.e., \begin{equation} \label{question} {\rm find }\quad X\in \R^{n\times r}, \quad {\rm s.t.}\quad y_i=a_i^\T XX^\T a_i=\norm{a_i^\T X}^2, \qquad i=1,\ldots,m, \end{equation} where $a_i=(a_{i,1},\ldots,a_{i,n})\in \R^n$. This problem arises in many emerging applications in science and engineering, such as covariance sketching, quantum state tomography, and high-dimensional data streams \cite{rankone3,kueng2017low, rankone2}. A simple observation is that $a_i^\T XX^\T a_i=a_i^\T XOO^\T X^\T a_i$ for any orthogonal matrix $O\in \R^{r\times r}$, so we can only hope to recover $X$ up to a right orthogonal matrix. Moreover, there exists an orthogonal matrix $O^*\in \R^{r\times r}$ such that $XO^*$ has orthogonal columns; hence, throughout the paper we may assume that $X$ has orthogonal columns. To recover $X$ from the measurements (\ref{question}), we consider the following optimization problem: \begin{equation} \label{optimization problem} \minm{U\in \Rnr} f(U)=\frac{1}{4m}\sum_{i=1}^m(y_i-\norm{a_i^\T U}^2)^2. \end{equation} The aim of this paper is to develop algorithms for solving (\ref{optimization problem}). \subsection{Related work} \subsubsection{Low rank matrix recovery} The rank minimization problem is a direct generalization of compressed sensing \cite{recht2010guaranteed,jain2013low}. It aims to reconstruct a low-rank matrix $Q\in \R^{n\times n}$ from incomplete measurements, and can be formulated as the following program: \begin{equation}\label{eq:minrank} \begin{array}{l} \mathop{\min} \limits_{Z \in \R^{n\times n}} \qquad \rank (Z) \\ \text{subject to} \quad \tr(A_iZ)=y_i,\quad i=1,\ldots, m, \end{array} \end{equation} where $y_i=\tr(A_iQ), A_i\in \R^{n\times n}, i=1,\ldots,m$.
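As a concrete illustration of the measurement model (\ref{question}) and the objective (\ref{optimization problem}), the following minimal sketch generates Gaussian quadratic measurements and evaluates $f$ (all sizes and array names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 50, 2, 400                      # illustrative problem sizes

X = rng.standard_normal((n, r))           # unknown matrix (known here only for simulation)
A = rng.standard_normal((m, n))           # rows are the Gaussian measurement vectors a_i
y = np.sum((A @ X) ** 2, axis=1)          # y_i = ||a_i^T X||^2 = a_i^T X X^T a_i

def f(U):
    # objective (2): f(U) = (1/4m) * sum_i (y_i - ||a_i^T U||^2)^2
    return np.sum((y - np.sum((A @ U) ** 2, axis=1)) ** 2) / (4 * m)
```

Note that $f(XO)=f(X)=0$ for any orthogonal $O$, which is exactly the ambiguity discussed above.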
In \cite{Xulowrank}, Xu proved that, in order to guarantee that the solution of (\ref{eq:minrank}) is $Q$, where $Q\in {\mathbb C}^{n\times n}$ and ${\rm rank}(Q)\leq r$, the minimal number of measurements is $m=4nr-4r^2$. Since (\ref{eq:minrank}) is non-convex, it is challenging to solve \cite{meka2008rank}. However, under a certain restricted isometry property (RIP), the problem can be relaxed to a nuclear norm minimization problem, which is a convex program and can be solved efficiently \cite{candes2011tight,recht2010guaranteed}. Noting that $M:=XX^\T$ is a low-rank matrix, we can recast (\ref{question}) as a rank minimization problem. This means that we can use nuclear norm minimization to recover the matrix $M$ and hence $X$: \begin{equation}\label{eq:rankone} \begin{array}{l} \mathop{\min} \limits_{Z \in {\mathcal H}_n} \qquad \|Z\|_* \\ \text{subject to} \quad \tr(A_iZ)=y_i,\quad i=1,\ldots, m, \end{array} \end{equation} where ${\mathcal H}_n:=\{Q\in \R^{n\times n}:Q=Q^\T\}$ and $A_i=a_ia_i^*$. Problem (\ref{eq:rankone}) was studied in \cite{kueng2017low, rankone3}, where it was proved that $m\ge Cnr$ Gaussian measurements are sufficient to recover the unknown matrix $M=XX^\T$ exactly. In \cite{rauhut2016low}, Rauhut and Terstiege also considered the case where the measurement vectors $a_i, i=1,\ldots,m$, come from a tight frame. \subsubsection{Phase retrieval} In the setting $r=1$, problem (\ref{question}) reduces to the phase retrieval problem. Phase retrieval is the task of recovering an unknown signal $x\in \mathbb{H}^n$ from the magnitudes of linear measurements, \begin{equation} y_i=\abs{\langle a_i,x\rangle}^2, \quad i=1,\ldots,m, \end{equation} where $ a_i\in \mathbb{H}^n$ $(\mathbb{H}=\mathbb{C}$ or $\mathbb{R})$ are the sampling vectors.
This problem arises in many imaging applications, owing to the limitations of optical sensors, which can only record intensity information; examples include X-ray crystallography \cite{harrison1993phase,millane1990phase}, astronomy \cite{fienup1987phase}, and diffraction imaging \cite{shechtman2015phase,gerchberg1972practical}. It has been proved that $m\ge 4n-4$ Gaussian measurements are sufficient to recover the unknown vector up to a global phase \cite{phase1}. In recent years, several different algorithms have been proposed to solve it \cite{balan2012reconstruction,matrixcompletion,demanet2014stable,eldar2014phase,netrapalli2013phase}. In \cite{WF}, Cand\`es et al. designed the Wirtinger Flow algorithm for phase retrieval, which solves the non-convex optimization problem \begin{equation}\label{eq:gaoxu} \minm{u\in \mathbb{C}^n} \frac{1}{4m}\sum\limits_{i=1}^m(y_i-\abs{a_i^{*}u}^2)^2, \end{equation} and proved that the algorithm converges to the true signal up to a global phase with high probability, provided there are $m=O(n\log n)$ Gaussian measurements. Following the work of \cite{WF}, Chen and Cand\`es \cite{TWF} proposed a modified gradient method called {\em Truncated Wirtinger Flow}, which removes the additional logarithmic factor in the number of measurements $m$. In \cite{Gaoxu}, Gao and Xu proposed a Gauss--Newton algorithm to solve (\ref{eq:gaoxu}) and proved that, for real signals, the algorithm converges quadratically to the global optimal solution with $O(n\log n)$ measurements. \subsection{Our contribution} Algorithms for solving (\ref{optimization problem}) were designed in \cite{thelocal,zheng2015convergent}. In order to guarantee convergence to the global optimal solution, the algorithm in \cite{thelocal} requires that $m\ge C\normf{X}^8\lambda_r^{-4}nr^2\log^2n$, while the algorithm in \cite{zheng2015convergent} needs $m=O(r^3\kappa^2n\log n)$, where $\kappa$ denotes the condition number of $XX^\top$.
In contrast to those algorithms, we aim to reduce the sampling complexity by removing the additional logarithmic factor in $n$. In this paper, we propose a novel algorithm, which we call the {\em exponential-type gradient descent algorithm}. For the initialization, we obtain a tighter initial guess through a careful truncation scheme; for the iteration update step, we multiply each term of the classical gradient by a bounded exponential-type factor. In particular, we show that the following all hold with high probability: \begin{itemize} \item We present a spectral initialization method which obtains a good initial guess provided $m\ge C\sigma_r^{-2}\normf{X}^4nr$ and $a_i, i=1,\ldots,m $, are Gaussian random vectors, where $\sigma_r,\sigma_1$ are the smallest and the largest nonzero eigenvalues of the positive semidefinite matrix $XX^\T $. \item Starting from our initial guess, we refine the initial estimate by iteratively applying a novel gradient update rule. If $m \ge C\sigma_r^{-2}\normf{X}^4nr\log(cr\normf{X}^2/\sigma_r) $, then our algorithm converges linearly to a global minimizer $X$, up to a right orthogonal matrix. More importantly, the step size in our algorithm is independent of the dimension $n$. \end{itemize} \subsection{Organization} The paper is organized as follows. First, we introduce some notations and lemmas in Section 2. In Section 3, we introduce the exponential-type gradient descent algorithm for solving (\ref{optimization problem}). We study the convergence of the new algorithm in Section 4. In Section 5, we present the main ideas for proving the results given in Section 4. Numerical experiments are presented in Section 6. Finally, most of the detailed proofs are given in the Appendix. \section{Preliminaries} \subsection{Notations} Throughout the paper, we assume that $ X =(x_1,\ldots, x_r) \in \Rnr $ has orthogonal columns. Without loss of generality, we assume that $ \norm{x_1} \geq \norm{x_2} \geq \cdots \geq \norm{x_r}$.
We use Gaussian random vectors $ a_i\in\R^n, \, i=1,\ldots, m $, as the measurement vectors and observe $y_i=a_i^\T XX^\T a_i, \, i=1,\ldots,m$. Here we say the sampling vectors are Gaussian random measurements if the $a_i\in {\mathbb R}^n$ are i.i.d. $ \mathcal{N}(0,I) $ random vectors. Since the entire solution set is the manifold $\mathcal{X}:=\{XO:O\in \mathcal{O}(r)\}$, where $\mathcal{O}(r)$ is the set of $r\times r$ orthogonal matrices, we define the distance between a matrix $U\in \Rnr$ and $X$ as \begin{equation}\label{distance} d(U)\,\,:=\,\,\mathop{\min} \limits_{O \in \mathcal{O}(r)} \|XO-U\|_F. \end{equation} For convenience, we assume that \begin{equation} \label{eigen} \sigma_1 \geq \sigma_2\geq \cdots \geq \sigma_r >0 \end{equation} are the nonzero eigenvalues of the matrix $XX^\T $. \subsection{Lemmas} We now introduce some lemmas which will be used throughout the paper. First, we recall a result about random matrices with non-isotropic sub-gaussian rows \cite[Equation (5.26)]{vershynin2010introduction}. \begin{lemma} (\cite[Equation (5.26)]{vershynin2010introduction}) \label{A introduction} Let $A$ be an $N\times n$ matrix whose rows are $A_i$, assume that the $\Sigma^{-1/2}A_i$ are isotropic sub-gaussian random vectors, and let $K$ be the maximum of their sub-gaussian norms. Then for every $t\ge 0$, the following inequality holds with probability at least $1-2\exp(-ct^2)$: \[ \norm{\frac{1}{N}A^*A-\Sigma}\le \max(\delta,\delta^2)\norm{\Sigma} \qquad \text{where} \quad \delta=C\sqrt{\frac{n}{N}}+\frac{t}{\sqrt{N}}. \] Here $C,c$ are constants. \end{lemma} The next result is a Bernstein-type inequality for sub-exponential random variables \cite[Proposition 5.26]{vershynin2010introduction}. \begin{lemma} (\cite[Proposition 5.26]{vershynin2010introduction}) \label{Bernstein inequality} Let $X_1,\ldots,X_N$ be independent centered sub-exponential random variables and $K=\max_i\|X_i\|_{\psi_1}$.
Then for every $a=(a_1,\ldots,a_N)\in \R^N$ and every $t\ge 0$, we have \[ \mathbb{P}\Big\{|\sum_{i=1}^Na_iX_i|\ge t\Big\}\le 2\exp\Big[-c\min\big(\frac{t^2}{K^2\norm{a}^2},\frac{t}{K\|a\|_\infty}\big)\Big], \] where $c>0$ is an absolute constant. \end{lemma} \begin{lemma} \label{lemma 3.1} For any $\delta>0$, assume that $m\ge 16\delta^{-2}n$ and that $a_i, i=1,\ldots,m$, are Gaussian random vectors. Then for any positive semidefinite matrix $M\in \R^{n\times n}$, \[(1-\delta)\|M\|_*\le \frac{1}{m}\sum_{i=1}^m a_i^\T M a_i \le (1+\delta)\|M\|_* \] holds on an event $E_\delta$ of probability at least $1-2\exp(-m\epsilon^2/2)$, where $\delta/4=\epsilon^2+\epsilon$ and the norm $\|\cdot\|_*$ denotes the nuclear norm of a matrix. In particular, the right inequality holds for all matrices. \end{lemma} \begin{proof} The first part of this lemma is a direct consequence of Lemma 3.1 in \cite{phaselift}. Hence, we only need to prove that the right inequality holds for all matrices. Assume that the rank of the matrix $M$ is $r$. By the singular value decomposition, we can write $M=\sum_{j=1}^r \sigma_ju_jv_j^\T$, where $u_j,v_j$ are unit vectors. Thus it suffices to show that \begin{equation*} \frac{1}{m}\sum_{i=1}^m (a_i^\T u) (a_i^\T v) \le 1+\delta \end{equation*} holds for any fixed unit vectors $u,v$. Indeed, if we denote $A:=(a_1,\dots,a_m)^\T$, then \begin{eqnarray*} \sum_{i=1}^m (a_i^\T u) (a_i^\T v) &\le & \frac{1}{2}\sum_{i=1}^m (a_i^\T u)^2+ \frac{1}{2}\sum_{i=1}^m (a_i^\T v)^2 \\ &=& (\norm{Au}^2+\norm{Av}^2)/2\\ &\le & \sigma_{\max}^2(A), \end{eqnarray*} where $\sigma_{\max}(A)$ is the maximum singular value of $A$. From the well-known deviation bounds for the singular values of Gaussian random matrices, i.e., \begin{equation*} \PP(\sigma_{\max}(A)\ge \sqrt{m}+\sqrt{n}+t)\le \exp(-t^2/2), \end{equation*} we arrive at the conclusion by taking $m\ge \epsilon^{-2}n$ and $t=\sqrt{m}\epsilon$.
\end{proof} \section{Exponential-type Gradient Descent Algorithm} Our aim is to recover a matrix $X\in \Rnr$ (up to right multiplication by an orthogonal matrix) from the quadratic measurements \[ y_i=\norm{a_i^\T X}^2 , \quad i=1,\ldots,m, \] by solving the non-convex optimization problem \begin{equation}\label{eq:alg} \minm{U\in \Rnr} f(U)=\frac{1}{4m}\sum_{i=1}^m(y_i-\norm{a_i^\T U}^2)^2. \end{equation} In this section, we introduce an exponential-type gradient descent algorithm for solving (\ref{eq:alg}). \subsection{Spectral Initialization} The first step of our algorithm is to choose a good initial guess. In \cite{thelocal}, Sanghavi, Ward and White choose $U_0=Z\Lambda^{1/2}$ as the initial guess, where the columns of $Z\in \Rnr$ are the normalized eigenvectors corresponding to the $r$ largest eigenvalues $\lambda_1\ge\cdots\ge\lambda_r$ of the matrix $Y=\frac{1}{2m}\sum\limits_{i=1}^my_ia_ia_i^\T$ and the diagonal matrix $\Lambda={\rm diag}(\Lambda_1,\ldots,\Lambda_r)$ is given by $\Lambda_i=\lambda_i-\lambda_{r+1}$. To guarantee the convergence of the iterative method, the initialization introduced in \cite{thelocal} requires $ O(nr^2\log^2 n)$ measurements. Motivated by the methods for choosing the initial guess in \cite{TWF} and \cite{thelocal}, we introduce a novel initialization method, stated in Algorithm \ref{initialization1}. We prove that the new method needs only $O(nr)$ measurements to obtain the same accuracy as the method suggested in \cite{thelocal}. \begin{algorithm}[H] \caption{Initialization}\label{initialization1} \begin{algorithmic}[H] \Require Measurements $ y_i=\|a_i^\T X\|^2, i=1,\ldots,m $, where $a_i$ are Gaussian random vectors; parameter $\alpha_y >0$.
\\ Define $ U_0=U\Sigma^{1/2}$, where the columns of $U$ are the normalized eigenvectors corresponding to the $r$ largest eigenvalues $ \lambda_1 \ge \cdots \ge \lambda_{r} $ of the matrix \[ Y=\frac{1}{m}\sum_{i=1}^{m}y_ia_ia_i^\T \1_{\{y_i \le \frac{\alpha_y}{m}\sum_{k=1}^{m} y_k\}} \] and the diagonal matrix $\Sigma$ is given by \[ \Sigma_{i,i}=\frac{1}{2}(\lambda_i-\lambda_{r+1}). \] \Ensure Initial guess $ U_0 $. \end{algorithmic} \end{algorithm} In our analysis, we require that the parameter $\alpha_y$ in Algorithm 1 satisfies $\alpha_y \ge C\sqrt{\log(c\kappa r)}$, where $\kappa$ is the ratio of the largest to the smallest nonzero eigenvalues of the matrix $XX^\T$ and $C,c$ are universal constants. This means that the choice of $\alpha_y$ depends only on the condition number $\kappa$ and the rank $r$ of $X$. \subsection{Exponential-type Gradient Descent} The next step of our algorithm is to refine the initial guess by an update rule that searches for the global optimal solution. In \cite{thelocal}, Sanghavi, Ward and White iteratively update $U$ via gradient descent and prove that the gradient descent method converges to the global optimal solution provided $m\geq Cnr\log^2n$. We next introduce an exponential-type gradient descent update rule. For $k=0,1,\ldots$, we take the iteration step as \begin{equation}\label{itera step} U_{k+1}=U_k-\mu\nabla f_{\ex}(U_k), \end{equation} where $ \nabla f_{\ex}(\cdot) $ denotes the exponential-type gradient given by \begin{equation} \label{gradient} \nabla f_{\ex}(U)=\frac{1}{m}\sum_{i=1}^m(a_i^\T UU^\T a_i-a_i^\T XX^\T a_i)a_ia_i^\T U \cdot \exp\Big(-\frac{my_i}{\alpha\sum_{k=1}^my_k}\Big), \end{equation} where $\alpha>0$.
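In code, the exponential-type gradient (\ref{gradient}) can be written compactly; the following is a minimal NumPy sketch (array names and the default $\alpha$ are illustrative), exploiting that the gradient equals $\frac{1}{m}A^\T D AU$ for a diagonal matrix $D$ of damped residuals:

```python
import numpy as np

def grad_exp(U, A, y, alpha=20.0):
    # Exponential-type gradient: each residual term of the classical
    # gradient is damped by exp(-m*y_i / (alpha * sum_k y_k)), a factor
    # bounded in (0, 1] that suppresses measurements with large y_i.
    m = len(y)
    res = np.sum((A @ U) ** 2, axis=1) - y        # a_i^T U U^T a_i - y_i
    damp = np.exp(-m * y / (alpha * y.sum()))
    return A.T @ ((res * damp)[:, None] * (A @ U)) / m
```

Since the damping factors are strictly positive, the gradient vanishes exactly at any global minimizer $U=XO$, just as the classical gradient does.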
We state our algorithm as follows: \begin{algorithm}[H] \caption{Exponential-type Gradient Descent Algorithm} \begin{algorithmic}[H] \Require Measurement vectors: $a_i\in \R^n, i=1,\ldots,m $; Observations: $y\in \R^m$; Parameter $\alpha$; Step size $\mu$; $\epsilon>0$ \\ \begin{enumerate} \item[1:] Set $T:=c\log \frac{1}{\epsilon}$, where $ c $ is a sufficiently large constant. \item[2:] Use Algorithm 1 to compute an initial guess $U_0$. \item[3:] For $k=0,1,2,\ldots,T-1$ do \[\begin{array}{ll} U_{k+1} & =U_k-\mu\nabla f_{\ex}(U_k)\\ \nonumber & =U_k-\frac{\mu}{m}\sum_{i=1}^m(a_i^\T U_kU_k^\T a_i-y_i)a_ia_i^\T U_k \cdot \exp\Big(-\frac{my_i}{\alpha\sum_{k=1}^my_k}\Big) \end{array} \] \item[4:] End for \end{enumerate} \Ensure The matrix $ U_T $. \end{algorithmic} \end{algorithm} \begin{remark} There is a parameter $\alpha$ in Algorithm 2. Throughout this paper, we select $\alpha\geq 20$. Numerical experiments in Section 6 show that the algorithm's performance is not sensitive to the selection of $\alpha$. \end{remark} \section{Main results} In this section we present our main results, which give the theoretical guarantees for Algorithm 2. We first study Algorithm 1, showing that our initial guess $U_0$ is not far from $\{ XO : O \in {\mathcal O}(r) \}$. \begin{theorem}\label{initial theorem} Suppose that $m \ge c_0\sigma_r^{-2}\normf{X}^4nr$ and \[ y_i=a_i^\T XX^\T a_i=\norm{a_i^\T X}^2,\,\, i=1,\ldots,m, \] where the $a_i\in \R^n$ are Gaussian random vectors. Let $ U_0 $ be the output of Algorithm \ref{initialization1} with $\alpha_y \ge C\sqrt{\log(c\kappa r)}$, where $\kappa=\sigma_1/\sigma_r$ denotes the ratio of the largest to the smallest nonzero eigenvalues of the matrix $XX^\T$. Then with probability at least $ 1-6\exp(-\Omega(n))$ we have \[ d(U_0)\,\, \le\,\, \sqrt{\frac{\sigma_r}{8}}, \] where $c,c_0$ and $C$ are absolute constants, and $d(U_0)$ is defined as \[ d(U_0):=\mathop{\min} \limits_{O \in \mathcal{O}(r)} \|XO-U_0\|_F.
\] \end{theorem} We next consider the convergence of Algorithm 2. \begin{theorem}\label{itarates} Suppose that $m \ge c_0\sigma_r^{-2}\normf{X}^4nr\log(c_1r\normf{X}^2/\sigma_r)$ and \[ y_i=a_i^\T XX^\T a_i=\norm{a_i^\T X}^2,\,\, i=1,\ldots,m, \] where the $a_i\in \R^n$ are Gaussian random vectors. Suppose that $U_k\in \R^{n\times r}$ satisfies $d(U_k) \le \sqrt{\frac{1}{8}\sigma_r}$, and let $U_{k+1}$ be defined by the update rule (\ref{itera step}) with step size $\mu\le \frac{\sigma_r^3}{c_2\sigma_1\normf{X}^6}$. Then with probability at least $1-C\exp(-\Omega(n))$, the iteration step (\ref{itera step}) satisfies \begin{equation} \label{iterate in theorem} d(U_{k+1}) \le \Big(1-\rho_0\Big)^{1/2}d(U_k), \end{equation} where $\rho_0=\frac{2\mu\sigma_r}{7}$. \end{theorem} Combining Theorem \ref{initial theorem} and Theorem \ref{itarates}, we obtain the following corollary, which shows that Algorithm 2 converges with high probability provided $ m\ge Cnr\log (cr)$. \begin{corollary} Suppose that $m \ge c_0\sigma_r^{-2}\normf{X}^4nr\log(c_1r\normf{X}^2/\sigma_r)$ and $y_i=a_i^\T XX^\T a_i=\norm{a_i^\T X}^2,\,\, i=1,\ldots,m$, where the $a_i\in \R^n$ are Gaussian random vectors. Let $\epsilon$ be an arbitrary constant in the range $(0,\sqrt{\sigma_r/8})$. Then with probability at least $1-C\exp(-\Omega(n))$, Algorithm 2 outputs $U_T$ satisfying \[ d(U_{T})\,\, \le\,\, \epsilon, \] provided the step size $\mu\le \frac{\sigma_r^3}{c_2\sigma_1\normf{X}^6}$ and $T\ge \log \frac{\sigma_r}{8\epsilon^2}\Big/\log \frac{1}{1-\rho_0}$, where $\rho_0=\frac{2\mu\sigma_r}{7}$. \begin{proof} According to Theorem \ref{initial theorem}, with probability at least $ 1-6\exp(-\Omega(n))$ we have \[ d(U_0) \le \sqrt{\frac{\sigma_r}{8}}.
\] From the iterative inequality (\ref{iterate in theorem}) in Theorem \ref{itarates}, we obtain \begin{eqnarray*} d(U_{T}) &\le & \Big(1-\rho_0\Big)^{1/2}d(U_{T-1}) \\ &\le & \Big(1-\rho_0\Big)^{T/2}d(U_{0}) \\ &\le & \sqrt{\frac{\sigma_r}{8}}\Big(1-\rho_0\Big)^{T/2} \\ &\le & \epsilon, \end{eqnarray*} which holds with probability at least $1-C\exp(-\Omega(n))$. \end{proof} \end{corollary} \begin{remark} According to Theorem \ref{itarates}, to guarantee that Algorithm 2 converges to the true matrix, we require that the step size satisfy \begin{equation}\label{eq:step1} \mu\,\,\le\,\, \sigma_r^3/(C\sigma_1\normf{X}^6). \end{equation} Noting that $\normf{X}^4=(\sigma_1+\cdots+\sigma_r)^2\le r^2\sigma_1^2$, we have $\sigma_r^3/(C\sigma_1\normf{X}^6)\ge 1/(C\kappa^3 r^2\normf{X}^2)$, which implies that \begin{equation}\label{eq:stepsize} \mu\,\, \le\,\, 1/(C\kappa^3 r^2\normf{X}^2) \end{equation} is enough to guarantee that (\ref{eq:step1}) holds. Recall that the algorithms in \cite{thelocal} and \cite{zheng2015convergent} require that $\mu\le 1/(Cn^4\log^4(nr)\normf{X}^2)$ and $\mu\le C/(\kappa n\normf{X}^2)$, respectively. Compared with the step sizes in \cite{thelocal} and \cite{zheng2015convergent}, our step size is independent of the matrix dimension $n$. \end{remark} \section{The proof of the main results} In this section we give the proofs of the main results. For convenience, for $U\in \R^{n\times r}$, we set \begin{equation}\label{eq:xb} \Xb:=\Xb_U:=\argmin{Z \in \mathcal{X}}\normf{U-Z}, \end{equation} where $\mathcal{X}:=\{XO:O\in \mathcal{O}(r)\}$ and $\mathcal{O}(r)$ is the set of $r\times r$ orthogonal matrices. Motivated by the results in \cite{WF}, we next give the definition of the regularity condition. Under this condition, we shall prove that our algorithm converges linearly to the true matrix $X$ if the initial guess is not far from it.
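The minimizer in (\ref{eq:xb}) has a closed form via the orthogonal Procrustes problem: if $X^\T U = ZDV^\T$ is a singular value decomposition, then the minimizing orthogonal matrix is $O^*=ZV^\T$ (this is also the form used to compute the relative error in Section 6). A minimal numerical sketch, with illustrative function names:

```python
import numpy as np

def closest_in_manifold(X, U):
    # argmin over Z in {XO : O orthogonal} of ||U - Z||_F.
    # Orthogonal Procrustes: with SVD X^T U = Z D V^T, O* = Z V^T.
    Z, _, Vt = np.linalg.svd(X.T @ U)
    return X @ (Z @ Vt)

def dist(X, U):
    # d(U) = min over orthogonal O of ||X O - U||_F
    return np.linalg.norm(closest_in_manifold(X, U) - U)
```

In particular, $d(XQ)=0$ for any orthogonal $Q$, and $d(U)\le\normf{X-U}$ since $O=I$ is feasible.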
\begin{definition}[Regularity Condition] We say that the function $f$ satisfies the regularity condition $RC(\nu,\lambda,\varepsilon)$ if there exist constants $\nu,\lambda$ such that for all matrices $U \in \Rnr$ satisfying $d(U)\le \varepsilon$ we have \[ \langle\nabla f_{\ex}(U),U-\Xb\rangle\ge \frac{1}{\nu}\sigma_r\normf{U-\Xb}^2+\frac{1}{\lambda\normf{X}^2}\normf{\nabla f_{\ex}(U)}^2, \] where $ \nabla f_{\ex}(\cdot) $ is defined in (\ref{gradient}) and $\Xb$ is defined in (\ref{eq:xb}). \end{definition} Assuming that $f$ satisfies the regularity condition, the next lemma bounds the progress of the update rule. \begin{lemma} \label{itera} Assume that the function $f$ satisfies the regularity condition $RC(\nu,\lambda,\varepsilon)$ and $d(U_k)\le \varepsilon$. If we take the step size $\mu\le \min\left( \frac{\nu}{2\sigma_r}, \frac{2}{\lambda\normf{X}^2}\right)$, then $U_{k+1}=U_k-\mu\nabla f_{\ex}(U_k)$ satisfies \[ d(U_{k+1})\,\,\le\,\, \sqrt{1-\frac{2\mu\sigma_r}{\nu}}d(U_k). \] \end{lemma} \begin{proof} For convenience, we set \begin{equation}\label{eq:xbk} \Xb_k:=\argmin{Z \in \mathcal{X}}\normf{U_k-Z}. \end{equation} Under the regularity condition $RC(\nu,\lambda,\varepsilon)$, we have \begin{align} d(U_{k+1})^2&\le \normf{U_k-\Xb_k-\mu\nabla f_{\ex}(U_k)}^2 \\ \nonumber & =\normf{U_k-\Xb_k}^2-2\mu\langle\nabla f_{\ex}(U_k),U_k-\Xb_k\rangle+\mu^2\normf{\nabla f_{\ex}(U_k)}^2 \\ \nonumber & \le \normf{U_k-\Xb_k}^2-2\mu\Big(\frac{1}{\nu}\sigma_r\normf{U_k-\Xb_k}^2+\frac{1}{\lambda\normf{X}^2}\normf{\nabla f_{\ex}(U_k)}^2\Big) +\mu^2\normf{\nabla f_{\ex}(U_k)}^2 \\ \nonumber & =\left(1-\frac{2\mu\sigma_r}{\nu}\right)\normf{U_k-\Xb_k}^2+\mu\Big(\mu-\frac{2}{\lambda\normf{X}^2}\Big)\normf{\nabla f_{\ex}(U_k)}^2\\ \nonumber & \le \left(1-\frac{2\mu\sigma_r}{\nu}\right)d(U_k)^2, \end{align} where the last inequality follows from $\mu\le \frac{2}{\lambda\normf{X}^2}$.
\end{proof} Based on Lemma \ref{itera}, the key step in proving Theorem \ref{itarates} is to show that the function $f$ satisfies the regularity condition with high probability. The next lemma shows that $f$ satisfies the regularity condition provided $ m \ge c_0\sigma_r^{-2}\normf{X}^4nr\log(c_1r\normf{X}^2/\sigma_r)$. \begin{lemma} \label{regularity condition} Suppose that $ m \ge c_0\sigma_r^{-2}\normf{X}^4nr\log(c_1r\normf{X}^2/\sigma_r)$ and that $f$ is defined in (\ref{optimization problem}). Then $f$ satisfies the regularity condition $RC\Big(7,\frac{250\alpha^2\sigma_1\normf{X}^4}{\sigma_r^3},\sqrt{\frac{1}{8}\sigma_r}\Big)$ with probability at least $1-C\exp(-\Omega(n))$, where $\alpha$ is the constant in $\nabla f_{\ex}$ and $C,c_0,c_1$ are universal constants. \end{lemma} We next give the proof of Theorem \ref{itarates}. \begin{proof}[Proof of Theorem \ref{itarates}] According to Lemma \ref{regularity condition}, if $ m \ge c_0\sigma_r^{-2}\normf{X}^4nr\log(c_1r\normf{X}^2/\sigma_r)$, then $f$ satisfies the regularity condition with $\nu=7$, $\lambda=250\alpha^2\sigma_1\normf{X}^4/\sigma_r^3$ and $\varepsilon=\sqrt{\sigma_r/8}$ with probability at least $1-C\exp(-\Omega(n))$. Noting that $d(U_k) \le \sqrt{\frac{1}{8}\sigma_r}$, Lemma \ref{itera} implies that \begin{equation*} d(U_{k+1})\,\,\le\,\, \sqrt{1-\frac{2\mu\sigma_r}{\nu}}d(U_k)=\Big(1-\frac{2\mu\sigma_r}{7}\Big)^{1/2}d(U_k), \end{equation*} provided the step size \begin{equation*} \mu \le \min\left( \frac{\nu}{2\sigma_r}, \frac{2}{\lambda\normf{X}^2}\right)=\frac{\sigma_r^3}{125\alpha^2\sigma_1\normf{X}^6}=\frac{\sigma_r^3}{c_2\sigma_1\normf{X}^6}. \end{equation*} \end{proof} It remains to prove Lemma \ref{regularity condition}. To this end, we introduce the following proposition; the full details can be found in the appendix. \begin{prop}\label{pr:1} Assume that $\normf{X}=1$ and that $ m \ge c_0\sigma_r^{-2}nr\log(c_1r/\sigma_r)$.
Then with probability at least $1-C\exp(-\Omega(n))$, the following hold for all matrices $U \in \Rnr$ satisfying $d(U)\le \sqrt{\frac{\sigma_r}{8}}$: \begin{eqnarray} (a) &\langle\nabla f_{\ex}(U),H\rangle &\ge 0.166\sigma_r\normf{H}^2+0.78\left(\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2\right) \label{eq:a}\qquad \\ (b) &\frac{\sigma_r^2\normf{\nabla f_{\ex}(U)}^2}{3\alpha^2\left(\normf{H}^2+\normf{X}^2\right)}&\le 1.223\sigma_1\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2, \label{eq:b}\qquad \end{eqnarray} where $H=U-\Xb$ and $\Xb$ is defined in (\ref{eq:xb}). \end{prop} Now we can give the proof of Lemma \ref{regularity condition}. \begin{proof}[Proof of Lemma \ref{regularity condition} ] By homogeneity, it suffices to consider the case where $\normf{X}=1$. For any $0<\gamma<1$, multiplying both sides of (\ref{eq:b}) by $\gamma\sigma_r/\sigma_1$ we have \begin{eqnarray*} \frac{\gamma\sigma_r^3\normf{\nabla f_{\ex}(U)}^2}{3\alpha^2\sigma_1\left(\normf{H}^2+\normf{X}^2\right)} &\le& 1.223\gamma\sigma_r\normf{H}^2+\gamma\sigma_r\tr^2(H^\T\Xb)/\sigma_1+\gamma\sigma_r\normf{H^\T\Xb}^2/\sigma_1. \end{eqnarray*} Note that $\sigma_r\le\sigma_1\leq 1$. Taking $\gamma=0.166/12.23$ and then combining with $(\ref{eq:a})$, we obtain \begin{align} \langle\nabla f_{\ex}(U),H\rangle & \ge 0.1494\sigma_r\normf{H}^2+\frac{\sigma_r^3\normf{\nabla f_{\ex}(U)}^2}{222\alpha^2\sigma_1\left(\normf{H}^2+\normf{X}^2\right)}\nonumber \\ & \ge 0.1494\sigma_r\normf{H}^2+\frac{\sigma_r^3}{250\alpha^2\sigma_1\normf{X}^2}\normf{\nabla f_{\ex}(U)}^2, \nonumber \end{align} where we use $\normf{H}^2\le\frac{1}{8}\sigma_r\le \frac{1}{8}\normf{X}^2$ in the last line. Thus we have \[\langle\nabla f_{\ex}(U),H\rangle\ge \frac{1}{\nu}\sigma_r\normf{H}^2+\frac{1}{\lambda\normf{X}^2}\normf{\nabla f_{\ex}(U)}^2 \] for $\nu\ge 7$ and $\lambda\ge 250 \alpha^2\sigma_1/\sigma_r^3 $ with probability at least $1-C\exp(-\Omega(n))$, if $m\ge c_0\sigma_r^{-2}nr\log(c_1r/\sigma_r)$. 
\end{proof} \section{Numerical Experiments} The purpose of the numerical experiments is to compare the exponential-type gradient descent algorithm with the gradient descent algorithm of \cite{thelocal}. In our numerical experiments, the target matrix $X\in \Rnr$ is generated with i.i.d.\ standard normal entries, and the measurement vectors $a_i$, $i=1,\ldots,m$, are standard Gaussian random vectors. \begin{example} \label{example1} In this example, we test the success rate of the exponential-type gradient descent algorithm for different values of the parameter $\alpha$. Let $X\in \Rnr$ with $n=200$, $r=2$; we set the parameter $\alpha_y=9$ in the spectral initialization and the step size $\mu=0.1\cdot m/\sum_{i=1}^my_i$. We consider the performance with $\alpha=20$ and $100$. The maximum number of iterations is $T=3000$. For the number of measurements, we vary $m$ within the range $ [nr,4nr] $. For each $m$, we run 100 trials and calculate the success rate. We consider a trial to be successful if the relative error is less than $10^{-5}$, where the relative error is defined as \[ \minm{O\in \mathcal{O}(r)}\frac{\normf{XO-U^t}}{\normf{X}}=\frac{\normf{XZV^\T-U^t}}{\normf{X}}, \] where $ZDV^\T$ is the singular value decomposition of $X^\T U^t$. Figure \ref{figure:1} shows the numerical results for the exponential-type gradient descent and gradient descent algorithms. The figure shows that the exponential-type gradient descent algorithm achieves a $100\%$ recovery rate when $m\ge 4nr$, and its empirical success rate is higher than that of the gradient descent algorithm. \end{example} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{succesprobability.eps} \caption{Empirical success rate versus $m/nr$ for $X\in \Rnr$ with $n=200$, $r=2$.} \label{figure:1} \end{figure} \begin{example} In this example, we test the convergence and robustness of the exponential-type gradient descent algorithm. 
We use the noiseless model in (a) to test convergence and the noisy model in (b) to test robustness. The noise model is $y_i=a_i^\T XX^\T a_i+\epsilon_i$, where the noise $\epsilon_i \sim \mathcal{N}(0,0.1^2),\; i=1,\ldots,m$. Let $X\in \Rnr$ with $n=200$, $r=2$; we set the parameter $\alpha_y=9$ in the spectral initialization and the step size $\mu=0.1\cdot m/\sum_{i=1}^my_i$. We consider the performance with $\alpha=20$ and $100$. We set the number of measurements $m=3nr$. Figure \ref{figure:2} depicts the relative error against the iteration number. From the figure, we observe that our exponential-type gradient descent algorithm converges to the exact solution and is robust to noisy measurements. \end{example} \begin{figure}[H] \centering \subfigure[]{ \includegraphics[width=0.45\textwidth]{convergence_noiseless.eps}} \subfigure[]{ \includegraphics[width=0.45\textwidth]{convergence_noise.eps}} \caption{ Relative error for noiseless and noisy measurements, where the unknown matrix $X\in \Rnr$ with $n=200$, $r=2$ and $m=3nr$.} \label{figure:2} \end{figure} \begin{figure}[H] \centering \subfigure[]{ \includegraphics[width=0.45\textwidth]{relativerror_r_1.eps}} \subfigure[]{ \includegraphics[width=0.45\textwidth]{complex.eps}} \caption{ Relative error versus $m/n$ for real and complex signals $x$ with dimension $n=100$.} \label{figure3} \end{figure} \begin{example} Finally, we test the performance of the exponential-type gradient descent algorithm in recovering $X\in \R^{n\times r}$ with $r=1$. As noted earlier, when $r=1$, problem (\ref{question}) reduces to the phase retrieval problem. Many algorithms have been developed to solve phase retrieval problems, such as PhaseLift \cite{phaselift}, PhaseMax \cite{goldstein2018phasemax}, WirtFlow \cite{WF}, and TAF \cite{wang2017solving}. 
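For readers who wish to reproduce these experiments, the iteration and the relative-error computation can be sketched in Python. This is our own illustrative translation (the function names and defaults below are ours, not the authors' code); the gradient is written in residual form using the identity $a_i^\T UU^\T a_i-y_i=a_i^\T HH^\T a_i+2a_i^\T H\Xb^\T a_i$ with $H=U-\Xb$, which makes it consistent with the reformulation (\ref{regradient}).

```python
import numpy as np

def exp_gradient_step(U, A, y, alpha=100.0, mu=None):
    """One exponential-type gradient step U <- U - mu * grad f_exp(U).

    A has rows a_i and y_i = a_i^T X X^T a_i.  The gradient is
    (1/m) sum_i (a_i^T U U^T a_i - y_i) rho_i a_i a_i^T U, with
    exponential weights rho_i = exp(-m y_i / (alpha * sum_k y_k))."""
    m = A.shape[0]
    if mu is None:                        # paper's step size: 0.1 * m / sum_i y_i
        mu = 0.1 * m / y.sum()
    AU = A @ U                            # rows a_i^T U
    resid = np.sum(AU**2, axis=1) - y     # a_i^T U U^T a_i - y_i
    rho = np.exp(-m * y / (alpha * y.sum()))
    grad = (A.T * (resid * rho)) @ AU / m
    return U - mu * grad

def relative_error(U, X):
    """min_{O in O(r)} ||XO - U||_F / ||X||_F via the SVD Z D V^T of X^T U."""
    Z, _, Vt = np.linalg.svd(X.T @ U)
    return np.linalg.norm(X @ (Z @ Vt) - U) / np.linalg.norm(X)
```

A success-rate curve as in Figure \ref{figure:1} is then obtained by iterating `exp_gradient_step` from the spectral initialization and declaring success when `relative_error` drops below $10^{-5}$.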
The aim of this experiment is to compare the performance of the exponential-type gradient descent algorithm with that of the existing methods for phase retrieval mentioned above. The experiment is carried out with PhasePack \cite{phasepack}, which is an algorithm package for solving the phase retrieval problem. For the exponential-type gradient algorithm, we choose the parameters $\alpha_y=9$ and $\alpha=100$ in the comparison. We choose a random real signal $x\in \R^n$ in (a) and a random complex signal $x\in \mathbb{C}^n$ in (b) with $n=100$. Here, one can use the elegant formulation of Wirtinger derivatives \cite{WF} to obtain the exponential-type gradient for complex signals. We show the relative error in the reconstructed signal as a function of the number of measurements $m$, which varies within the range $[n,6n]$. The results are shown in Figure \ref{figure3}. From the figure, we can see that our algorithm performs well compared with state-of-the-art phase retrieval algorithms. \end{example} \section{Appendix} \subsection{Proof of Theorem \ref{initial theorem}} \begin{proof} By homogeneity, it suffices to consider the case where $\normf{X}=1$. We assume that $X=(x_1,\ldots, x_r) \in \Rnr$ has orthogonal columns satisfying $ \norm{x_1}\geq \cdots \geq \norm{x_r}$. Recall that $\sigma_1 \geq \sigma_2\geq \cdots \geq \sigma_r >0$ are the nonzero eigenvalues of the positive semidefinite matrix $XX^\T $, so that \[ \sigma_j=\norm{x_j}^2, \quad \text{ for } \; 1\le j\le r. \] From Lemma \ref{lemma 3.1}, for $\varepsilon>0$, we have \begin{equation} \label{average sum y_i} \frac{1}{m}\sum_{k=1}^ma_k^\top XX^\top a_k = \frac{1}{m}\sum_{k=1}^my_k \in [1- \varepsilon, 1+\varepsilon], \end{equation} with probability at least $1-2\exp(-\Omega(n))$, if $m\ge Cn$ where $C$ is a constant depending on $\varepsilon$. Here, we use the fact that $\|XX^\top\|_*=\|X\|_F^2=1$. 
Inequality (\ref{average sum y_i}) implies that \begin{equation}\label{eq:jie1} \1_{\{y_i \le (1-\varepsilon)\alpha_y\}}\le\1_{\{y_i \le \frac{\alpha_y}{m}\sum_{k=1}^{m} y_k\}}\le \1_{\{y_i \le (1+\varepsilon)\alpha_y\}}. \end{equation} Recall that $Y=\frac{1}{m}\sum_{i=1}^{m}y_ia_ia_i^\T \1_{\{y_i \le \frac{\alpha_y}{m}\sum_{k=1}^{m} y_k\}}$. Then (\ref{eq:jie1}) implies that \begin{equation} \label{the interval of Y} Y_2\preceq Y \preceq Y_1 \end{equation} holds with high probability, where \begin{eqnarray*} Y_2:=\frac{1}{m}\sum_{i=1}^{m}y_ia_ia_i^\T \1_{\{y_i \le (1-\varepsilon)\alpha_y\}},\quad Y_1:=\frac{1}{m}\sum_{i=1}^{m}y_ia_ia_i^\T \1_{\{y_i \le (1+\varepsilon)\alpha_y\}}. \end{eqnarray*} We claim the following result: \begin{claim}\label{claim of estimate Y1} For any $0<\delta<1$, if $\alpha_y \ge C\sqrt{\log(cr\sigma_1/\delta)}$, then \begin{equation} \label{estimate of expectation} \norm{\E Y_1-2XX^\T-I}\le \delta,\qquad \norm{\E Y_2-2XX^\T-I}\le \delta. \end{equation} \end{claim} Estimate (\ref{estimate of expectation}) implies that $\norm{\E Y_1}\ge 1+2\sigma_1-\delta$ and $\norm{\E Y_2}\ge 1+2\sigma_1-\delta$. By Lemma \ref{A introduction}, if $m\ge C\delta^{-2}(1+2\sigma_1-\delta)^{-2}n$, then with probability at least $ 1-4\exp(-\Omega(n))$ we have \begin{equation} \label{YEY} \norm{Y_1-\E Y_1} \le \delta, \qquad \norm{Y_2-\E Y_2} \le \delta, \end{equation} where $C$ is a positive constant. Indeed, in Lemma \ref{A introduction} we take the $i$-th row of $A$ as $b_i^\T:=\sqrt{y_i}a_i^\T \1_{\{y_i \le (1+\varepsilon)\alpha_y\}}$ and set $\Sigma=\E Y_1$ with $\norm{\E Y_1}\ge 1+2\sigma_1-\delta$ and $t=\delta\norm{\E Y_1}\sqrt{m}$. Then we obtain $\norm{Y_1-\E Y_1} \le \delta$. Similarly, we have $\norm{Y_2-\E Y_2} \le \delta$ if we take the $i$-th row of $A$ as $b_i^\T:=\sqrt{y_i}a_i^\T \1_{\{y_i \le (1-\varepsilon)\alpha_y\}}$ and set $\Sigma:=\E Y_2$. 
Combining (\ref{the interval of Y}), (\ref{estimate of expectation}) and (\ref{YEY}), we have \begin{equation}\label{approximate of Y} \norm{Y-2XX^\T-I}\le 2\delta \end{equation} with probability at least $1-6\exp(-\Omega(n))$ provided $m\ge C\delta^{-2}(1+2\sigma_1-\delta)^{-2}n$ and $\alpha_y \ge C\sqrt{\log(cr\sigma_1/\delta)}$. Furthermore, from Weyl's theorem we have \begin{equation}\label{wely} |\lambda_{r+1}-1|\le 2\delta \quad \text{and} \quad |\lambda_{n}-1|\le 2\delta. \end{equation} Next, we turn to consider $d(U_0)$. Recall the definition $U_0=U\Sigma^{1/2}$ in Algorithm \ref{initialization1}. Here, $U=(u_1,\ldots,u_r)$, where $u_k$ is the normalized eigenvector corresponding to the eigenvalue $\lambda_k$ of $Y$ for $k=1,\ldots,r$, and the scaling of the diagonal matrix $\Sigma$ is given by $\Sigma_{i,i}=(\lambda_i-\lambda_{r+1})/2$. Hence, \begin{eqnarray*} \norm{U_0U_0^\T-XX^\T} &\le & \norm{U_0U_0^\T-\frac{1}{2}Y+\frac{1}{2}\lambda_{r+1}I}+\norm{\frac{1}{2}Y-\frac{1}{2}I-XX^\T}+\frac{1}{2}\norm{(\lambda_{r+1}-1)I} \\ &\le& \frac{1}{2}(\lambda_{r+1}-\lambda_n)+\delta+\frac{1}{2}|\lambda_{r+1}-1| \\ &\le& 4\delta , \end{eqnarray*} where the second inequality follows from (\ref{approximate of Y}) and the last inequality follows from (\ref{wely}). Then, using the following fact (see, e.g., the initialization analysis of \cite{zheng2015convergent}) \begin{eqnarray*} \mathop{\min} \limits_{O \in \mathcal{O}(r)} \|U_0-XO\|_F^2 &\le & \frac{\normf{U_0U_0^\T-XX^\T}^2}{(2\sqrt{2}-2)\sigma_r}, \end{eqnarray*} and taking $\delta\le \frac{\sigma_r}{18\sqrt{r}}$, we obtain \begin{eqnarray*} \mathop{\min} \limits_{O \in \mathcal{O}(r)} \|U_0-XO\|_F^2 &\le & \frac{2r\norm{U_0U_0^\T-XX^\T}^2}{(2\sqrt{2}-2)\sigma_r} \\ &\le & \frac{32r\delta^2}{(2\sqrt{2}-2)\sigma_r} \\ &\le& \frac{\sigma_r}{8}, \end{eqnarray*} where we use $\normf{A}\le\sqrt{\rank(A)}\norm{A}$ in the first inequality. 
This choice of $\delta$ requires $m\ge C\sigma_r^{-2}nr$ measurements and $\alpha_y \ge C\sqrt{\log(c'\kappa r)}$, where $\kappa=\sigma_1/\sigma_r$ denotes the ratio of the largest to the smallest nonzero eigenvalues of the matrix $XX^\T$.\\ It remains to prove Claim \ref{claim of estimate Y1}. There exists an orthogonal matrix $O\in \R^{n\times n}$ such that $X=O(\norm{x_1}e_1,\ldots,\norm{x_r}e_r)$. Then \begin{eqnarray*} O^\T(\E Y_1-2XX^\T-I)O &=& O^\T \E Y_1 O-\left(2\sum_{k=1}^r\norm{x_k}^2e_ke_k^\T +I \right), \end{eqnarray*} and \begin{eqnarray} \label{Y_1} O^\T \E Y_1 O &=& \E \left[\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2a_ia_i^\T \1_{\{\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2\le(1+\varepsilon)\alpha_y\}} \right]. \end{eqnarray} A simple calculation gives \begin{equation}\label{eq:pr41} \E \left[\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2a_ia_i^\T\right] = 2\sum_{k=1}^r\norm{x_k}^2e_ke_k^\T +I , \end{equation} which implies that \begin{equation} \label{upper bound} O^\T \E Y_1 O \le 2\sum_{k=1}^r\norm{x_k}^2e_ke_k^\T +I, \end{equation} where we write $M_2\le M_1$ if all entries of $M_1-M_2$ are nonnegative. On the other hand, from (\ref{Y_1}) we obtain that \begin{equation}\label{eq:pr42} O^\T \E Y_1 O = \E \left[\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2a_ia_i^\T\right]-\E \left[\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2a_ia_i^\T \1_{\{\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2\ge(1+\varepsilon)\alpha_y\}} \right]. 
\end{equation} For any $1\le j,l\le n$, $1\le k\le r$ and $\delta>0$, by H\"{o}lder's inequality we have \begin{equation}\label{eq:bernsteinfordetal} \begin{aligned} & \E\left[\norm{x_k}^2a_{i,j}^2a_{i,l}^2 \1_{\{\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2\ge(1+\varepsilon)\alpha_y\}} \right] \\ & \quad \le \norm{x_1}^2\sqrt{\E[a_{i,j}^4a_{i,l}^4]}\cdot \sqrt{\PP\left\{\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2\ge(1+\varepsilon)\alpha_y\right\}} \\ &\quad \le C_1\norm{x_1}^2\exp\left(-C_0\min\Big(\frac{(1+\varepsilon)^2\alpha_y^2}{\norm{x_1}^4 +\cdots+\norm{x_r}^4},\frac{(1+\varepsilon)\alpha_y}{\norm{x_1}^2}\Big)\right) \\ &\quad \le C_1\sigma_1\exp\left(-C_0(1+\varepsilon)^2\alpha_y^2\right) \\ &\quad \le \frac{\delta}{r} \end{aligned} \end{equation} provided $\alpha_y \ge C\sqrt{\log(cr\sigma_1/\delta)}$, where the second inequality follows from Lemma \ref{Bernstein inequality} and the third inequality follows from the fact that $\normf{X}=1$ and $\norm{x_r}\le \cdots \le \norm{x_1}\le 1$. Since the off-diagonal entries of the matrix below vanish by the symmetry of the Gaussian distribution, (\ref{eq:bernsteinfordetal}) implies that \begin{equation}\label{eq:pr43} \E \left[\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2a_ia_i^\T \1_{\{\sum_{k=1}^r \norm{x_k}^2a_{i,k}^2\ge(1+\varepsilon)\alpha_y\}} \right]\le \delta I. \end{equation} Thus, combining (\ref{eq:pr41}), (\ref{eq:pr42}) and (\ref{eq:pr43}) we have \begin{eqnarray}\label{lower bound} O^\T \E Y_1 O &\ge 2\sum\limits_{k=1}^r\norm{x_k}^2e_ke_k^\T +(1-\delta)I. \end{eqnarray} Combining (\ref{upper bound}) and (\ref{lower bound}) and noting that $O^\T \E Y_1 O$ is a diagonal matrix, we obtain \begin{eqnarray*} \norm{\E Y_1-2XX^\T-I} &=& \norm{O^\T(\E Y_1-2XX^\T-I)O} \le \delta. \end{eqnarray*} Similarly, we can obtain $\norm{\E Y_2-2XX^\T-I}\le \delta$, which completes the proof. \end{proof} \subsection{Proof of Proposition \ref{pr:1} } Throughout this proof we assume that $\normf{X}=1$. We set $H:=U-\Xb$, where $\Xb=\argmin{Z \in \mathcal{X}}\normf{U-Z}$ and $\mathcal{X}$ is the solution set. 
Then the exponential-type gradient can be rewritten as \begin{equation}\label{regradient} \nabla f_{\ex}(U)=\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i+2a_i^\T H\Xb^\T a_i)(a_ia_i^\top H+a_ia_i^\T\Xb) \cdot \exp\left(-\frac{my_i}{\alpha\sum_{k=1}^my_k}\right). \end{equation} For convenience, we let \begin{equation}\label{hi} \rho_{i,\alpha}\,\,:=\,\,\exp\big(-\frac{my_i}{\alpha\sum_{k=1}^my_k}\big),\;i=1,\ldots,m. \end{equation} To prove Proposition \ref{pr:1}, we need the following lemmas. \begin{lemma} \label{lemma one} For any fixed $\alpha\ge 20$ and $\delta>0$, if $ m\ge c_0\alpha^2\delta^{-2}nr\log(\sqrt{r}/\delta)$, then with probability at least $1-C\exp(-\Omega(\alpha^{-2}\delta^2m))$, the following hold for all nonzero matrices $U \in \Rnr$: \begin{eqnarray*} (a)&\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2 \rho_{i,\alpha}\ge& (0.78\sigma_r-2\delta)\normf{H}^2+0.78\tr^2(H^\T\Xb)+0.78\normf{H^\T\Xb}^2 \\ (b)&\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2 \rho_{i,\alpha}\le& (\sigma_1+2\delta)\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2, \end{eqnarray*} where $C,c_0$ are universal constants. \end{lemma} \begin{proof} Suppose for the moment that $H$ is independent of $a_i$. By homogeneity, it suffices to establish the claim for the case $\normf{H}=1$. From (\ref{average sum y_i}) we have \begin{equation}\label{ei} \exp\Big(-\frac{a_i^\T XX^\T a_i}{0.99\alpha}\Big)\le \rho_{i,\alpha}\le \exp\Big(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\Big) \end{equation} with high probability. For convenience, we set \begin{equation}\label{li} \barrho:=\exp\Big(-\frac{a_i^\T \Xb\Xb^\T a_i}{0.99\alpha}\Big),\;i=1,\ldots,m. \end{equation} Noting that $a_i^\T \Xb\Xb^\T a_i= a_i^\T XX^\T a_i$, we have \begin{equation}\label{eq:1le52} \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\rho_{i,\alpha}\ge \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\barrho. 
\end{equation} We claim the following results: \begin{claim}\label{claim1} For any fixed parameter $\alpha\ge 20$, it holds that \begin{itemize} \item[1)] $\E\left[(a_i^\T H\Xb^\T a_i)^2\right]\ge\sigma_r\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2$\vspace{1ex} \item[2)] $\E\left[(a_i^\T H\Xb^\T a_i)^2\right]\le\sigma_1\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2$\vspace{1ex} \item[3)] $\E\left[(a_i^\T H\Xb^\T a_i)^2\barrho\right]\ge 0.78\E\left[(a_i^\T H\Xb^\T a_i)^2\right]$.\vspace{1ex} \end{itemize} \end{claim} Combining 1) and 3), we obtain that \begin{eqnarray*} \E\left[(a_i^\T H\Xb^\T a_i)^2\barrho\right] &\ge& 0.78\sigma_r\normf{H}^2+0.78\tr^2(H^\T\Xb)+0.78\normf{H^\T\Xb}^2. \end{eqnarray*} Since \begin{eqnarray*} (a_i^\T H\Xb^\T a_i)^2\barrho &\le & (a_i^\T \Xb\Xb^\T a_i)\barrho(a_i^\T HH^\T a_i) \end{eqnarray*} and $(a_i^\T \Xb\Xb^\T a_i)\barrho$ is bounded, $(a_i^\T H\Xb^\T a_i)^2\barrho$ is a sub-exponential random variable with $\psi_1$ norm $O(\alpha\normf{H}^2)$. We can use Lemma \ref{Bernstein inequality} to obtain that \begin{equation}\label{eq:2le52} \begin{aligned} \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\barrho &\ge \E\big[(a_i^\T H\Xb^\T a_i)^2\barrho\big]-\delta\normf{H}^2 \\ &\ge (0.78\sigma_r-\delta)\normf{H}^2+0.78\tr^2(H^\T\Xb)+0.78\normf{H^\T\Xb}^2 \end{aligned} \end{equation} holds with probability at least $1-\exp(-\Omega(\alpha^{-2}\delta^2m))$ for any $\delta>0$. Combining (\ref{eq:1le52}) and (\ref{eq:2le52}), we obtain that (a) holds for a fixed $H\in \R^{n\times r}$. We construct an $\epsilon$-net $\mathcal{N}_\epsilon\subset \R^{n\times r}$ with cardinality $|\mathcal{N}_\epsilon|\le (1+\frac{2}{\epsilon})^{nr}$ such that for any $H\in\Rnr$ with $\normf{H}=1$, there exists $ H_0\in \mathcal{N}_\epsilon$ satisfying $\normf{H-H_0}\le \epsilon $. 
Taking a union bound over this net gives that \begin{eqnarray*} \frac{1}{m}\sum_{i=1}^m(a_i^\T H_0\Xb^\T a_i)^2\barrho &\ge& (0.78\sigma_r-\delta)\normf{H_0}^2+0.78\tr^2(H_0^\T\Xb)+0.78\normf{H_0^\T\Xb}^2 \end{eqnarray*} holds for all $H_0 \in \mathcal{N}_\epsilon$ with probability at least $1-(1+\frac{2}{\epsilon})^{nr}\exp(-\Omega(\alpha^{-2}\delta^2m))$. \\ Note that $\barrho<1$ for all $i$. Then there exists a universal constant $c_1>0$ such that \begin{eqnarray} \left|\frac{1}{m}\sum_{i=1}^m (a_i^\T H\Xb^\T a_i)^2\barrho-\frac{1}{m}\sum_{i=1}^m (a_i^\T H_0\Xb^\T a_i)^2\barrho\right| &\le & \frac{1}{m}\sum_{i=1}^m\left|a_i^\T H\Xb^\T a_i-a_i^\T H_0\Xb^\T a_i\right|\nonumber \\ &\le & c_1\|HX^\T-H_0X^\T\|_* \nonumber\\ &\le& c_1\sqrt{r}\normf{H-H_0} \nonumber\\ & \le& c_1\sqrt{r}\epsilon \label{invoking epsilon} \end{eqnarray} where we use Lemma \ref{lemma 3.1} in the second line and the fact $\|A\|_*\le \sqrt{\rank(A)}\normf{A}$ in the third line. Indeed, according to Lemma \ref{lemma 3.1}, for any $\delta\in(0,1)$, if $m\ge c_0\delta^{-2}n$, then with probability at least $1-C\exp(-\Omega(n))$ we have \[ \frac{1}{m}\sum\limits_{i=1}^m\left|a_i^\T HX^\T a_i-a_i^\T H_0X^\T a_i\right| \le (1+\delta)\|HX^\T-H_0X^\T\|_* \le c_1\|HX^\T-H_0X^\T\|_*. \] By choosing $\epsilon=\frac{\delta}{c_1\sqrt{r}}$ in (\ref{invoking epsilon}), we conclude the first part of the lemma.\\ We now turn to part (b). Inequality (\ref{ei}) implies that \[ \rho_{i,\alpha}\le \exp\big(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\big) \] holds with high probability. This gives \begin{eqnarray*} \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\rho_{i,\alpha} &\le & \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\exp\left(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\right). \end{eqnarray*} From Claim \ref{claim1}, we have \begin{eqnarray*} \E\left[(a_i^\T H\Xb^\T a_i)^2\exp\left(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\right)\right]&\le & \sigma_1\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2. 
\end{eqnarray*} Similarly, $(a_i^\T H\Xb^\T a_i)^2\exp\big(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\big)$ is a sub-exponential random variable with sub-exponential norm $O(\alpha\normf{H}^2)$. We can then employ the method used for part (a) to prove part (b). \end{proof} \begin{lemma} \label{lemma 2} For a fixed $\lambda>0$ and any $\delta>0$, if $m\ge c_0\delta^{-2}\lambda^{-2}nr\log(\sqrt{r}/(\delta\lambda))$, then with probability at least $1-C\exp(-\Omega(\delta^2\lambda^2m))$, for all $H\in \Rnr$ we have \[ \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\exp\left(-\lambda\frac{a_i^\T HH^\T a_i}{\normf{H}^2}\right) \le 2\normf{HH^\T}^2+(2\delta+1)\normf{H}^4. \] Here, $c_0,C$ are universal constants. \end{lemma} \begin{proof} Without loss of generality, we only need to prove the lemma in the case $\normf{H}=1$. It is straightforward to show that \begin{equation*} \E\left[(a_i^\T HH^\T a_i)^2\exp\left(-\lambda a_i^\T HH^\T a_i\right)\right] \le \E\left[(a_i^\T HH^\T a_i)^2\right]=2\normf{HH^\T}^2+\normf{H}^4. \end{equation*} Observe that $(a_i^\T HH^\T a_i)^2\exp\big(-\lambda a_i^\T HH^\T a_i\big)$ is a sub-exponential random variable with sub-exponential norm $O(\lambda^{-1}\normf{H}^2)$. According to Lemma \ref{Bernstein inequality} we have \begin{equation*} \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\exp\left(-\lambda a_i^\T HH^\T a_i\right)\le 2\normf{HH^\T}^2+\normf{H}^4+\frac{\delta_0}{\lambda}\normf{H}^2 \end{equation*} with probability at least $1-\exp(-\Omega(\delta_0^2m))$. We next construct an $\epsilon$-net $\mathcal{N}_\epsilon$ with $|\mathcal{N}_\epsilon|\le (1+\frac{2}{\epsilon})^{nr}$ such that for any $H\in\Rnr$ with $\normf{H}=1$, there exists $H_0\in \mathcal{N}_\epsilon$ satisfying $\normf{H-H_0}\le \epsilon$. 
Since $x^2e^{-\lambda x}$ is a Lipschitz function on $[0,\infty)$ with Lipschitz constant $O(1/\lambda^2)$, we have \[ \begin{array}{l} \Big|\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\exp\Big(-\lambda a_i^\T HH^\T a_i \Big)-\frac{1}{m}\sum_{i=1}^m(a_i^\T H_0H_0^\T a_i)^2\exp\Big(-\lambda a_i^\T H_0H_0^\T a_i\Big)\Big| \vspace{1ex} \\ \le \frac{1}{\lambda^2m}\sum_{i=1}^m\Big|a_i^\T HH^\T a_i-a_i^\T H_0H_0^\T a_i\Big| \vspace{1ex} \\ \le \frac{c_2\sqrt{r}\epsilon}{\lambda^2} \end{array} \] where the last inequality follows from Lemma \ref{lemma 3.1}. By choosing $\epsilon=\frac{\delta_0\lambda}{c_2\sqrt{r}}$, we obtain \begin{equation*} \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\exp\left(-\lambda a_i^\T HH^\T a_i\right)\le 2\normf{HH^\T}^2+\normf{H}^4+\frac{2\delta_0}{\lambda}\normf{H}^2 \end{equation*} with probability at least $1-\exp(-\Omega(\delta_0^2m))$ if $m\ge c_0\delta_0^{-2}nr\log(\sqrt{r}/(\delta_0\lambda))$. Finally, noting that $\normf{H}=1$ and taking $\delta_0=\lambda\delta$, we arrive at the conclusion. \end{proof} \begin{corollary} \label{ahhalow} For any $\delta>0$, if $m\ge c_0\alpha^2\delta^{-2}\sigma_r^{-2}nr\log(\alpha\sqrt{r}/(\delta\sigma_r))$, then with probability at least $1-C\exp(-\Omega(n))$, the following holds for all $U\in \Rnr$ with $H=U-\Xb$: \begin{equation*} \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\rho_{i,\alpha}\le 2\normf{HH^\T}^2+(2\delta+1)\normf{H}^4. \end{equation*} \end{corollary} \begin{proof} Since $\sigma_r$ is the smallest nonzero eigenvalue of $XX^\T$, we have \begin{equation*} y_i=a_i^\T XX^\T a_i\ge \sigma_r\norms{a_i}^2, \end{equation*} which implies that \begin{equation}\label{eq:budeng1} \norms{a_i}^2\le \frac{a_i^\T XX^\T a_i}{\sigma_r}=\frac{y_i}{\sigma_r}. \end{equation} On the other hand, we have \begin{equation}\label{eq:budeng2} a_i^\T HH^\T a_i\leq \|H\|_F^2\|a_i\|^2. \end{equation} Combining (\ref{eq:budeng1}) and (\ref{eq:budeng2}), we obtain that \begin{equation}\label{eq:budeng3} y_i\geq \sigma_r \frac{a_i^\T HH^\T a_i}{\|H\|_F^2}. 
\end{equation} According to (\ref{ei}) and (\ref{eq:budeng3}), we obtain that \begin{eqnarray*} \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\rho_{i,\alpha} &\le& \frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2\exp\left(-\frac{\sigma_r}{1.01\alpha}\cdot\frac{a_i^\T HH^\T a_i}{\normf{H}^2}\right). \end{eqnarray*} We take $\lambda=\frac{\sigma_r}{1.01\alpha}$ in Lemma \ref{lemma 2} and arrive at the conclusion. \end{proof} \begin{proof}[Proof of Proposition \ref{pr:1} ] For convenience, we set \begin{equation*} \beta^2 =\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2 \rho_{i,\alpha},\quad \gamma^2=\frac{2}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2 \rho_{i,\alpha}. \end{equation*} According to the expression of the exponential-type gradient (\ref{regradient}), we have \begin{equation} \begin{aligned} \langle\nabla f_{\ex}(U),H\rangle &=\beta^2+\gamma^2+\frac{3}{m}\sum\limits_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T HH^\T a_i)\rho_{i,\alpha} \\ &\ge \beta^2+\gamma^2-\frac{3}{m}\sqrt{\sum\limits_{i=1}^m(a_i^\T H\Xb^\T a_i)^2 \rho_{i,\alpha}}\cdot \sqrt{\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)^2 \rho_{i,\alpha}} \\ &= \beta^2+\gamma^2-\frac{3}{\sqrt{2}} \beta \gamma=\Big(\gamma-\frac{3}{2\sqrt{2}}\beta\Big)^2-\frac{1}{8}\beta^2 \\ &\ge \Big(\frac{\gamma^2}{2}-\frac{9}{8}\beta^2\Big)-\frac{1}{8}\beta^2=\frac{\gamma^2}{2}-\frac{5}{4}\beta^2 \\ &= \frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)^2 \rho_{i,\alpha} -\frac{5}{4m}\sum_{i=1}^m(a_i^\T HH^\T a_i)^2 \rho_{i,\alpha} \\ &\ge (0.78\sigma_r-2\delta_1)\normf{H}^2+0.78\tr^2(H^\T\Xb)+0.78\normf{H^\T\Xb}^2-\frac{5}{2}\normf{HH^\T}^2-\frac{5(2\delta_2+1)}{4}\normf{H}^4 \\ &\ge \left(0.78\sigma_r-2\delta_1-\frac{5(2\delta_2+3)}{4}\normf{H}^2\right)\normf{H}^2+0.78\left(\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2\right) \label{choice of delta_1 delta_2} \end{aligned} \end{equation} where we use Cauchy-Schwarz inequality in the second line, the inequality 
$(\gamma-\beta)^2\ge\frac{\gamma^2}{2}-\beta^2$ in the fourth line, Lemma \ref{lemma one} and Corollary \ref{ahhalow} in the sixth line, and the fact that $\normf{HH^\T}\le \normf{H}^2$ in the last line. Note that $\normf{H}^2=\|U-\Xb\|_F^2=d(U)^2\le\frac{1}{8}\sigma_r$. Taking $\delta_1\le \frac{1}{16}\sigma_r$ and $\delta_2\le \frac{1}{16}$, we obtain that \begin{equation*} \langle\nabla f_{\ex}(U),H\rangle\ge 0.166\sigma_r\normf{H}^2+0.78\left(\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2\right) \end{equation*} with probability at least $1-C\exp(-\Omega(n))$, if $m\ge c_0\sigma_r^{-2}nr\log(c_1r/\sigma_r)$. This proves part $(a)$.\\ Next, we turn to part $(b)$. We consider \[ \normf{\nabla f_{\ex}(U)}^2=\maxm{\normf{W}=1, W\in \R^{n\times r}}\abs{\langle\nabla f_{\ex}(U),W\rangle}^2 \] in the case where $\normf{H}=\normf{U-\Xb}\le\sqrt{\frac{1}{8}\sigma_r}$. Recalling the notation $\rho_{i,\alpha}$ from formula (\ref{hi}), we have \begin{equation*} \begin{aligned} &\abs{\langle\nabla f_{\ex}(U),W\rangle}^2\\ &= \left(\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}+\frac{2}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha} \right.\\ &\quad \left.+\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha}+\frac{2}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha} \right)^2 \\ &\le 4\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2+16\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2 \\ &\quad +4\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha}\right)^2+16\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha}\right)^2. \end{aligned} \end{equation*} We first consider the term $4\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2$. 
Using the Cauchy-Schwarz inequality, we obtain that \begin{equation*} \begin{aligned} & 4\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2\\ &\le 4\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)^2\rho_{i,\alpha}\right) \left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HW^\T a_i)^2\rho_{i,\alpha}\right) \\ &\le 4\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)^2\rho_{i,\alpha}\right)\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T WW^\T a_i)\rho_{i,\alpha}\right). \end{aligned} \end{equation*} According to Corollary \ref{ahhalow}, we have \begin{equation}\label{eq:T_1} \frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)^2\rho_{i,\alpha} \le\left(2\normf{HH^\T}^2+(2\delta_2+1)\normf{H}^4\right) \end{equation} with probability at least $1-C\exp(-\Omega(n))$ provided $m\ge c_0\delta_2^{-2}\sigma_r^{-2}nr\log(\sqrt{r}/(\delta_2\sigma_r))$. Noting that $a_i^\T XX^\T a_i\ge \sigma_r\norms{a_i}^2$ and $a_i^\T HH^\T a_i\leq \|H\|_F^2 \|a_i\|^2$, we have \begin{equation*} \frac{a_i^\T XX^\T a_i}{2.02\alpha}\ge \frac{\sigma_r \cdot a_i^\T HH^\T a_i}{2.02\alpha\normf{H}^2} \qquad \text{and}\qquad \frac{a_i^\T XX^\T a_i}{2.02\alpha}\ge \frac{\sigma_r \cdot a_i^\T WW^\T a_i}{2.02\alpha}. \end{equation*} This gives \begin{equation} \label{eq:T_2} \begin{aligned} &(a_i^\T HH^\T a_i)(a_i^\T WW^\T a_i)\rho_{i,\alpha} \\ &\le (a_i^\T HH^\T a_i)(a_i^\T WW^\T a_i)\exp\left(-\frac{a_i^\T XX^\T a_i}{1.01\alpha}\right)\\ &\le (a_i^\T HH^\T a_i)\exp\left(-\frac{\sigma_r \cdot a_i^\T HH^\T a_i}{2.02\alpha\normf{H}^2}\right) (a_i^\T WW^\T a_i)\exp\left(-\frac{\sigma_r \cdot a_i^\T WW^\T a_i}{2.02\alpha}\right) \\ &\le \normf{H}^2 \Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2 \end{aligned} \end{equation} where we use the inequality $ xe^{-\gamma x}\le 1/(e\gamma)$ for any $x\ge 0$ in the last line. 
Combining formulas (\ref{eq:T_1}) and (\ref{eq:T_2}), we obtain \begin{equation*} 4\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2\le 4 \Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2 \normf{H}^2 \left(2\normf{HH^\T}^2+(2\delta_2+1)\normf{H}^4\right). \end{equation*} The other three terms can be bounded similarly. For the second term, we have \begin{equation*} \begin{aligned} &16\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T HW^\T a_i)\rho_{i,\alpha}\right)^2 \\ &\le 16\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T H\Xb^\T a_i)^2\rho_{i,\alpha}\right)\left(\frac{1}{m}\sum\limits_{i=1}^m(a_i^\T HW^\T a_i)^2\rho_{i,\alpha}\right) \\ &\le 4 \Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2 \normf{H}^2 \left(4(\sigma_1+2\delta_1)\normf{H}^2+4\tr^2(H^\T\Xb)+4\normf{H^\T\Xb}^2\right) \end{aligned} \end{equation*} with probability at least $1-C\exp(-\Omega(n))$ provided $m\ge c_0\delta_1^{-2}nr\log(\sqrt{r}/\delta_1)$, where we use part (b) of Lemma \ref{lemma one} in the last line. The third and fourth terms can be bounded as \begin{equation*} 4\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T HH^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha}\right)^2 \le 4\Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2\normf{X}^2\left(2\normf{HH^\T}^2+(2\delta_2+1)\normf{H}^4\right) \end{equation*} and \begin{equation*} 16\left(\frac{1}{m}\sum_{i=1}^m(a_i^\T H\Xb^\T a_i)(a_i^\T \Xb W^\T a_i)\rho_{i,\alpha}\right)^2 \le 4 \Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2\normf{X}^2\left(4(\sigma_1+2\delta_1)\normf{H}^2+4\tr^2(H^\T\Xb)+4\normf{H^\T\Xb}^2\right). \end{equation*} Putting these inequalities together and noting that $\normf{HH^\T}\le \normf{H}^2$, we have \[\normf{\nabla f_{\ex}(U)}^2 \le 4 \Big(\frac{1.01\alpha}{e\sigma_r}\Big)^2\left(\normf{H}^2+\normf{X}^2\right) \left(\left(4\sigma_1+8\delta_1+(2\delta_2+3)\normf{H}^2\right)\normf{H}^2+4\tr^2(H^\T\Xb)+4\normf{H^\T\Xb}^2\right). 
\] Furthermore, noticing that $\normf{H}^2\le\frac{1}{8}\sigma_r$ and choosing $\delta_1\le \frac{1}{16}\sigma_r$, $\delta_2\le \frac{1}{16}$, it follows that \begin{equation*} \frac{\sigma_r^2\normf{\nabla f_{\ex}(U)}^2}{3\alpha^2\left(\normf{H}^2+\normf{X}^2\right)} \le 1.223\sigma_1\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2 \end{equation*} with probability at least $1-C\exp(-\Omega(n))$, if $m\ge c_0\sigma_r^{-2}nr\log(c_1r/\sigma_r)$. \end{proof} It remains to verify Claim \ref{claim1}. For parts 1) and 2) of Claim \ref{claim1}, let $O_1=\argmin{O \in \mathcal{O}(r)}\normf{U-XO}$; then $\Xb=XO_1$. Recall that $X$ has orthogonal column vectors, so there exists an orthogonal matrix $O_2 \in \R^{n\times n}$ such that $X=O_2(\norms{x_1}e_1,\ldots,\norms{x_r}e_r)$. Let $\Hh:=HO_1^\T$, $\Ht=O_2^\T\Hh$, let $\hh_s,\htt_s,x_s$ denote the $s$th columns of $\Hh,\Ht,X$ respectively, and let $a_{i,s}$ denote the $s$th entry of $a_i$. It follows that \begin{eqnarray} & & \E\left[(a_i^\T H\Xb^\T a_i)^2\right]=\E\left[(a_i^\T\Hh X^\T a_i)^2\right]=\E (a_i^\T O_2 \Ht X^\T O_2O_2^\T a_i)^2 \nonumber \\ & \quad &=\E (a_i^\T \Ht X^\T O_2 a_i)^2 = \E\left[\|x_1\|(\htt_1^\T a_i)a_{i,1}+\cdots+\|x_r\|(\htt_r^\T a_i)a_{i,r}\right]^2 \nonumber \\ &\quad &= \E\left[\sum\limits_{s=1}^r\|x_s\|^2(\htt_s^\T a_i)^2a_{i,s}^2+\sum\limits_{s\neq k}\|x_s\|\|x_k\|(\htt_s^\T a_i)(\htt_k^\T a_i)a_{i,s}a_{i,k}\right] \nonumber\\ &\quad &= \sum\limits_{s=1}^r\left(\norms{x_s}^2\norms{\htt_s}^2+2\norms{x_s}^2\htt_{s,s}^2\right)+\sum\limits_{s\neq k}\norms{x_s}\norms{x_k}\left(\htt_{s,s}\htt_{k,k}+\htt_{s,k}\htt_{k,s}\right)\label{firtm} \\ &\quad &= \sum\limits_{s=1}^r\norms{x_s}^2\norms{\hh_s}^2+\sum\limits_{s,k}\norms{x_s}\norms{x_k}\left(\htt_s^\T e_s\htt_k^\T e_k+\htt_s^\T e_k\htt_k^\T e_s\right)\nonumber\\ &\quad &= \sum\limits_{s=1}^r\norms{x_s}^2\norms{\hh_s}^2+\sum\limits_{s,k}(x_s^\T\hh_sx_k^\T\hh_k+x_s^\T\hh_kx_k^\T\hh_s)\label{The max sigular} \\ &\quad &\ge 
\sigma_r\normf{\Hh}^2+\tr^2(X^\T\Hh)+\tr(X^\T\Hh X^\T\Hh) \nonumber \\ &\quad&= \sigma_r\normf{H}^2+\tr^2(H^\T\Xb)+\tr(H^\T\Xb H^\T\Xb) \nonumber \\ &\quad &= \sigma_r\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2,\nonumber \end{eqnarray} where the last equation follows from the fact that $H^\T\Xb$ is a symmetric matrix; the symmetry of $H^\T\Xb=(U-\Xb)^\T\Xb$ can be seen from the singular-value decomposition of $X^\T U$. More specifically, suppose that the singular-value decomposition of $X^\T U$ is $WDV^\T$; then we have \begin{align*} O_1:=\argmin{O \in \mathcal{O}(r)}\normf{U-XO} & =\argmax{O \in \mathcal{O}(r)}\langle XO,U\rangle =\argmax{O \in \mathcal{O}(r)}\langle O,WDV^\T\rangle=WV^\T. \end{align*} Therefore, $U^\T\Xb=U^\T XWV^\T=VDV^\T$ is a symmetric matrix, which implies that $H^\T\Xb=U^\T\Xb-\Xb^\T\Xb$ is also a symmetric matrix.\\ Similarly, from formula (\ref{The max sigular}), it is easy to obtain \[\E\left[(a_i^\T H\Xb^\T a_i)^2\right]\le\sigma_1\normf{H}^2+\tr^2(H^\T\Xb)+\normf{H^\T\Xb}^2.\] For 3) of Claim \ref{claim1}, using the notation $\Hh,\Ht,\hh_s,\htt_s$ above, we have \begin{eqnarray*} \E\left[(a_i^\T H\Xb^\T a_i)^2\barrho\right] &=& \E\left[(a_i^\T\Hh X^\T a_i)^2\barrho \right] \\ &=& \E\left[\sum\limits_{s=1}^r\|x_s\|^2(\htt_s^\T a_i)^2a_{i,s}^2\cdot\prod\limits_{t=1}^r\exp\left(-\frac{\norms{x_t}^2a_{i,t}^2}{0.99\alpha}\right)\right]\\ & &+\E\left[\sum\limits_{s\neq k}\|x_s\|\|x_k\|(\htt_s^\T a_i)(\htt_k^\T a_i)a_{i,s}a_{i,k}\cdot \prod\limits_{t=1}^r\exp\left(-\frac{\norms{x_t}^2a_{i,t}^2}{0.99\alpha}\right)\right] \\ &>& 0.78\sum\limits_{s=1}^r\|x_s\|^2(2\htt_{s,s}^2+\norms{\htt_{s}}^2)+0.78\sum\limits_{s\neq k}\norms{x_s}\norms{x_k}(\htt_{s,s}\htt_{k,k}+\htt_{s,k}\htt_{k,s}) \\ &=& 0.78\E\left[(a_i^\T H\Xb^\T a_i)^2\right], \end{eqnarray*} where the last equation follows from (\ref{firtm}) and the inequality comes from the following two inequalities (\ref{ineqality:1}) and (\ref{inequality:2}): \begin{eqnarray} && \E\Big[(\htt_s^\T 
a_i)^2a_{i,s}^2\cdot\prod\limits_{t=1}^r\exp(-\frac{\norms{x_t}^2a_{i,t}^2}{0.99\alpha})\Big] \nonumber\\ &= & \frac{1}{\gamma\omega_s}\Big(\frac{\htt_{s,1}^2}{\omega_1}+\cdots+\frac{\htt_{s,s-1}^2}{\omega_{s-1}} +\frac{3\htt_{s,s}^2}{\omega_s}+\frac{\htt_{s,s+1}^2}{\omega_{s+1}}+\cdots+\frac{\htt_{s,r}^2}{\omega_r}+\htt_{s,r+1}^2+\cdots+\htt_{s,n}^2\Big)\nonumber \\ & \ge&\frac{1}{1.102^2\cdot\gamma}(\htt_{s,1}^2+\cdots+\htt_{s,s-1}^2+3\htt_{s,s}^2+\htt_{s,s+1}^2+\cdots+\htt_{s,n}^2)\nonumber\\ & \ge&\frac{1}{1.102^2\cdot e^{1/0.99\alpha}}(2\htt_{s,s}^2+\norms{\htt_{s}}^2)\nonumber\\ &>&0.78(2\htt_{s,s}^2+\norms{\htt_{s}}^2)\label{ineqality:1} \end{eqnarray} provided $\alpha \ge 20$ and the parameters $\omega_k,\gamma$ are defined as follows: $$\omega_k:=\frac{\|x_k\|^2}{0.495\alpha}+1\le 1.102,\;\forall\; 1\le k\le r$$ and $$\gamma:=\sqrt{(\frac{\|x_1\|^2}{0.495\alpha}+1)(\frac{\|x_2\|^2}{0.495\alpha}+1)\cdots(\frac{\|x_r\|^2}{0.495\alpha}+1)}\le e^{1/0.99\alpha}$$ due to the fact that $1+x\le e^x$ for any $x\ge 0$ and $\normf{X}=1$. Similarly, for any $s\neq k,1\le s,k\le r$, we have \begin{equation} \E\left[(\htt_s^\T a_i)(\htt_k^\T a_i)a_{i,s}a_{i,k}\cdot\prod\limits_{t=1}^r\exp(-\frac{\norms{x_t}^2a_{i,t}^2}{0.99\alpha})\right]= \frac{\htt_{s,s}\htt_{k,k}+\htt_{s,k}\htt_{k,s}}{\gamma\omega_s\omega_k}> 0.78(\htt_{s,s}\htt_{k,k}+\htt_{s,k}\htt_{k,s}).\label{inequality:2} \end{equation}
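The numerical constants in the bounds above, and the Procrustes symmetry claim used earlier in the proof, can be checked directly. A minimal sketch, assuming $\alpha = 20$ and $\normf{X} = 1$ as in the text (numpy is used only for the SVD check):

```python
import math
import numpy as np

# Constants in inequality (ineqality:1), assuming alpha = 20 and ||X||_F = 1,
# so that ||x_k||^2 <= 1 for every column.
alpha = 20.0
omega_max = 1.0 / (0.495 * alpha) + 1.0        # omega_k <= 1.102
gamma_max = math.exp(1.0 / (0.99 * alpha))     # gamma <= e^{1/(0.99 alpha)}
lower = 1.0 / (1.102**2 * gamma_max)           # constant multiplying (2 htt^2 + ||htt||^2)
assert omega_max <= 1.102 and lower > 0.78

# Procrustes symmetry claim: with O_1 = argmin_O ||U - XO||_F = W V^T from the
# SVD X^T U = W D V^T, the matrix H^T Xb = (U - X O_1)^T (X O_1) is symmetric.
rng = np.random.default_rng(0)
n, r = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal columns
U = rng.standard_normal((n, r))
W, D, Vt = np.linalg.svd(X.T @ U)
O1 = W @ Vt
Xb = X @ O1
H = U - Xb
S = H.T @ Xb
assert np.allclose(S, S.T)
print(round(omega_max, 4), round(lower, 4))
```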
\section{Introduction} Radio pulsar observations in the last half century have revealed many common but not universal phenomena that are not as yet perceived as understood, of which nulls and mode-changes in the emission are examples. In the same period, knowledge of magnetospheric structure has progressed little beyond the early insights of Goldreich \& Julian (1969), Radhakrishnan \& Cooke (1969) and Ruderman \& Sutherland (1975), except for recognition of the role of the Lense-Thirring effect and for the application of modern numerical plasma-physics techniques in the force-free approximation to the region near the light cylinder (see, for example, Bai \& Spitkovsky 2010 and Brambilla et al 2018, who also survey more recent work). Our view is that the failure to make progress is a consequence of two factors. Firstly, the proposition that neutron stars with negative polar-cap corotational charge density (${\bf \Omega}\cdot{\bf B} > 0$, where ${\bf \Omega}$ is the rotational spin and ${\bf B}$ the polar-cap magnetic flux density: Goldreich \& Julian 1969) can support the observed phenomena is, in Bayesian terms, one of zero prior probability. Secondly, in the ${\bf \Omega}\cdot{\bf B} < 0$ case, there has been a failure to recognize that the generation of protons by the reverse flux of electrons at the polar cap is a dominant factor in determining the plasma composition. In connection with our first point, electron motion above the polar cap is limited. Electrons are in the Landau ground-state and drift velocities $-c({\bf E}\times{\bf B})/B^{2}$ are very small. The electron work function at the neutron-star surface is so small that a space-charge-limited flow boundary condition is valid at all times, equivalent to a Dirichlet boundary condition on the surface and, in some approximation, on the surface separating open from closed sectors of the magnetosphere. 
The system of accelerated electrons is described solely by the Vlasov-Maxwell equations except that pair creation is necessary for the formation of any collective mode that might lead to coherent radio-frequency emission. It is difficult to understand how single-photon pair creation in the whole population of radio-loud pulsars, including the millisecond pulsars, can produce an electron-positron plasma of the required density or is even possible (see, for example, Hibschman \& Arons 2001, Harding \& Muslimov 2002). The degrees of freedom in the ${\bf \Omega}\cdot{\bf B} > 0 $ case are so limited that we consider the assignment of zero prior probability justified. The generation of protons in the positive corotational charge-density ${\bf \Omega}\cdot{\bf B} < 0$ case introduces new degrees of freedom to the system. The origin of the acceleration field ${\bf E}_{\parallel}$ above the polar cap has been explained in terms of the Lense-Thirring effect in important papers by Beskin (1990) and Muslimov \& Tsygan (1992). It is the interrelation between this field and its screening by photo-ionization of accelerated ions that makes the ${\bf \Omega}\cdot{\bf B} < 0$ system so complex but interesting. It has been emphasized previously (Jones 2016; also papers cited therein) that these processes are the source of the coherent radio emission observed at frequencies of the order of $1$ GHz. The introduction of new degrees of freedom brings with it additional parameters, specifically the blackbody surface temperature and the ionic atomic number, about which little is known. Thus the magnetosphere is essentially a complex system in the sense described in a recent essay by Kivelson \& Kivelson (2018) and consideration should be given to what can realistically be expected from a model or theory of it. Thus the work in this paper assumes that most, or possibly all, radio-loud pulsars, including those displaying nulls or mode-changes have ${\bf \Omega}\cdot{\bf B} < 0$. 
We shall attempt to show that two distinct modes differing in average values of $E_{\parallel}$ and of ion, proton, and positron flux densities may exist quite naturally in limited regions of the pulsar $P-\dot{P}$ plane. It is possible to see how each mode can be unstable against transition to the other. Observations indicate that the timing of such transitions is essentially, though not always, chaotic, but this paper is limited to demonstrating the nature of the modes and the origin of their instability. The radio-loud mode is believed, with some observational evidence (Jones 2016), to consist of an ion-proton plasma in which $E_{\parallel}$ is largely screened by the reverse flux of photo-ionization electrons. The plasma must satisfy certain conditions if the Langmuir mode is to have a growth rate sufficient to produce strong plasma turbulence. These are summarized in Section 2. For convenience, the second mode can be referred to as radio-quiet, though observations show that this is not always strictly the case. It is argued here that this mode is most probably a self-sustaining inverse Compton scattering process generating a modest flux of electron-positron pairs. This has not been considered previously in relation to the ${\bf \Omega}\cdot{\bf B} < 0$ case, and is described in detail in Section 3. Contrary to what has been the canonical view, the presence of electron-positron pairs in this mode does not lead to coherent radio emission of the observed intensities in the $1$ GHz region, but may be significant in relation to incoherent emission, possibly at X-ray frequencies, for which a leptonic source is essential. There are extremely large numbers of publications describing the observed properties of nulls and mode-changes, mostly in individual pulsars and we refer at this stage to a recent paper (Lyne et al 2017) which contains a useful and succinct summary. 
But in Section 4 the properties of the second mode of Section 3 are discussed in relation to a number of individual pulsar observations and it is argued that they provide a basic physical framework for understanding the phenomena. \section{The radio-loud state} The properties of the ion-proton plasma mode have appeared in a number of previous publications (see Jones 2016) but are summarized here in order that they may be distinguished from the nominally radio-quiet state described in Section 3. The basic requirement is that the longitudinal or quasi-longitudinal Langmuir mode should have an amplitude growth factor $\exp\Lambda$ sufficient to generate strong plasma turbulence, with $\Lambda \approx 30$ adopted as a working value. It is, \begin{eqnarray} \Lambda = \frac{Ra_{\Lambda i}}{c}\int^{\eta_{e}}_{1}\gamma^{-3/2}_{i}(\eta) \omega_{pi}(\eta)d\eta \end{eqnarray} taken over particle paths to the emission region $\eta_{e}$. Here $a_{\Lambda i} \approx 0.2$ is a dimensionless constant, $\gamma_{i}$ is the Lorentz factor and $\omega_{pi}$ is defined in terms of observer-frame variables for particle $i$, \begin{eqnarray} \omega^{2}_{pi} = \frac{4\pi n_{i}Z^{2}_{i}{\rm e}^{2}}{m_{i}}, \end{eqnarray} with $m_{i}$ and $Z_{i}$ as the mass and charge of the ion, and $\eta$ the radius in units of the neutron-star radius $R$ (Jones 2012). The conditions are as follows. (i) The accelerated plasma must contain both proton and ion components. (ii) The transverse width of the bundle of flux lines on which the plasma flows must be at least as large as the Langmuir-mode rest-frame wavelength. (iii) Screening of $E_{\parallel}$ within the bundle must be such as to limit the Lorentz factors to moderate, though relativistic, values ($\gamma \sim 20$). This is facilitated by ions having high enough atomic number to remain partially ionized during their acceleration at $\eta < \eta_{e}$. 
Thus a plasma of deuterons and protons in which there can be no screening would in general be too rapidly accelerated to have an adequate $\Lambda$. (iv) Owing to the small lepton mass, a positron component distributed with Lorentz factors overlapping those of the ions and protons can modify the dielectric tensor to the extent of extinguishing the Langmuir mode growth-rate (Jones 2015). \section{Inverse Compton scattering} Inverse Compton scattering by a Goldreich-Julian flux of primary electrons has been considered by many authors as a source of secondary pairs in an ${\bf \Omega}\cdot{\bf B} > 0$ magnetosphere (see, for example, Hibschman \& Arons 2001). There is no easily quantifiable source of primary leptons in the ${\bf \Omega}\cdot{\bf B} < 0$ case: the most probable is the class of neutron capture reactions $n +(A,Z)\rightarrow (A+1,Z)+\gamma$ in which the neutrons originate in the same photo-nuclear reactions as the protons and at approximately the same rate. The neutrons of $2-3$ MeV energy are not thermalized as are the protons but scatter with a cross-section of the order of $1$ bn so that a fraction of them will diffuse rapidly towards the surface. Capture in this region produces $\gamma$-rays which can enter the open magnetosphere above the polar cap. Multiple Compton scatters within the shower itself are a further possible source of such outward-directed photons. But this also is difficult to quantify. The single-photon attenuation coefficient through pair production is, \begin{eqnarray} \frac{m{\rm e}^{2}B(\eta)\sin\theta}{2\hbar^{2}B_{c}}\left(0.377\exp\frac{-4}{3\chi}\right) \end{eqnarray} (Erber 1966) in which, $\chi = k_{\perp}B(\eta)/(2mcB_{c})$, where $k_{\perp}$ is the photon momentum component perpendicular to ${\bf B}$ and $B_{c} = m^{2}c^{3}/{\rm e}\hbar =4.41\times 10^{13}$ G. The angle between ${\bf k}$ and ${\bf B}$ is $\theta$. 
A value $\chi = 0.070$, equivalent to $k_{\perp}B_{12} = 6mc$, corresponds with an attenuation of $10^{-2}$ cm$^{-1}$. A relatively narrow window of values in the vicinity of this would give attenuation neither so small that the $\gamma$ escapes from the open region of the magnetosphere nor so large that the positron formed annihilates in the neutron-star atmosphere. Polar-cap magnetic flux densities of $2-3\times 10^{12}$ G would satisfy this condition for typical nuclear-capture $\gamma$-ray energies. Consequently the source would not function in millisecond pulsars or in high-field neutron stars. Whilst the rate of positron formation is contingent on many factors which are not well known, it is approximately a linear function of reverse-electron energy input to the polar cap: the constant of proportionality is denoted here by $W_{e}$, the number of positrons formed above the neutron-star atmosphere within the open magnetosphere per unit reverse-electron energy. For an outward-accelerated positron at $\eta$ with Lorentz factor $\gamma$ and velocity $\beta$, the transition rate from a blackbody radiation field of temperature $T_{s}$ is, \begin{eqnarray} \Gamma = 2\pi\int^{\theta_{max}}_{0}d(\cos\theta)\int^{\infty}_{0}d\omega \hspace{2cm} \\ \nonumber \hspace{2cm} \left( \frac{\omega^{2}\tilde{n}(\omega, T_{s})}{4\pi^{3}\hbar^{3}c^{2}}\right) \left(1 - \beta\cos\theta \right)\sigma(s), \end{eqnarray} in which $\cos \theta_{max} = (\eta^{2} - 1)^{1/2}/\eta$, the photon occupation number is $\tilde{n}$ and $\sigma(s)$ is a partial Klein-Nishina cross-section at Lorentz-invariant total energy-squared $s$, \begin{eqnarray} s = m^{2} + 2m\gamma\omega(1 - \cos\theta), \end{eqnarray} and $\omega$ is the blackbody photon energy. 
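The quoted equivalence between $\chi$ and $k_{\perp}$ follows directly from the definition $\chi = k_{\perp}B(\eta)/(2mcB_{c})$. A minimal numerical check, whose only inputs are $\chi = 0.070$ and $B_{c} = 4.41\times 10^{13}$ G, with $B$ expressed as $B_{12}$ in units of $10^{12}$ G:

```python
import math

# From chi = k_perp * B / (2 m c B_c), solve for k_perp * B_12 in units of m c.
B_c_12 = 44.1                      # B_c in units of 1e12 G
chi = 0.070
k_perp_B12 = 2.0 * chi * B_c_12    # ~6.2, consistent with the quoted k_perp B_12 = 6 m c
print(k_perp_B12)

# The factor exp(-4/(3 chi)) in equation (3) makes the attenuation an extremely
# steep function of chi, which is what produces the narrow window.
for c in (0.05, 0.070, 0.10):
    print(c, math.exp(-4.0 / (3.0 * c)))
```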
In the Klein-Nishina region of $s \gg m^{2}$, the cross-section is dominated by backward scattering owing to the u-channel pole and we therefore adopt the approximate expression for scattering into the backward hemisphere in the centre-of-momentum system, \begin{eqnarray} \sigma(s) = \pi\left(\frac{{\rm e}^{2}}{mc^{2}}\right)^{2}\left(\frac{2m^{2}}{s} \ln\frac{s}{m^{2}} - \frac{m^{2}}{s}\right) \end{eqnarray} for $s \geq 5m^{2}$, with linear interpolation for $m^{2} \leq s \leq 5m^{2}$. For the same reason, and following Hibschman \& Arons (2001), we assume the scattered photon energy to be the maximum, \begin{eqnarray} ck_{1} = m\gamma\left(1 - \frac{m^{2}}{s}\right), \hspace{2cm} \gamma \gg 1. \end{eqnarray} The direction of ${\bf k_{1}}$ is initially closely tangential to the local ${\bf B}$ and it is the increase in this angle as the photon propagates that determines the point at which pair creation occurs. The contribution of aberration to this angle is an order of magnitude smaller than that of the flux-line curvature and we neglect it. The intersection of a photon emitted tangentially at $\eta$ with a flux line at $\eta^{\prime}$ is at an angle, \begin{eqnarray} \frac{3u(\eta)}{4R\eta}\left(1 - \frac{\eta}{\eta^{\prime}}\right), \end{eqnarray} for dipole field geometry. Photon conversion to first-generation pairs occurs in the interval $\eta < \eta^{\prime} \leq 4\eta/3$ provided its momentum satisfies, \begin{eqnarray} k_{1} > k_{c} = \left(\frac{6mc}{B_{12}(1)}\right)\left(\frac{4\eta}{3}\right)^{3} \left(\frac{16R\eta}{3u(\eta)}\right), \end{eqnarray} in which $u$ is the lateral displacement of the Compton event from the magnetic axis. The electron and positron are in high Landau states radiating by synchrotron emission. We adopt the Hibschman \& Arons estimate of $0.22k_{1}/k_{c}$ for the number of higher-generation pairs thereby produced. 
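A sketch of the cross-section of equation (6) and the maximum scattered energy of equation (7), working in units $m = c = 1$ and in units of $\pi r_{e}^{2}$ for the cross-section. The threshold value used for the linear interpolation is an assumption here (half the Thomson cross-section, i.e. the backward hemisphere); the text does not specify it.

```python
import math

def sigma(s):
    """Partial (backward-hemisphere) cross-section, equation (6), units of pi r_e^2."""
    kn = lambda x: (2.0 / x) * math.log(x) - 1.0 / x
    if s >= 5.0:
        return kn(s)
    sigma_1 = 4.0 / 3.0   # assumed threshold value: sigma_T / 2 in these units
    return sigma_1 + (kn(5.0) - sigma_1) * (s - 1.0) / 4.0

def k1(gamma, s):
    """Maximum scattered photon energy, equation (7), valid for gamma >> 1."""
    return gamma * (1.0 - 1.0 / s)

# The cross-section falls off almost as 1/s deep in the Klein-Nishina regime:
for s in (5.0, 50.0, 500.0):
    print(s, round(sigma(s), 4))
```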
The unscreened acceleration potential is assumed to be, \begin{eqnarray} \hspace{1cm} V_{max}(\eta, u) = \hspace{4cm} \nonumber \\ 1.25\times 10^{3}\left(1 - \frac{1}{\eta^{3}}\right) \left(1 - \frac{u^{2}}{u^{2}_{0}}\right)\frac{B_{12}(1)}{P^{2}} \hspace{4mm} {\rm GeV} \end{eqnarray} (see Jones 2013) in which $u_{0}$ is the radius of the circular open magnetosphere at $\eta$, and $P$ the rotation period. Table 1 gives the mean reverse-electron energy input from a single positron in a partially-screened acceleration field $s_{V}V_{max}$. Inputs $\epsilon_{1}$ and $\epsilon^{ICS}$ from the first and from all generations of electrons are shown separately for values of $u = s_{u}u_{0}$. A distinction from Hibschman \& Arons is that we neglect screening of the Lense-Thirring potential by ICS reverse electrons. This is justified by the fact that for ${\bf \Omega}\cdot{\bf B} < 0$, the proton and positron current densities satisfy $J^{p} \gg J^{e}$: typically $J^{e}$ is of the order of $10^{-1}J^{p}$ and the reverse electrons from photo-ionization are the principal source of screening. The approximations on which Table 1 is based are such that the values of $\epsilon^{ICS}$ can be no more than a guide. The most serious, the assumption of dipole-field geometry, is unavoidable. \begin{table} \caption{This shows separately in units of GeV, the reverse-electron energy inputs per positron accelerated for first $(\epsilon_{1})$ and all generations of pairs $\epsilon^{ICS}$ as functions of the scale parameter $s_{V}$ which represents the degree of screening of the acceleration potential so that $V = s_{V}V_{max}$, $V_{max}$ being defined by equation (10). Columns 2 to 7 give these energies for positrons accelerated at radii $u = s_{u}u_{0}$ from the magnetic axis in a circular polar cap of radius $u_{0}$, with $s_{u} = 0.2, 0.5, 0.8$. The neutron-star parameters are $P = 0.81$ s and $B_{12} = 2.6$ for PSR 1931+24. 
The temperatures are $T_{s} = 2.5, 5.0, 10.0\times 10^{5}$ K in ascending order from the foot of the Table.} \begin{tabular}{@{}lrrrrrr@{}} \hline $V$ & $\epsilon_{1}$ & $\epsilon^{ICS}$ & $\epsilon_{1}$ & $\epsilon^{ICS}$ & $\epsilon _{1}$ & $\epsilon^{ICS}$ \\ \hline $V_{max}$ & GeV & GeV & GeV & GeV & GeV & GeV \\ \hline & $s_{u}$ = 0.2 & 0.2 & 0.5 & 0.5 & 0.8 & 0.8 \\ \hline 1.00 & 17.3 & 404.1 & 16.3 & 731.9 & 13.5 & 472.1 \\ 0.50 & 14.6 & 179.4 & 13.6 & 315.5 & 10.9 & 197.8 \\ 0.20 & 11.2 & 62.1 & 10.2 & 101.9 & 7.6 & 60.9 \\ 0.10 & 8.7 & 28.7 & 7.8 & 43.1 & 5.4 & 24.6 \\ 0.05 & 6.5 & 13.9 & 5.6 & 18.3 & 3.5 & 9.9 \\ 0.02 & 3.9 & 5.6 & 3.2 & 6.1 & 1.7 & 2.9 \\ 0.01 & 2.0 & 2.5 & 1.8 & 2.6 & 0.7 & 1.0 \\ \hline 1.00 & 3.6 & 86.0 & 3.4 & 154.4 & 2.7 & 96.2 \\ 0.50 & 3.0 & 37.1 & 2.8 & 64.4 & 2.1 & 38.6 \\ 0.20 & 2.2 & 12.1 & 1.9 & 19.6 & 1.3 & 10.9 \\ 0.10 & 1.6 & 5.3 & 1.4 & 7.8 & 0.9& 4.0 \\ 0.05 & 1.1 & 2.4 & 0.9 & 3.0 & 0.5 & 1.4 \\ 0.02 & 0.6 & 0.8 & 0.4 & 0.8 & 0.2 & 0.3 \\ 0.01 & 0.3 & 0.3 & 0.2 & 0.3 & 0.0 & 0.1 \\ \hline 1.00 & 0.7 & 17.8 & 0.7 & 31.5 & 0.5 & 18.8 \\ 0.50 & 0.6 & 7.4 & 0.5 & 12.6 & 0.4 & 7.1 \\ 0.20 & 0.4 & 2.2 & 0.3 & 3.5 & 0.2 & 1.8 \\ 0.10 & 0.3 & 0.9 & 0.2 & 1.3 & 0.1 & 0.6 \\ 0.05 & 0.2 & 0.4 & 0.1 & 0.4 & 0.1 & 0.2 \\ 0.02 & 0.1 & 0.1 & 0.0 & 0.1 & 0.0 & 0.0 \\ 0.01 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \end{tabular} \end{table} Following Harding \& Muslimov (2002), we adopt an open magnetosphere radius, \begin{eqnarray} u_{0}(\eta) = \left(\frac{2\pi R^{3}\eta^{3}}{cPf(1)}\right)^{1/2} \end{eqnarray} where $f(1) = 1.368$ for a neutron star of $1.4$ $M_{\odot}$ and radius $R = 1.2\times 10^{6}$ cm. The surface temperature $T_{s}$ is of that part of the surface visible from a radius $\eta$. The temperature dependence of $\epsilon^{ICS}$ is largely determined by the properties of blackbody radiation and we find $\epsilon^{ICS} \propto T_{s}^{2.3}$ for values of $s_{V} \sim 0.5$ that are relevant to self-sustaining ICS. 
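Equation (11) is straightforward to evaluate numerically. A sketch using the values quoted in the text ($R = 1.2\times 10^{6}$ cm, $f(1) = 1.368$) and $P = 0.81$ s for PSR 1931+24, taking $c \simeq 3\times 10^{10}$ cm s$^{-1}$:

```python
import math

c = 3.0e10      # cm/s (approximate)
R = 1.2e6       # cm
f1 = 1.368      # f(1) for a 1.4 solar-mass star, from the text

def u0(eta, P):
    """Open-magnetosphere radius of equation (11), cm."""
    return math.sqrt(2.0 * math.pi * R**3 * eta**3 / (c * P * f1))

print(u0(1.0, 0.81))   # ~1.8e4 cm: a polar cap roughly 180 m in radius
```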
The rotation period $P = 0.81$ s and polar-cap $B_{12}(1) = 2.6$ have been chosen to coincide with those of PSR 1931+24 (Manchester et al 2005). The production rate for protons is $W_{p}\epsilon^{ICS}$ per positron. Values of $W_{p}$ are approximately proportional to $Z^{-1}$, where $Z$ is here the mean atomic number of nuclei undergoing photo-absorption in the electromagnetic showers. Typical values are $0.2 - 0.5$ GeV$^{-1}$ (see Jones 2010, Table 1) and for values $s_{V} \approx 0.5$, of the order of $10$ protons are produced per positron at $5\times 10^{5}$ K. The particle flux can consist of ions, positrons and protons: but in what order of priority do they comprise a flux at the Goldreich-Julian value? Positrons are formed at low altitudes ($< u_{0}$) above the polar cap and thus are bound to be part of the flux. Protons are not in static equilibrium in the predominantly ion atmosphere in local thermodynamic equilibrium (LTE) and thus, having a higher charge-to-mass ratio, enter the flux in preference to ions. An excess of protons has time to fractionate, forming a further LTE layer at the top of the atmosphere. Ions enter the flux and undergo photo-ionization to a charge $Z_{\infty}$ only if the protons are insufficient to form such a layer or produce a Goldreich-Julian charge density. It is virtually impossible to estimate with confidence the value of $W_{e}$, but we adopt a conservative value $W_{e} = 10^{-1}W_{p}$. In this case it is clear from the $\epsilon^{ICS}$ in the Table that self-sustaining ICS pair creation would be possible given favourable conditions. These are: a polar-cap $B$ within the narrow window suitable for a neutron-capture $\gamma$-ray attenuation rate of $10^{-1} - 10^{-4}$ cm$^{-1}$, a sufficient $T_{s}$ and $V_{max}$, as may be the case in PSR 1931+24. The high values of $\epsilon^{ICS}$ are a consequence of higher generations of pairs created in regions of $V \rightarrow V_{max}$. 
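For orientation, equation (10) can be evaluated on the magnetic axis ($u = 0$) in the limit $\eta \rightarrow \infty$ for the PSR 1931+24 parameters used in Table 1. A minimal sketch:

```python
def V_max(eta, u, u0, B12, P):
    """Unscreened acceleration potential of equation (10), GeV."""
    return 1.25e3 * (1.0 - 1.0 / eta**3) * (1.0 - (u / u0)**2) * B12 / P**2

# eta -> infinity (a large value suffices), u = 0, with P = 0.81 s and B_12 = 2.6:
print(V_max(1e9, 0.0, 1.0, 2.6, 0.81))   # ~5e3 GeV
```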
But for $0.01 < s_{V} \leq 0.1$, typical of the mean potential in the radio-loud mode, it is clear that $\epsilon^{ICS}$ is not significant in comparison with the energy input from photo-ionized reverse-electrons. The description of processes at the polar cap can be formalized with the definitions $\epsilon W_{p} = K$, where $\epsilon$ is the reverse-electron energy per unit charge of ion accelerated, and $\epsilon^{ICS}W_{p} = K^{ICS}$. Then the ion, proton, and positron current densities $J^{z,p,e}$ are related by, \begin{eqnarray} J^{p}(t) + \tilde{J}^{p}(t) = \hspace{4cm} \nonumber \\\int^{t}_{-\infty}dt^{\prime}f_{p}(t - t^{\prime}) \left(KJ^{z}(t^{\prime}) + K^{ICS}J^{e}(t^{\prime}) \right) \end{eqnarray} in which $f_{p}$ is the proton diffusion function, normalized to unity and of scale given by a diffusion time $\tau_{p} \sim 1$ s. The rate at which protons enter and add to any LTE layer that might exist is $\tilde{J}^{p}$. Both $K$ and $K^{ICS}$ are functions of $V({\bf u},t^{\prime})$. A similar type of relation describes positron formation, $\epsilon^{ICS}W_{e} = K^{e}$, but the time delays are of the order of particle transit times and are negligible compared with $\tau_{p}$. Thus the expression reduces to, \begin{eqnarray} J^{e}(t)\left(1 - \frac{W_{e}}{W_{p}}K^{ICS}\right) = \frac{W_{e}}{W_{p}}KJ^{z}(t) \end{eqnarray} at any instant. Equation (12) holds locally at any polar-cap coordinate ${\bf u}$ in respect of $J^{p}$, $\tilde{J}^{p}$, and $J^{z}$, and approximately so for $J^{p}$, $\tilde{J}^{p}$, and $J^{e}$, whilst equation (13), based on single-photon pair creation, can be valid only for averages over the whole polar cap: equation (13) fails and self-sustaining positron creation occurs at the threshold $W_{p} = W_{e}K^{ICS}$. Equation (12) has been modelled in a very basic manner (Jones 2013) by representing the polar cap in terms of finite elements $\delta {\bf u}$, each contributing to the potential $V({\bf u},t)$, but with neglect of ICS. 
It provides a simulation of a radio-loud polar cap, downward fluctuations in $V$ corresponding with radio emission in accordance with equation (1) and condition (ii). As a mode it is stable under conditions in which $\tilde{J}^{p} = 0$. But large upward fluctuations of $V$ from its time average occur in the model at instants when $J^{z} = 0$ in all elements and $V \rightarrow V_{max}$. These intervals, which are nulls, are usually brief because $J^{z} = 0$ in $\delta {\bf u}$ means zero proton production in that element, so that eventually its $J^{p}$ falls below $J_{GJ}$, the Goldreich-Julian current density. Thus ion emission re-commences and screening by the reverse electrons reduces $V$. There is no transition to the ICS mode unless the threshold $W_{p} = W_{e}K^{ICS}$ is reached. This depends on suitable values of a number of parameters (surface temperature $T_{s}$, $B$ and $V_{max}$), a combination likely only in limited areas of the population $P - \dot{P}$ plane. Upward fluctuations of $V$ extend over increasing areas of the polar cap if $V_{max}$ is large enough. But provided there is a boundary condition on the cylindrical surface separating open from closed magnetospheres, there must be an annular region with outer boundary $u_{0}$ in which $V$ is not large enough to produce a proton current density $J^{p} = J_{GJ}$. Thus ion emission forms a part of the current density in this region but conditions (ii) and (iii) of Section 2 may not be satisfied, resulting in no observable radio emission. If the mode persists with $\tilde{J}^{p} > 0$ in the central region of the polar cap, an LTE atmosphere of protons grows, possibly to the extent of developing a liquid phase. Ion number densities at the surface of a neutron star have been calculated by Medin \& Lai (2006) and can be expressed as, \begin{eqnarray} N = 2.6\times 10^{26}Z^{-0.7}B^{1.2}_{12} \hspace{1cm} {\rm cm}^{-3}, \end{eqnarray} where $Z \geq 6$ is here the nuclear charge. 
The radiation length defined in terms of the Bethe-Heitler formulae (Bethe 1934) for bremsstrahlung and pair production cross-sections, with modified screening appropriate for the magnetic field, is then \begin{eqnarray} l_{r} = 1.6 Z^{-1.3}B^{-1.2}_{12}\left(\ln(12Z^{1/2}B^{-1/2}_{12})\right)^{-1} {\hspace{1cm}} {\rm cm}, \end{eqnarray} in which the bracketed term replaces the $\ln(183Z^{-1/3})$ term of the Bethe-Heitler expression (see Jones 2010). For protons, it may be preferable to use the linear-chain spacing found by Medin \& Lai, extended to a simple cubic lattice, giving a density $N = 5.5\times 10^{26}B^{1.2}_{12}$, about a factor of two larger than equation (14), and therefore a radiation length $l^{p}_{r}$ smaller by the same factor. The depth of any liquid phase is naturally limited to that necessary to contain the processes of the electromagnetic shower. Also, the nature of the shower is modified because in a proton liquid, Compton scattering is the dominant reaction: the mean free path for this is small compared with $l^{p}_{r}$. Furthermore, $W_{e}\rightarrow 0$ because there is no source of neutrons. Even if the proton LTE atmosphere or liquid phase exists over a large fraction of the polar cap, it can have no long-term stability. The reason is that within an element of area $\delta {\bf u}$, an ion current component screens effectively owing to the reverse-electron flux: protons or completely stripped ions of low $Z$ do not screen. Thus in an annular region of ${\bf u}$ near $u_{0}$, $\tilde{J}^{p} = 0$ and equation (12) shows that a downward fluctuation in $K$ and $K^{ICS}$ at $t^{\prime}$ leads to a downward fluctuation in $J^{p}(t)$ and an increase in $J^{z}(t)$ which further increases screening in that region. The system is one of extraordinary complexity and, as indicated in Section 1, this paper is limited to defining the conditions under which the two modes exist and showing how the transitions between them can occur. 
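Equations (14) and (15), and the factor-of-two statement for the proton-chain density, can be checked numerically. A sketch for $Z = 26$ and the polar-cap field $B_{12} = 2.6$ used elsewhere in the text:

```python
import math

def N_ion(Z, B12):
    """Surface ion number density, equation (14), cm^-3."""
    return 2.6e26 * Z**-0.7 * B12**1.2

def l_r(Z, B12):
    """Radiation length, equation (15), cm."""
    return 1.6 * Z**-1.3 * B12**-1.2 / math.log(12.0 * Z**0.5 * B12**-0.5)

print(N_ion(26, 2.6))   # ~8e25 cm^-3
print(l_r(26, 2.6))     # ~2e-3 cm

# The proton-chain density N = 5.5e26 * B12^1.2 compared with equation (14)
# evaluated for protons (Z = 1):
print(5.5e26 * 2.6**1.2 / N_ion(1, 2.6))   # ~2.1, "about a factor of two"
```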
Neutrons and protons are generated principally at the shower maximum at a depth of about $10$ radiation lengths. The nuclear charge of ions entering the LTE atmosphere is therefore reduced to $Z_{s}$ from the assumed original $Z_{0} = 26$. It has been shown previously (Jones 2011) that instability, in the form of a time-dependent $Z_{s}$, ensues if at any instant $Z_{s}$ is such that the ion is completely stripped whilst in the LTE atmosphere. In this case, there is no reverse-electron flux at $\delta {\bf u}$ so that eventually higher values of nuclear charge reach the surface and the reverse-electron flux re-commences. The time-scale for this is given broadly by the elapsed time in which one radiation length of ions leaves the surface in a Goldreich-Julian flux. It is, \begin{eqnarray} t_{rl} = 2.1\times 10^{5}\left(\frac{P}{Z_{s}B_{12}\ln(12Z_{s}^{1/2}B^{-1/2}_{12})}\right) {\hspace{5mm}} {\rm s}. \end{eqnarray} Chaotic behaviour of this nature is likely to contribute to transitions from ICS to the normal radio-loud state. \section{Conclusions} Application of the results found in Section 3 to specific pulsar observations is on the basis that an ion-proton plasma is the source of the coherent radio emission normally observed at frequencies of the order of $1$ GHz. Electron-positron pairs in the ICS mode cannot lead to radio emission of the intensity usually seen, principally because the lepton Lorentz factors are too large to allow a significant growth rate in any collective mode that could be the source. Equation (9) gives $k_{c} \sim 10^{4}$ for the minimum momentum of a photon so that the lepton Lorentz factors, substituted into equation (1), would not give an adequate growth factor exponent, of the order of $\Lambda \approx 30$. Thus neither a complete ICS mode nor an area of polar cap with a short-term $\tilde{J}^{p} \neq 0$ can be a source of normal radio emission, although we have not studied the possibility that some form of low-intensity emission might result. 
Switching to or from a radio-loud state is known to occur within a rotation period. Fluctuations in the screening of $V_{max}$ are inherent in the model owing to reverse-electron fluxes and consequent proton production. The basic time-scale here is the proton diffusion time from shower maximum to the surface, estimated to be of the order of $\tau_{p} \sim 1$ s, as in equation (12). Proton fluxes provide no screening so that the time-scale for large fluctuations away from $V_{max}$ must be no greater than $\tau_{p}$. Radio emission needs modest values of the local ion Lorentz factor, of the order of $10 - 30$, and hence large downward fluctuations from $V_{max}(\eta,u)$. The dependence of the amplitude growth exponent $\Lambda$ on Lorentz factor in equation (1) shows that the maximum mode amplitude attained is an extremely sensitive function of the acceleration potential, a small change in which can result in failure to reach a turbulent state and the cessation of radio emission, or the reverse. The rotation period of mode-changing pulsars is usually of the order of $P = 1$ s, hence the model is at least qualitatively consistent with observation. Fluctuations downward from $V_{max}$ are a feature of the basic model polar cap studied by Jones (2013) and their magnitude is an increasing function of $T_{s}$. Photo-ionization cross-sections are large and transitions occur promptly once blackbody photons in the rest-frame of an ion reach the threshold energy. Thus the ion Lorentz factor at this point is $\propto T_{s}^{-1}$. Consequently, photo-ionization screens the Lense-Thirring potential very efficiently in young ${\bf \Omega}\cdot{\bf B} < 0$ neutron stars, so that upward fluctuations necessary for a transition to the ICS mode have negligible probability. The temperature $T_{s}$ is that of the surface visible from radius $\eta$ on the magnetic axis and is in the local proper frame. 
Owing to this and to the ${\bf B}$-dependent anisotropy of thermal conductivity in the neutron-star crust, it will be larger than the whole-surface average seen by an observer. Given these factors and on the basis of temperatures listed by \"{O}zel (2013) for a group of pulsars $\tau_{c} < 1$ Myr in age, we shall assume that $T_{s} = 5\times 10^{5}$ K is a reasonable estimate for any set of pulsars that have ages of the order of $1$ Myr including those showing long-term intermittency. Here we refer to PSR 1931+24 (Kramer et al 2006), J1832+0029 (Lorimer et al 2012), J1841-0500 (Camilo et al 2012), J1910+0157 and J1929+1357 (Lyne et al 2017). Fluctuations in $V$ are larger at this temperature whilst the establishment of a long-term ICS mode remains possible. Table 1 indicates that this ceases to be so at lower temperatures, which is consistent with the position in the $P- \dot{P}$ plane at which intermittency is seen. At low $T_{s}$, where ICS modes cannot be self-sustaining, an ion-proton plasma remains possible because photo-ionization cross-sections are large near thresholds. But higher Lorentz factors are needed to reach the thresholds: thus the time-averaged potential $\langle V({\bf u},t)\rangle$ is increased, as in consequence are values of $K$. On average, elements of polar-cap area supporting an ion-proton flux become more sparse (see the basic model in Jones 2013). Emission is within a cone whose semi-angle with respect to the local source-region magnetic flux lines is finite and determined by the ion Lorentz factors. Hence an observer sees emission from flux lines in a band across the polar cap and will register a null if there is no ion-proton current density within the band that satisfies conditions (i) - (iv) of Section 2. This accounts for the increase in short ($< 10P$) nulls as a function of age (Wang et al 2007).
Longer nulls can be caused by the variability in the surface nuclear charge $Z_{s}$ on time-scales derived from equation (16) and described at the end of Section 3. Here, if $\epsilon^{ICS}$ is negligible, and averaged over time at any ${\bf u}$ within the polar cap, \begin{eqnarray} Z_{0} - \langle Z_{s}\rangle = \langle KZ_{s}\rangle, \end{eqnarray} so that increasing $K$ is certainly accompanied by decreasing $\langle Z_{s}\rangle$, creating the conditions for instability described previously in Jones (2011). In this stage of evolution, the mean value of $V$ increases until condition (iii) ceases to be satisfied and the Langmuir mode growth-rate is too small to produce strong turbulence. This is the most likely reason for the cessation of emission at an age which is a function of the parameters $P$, $B_{12}$, $T_{s}$ and $Z_{0}$. It is consistent with the observed density of pulsars in the logarithmic $P - \dot{P}$ diagram which indicates not a death-line but the fact that pulsar deaths occur from a relatively early time, of the order of $10$ Myr onwards. The position of the Rotating Radio Transients (RRATs) in the sequence is enigmatic. For one half of those listed by Keane et al (2011) with values of $P$ and $B$, equation (10) gives $V_{max}(\infty,0) < 10^{3}$ GeV, which is likely to be too small to support a self-sustaining ICS mode. Also, some values of $B_{12}$ are too large to facilitate the optimum capture $\gamma$-ray attenuation rates of $10^{-1}- 10^{-4}$ cm$^{-1}$. It is not even obvious that the radio emission conforms with the normal ion-proton plasma expectation. Karastergiou et al (2009) were able to study a limited number of single pulses from J1819-1458 but found circular polarization that is too small to provide evidence for an ion-proton plasma source. Time intervals of $10^{2-4}$ s between successive intervals of emission suggest involvement of the surface nuclear charge with time-scales given by equation (16).
However, in the case of very high fields ($>B_{c}$) low energy electron-positron pairs may be present and the Langmuir-mode dispersion relation so modified that growth is prevented. We refer to Jones (2015) for details of this additional explanation for nulls. Mode-changes are much less frequently observed than nulls (see the review of Wang et al 2007) and are more difficult to characterize. But we can consider two separate classes: those involving a change in mean pulse profile only and a very small number of cases in which X-ray emission is also observed. The former are the more numerous and have been reported most recently by Lyne et al (2010; 6 pulsars), Young et al (2014; J1107-5907), Sobey et al (2015; B0823+26), and Brook et al (2016; 9 pulsars). With the exception of J1107-5907, these are all less than or of the order of $1$ Myr in age, and 4 are listed in the Second Fermi LAT Catalogue of $\gamma$-emitting pulsars (Abdo et al 2013). Lyne et al associated profile changes directly with changes in spin-down rate $\delta\dot{P}$ measured over intervals of $100 - 400$ d. The later paper of Brook et al is of particular interest because it looked for changes in the relative flux density of different components in the profile of each pulsar. These authors found profile variability but, except in the case of J1602-5100 with $\delta\dot{P}/\dot{P} = 0.05$ over a $600$ d interval, were unconvinced that a simple two-state spin-down model best fitted the data. Values of $B_{12}P^{-2}$ for each of these pulsars with a temperature $T_{s} = 5\times10^{5}$ K are adequate for the formation of an ICS state, as described in Section 3. (The only exception is J1107-5907 with $B = 4.8\times 10^{10}$ G which is at least an order of magnitude too small to support the pair creation process described there.) Formation of this state changes both the rate and the position at which particles pass through the light cylinder. 
The net charge on the star must remain approximately constant on time-scales of the order of $P$. This can be achieved by particle fluxes in the open sector of the magnetosphere including the outer gap, possibly also by a return current. It is likely that an ICS state forms over an area of the polar cap with values of $u$ near the maximum in $\epsilon^{ICS}$, referred to as a partial ICS mode, with some ICS pair creation. This alters the flux and nature of particles reaching the light cylinder in the beam accelerated from the polar cap. Simultaneously, the flux of particles leaving through acceleration in the outer gap must change in order to maintain the overall charge balance. The very high $\gamma$-ray luminosity of the older neutron stars listed by Abdo et al (2013) shows that particle acceleration in the outer gap can be the origin of a large fraction of the spin-down torque. Thus changes caused by a transition to even a partial ICS mode appear capable of causing the small $\delta\dot{P}/\dot{P}$ observed. Mode changes in pulsars with observed X-ray emission are very limited in number. Hermsen et al (2017) have observed radio and X-ray emission simultaneously in PSR B1822-09. Mode changes are seen in the radio with a mean separation of about $200$ s but no mode changes, synchronous or otherwise, appear in the X-rays. The outstanding \textit{sui generis} pulsar is B0943+10. The existence of synchronous mode-changes in radio and X-ray emission was first observed by Hermsen et al (2013) and then further by Mereghetti et al (2016). Of the two modes, the B-mode has the stronger radio flux and the weaker X-ray flux, and the Q-mode vice versa. Emission of X-rays is incoherent and requires a leptonic source. The period is $P = 1.10$ s and $B_{12} = 2.0$, with $\tau_{c} = 5.0$ Myr: ICS formation is possible given the criteria adopted here and it would be tempting in view of ICS electron-positron production to assign the Q-mode to a partial ICS state.
But that would introduce the problem of understanding the origin of the Q-mode radio emission which, although of approximately half the B-mode intensity, has a very different single-pulse structure consisting of sporadic bright pulses distributed with chaotic phases within the integrated profile (Bilous et al 2014). It remains an enigmatic object. Mode changes as described here must certainly change particle fluxes at the light-cylinder radius, and their relation to recent work (Brambilla et al 2018, and work cited therein) would potentially be of interest. Unfortunately, these plasma-physics computational techniques appear to have been applied exclusively to the ${\bf \Omega}\cdot{\bf B} > 0$ case with various conditions on particle injection so that a useful comparison between inner and outer magnetosphere work is difficult. The intermittent pulsars are cases in which ICS has an unambiguous role. Values of $P$ and $B_{12}$ can support a self-sustaining ICS state according to the criteria of this paper, as in Table 1 which refers to B1931+24. All members of the set are very closely positioned in the logarithmic $P - \dot{P}$ plane. Time-scales can be as long as $10^{7-8}$ s, several orders of magnitude larger than $t_{rl}$, and difficult to associate with any other obvious aspect of neutron-star physics. The ICS mode has been demonstrated here as a state of the magnetosphere with the possibility of long-term but not absolute stability. The approximate halving of spin-down rate correlated with the transitions (Lyne et al 2006, 2017) requires a significant change in magnetospheric structure in the vicinity of the light cylinder to alter the torque. The ICS mode certainly makes such changes possible because, as we noted previously, the flux of particles and the position at which they pass through the light cylinder are substantially modified. But further consideration of this is far beyond the scope of this paper.
More generally, it must be admitted that some objects remain enigmatic in the light of the ICS mode study described here. The present work does rely on parameters which certainly exist but are not well known. Nonetheless, it provides a physical understanding of nulls and mode-changes based on the nature of the plasma as discussed in the early paragraphs of Section 1. \section*{Acknowledgments} It is a pleasure to thank the anonymous referee for questions that have much improved the presentation of this work.
\section{Introduction} \label{sec:intro} We consider percolation on a locally finite rooted tree $T$: each edge is open with probability $p \in(0,1)$, independently of all others. Let $\rtt$ denote the root of $T$ and $\CC_p$ be the open $p$-percolation cluster of the root. We may consider the \emph {survival probability} $\theta_T(p) := \P[|\CC_p| = +\infty]$ and note that $\theta_T$ is an increasing function of $p$. There thus exists a \emph{critical percolation parameter} $p_c \in[0,1]$ so that $\theta_T(p) = 0$ for all $p \in[0,p_c)$ and $\theta_T(p) > 0$ for $p \in(p_c,1]$. If $T$ is a regular tree where each non-root vertex has degree $d + 1$---i.e. each vertex has $d$ children---then the classical theory of branching processes shows that $p_c = \frac{1}{d}$ and $\theta_T(p_c) = 0$ (see, for instance, \cite{athreya-ney}). Since critical percolation does not occur, we may consider the \emph {incipient infinite cluster} (IIC), in which we condition on critical percolation reaching depth $M$ of $T$ and take $M$ to infinity. The IIC for regular trees was first constructed and considered by Kesten in \cite{kesten-subdiffusive}. In that work, along with \cite {barlow-kumagai}, the primary focus was on simple random walk on the IIC for regular trees. Our focus is on three elementary quantities for random $T$: the probability that critical percolation reaches depth $n$; the number of vertices of $\mathcal{C}_p$ at depth $n$ conditioned on percolation reaching depth $n$; and the number of vertices in the IIC at depth $n$. For regular trees, these questions were answered in the study of critical branching processes. In fact, these classical results apply to \emph{annealed} critical percolation on Galton-Watson trees. 
If we generate a Galton-Watson tree $T$ with progeny distribution $Z \geq1$ with $\E[Z] > 1$, we may perform $p_c = 1/\E[Z]$ percolation at the same time as we generate $T$; this is known as the \emph{annealed} process---in which we generate $T$ and percolate simultaneously---and is equivalent to generating a Galton-Watson tree with offspring distribution $\Zt:= \Bin(Z,p_c)$. Since $\E[\Zt] = 1$, this is a critical branching process and thus the classical theory can be used: \begin{thm}[\cite{kesten-ney-spitzer}] \label{thm:annealed} Suppose $\E[Z^2] < \infty$, and set $Y_n$ to be the set of vertices at depth $n$ of $T$ connected to the root in $p_c = 1/\E[Z]$ percolation. Then \begin{enumerate} \item[$(a)$] The annealed probability of surviving to depth $n$ satisfies \[ n \cdot\P[|Y_n| > 0] \to\frac{2}{\Var[\widetilde{Z}]} = \frac{2 \E[Z]^2}{\E[Z(Z-1)]}\,. \] \item[$(b)$] The annealed conditional distribution of $|Y_n|/n$ given $|Y_n| > 0$ converges in distribution to an exponential law with mean $\frac{\E[Z(Z-1)]}{2\E[Z]^2}$ as $n\to\infty$. \end{enumerate} \end{thm} Under the additional assumption of $\E[Z^3] < \infty$, parts $(a)$ and $(b)$ are due to Kolmogorov \cite{kolmorogov1938} and Yaglom \cite{yaglom} respectively; as such, they are commonly referred to as Kolmogorov's estimate and Yaglom's limit law. For a modern treatment of these classical results, see \cite{LPP-95} or \cite[Section $12.4$]{LP-book}. Although less widely known, Theorem \ref{thm:annealed} quickly gives a limit law for the size of the annealed IIC. \begin{corollary} \label{cor:IIC-annealed} If $\E[Z^2] < \infty$, let $C_n$ denote the number of vertices at depth $n$ in the annealed incipient infinite cluster. Then $C_n / n$ converges in distribution to the random variable with density $\lambda ^2 x e^{-\lambda x }$ with $\lambda:=\frac{2 \E[Z]^2}{\E[Z(Z-1)]}$ on $[0,\infty)$.
In other words, \[ \lim_{n \to\infty}\left(\lim_{M\to\infty}\P[ |Y_n|/n \in(a,b) \| |Y_M| > 0 ] \right)= \int_{a}^b \lambda^2x e^{-\lambda x}\,dx \] for each $a < b.$ \end{corollary} This can be easily proven from Theorem \ref{thm:annealed} using an argument similar to the proof of Theorem \ref{thm:IIC}, and thus the details are omitted. Our goal is to upgrade Theorem \ref{thm:annealed} and Corollary \ref{cor:IIC-annealed} to hold for the \emph{quenched} process; that is, rather than generate $T$ and perform percolation at the same time as in the annealed case, we first generate $T$ and then perform percolation on the resulting fixed tree. Before stating the quenched results, we recall some notation and facts from the theory of branching processes. If we allow $\P[Z = 0] > 0$ and condition on the resulting tree being infinite, we may pass to the reduced tree as in \cite[Chapter $5.7$]{LP-book} in which we remove all vertices that have finitely many descendants; this results in a new Galton-Watson process with some offspring distribution $\tilde{Z} \geq1$. We therefore assume without loss of generality that $Z \geq1$. For a Galton-Watson tree $T$, let $Z_n$ denote the number of vertices at distance $n$ from the root; then the process $W_n = Z_n / (\E[Z])^n$ converges almost-surely to some random variable $W$. A first quenched result is that of \cite{lyons90}, which states that for a.e.\ supercritical Galton-Watson tree with progeny distribution $Z$, we have that the critical percolation probability is $p_c = 1/\E[Z]$; furthermore, for almost every Galton-Watson tree $\TT$, $\theta_\TT(p) = 0$ for $p \in[0,p_c]$ and $\theta_\TT(p)>0$ for $p \in (p_c,1]$. For a fixed tree $T$, let $\P_T[\cdot]$ be the probability measure induced by performing $p_c$ percolation on $T$. When $T$ is random, this is a random variable and we may ask about the almost sure behavior of certain probabilities.
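Kolmogorov's estimate of Theorem \ref{thm:annealed}$(a)$ can be checked numerically by iterating the probability generating function of $\Zt$. The following is a minimal Python sketch (not part of the argument), taking the illustrative deterministic case $Z \equiv 2$, so that $p_c = 1/2$, $\Zt = \Bin(2,1/2)$ and the predicted limit is $2/\Var[\Zt] = 4$:

```python
# Extinction probability by generation n of a critical branching process:
#   q_0 = 0,  q_n = f(q_{n-1}),  where f is the pgf of the offspring law.
# For offspring Bin(2, 1/2): f(s) = ((1 + s)/2)**2.
# Kolmogorov's estimate predicts n * P[survive to depth n] -> 2/Var = 4.

def survival_probability(n):
    """P[|Y_n| > 0] for offspring law Bin(2, 1/2), computed exactly."""
    q = 0.0
    for _ in range(n):
        q = ((1.0 + q) / 2.0) ** 2
    return 1.0 - q

for n in (10, 100, 1000):
    print(n, n * survival_probability(n))  # increases toward the limit 4
```

Replacing $f$ lets one experiment with other offspring laws satisfying the moment assumptions; the convergence is visibly slow, consistent with the first-order nature of the estimate.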
Our main results are summarized in the following theorem: \begin{thm}\label{thm:main} Let $\TT$ be a Galton-Watson tree with progeny distribution $Z \geq1$ with $\E[Z] > 1$. Suppose $\E[Z^p] < \infty$ for each $p \geq1$. Set $\lambda:= \frac{2 \E[Z]^2}{\E[Z(Z-1)]}$ and let $Y_n$ be the set of vertices at depth $n$ of $\TT$ connected to the root in $p_c = 1/\E[Z]$ percolation. Then for a.e. $\TT$ we have \begin{enumerate} \item[$(a)$] $n\cdot\P_\TT[|Y_n| > 0] \to W \lambda$ a.s. \item[$(b)$] The conditioned variable $(|Y_n| / n \| |Y_n| > 0)$ converges in distribution to an exponential random variable with mean $\lambda^{-1}$ a.s. \item[$(c)$] Let $\mathbf{C}_n$ denote the number of vertices in the quenched IIC of $\TT$ at depth $n$. Then $\mathbf{C}_n / n$ converges in distribution to the random variable with density $\lambda^2 x e^{-\lambda x}$ a.s. \end{enumerate} \end{thm} Note that, surprisingly, the limit laws of parts $(b)$ and $(c)$ of Theorem \ref{thm:main} do not depend at all on $\TT$ itself but just on the distribution of $Z$. This is in sharp contrast to the case of near-critical and supercritical percolation on Galton-Watson trees, in which the behavior is dependent on the tree itself \cite{mpr-quenched}. One possible justification for this lack of dependence on $W$ is that conditioning on $|Y_n| > 0$ forces a certain structure on the percolation cluster near the root; since $W$ is mostly determined by the levels of $\TT$ near the root, the behavior when conditioned on $|Y_n| > 0$ for large $n$ does not depend on $W$. Part $(a)$ of Proposition \ref{pr:spread} corroborates this heuristic explanation. The three parts of Theorem \ref{thm:main} are Theorems \ref{thm:surv-prob}, \ref{thm:cond-surv} and \ref{thm:IIC} respectively. The proof of part $(a)$ utilizes its annealed analogue, Theorem \ref{thm:annealed}$(a)$, along with a law of large numbers argument.
Part $(b)$ is proven by the method of moments building on the work of \cite{mpr-quenched}. Part $(c)$ follows from there with a similar law of large numbers argument combined with two short facts about the structure of the percolation cluster conditioned on $|Y_n| > 0$ (this is Proposition \ref{pr:spread}). \begin{remark} Theorem \ref{thm:main} assumes that $\E[Z^p] < \infty$ for each $p \geq1$, and we suspect that this condition is an artifact of the proof. Since we use the method of moments, it is natural that we require all moments of the underlying distribution to be finite. We suspect that less rigid conditions are sufficient, but this would require a different proof strategy than the method of moments, perhaps utilizing a stronger anti-concentration statement in the vein of Proposition \ref{pr:spread}. \end{remark} \section{Set-up and notation} We begin with some notation and a brief description of the probability space on which we will work. Let $Z$ be a random variable taking values in $\{1,2,\ldots\}$ with $\mu:= \E[Z] > 1$ and $\P[Z = 0] = 0$. Define its \emph{probability generating function} to be $\phi(z):= \sum_k \P[Z = k] z^k$. Let $\TT$ be a random locally finite rooted tree with law equal to that of a Galton-Watson tree with progeny distribution $Z$ and let $(\Omega_1,\T,\GW)$ be the probability space on which it is defined. Since we will perform percolation on these trees, we also use variables $\{U_i\}_{i = 1}^\infty$ where the $U_i$ are i.i.d. random variables uniform on $[0,1]$; let $(\Omega_2,\F_2, \P_2)$ be the corresponding probability space. Our canonical probability space will be $(\Omega,\F,\P)$ with $\Omega:= \Omega_1 \times\Omega_2$, $\F:= \T\otimes\F_2$ and $\P:= \GW\times\P_2$. We interpret an element $\omega=(T,\omega_2) \in\Omega$ as the tree $T$ with edge weights given by the $U_i$ random variables. To obtain $p$ percolation, we restrict to the subtree of edges with weight at most $p$.
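This two-stage construction is easy to simulate. The Python sketch below (purely illustrative; the offspring law is supplied by the caller) grows the tree level by level, attaches an independent uniform weight to each edge, and records $Z_n$ together with $|Y_n|$, the number of depth-$n$ vertices joined to the root by open edges:

```python
import random

def quenched_percolation_levels(offspring, p, depth, rng):
    """Grow a Galton-Watson tree level by level; each edge carries an
    independent Uniform[0,1] weight and is open iff its weight is at most p.
    Returns (Z, Y): Z[n] is the number of vertices at depth n, and Y[n] is
    the number of depth-n vertices connected to the root by open edges."""
    level = [True]  # one root, trivially connected to itself
    Z, Y = [1], [1]
    for _ in range(depth):
        nxt = []
        for connected in level:
            # The tree is generated first: children exist whether or not
            # the open path from the root survives to their parent.
            for _ in range(offspring(rng)):
                open_edge = rng.random() <= p
                nxt.append(connected and open_edge)
        level = nxt
        Z.append(len(level))
        Y.append(sum(level))
    return Z, Y

# Illustrative run: deterministic Z = 2 (the binary tree), so p_c = 1/2.
rng = random.Random(0)
Z, Y = quenched_percolation_levels(lambda r: 2, 0.5, 20, rng)
```

With a deterministic offspring law this reduces to critical percolation on the regular binary tree; substituting a genuinely random `offspring` gives the quenched setting studied here.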
Since we are concerned with quenched probabilities, we define the measure $\P_\TT[\cdot] := \P[\cdot\| \TT] = \P[\cdot \| \T]$. Since this is a random variable, our goal is to prove theorems $\GW$-a.s. We employ the usual notation for a rooted tree $T$, Galton-Watson or otherwise: $\rtt$ denotes the root; $T_n$ is the set of vertices at depth $n$; and $Z_n := |T_n|$. In the case of a Galton-Watson tree $\TT $, we define $W_n := Z_n / \mu^n$ and recall that $W_n \to W$ almost surely. Furthermore, if $\E[Z^p] < \infty$ for some $p \in[1,\infty )$, we in fact have $W_n \to W$ in $L^p$ \cite[Theorems $0$ and $5$]{bingham-doney74}. In the Galton-Watson case, define $\T_n := \sigma(\TT_n)$; then $(\T_n)_{n = 0}^\infty$ is a filtration that increases to $\T$. For a vertex $v$ of $T$, define $T(v)$ to be the descendant tree of $v$ and extend our notation to include $T_n(v), Z_n(v), W_n(v)$ and $W(v)$. For vertices $v$ and $w$, write $v \leq w$ if $v$ is an ancestor of $w$. For percolation, recall that the critical percolation probability for $\GW$-a.e. $\TT$ is $p_c:= 1/\mu$ and that percolation does not occur at criticality \cite{lyons90}. For vertices $v$ and $w$ with $v \leq w$, let $\{v \conn w\}$ denote the event that there is an open path from $v$ to $w$ in $p_c$ percolation; let $\{v \conn(u,w)\}$ be the event that $v$ is connected to both $u$ and $w$ in $p_c$ percolation; for a subset $S$ of $\TT$, let $\{v \conn S \}$ denote the event that $v$ is connected to some element of $S$ in $p_c$ percolation; lastly, let $Y_n$ be the set of vertices in $\TT_n$ that are connected to $\rtt$ in $p_c$ percolation. \section{Quenched results} \subsection{Moments} For $k \geq j$, let $\mathcal{C}_j(k)$ denote the set of $j$-compositions of $k$, i.e.\ ordered $j$-tuples of positive integers that sum to $k$. Define \[ c_{k,j} := p_c^k \sum_{a \in\mathcal{C}_j(k)} m_{a_1} m_{a_2}\cdots m_{a_j} \] where $m_r:= \E[\binom{Z}{r}]$. 
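The coefficients $c_{k,j}$ can be computed directly by enumerating compositions. A short Python sketch (illustrative only) does this, and also verifies, for the deterministic example $Z \equiv 2$, the identity $c_{k,k-1} = (k-1)p_c^2\,\phi''(1)/2$ used in the proof of Proposition \ref{pr:factorial-moments} below:

```python
from math import comb

def compositions(k, j):
    """Yield all ordered j-tuples of positive integers summing to k."""
    if j == 1:
        yield (k,)
        return
    for first in range(1, k - j + 2):
        for rest in compositions(k - first, j - 1):
            yield (first,) + rest

def c_kj(k, j, m, p_c):
    """c_{k,j} = p_c^k * sum over j-compositions a of k of m(a_1)*...*m(a_j),
    where m(r) = E[binom(Z, r)]."""
    total = 0.0
    for a in compositions(k, j):
        prod = 1.0
        for part in a:
            prod *= m(part)
        total += prod
    return p_c ** k * total

# Deterministic Z = 2: m(r) = binom(2, r), p_c = 1/2, phi''(1) = E[Z(Z-1)] = 2.
m = lambda r: float(comb(2, r))
for k in range(2, 7):
    # Compositions of k into k-1 parts have exactly one part equal to 2.
    assert abs(c_kj(k, k - 1, m, 0.5) - (k - 1) * 0.5 ** 2 * 2 / 2) < 1e-12
```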
We use the following result from \cite{mpr-quenched}: \begin{thm}[\cite{mpr-quenched}] \label{thm:martingales} Define \[ M_n^{(k)} := \E_\TT\left[\binom{|Y_n|}{k}\right] - \sum_{i = 1}^{k-1}c_{k,i} \sum_{j = 0}^{n-1} \E_\TT\left[ \binom{|Y_j|}{i} \right] \,. \] If $\E[Z^{2k}]< \infty$, then $M_n^{(k)}$ is a martingale with respect to the filtration $(\T_n)$, and there exist constants $C_k$ and $c_k$ so that \[ \Vert M_{n+1}^{(k)} - M_n^{(k)} \Vert_{L^2} \leq C_k e^{-c_k n}\,. \] \end{thm} While Theorem \ref{thm:martingales} is not stated precisely this way in \cite{mpr-quenched}, the martingale property follows from \cite [Lemma $4.1$]{mpr-quenched}, while the $L^2$ bound on the increments is given in \cite[Theorem $4.4$]{mpr-quenched}. This gives us the leading term of each $\E_\TT\left[|Y_n|^k\right]$. \begin{proposition}\label{pr:factorial-moments} For each $k$, \[ \E_\TT\left[|Y_n|^k\right]n^{-(k-1)} \to k! \left(\frac{p_c^2 \phi''(1)}{2} \right)^{k-1} W \] almost surely and in $L^2$. \end{proposition} \begin{proof} By Theorem \ref{thm:martingales}, $M_n^{(k)}$ is a martingale with uniformly bounded $L^2$ norm for each $k$. By the $L^p$ martingale convergence theorem, $M_n^{(k)}$ converges in $L^{2}$ and almost surely. We now proceed by induction on $k$. For $k = 1$, $\E_\TT [|Y_n|] = W_n$ which converges to $W$. Suppose that the proposition holds for all $j < k$. Then by convergence of $M_n^{(k)}$, \[ \E_\TT\left[\binom{|Y_n|}{k} \right]n^{-(k-1)} = \sum_{i =1}^{k-1} c_{k,i} n^{-(k-1)}\sum_{j = 0}^{n-1} \E_\TT\left[\binom {|Y_j|}{i} \right] + o(1) \] where the $o(1)$ term is both in $L^2$ and almost surely. By induction, the leading term is the contribution from $i = k-1$. Noting that $c_{k,k-1} = (k-1)p_c^2 \frac{\phi''(1)}{2}$ and the fact that $\sum_{j = 0}^{n-1} j^d \sim\frac{1}{d+1}n^{d+1}$ completes the proof. \end{proof} \subsection{Survival probabilities} Throughout, define $\lambda:= \frac{2}{p_c^2 \phi''(1)}$. 
Our first task is to find a quenched analogue of Kolmogorov's estimate: \begin{thm} \label{thm:surv-prob} If $\E[Z^4] < \infty$, then \[ n\cdot\P_\TT[|Y_n| > 0] \to W \lambda \] almost surely. \end{thm} The proof utilizes the Bonferroni inequalities. In order to control the second-order term, the variance of a sum of pairs is calculated, thereby introducing the requirement of $\E[Z^4] < \infty$. We begin first by proving upper and lower bounds: \begin{lemma} \label{lem:prop-sandwich} For each $n$, \begin{equation*} \frac{n\cdot\E_\TT[|Y_n|]^2}{\E_\TT[|Y_n|^2]}\leq n\cdot\P_\TT [|Y_n| > 0 ] \leq\frac{2 \overline{W}}{1 - p_c} \end{equation*} where $\overline{W} = \sup_n W_n$. \end{lemma} \begin{proof} The lower bound is the Paley-Zygmund inequality. For the upper bound, we use \cite[Theorem 5.24]{LP-book}: \[ \P_\TT[|Y_n| > 0] \leq\frac{2}{\mathscr{R}(\rtt\conn\TT_n)} \] where $\mathscr{R}(\rtt\conn\TT_n)$ is the equivalent resistance between the root and $\TT_n$ when all of $\TT_n$ is shorted to a single vertex and each edge branching from depth $k-1$ to $k$ has resistance $\frac{1 - p_c}{p_c^k}$. Shorting together all vertices at depth $k$ for each $k$ gives the lower bound \begin{equation*} \mathscr{R}(\rtt\conn\TT_n) \geq\sum_{k = 1}^{n}\frac{1 - p_c}{Z_k p_c^k} = \sum_{k = 1}^{n}\frac{1 - p_c}{W_k}\geq(1 - p_c)\frac{n}{\overline{W}}\,.\qedhere \end{equation*} \end{proof} \noindent{\emph{Proof of Theorem }\ref{thm:surv-prob}: } For each fixed $m < n$, the Bonferroni inequalities imply \begin{equation} \label{eq:bonf} \left|n\P_\TT[\rtt\conn\TT_n] - n\sum_{v \in\TT_m}\P_\TT [\rtt\conn v \conn\TT_n]\right| \leq n\sum_{u,v \in\binom{\TT _m}{2}} \P_\TT[\rtt\conn(u,v) \conn\TT_n]\,. \end{equation} If we can show that the right-hand side of \eqref{eq:bonf} converges a.s. to zero for some choice of $m = m(n)$, then the survival probability is sufficiently close to a sum of i.i.d. random variables.
The random variables $\P_\TT[\rtt\conn v \conn\TT_n]$ are i.i.d.\ with mean $p_c^m \P[\rtt\conn\TT_{n - m}]$, implying that the sum is close to $W_m \P[\rtt\conn\TT_{n - m}]$. Applying the annealed result Theorem \ref{thm:annealed} would then complete the proof after noting that $W_m \to W$ almost surely provided $m \to\infty$. The remainder of the proof follows this sketch. Set $m = \lceil n^{1/4} \rceil$; we then bound the second moment \begin{align*} \E&\left[ \left(\sum_{u,v \in\binom{\TT_m}{2}} \P_\TT[\rtt \conn(u,v) \conn\TT_n]\right)^2\right] \\ &= \E\left[\left(\sum_{u,v \in\binom{\TT_m}{2}}\P_\TT[\rtt\conn (u,v)]\P_\TT[u\conn\TT_n]\P_\TT[v \conn\TT_n] \right)^2 \right] \\ &= \E\left[ \E\left[\left(\sum_{u,v \in\binom{\TT _m}{2}}\P_\TT[\rtt\conn(u,v)]\P_\TT[u\conn\TT_n]\P_\TT[v \conn\TT_n] \right)^2 \, \Bigg| \, \T_m \right]\right] \\ &= \E\left[ \E\left[\left(\sum_{u,v \in\binom{\TT _m}{2}}\P_\TT[\rtt\conn(u,v)]\P_\TT[u\conn\TT_n]\P_\TT[v \conn\TT_n] \right)^2 \, \Bigg| \, \T_m \right]^{(1/2)\cdot 2}\right] \\ &\leq\E\left[ \left(\sum_{u,v \in\binom{\TT_m}{2}} \P _\TT[\rtt\conn(u,v)] \left\Vert\P_\TT[u \conn\TT_n] \P_\TT[v \conn\TT_n] \right\Vert_{L^2} \right)^2\right] && \text{by the triangle inequality} \\ &\leq\left(\frac{2}{1 - p_c}\right)^4 \E[\overline{W}^2]^2 \cdot(n-m)^{-4} \E\left[\binom{|Y_m|}{2}^2 \right] && \text{by Lemma }\ref{lem:prop-sandwich} \\ &\leq C m^2 n^{-4} && \text{by Theorem }\ref{thm:martingales}\,. \end{align*} Multiplying by $n$, the second moment of the right-hand side of \eqref{eq:bonf} is bounded above by $C m^2 n^{-2} = O(n^{-3/2})$ which is summable in $n$. By Chebyshev's Inequality together with the Borel-Cantelli Lemma, the right-hand side of \eqref{eq:bonf} converges to zero almost surely.
This implies \begin{equation} \label{eq:surv-prob-avg} n\P_\TT[\rtt\conn\TT_n] = n \sum_{v \in\TT_m} \P_\TT[\rtt \conn v \conn\TT_n] +o(1) = \sum_{v \in\TT_m} \frac{n \P_\TT[v \conn\TT_n]}{\mu^m }+ o(1)\,. \end{equation} We want to show that the right-hand side of \eqref{eq:surv-prob-avg} converges to $W \lambda$, so we first calculate \begin{align*} \Var\bigg[ \sum_{v \in\TT_m} &\frac{n \P_\TT[v \conn\TT_n] - n \P[\rtt\conn\TT_{n - m}]}{\mu^m} \bigg] \\ &= \E\left[\Var\left[ \sum_{v \in\TT_m} \frac{n \P_\TT[v \conn\TT_n] - n \P[\rtt\conn\TT_{n - m}]}{\mu^m} \, \bigg| \, \T_m\right] \right] \\ &= \E\left[\frac{1}{\mu^{2m}} \sum_{v \in\TT_m} \Var[n \P_\TT [v \conn\TT_n]] \right] \\ &\leq\frac{C}{\mu^m} \end{align*} where the last inequality is via Lemma \ref{lem:prop-sandwich}. Since this is summable in $n$, Chebyshev's Inequality and the Borel-Cantelli Lemma again imply \[ \sum_{v \in\TT_m} \frac{n \P_\TT[v \conn\TT_n]}{\mu^m} = \sum _{v \in\TT_m} \frac{n \P[\rtt\conn\TT_{n - m}]}{\mu^m} + o(1) = W_m (n \cdot\P[\rtt\conn\TT_{n - m}]) + o(1)\,. \] Taking $n \to\infty$ and utilizing Theorem \ref{thm:annealed} together with \eqref{eq:surv-prob-avg} completes the proof. $\Cox$ \subsection{Conditioned survival} \begin{thm} \label{thm:cond-surv} Suppose $\E[Z^p] < \infty$ for all $p \geq1$. Then the conditional variable $(|Y_n|/n \, | \, |Y_n| > 0)$ converges in distribution to an exponential random variable with mean $\lambda^{-1}$ for $\GW$-almost every $\TT$. \end{thm} By conditional random variable $(|Y_n|/n \, | \, |Y_n| > 0)$, we mean the random variable with law $\P_\TT[|Y_n|/n \in\cdot\, | \, |Y_n| > 0 ]$. \begin{proof} The proof is via the method of moments. In particular, since the moment generating function of an exponential random variable has a positive radius of convergence, its distribution is uniquely determined by its moments. 
Thus, any sequence of random variables with each moment converging to the moment of an exponential random variable must converge in distribution to that exponential random variable \cite[Theorems $30.1$ and $30.2$]{billingsley}. Let $X_n$ be a random variable with distribution $(|Y_n|/n \, | \, |Y_n| > 0)$. It is sufficient to show $\E_\TT[X_n^k] \to k! \lambda^{-k}$ $\GW$-a.s. since $k! \lambda^{-k}$ is the $k$th moment of an exponential random variable. Proposition \ref{pr:factorial-moments} and Theorem \ref{thm:surv-prob} imply \begin{align*} \E_\TT[X_n^k] &= \frac{\E_\TT[|Y_n|^k]}{n^k \P_\TT[|Y_n| > 0]} \\ &= \frac{\E_\TT[|Y_n|^k]}{n^{k-1}} \cdot\frac{1}{n\cdot\P_\TT [|Y_n| > 0]} \\ &\to k! W\lambda^{-(k-1)}\cdot\frac{1}{\lambda W} \\ &= k! \lambda^{-k}\,.\qedhere \end{align*} \end{proof} More can be said about the structure of the open percolation cluster of the root conditioned on $\rtt\conn\TT_n$, but we require two general, more or less standard lemmas first. \begin{lemma} \label{lem:condition} For any events $A$ and $B$ with $\P[B] \neq0$, \[ |\P[A \| B] - \P[A]| \leq\P[B^c]\,. \] \end{lemma} \begin{proof} Expand \[ \P[A] = \P[A \| B](1 - \P[B^c]) + \P[A \| B^c] \P[B^c] \] and solve \[ \P[A] - \P[A\| B] = (\P[A \| B^c] - \P[A \| B]) \P[B^c]\,. \] Taking absolute values and bounding $|\P[A \| B^c] - \P[A \| B]| \leq 1$ completes the proof. \end{proof} \begin{lemma} \label{lem:sum-tails} Let $X_k$ be i.i.d. centered random variables with $\E[|X_1|^p] < \infty$ for some $p \in[2,\infty)$. Then there exists a constant $C_p$ so that \[ \P\left[ \left|\sum_{k = 1}^n \frac{X_k}{n}\right| > t \right] \leq C_p t^{-p}n^{-p/2} + 2 \exp\left(-\frac{n t^2}{\Var [X_1]}\right) \] for all $t > 0$.
\end{lemma} \begin{proof} This is a straightforward application of \cite[Theorem $2.1$]{chesneau} which states that for independent random variables $M_i$ with $\E[M_i] = 0$ and $\E[|M_i|^p] < \infty$ for some $p > 2$ we have \[ \P\left[\left|\sum_{i=1}^n M_i\right| \geq t \right] \leq C_p t^{-p}\max\left(r_{n,p}(t), (r_{n,2}(t))^{p/2}\right) + \exp\left (- \frac{t^2}{16 b_n}\right) \] where $r_{n,u}(t) = \sum_{i = 1}^n \E(|M_i|^u \one_{|M_i| \geq 3b_n/t}),$ $b_n = \sum_{i = 1}^n \E[M_i^2]$ and $C_p$ is a positive constant. Setting $M_i = X_i/n$ completes the proof. \end{proof} For a fixed tree and $m < n$, define $B_m(n)$ to be the event that $\rtt$ is connected to $\TT_n$ through precisely one vertex at depth $m$. \begin{proposition} \label{pr:spread} Suppose $\E[Z^p] < \infty$ for all $p \geq1$. There exists an $N = N(\TT)$ with $N < \infty$ almost surely so that for all $n \geq N$, we have \begin{enumerate} \item[$(a)$] $\P_\TT[B_m(n)^c \| \rtt\conn\TT_n] < Cn^{-1/4}$ for $m = m(n) := \lceil\frac{\log n}{4 \log\mu}\rceil$ \item[$(b)$] $\max_{v \in\TT_n} \P_\TT[v \in Y_n \| \rtt\conn \TT_n] = O(n^{-1/8})$ \end{enumerate} for some constant $C > 0$. \end{proposition} \emph{Proof.} Note first that for the choice of $m$ as in part $(a)$, we have $ \frac{1}{2\mu} W n^{1/4} \leq Z_m \leq2 \mu W n^{1/4}$ for sufficiently large $n$. $(a)$ Using Theorem \ref{thm:surv-prob} and Lemma \ref {lem:prop-sandwich}, we bound \begin{align} \P_\TT[B_m(n)^c \| \rtt\conn\TT_n] &\leq\frac{\left(\sum_{v \in\TT_m} \P_\TT[v \conn\TT_n] \right)^2}{\P_\TT[\rtt\conn\TT _n]} \nonumber\\ &\leq\left(\frac{2}{1 - p_c}\right)^2 \left(\frac{\sum_{v \in \TT_m} \overline{W}(v)}{Z_m} \right)^2 \frac{Z_m^2}{(n - m)^2 \P _\TT[\rtt\conn\TT_n]} \nonumber\\ &\leq C \left(\frac{\sum_{v \in\TT_m} \overline{W}(v)}{Z_m} \right)^2 W n^{-1/2} \label{eq:B-n-complement-bound} \end{align} for $n$ sufficiently large, and some choice of $C > 0$ depending on the distribution of $Z$.
Applying Lemma \ref{lem:sum-tails} for $p = 9$ gives \[ \P\left[ \left|\frac{\sum_{v \in\TT_m} \overline{W}(v)}{Z_m} - \E[\overline{W}] \right| > n^{1/8} \right] \leq C_9 n^{-9/8} + 2 \exp\left(- n^{1/4} / \Var[\overline{W}] \right) \] where we use the trivial bound of $1 \leq Z_m.$ Since this is summable in $n$, the Borel-Cantelli Lemma implies that this event only occurs finitely often. In particular, this means that for sufficiently large $n$ \begin{equation} \label{eq:B_m-bound} \P_\TT[B_m(n)^c \, | \, \rtt\conn\TT_n] \leq C W n^{-1/4} \end{equation} for some constant $C > 0$ depending only on the distribution of $Z$. $(b)$ Applying Lemma \ref{lem:condition} to the measure $\P_\TT[ \cdot\| \rtt\conn\TT_n]$ and recalling $B_m(n) \subseteq\rtt\conn \TT_n$, \begin{align*} \Big|\P_\TT[v \in Y_n \, | \,\rtt\conn\TT_n] - \P_\TT[v \in Y_n \| B_m(n)] \Big| &\leq\P_\TT\left[B_m(n)^c \| \rtt\conn\TT_n \right] \end{align*} which is $O(n^{-1/4})$ by part $(a)$. It is thus sufficient to bound $\P_\TT[v \in Y_n \| B_m(n)]$. For a vertex $v \in\TT_n$ and $m < n$, let $P_m(v)$ be the ancestor of $v$ in $\TT_m$. We then have \[ \P_\TT[v \in Y_n \| B_m(n)] \leq\P_\TT[\rtt\conn P_m(v) \conn\TT _n \| B_m(n)]\,. 
\] Conditioned on $B_m(n)$, there exists a unique vertex $w \in\TT_m$ so that $\rtt\conn w \conn\TT_n$; this vertex $w$ is chosen with probability bounded above by \begin{align} \P_\TT[\rtt&\conn w \conn\TT_n \| B_m(n)] \nonumber\\ &\leq\frac{\P_\TT[\rtt\conn w \conn\TT_n]}{\sum_{u \in\TT_m} \P_\TT[\rtt\conn u \conn\TT_n] - \sum_{(u_1,u_2) \in\binom{\TT _m}{2}} \P_\TT[\rtt\conn(u_1,u_2) \conn\TT_n] } \nonumber\\ &\leq\frac{\P_\TT[w \conn\TT_n]}{ \sum_{u \in\TT_m} \P_\TT[u \conn\TT_n] - \left(\sum_{u \in\TT_m}\P_\TT[u \conn\TT_n] \right)^2 } \nonumber\\ &\leq\frac{c (n - m)^{-1}\overline{W}(w)}{(1 + o(1))\sum_{u \in\TT _m} \P_\TT[u \conn\TT_n]} \label{eq:point-B_m-bound} \end{align} where the latter inequality is by applying the bound of Lemma \ref {lem:prop-sandwich} to the numerator and arguing as in \eqref {eq:B-n-complement-bound} to almost-surely bound the denominator. In particular, the $o(1)$ term is uniform in $w$. We want to take the maximum over all possible $w \in\TT_m$, and note that for any $\alpha> 0$, \begin{align*} \P\left[\max_{w \in\TT_m} \overline{W}(w) > n^{\alpha}\right] &=\E\left[\P\left[\max_{w \in\TT_m} \overline{W}(w) > n^{\alpha }\, \big|\, \T_m \right]\right] \\ &\leq \E[Z_m] \P[\overline{W}> n^\alpha] \\ &\leq\mu^m \cdot\frac{\E[\overline{W}^{2/\alpha}]}{n^{2}} \\ &= O(n^{-7/4}) \end{align*} which is summable, implying that for any fixed $\alpha\!>\!0$, we eventually have $\max_{w \in\TT_m} \overline{W}(w)\!\leq\!n^{\alpha }.$ It merely remains to bound the denominator of \eqref{eq:point-B_m-bound}. Note that by Proposition \ref{pr:factorial-moments}, the lower bound given in Lemma \ref{lem:prop-sandwich} converges almost surely to $\frac{W \lambda}{2}$ as $n \to\infty$. In particular, this means that if we set \[ p_n := \P\left[\frac{W \lambda}{4} \leq n\P_\TT[|Y_n| > 0] \right], \] then $p_n \to1$. 
By Hoeffding's inequality together with Borel-Cantelli, the number of vertices $u \in\TT_m$ for which we have \[ \frac{W(u) \lambda}{4} \leq(n - m) \P_\TT[u \conn\TT_n] \] is almost surely at least $1/2$ of $\TT_m$ for $n$ sufficiently large. This gives \[ (n-m)\sum_{u \in\TT_m} \P_\TT[u \conn\TT_n] \geq\frac{\lambda}{4}\sum_{u \in\TT_m} W(u) \one_{W(u)\lambda/4 \leq(n - m)\P_\TT[u \conn\TT_n] } = \Omega(Z_m) \,. \] Recalling that $Z_m = \Theta(W n^{1/4})$ and plugging the above into \eqref{eq:point-B_m-bound} completes the proof. $\Cox$ \subsection{Incipient infinite cluster} As in \cite{kesten-IIC}, we sketch a proof of the construction of the IIC. For an infinite tree $T$, define $T[n]$ to be the finite subtree of $T$ obtained by restricting to vertices of depth at most $n$. \begin{lemma}\label{lem:constr} Suppose $\E[Z^4] < \infty$; for a subtree $t$ of $\TT[n]$, we have \[ \lim_{M \to\infty} \P_\TT[\mathcal{C}_{p_c}[n] = t \| \rtt\conn \TT_M ] = \frac{\sum_{v \in t_n}W(v)}{W}\P_\TT[\mathcal{C}_{p_c}[n] = t ] \] almost surely for each tree $t$. The random measure $\mu_\TT$ on subtrees of $\TT$ with marginals \[ \mu_\TT\big|_{\T_n}[t] := \frac{\sum_{v \in t_n}W(v)}{W}\P_\TT[\mathcal{C}_{p_c}[n] = t ] \] has a unique extension to a probability measure on rooted infinite trees $\GW$ almost surely. The IIC is thus the random subtree of $\TT$ with law $\mu_\TT$. \end{lemma} \begin{proof} Since each $\TT$ has countably many vertices, Theorem \ref{thm:surv-prob} assures that $\lim_{n \to \infty} n\P_\TT[v \conn\TT_{n + |v|}] = \lambda W(v)$ for each vertex $v$ of $\TT$ a.s.
When all of these limits hold, we then have \begin{align*} \P_\TT[\mathcal{C}_{p_c}[n] = t \| \rtt\conn\TT_M] &= \frac{ \P_\TT[\mathcal{C}_{p_c}[n] = t, \rtt\conn\TT_M]}{\P_\TT[\rtt\conn\TT_M]} \\ &= \P_\TT[\mathcal{C}_{p_c}[n] = t]\left(\frac{\sum_{v \in t_n} \P_\TT[v \conn\TT_M] + O(|t_n|^2 M^{-2})}{\P_\TT[\rtt\conn\TT_M]} \right) \\ &\xrightarrow{M\to\infty} \P_\TT[\mathcal{C}_{p_c}[n] = t]\frac{\sum_{v \in t_n} W(v)}{W} \end{align*} for each $t$. To show that the measure $\mu_\TT$ can be extended, we note that its marginals are consistent, as can be seen via the recurrence $W(v) = p_c \sum_{w} W(w)$ where the sum is over all children of $v$. Applying the Kolmogorov extension theorem \cite[Theorem $2.1.14$]{durrett4} completes the proof. \end{proof} It is easy to show that the law of the IIC can in fact be generated by conditioning on $p > p_c$ percolation to survive and then taking $p \to p_c^+$: \begin{corollary} For a subtree $t$ of $\TT[n]$, we have \[ \lim_{p \to p_c^+} \P_\TT[\mathcal{C}_p[n] = t \| |\mathcal{C}_p| = \infty] = \frac{\sum_{v \in t_n}W(v) }{W} \P_\TT[\mathcal{C}_{p_c}[n] = t] \] almost surely. \end{corollary} \begin{proof} As shown in \cite{mpr-quenched}, we have \[ \lim_{p \to p_c^+} \frac{\P_\TT[|\mathcal{C}_p| = \infty]}{p - p_c} = K W \] almost surely for some constant $K$ depending only on the offspring distribution. The corollary follows from Bayes' theorem in the same manner as Lemma \ref{lem:constr}. \end{proof} In light of Lemma \ref{lem:constr}, it is natural to guess that the number of vertices in the IIC at depth $n$ will asymptotically be the size-biased version of $(|Y_n| \| \rtt\conn\TT_n )$: the sum $\sum_{v \in t_n} W(v)$ will be relatively close to $|t_n| W$, therefore biasing each choice of $t$ by a factor of $|t_n|$. In order to make this argument rigorous, we will invoke Proposition \ref{pr:spread}, which shows that no single vertex has a high conditional probability of surviving.
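To see where the limiting density in Theorem \ref{thm:IIC} below comes from, note that size-biasing the exponential density $\lambda e^{-\lambda x}$ of Theorem \ref{thm:cond-surv} by $x$ gives
\[
\frac{x\,\lambda e^{-\lambda x}}{\int_0^\infty u\, \lambda e^{-\lambda u}\,du} = \lambda x \cdot \lambda e^{-\lambda x} = \lambda^2 x e^{-\lambda x}\,,
\]
the $\operatorname{Gamma}(2,\lambda)$ density.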
Throughout, we use the notation $n(a,b) = (na,nb)$ for $a < b$ and $\mathbf{C}$ to denote the IIC. \begin{thm} \label{thm:IIC} Suppose $\E[Z^p] < \infty$ for each $1 \leq p < \infty$. Then for each $0 \leq a < b$, \[ \lim_{n \to\infty} \P_\TT[\mathbf{C}_n \in n(a,b)] = \int_{a}^b \lambda^2 x e^{-\lambda x} \,dx \] almost surely. In fact, $\mathbf{C}_n / n$ converges in distribution to the random variable with density $\lambda^2 x e^{-\lambda x}$ for $\GW$-almost every $\TT$. \end{thm} \begin{proof} To see that convergence in distribution follows from the almost sure limit, apply the almost sure limit to each interval $(a,b)$ with $a,b \in\mathbb{Q}$; since there are only countably many such intervals, there exists a set of full $\GW$ measure on which these limits simultaneously exist for each rational interval, thereby implying convergence in distribution \cite[Theorem 3.2.5]{durrett4}. We have \[ \P_\TT[\mathbf{C}_n \in n(a,b)] = \lim_{M \to\infty} \P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_{n + M}]\,. \] For a fixed $n$, write \begin{align} \P_\TT[&|Y_n| \in n(a,b) \| \rtt\conn\TT_{n+M}] \nonumber\\ &= \frac{\P_\TT[\rtt\conn\TT_{n + M} \| |Y_n| \in n(a,b)]\cdot\P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_n] \cdot\P_\TT[\rtt\conn\TT_n]}{\P_\TT[\rtt\conn\TT_{n + M}]}\,. \label{eq:Y_n-size-cond} \end{align} We then calculate \begin{align*} \P_\TT&[\rtt\conn\TT_{n + M} \| |Y_n| \in n(a,b)]\\ &= \sum_{S} \P_\TT[Y_n = S \| |Y_n| \in n(a,b) ] \P_\TT[S \conn\TT_{n + M}] \\ &= \sum_{S} \P_\TT[Y_n = S \| |Y_n| \in n(a,b) ] \sum_{v \in S} \P_\TT[v \conn\TT_{n + M}] + O(M^{-2}) \\ &= \sum_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] \P_\TT[v \conn\TT_{n + M}] + O(M^{-2})\,.
\end{align*} For a fixed $n$, we take $M \to\infty$ and utilize Theorem \ref {thm:surv-prob} to get \begin{equation} \label{eq:p-IIC} \lim_{M\to\infty} \frac{\P_\TT[\rtt\conn\TT_{n + M} \| |Y_n| \in n(a,b)]}{\P_\TT[\rtt\conn\TT_{n + M}]} = \frac{1}{W} \sum_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)]\cdot W(v)\,. \end{equation} We plug this into \eqref{eq:Y_n-size-cond} to get the limit \begin{align*} \lim_{M \to\infty} & \P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT _{n+M}] \nonumber\\ &= \left(\sum_{v \in\TT_n} \frac{ \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n} \cdot W(v)\right)\nonumber\\ &\quad\times \left(\P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_n]\right) \left( \frac{n\cdot\P_\TT[\rtt\conn \TT_n]}{W} \right)\,. \end{align*} Theorems \ref{thm:surv-prob} and \ref{thm:cond-surv} show that the latter two factors above have almost sure limits $\int_a^b \lambda e^{-\lambda x} \,dx$ and $\lambda$ as $n \to\infty$, leaving only the first term. We note that \begin{align*} \E\left[\sum_{v \in\TT_n} \frac{ \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n} \cdot W(v) \,\Bigg|\, \T_n \right] &= \sum_{v \in\TT _n}\frac{ \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n} \\ &= \E_\TT\left[\frac{|Y_n|}{n} \, \bigg| \, |Y_n| \in n(a,b) \right] \\ &= \frac{\E_\TT\left[\frac{|Y_n|}{n}\cdot\one_{|Y_n|/n \in (a,b)} \| \rtt\conn\TT_n \right]}{\P_\TT\left[\frac{|Y_n|}{n} \in(a,b) \| \rtt\conn\TT_n \right]}\\ &\to\frac{\int_{a}^{b} \lambda x e^{-\lambda x} \,dx}{\int_{a}^{b} \lambda e^{-\lambda x}\,dx} \end{align*} where the limit is by the continuous mapping theorem \cite[Theorem 3.2.4]{durrett4} and Theorem \ref{thm:cond-surv}. It's thus sufficient to show that \begin{equation}\label{eq:need} \left|\sum_{v \in\TT_n} \frac{ \P _\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n} \cdot(W(v) - 1) \right| \xrightarrow{n \to\infty} 0 \end{equation} almost surely. Our strategy is to use a conditional version of the Borel-Cantelli Lemma together with Chebyshev's inequality. 
We bound the conditional variance \begin{align} \Var\Bigg[\sum_{v \in\TT_n}&\frac{ \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n} \cdot(W(v) - 1) \, \bigg| \, \T_n \Bigg] \nonumber \\ &= \Var(W)\sum_{v \in\TT_n} \frac{\P_\TT[v \in Y_n \| |Y_n| \in n(a,b)]^2 }{n^2} \nonumber\\ &\leq\Var(W) \max_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] \sum_{v \in\TT_n} \frac{\P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] }{n^2} \nonumber\\ &\leq\Var(W) \max_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] \cdot\frac{\E_\TT[|Y_n| \| |Y_n| \in n(a,b)] }{n^2} \nonumber\\ & \leq\Var(W) \cdot\frac{b}{n} \cdot\max_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] \label{eq:var-bound} \,. \end{align} We want to show that this is summable, and thus look to bound the $\max$ term. Applying Lemma \ref{lem:condition} to the measure $\P_\TT[\cdot \| |Y_n| \in n(a,b)]$ gives \begin{align} &\left|\P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] - \P_\TT[v \in Y_n \| |Y_n| \in n(a,b), B_m(n)] \right|\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\leq\P_\TT[B_m(n)^c \| |Y_n| \in n(a,b)] \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\leq\frac{\P_\TT[B_m(n)^c \| \rtt\conn\TT_n]}{\P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_n]} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad= O(n^{-1/4}) \label{eq:point-difference} \end{align} by Proposition \ref{pr:spread} and Theorem \ref{thm:cond-surv}. Similarly, \begin{align} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b), B_m(n)] &= \frac{\P_\TT[v \in Y_n, |Y_n| \in n(a,b), B_m(n)]}{\P_\TT[|Y_n| \in n(a,b), B_m(n)]} \nonumber\\ &\leq\frac{ \P_\TT[v \in Y_n,B_m(n)]}{\P_\TT[|Y_n| \in n(a,b), B_m(n)]} \nonumber\\ &= \frac{\P_\TT[v \in Y_n \| B_m(n)]}{\P_\TT[|Y_n| \in n(a,b) \| B_m(n)]} \label{eq:point-up}\,. \end{align} Using Lemma \ref{lem:condition} once again expands the denominator \[ \Big| \P_\TT[|Y_n| \in n(a,b) \| B_m(n)] - \P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_n]\Big| \leq\P_\TT[B_m(n)^c \| \rtt\conn\TT_n] \leq C n^{-1/4} \] by Proposition \ref{pr:spread}.
Plugging into \eqref{eq:point-up} gives the upper bound \begin{equation} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b), B_m(n)] \leq\frac{\P_\TT[v \in Y_n \| B_m(n)]}{\P_\TT[|Y_n| \in n(a,b) \| \rtt\conn\TT_n] - C n^{-1/4}} \label{eq:point-final}\,. \end{equation} Combining \eqref{eq:point-difference}, \eqref{eq:point-final} and Proposition \ref{pr:spread} bounds \[ \max_{v \in\TT_n} \P_\TT[v \in Y_n \| |Y_n| \in n(a,b)] = O(n^{-1/8})\,. \] Thus, by \eqref{eq:var-bound}, the conditional variance is almost surely summable. For any fixed $\delta > 0$, Chebyshev's inequality then implies \[ \P\left[\left|\sum_{v \in\TT_n} \frac{\P_\TT[v \in Y_n \| |Y_n| \in n(a,b)]}{n}\cdot(W(v) - 1) \right| > \delta\,\Bigg|\, \T_n\right] \] is summable almost surely. Applying a conditional Borel-Cantelli Lemma (e.g. \cite{chen-bc}) shows that \eqref{eq:need} holds almost surely. \end{proof} \section*{Acknowledgements} The author would like to thank Josh Rosenberg for helpful conversations. \bibliographystyle{alpha}
\section{Introduction} \IEEEPARstart{S}{ecurity} assessment is a fundamental function for both short-term and long-term power system operation. Operators need to eliminate any possibility of system failure on a sub-hourly basis, and need to guarantee the security of supply in the long term by having the required infrastructure and operating practices in place. All these functions require the assessment of thousands of possibilities with respect to load patterns, system topology, and power generation, along with the associated uncertainty, which plays an increasingly important role with the growing integration of renewable energy sources (RES). Millions of possible operating points violate operating constraints and lead to an insecure system, while millions satisfy all limitations and ensure safe operation. For systems exceeding the size of a few buses it is impossible to assess all possible operating points, as the problem complexity explodes. Therefore, computationally efficient methods are necessary to perform a fast and accurate dynamic security assessment.

Numerous approaches exist in the literature proposing methods to assess or predict different types of instability, e.g. transient, small-signal, or voltage instability. Recently, with the abundance of data from sensors, such as smart meters and phasor measurement units, machine learning approaches have emerged showing promising results in tackling this problem \cite{Konstantelos2016, Preece2016, IREP2017, Lejla_PSCC}. Due to the high reliability of power system operation, however, historical data are not sufficient to train such techniques, as information related to the security boundary or insecure regions is often missing. For that, simulation data are necessary.
This paper deals with a fundamental problem that most dynamic security assessment (DSA) methods are confronted with before any algorithm can be implemented: the generation of the dataset required for the development of dynamic security classification approaches. With this work we aim to propose a modular and scalable algorithm that can map the secure and insecure regions, and identify the security boundaries of large systems in a computationally efficient manner. There are two main challenges with the generation of such a database. First, the problem size: it is computationally impossible to assess all possible operating points for systems exceeding a few tens of buses. Second, the information quality: dynamic security assessment is a non-convex and highly nonlinear problem. Generating an information-rich dataset that is not too large leads to algorithms that can be trained faster and achieve higher prediction accuracy. Efforts to develop a systematic and computationally efficient methodology to generate the required database have been limited to date. In \cite{Wehenkel1994, Hatziargyriou1994} re-sampling techniques based on post-rule validation were used to enrich the database with samples close to the boundary. Genc et al. \cite{Genc2010} propose to enrich the database iteratively with additional points close to the security boundary, adding new operating points at half the distance between the already existing operating points at the stability boundary. In \cite{Liu2013b,Liu2014,Krishnan2011,Preece2016,Hamon2016}, the authors propose to use importance sampling methods based on the Monte-Carlo variance reduction (MCVR) technique, introducing a bias in the sampling process such that the representation of rare events increases in the assessment phase. In \cite{Sun2016}, the authors propose a composite modelling approach using high-dimensional historical data.
This work leverages advancements in several different fields to propose a highly scalable, modular, and computationally efficient method. Using properties derived from convex relaxation techniques applied on power systems, we drastically reduce the search space. Applying complex network theory approaches, we identify the most critical contingencies boosting the efficiency of our search algorithms. Based on steepest descent methods, we design the exploration algorithm in a highly parallelizable fashion, and exploit parallel computing to reduce computation time. Compared with existing approaches, our method achieves a speed-up of 10 to 20 times, requiring less than 10\% of the time other approaches need to achieve the same results. The contributions of this work are the following: \begin{itemize} \item We propose a computationally efficient and highly scalable method to generate the required datasets for the training or testing of dynamic security assessment methods. Our approach requires less than 10\% of the time existing methods need for results of similar quality. \item Our method is modular and can accommodate several types of security boundaries, including transient stability and voltage stability. In this paper, we demonstrate our approach considering the combination of N-k security and small signal stability. \item Besides the database generation, the methodology we propose can be easily employed in real-time operation, where computationally efficient techniques are sought to explore the security region in case of contingencies around the current operating point. \item In case studies we demonstrate the importance of a high quality database to achieve the best possible results in a data-driven security assessment. Given equal computation time, training machine learning algorithms with the database generated by our method clearly outperforms other approaches. 
\end{itemize} The remainder of this paper is organized as follows: First, a set of terms are defined in Section~\ref{sec:definitions}. In Section \ref{sec:Challenges}, we describe the challenges of the database generation for data-driven security analysis. Section \ref{sec:Methodology} provides an overview of the methodology, which we detail in the two subsequent sections. Section~\ref{sec:reducingsearchspace} describes how we reduce the search space, while Section~\ref{sec:directedwalks} describes the highly parallelizable exploration of the remaining space. We demonstrate our methods in Section~\ref{sec:Case2}. Section~\ref{sec:conclusion} concludes the paper. \section{Definitions} \label{sec:definitions} \subsubsection{Security boundary} the boundary $\gamma$ dividing the secure from the insecure region; (a) can correspond to a specific stability boundary, e.g. small-signal stability or voltage stability, (b) can represent a specific stability margin, i.e. all operating points not satisfying the stability margin belong to the insecure region, (c) can be a combination of security indices, e.g. the intersection of operating points that are both N-1 secure and small-signal stable. Note that our proposed method can apply to any security boundary the user needs to consider. \subsubsection{HIC -- High Information Content} the set $\Omega$ of operating points in the vicinity of the security boundary $\gamma$, see \eqref{eq:high_inf} \cite{Krishnan2011}. This is the search space of high interest for our methods as it separates the secure from insecure regions. \subsubsection{DW -- Directed Walk} we use this term to denote the steepest descent path our algorithm follows, starting from a given initialization point, in order to arrive close to the security boundary. \section{Challenges of the Database Generation} \label{sec:Challenges} Determining the secure region of a power system is an NP-hard problem. 
In an ideal situation, to accurately determine the non-convex secure region we would need to discretize the whole space of operating points with as small an interval as possible, and perform a security assessment for each of those points. For a given system topology, this set consists primarily of all possible combinations of generator and load setpoints (note that if the system includes tap-changing transformers or other controllable devices, the number of credible points grows geometrically). Thus, in a classical brute force approach, the number of points to be considered is given by: \begin{align} \left|\Psi \right| = \Lambda \cdot \prod_{i=1}^{N_G-1} \Big( \frac{P_{i}^{max}-P_{i}^{min}}{\alpha}+1 \Big), \label{eq:numberOP} \end{align} where $N_G$ is the number of generators $i$, $P_{i}^{max}$ and $P_{i}^{min}$ are their maximum and minimum capacities, $\alpha$ is the chosen discretization interval between the generation setpoints, and $\Lambda$ represents the number of different load profiles. For example, for the IEEE 14 bus system \cite{Zimmerman2011} with 5 generators and a discretization interval of $\alpha = \unit[1]{MW}$, a classical brute force approach requires $\left|\Psi \right| \approx 2.5 \cdot 10^6$ operating points to be assessed for a \emph{single load profile}. It can easily be seen that the security assessment of large systems can quickly become an intractable problem. For example, in the NESTA 162 bus system \cite{Coffrin2014}, a brute force approach would require the analysis of $7\cdot 10^{29}$ points. It becomes clear that efficient database generation is one of the major challenges for the implementation of data-driven tools in power system security analysis. In this effort, we need to balance the trade-off between two partially conflicting goals: keep the database as small as possible to minimize computational effort, but contain enough data to determine the security boundary as accurately as possible.
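As a quick illustration of \eqref{eq:numberOP}, the snippet below reproduces the order of magnitude quoted above for the IEEE 14-bus case. The generator limits are hypothetical placeholders (four non-slack units with a 39 MW dispatch range each), not the actual IEEE 14-bus data:

```python
# Hypothetical generator data -- NOT the actual IEEE 14-bus limits.
# Four non-slack units with a 39 MW range each and alpha = 1 MW
# reproduce the ~2.5e6 figure quoted in the text.
p_min = [0.0, 0.0, 0.0, 0.0]   # MW, the N_G - 1 non-slack generators
p_max = [39.0, 39.0, 39.0, 39.0]
alpha = 1.0                    # discretization interval (MW)
n_load_profiles = 1            # Lambda in the formula

size = n_load_profiles
for lo, hi in zip(p_min, p_max):
    size *= int((hi - lo) / alpha) + 1

print(size)  # 2560000, i.e. ~2.5e6 operating points for one load profile
```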
\begin{figure}[!b] \vspace{-3ex} \centering \includegraphics[width=2.3in]{fig/fig1.pdf} \vspace{-2ex} \caption{Scatter plot of all possible operating points of two generators for a certain load profile. Operating points fulfilling the stability margin and outside the high information content (HIC) region ($\gamma_k > 3.25\%$) are marked in blue, those not fulfilling the stability margin and outside HIC ($\gamma_k < 2.75\%$) are marked in yellow. Operating points located in the HIC region ($2.75\% < \gamma_k < 3.25\%$) are marked in grey.} \label{fig_lineflow} \vspace{-3ex} \end{figure} To better illustrate our approach, in Fig.~\ref{fig_lineflow} we show all possible operating points of two generators for a certain load profile in a system. Focusing on small-signal stability here, we define the security boundary $\gamma$ as a certain level of minimum damping ratio, which corresponds to our stability margin. All secure operating points, with a damping ratio above $\gamma$, are plotted in blue, while operating points that do not fulfill the stability margin are plotted in yellow. From Fig.~\ref{fig_lineflow}, it is obvious that if we are able to assess all points close to $\gamma$, it is easy to classify the rest of the points. In this way, the size of the required database can be significantly reduced. In the remainder of this paper, we will refer to the set of operating points in the vicinity of $\gamma$ as the set of \emph{high information content (HIC)}, defined as follows: \begin{align} \label{eq:high_inf} \Omega =\{OP_k\in \Psi\mid \gamma-\mu < \gamma_k < \gamma +\mu\}, \end{align} with $\gamma_k$ denoting the value of the chosen stability margin for operating point $OP_k$ and $\mu$ representing an appropriately small value such that $\left|\Omega\right|$ is large enough to describe the desired security boundary with sufficient accuracy. The value of $\mu$ depends on the chosen discretization interval in the vicinity of the boundary. In Fig.
\ref{fig_lineflow}, the HIC set, i.e. all points $OP_k\in \Omega$, is visualized as the grey area surrounding $\gamma$. In this small example, we were able to assess all possible operating points and accurately determine the HIC area. For large systems this is obviously not possible. As a result, in the general case, the main challenge is to find the points $OP_k$ which belong to the HIC set $\Omega$. To put the difference between $\left|\Psi \right|$ and $\left|\Omega \right|$ in perspective: for the small-signal stability analysis of the IEEE 14 bus system, the classical brute force approach requires the analysis of $\left|\Psi \right| \approx \unit[2.5 \cdot 10^6]{}$ operating points (OPs) for a single load profile. By assuming a required damping ratio of $\gamma = \unit[3]{\%}$, and $\mu = \unit[0.25]{\%}$, the HIC set, defined as $\Omega =\{OP_k\in \Psi\mid \unit[2.75]{\%} < \gamma_k< \unit[3.25]{\%}\}$, reduces the analysis to only $1457$ points (here, $\gamma_k$ refers to the damping ratio of the lowest damped eigenvalue of $OP_k$). In other words, by assessing only $\unit[0.06]{\%}$ of all data points, we can accurately determine the whole secure region of this example. This small number of operating points that need to be assessed has actually been an obstacle for one of the most popular approaches in previous works: importance sampling. Importance sampling re-orients the sampling process towards a desired region of interest in the sampling space, while also preserving the probability distribution. Thus, it requires that the initial sampling points include sufficient knowledge about the desired region of interest. However, the smaller the proportion of the region of interest is with respect to the entire multi-dimensional space, the larger the initial sample size needs to be to include a sufficient number of points within the desired region of interest.
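The membership test in \eqref{eq:high_inf} reduces to a simple band check on the stability margin; a minimal sketch with made-up damping ratios (in practice each $\gamma_k$ comes from small-signal analysis of the operating point):

```python
GAMMA = 3.0   # required damping ratio, the security boundary (%)
MU = 0.25     # half-width of the HIC band (%)

def classify(gamma_k: float) -> str:
    """Label an operating point by its stability margin gamma_k."""
    if gamma_k >= GAMMA + MU:
        return "secure"       # fulfils the margin, outside the HIC band
    if gamma_k <= GAMMA - MU:
        return "insecure"     # violates the margin, outside the HIC band
    return "HIC"              # in the vicinity of the boundary gamma

# Made-up damping ratios (%) of five operating points:
labels = [classify(g) for g in [4.1, 2.9, 3.2, 1.5, 3.3]]
print(labels)  # ['secure', 'HIC', 'HIC', 'insecure', 'secure']
```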
Therefore, the use of expert knowledge \cite{Wehenkel1994, Hatziargyriou1994, Krishnan2011}, regression models \cite{Preece2016}, or linear sensitivities \cite{Krishnan2011} has been proposed to determine the desired region and reduce the search space. However, since this search space reduction is based on a limited initial sample size, it either risks missing regions of interest that are not represented in the initial sample, or requires a large initial sample, which increases the computational burden. Furthermore, previous works \cite{Krishnan2011,Genc2010} often use expert knowledge to reduce the burden of the N-1 security assessment to a few critical contingencies. Our proposed method does not require expert knowledge and avoids potential biases by not separating the knowledge extraction from the sampling procedure. Still, if expert knowledge of e.g. a preferred search region or the most critical contingencies is available, our method can easily integrate it and benefit from it. \section{Methodology} \label{sec:Methodology} We divide the proposed methodology into two main parts. First, the search space reduction: the elimination of a large number of infeasible (and insecure) operating points. Second, the directed walks: a steepest-descent-based algorithm to explore the search space and accurately determine the security boundary. During the search space reduction, we exploit properties of convex relaxation techniques to discard large infeasible regions. In order to reduce the problem complexity, we employ complex network theory approaches which allow us to identify the most critical contingencies. Finally, as the directed walks are designed as a highly parallelizable algorithm, we use parallel computing capabilities to drastically reduce the computation time. The different parts are described in detail in the following sections.
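To convey the directed-walk idea before the details, here is a toy, purely illustrative sketch: a variable-step steepest-descent walk on a made-up linear margin function, stopping once the walk lands within a tolerance of the boundary $\gamma$. The actual algorithm of Section~\ref{sec:directedwalks} operates on generator setpoints with real stability indices:

```python
GAMMA = 3.0  # target stability margin (%), the security boundary

def margin(x: float, y: float) -> float:
    """Hypothetical smooth stability margin of an operating point (x, y)."""
    return 10.0 - 0.5 * x - 0.3 * y

def directed_walk(x, y, tol=1e-3, max_iter=1000):
    """Walk toward the boundary margin(x, y) == GAMMA via steepest descent."""
    for _ in range(max_iter):
        d = margin(x, y) - GAMMA
        if abs(d) < tol:              # close enough to the boundary: stop
            break
        h = 1e-6                      # finite-difference gradient of the margin
        gx = (margin(x + h, y) - margin(x, y)) / h
        gy = (margin(x, y + h) - margin(x, y)) / h
        norm = (gx * gx + gy * gy) ** 0.5
        step = abs(d) / norm          # variable step: large far away, small near gamma
        sign = 1.0 if d > 0 else -1.0
        x -= sign * step * gx / norm  # move so the margin approaches GAMMA
        y -= sign * step * gy / norm
    return x, y

x_b, y_b = directed_walk(0.0, 0.0)
```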
Our algorithm starts by uniformly sampling the search space using the Latin Hypercube Sampling (LHS) method to generate initialization points for the subsequent steps (Section~\ref{seq:initPoints}). Following that, we propose a convex grid pruning algorithm, which also considers contingency constraints, to discard infeasible regions and reduce the search space (Section~\ref{sec:GridPruning}). In Section~\ref{seq:VulnerableCont.}, we leverage complex network theory approaches to identify the most critical contingencies. The identified contingency set is crucial both for the grid pruning algorithm, and for subsequent steps within the Directed Walks. After resampling the now significantly reduced search space, we use these samples as initialization points for the Directed Walk (DW) algorithm, described in Section~\ref{sec:directedwalks}. In order to achieve an efficient database generation, the goal of the algorithm is to traverse as fast as possible large parts of the feasible (or infeasible) region, while carrying out a high number of evaluations inside the HIC region. This allows the algorithm to focus on the most relevant areas of the search space in order to accurately determine the security boundary. The DWs are highly parallelizable, use a variable step size depending on their distance from the security boundary, and follow the direction of the steepest descent. Defining the secure region as the N-2 secure \emph{and} small signal stable region in our case studies, we demonstrate how our method outperforms existing importance sampling approaches, achieving a 10 to 20 times speed-up. \section{Reducing the Search Space} \label{sec:reducingsearchspace} \subsection{Choice of Initialization Points}\label{seq:initPoints} An initial set of operating points is necessary to start our computation procedure. Using the Latin Hypercube Sampling (LHS), we sample the space of operating points to select the initialization points $\eta$. 
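A minimal pure-Python sketch of the sampling step conveys the idea: one sample per stratum in each dimension, with stratum assignments shuffled independently per dimension. The bounds and counts below are illustrative, and the maximin refinement discussed next is omitted:

```python
import random

def latin_hypercube(n: int, bounds, seed: int = 0):
    """Draw n points; each dimension contributes exactly one point per
    stratum, with stratum order shuffled independently per dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    pts = [[0.0] * dims for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)                 # random pairing of strata across dims
        width = (hi - lo) / n
        for k in range(n):
            # uniform position inside the assigned stratum
            pts[k][d] = lo + (strata[k] + rng.random()) * width
    return pts

# Illustrative: 10 initialization points for 4 generator setpoints in [0, 100] MW.
eta_1 = latin_hypercube(10, [(0.0, 100.0)] * 4)
```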
Besides the initialization points at the first stage, $\eta_1$, our method requires the finer selection of initialization samples, $\eta_2$ and $\eta_3$, during the reduction of the search space in two subsequent stages, as will be explained later. We use the same sampling procedure at all stages. Latin hypercube sampling aims to achieve a uniform distribution of samples across the whole space. Dividing each dimension into subsections, LHS selects only one sample from each subsection of each parameter, while, at the same time, it maximizes the minimum distance between the samples \cite{Preece2016}. An even distribution of the initialization points over the multi-dimensional space is of high importance in order to increase the probability that our method does not miss any infeasible region or any HIC region. The number of initialization points $|\eta|$ is a tuning factor which depends on the specific system under investigation. In general, quite sparse discretization intervals are used for the search space reduction procedures in the first two stages of our approach, $\eta_{1-2}$, while a denser discretization interval is used for the directed walks at a later stage, $\eta_{3}$. Suitable values are discussed in the case study. While LHS allows an even sampling, it is computationally very expensive for high-dimensional spaces and large numbers of initialization points. Thus, for larger systems there is a trade-off between initial sampling and computation time that needs to be considered. \subsection{Grid Pruning Algorithm For Search Space Reduction}\label{sec:GridPruning} Given the $\eta_{1}$ initialization points from the first stage, the aim of this stage is to reduce the search space by eliminating infeasible operating regions. For that, we use a grid pruning algorithm which relies on the concept of convex relaxations.
The algorithm is inspired by \cite{Molzahn2016}, where it was developed to compute the feasible space of small AC optimal power flow (OPF) problems. In this work, we introduce a grid pruning algorithm which determines infeasible operating regions considering not only the intact system but also all N-1 contingencies. Convex relaxations have recently been proposed to relax the non-convex AC-OPF to a semidefinite program \cite{Lavaei2012}. A corollary of that method is that the resulting semidefinite relaxation provides an infeasibility certificate: if an initialization point is infeasible for the semidefinite relaxation, it is guaranteed to be infeasible for the non-convex AC-OPF problem. This means that for that initialization point there does not exist a power flow solution which complies with all operational constraints (i.e., voltage limits and active/reactive power limits). A feasible power flow solution is a basic requirement for a security assessment using any stability metric. This property of the semidefinite relaxation is used in our grid pruning algorithm. The semidefinite relaxation introduces the matrix variable $W$ to represent the products of the real and imaginary parts of the complex bus voltages (for more details the interested reader is referred to \cite{Lavaei2012,Molzahn2013}). Defining our notation, the investigated power grid consists of the set of buses $\mathcal{N}$, where $\mathcal{G} \subseteq \mathcal{N}$ is the set of generator buses. We consider a set of line outages $\mathcal{C}$, where the first entry $\{0\}$ of set $\mathcal{C}$ corresponds to the intact system state.
The following auxiliary variables are introduced for each bus $i \in \mathcal{N}$ and outage $c \in \mathcal{C}$: \begin{align} Y^c_i &:= e_i e_i^T Y^c & \label{aux1} \\ \textbf{Y}^c_i & := \dfrac{1}{2} \begin{bmatrix} \Re \{Y^c_i + (Y^c_i)^T\} & \Im \{ (Y^c_i)^T - Y^c_i \} \\ \Im \{ Y^c_i - (Y^c_i)^T\} & \Re \{Y^c_i + (Y^c_i)^T\} \end{bmatrix} & \\ \bar{\textbf{Y}}^c_i &:= \dfrac{-1}{2} \begin{bmatrix} \Im \{Y^c_i + (Y^c_i)^T\} & \Re \{ Y^c_i - (Y^c_i)^T \} \\ \Re \{ (Y^c_i)^T - Y^c_i\} & \Im \{Y^c_i + (Y^c_i)^T\} \end{bmatrix} \\ M_i &:= \begin{bmatrix} e_i e_i^T & 0 \\ 0 & e_i e_i^T \end{bmatrix} \end{align} Matrix $Y^c$ denotes the bus admittance matrix of the power grid for outage $c$, and $e_i$ is the i-th basis vector. The operators $\Re$ and $\Im$ denote the real and imaginary parts of the matrix. The initialization points $\eta_1$ from stage A (see Section~\ref{seq:initPoints}) correspond to both feasible and infeasible operating points for the AC optimal power flow problem. Given a set-point $P^*$ for the generation dispatch (corresponding to initialization point $\eta_{1}^{*}$), \eqref{CR_obj} -- \eqref{LINK_SC2} compute the minimum distance from $P^*$ to the closest feasible generation dispatch. Obviously, if $P^*$ is a feasible generation dispatch, the minimum distance is zero. 
\begin{align} \min_{W^c} \, & \sqrt{ \sum_{i \in \mathcal{G}} ( \text{Tr} \{ \textbf{Y}^0_i W^0\} + P_{D_i} - P^*_i )^2 } \label{CR_obj} \\ \text{s.t.} \, & \underline{P}_{G_i} \leq \text{Tr} \{ \textbf{Y}^c_i W^c\} + P_{D_i} \leq \overline{P}_{G_i} \quad \forall i \in \mathcal{N} \, \forall c \in \mathcal{C} \label{PBal} \\ & \underline{Q}_{G_i} \leq \text{Tr} \{ \bar{\textbf{Y}}^c_i W^c\} + Q_{D_i} \leq \overline{Q}_{G_i} \quad \forall i \in \mathcal{N} \, \forall c \in \mathcal{C} \label{QBal} \\ & \underline{V}_i^2 \leq \text{Tr} \{ M_i W^c\} \leq \overline{V}_i^2 \quad \forall i \in \mathcal{N} \, \forall c \in \mathcal{C} \label{VCon} \\ & W^c \succeq 0 \quad \forall c \in \mathcal{C} \label{SDP} \\ & \text{Tr} \{ \textbf{Y}^c_i W^c\} = \text{Tr} \{ \textbf{Y}^0_i W^0\} \quad \forall i \in \mathcal{G} \backslash \{\text{slack}\} \, \forall c \in \mathcal{C} \label{LINK_SC1} \\ & \text{Tr} \{ M_i W^c\} = \text{Tr} \{ M_i W^0\} \quad \forall i \in \mathcal{G} \, \forall c \in \mathcal{C} \label{LINK_SC2} \end{align} The matrix variable $W^0$ refers to the intact system state with bus admittance matrix $Y^0$. The objective function \eqref{CR_obj} minimizes the distance of the active generation dispatch from the set-point $P^*$. The operator $\text{Tr}\{\cdot\}$ denotes the trace of a matrix. For each outage $c \in \mathcal{C}$, one matrix variable $W^c$ is introduced, which is constrained to be positive semidefinite in \eqref{SDP}. The terms $\underline{P}_{G_i}$, $\overline{P}_{G_i}$, $\underline{Q}_{G_i}$, $\overline{Q}_{G_i}$ in the nodal active and reactive power balance \eqref{PBal} and \eqref{QBal} are the minimum and maximum active and reactive power limits of the generator at bus $i$, respectively. The active and reactive power demand at bus $i$ is denoted by $P_{D_i}$ and $Q_{D_i}$. The bus voltage at each bus $i$ is constrained by upper and lower bounds $\overline{V}_i$ and $\underline{V}_i$ in \eqref{VCon}.
In the case of an outage, the generator active power and voltage set-points remain fixed \eqref{LINK_SC1} -- \eqref{LINK_SC2}, as traditional N-1 (and \mbox{N-k}) calculations do not consider corrective control. To reduce the computational complexity of the semidefinite constraint \eqref{SDP}, we apply a chordal decomposition according to \cite{Molzahn2013} and enforce positive semidefiniteness only for the maximal cliques of matrix $W^c$. To obtain an objective function linear in $W^0$, we introduce the auxiliary variable $R$ and replace \eqref{CR_obj} with: \begin{align} \min_{W^c,R} \quad & R \label{Cr_obj}\\ \text{s.t.} & \sqrt{ \sum_{i \in \mathcal{G}} ( \text{Tr} \{ \textbf{Y}_i^0 W^0\} + P_{D_i} - P^*_i )^2 } \leq R \label{SOC_D} \end{align} The convex optimization problem \eqref{PBal} -- \eqref{SOC_D} guarantees that the hypersphere with radius $R$ around the operating point $P^*$ does not contain any points belonging to the non-convex feasible region, considering both the intact system state and the contingencies in set $\mathcal{C}$. Note that the obtained closest generation dispatch $P_i^0 = \text{Tr} \{ \textbf{Y}_i^0 W^0\} + P_{D_i}$ is feasible in the relaxation but not necessarily in the non-convex problem. Hence, with $R$ we obtain a lower bound on the distance to the closest feasible generation dispatch in the non-convex problem. In a procedure similar to \cite{Venzke_SDP_SCOPF_PSCC2018}, we apply an iterative algorithm for the grid pruning: first, given the $\eta_{1}$ initialization points, we solve \eqref{PBal} -- \eqref{SOC_D} without considering contingencies, i.e. $\mathcal{C} = \{ 0 \}$. Using the determined hyperspheres, we eliminate the infeasible operating regions and, using LHS, we resample the reduced search space to select a set of initialization points $\eta_2$.
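The elimination step itself is straightforward once the hyperspheres are available. A minimal sketch (the centers $P^*$ and radii $R$ would come from solving the semidefinite relaxation with an SDP solver, which is not reproduced here; the values in the example are placeholders):

```python
import math

def prune(candidates, hyperspheres):
    """Discard candidate dispatch points lying inside any infeasibility
    hypersphere (center, radius) obtained from the convex relaxation.
    Any point strictly inside such a ball is guaranteed infeasible."""
    kept = []
    for p in candidates:
        inside = any(
            math.dist(p, center) < radius for center, radius in hyperspheres
        )
        if not inside:
            kept.append(p)
    return kept

# Two infeasibility balls and four candidate dispatch points (placeholders)
balls = [((0.0, 0.0), 1.0), ((5.0, 5.0), 2.0)]
candidates = [(0.5, 0.5), (3.0, 0.0), (5.0, 6.5), (9.0, 9.0)]
feasible_candidates = prune(candidates, balls)
```

Points surviving the pruning are the only ones passed on to the resampling and, later, to the Directed Walks.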
In the next stage, given $\eta_{2}$, we determine the five most critical contingencies (see Section~\ref{seq:VulnerableCont.} for more details) and resolve \eqref{PBal} -- \eqref{SOC_D}. We remove all resulting infeasible regions from the set $\eta_2$, and using LHS we resample the remaining feasible region to determine the initialization set $\eta_{3}$. The number of considered contingencies is a trade-off between the number of identified infeasible points and the computational time required to solve the semidefinite relaxation. The number of initialization points $\eta_{1-2}$ should be chosen to minimize the overlap of the hyperspheres while maximizing the search space reduction. As all $\eta_{2}$ points within the infeasible region are immediately discarded, a value $\eta_{2}>\eta_{1}$ is required in order to obtain a smaller distance between the initialization points and, thus, more points within the feasible region than before. However, the more the resulting hyperspheres overlap (as visualized, e.g., in Fig.~\ref{fig_LHS}), the less information every point provides and the less computationally efficient the grid pruning becomes. Thus, the choice also depends on the system size, as the same number of initialization points will lead to different distances between the points depending on the system size. Finally, $\eta_{3}$ needs to be chosen large enough to obtain sufficient initialization points for the directed walks, but not too large, in order to avoid too many duplicates being created during the walks. As an example of the search space reduction, we consider the IEEE 14 bus system in a scenario where all but three generators ($P_{gen 2-4}$) are fixed to specific values. Considering the five most critical contingencies, and using our proposed convex grid pruning algorithm, the search space is reduced by $\unit[65.34]{\%}$.
This is visualized in Fig.~\ref{fig_LHS}; the colored area shows the discarded regions of infeasible points as determined by the superposition of the spheres. \begin{figure}[!t] \centering \includegraphics[width=2.7in]{fig/SCOPF_visualized.pdf} \caption{Search space reduction obtained by the proposed grid pruning algorithm for the IEEE 14 bus system. Operating points within the structure formed by superimposed spheres are infeasible considering N-1 security.} \label{fig_LHS} \end{figure} \subsection{Determining the Most Critical Contingencies} \label{seq:VulnerableCont.} From the definition of the N-1 security criterion it follows that a single contingency suffices to classify an operating point as infeasible. Most of the unsafe operating points, however, belong to the infeasible regions of several contingencies. As a result, by focusing only on a limited number of \emph{critical} contingencies, we can accurately determine a large part of the N-k insecure region, thus reducing the search space without carrying out redundant computations for the whole contingency set. This drastically decreases the computation time. The goal of this section is to propose a methodology that determines the most critical contingencies, which can then be used both in the convex grid pruning algorithm \eqref{PBal} -- \eqref{SOC_D} and in the step direction of the DWs in Section~\ref{seq:DirectionDet.}. While classical N-1 (and N-k) analyses are computationally demanding, recent approaches based on complex network theory have shown promising results while requiring a fraction of that time. Refs.~\cite{Albert2004}, \cite{Fang2016} propose fast identification of vulnerable lines and nodes, using concepts such as the (extended) betweenness or the centrality index.
The centrality index used in \cite{Fang2016}, and first proposed for power systems in \cite{Dwivedi2010, Dwivedi2013}, is based on a classical optimization problem in complex network theory, known as the maximum flow problem. The index refers to the portion of the flow passing through a specific edge in the network. Components with higher centrality have a higher impact on the vulnerability of the system, and thus a higher probability of being critical contingencies. Similar to \cite{Fang2016}, we adopt an improved max-flow formulation for the power system problem which includes vertex weights, and extends the graph with a single global source and a single global sink node. The improved formulation accounts for the net load and generation injections at every vertex, avoids line capacity violations resulting from the superposition of different source-sink combinations, and decreases computation time. Contrary to \cite{Fang2016}, however, we use a modified definition of the centrality index. While Fang et al. \cite{Fang2016} analyze the most critical contingencies for all generation and demand patterns, we are interested in the most critical contingency for every \emph{specific} load and generation profile, i.e. for every operating point $OP_k$. Thus, for each operating point $OP_k$ we define the centrality index as: \begin{align}\label{eq:centrality} C_{ij}^{(k)}= f_{ij,actual}^{(k)} / f_{max}^{(k)} \quad \forall i,j \in \mathcal{N}, \end{align} where $f_{ij,actual}^{(k)}$ are the actual flows for that operating point $OP_k$, and $f_{max}^{(k)}$ represents the maximum possible flow between the global source and the global sink node for the same case. Thus, at every operating point $OP_k$, the lines are ranked according to their contribution to the maximum flow in the system. The higher their centrality index, the more vulnerable the system becomes if they fail, and the higher they are placed in the list of most critical contingencies.
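The centrality computation in \eqref{eq:centrality} can be sketched with a basic Edmonds-Karp max-flow on a toy directed graph with a global source \texttt{s} and global sink \texttt{t}. This is a simplified illustration only: the edge capacities below are placeholders, and the vertex-weighted formulation of \cite{Fang2016} is not reproduced:

```python
from collections import deque, defaultdict

def edmonds_karp(capacity, source, sink):
    """Max flow via shortest augmenting paths (Edmonds-Karp).
    capacity: dict mapping directed edges (u, v) to capacities."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # reverse edge for the residual graph
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Collect the path and its bottleneck capacity
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        total += bottleneck
    flows = {e: max(0, c - residual[e]) for e, c in capacity.items()}
    return total, flows

def centrality(capacity, source, sink):
    """Per-edge centrality C_ij = f_ij / f_max for one operating point."""
    f_max, flows = edmonds_karp(capacity, source, sink)
    return {e: flows[e] / f_max for e in capacity}

# Toy network: global source 's', global sink 't' (capacities are placeholders)
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
C = centrality(cap, 's', 't')
```

Ranking the edges by their centrality value then yields the candidate list of most critical contingencies for that operating point.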
The case study includes a brief discussion about the performance of this vulnerability assessment for the investigated systems. Despite the drastic decrease in computation time and its generally good performance, the proposed approach still includes approximations. As we will see in Sections~\ref{seq:FinalEval}--\ref{seq:Round2}, we take all necessary steps to ensure that we have avoided any possible misclassification. \begin{figure*}[!t] \vspace{-2ex} \centering \includegraphics[width=6in]{fig/distance2.pdf} \vspace{-2ex} \caption{Illustration of the Directed Walk (DW) through a two-dimensional space using varying step sizes $\alpha_i$, following the steepest descent of the distance $d$.} \label{fig_direction} \vspace{-3ex} \end{figure*} \section{Directed Walks} \label{sec:directedwalks} \subsubsection{Variable Step Size}\label{seq:StepSize.} As mentioned in Section~\ref{sec:Challenges}, to achieve an efficient database generation our focus is to assess a sufficiently high number of points inside the HIC area, while traversing the rest of the space faster and with fewer evaluations. To do that, we propose to use a variable step size $\alpha$ depending on the distance $d$ of the operating point from the security boundary $\gamma$. The distance $d(OP_k)$ of the operating point under investigation is defined as: \begin{align}\label{eq:distanceOP} d(OP_k) = \left|\gamma_k - \gamma \right|, \end{align} with $\gamma_k$ being the stability index value for operating point $OP_k$. Then, for $OP_k$ we define the variable step size $\alpha_k$ as follows: \begin{equation} \alpha_k=\left\{ \begin{array}{@{}ll@{}} \epsilon_{1}\cdot P^{max}, & \text{if}\ d(OP_k) >d_1 \\ \epsilon_{2}\cdot P^{max}, & \text{if}\ d_1 \geq d(OP_k) >d_2 \\ \epsilon_{3}\cdot P^{max}, & \text{if}\ d_2 \geq d(OP_k) >d_3 \\ \epsilon_{4}\cdot P^{max}, & \text{otherwise} \end{array} \quad , \right.
\label{eq:stepsize} \end{equation} where $P^{max}$ is the vector of generator maximum capacities, $\epsilon_{1-4}$ are scalars, and the distances $d_{1-3}$ satisfy $d_1>d_2>d_3$. Since the system is highly nonlinear (in our case study, for example, we are searching for the minimum damping ratio considering N-1 security, i.e. $\left|\mathcal{C} \right|$ different nonlinear systems superimposed), the exact step size required to reach the HIC region cannot be constant or determined a priori. Thus, the step size is gradually reduced as we approach the security boundary in order not to miss any points within the HIC region. This is illustrated in Fig.~\ref{fig_direction}. It follows that the distances $d_{1-3}$ and the corresponding $\epsilon_{1-4}$ are tuning factors, to be chosen depending on the desired speed, granularity, precision, and the given system size. Suitable values for the investigated systems are discussed in the case study. \subsubsection{Determining the Step Direction}\label{seq:DirectionDet.} After identifying the step size, we need to determine the direction of the next step. Our goal is to traverse the feasible (or infeasible) region as fast as possible, and enter the HIC region. To do that, at every step we follow the steepest descent of the distance metric $d(OP_k)$, as shown in \eqref{eq:steepestdescent}: \begin{align}\label{eq:steepestdescent} OP_{k+1} &= OP_k- \alpha_k \cdot \nabla d(OP_k), \end{align} where $\alpha_k$ is the step size for $OP_k$, defined in \eqref{eq:stepsize}, and $\nabla d(OP_k)$ is the gradient of $d(OP_k)$. As the distance is a function of the chosen stability index, it is user-specific, and $\nabla d(OP_k)$ in the discrete space must be determined by a suitable sensitivity measure, which differs for different stability indices. If the focus is on voltage stability, for example, the associated margin sensitivities could be used \cite{Greene1997}.
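One Directed Walk step, combining the variable step size \eqref{eq:stepsize} with the steepest-descent update \eqref{eq:steepestdescent}, can be sketched as follows. The thresholds $d_{1-3}$, the scalars $\epsilon_{1-4}$, and the forward finite-difference gradient of a user-supplied distance function are all illustrative; in the actual method the gradient is replaced by the stability-index-specific sensitivity measure evaluated for the most critical contingency:

```python
def step_size(d, p_max, eps=(0.10, 0.05, 0.02, 0.01), thr=(1.0, 0.5, 0.25)):
    """Variable step size: large steps far from the boundary, small near it.
    eps and thr (d_1 > d_2 > d_3) are illustrative tuning factors."""
    d1, d2, d3 = thr
    if d > d1:
        e = eps[0]
    elif d > d2:
        e = eps[1]
    elif d > d3:
        e = eps[2]
    else:
        e = eps[3]
    return [e * p for p in p_max]  # alpha_k = eps_j * P^max, elementwise

def dw_step(op, distance, p_max, h=1e-4):
    """One step OP_{k+1} = OP_k - alpha_k * grad d(OP_k), with the gradient
    estimated by forward finite differences of the distance function."""
    d0 = distance(op)
    grad = []
    for i in range(len(op)):
        pert = list(op)
        pert[i] += h
        grad.append((distance(pert) - d0) / h)
    alpha = step_size(d0, p_max)
    return [x - a * g for x, a, g in zip(op, alpha, grad)]
```

For example, with the toy distance $d(OP) = |OP_1 - 3|$, a walk started at $OP = (5, 1)$ moves its first coordinate towards 3 and leaves the insensitive second coordinate unchanged.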
It is stressed that our method is suitable for any sensitivity measure capable of capturing the distance to the chosen stability boundary. In the case studies of this paper, we focus on small-signal stability and, as described in the next paragraph, we pick the damping ratio sensitivity as a suitable measure. Normally, at every step $k$ we should measure the distance $d$ for all N-1 (or N-k) contingencies, select the minimum of those distances and, based on that, determine the next step size and direction. Having thousands of initialization points $\eta_3$ implies checking along all possible dimensions and N-1 contingencies at every step of thousands of directed walks. Beyond a certain system size, this becomes computationally intractable. Instead, we take advantage of the critical contingency identification procedure described in Section~\ref{seq:VulnerableCont.}, and at every step we measure the distance $d$ assuming the most critical contingency for $OP_k$. This reduces the required analysis from $\left|\mathcal{C} \right|$ systems to one system, which drastically decreases the computation time. Subsequent steps of the procedure, described later, ensure that this approximation is sufficient and that the security boundary is accurately detected as soon as we enter the HIC region. \subsubsection{Sensitivity Measure for Small-Signal Stability} For small-signal stability, we determine the step direction by the sensitivity of the damping ratio, $\zeta$, of the system representing the most critical contingency, $c_c \in \mathcal{C}$. This requires computing the eigenvalue sensitivity which, in turn, depends on the state matrix $\mathbf{A_{c_c}}$ (for more details about forming the state matrix $\mathbf{A}$ the reader is referred to \cite{Kundur:1994tx}).
Thus, the sensitivity of eigenvalue $\lambda_n$ to a system parameter $\rho_i$ is defined as \begin{align}\label{eq:eigenvalue_sens} \frac{\partial \lambda_n}{\partial \rho_i} = \frac{\mathbf{\psi}_n^T \frac{\partial \mathbf{A_{c_c}}}{\partial \rho_i} \mathbf{\phi}_n}{\mathbf{\psi}_n^T \mathbf{\phi}_n}. \end{align} $\mathbf{\psi}_n^T$ and $\mathbf{\phi}_n$ are the left and right eigenvectors, respectively, associated with eigenvalue $\lambda_n$ \cite{Kundur:1994tx}. Defining $\lambda_n = \sigma_n +j \omega_n$, and with $\zeta_n=\frac{-\sigma_n}{\sqrt{\sigma_n^2 + \omega_n^2}}$, we can determine the damping ratio sensitivity $\frac{\partial \zeta_n}{\partial \rho_i}$ as \begin{align}\label{eq:damp_sens} \frac{\partial \zeta_n}{\partial \rho_i} = \frac{\partial}{\partial \rho_i} \left( \frac{-\sigma_n}{\sqrt{\sigma_n^2 + \omega_n^2}}\right) = \omega_n \frac{(\sigma_n\frac{\partial \omega_n}{\partial \rho_i}-\omega_n \frac{\partial \sigma_n}{\partial \rho_i})}{(\sigma_n^2 + \omega_n^2)^{\frac{3}{2}}}. \end{align} Because the computation of $\frac{\partial \mathbf{A_{c_c}}}{\partial \rho_i}$ is extremely demanding, it is usually more efficient to estimate the sensitivity of $\zeta$ to $\rho_i$ through a small perturbation of $\rho_i$. The whole process is illustrated in Fig.~\ref{fig_direction}. The parameters $\rho_i$ correspond to the power dispatch of two generators. The DW is illustrated following the steepest descent of the damping ratio, considering the lowest damped eigenvalue of the system representing the most critical contingency $c_c \in \mathcal{C}$. \subsubsection{Parallelization of the Directed Walks} Directed Walks are easily parallelizable. In our case studies, we have used 80 cores of the DTU HPC cluster for this part of our simulations. To ensure an efficient parallelization and not allow individual processes to take up unlimited time, we set a maximum number of DW steps, $\kappa_{max}$.
The tuning of $\kappa_{max}$ is discussed in the case studies in Section~\ref{sec:Case2}. \subsubsection{Entering the HIC region}\label{enterHIC} As soon as a DW enters the HIC region, two additional processes take place. First, all points surrounding the current operating point are assessed as well, as they may be part of $\Omega$. This is indicated in Fig.~\ref{fig_direction} by the yellow circles. Second, we allow the DW to move along only a single dimension (the dimension is still selected based on the steepest descent) and with the minimum step size. This ensures that we collect as many points within the HIC region as possible. \subsubsection{Termination of the Directed Walks} Each DW terminates if the next step arrives at an operating point already existing in the database. The termination criterion excludes operating points that were collected as ``surrounding points'' of the current step (see Section~\ref{enterHIC}). \subsubsection{Full N-1 contingency check}\label{seq:FinalEval} After all DWs have been performed for every initialization point in parallel, we evaluate all safe (and almost safe) operating points in the database against all possible contingencies to ensure that no violations occur. More formally, we assess all operating points in the final database with $\gamma_k \geq \gamma-\mu$ for all remaining $\left|\mathcal{C} \right|-1$ systems to ensure that a possible false identification of the most critical contingency does not affect the stability boundary detection. This allows us to guarantee a high level of accuracy in determining the security boundary. Despite this being the most computationally expensive step of our method, accounting for over 50\% of the required time, in the absence of expert knowledge this procedure is required for any method reported in the literature \cite{Liu2013b,Liu2014}.
The difference is, however, that our approach manages to discard a large volume of non-relevant data before this step and, as a result, outperforms existing methods by being at least 10 to 20 times faster. \subsubsection{Final Set of Directed Walks}\label{seq:Round2} The maximum number of steps $\kappa_{max}$, although helpful for the efficient parallelization of the DWs, may result in DWs that have not sufficiently explored the search space. In this final step, for any DWs that have reached $\kappa_{max}$ while inside the HIC region, we perform an additional round of DWs to explore the HIC region as thoroughly as possible. The final points from the previous round serve as initialization points. \section{Extension to an N-k Analysis}\label{sec:N-k} As the authors in \cite{Chen2005} highlight, it is computationally impractical to analyze all N-k contingency sequences, due to the large number of possible contingencies and their combinations. In order to minimize the number of required analyses, different approaches exist in the literature to find a subset of plausible harmful N-k contingencies using, e.g., time domain simulations \cite{Weckesser2018}, event trees and functional groups \cite{Chen2005}, or fault chain theory \cite{Wang2011b}. Each of these methods can be combined with our proposed method to determine in advance the list of plausible N-k contingencies, which can then be used as the set of considered critical contingencies during the Directed Walks. In this section, however, as we wish to continue with our approach of not requiring any kind of expert knowledge, we extend the security analysis to an N-k scenario. Up to this point, we have identified the HIC region and the security boundary considering N-1 security and small-signal stability (``N-1 and SSS''). Our ultimate goal in this section is to determine how the HIC region and the security boundary should be adjusted if we consider N-k security and small-signal stability.
To do that, we start with all \emph{stable} points in the ``N-1 and SSS'' HIC region, as identified by the Directed Walks in Section~\ref{sec:directedwalks}. As described in Section~\ref{seq:FinalEval}, a full N-1 contingency assessment is carried out for every final point of the DWs. As a result, we have exact knowledge of the impact of all contingencies, and can rank them from the most critical to the least critical. To extend our analysis to the N-2 case, we apply the two most critical contingencies to our system at the same time. Our goal is to perform Directed Walks from the ``N-1 and SSS'' HIC region to the ``N-2 and SSS'' HIC region, and determine the new security boundary. Admittedly, the combination of the two most critical N-1 contingencies is often, but not always, the most critical N-2 contingency. The goal of the Directed Walks, however, is to determine a path that will lead towards the new HIC region -- and several combinations of most critical contingencies can lead to that. To ensure that no violations occur, similar to the N-1 case, we perform an N-2 contingency check (along with small-signal stability) at the end of the new Directed Walks. This ensures that all operating points which land in the database have been checked for N-2 security against a wide range of contingencies. A similar procedure can be followed to enforce N-k security with $k>2$. Given the combinatorial nature of the N-k security assessment, though, beyond a certain point expert knowledge or advanced methods must be used in order to consider only a limited set of critical N-k contingencies. The different steps are summarized below, taking the \mbox{N-2} security database generation as an example; they can be generalized to the N-k case.
\subsection{Initialization Points} As already mentioned, the initialization points for the ``N-2 and SSS'' security assessment are the final points of the Directed Walks during the ``N-1 and SSS'' procedure described in Section~\ref{sec:directedwalks}; more specifically, all \emph{stable} points belonging to the ``N-1 and SSS'' HIC region. Directed Walks often result in OPs close to each other. To cover as large a space as possible while keeping the computation time low, we pick initialization points that are at least a certain distance apart. As a result, after picking each initialization point, we assume a radius $R_{N-2}$ around that point, and discard any potential initialization point within this radius. This allows us to have a reduced and more uniformly distributed set of initialization points to start our assessment. The choice of $R_{N-2}$ depends on the maximum number of steps $\kappa_{max,N-2}$ of the directed walks during the N-2 security database generation. As we aim to avoid duplicates while maximizing the number of unique OPs within $\Omega$, a choice of $R_{N-2} \leq \kappa_{max,N-2} \cdot \min\{\alpha_k\}$ is recommended, where $\min\{\alpha_k\}$ is the minimum step size as defined in \eqref{eq:stepsize}. \subsection{Most Critical N-2 Contingency} By taking advantage of the full N-1 contingency check described in Section~\ref{seq:FinalEval}, we already know the set of most critical contingencies for every initialization point. The two most critical contingencies of the N-1 case are used as the most critical N-2 contingency for the directed walks during the N-2 security analysis. Please note that the role of the critical contingencies is to determine an appropriate direction of the directed walks towards the new HIC region. Similar to Section~\ref{seq:FinalEval}, an N-2 contingency check (for the chosen N-1 fault) will follow in the end again.
This ensures that even if the choice of the most critical N-2 contingency for the directed walks is inaccurate, it will not necessarily result in a falsely classified operating point in the database. \subsection{Directed Walks} The directed walks work exactly as introduced in Section~\ref{sec:directedwalks}, including a full N-2 contingency check for the given most critical outage (N-1 contingency) in the end. Ideally, a full N-2 contingency check must be carried out for all possible combinations. Given the exponential increase in the number of N-2 contingencies as the system grows larger, expert knowledge or advanced methods become necessary. In our case studies, we did not observe significant changes of the HIC set while checking several different pairs of \mbox{N-2} contingencies. However, in other systems, advanced methods will probably be necessary to select the set of most critical contingencies to be used for this final check. \section{Case Studies} \label{sec:Case2} In the first case study, the efficient database generation method is applied to the IEEE 14 bus system. We measure the efficiency improvement compared with the brute force approach (BF), and we demonstrate how our method outperforms importance sampling techniques. It is impossible to carry out this comparison with the BF approach in larger systems, as BF becomes intractable. In the second case study, we demonstrate the scalability of our method to larger systems, such as the NESTA 162 bus system \cite{Coffrin2014}. In the same case study, we also highlight how the proposed method allows extending an N-1 security assessment to an N-2 security assessment, and emphasize how the high quality of the database generated with our method allows machine learning algorithms to achieve higher accuracy in the data-driven security assessment. The case studies in this paper use the combination of N-1 (or N-2) security and small-signal stability for the definition of the security boundary.
It should be stressed, though, that the proposed methodology provides a general framework and is applicable to a number of other stability metrics or power system models. \subsection{Small-Signal Model} A sixth order synchronous machine model \cite{Sauer1998} with an Automatic Voltage Regulator (AVR) Type I (3 states) is used in this study. With an additional state for the bus voltage measurement delay, this leads to a state-space model of $10\cdot N_G$ states, with $N_G$ representing the number of generators in the grid. In the case of the NESTA 162 bus system, all generators are additionally equipped with Power System Stabilizers (PSS) type 1, adding one more state per generator. The small-signal models were derived using Mathematica; the initialization and small-signal analysis were carried out using Matpower 6.0 \cite{Zimmerman2011} and Matlab. Reactive power limits of the generators are enforced. For a detailed description of the derivation of a multi-machine model, the interested reader is referred to \cite{Milano2010}. Machine parameters are taken from \cite{Anderson1977}. \subsection{IEEE 14 bus system}\label{sec:case_study1} Carrying out the first case study on a small system, where the BF approach is still tractable, allows us to verify that our method is capable of finding $\unit[100]{\%}$ of the points belonging to the HIC region. To ensure comparability, all simulations used 20 cores of the DTU HPC cluster. Network data is given in \cite{Zimmerman2011}; machine parameters are given in \cite{IREP2017}. The considered contingencies include all line faults (except lines 7-8 and 6-13\footnote{\label{note1}The IEEE 14-bus and the NESTA 162-bus systems, based on the available data, are not N-1 secure for all possible contingencies. The outage of those specific lines leads to violations (e.g., voltage limits, component overloading, or small-signal instability) that no redispatching measure can mitigate. This would not have happened in a real system.
In order not to end up with an empty set of operating points, and still use data publicly available, we choose to neglect these outages.}). From the BF approach, we know that 1457 operating points belong to the HIC set, i.e. with $\unit[2.75]{\%} < \zeta_{min} < \unit[3.25]{\%}$. \begin{table}[!t] \vspace{-3ex} \caption{RESULTS: IEEE 14 BUS SYSTEM } \label{tab:resultsIEEE14} \centering \begin{tabular}{rrrcl} \hline Required & Time in \%& OPs in $\Omega$ &Method&\\ time& w.r.t. BF & found & & $\eta_1 \; /\; \eta_2 \; /\; \eta_3 \; /\; \kappa_{max}$\\ \hline $\unit[2.56]{min}$&$\unit[0.46]{\%}$&$\unit[95.13]{\%}$& DWs &$0\; /\;200\; /\; 2k\; /\; 10$\\ $\unit[2.99]{min}$&$\unit[0.54]{\%}$&$\unit[98.9]{\%}$&DWs &$0\; /\;200\; /\; 2k\; /\; 15$\\ $\unit[2.94]{min}$&$\unit[0.53]{\%}$&$\unit[97.80]{\%}$& DWs &$0\; /\;200\; /\; 2k\; /\; 20$\\ \cellcolor{green!25}$\unit[3.77]{min}$&\cellcolor{green!25}$\unit[0.68]{\%}$&\cellcolor{green!25}$\unit[100]{\%}$& \cellcolor{green!25}DWs &\cellcolor{green!25}$ 0\; /\; 200\; /\; 2k\; /\; 25$\\ $\unit[2.94]{min}$&$\unit[0.53]{\%}$&$\unit[97.80]{\%}$& DWs &$0\; /\;200\; /\; 1k\; /\; 20$\\ $\unit[3.48]{min}$&$\unit[0.74]{\%}$&$\unit[99.93]{\%}$& DWs &$ 0\; /\; 200\; /\; 3k\; /\; 20$\\ $\unit[4.80]{min}$&$\unit[0.86]{\%}$&$\unit[100]{\%}$& DWs &$ 0\; /\; 200\; /\; 5k\; /\; 20$\\ \hline $\unit[37.0]{min}$&$\unit[6.66]{\%}$&$\unit[100]{\%}$& \multicolumn{2}{l}{Importance Sampling (IS)} \\ \hline $\unit[556]{min}$&$\unit[100]{\%}$&$\unit[100]{\%}$& \multicolumn{2}{l}{Brute Force (BF)} \end{tabular} \vspace{-3ex} \end{table} The grid pruning without considering any contingency does not reduce the search space in this case study; this is because no combination of generation set-points violates any limits for the given load profile. Thus, $\eta_1$ is chosen as 0 and we directly start with the contingency-constrained grid pruning considering the five most critical contingencies.
Table \ref{tab:resultsIEEE14} compares the performance of our method with the BF approach and an Importance Sampling (IS) approach \cite{Liu2013b}. Our method is capable of creating a database including \emph{all} points of interest in \unit[3.77]{min}; that is \unit[0.68]{\%} of the time required by the BF approach (\unit[9.26]{hours}; 147 times faster). The proposed method is also significantly faster (approx. 10 times) than an Importance Sampling approach (\unit[37.0]{min}). One of the major advantages of our method is the drastic search space reduction through the grid pruning and the most critical contingency identification. In this case study, grid pruning eliminated up to \unit[70.13]{\%} of all $\approx \unit[2.5 \cdot 10^6]{}$ potential operating points (the number varies based on the number of initialization points). At the same time, performing every DW step for the single most critical contingency, we reduce the required assessment from $\left|\mathcal{C} \right|$ systems to 1 system. In larger systems the speed benefits will be even more pronounced, e.g. 14-bus: $\left|\mathcal{C} \right| = 19$ contingencies are reduced to 1 (most critical); 162-bus: $\left|\mathcal{C} \right| = 160$ contingencies reduced to 1. Table \ref{tab:resultsIEEE14} also compares the method's performance for different numbers of initialization points $\eta_{1-3}$ and maximum number of DW steps $\kappa_{max}$. In this system, choosing a higher number of maximum steps instead of a higher number of initialization points leads to time savings. The same holds in larger systems, as shown in Table~\ref{tab:results162Bus}. 
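The core of a Directed Walk — a steepest-descent move of the operating point toward the damping-ratio boundary with a variable step size — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `zeta` and `grad_zeta` are hypothetical stand-ins for the small-signal assessment of the single most critical contingency, and the step sizes, HIC band width, and $\kappa_{max}$ default are illustrative values.

```python
import numpy as np

def directed_walk(p0, zeta, grad_zeta, zeta_target=3.0, band=0.25,
                  step_far=50.0, step_near=1.0, k_max=25):
    """One Directed Walk: steer an operating point p (generator
    setpoints, MW) toward the damping-ratio boundary
    zeta(p) = zeta_target (values in %). zeta and grad_zeta stand in
    for the small-signal assessment of the most critical contingency."""
    p = np.asarray(p0, dtype=float)
    path = [p.copy()]
    for _ in range(k_max):
        dist = zeta(p) - zeta_target
        if abs(dist) <= band:          # inside the HIC band: stop
            break
        # variable step size: large steps far from the boundary,
        # small steps in its vicinity
        step = step_near if abs(dist) < 2 * band else step_far
        g = grad_zeta(p)
        p = p - step * np.sign(dist) * g / (np.linalg.norm(g) + 1e-12)
        path.append(p.copy())
    return path
```

Every point visited along the walk can be stored, so a single walk contributes several operating points to the database, concentrated near the boundary.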
In the highlighted case of Table~\ref{tab:resultsIEEE14}, the required computation time for the different parts of our method is split as follows: \unit[26.67]{\%} (\unit[60.31]{s}) for the grid pruning considering the 5 most critical contingencies (200 operating points); \unit[53.1]{\%} (\unit[120.12]{s}) for the Directed Walks; and \unit[20.24]{\%} (\unit[45.78]{s}) for the final N-1 check of all operating points. Grid pruning eliminates 1149 of the $\eta_3 = 2000$ initialization points, resulting in 851 feasible starting points for the DWs. The most critical contingency is detected correctly in \unit[94.55]{\%} of cases. \subsection{NESTA 162 bus system}\label{sec:case_study2} In the second part of the case study, we demonstrate the performance of our method and compare it with an Importance Sampling (IS) approach for N-1 security assessment of the NESTA 162 Bus system. A BF approach with a $\unit[1]{MW}$ step size for this system requires the assessment of $7.6 \cdot 10^{29}$ operating points for a single load profile. The assessment of all those points becomes computationally intractable. Thus, the absolute number of unique OPs in $\Omega$ is unknown for this system. Therefore, we focus on highlighting that the proposed method finds significantly more unique OPs close to the security boundary, i.e. creates a database of higher quality, in comparable time frames. Then, we demonstrate how the higher quality of the database allows machine learning algorithms to achieve a higher accuracy within a data-driven security assessment. Finally, we demonstrate how the proposed method allows us to extend the N-1 security assessment to an N-k security assessment as described in section \ref{sec:N-k}, here focusing on N-2. \subsubsection{Database Generation for N-1 Security and Small-Signal Stability Assessment} The set of considered contingencies includes 159 line faults\footnotemark[1]. 
To ensure comparability, all simulations for the 162-bus system have been performed using 80 cores of the DTU HPC cluster. \begin{table}[!t] \caption{RESULTS: NESTA 162 BUS SYSTEM} \label{tab:results162Bus} \centering \begin{tabular}{rrrc} \hline \centering Req. & Unique & \centering Method&\\ \centering time&OPs in $\Omega$ & & $\eta_1 \; /\; \eta_2 \; /\; \eta_3 \; /\; \kappa_{max}$\\ \hline $\unit[9.35]{h}$&$3118$& Directed Walks &$30k\; /\;120k\; /\; 800k\; /\; 5$\\ $\unit[13.17]{h}$&$4166$&Directed Walks &$30k\; /\;120k\; /\; 800k\; /\; 10$\\ $\unit[14.57]{h}$&$25046$&Directed Walks &$30k\; /\;120k\; /\; 800k\; /\; 20$\\ $\unit[29.78]{h}$&$150790$& Directed Walks &$ 30k\; /\; 120k\; /\; 800k\; /\; 30$\\ \cellcolor{green!25}$\unit[37.07]{h}$&\cellcolor{green!25}$183295$& \cellcolor{green!25}Directed Walks &\cellcolor{green!25}$ 30k\; /\; 120k\; /\; 800k\; /\; 40$\\ $\unit[13.36]{h}$&$16587$& Directed Walks &$ 100k\; /\;200k\; /\; 800k\; /\; 5$\\ $\unit[18.20]{h}$&$45040$& Directed Walks &$ 100k\; /\;200k\; /\; 800k\; /\; 10$\\ \hline $\unit[35.70]{h}$& 901& \multicolumn{2}{l}{Importance Sampling (IS)} \end{tabular} \vspace{-3ex} \end{table} Compared to the IEEE 14 bus system, the problem size (potential \# of OPs) is 23 orders of magnitude larger while the problem complexity (\# of faults) increased 6.2 times. Table~\ref{tab:results162Bus} presents the results of our method compared with an Importance Sampling approach \cite{Liu2013b}. As the BF approach for this system is intractable, the exact number of points within the HIC region (set $\Omega$) is unknown. Therefore, the focus here is on demonstrating that within similar time frames, our proposed method is capable of finding substantially more unique operating points inside $\Omega$. Indeed, our approach identifies approx. three orders of magnitude more HIC points than an Importance Sampling approach (183'295 vs 901 points). 
In the highlighted case of Table~\ref{tab:results162Bus}, the computation time is split as follows: $\unit[3.44]{h}$ ($\unit[9.28]{\%}$) for LHS (3 stages), $\unit[1.85]{h}$ ($\unit[4.98]{\%}$) for both stages of grid pruning, $\unit[7.04]{h}$ ($\unit[18.98]{\%}$) for the DWs, and $\unit[24.75]{h}$ ($\unit[66.76]{\%}$) for the final N-1 check of all operating points of interest. This highlights that the most computationally expensive part is the complete N-1 analysis and shows why our proposed method is significantly faster than others: (i) we reduce the search space by eliminating infeasible N-1 points through the grid pruning algorithm, (ii) we evaluate most points only for one contingency and discard all with $\zeta < \unit[2.75]{\%}$, and (iii) the method can largely be scheduled in parallel. \subsubsection{N-1 Security and Small Signal Stability Assessment} As the topic of this paper is the efficient database generation for a data-driven security assessment, we briefly highlight how important the higher quality of the databases created by the proposed method is for such an assessment. There are three important factors to be considered here: (i) The more time needed to create a high-quality database, the longer the wait before the machine learning algorithm can start its training phase. (ii) The more data needed to sufficiently describe a system, the longer the algorithm needs to be trained. (iii) The further away points are from the security boundary, the less information they contain about the security boundary. Thus, it is essential that the database contains many unique points close to the security boundary, enabling the algorithm to determine the security boundary as accurately as possible \cite{Krishnan2011}. 
In order to demonstrate the impact of the higher quality of the database on the achievable accuracy of machine learning algorithms, we implement a data-driven N-1 security and small-signal stability assessment using a decision tree. Based on lessons learned from previous works \cite{IREP2017}, we use the active and reactive power flows on the lines as predictors and let the decision tree classify between `fulfilling the requirements' and `not fulfilling the requirements', i.e. N-1 secure \emph{and} a minimum damping ratio $\zeta_{min} \geq \unit[3]{\%}$, or not. This decision tree could then be included in an optimal power flow, or in general an optimization framework, as shown in \cite{IREP2017,Lejla_PSCC}. We compare the highlighted database in Table~\ref{tab:results162Bus} with the one created with Importance Sampling, also analyzed in Table~\ref{tab:results162Bus}. Both required a comparable computation time to be generated. For a fair comparison, we use as training set all assessed operating points in each case; i.e. not only the unique OPs in the HIC region listed in Table \ref{tab:results162Bus}, but rather all OPs that were found in the safe, unsafe, and HIC regions during these 35-37 hours. To simplify the comparison, we used Matlab to train a simple decision tree using the standard Classification and Regression Tree (CART) algorithm with Gini's diversity index as splitting criterion and without limiting the tree depth. In order to avoid over-fitting, we used Matlab to apply cost-complexity pruning, minimizing the cross-validated classification error of the trees. To test the accuracy of the two decision trees, one of which was trained on the database created with our approach and the other on the database created with the Importance Sampling method, we use a common test set of 90'000 operating points. To avoid favoring one of the two database generation methods, we created the test set by merging two datasets. Thus, $50\%$, i.e. 
45'000 data points, are generated through an importance sampling approach using initialization values different from the ones used for the training set. The other 45'000 data points of the test set are taken from our last database generation attempt with our method shown in Table~\ref{tab:results162Bus}, where we generated 45'040 unique points in $\Omega$ in 18.20 h (please note though that the 45'000 points of the test set were picked from a wide range of points generated during that process, which are located in the safe, unsafe, and HIC regions). For that attempt we used $\eta_1 = 100k$, $\eta_2 = 200k$ and $\eta_3 = 800k$, which means that this part of the test set also had different initialization points from the training set. In both cases we ensured that none of the data-points in the test set were part of any training set. Thus, this is completely unseen data and allows us to evaluate the generalization capability of the trained classifiers. The decision tree trained on the highlighted database in Table \ref{tab:results162Bus}, which was generated with our method, achieves an accuracy of $\unit[85.91]{\%}$ while the tree trained on the database created with IS achieves an accuracy of $\unit[73.00]{\%}$. As properties of the test set might have an impact on the accuracy score, additional measures are usually examined to assess the performance of machine learning algorithms. Besides accuracy, an important measure of the quality of the classification is the number of true and false positives and negatives. The Matthews correlation coefficient (MCC) is generally regarded as a balanced measure for the quality of the binary classification. 
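The MCC can be computed directly from the confusion counts $TP$, $TN$, $FP$, $FN$. A minimal sketch (the label arrays in the usage below are hypothetical, not taken from the databases):

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from the confusion counts
    TP, TN, FP, FN of a binary classification."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = int(np.sum(y_true & y_pred))     # correctly identified positives
    tn = int(np.sum(~y_true & ~y_pred))   # correctly identified negatives
    fp = int(np.sum(~y_true & y_pred))    # negatives classified as positive
    fn = int(np.sum(y_true & ~y_pred))    # positives classified as negative
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    # common convention: MCC = 0 when any marginal count is zero
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```

Note that a classifier that always predicts the majority class on an imbalanced test set can score a high accuracy while its MCC is 0, which mirrors the gap between accuracy and MCC discussed next.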
The MCC\footnote{\label{note2}$MCC=\frac{TP\cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}, \quad -1\leq MCC \leq 1$ with $TP$ and $TN$ representing the correctly identified positive and negative samples and $FP$ and $FN$ representing the falsely classified negative and positive samples \cite{Chicco2017}.} is in essence a correlation coefficient between the observed and predicted binary classifications, returning a value between -1 and +1. A value of +1 means perfect prediction, 0 means no better than random prediction, and -1 indicates total disagreement. In our case, the decision tree trained with our method has $MCC_{DW}=0.6247$, while the tree trained with importance sampling has $MCC_{IS}=-0.0943$. This highlights the over-optimistic results of the accuracy measure and the significantly better performance of the tree trained on the database created with our proposed method. As the only difference between the configuration of the two decision trees is the database each tree was trained on, this emphasizes the importance of a high-quality database to achieve the best possible results in a data-driven security assessment. More advanced machine learning algorithms may achieve even better results; this is, however, beyond the scope of this paper. All created operating points are collected and published on GitHub \cite{GitHubDatabase}. \subsubsection{Database Generation for N-2 Security Assessment} To demonstrate how the proposed method is capable of extending an N-1 security assessment to an N-k security assessment, we used the highlighted N-1 database in Table \ref{tab:results162Bus} as a starting point. Similar to the N-1 case, for specific contingencies we were unable to obtain an N-2 secure system\footnotemark[1]. As a result, we had to relax the voltage limits to $\unit[0.9]{p.u.} \leq V_i \leq \unit[1.1]{p.u.}$ for all contingencies, and remove the line fault on the line between bus 125 and bus 126 from the contingency list. 
Thus, in the \mbox{N-2} security analysis the set of considered contingencies includes 158 line faults. In order to avoid the creation of unnecessary duplicates and minimize computation, all data-points located in the vicinity of other data-points from the HIC region of the N-1 security analysis, i.e. all OPs located within a radius of $\unit[5]{MW}$ surrounding another OP, are discarded. The remaining OPs serve as initialization points for a new round of directed walks as described in section \ref{sec:N-k}. Within $\unit[8.5]{h}$ we obtain a database with 52'107 unique OPs that belong to $\Omega_{N-2}$. $\unit[33.2]{\%}$ of the time, i.e. $\unit[2.8]{h}$, is used for the directed walks while $\unit[66.8]{\%}$ of the time, i.e. $\unit[5.7]{h}$, is required by the final N-2 contingency check. Hence, we obtain a $\unit[24]{\%}$ speed-up compared to the creation of the N-1 security database highlighted in Tab. \ref{tab:results162Bus}. This speed-up is achieved because the first half of the method, i.e. the creation of the initialization points and the grid pruning, is not required when starting from the N-1 case. \section{Conclusions} \label{sec:conclusion} This work proposes an efficient database generation method that can accurately determine power system security boundaries, while drastically reducing computation time. Such databases are fundamental to any Dynamic Security Assessment (DSA) method, as the information in historical data is not sufficient, containing very few abnormal situations. This topic has not received the appropriate attention in the literature, with the few existing approaches proposing methods based on importance sampling. Our approach is highly scalable, modular, and achieves drastic speed-ups compared with existing methods. It is composed of two parts. First, the search space reduction, which quickly discards large infeasible regions leveraging advancements in convex relaxation techniques and complex network theory approaches. 
Second, the ``Directed Walks'', a highly parallelizable algorithm, which efficiently explores the search space and can determine the security boundary with extremely high accuracy. Using a number of initialization points, a variable step size, and based on a steepest descent method, the ``Directed Walk'' algorithm traverses quickly through large parts of feasible (or infeasible) regions, while it focuses on the high information content area in the vicinity of the security boundary. Our case studies on the IEEE 14-bus and the NESTA 162-bus system demonstrate the high quality, high scalability and excellent performance of our algorithm. The method is able to identify up to 100\% of the operating points around the security boundary, while achieving computational speed-ups of 10 to 20 times compared with an importance sampling approach. We also demonstrated the importance of a high-quality database to achieve the best possible results in a data-driven security assessment. Given equal computation time, training machine learning algorithms with the database generated by our method clearly outperforms other approaches. Our approach is modular, not dependent on the initial sampling set (as importance sampling is), and agnostic to the security criteria used to define the security boundary. Criteria to be used include N-1 or N-k security, small-signal stability, voltage stability, or a combination of several of them. The method can find application in off-line security assessment, in real-time operation, and in machine learning and other data-driven applications, providing a computationally efficient way to generate the required data for training and testing of new methods. \vspace{-0.2cm} \section{Acknowledgements} \footnotesize This work has been supported by the EU-FP7 Project ``Best Paths'', grant agreement no.~612748, and by the Danish ForskEL project ``Best Paths for DK'', grant agreement 12264. \vspace{-0.2cm} \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec1} \IEEEPARstart{M}{etamaterials} with artificially structured subwavelength building blocks have attracted substantial attention due to their unprecedented flexibility in light manipulation\cite{liu2011metamaterials}. Recently, the research on metamaterials has progressed to tunable and functional metadevice applications, as well as to novel structures with exceptional properties inaccessible to conventional metamaterials. Among the emerging varieties of metamaterials, hyperbolic metamaterials are among the most distinguished thanks to their extreme anisotropy, which enables hyperbolic isofrequency dispersion\cite{poddubny2013hyperbolic, sun2014indefinite}. Such metamaterials are good analogues of traditional optical crystals, with one diagonal component of the permittivity tensor exhibiting the opposite sign to the other two diagonal components. This can be fulfilled by confining the motion of free electrons to one or two spatial dimensions, in practice by periodically arranged metal/dielectric multilayers or metallic nanowire arrays. The permittivity tensors of hyperbolic metamaterials can be well defined by the effective medium theory, and conveniently tuned by changing the filling ratios in addition to the material and other geometric parameters, providing much more flexible control of the resonant responses for a wide range of applications in different spectral bands. 
Moreover, owing to the hyperbolic isofrequency dispersion, these simple and elegant structures possess unique optical and physical properties, such as supporting high propagation wave vectors and enhancing the photonic density of states, and offer an excellent platform for tailoring light-matter interaction, which facilitates prospective applications in high-resolution imaging and lithography\cite{jacob2006optical, lu2012hyperlenses, liang2015squeezing}, spontaneous emission enhancement\cite{lu2014enhancing, roth2017spontaneous, rustomji2017measurement}, ultrasensitive biosensing\cite{kabashin2009plasmonic, sreekanth2016extreme, sreekanth2016enhancing}, and broadband perfect absorption\cite{zhou2014experiment, yin2015ultra, chang2016metasurface}. In the last decade, atomically thin two-dimensional (2D) materials, which show remarkable electronic, optical, mechanical and thermal properties, have received great research interest\cite{xia2014two, he2016further, xiao2017strong, xiao2018active, li2018wavelength}. Graphene, as one of the most popular 2D materials, which can take over the role of metal in providing an inductive layer with negative permittivity, constitutes an excellent building block for the multilayer structures of hyperbolic metamaterials\cite{iorsh2013hyperbolic, chang2016realization}. Graphene-based hyperbolic metamaterials have been extensively investigated for light manipulation at the nanoscale, enabling applications in negative refraction\cite{sreekanth2013negative, barzegar2016study}, light confinement\cite{othman2013graphene, he2013broadband, su2015terahertz} and slow light\cite{jia2015tunable, tyszka2017tunable}. More recently, a newly emerging 2D material, black phosphorus (BP), which has been isolated from bulk BP by mechanical exfoliation and plasma thinning, shows high carrier mobility, relatively low loss and flexible tunability\cite{low2014tunable, ling2015renaissance, liu2018dynamical}. 
Similar to graphene, BP presents metallic behavior with negative permittivity in the mid- and far-infrared regime\cite{low2014plasmons, gonccalves2017hybridized, lu2017strong, nong2018strong, qing2018tailoring}, thus offering the capability to replace the metal in metal/dielectric multilayer hyperbolic metamaterials. Moreover, BP with a direct and layer-sensitive bandgap from 0.3 eV to 2 eV shows an attractive advantage in realizing high on-off ratios\cite{lu2015bandgap, correas2016black, li2017tunable}, which cannot be achieved in zero-bandgap graphene. More importantly, the asymmetric crystal structure of BP brings about high in-plane anisotropy, leading to extremely polarization-dependent electronic and optical properties that may open avenues for novel functional devices\cite{liu2016localized, xiong2017strong, song2018biaxial, yuan2018highly, hong2018towards, feng2019perfect}. However, BP-based hyperbolic metamaterials have rarely been considered, and their fundamental properties and potential applications have yet to be fully investigated. In this work, we propose a novel class of hyperbolic metamaterials based on BP/dielectric multilayer structures for the realization of tunable anisotropic absorption. To our knowledge, this is the first time BP, instead of metal or graphene, has been taken as the building block for multilayer hyperbolic metamaterials. Due to the intrinsic anisotropy of BP, the proposed structure shows perfect absorption for one polarization direction but only 8.2$\%$ absorption for the other polarization direction at the same wavelength. Moreover, the absorption response of the proposed structure can be tuned by varying the electron doping of BP, the thickness and the number of BP/dielectric bilayers in the unit cell. This work confirms the potential of BP as an excellent building block in multilayer hyperbolic metamaterials, and demonstrates promising applications in tunable anisotropic metadevices. 
\section{The geometric structure and numerical model}\label{sec2} Fig.~\ref{fig:1}(a) depicts the schematic of our proposed hyperbolic metamaterials, which are composed of BP/dielectric layer stacking unit cells patterned on a gold mirror. The period of the unit cells is $P=500$ nm and the side length of the cubic structure is $L=400$ nm. The thickness of each BP/dielectric bilayer is denoted by $t=t_{BP}+t_{d}$, with BP layer and dielectric layer thicknesses $t_{BP}=1$ nm and $t_{d}=79$ nm, respectively. The thickness of the gold mirror is set as $t_{Au}=200$ nm. The number of BP/dielectric bilayers in each unit cell is $N=20$. The permittivity of the dielectric is $\varepsilon_{d}=2$. The permittivity of the Au plane is derived from the Drude model with plasma frequency $\omega_{p}=1.37\times10^{16}$ rad/s and collision frequency $\gamma_{p}=4.08\times10^{13}$ rad/s\cite{ordal1985optical}. In this configuration, the transmission is eliminated because the thick gold mirror prevents the propagation of the incident light. \begin{figure}[h] \centering \includegraphics[scale=0.45]{fig_1.eps} \caption{\label{fig:1} (a) The schematic of the hyperbolic metamaterials composed of BP/dielectric layer stacking unit cells patterned on a gold mirror. (b) The top view of the lattice structure of BP.} \end{figure} The atoms in monolayer BP are covalently bonded to form a unique puckered honeycomb structure through $sp^{3}$ hybridization. The asymmetric crystal structure of BP, i.e. armchair edge along the $x$ direction and zigzag edge along the $y$ direction, is depicted in Fig.~\ref{fig:1}(b), leading to remarkable in-plane anisotropic electron dispersion and direction-dependent conduction. 
In the mid- and far-infrared regime, the surface conductivity of BP can be described using the semi-classical Drude model\cite{liu2016localized, xiong2017strong} \begin{equation} \label{eq:1} \sigma_{jj}=\frac{iD_{j}}{\pi(\omega+\frac{i\eta}{\hbar})},~j=x,y, \end{equation} where $j=x,y$ represents the $x$ and $y$ directions, respectively, $D_{j}$ is the Drude weight, $\omega$ is the incident light frequency, $\eta=10$ meV is the electron relaxation rate, $\hbar$ is the reduced Planck's constant. The Drude weight $D_{j}$ is given by \begin{equation} \label{eq:2} D_{j}=\frac{\pi e^{2}n}{m_{j}},~j=x,y, \end{equation} where $e$ is the electron charge, $n$ is the electron doping, $m_{x}=0.15m_{0}$ and $m_{y}=0.7m_{0}$ ($m_{0}$ is the free electron mass) are the in-plane electron effective masses along the $x$ and $y$ directions, respectively\cite{low2014tunable}. Hence, the equivalent relative permittivity of BP in the three directions can be derived by \begin{equation} \label{eq:3} \varepsilon_{jj}=\varepsilon_{r}+\frac{i\sigma_{jj}}{\varepsilon_{0}\omega t_{BP}},~j=x,y,z, \end{equation} where $j=x,y,z$ represents the $x$, $y$ and $z$ directions, respectively. $\varepsilon_{r}=5.76$ is the relative permittivity of BP, $\varepsilon_{0}$ is the vacuum permittivity. The subwavelength BP/dielectric multilayer structure can be treated as an effective homogeneous medium, and the effective permittivity tensors along $x$, $y$ and $z$ directions are determined by\cite{agranovich1985notes} \begin{equation} \label{eq:4} \varepsilon_{jj}^{eff}=\frac{\varepsilon_{jj}t_{BP}+\varepsilon_{d}t_{d}}{t_{BP}+t_{d}},~j=x,y, \end{equation} \begin{equation} \label{eq:5} \varepsilon_{jj}^{eff}=\frac{\varepsilon_{jj}\varepsilon_{d}(t_{BP}+t_{d})}{t_{BP}\varepsilon_{d}+t_{d}\varepsilon_{zz}},~j=z. \end{equation} According to Eqs.~(\ref{eq:4})-(\ref{eq:5}), the real and imaginary parts of the three effective permittivity tensors are depicted in Fig.~\ref{fig:2}(a) and (b), respectively. 
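The effective-medium calculation in Eqs.~(1)-(5) can be reproduced numerically. The sketch below works in SI units, uses the parameter values quoted in the text, and assumes (as is common for 2D materials) no out-of-plane conduction in BP, so the out-of-plane BP response is taken as the background permittivity $\varepsilon_{r}$:

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19       # elementary charge [C]
m0 = 9.1093837015e-31     # electron rest mass [kg]
hbar = 1.054571817e-34    # reduced Planck constant [J s]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]

# Parameters from the text
t_bp, t_d = 1e-9, 79e-9             # BP / dielectric thickness [m]
eps_r, eps_d = 5.76, 2.0            # BP background / dielectric permittivity
eta = 10e-3 * e                     # relaxation rate, 10 meV -> [J]
n = 5e13 * 1e4                      # doping, 5e13 cm^-2 -> m^-2
m_eff = {"x": 0.15 * m0, "y": 0.7 * m0}

def eps_bp(omega, j):
    """In-plane BP permittivity from the Drude conductivity, Eqs. (1)-(3)."""
    D = np.pi * e**2 * n / m_eff[j]                      # Drude weight
    sigma = 1j * D / (np.pi * (omega + 1j * eta / hbar))
    return eps_r + 1j * sigma / (eps0 * omega * t_bp)

def eps_eff(omega):
    """Effective permittivity tensors of the multilayer, Eqs. (4)-(5);
    the out-of-plane BP response is taken as eps_r (assumption)."""
    f = t_bp + t_d
    exx = (eps_bp(omega, "x") * t_bp + eps_d * t_d) / f
    eyy = (eps_bp(omega, "y") * t_bp + eps_d * t_d) / f
    ezz = eps_r * eps_d * f / (t_bp * eps_d + t_d * eps_r)
    return exx, eyy, ezz
```

Evaluating this at the resonance wavelength reproduces the sign pattern $\varepsilon_{xx}^{eff},\varepsilon_{yy}^{eff}<0$, $\varepsilon_{zz}^{eff}>0$ discussed next.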
In the initial setup, a moderate electron doping of $n=5\times10^{13}$ cm$^{-2}$ is considered, and we can observe that $\varepsilon_{xx}^{eff},\varepsilon_{yy}^{eff}<0$ and $\varepsilon_{zz}^{eff}>0$ are satisfied in the infrared regime of interest. Therefore, hyperbolic dispersion properties are expected in the multilayer structure. By changing material properties and geometry parameters, the effective permittivity tensors can be conveniently tuned, which provides an unprecedented degree of freedom to control the resonant responses compared with conventional metamaterials. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_2.eps} \caption{\label{fig:2} (a) The real parts and (b) the imaginary parts of the effective permittivity tensors along $x$, $y$ and $z$ directions for the multilayer hyperbolic structure.} \end{figure} The absorption responses of the proposed hyperbolic metamaterials are investigated via simulations using the finite-difference time-domain (FDTD) method with the commercial package FDTD Solutions (Lumerical Inc., Canada). In the calculations, a moderate mesh grid is adopted to achieve a good tradeoff between accuracy, memory requirements and simulation time. A linearly polarized plane wave is incident along the $-z$ direction; periodic boundary conditions are used in the $x$ and $y$ directions and perfectly matched layer conditions are adopted in the $z$ direction. \section{Results and discussions}\label{sec3} Fig.~\ref{fig:3}(a) and (b) present the simulated spectra of the proposed BP/dielectric metamaterials for electric field $E$ along $x$ and $y$ directions under normal incidence. The total absorption is 100$\%$ at the resonance wavelength of 20.41 $\upmu$m for $E$ along $x$ direction, achieving perfect absorption. On the other hand, a weak absorption of only 8.2$\%$ is observed for $E$ along $y$ direction at the same wavelength, and the strongest absorption of 43$\%$ appears at the resonance wavelength of 34.90 $\upmu$m. 
Therefore, the extremely anisotropic absorption can be well observed for different polarization directions. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_3.eps} \caption{\label{fig:3} The anisotropic absorption and reflection spectra for electric field $E$ along (a) $x$ and (b) $y$ directions under normal incidence in the proposed hyperbolic metamaterials. In the inset of (a), FDTD numerical simulation (red line) and CMT theoretical analysis (red circle) under the critical coupling condition for the electric field $E$ along $x$ direction.} \end{figure} The extremely anisotropic absorption can be interpreted through the impedance match associated with the critical coupling condition. According to the coupled mode theory (CMT), the structure can be considered as a resonator with input and output wave amplitudes $u$ and $y$ and resonance frequency $\omega_{0}$. The mode radiative loss rate and the material loss rate are assigned as $\gamma_{e}$ and $\delta$, respectively. The reflection coefficient of the structure can be described by\cite{piper2014total, jiang2017tunable} \begin{equation} \label{eq:6} \Gamma=\frac{y}{u}=\frac{i(\omega-\omega_{0})+\delta-\gamma_{e}}{i(\omega-\omega_{0})+\delta+\gamma_{e}}, \end{equation} and the absorption is calculated as $A=1-|\Gamma|^{2}$, \begin{equation} \label{eq:7} A=\frac{4\delta\gamma_{e}}{(\omega-\omega_{0})^{2}+(\delta+\gamma_{e})^{2}}. \end{equation} When the structure is driven on resonance ($\omega=\omega_{0}$) and the mode radiative loss rate equals the material loss rate ($\delta=\gamma_{e}$), the critical coupling condition is fulfilled: the reflection from the structure vanishes and all the incident light is absorbed. As shown in the inset of Fig.~\ref{fig:3}(a), the theoretical analysis (CMT) and the numerical simulation (FDTD) are in good agreement in the vicinity of the resonance for the electric field $E$ along $x$ direction. 
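The CMT lineshape of Eq.~(7) is simple enough to check directly; a minimal sketch with dimensionless frequencies and illustrative loss rates:

```python
def cmt_absorption(omega, omega0, delta, gamma_e):
    """Single-mode CMT absorption, Eq. (7): A = 1 - |Gamma|^2
    for a one-port resonator with material loss rate delta and
    radiative loss rate gamma_e."""
    return 4 * delta * gamma_e / ((omega - omega0) ** 2
                                  + (delta + gamma_e) ** 2)
```

At $\omega=\omega_{0}$ with $\delta=\gamma_{e}$ the absorption reaches exactly 1; any detuning or loss-rate mismatch lowers the peak, which is the mechanism behind the polarization contrast discussed here.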
The slight deviation occurs far away from the resonance, which can be attributed to the fact that the CMT assumes a lossless resonator off resonance. From a macroscopic viewpoint, when the critical coupling condition is fulfilled, the effective impedance of the structure should equal that of free space ($Z_{0}=1$). For the proposed metamaterials composed of the BP/dielectric multilayer structure supported on the gold mirror, the effective impedance $Z$ can be described by\cite{smith2005electromagnetic, szabo2010unique} \begin{equation} \label{eq:8} Z=\frac{(T_{22}-T_{11})\pm\sqrt{(T_{22}-T_{11})^{2}+4T_{12}T_{21}}}{2T_{21}}, \end{equation} where $T_{11}$, $T_{12}$, $T_{21}$ and $T_{22}$ are the element values of the transfer (T) matrix. The two solutions of the effective impedance correspond to the two propagation directions of light; for example, the plus sign corresponds to the positive direction. The elements of the T matrix can be calculated from the scattering (S) matrix elements as \begin{equation} \label{eq:9} T_{11}=\frac{(1+S_{11})(1-S_{22})+S_{21}S_{12}}{2S_{21}}, \end{equation} \begin{equation} \label{eq:10} T_{12}=\frac{(1+S_{11})(1+S_{22})-S_{21}S_{12}}{2S_{21}}, \end{equation} \begin{equation} \label{eq:11} T_{21}=\frac{(1-S_{11})(1-S_{22})-S_{21}S_{12}}{2S_{21}}, \end{equation} \begin{equation} \label{eq:12} T_{22}=\frac{(1-S_{11})(1+S_{22})+S_{21}S_{12}}{2S_{21}}. \end{equation} In order to achieve the perfect absorption, the effective impedance of the whole structure needs to match that of free space, which requires the value of $Z$ in Eq.~(\ref{eq:8}) to be as close to 1 as possible. The effective impedance of the structure is equal to the free space impedance at the resonance wavelength of 20.41 $\upmu$m for $E$ along $x$ direction, i.e. 
$Z=0.99-0.02i$, while the effective impedance is calculated as $3.49\times10^{-4}-0.16i$ at the same wavelength and as $0.14-0.04i$ at the resonance wavelength of 34.90 $\upmu$m for $E$ along $y$ direction. Due to the anisotropic property of BP along $x$ and $y$ directions, the impedance match associated with the critical coupling condition can be realized in only one direction at a time, giving rise to the anisotropic absorption as shown in Fig.~\ref{fig:3}. To further clarify the physical mechanism of the anisotropic absorption behaviors in the proposed structure, we simulate the electric field distributions at the wavelength of 20.41 $\upmu$m for $E$ along $x$ and $y$ directions. As shown in Fig.~\ref{fig:4}(a), the pronounced confinement of the electric field between the adjacent unit cells can be observed for the electric field $E$ along $x$ direction. The divergence and convergence of the electric field exhibit a clear tendency to follow the polarization of the incident plane wave, and the positive and negative charges accumulate at the left and right sides of the gaps, which are typical characteristics of the electric dipole resonance in the hyperbolic metamaterials. When the structure is driven on resonance, the mode radiative loss rate and the material loss rate are the same, which fulfills the critical coupling condition. The effective impedance of the proposed structure matches that of free space well, leading to the perfect absorption of the incident plane wave for $E$ along $x$ direction. In Fig.~\ref{fig:4}(b), only weak confinement of the electric field can be observed at the same wavelength for $E$ along $y$ direction; the structure is not in the critical coupling state, corresponding to the weak absorption. 
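The S-to-T conversion of Eqs.~(9)-(12) and the impedance retrieval of Eq.~(8) can be sketched as follows. This is a hedged illustration, not the retrieval pipeline used for the simulated structure: the S-parameters are assumed normalized to free space, and the test values are illustrative.

```python
import numpy as np

def impedance_from_s(s11, s21, s12, s22, sign=+1):
    """Effective impedance Z from S-parameters via the T-matrix,
    Eqs. (8)-(12); `sign` selects the propagation direction
    (plus sign = positive direction)."""
    t11 = ((1 + s11) * (1 - s22) + s21 * s12) / (2 * s21)
    t12 = ((1 + s11) * (1 + s22) - s21 * s12) / (2 * s21)
    t21 = ((1 - s11) * (1 - s22) - s21 * s12) / (2 * s21)
    t22 = ((1 - s11) * (1 + s22) + s21 * s12) / (2 * s21)
    disc = np.sqrt((t22 - t11) ** 2 + 4 * t12 * t21 + 0j)  # complex branch
    return ((t22 - t11) + sign * disc) / (2 * t21)
```

As a sanity check, a reflectionless symmetric slab ($S_{11}=S_{22}=0$, $S_{21}=S_{12}$) retrieves $Z=1$, i.e. a structure matched to free space.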
According to the above analysis, it can be concluded that the presence or absence of critical coupling directly determines the impedance match or mismatch of the whole structure to free space and leads to the anisotropic absorption, which is rooted in the asymmetric crystal structure of BP. \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{fig_4.eps} \caption{\label{fig:4} The electric field distributions at the wavelength of 20.41 $\upmu$m for $E$ along (a) $x$ and (b) $y$ directions.} \end{figure} Fig.~\ref{fig:5}(a) and (b) present the dependence of the absorption spectra on the incident angle for $E$ along the $x$ and $y$ directions, respectively. For $E$ along the $x$ direction, near perfect absorption can be observed over a large range of incident angles from 0 to 45 degrees. The resonance wavelength shows a slight blue shift with increasing incident angle, which may be induced by the asymmetric electric field component of the obliquely incident plane wave. For $E$ along the $y$ direction, the absorption efficiency gradually decreases as the incident angle increases, and drops rapidly once the incident angle exceeds 30 degrees. Hence, the proposed BP/dielectric multilayer structure maintains its anisotropic absorption behavior well for incident angles from 0 to 45 degrees for $E$ along the $x$ and $y$ directions. \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{fig_5.eps} \caption{\label{fig:5} The angular dependence of the absorption spectra for $E$ along (a) $x$ and (b) $y$ directions.} \end{figure} By changing the material properties and geometry parameters, we can flexibly control the absorption responses of the proposed hyperbolic metamaterials. In the following, we investigate the tunable anisotropic absorption of the structure by varying the electron doping $n$ of the BP layer, the thickness $t$ and the layer number $N$ of the BP/dielectric bilayer.
According to Eqs.~(\ref{eq:1})-(\ref{eq:5}), the electron doping $n$ directly determines the surface conductivity of BP, and thus influences the effective permittivity tensors and the absorption behavior of the whole structure. In Fig.~\ref{fig:6}(a) and (b), the absorption spectra are plotted at different electron doping $n$ ranging from $1\times10^{13}$ cm$^{-2}$ to $9\times10^{13}$ cm$^{-2}$, where the thickness $t=80$ nm and the number $N=20$ of the BP/dielectric bilayer are in line with the initial setup. The change in the electron doping $n$ of BP causes obvious variations in the absorption response. For $E$ along the $x$ direction, the absorption peak of the proposed structure gradually increases from 37.78$\%$ to 100$\%$ as $n$ increases from $1\times10^{13}$ cm$^{-2}$ to $5\times10^{13}$ cm$^{-2}$, and then decreases to 81.58$\%$ with the further increase of $n$ to $9\times10^{13}$ cm$^{-2}$. For $E$ along the $y$ direction, the absorption peak increases monotonically with $n$. At the same time, it can be clearly observed that the increase of the electron doping $n$ of BP also leads to blue shifts of the resonance wavelengths for $E$ along both the $x$ and $y$ directions. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_6.eps} \caption{\label{fig:6} The anisotropic absorption spectra for electric field $E$ along (a) $x$ and (b) $y$ directions under normal incidence with various electron doping of BP ($n$ ranging from $1\times10^{13}$ cm$^{-2}$ to $9\times10^{13}$ cm$^{-2}$) in the proposed hyperbolic metamaterials.} \end{figure} The variation in the absorption peak originates from the change in the effective impedance of the proposed structure influenced by the electron doping of BP. The effective impedances of the whole structure for $E$ along the $x$ direction can be calculated as examples.
At an electron doping of $n=1\times10^{13}$ cm$^{-2}$, the effective impedance is $0.12-0.05i$ at 34.57 $\upmu$m, which is mismatched with that of free space and leads to an absorption peak of only 37.78$\%$. As $n$ increases to $5\times10^{13}$ cm$^{-2}$, the effective impedance is $0.99-0.02i$ at 20.41 $\upmu$m, matching that of free space, and the perfect absorption is achieved. Finally, when $n$ reaches $9\times10^{13}$ cm$^{-2}$, the effective impedance is $2.44-0.38i$ at 17.87 $\upmu$m, and the absorption peak drops to 81.58$\%$ due to the impedance mismatch. On the other hand, the blue shifts of the resonance wavelengths can be attributed to the fact that the effective permittivities of the multilayer hyperbolic structure decrease as the electron doping of BP increases, as shown in Fig.~\ref{fig:7}(a) and (b). Therefore, to approach the values of the effective permittivities $\varepsilon_{xx}^{eff}=-18.87$ and $\varepsilon_{yy}^{eff}=-2.44$ at the perfect absorption, the resonance wavelengths for the cases of $n=1\times10^{13}$ cm$^{-2}$ and $3\times10^{13}$ cm$^{-2}$ need to shift towards longer wavelengths, while the resonance wavelengths for the cases of $n=7\times10^{13}$ cm$^{-2}$ and $9\times10^{13}$ cm$^{-2}$ need to shift towards shorter wavelengths. It is also observed that when the electron doping reaches $n=9\times10^{13}$ cm$^{-2}$, bumps appear at the shorter wavelength side of the absorption peak for $E$ along the $x$ direction, which can be attributed to high-order resonances. For the resonance located at 14.31 $\upmu$m, as an example, relatively weak confinement of the electric field between adjacent unit cells can be observed in the electric field distribution (not shown).
In contrast to the electric dipole resonance, the divergence and convergence of the electric field exhibit a more complicated pattern: at the top of the adjacent unit cells, the positive and negative charges accumulate at the left and right sides of the gaps, while at the bottom, the positive and negative charges accumulate at the right and left sides of the gaps, showing an opposite distribution and forming the electric quadrupole resonance. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_7.eps} \caption{\label{fig:7} The real parts of the effective permittivity tensors along (a) $x$ and (b) $y$ directions for the multilayer hyperbolic structure with various electron doping of BP ($n$ ranging from $1\times10^{13}$ cm$^{-2}$ to $9\times10^{13}$ cm$^{-2}$).} \end{figure} Next we investigate the absorption responses of the proposed structure with different thicknesses $t$ of the BP/dielectric bilayer, where the electron doping of BP is fixed as $n=5\times10^{13}$ cm$^{-2}$ and the number of the BP/dielectric bilayer is set as $N=20$. As shown in Fig.~\ref{fig:8}(a), all the absorption peaks of the proposed structure are above 97$\%$ for $E$ along the $x$ direction, which indicates that we can approach near perfect absorption at different wavelengths by adjusting the thickness of the BP/dielectric bilayer. Also, for $E$ along the $y$ direction, a slight increase in the absorption peak from 34.77$\%$ to 49.88$\%$ can be observed as the thickness increases, as Fig.~\ref{fig:8}(b) shows. At the same time, with the increase of the thickness of the BP/dielectric bilayer from $t=60$ nm to 100 nm, the resonance wavelengths for $E$ along both the $x$ and $y$ directions show redshifts from 16.10 $\upmu$m to 23.95 $\upmu$m and from 29.95 $\upmu$m to 39.10 $\upmu$m, respectively.
\begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_8.eps} \caption{\label{fig:8} The anisotropic absorption spectra for electric field $E$ along (a) $x$ and (b) $y$ directions under normal incidence with various thicknesses of the BP/dielectric bilayer ($t$ ranging from 60 nm to 100 nm) in the proposed hyperbolic metamaterials.} \end{figure} The variation in the absorption peak arises because the change in the thickness of the BP/dielectric bilayer alters the effective impedance of the proposed structure. For example, the effective impedances of the proposed structure for $E$ along the $x$ direction are calculated as $0.72-0.05i$ at 16.10 $\upmu$m for $t=60$ nm, $0.99-0.02i$ at 20.41 $\upmu$m for $t=80$ nm and $1.27-0.02i$ at 23.95 $\upmu$m for $t=100$ nm, respectively. The impedance match to free space leads to the nearly perfect absorption of 97.35$\%$, 100$\%$ and 98.51$\%$ at the respective resonance wavelengths. On the other hand, the redshifts of the resonance wavelengths can be explained by the variations of the effective permittivity tensors with the increase of the thickness of the BP/dielectric bilayer. In Fig.~\ref{fig:9}(a) and (b), both effective permittivity tensors, $\varepsilon_{xx}^{eff}$ along the $x$ direction and $\varepsilon_{yy}^{eff}$ along the $y$ direction, show a rising tendency at a given wavelength as the thickness increases from 60 nm to 100 nm. To satisfy the permittivity tensors at the absorption peak, the resonance wavelengths for the cases of $t=60$ nm and 70 nm need to shift towards shorter wavelengths while the resonance wavelengths for the cases of $t=90$ nm and 100 nm need to shift towards longer wavelengths.
\begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_9.eps} \caption{\label{fig:9} The real parts of the effective permittivity tensors along (a) $x$ and (b) $y$ directions for the multilayer hyperbolic structure with various thicknesses of the BP/dielectric bilayer ($t$ ranging from 60 nm to 100 nm).} \end{figure} The absorption response of the proposed structure can also be tuned by altering the number of the BP/dielectric bilayer, as shown in Fig.~\ref{fig:10}(a) and (b). For $E$ along the $x$ direction, the absorption peak increases from 81.17$\%$ to 100$\%$ when $N$ increases from 12 to 20, and then gradually declines to 82.75$\%$ as $N$ continues increasing to 28. For $E$ along the $y$ direction, the absorption peak shows a monotone increase from 12.46$\%$ to 75.82$\%$. These variations arise because different layer numbers alter the structure of the unit cell and thereby change the effective impedance of the structure. For example, when $N=12$, 20 and 28, the effective impedances of the whole structure for $E$ along the $x$ direction are calculated as $0.40-0.01i$ at the resonance wavelength of 16.40 $\upmu$m, $0.99-0.02i$ at 20.41 $\upmu$m, and $2.40-0.16i$ at 24.48 $\upmu$m, respectively. The impedance match of the whole structure to free space leads to perfect absorption for $N=20$, while the mismatch causes the relatively weak absorption for $N=12$ and 28. It is also observed that the resonance wavelengths show slight redshifts as the number of the BP/dielectric bilayer increases from $N=12$ to 28 for $E$ along both the $x$ and $y$ directions, which can also be attributed to the variations of the effective permittivity tensors. Besides, when the number of the BP/dielectric bilayer reaches $N=28$, bumps appear at the shorter wavelength side of the absorption peak for $E$ along the $x$ direction, which can also be attributed to the high-order resonance, as is the case with the high doping level.
It should be noted that the effective permittivity tensors in Eqs.~(\ref{eq:4})-(\ref{eq:5}) are defined within effective medium theory under the assumption that the number of the BP/dielectric bilayer is infinite. Hence a slight deviation in the effective permittivity tensors occurs when a finite layer number is employed to approximate the infinite condition in practical simulations. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{fig_10.eps} \caption{\label{fig:10} The anisotropic absorption spectra for electric field $E$ along (a) $x$ and (b) $y$ directions under normal incidence with various numbers of the BP/dielectric bilayer ($N$ ranging from 12 to 28) in the proposed hyperbolic metamaterials.} \end{figure} \section{Conclusions}\label{sec4} In conclusion, we theoretically investigate the tunable anisotropic absorption of a structure composed of BP/dielectric multilayer stacking unit cells patterned on a gold mirror. The proposed structure shows perfect absorption for $E$ along the $x$ direction while absorbing only 8.2$\%$ for $E$ along the $y$ direction at the same wavelength. The physical mechanism of the perfect absorption lies in the impedance match associated with the critical coupling condition due to the excitation of the electric dipole resonance between adjacent unit cells, and the anisotropic responses can be attributed to the asymmetric crystal structure of BP. By changing the electron doping of BP, the thickness and the number of the BP/dielectric bilayer in the unit cell, the absorption responses of the proposed structure can be flexibly controlled. This work demonstrates the potential of BP as an excellent building block for multilayer hyperbolic metamaterials, and provides inspiration and guidance for a wide variety of tunable anisotropic metadevices such as polarizers and signal processing systems based on hybrid BP/dielectric multilayer structures.
\section{Introduction} Solid-state dewetting of thin films has been observed in various thin film/substrate systems by many research groups~\cite{Thompson12,Leroy16,Jiran90,Jiran92,Ye10a,Ye10b,Ye11a,Ye11b,Rabkin14,Kosinova14,Kovalenko17, Naffouti16,Naffouti17,Pierre09b}, and has attracted increasing attention because of its considerable technological interest. In particular, in recent years, solid-state dewetting has provided a simple method for making ordered nanoparticles and quantum dot arrays, which have a rich variety of applications, such as sensors~\cite{Armelao06, Mizsei93}, optical and magnetic devices~\cite{Armelao06, Rath07}, and catalysts for the growth of carbon and semiconductor nanotubes and nanowires~\cite{Randolph07, Schmidt09}. Ono et al.~\cite{Ono95} first observed solid-state dewetting (or agglomeration) in the silicon-on-insulator (SOI) system. Following this experiment, many experimental studies on dewetting of single crystal films (mostly SOI~\cite{Nuryadi00, Nuryadi02} and Ni~\cite{Ye10a, Ye10b, Ye11a, Ye11b} films) have been performed and have shown that it can produce well-ordered and controllable patterns. Unlike single crystal films, polycrystalline films usually lead to disordered structures on a flat substrate. However, recent experiments have shown that thin films can evolve into ordered arrays of nanoparticles and well-organized patterns on a pre-patterned substrate, i.e., by making use of templated solid-state dewetting~\cite{Giermann05,Giermann11,Ye11b,Naffouti16}. These and related studies have led to increasing research interest in the kinetics of solid-state dewetting of thin films on both flat and curved substrates.
The dewetting of solid thin films deposited on substrates is similar to the dewetting of liquid films~\cite{deGennes85}, and they share some common features, such as the moving contact line~\cite{Qian06,Ren07,Tripathi18}, the Rayleigh instability~\cite{Pairam09,Mcgraw10,Kim15}, and multi-scale, multi-physics features~\cite{Spencer97,Xu11,Herminghaus08,Khenner18}. However, there are several important differences. For example, the mass transport processes are totally different: solid-state dewetting occurs through surface diffusion rather than the fluid dynamics of liquid dewetting. In addition, surface energy anisotropy plays an important role in determining the equilibrium shapes of particles and the kinetic evolution during solid-state dewetting, whereas isotropic surface energy is usually assumed in liquid dewetting. In the literature, solid-state dewetting is usually modeled as a surface-tracking problem described by surface diffusion flow, coupled with moving contact lines where the film, vapor and substrate phases meet~\cite{Srolovitz86b,Wong00,Du10,Dornel06,Jiang12,Wang15,Jiang16}. Based on different understandings of this problem, there have been many theoretical and modeling studies of solid-state dewetting in the literature. Srolovitz and Safran~\cite{Srolovitz86b} first proposed a sharp-interface model to investigate hole growth under three assumptions, i.e., isotropic surface energy, a small slope profile and cylindrical symmetry. Based on this model, Wong et al.~\cite{Wong00} designed a ``marker particle'' numerical method for solving the two-dimensional fully-nonlinear isotropic sharp-interface model (i.e., without the small slope assumption), and investigated the two-dimensional edge retraction of a semi-infinite step film.
Dornel et al.~\cite{Dornel06} designed another numerical scheme to study the pinch-off phenomenon of two-dimensional island films with high aspect ratios during solid-state dewetting. Jiang et al.~\cite{Jiang12} designed a phase-field model for simulating solid-state dewetting of thin films with isotropic surface energies; this approach naturally captures the topological changes that occur during evolution. Although most of the above models focus on the isotropic surface energy case, recent experiments have clearly demonstrated that the kinetic evolution that occurs during solid-state dewetting is strongly affected by crystalline anisotropy~\cite{Thompson12,Leroy16}. In order to investigate the effect of surface energy anisotropy, many approaches have been proposed and discussed, such as a discrete model~\cite{Dornel06}, a kinetic Monte Carlo model~\cite{Pierre09b,Dufay11}, a crystalline model~\cite{Carter95,Zucker13} and continuum models based on partial differential equations (PDEs)~\cite{Wang15,Jiang16,Bao17}. However, most of these works are restricted to flat substrates, and dewetting of thin solid films on curved substrates is still not well understood. For simulating template-assisted solid-state dewetting, Giermann and Thompson proposed a simple model~\cite{Giermann11} to semi-quantitatively understand some observed phenomena, but they could not include the contact line/point migration or the surface energy anisotropy in this simple model. Klinger and Rabkin~\cite{Klinger12} developed a discrete algorithm for simulating capillary-driven motion of nanoparticles on curved rigid substrates in two dimensions. In their approach, the self-diffusion along the film/substrate interface (i.e., interface diffusion) and the surface diffusion along the particle surface are included, and the continuity of fluxes and chemical potentials of the interface and surface diffusion at the moving contact point is used to tackle the moving contact line problem.
To the best of our knowledge, there are no complete continuum PDE models available in the literature for simulating the kinetics of solid particles on curved substrates. In recent years, a continuum model based on a sharp-interface approach was proposed by the authors for simulating solid-state dewetting of thin films on flat substrates~\cite{Wang15,Jiang16,Bao17b} in two dimensions. This continuum model is obtained from the thermodynamic variation of the total interfacial free energy functional and Mullins's method for deriving the surface diffusion equation~\cite{Mullins57}. The model describes interface evolution occurring through surface diffusion and contact point migration, and surface energy anisotropy is easily included in the model, no matter how strong the anisotropy is, i.e., whether weakly anisotropic~\cite{Wang15} or strongly anisotropic~\cite{Jiang16}. Mathematically, we can rigorously prove that the sharp-interface model fulfills the area/mass conservation and total free energy dissipation properties under the kinetics described by the model, and a parametric finite element method was designed to efficiently solve the mathematical model~\cite{Bao17}. Furthermore, we have recently extended these approaches to simulating solid-state dewetting in three dimensions~\cite{Bao18,Zhao17thesis}, i.e., a moving open surface coupled with moving contact lines. In this paper, we generalize these modeling techniques and numerical methods to study solid-state dewetting of thin films on non-flat rigid substrates. Throughout, we assume that surface diffusion is the only driving force for solid-state dewetting, that elastic effects (interface stress, stresses associated with capillarity) are negligible, and that no chemical reactions or phase transformations occur during the evolution. The rest of this paper is organized as follows.
In Section II, based on a thermodynamic variational approach, we rigorously derive a mathematical sharp-interface model for simulating solid-state dewetting of thin films on curved rigid substrates. Then, we perform numerical simulations to investigate several specific phenomena of solid-state dewetting of thin films on curved substrates: the equilibrium shapes of small island films and the pinch-off of large island films in Section III, ``small'' solid particle migration in Section IV, and templated solid-state dewetting in Section V. Finally, we draw some conclusions in Section VI. \section{Mathematical formulation} We first discuss the surface evolution kinetics for solid-state dewetting of thin films on rigid, curved substrates in two dimensions (2D). Following the usual non-equilibrium thermodynamic approach, we model the kinetics as driven by the variation of the free energy of the system with respect to matter transport in a sharp-interface framework. Most of the relevant variables are described with reference to the example shown in Fig.~\ref{fig:1}. We denote the film/vapor interface profile as $\Gamma = \mathbf{X}(s) = \big(x(s),y(s)\big)$, $s \in [0,L]$, where $s$ and $L$ represent the arc length and the total length of the interface, respectively. The unit tangent vector $\bmath{\tau}$ and the outer unit normal vector $\mathbf{n}$ of the film/vapor interface curve $\Gamma$ can be expressed as $\bmath{\tau}:=(x_s, y_s)$ and $\mathbf{n}:=(-y_s, x_s)$, respectively. The angle $\theta$ represents the angle between the local outer unit normal vector and the $y$-axis (or the local tangent vector and the $x$-axis).
\begin{figure}[!htp] \centering \includegraphics[width=.45\textwidth]{fig_curved/fig1.eps} \caption{A schematic illustration of a solid film (island) in contact with a rigid, curved substrate in two dimensions, where $c_l$ and $c_r$ represent the left and right contact points, $\Gamma$ is the film/vapor interface curve, and $\hat{\Gamma}$ is the curved substrate.} \label{fig:1} \end{figure} The curved rigid substrate profile is denoted as $\hat{\Gamma}:= \hat{\mathbf{X}}(c) = \big(\hat{x}(c), \hat{y}(c)\big)$ with arc length $c\in [0, \hat{L}]$, where $\hat{L}$ represents the total length of the curved substrate. Similarly, $\hat{\bmath{\tau}}$, $\hat{\mathbf{n}}$ and $\hat{\theta}$ represent the unit tangent vector, the (outer) unit normal vector of the curved substrate $\hat{\Gamma}$, and the angle between the local unit normal vector and the $y$-axis, respectively. The left and right contact points are located at the intersections of the interface curve $\Gamma$ and the substrate curve $\hat{\Gamma}$, i.e., the contact points are at $s = 0$ and $s = L$ on $\Gamma$ and $c = c_l$ and $c = c_r$ on $\hat{\Gamma}$. For simplicity, we denote both as $c_l$ and $c_r$ (shown in Fig.~\ref{fig:1}), and represent the tangent angles to the external surface $\Gamma$ and substrate $\hat{\Gamma}$ at the two contact points as \begin{gather*} \theta_{\rm e}^l := \theta(s=0), \quad \theta_{\rm e}^r := \theta(s=L), \\ \hat{\theta}^l := \hat{\theta}(c=c_l), \quad \hat{\theta}^r := \hat{\theta}(c=c_r), \end{gather*} where $\theta_{\rm e}^l$ and $\theta_{\rm e}^r$ are the left and right extrinsic contact angles~\cite{Xu17}, respectively. Hence, the left and right intrinsic (or true) contact angles are \begin{equation}\label{eq:def_thetad} \theta_{\rm i}^l := \theta_{\rm e}^l - \hat{\theta}^l,\quad \theta_{\rm i}^r := \theta_{\rm e}^r - \hat{\theta}^r,
\end{equation} which satisfy \begin{equation*} \cos \theta_{\rm i}^l = \bmath{\tau}(0) \cdot \hat{\bmath{\tau}}(c_l), \qquad \cos \theta_{\rm i}^r = \bmath{\tau}(L) \cdot \hat{\bmath{\tau}}(c_r). \end{equation*} With the above notation, the total interfacial free energy of the three-phase solid-state dewetting system (including possibly anisotropic surface energies) can be written as~\cite{Wang15,Jiang16,Bao17b}: \begin{equation}\label{eq:energy} W=\int_{\Gamma}\gamma(\theta)\;d\Gamma+ \underbrace{\big(\gamma_{\scriptscriptstyle{FS}}-\gamma_{\scriptscriptstyle{VS}}\big)(c_r-c_l)}_{{\textbf {Substrate\;Energy}}}, \end{equation} where the first term represents the film/vapor interface energy and the second term represents the substrate interface energy (we have subtracted the energy of the bare substrate). Here, $\gamma_{\scriptscriptstyle{FV}}$, $\gamma_{\scriptscriptstyle{FS}}$ and $\gamma_{\scriptscriptstyle{VS}}$ are the surface energy densities of the film/vapor, film/substrate and vapor/substrate interfaces, respectively. We assume that $\gamma_{\scriptscriptstyle{FS}}$ and $\gamma_{\scriptscriptstyle{VS}}$ are constants, and that the film/vapor interface energy density is a function of the interface orientation angle, i.e., $\gamma_{\scriptscriptstyle{FV}}:=\gamma(\theta)$. If $\gamma(\theta)\equiv {\text {constant}}$, the surface energy is isotropic; otherwise, it is anisotropic. Furthermore, if the surface stiffness $\widetilde\gamma(\theta):=\gamma(\theta)+\gamma^{\prime\prime}(\theta)>0$ for all $\theta\in[-\pi,\pi]$, the surface energy is weakly anisotropic; otherwise, if $\widetilde\gamma(\theta)=\gamma(\theta)+\gamma^{\prime\prime}(\theta)<0$ for some orientations $\theta\in[-\pi,\pi]$, the surface energy is strongly anisotropic.
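For a polygonal discretization of the film/vapor interface, the total free energy in Eq.~\eqref{eq:energy} can be evaluated directly. The following is a minimal sketch, assuming the interface is sampled as an ordered array of points and $\gamma$ is supplied as a callable; the function name and input conventions are ours:

```python
import numpy as np

def total_energy(X, gamma, gamma_fs_minus_vs, c_l, c_r):
    """Discrete approximation of Eq. (energy): sum of gamma(theta)*ds over
    the film/vapor polyline plus the substrate term
    (gamma_FS - gamma_VS)*(c_r - c_l).

    X : (n, 2) array of interface points, ordered along the curve.
    gamma : callable mapping tangent angles theta to energy densities.
    """
    dX = np.diff(X, axis=0)                  # segment vectors
    ds = np.linalg.norm(dX, axis=1)          # segment lengths
    theta = np.arctan2(dX[:, 1], dX[:, 0])   # local tangent angle
    return np.sum(gamma(theta) * ds) + gamma_fs_minus_vs * (c_r - c_l)
```

For the isotropic case $\gamma\equiv 1$ and a semicircular island of unit radius, the first term reduces to the interface length $\pi$.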
As shown rigorously (and in detail) in Appendix A, the first-order thermodynamic variations of the total free energy $W$ with respect to the film/vapor interface profile $\Gamma$ and the two contact points $c_r$ and $c_l$ are \begin{eqnarray} \frac{\delta W}{\delta \Gamma}&=&\Big(\gamma(\theta)+\gamma\,''(\theta)\Big)\kappa,\label{eqn_ch4:var1} \\ [0.6em] \frac{\delta W}{\delta c_r}&=&\gamma(\theta_{\rm e}^r)\cos\theta_{\rm i}^r-\gamma\,'(\theta_{\rm e}^r)\sin\theta_{\rm i}^r + (\gamma_{\scriptscriptstyle{FS}}-\gamma_{\scriptscriptstyle{VS}}), \label{eqn_ch4:var2}\\ [0.6em] \frac{\delta W}{\delta c_l}&=&-\Big[\gamma(\theta_{\rm e}^l)\cos\theta_{\rm i}^l-\gamma\,'(\theta_{\rm e}^l)\sin\theta_{\rm i}^l+ (\gamma_{\scriptscriptstyle{FS}}-\gamma_{\scriptscriptstyle{VS}})\Big], \quad \label{eqn_ch4:var3} \end{eqnarray} where $\kappa$ is the curvature of the interface curve $\Gamma$. From the Gibbs-Thomson relation~\cite{Mullins57,Sutton95} (in terms of the curvature, Eq.~\eqref{eqn_ch4:var1}), we can define the chemical potential $\mu$ at any point along the interface curve $\Gamma$. 
Variations in the chemical potential along the interface give rise to a material (film) flux $\vec J$ along the interface and the normal velocity $V_n$ of the film/vapor interface~\cite{Wang15,Mullins57}: \begin{equation}\label{eq:mu} \mu=\Omega_0\frac{\delta W}{\delta \Gamma}=\Omega_0\Big(\gamma(\theta)+\gamma\,''(\theta)\Big)\kappa = \Omega_0\widetilde{\gamma}(\theta) \kappa, \end{equation} \begin{equation}\label{eq:normalvel} \vec J = -\frac{D_s\nu}{k_B\,T_e}\nabla_s\, \mu,\quad V_n=-\Omega_0 (\nabla_s \cdot \vec J)=\frac{D_s\nu\Omega_0}{k_B\,T_e}\frac{\partial^2 \mu}{\partial s^2}, \end{equation} where $\nabla_s$ is the surface gradient operator (i.e., the derivative with respect to position $s$ along $\Gamma$), $\Omega_0$ is the atomic volume of the film material, $D_s$ is the coefficient of surface diffusion, $\nu$ is the number of diffusing atoms per unit length, and $k_BT_e$ is the thermal energy. Equations~\eqref{eqn_ch4:var2} and~\eqref{eqn_ch4:var3} are used to construct the equations of motion for the moving contact points in the manner described in~\cite{Wang15,Jiang16}, \begin{eqnarray} \frac{d c_l(t)}{d t} &=&-\eta\frac{\delta W}{\delta c_l}, \quad \text{at} \quad c = c_l, \label{eqn_ch4:frelaxationright}\\ [0.5em] \frac{d c_r(t)}{d t}&=&-\eta\frac{\delta W}{\delta c_r}, \quad \text{at} \quad c = c_r, \label{eqn_ch4:frelaxationleft} \end{eqnarray} where the constant $\eta \in (0, \infty)$ represents a contact line (or point) mobility. Next, we nondimensionalize the equations by scaling all lengths by a constant characteristic length scale $R_0$ (e.g., the initial thickness of the thin film layer), energies by the constant mean surface energy density $\gamma_0=\frac{1}{2\pi}\int_{-\pi}^\pi \gamma(\theta)d\theta$, and time by $t_0=R_0^4/(B\gamma_0)$, where $B:= D_s\nu\Omega_0^2/(k_BT_e)$ is a material constant (the contact line mobility is therefore scaled by $B/R_0^3$).
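In dimensionless form the chemical potential reduces to $\mu=\widetilde{\gamma}(\theta)\kappa$, which can be evaluated pointwise with finite differences. For illustration we use a closed (periodic) polyline, which avoids the contact point boundary conditions; the discretization and function names are ours, not the parametric finite element scheme used in this work:

```python
import numpy as np

def chemical_potential(X, gamma, d2gamma):
    """Dimensionless chemical potential mu = (gamma + gamma'') * kappa on a
    closed polyline X (periodic), via central finite differences in the
    uniform parameter; the curvature formula is parametrization-invariant.

    gamma, d2gamma : callables giving gamma(theta) and gamma''(theta).
    """
    Xp = (np.roll(X, -1, axis=0) - np.roll(X, 1, axis=0)) / 2.0  # X'
    Xpp = np.roll(X, -1, axis=0) - 2.0 * X + np.roll(X, 1, axis=0)  # X''
    sp = np.linalg.norm(Xp, axis=1)                      # |X'|
    kappa = (Xp[:, 0] * Xpp[:, 1] - Xp[:, 1] * Xpp[:, 0]) / sp**3
    theta = np.arctan2(Xp[:, 1], Xp[:, 0])               # tangent angle
    return (gamma(theta) + d2gamma(theta)) * kappa
```

On a uniformly sampled unit circle with isotropic $\gamma\equiv 1$, the computed $\mu$ is constant and close to the exact value $\kappa=1$.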
With these scalings, the above sharp-interface model for the interface evolution (Eq.~\eqref{eq:normalvel}) becomes \begin{equation}\label{eq:GE_curve_weak} \begin{cases} \displaystyle \frac{\partial{\mathbf{X}}}{\partial t}=V_n \mathbf{n} = \frac{\partial^2 \mu}{\partial s^2} \mathbf{n},\\[0.8em] \displaystyle \mu=\widetilde{\gamma}(\theta) \kappa = \Big(\gamma(\theta)+\gamma\,''(\theta)\Big)\kappa. \end{cases} \end{equation} Note that $\mathbf{X}$, $t$, $V_n$, $s$, $\mu$, $\gamma$, $\kappa$ and $\eta$ are now dimensionless; we retain the same notation for brevity. The dimensionless interface evolution equation~\eqref{eq:GE_curve_weak} is subject to the following dimensionless boundary conditions: \begin{itemize} \item[(i)] Contact point condition ({\bf {BC1}}) \begin{equation}\label{eq:BC1_curve_weak} \mathbf{X}(0, t) = \hat{\mathbf{X}}(c_l), \quad \mathbf{X}(L, t) = \hat{\mathbf{X}}(c_r). \end{equation} \end{itemize} This ensures that the left and right contact points move along the rigid, curved substrate $\hat{\Gamma}$ and simultaneously lie on both the film/vapor interface $\Gamma$ and the substrate $\hat{\Gamma}$. \begin{itemize} \item[(ii)] Relaxed/dissipative contact angle condition ({\bf {BC2}}) \begin{equation}\label{eq:BC2_curve_weak} \frac{d c_l}{d t} = \eta\, f(\theta_{\rm e}^l, \theta_{\rm i}^l), \qquad \frac{d c_r}{d t} = -\eta\, f(\theta_{\rm e}^r, \theta_{\rm i}^r), \end{equation} \end{itemize} where \begin{equation*} f(\theta_{\rm e}, \theta_{\rm i}) := \gamma(\theta_{\rm e})\cos\theta_{\rm i}-\gamma\,'(\theta_{\rm e})\sin\theta_{\rm i} - \sigma, \end{equation*} and $\sigma := (\gamma_{\scriptscriptstyle{VS}} - \gamma_{\scriptscriptstyle{FS}})/\gamma_0$. The contact angles $\theta_{\rm e}^l, \theta_{\rm e}^r, \theta_{\rm i}^l, \theta_{\rm i}^r$ are related as per Eq.~\eqref{eq:def_thetad} and hence are intrinsically related to the substrate shape.
\begin{itemize} \item[(iii)] Zero-mass flux condition ({\bf {BC3}}) \begin{equation}\label{eq:BC3_curve_weak} \frac{\partial \mu}{\partial s}(0, t)=0, \qquad \frac{\partial \mu}{\partial s}(L, t)=0. \end{equation} \end{itemize} This condition implies that the total mass of the film is conserved (see Appendix B). If the film evolves to a stationary state, the contact angle evolution equation~\eqref{eq:BC2_curve_weak} ensures that the equilibrium contact angle satisfies $\gamma(\theta_{\rm e})\cos\theta_{\rm i}-\gamma\,'(\theta_{\rm e})\sin\theta_{\rm i} = \sigma$. This is the classical Young equation generalized to the curved substrate case. If the surface energy is isotropic (i.e., $\gamma(\theta) \equiv 1$ and $\gamma\,'(\theta) \equiv 0$), the generalized Young equation reduces to the classical isotropic Young equation~\cite{Young1805}, i.e., $\cos\theta_{\rm i}=\sigma$. On the other hand, when the substrate is flat ($\hat{\theta}\equiv 0$), the generalized Young equation reduces to the classical anisotropic Young equation~\cite{Wang15, Jiang16} (in this case $\theta_{\rm e} = \theta_{\rm i}$). However, when the substrate is curved, we cannot, in general, explicitly determine the static intrinsic angles for arbitrary anisotropy. We demonstrate, in Appendix B, that the general (anisotropic) evolution equation \eqref{eq:GE_curve_weak} together with boundary conditions \eqref{eq:BC1_curve_weak}-\eqref{eq:BC3_curve_weak} ensures that the total film mass (area) is conserved and the total free energy of the system decreases monotonically during film morphology evolution. From a mathematical point of view, we note that the governing equations are well-posed when the surface energy is isotropic or weakly anisotropic. On the other hand, when the surface energy is strongly anisotropic, the equations become of the anti-diffusion type (i.e., they contain a second-order diffusion term with a negative ``diffusion'' coefficient) and are ill-posed.
We handle this ill-posedness by regularizing the equations with additional higher-order terms (e.g., see~\cite{Jiang16}). \section{Island evolution on curved substrates} We employ a parametric finite-element method to numerically solve the above mathematical model for the evolution of islands on curved substrates. The numerical algorithm is described in Appendix C and was previously applied to solid-state dewetting problems on flat substrates in~\cite{Bao17}. Our numerical examples all use an anisotropic film/vapor surface energy (density) of the following form \begin{equation} \gamma(\theta) = 1+\beta\cos(m\theta), \end{equation} where the parameter $\beta$ controls the degree of the anisotropy and $m$ describes the order of the rotational symmetry. For $\beta=0$, the surface energy is isotropic; for $0<\beta<\frac{1}{m^2-1}$, it is weakly anisotropic; and for $\beta>\frac{1}{m^2-1}$, it is strongly anisotropic. We focus here on the case of large contact point mobility ($\eta=100$). A more detailed discussion of the influence of the parameter $\eta$ and contact-line drag on the kinetic evolution process (and even on stationary morphologies) can be found in~\cite{Wang15}. \subsection{Small island equilibrium } Isotropic islands on flat substrates evolve to the same stationary state determined by the equilibrium contact angle, independent of the initial island shape. However, this is not necessarily the case when the substrate is not flat, as illustrated in Fig.~\ref{fig:3-1}~(a1-a2) for the case of a sawtooth-profile substrate. Here, the stationary island shapes (evolving from different initial island shapes) have very different macroscopic aspect ratios and cover vastly different substrate lengths (areas). This suggests the possibility of manipulating island shape through control of substrate morphology and/or initial island profile.
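Returning to the surface energy $\gamma(\theta)=1+\beta\cos(m\theta)$ introduced above, the three anisotropy regimes follow from the sign of the minimum surface stiffness $\gamma(\theta)+\gamma\,''(\theta)=1-\beta(m^2-1)\cos(m\theta)$, which first becomes negative when $\beta>\frac{1}{m^2-1}$. A minimal helper (our own illustration, not part of the model) classifying a given $(\beta, m)$ pair:

```python
def anisotropy_regime(beta, m):
    """Classify gamma(theta) = 1 + beta*cos(m*theta) by the sign of the
    minimum surface stiffness, 1 - beta*(m**2 - 1)."""
    if beta == 0:
        return "isotropic"
    threshold = 1.0 / (m ** 2 - 1)
    return "weakly anisotropic" if beta < threshold else "strongly anisotropic"

# m = 4, as used in the examples below: the weak/strong boundary is beta = 1/15.
print(anisotropy_regime(0.06, 4))   # weakly anisotropic
print(anisotropy_regime(0.10, 4))   # strongly anisotropic
```

The value $\beta=0.06$ used in the simulations below thus lies safely in the weakly anisotropic (well-posed) regime for $m=4$.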
Fig.~\ref{fig:3-1} (b1-b2) shows two stationary island shapes for islands on a circular substrate with exactly the same value of the material parameter $\sigma$. In the first case, the island surface energy is isotropic, while in the second the surface energy is weakly anisotropic. Initially, the two islands have the same shapes and locations. As can be clearly seen from the figure, the isotropic island evolves to a symmetric circular shape with static intrinsic contact angle $2\pi/3$, while the anisotropic island evolves to an asymmetric island shape (the shape itself is determined by the surface energy anisotropy) with two different left and right static intrinsic contact angles. These numerical results indicate that surface energy anisotropy can lead to multiple static intrinsic contact angles on curved substrates. The presence of different (left and right) contact angles on the same island was observed earlier for strongly anisotropic islands on a flat substrate but not for weakly anisotropic islands~\cite{Bao17b}. This feature of weakly anisotropic islands arises because the substrate is curved.
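For the flat-substrate case ($\theta_{\rm e}=\theta_{\rm i}$), the generalized Young equation becomes the scalar equation $\gamma(\theta)\cos\theta-\gamma\,'(\theta)\sin\theta=\sigma$, which can be solved numerically. The sketch below is our own illustration: it uses simple bisection and assumes a single sign change on $[0,\pi]$, which holds for the weakly anisotropic energies used here. The isotropic case recovers $\theta_{\rm i}=\arccos\sigma$; e.g., $\sigma=-0.5$ gives $2\pi/3$, the value quoted for Fig.~\ref{fig:3-1}(b1). (The flat-substrate anisotropic root is illustrative only and need not match the curved-substrate angles of Fig.~\ref{fig:3-1}(b2).)

```python
import math

def young_lhs(theta, gamma, dgamma):
    """Left-hand side of the flat-substrate anisotropic Young equation."""
    return gamma(theta) * math.cos(theta) - dgamma(theta) * math.sin(theta)

def solve_young(sigma, gamma, dgamma, lo=0.0, hi=math.pi, tol=1e-12):
    """Bisection for gamma(t)*cos(t) - gamma'(t)*sin(t) = sigma on [lo, hi].
    Assumes a single sign change on the bracket (weak anisotropy)."""
    f = lambda t: young_lhs(t, gamma, dgamma) - sigma
    assert f(lo) * f(hi) < 0, "no sign change on bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Isotropic case gamma = 1: recovers the classical Young equation cos(theta) = sigma.
iso = solve_young(-0.5, lambda t: 1.0, lambda t: 0.0)
print(iso, 2 * math.pi / 3)          # both ~2.0944

# Weakly anisotropic case gamma(theta) = 1 + 0.06*cos(4*theta) (m = 4, beta = 0.06).
aniso = solve_young(-0.5, lambda t: 1 + 0.06 * math.cos(4 * t),
                    lambda t: -0.24 * math.sin(4 * t))
print(aniso)
```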
\begin{figure} \centering \includegraphics[width=.48\textwidth]{fig_curved/fig2_1.eps} \includegraphics[width=.48\textwidth]{fig_curved/fig2_2.eps} \caption{(a1-a2) show two equilibrium isotropic islands with material constant $\sigma = 0$ (intrinsic contact angles are both $\pi/2$) on a sawtooth substrate starting from two different initial island shapes (indicated by the red dashed lines); (b1-b2) show two equilibrium shapes of island films with material constant $\sigma = -0.5$ on a circular substrate with radius $R = 20$, where (b1) is the isotropic case with static intrinsic contact angle $2\pi/3$, and (b2) is the weakly anisotropic case (with $m = 4, \beta = 0.06$) with static intrinsic contact angles $2.025$ (left) and $2.319$ (right).} \label{fig:3-1} \end{figure} \subsection{Large island pinch-off } When the aspect ratio of an island film is larger than a critical value, the island will pinch off and break up into two or more islands. In analogy to pinch-off on flat substrates~\cite{Dornel06,Wang15}, we perform numerical simulations of large islands on circular curved substrates. Fig.~\ref{fig:3-2} shows several configurations during the evolution of a large-aspect-ratio island on a circular substrate of radius $R=30$. As shown in Fig.~\ref{fig:3-2}, surface diffusion very quickly leads to the formation of ridges at the island edges, followed by valleys; then, as time evolves, the two valleys merge near the island center; eventually, the valley at the center of the island deepens until it touches the substrate, leading to a pinch-off event that separates the initial island into a pair of islands. This evolution is very similar to that on flat substrates~\cite{Wang15}.
\begin{figure} \centering \includegraphics[width=.49\textwidth]{fig_curved/fig3.eps} \caption{Morphology evolution of a large island film (aspect ratio $L=60$) with weakly anisotropic surface energy on a circular substrate of radius $R = 30$, where $m = 4, \beta = 0.06, \sigma = -\sqrt{3}/2$.} \label{fig:3-2} \end{figure} We now investigate how the substrate curvature affects the critical pinch-off length $L_c$ of island films (above which pinch-off occurs). Fig.~\ref{fig:3-3} shows the number of small islands formed during solid-state dewetting on circular substrates of radii $R = 30$ and $60$ for isotropic surface energy and Young angles $\theta_i\in [0, \pi]$. The boundary separating domains with different numbers of pinched-off islands is well fitted by $L_c=79.2/\sin(\theta_i/2)+0.2$ for $R=30$ and $L_c = 85.0/\sin(\theta_i/2)+0.3$ for $R = 60$, respectively. We performed similar calculations for substrates of several curvatures and intrinsic contact angles $\theta_i$. The resultant critical pinch-off lengths for different $R$ and $\theta_i$ are shown in Table~\ref{tab_ch4:length} (the flat substrate result, $R\to\infty$, is obtained from the fitting formula of Dornel~\cite{Dornel06}). This table shows that the critical pinch-off length increases with decreasing isotropic Young angle $\theta_i$ and increasing substrate radius $R$. We fit these numerical results for the critical pinch-off film length $L_c$ (as a function of isotropic Young angle $\theta_i$ and substrate radius $R$) to the functional form \begin{equation} \label{eqn:curve} L_c = \frac{a(R)}{\sin(\theta_i/2)} + b(R), \end{equation} where the functions $a(R)$ and $b(R)$ are well approximated by $a(R) \approx -320.2/R + 89.9$ and $b(R)\approx 0.0$ for $R\ge 20$.
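The fitted form Eq.~\eqref{eqn:curve} can be evaluated directly. The sketch below (our own illustration, valid only in the fitted range $R\ge 20$) reproduces the qualitative trends of Table~\ref{tab_ch4:length}: $L_c$ grows as $\theta_i$ decreases and as $R$ increases.

```python
import math

def critical_pinchoff_length(theta_i, R):
    """Fitted critical length L_c(theta_i, R) from Eq. (eqn:curve),
    with a(R) ~ -320.2/R + 89.9 and b(R) ~ 0 (valid for R >= 20)."""
    a = -320.2 / R + 89.9
    b = 0.0
    return a / math.sin(theta_i / 2) + b

# Reproduce the trends of Table tab_ch4:length: L_c grows as theta_i
# decreases and as R increases.
for R in (30, 60):
    print(R, [round(critical_pinchoff_length(t, R), 1)
              for t in (math.pi, 2 * math.pi / 3, math.pi / 3)])
```

For example, $L_c(\pi, 30)\approx 79.2$, close to the tabulated value $77.5$ and matching the fitted boundary in Fig.~\ref{fig:3-3}(a).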
\begin{figure}[!htp] \centering \includegraphics[width=.4\textwidth]{fig_curved/fig4_1.eps} \includegraphics[width=.4\textwidth]{fig_curved/fig4_2.eps} \caption{The number of islands formed from the retraction of a high-aspect-ratio island (with isotropic Young angle $\theta_i$; $\sigma=\cos\theta_i$) as a function of initial length $L$ on circular substrates of radii (a) $R = 30$ and (b) $R = 60$. The solid black lines separating the one and two island domains correspond to (a) $L_c = 79.2/\sin(\theta_i/2) + 0.2$, (b) $L_c = 85.0/\sin(\theta_i/2) + 0.3$. The black dashed line in (b) is the solid black line in (a).} \label{fig:3-3} \end{figure} \begin{table}[!htp] \renewcommand\arraystretch{1.4} \centering \begin{tabular}{l||c|c|c|c|c|c} \hline \hline & $R = 20$ & $R = 30$ & $R = 40$ & $R = 50$ & $R = 60$ & $R \to \infty$ \\ \hline $\theta_i = \pi$ & 73.5 & 77.5 & 79.5 & 80.5 & 81.5 & 87.9\\ \hline $\theta_i = \frac{11}{12}\pi$ & 74.5 & 78.5 & 80.5 & 81.5 & 82.5 & 88.8\\ \hline $\theta_i = \frac{10}{12}\pi$ & 76.5 & 81.5 & 83.5 & 84.5 & 84.5 & 91.3\\ \hline $\theta_i = \frac{9}{12}\pi$ & 80.5 & 85.5 & 87.5 & 88.5 & 89.5 & 95.9\\ \hline $\theta_i = \frac{8}{12}\pi$ & 86.5 & 91.5 & 94.5 & 95.5 & 96.5 & 102.9 \\ \hline $\theta_i = \frac{7}{12}\pi$ & 94.5 & 100.5 & 103.5 & 105.5 & 106.5 & 113.1 \\ \hline $\theta_i = \frac{6}{12}\pi$ & 105.5 & 113.5 & 119.5 & 119.5 & 121.5 & 128.0 \\ \hline $\theta_i = \frac{5}{12}\pi$ & 120.5 & 131.5 & 137.5 & 140.5 & 142.5 & 150.0 \\ \hline $\theta_i = \frac{4}{12}\pi$ & -- & 157.5 & 166.5 & 170.5 & 172.5 & 184.5 \\ \hline $\theta_i = \frac{3}{12}\pi$ & -- & -- & 210.5 & 219.5 & 224.5 & 243.8 \\ \hline $\theta_i = \frac{2}{12}\pi$ & -- & -- & -- & 306.5 & 319.5 & 364.6 \\ \hline \hline \end{tabular} \caption{Critical island film length $L_c$ for island break-up as a function of isotropic Young angles $\theta_i$ (i.e., the material constant $\sigma=\cos\theta_i$) and substrate radius $R$ for the isotropic surface energy case. 
The symbol ``-'' indicates that no pinch-off occurred (i.e., $L_c>2\pi R$). The $R\to\infty$ (flat substrate) data is consistent with earlier results~\cite{Dornel06}.} \label{tab_ch4:length} \end{table} \section{Migration of ``small'' islands} In this section, we examine the evolution of small islands on substrates with non-constant surface curvature. As discussed above (see Section III.1), the equilibrium shape of small islands on substrates with constant surface curvature can be determined for both isotropic and anisotropic surface energies. Interestingly, when the substrate curvature is not constant, island migration is possible. Using a simple model, Ahn and Wynblatt showed that a solid particle will migrate from convex to concave substrate sites~\cite{Ahn80}. Klinger and Rabkin, using a different algorithm, examined the motion of (for example) a particle on a substrate with a sinusoidal profile~\cite{Klinger12}. Here, we apply the proposed mathematical model to investigate the motion of a ``small'' solid particle on an arbitrarily curved substrate for the case of isotropic surface energy. As we discuss below, ``small'' implies that the product of the island size (i.e., the area of the particle in 2D) and the substrate curvature gradient is small compared with one. This implies that the relaxation time of the island shape is small compared with the time necessary for the island to translate by an island radius. Here we focus on the leading-order term in the expansion of the total free energy variation that gives rise to particle migration~\cite{Jiang18}; that is, we focus on the effect of a constant substrate curvature gradient (i.e., $\hat{\kappa}'(c) \equiv {\text{Const.}}$) on the evolution of the particle on the substrate (we assume that $\hat{\kappa}$ is positive for a convex substrate curve).
Fig.~\ref{fig:4-1}(a) shows several images during the kinetic evolution of a small, initially square, solid island evolving on a substrate with $\hat{\kappa}' = -0.01$; the evolution was determined by numerical solution of the proposed sharp-interface model. The position of the particle versus time, $P(t):=(c_l(t)+c_r(t))/2$, is shown in Fig.~\ref{fig:4-1}(b). As clearly shown, the island rapidly evolves from its initial square shape (red dashed line) into a nearly perfect circular arc (blue shape, at about $t=0.02$). After the island achieves its near-equilibrium shape, it slowly migrates down along the substrate (translates to the right in Fig.~\ref{fig:4-1}). During the migration, the island retains its near-equilibrium shape. We refer to the time associated with the relaxation of the island morphology to its near-equilibrium shape as the relaxation time $\tau_R$; from the inset of Fig.~\ref{fig:4-1}(b), we estimate $\tau_R$ to be around $10^{-2}$. Since the capillarity-driven evolution is dictated by Eq.~\eqref{eq:GE_curve_weak} (fourth-order in space, first-order in time), the characteristic island shape evolution time $\sim R_0^4$, where $R_0\sim\sqrt{A}$ is the nominal island radius. We demonstrate below that the island translation velocity is proportional to the substrate curvature gradient and inversely proportional to the nominal island radius $R_0$. This implies that the shape evolution rate is much faster than the particle translation rate when $|A\hat{\kappa}'|\ll1$. This is the case for the results shown in Fig.~\ref{fig:4-1} ($|A\hat{\kappa}'|=0.004$). Since the relaxation time is small compared with the time required for the island to move an island radius, it is reasonable to assume that the particle shape is always in equilibrium at the local substrate site~\cite{Jiang18}. \begin{figure}[htp!]
\centering \includegraphics[width = .48\textwidth]{fig_curved/fig5a.eps} \includegraphics[width = .40\textwidth]{fig_curved/fig5b.eps} \caption{(a) Simulation results for a ``small'' solid particle migrating on a curved rigid substrate with a constant curvature gradient $\hat{\kappa}'(c)\equiv -0.01$ at times $t=0, 0.02, 300, 600, 900$, respectively, where the isotropic Young angle is $\theta_i = \pi/2$, and the red dashed line represents the initial shape and location of the solid particle (its area $A=0.4$); (b) simulation results for the position of the particle $P(t)$ as a function of time.} \label{fig:4-1} \end{figure} We now examine how the island velocity $v$ varies with the substrate curvature gradient $\hat{\kappa}'$, the island area $A$ and the isotropic Young angle $\theta_i$ (i.e., the material constant is chosen as $\sigma=\cos\theta_i$). Numerical simulations were performed for several values of the substrate curvature gradient at fixed island area $A=1$ and Young angle $\theta_i = \pi/3$, and Fig.~\ref{fig:4-2}(a) shows the particle position $P(t)$ versus time. These data are well fit by straight lines, whose slope is a function of the substrate curvature gradient $\hat{\kappa}'$; i.e., the particle velocity is nearly constant after a very short transient (shown in Fig.~\ref{fig:4-1}(b)). Least-squares linear fits to these data yield the island velocity versus substrate curvature gradient $\hat{\kappa}'$ shown in Fig.~\ref{fig:4-2}(b). This plot demonstrates that the ``small''-island velocity is proportional to the substrate curvature gradient $\hat{\kappa}'$. \begin{figure}[htp!]
\centering \includegraphics[width = .45\textwidth]{fig_curved/fig6a.eps} \includegraphics[width = .45\textwidth]{fig_curved/fig6b.eps} \caption{(a) Plot of the position of the ``small'' solid island on the substrate as a function of time for different values of the substrate curvature gradient $\hat{\kappa}'$, where the black solid lines are least-squares linear fits to the numerical simulation data (points). (b) Plot of the island velocity as a function of the curvature gradient $\hat{\kappa}'$. These data are well fit by the expression $v = -1.27\,\hat{\kappa}'$ (red solid line). In all of these numerical simulations, we fix the island area to be $A = 1$ and the isotropic Young angle to be $\theta_i = \pi/3$.} \label{fig:4-2} \end{figure} We also examined the relation between the island velocity and the initial island area $A$ and Young angle $\theta_i$. The numerical simulation results for the effect of island size are shown in Fig.~\ref{fig:4-3} for a constant substrate curvature gradient $\hat{\kappa}'= -0.01$ and an isotropic Young angle $\theta_i = \pi/3$. These data demonstrate that the ``small''-island velocity is inversely proportional to the island radius (or, more precisely, to the square root of the island area $\sqrt{A}$), although there are small deviations from this relation for very small islands. The numerical simulation results for the effect of the isotropic Young angle $\theta_i$ are shown in Fig.~\ref{fig:4-4} for fixed curvature gradient $\hat{\kappa}'= -0.01$ and fixed island size $A=1$. The island velocity increases with decreasing Young angle $\theta_i$ and decreases to zero as $\theta_i \to \pi$. The latter observation is consistent with the fact that a completely dewetting island ($\theta_i = \pi$) does not cover the substrate and hence its free energy is independent of its location on the curved substrate. \begin{figure}[htp!]
\centering \includegraphics[width = .45\textwidth]{fig_curved/fig7.eps} \caption{Plot of the island velocity as a function of $1/\sqrt{A}$. These data are well fit by the linear relation $v = 0.01/\sqrt{A}$ (blue solid line). In all of these numerical simulations, we fix the substrate curvature gradient to be $\hat{\kappa}'= -0.01$ and the isotropic Young angle to be $\theta_i = \pi/3$.} \label{fig:4-3} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width = .45\textwidth]{fig_curved/fig8.eps} \caption{Plot of the island velocity as a function of the isotropic Young angle $\theta_i$. In all of these numerical simulations, we set the substrate curvature gradient to be $\hat{\kappa}'= -0.01$ and the initial island area to $A = 1$.} \label{fig:4-4} \end{figure} \begin{figure}[hbp!] \centering \includegraphics[width = .48\textwidth]{fig_curved/fig9.eps} \caption{Comparison between the full model and the ODE model (i.e., Eq.~\eqref{eq:ode}) for the position of a ``small'' particle at different times during its migration on a sinusoidal substrate $\hat{y} = 4\sin(\hat{x}/4)$, where the red line represents the numerical result of solving the full model, i.e., Eq.\eqref{eq:GE_curve_weak} together with the boundary conditions \eqref{eq:BC1_curve_weak}-\eqref{eq:BC3_curve_weak}, and the blue dashed line represents the numerical result of solving the ODE model, i.e., Eq.~\eqref{eq:ode}, with $C(\theta_i) = 1.2$.
The other parameters are chosen as $A = 1, \theta_i = \pi/3$.} \label{fig:4-5} \end{figure} Based upon the numerical results presented here, we conclude that the migration velocity of ``small'' solid islands on curved substrates is well described by the following relation: \begin{equation}\label{eq:ode} v(t):= \frac{{\rm d} P(t)}{\rm d t} = -B \gamma_0 C(\theta_i)\frac{\hat{\kappa}'(P)}{\sqrt{A}}, \end{equation} where $B:= D_s\nu\Omega_0^2/(k_BT_e)$ is a material constant, $\gamma_0$ is the isotropic particle surface energy density, $C(\theta_i)$ is a function of the isotropic Young angle $\theta_i$ that decreases with increasing $\theta_i$, and $\hat{\kappa}'(P)$ is the local substrate curvature gradient at the point $P$ on the curved substrate, where $P\in[0, \hat{L}]$ is the arc length along the curved substrate. In a forthcoming paper~\cite{Jiang18}, based upon Onsager's variational principle, we obtain an analytical expression for the function $C(\theta_i)$ that is consistent with the above numerical results. \begin{figure*}[htp] \centering \includegraphics[width=.92\textwidth]{fig_curved/fig10.eps} \caption{Solid-state dewetting of thin films with different initial lengths on a pre-patterned sinusoidal substrate, where the initial length of the thin film is chosen as $100$, $150$ and $200$, respectively, and the length scale $R_0$ is chosen as the initial thickness of the thin film. The magenta dashed line is the initial shape of the thin film, and the shaded blue region is the final equilibrium pattern.} \label{fig:5-1} \end{figure*} While the above numerical results focused on substrates with fixed curvature gradients, we can characterize an arbitrary substrate profile by a position-dependent substrate curvature gradient $\hat{\kappa}'(P)$.
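Eq.~\eqref{eq:ode} is a scalar ODE in the arc-length position $P$ and is straightforward to integrate numerically. The sketch below is our own illustration, in dimensionless units with $B\gamma_0 = 1$; it uses forward Euler and checks the constant-gradient case of Fig.~\ref{fig:4-1} ($\hat{\kappa}' = -0.01$, $C = 1.2$, $A = 1$), for which the velocity is the constant $v = 0.012$ and hence $P(900) = 10.8$. For a general substrate one would supply $\hat{\kappa}'(P)$ evaluated numerically along the arc length.

```python
def migrate(kappa_prime, t_end, dt=0.01, P0=0.0, C=1.2, A=1.0):
    """Forward-Euler integration of dP/dt = -C * kappa'(P) / sqrt(A)
    (Eq. eq:ode in dimensionless units, taking B*gamma_0 = 1)."""
    P, t = P0, 0.0
    sqrtA = A ** 0.5
    while t < t_end:
        P += dt * (-C * kappa_prime(P) / sqrtA)
        t += dt
    return P

# Constant curvature gradient kappa' = -0.01 (the case of Fig. 4-1):
# constant velocity v = -1.2 * (-0.01) = 0.012, so P(900) = 10.8.
P_end = migrate(lambda P: -0.01, t_end=900.0)
print(P_end)
```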
Hence, since we can determine the velocity of a ``small'' solid particle at any point along the substrate, numerically solving the ordinary differential equation \eqref{eq:ode} allows us to predict the trajectory of a ``small'' solid particle on a substrate surface of arbitrary shape. To validate this approach, we numerically simulate the migration of ``small'' solid particles ($A = 1, \theta_i = \pi/3$) on a sinusoidal substrate $\hat{y} = 4\sin(\hat{x}/4)$. The results are shown in Fig.~\ref{fig:4-5}, where the red line represents the results of the numerical simulation via the full model, i.e., Eq.\eqref{eq:GE_curve_weak} together with the boundary conditions \eqref{eq:BC1_curve_weak}-\eqref{eq:BC3_curve_weak}, while the blue dashed line represents the solution of the ordinary differential equation \eqref{eq:ode} with $C(\pi/3) = 1.2$ (see Fig.~\ref{fig:4-4}). These results show excellent agreement between the ordinary differential equation model \eqref{eq:ode} and the numerical solution of the full model. \section{Templated solid-state dewetting} In this section, we apply the sharp-interface model to simulate templated solid-state dewetting on a pre-patterned substrate. Recent experiments have demonstrated that templated solid-state dewetting can be used to controllably produce complex and well-ordered patterns~\cite{Thompson12,Ye11b,Giermann05,Giermann11}. For example, Giermann and Thompson used topographically patterned substrates to modulate the curvature of thin gold films, creating dewetting-driven instabilities that result in well-ordered patterns of particles with nearly uniform size; furthermore, they observed four general types of island morphology on this inverted-pyramid topography~\cite{Giermann05}. In a companion paper~\cite{Giermann11}, they proposed two simple models to semi-quantitatively explain the observed phenomena.
In this section, we choose the pre-patterned substrate to be the sinusoidal curve $\hat{y}=H\sin(\omega \hat{x})$ with amplitude $H$ and frequency $\omega$, and apply the proposed sharp-interface model to investigate the relation between the different types of periodic patterns and the substrate parameters (i.e., $H$ and $\omega$). Fig.~\ref{fig:5-1} depicts how the finite (initial) length of the thin film influences the equilibrium pattern. As shown in the figure, the finite length of the thin film results in non-periodic patterns due to edge effects, but as the initial length increases, the equilibrium shape approaches a periodic pattern. Note that during the numerical simulations, when a pinch-off event happens, a new contact point is generated; after the pinch-off event, we compute each part of the pinched-off curve separately. In the following, we perform numerical simulations to investigate this relation. In order to obtain ``periodic'' equilibrium patterns, we choose the initial length of the thin films to be long enough. This is the common case, because thin films often have very large aspect ratios. As shown in Fig.~\ref{fig:5-2}, we divide the observed periodic equilibrium patterns into the following four categories of dewetting on a sinusoidal substrate: (I) one particle per pit with no empty intermediate pits; (II) one particle occupies one pit with empty intermediate pits; (III) one particle occupies multiple pits with empty intermediate pits; (IV) particles of different sizes. \begin{figure*}[htp!]
\centering \includegraphics[width=.92\textwidth]{fig_curved/fig11.eps} \caption{Phase diagram of the four observed periodic categories of solid-state dewetting on a pre-patterned sinusoidal substrate: (I) one particle per pit with no empty intermediate pits, (II) one particle occupies one pit with empty intermediate pits, (III) one particle occupies multiple pits with empty intermediate pits, (IV) particles of different sizes. In all of the above numerical simulations, the isotropic Young angle is $\theta_i = 2\pi/3$ and the initial length of the thin film is chosen to be long enough.} \label{fig:5-2} \end{figure*} The phase diagram of the four periodic categories of dewetting is also depicted in Fig.~\ref{fig:5-2}. As shown in the phase diagram, when the amplitude $H>R_0$ (where $R_0$ is the initial thickness of the thin film, chosen as the length scale), the equilibrium pattern falls into category (I). This can be explained as follows: the thin film tends to flatten in order to minimize the total interfacial free energy, and if the amplitude of the sinusoidal substrate is too large, the film touches the substrate before flattening, resulting in one particle in each pit. A simple model~\cite{Giermann11} was proposed to predict the critical amplitude of the substrate, i.e., the condition under which the area of the thin film equals the area of one pit. Here, for a sinusoidal substrate, a simple calculation gives the initial area of the film in one pit as $2\pi R_0/\omega$ and the area of one pit as $2\pi H/\omega$. Setting these equal, the critical amplitude is $R_0$, in excellent agreement with our numerical results. On the other hand, as shown in Fig.~\ref{fig:5-2}, when $H<R_0$, the equilibrium pattern falls into one of three possible categories: (II)-(IV). Among these, categories (II) and (III) both yield particles of uniform size, and the intermediate spacing between the particles can be well controlled by adjusting the parameters $H$ and $\omega$.
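The critical-amplitude argument above is simple arithmetic. The sketch below (our own illustration, using the pit-area convention $2\pi H/\omega$ stated above) confirms that the category-(I) threshold $H_c = R_0$ is independent of the substrate frequency $\omega$.

```python
import math

R0 = 1.0  # initial film thickness (the length scale)

def film_area_per_period(R0, omega):
    return 2 * math.pi * R0 / omega        # thickness times one period

def pit_area(H, omega):
    return 2 * math.pi * H / omega          # pit-area convention of the simple model

def predict_category_I(H, omega):
    """Category (I) -- one particle per pit -- is predicted when the film in
    one period does not exceed the capacity of a single pit."""
    return film_area_per_period(R0, omega) <= pit_area(H, omega)

# The comparison reduces to H >= R0, independent of omega:
print([predict_category_I(1.5, w) for w in (0.2, 1.0, 5.0)])   # all True
print([predict_category_I(0.5, w) for w in (0.2, 1.0, 5.0)])   # all False
```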
When the amplitude $H$ is fixed and the frequency $\omega$ exceeds a critical value, the final pattern falls into category (IV), i.e., particles of non-uniform size appear. Numerical simulations indicate that this critical frequency increases as the amplitude $H$ decreases, and as ${H}/{R_0}$ goes to zero, the critical frequency goes to infinity. Furthermore, in this regime (i.e., $H/R_0\ll 1$), our numerical simulations demonstrate that the periodicity of the final equilibrium pattern is very close to the one predicted by Wong {\it et al.} in their ``mass-shedding model'' for a thin film on a planar substrate~\cite{Wong00}. \section{Conclusions} In this paper, we proposed a sharp-interface mathematical model for simulating solid-state dewetting of thin films on a non-flat rigid substrate in two dimensions, and applied this model to study several interesting phenomena of solid-state dewetting on non-flat substrates. First, we rigorously derived the governing equations of solid-state dewetting from the thermodynamic variation of the total interfacial free energy functional. The morphology evolution of thin films is governed by surface diffusion and contact point migration on a non-flat rigid substrate curve. As in the flat substrate case~\cite{Wang15,Jiang16}, we introduced relaxation kinetics with a finite contact point mobility to describe the contact point migration. For equilibrium shapes, we obtained a bivariate equation (referred to as the generalized Young equation) that determines the static intrinsic and extrinsic contact angles of equilibrium shapes. This generalized Young equation reduces to the classical isotropic/anisotropic Young equation when the substrate is flat~\cite{Min06,Wang15,Pierre16,Bao17b}. Second, we used a parametric finite element method to numerically solve the proposed mathematical model.
Extensive numerical experiments were performed to examine several interesting examples of solid-state dewetting of thin films on curved substrates, namely, equilibrium shapes of small islands, pinch-off of large islands, migration of ``small'' solid particles on curved substrates and template-assisted solid-state dewetting on a pre-patterned sinusoidal substrate. For the equilibrium shapes of small islands, we found that on curved substrates different initial shapes may evolve into different equilibrium morphologies, even in the isotropic case, and that weak anisotropy can also lead to asymmetric equilibrium shapes with multiple intrinsic contact angles. For the pinch-off of large islands, we found that the critical pinch-off length $L_c$ increases as the isotropic Young angle $\theta_i$ decreases and as the radius $R$ of the circular substrate increases, and we gave a simple fitting formula for $L_c$ as a function of $\theta_i$ and $R$. For the migration of a ``small'' solid particle on a curved substrate with a constant substrate curvature gradient $\hat{\kappa}'$, our numerical results demonstrated that the migration velocity $v$ is proportional to $\hat{\kappa}'$, inversely proportional to the square root of the area of the particle $\sqrt{A}$, and, furthermore, decreases as the isotropic Young angle increases from $0$ to $\pi$. For templated solid-state dewetting of thin films on a sinusoidal substrate, we observed four periodic categories of dewetting, which have been studied experimentally and theoretically for a similar pre-patterned substrate in Ref.~\cite{Giermann05}. Our simulation results are able to capture many of the complexities associated with solid-state dewetting experiments on pre-patterned curved substrates~\cite{Ahn80,Kim85,Giermann05,Giermann11}. \section*{Acknowledgement} This work was partially supported by the National Natural Science Foundation of China Nos. 11871384 (W.J.) and 91630207 (Y.W.
and W.B.), the Natural Science Foundation of Hubei Province No. 2018CFB466 (W.J.), the NSF Division of Materials Research through award DMR 1609267 (D.J.S.) and the Ministry of Education of Singapore grant R-146-000-247-114 (W.B.). This work was partially done while the authors were visiting the Institute for Mathematical Sciences, National University of Singapore, in 2018.
\section{Introduction} In the simple binary hypothesis testing problem, one is given a source sequence $Y^n$ that is known to be generated in an i.i.d.\ fashion from one of two known distributions, $P_1$ or $P_2$. One is then asked to design a test to decide which distribution generated the sequence. There is a natural trade-off between the type-I and type-II error probabilities. This is quantified by the Chernoff-Stein lemma~\cite{chernoff1952measure} in the Neyman-Pearson setting, in which the type-I error probability decays exponentially fast in $n$ with exponent given by $D(P_2\| P_1)$ if the type-II error probability is upper bounded by some fixed $\varepsilon \in (0,1)$. Blahut~\cite{blahut1974hypothesis} established the tradeoff between the exponents of the type-I and type-II error probabilities. Strassen~\cite{strassen1962asymptotische} derived a refinement of the Chernoff-Stein lemma. This area of study is now commonly known as {\em second-order asymptotics}, and it quantifies the backoff from $D(P_2\|P_1)$ one incurs at finite sample sizes and non-vanishing type-II error probabilities $\varepsilon\in (0,1)$. In all these analyses, the likelihood ratio test~\cite{poor2013introduction} is optimal. However, in real-world machine learning applications, the generating distributions are \emph{not} known. In the {\em binary} classification framework, one is given two training sequences, one generated from $P_1$ and the other from $P_2$. Using these training sequences, one attempts to classify a test sequence according to whether one believes that it is generated from $P_1$ or $P_2$. \vspace{-.01in} \subsection{Main Contributions} \vspace{-.01in} Instead of algorithms, in this paper we are concerned with the information-theoretic limits of the binary classification problem.
This was first considered by Gutman, who proposed a type-based (empirical distribution-based) test~\cite[Eq.~(6)]{gutman1989asymptotically} and proved that this test is asymptotically optimal in the sense that any other test that achieves the same exponential decay of the type-I error probability for {\em all} pairs of distributions necessarily has a larger type-II error probability for any {\em fixed} pair of distributions. Inspired by Gutman's~\cite{gutman1989asymptotically} and Strassen's~\cite{strassen1962asymptotische} seminal works, and by practical applications where the number of training and test samples is limited (due to the prohibitive cost of obtaining labeled data), we derive refinements to the tradeoff between the type-I and type-II error probabilities for such tests. In particular, we derive the exact second-order asymptotics~\cite{strassen1962asymptotische,polyanskiy2010finite,hayashi2009information} for binary classification. Our main result asserts that Gutman's test is second-order optimal. The proofs follow by judiciously modifying and refining Gutman's arguments in~\cite{gutman1989asymptotically} in both the achievability and converse proofs. In the achievability part, we apply a Taylor expansion to a generalized form of the Jensen-Shannon divergence~\cite{lin1991divergence} and apply the Berry-Esseen theorem to analyze Gutman's test. The converse part follows by showing that Gutman's type-based test is approximately optimal in a certain sense, to be made precise in Lemma~\ref{anytotype}. This study provides intuition for the non-asymptotic fundamental limits, and our results have the potential to allow practitioners to gauge the effectiveness of various classification algorithms. Second, we discuss three consequences of our main result. The first asserts that the largest exponential decay rate of the maximal type-I error probability is a generalized version of the Jensen-Shannon divergence, defined in~\eqref{def:GJS} to follow.
This result can be seen as a counterpart of the Chernoff-Stein lemma~\cite{chernoff1952measure}, which is applicable to binary hypothesis testing. Next, we show that our main result can be applied to obtain a second-order asymptotic expansion for the fundamental limits of the two sample homogeneity testing problem~\cite[Sec.~II-C]{unnikrishnan2016weak} and the closeness testing problem~\cite{batu2013testing,acharya2014sublinear,chan2014optimal}. Finally, we consider the dual setting of the main result in which the type-I error probabilities are non-vanishing while the type-II error probabilities decay exponentially fast. In this case, the largest exponential decay rate of the type-II error probabilities for Gutman's rule is given by a R\'enyi divergence~\cite{renyi1961measures} of a certain order related to the ratio of the lengths of the training and test sequences. Third, we generalize our second-order asymptotic result for binary classification to classification of multiple hypotheses with the rejection option. We first consider tests satisfying the following conditions: (i) the error probability under each hypothesis decays exponentially fast with the same exponent {\em for all tuples of distributions} and (ii) the rejection probability under each hypothesis is upper bounded by a different constant {\em for a particular tuple}. We derive second-order approximations of the largest error exponent for all hypotheses and show that a generalization of Gutman's test by Unnikrishnan in \cite[Theorem~4.1]{unnikrishnan2015asymptotically} is second-order optimal. The proofs follow by generalizing those for binary classification and carefully analyzing the rejection probabilities. In addition, similarly to the binary case, we also consider a dual setting, in which under each hypothesis, the error probability is non-vanishing for all tuples of distributions and the rejection probability decays exponentially fast for a particular tuple.
\subsection{Related Works} The most related work is \cite{gutman1989asymptotically} where Gutman showed that his type-based test is asymptotically optimal for the binary classification problem and its extension to classification of multiple hypotheses with rejection for Markov sources. Ziv~\cite{ziv1988classification} illustrated the relationship between binary classification and universal data compression. The Bayesian setting of the binary classification problem was studied by Merhav and Ziv~\cite{merhav1991bayesian}. Subsequently, Kelly, Wagner, Tularak and Viswanath~\cite{kelly2013} considered the binary classification problem with large alphabets. Unnikrishnan~\cite{unnikrishnan2015asymptotically} generalized the result of Gutman by considering classification for multiple hypotheses where there are multiple test sequences. Finally, Unnikrishnan and Huang~\cite{unnikrishnan2016weak} approximated the type-I error probability of the binary classification problem using weak convergence analysis. \subsection{Organization of the Rest of the Paper} The rest of our paper is organized as follows. In Section \ref{sec:formulation}, we set up the notation, formulate the binary classification problem and present existing results by Gutman \cite{gutman1989asymptotically}. In Section \ref{sec:results4bc}, we discuss the motivation for our setting and present our second-order result for binary classification. We also discuss some consequences of our main result. In Section \ref{sec:result4cmr}, we generalize our result for binary classification to classification of multiple hypotheses with the rejection option. The proofs of our results are provided in Section \ref{sec:proofs}. The proofs of some supporting lemmas are deferred to the appendices. \section{Problem Formulation and Existing Results} \label{sec:formulation} \subsection{Notation} \label{sec:notation} Random variables and their realizations are in upper (e.g., $X$) and lower case (e.g., $x$) respectively. 
All sets are denoted in calligraphic font (e.g., $\mathcal{X}$). We use $\mathcal{X}^{\mathrm{c}}$ to denote the complement of $\mathcal{X}$. Let $X^n:=(X_1,\ldots,X_n)$ be a random vector of length $n$. All logarithms are base $e$. We use $\Phi(\cdot)$ to denote the cumulative distribution function (cdf) of the standard Gaussian and $\Phi^{-1}(\cdot )$ its inverse. Let $\mathrm{Q}(t):=1-\Phi(t)$ be the corresponding complementary cdf. We use $\mathrm{G}_k(\cdot)$ to denote the complementary cdf of a chi-squared random variable with $k$ degrees of freedom and $\mathrm{G}^{-1}_k(\cdot)$ its inverse. Given any two integers $(a,b)\in\mathbb{N}^2$, we use $[a:b]$ to denote the set of integers $\{a,a+1,\ldots,b\}$ and use $[a]$ to denote $[1:a]$. The set of all probability distributions on a finite set $\mathcal{X}$ is denoted as $\mathcal{P}(\mathcal{X})$. Notation concerning the method of types follows~\cite{Tanbook}. Given a vector $x^n = (x_1,x_2,\ldots,x_n) \in\mathcal{X}^n$, the {\em type} or {\em empirical distribution} is denoted as $\hat{T}_{x^n}(a)=\frac{1}{n}\sum_{i=1}^n \mathbbm{1}\{x_i=a\},a\in\mathcal{X}$. The set of types formed from length-$n$ sequences with alphabet $\mathcal{X}$ is denoted as $\mathcal{P}_{n}(\mathcal{X})$. Given $P\in\mathcal{P}_{n}(\mathcal{X})$, the set of all sequences of length $n$ with type $P$, the {\em type class}, is denoted as $\mathcal{T}^n_P$. The {\em support} of the probability mass function $P\in\mathcal{P}(\mathcal{X})$ is denoted as $\mathrm{supp}(P):=\{x\in\mathcal{X}:P(x)>0\}$. \subsection{Problem Formulation} The main goal in {\em binary hypothesis testing} is to classify a sequence $Y^n$ as being independently generated from one of two distinct distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$. 
However, different from classical binary hypothesis testing~\cite{lehmann2006testing,blahut1974hypothesis} where the two distributions are known, in {\em binary classification}~\cite{gutman1989asymptotically}, we do not know the two distributions. We instead have two training sequences $X_1^N$ and $X_2^N$ generated in an i.i.d.\ fashion according to $P_1$ and $P_2$ respectively. Therefore, the two hypotheses are \begin{itemize} \item $\mathrm{H}_1$: the test sequence $Y^n$ and the 1$^{\mathrm{st}}$ training sequence $X_1^N$ are generated according to the same distribution; \item $\mathrm{H}_2$: the test sequence $Y^n$ and the 2$^{\mathrm{nd}}$ training sequence $X_2^N$ are generated according to the same distribution. \end{itemize} We assume that $N=\lceil\alpha n\rceil$ for some $\alpha\in\mathbb{R}_+$.\footnote{\label{foot} In the following, we will often write $N=n\alpha$ for brevity, ignoring the integer constraints on $N$ and $n$.} The task in the binary classification problem is to design a decision rule (test) $\phi_n:\mathcal{X}^{2N}\times\mathcal{X}^n\to\{\mathrm{H}_1,\mathrm{H}_2\}$. Note that a decision rule partitions the sample space $\mathcal{X}^{2N}\times\mathcal{X}^n$ into two disjoint regions: $\mathcal{A}(\phi_n)$ where any triple $(X_1^N,X_2^N,Y^n)\in\mathcal{A}(\phi_n)$ favors hypothesis $\mathrm{H}_1$ and $\mathcal{A}^\mathrm{c}(\phi_n)$ where any triple $(X_1^N,X_2^N,Y^n)\in\mathcal{A}^\mathrm{c}(\phi_n)$ favors hypothesis $\mathrm{H}_2$. 
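To make the sampling model concrete, the following minimal sketch draws a triple $(X_1^N,X_2^N,Y^n)$ under either hypothesis. The Bernoulli specialization and the function name \texttt{sample\_instance} are ours and serve only as an illustration; the formulation above allows any finite alphabet $\mathcal{X}$.

```python
import math
import random

def sample_instance(p1, p2, n, alpha, hypothesis):
    """Draw (X_1^N, X_2^N, Y^n) under H_1 or H_2 for Bernoulli sources.

    p1 and p2 are the success probabilities of P_1 and P_2; the test
    sequence Y^n follows the training sequence selected by `hypothesis`.
    (Toy specialization; the paper allows any finite alphabet.)
    """
    N = math.ceil(alpha * n)  # N = ceil(alpha * n), as in the formulation
    bern = lambda p, m: [int(random.random() < p) for _ in range(m)]
    x1, x2 = bern(p1, N), bern(p2, N)
    y = bern(p1 if hypothesis == 1 else p2, n)
    return x1, x2, y

random.seed(0)
x1, x2, y = sample_instance(0.2, 0.4, n=100, alpha=2.0, hypothesis=1)
```

A decision rule then receives only the realized triple $(x_1^N,x_2^N,y^n)$, without knowledge of $P_1$ and $P_2$.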
Given any decision rule $\phi_n$ and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we have two types of error probabilities, i.e., \begin{align} \beta_1(\phi_n|P_1,P_2) &:=\mathbb{P}_1\big\{\phi_n(X_1^N,X_2^N,Y^n)=\mathrm{H}_2\big\} \label{def:type1err},\\ \beta_2(\phi_n|P_1,P_2) &:=\mathbb{P}_2\big\{\phi_n(X_1^N,X_2^N,Y^n)=\mathrm{H}_1\big\} \label{def:type2err}, \end{align} where for $j\in[2]$, we define $\mathbb{P}_j\{\cdot\}:=\Pr\{\cdot|\mathrm{H}_j\}$ where $X_i^N\sim P_i^N$ for all $i\in[2]$. The two error probabilities in \eqref{def:type1err} and \eqref{def:type2err} are respectively known as the type-I and type-II error probabilities. \subsection{Existing Results and Definitions} The goal of binary classification is to design a classification rule based on the training sequences. This rule is then used on the test sequence to decide whether $\mathrm{H}_1$ or $\mathrm{H}_2$ is true. We revisit the study of the fundamental limits of the problem here. Towards this goal, Gutman~\cite{gutman1989asymptotically} proposed a decision rule using marginal types of $X_1^N$, $X_2^N$ and $Y^n$. To present Gutman's test, we need the following generalization of the Jensen-Shannon divergence~\cite{lin1991divergence}. Given any two distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$ and any number $\alpha\in\mathbb{R}_+$, let the {\em generalized Jensen-Shannon divergence} be \begin{align} \mathrm{GJS}(P_1,P_2,\alpha) &:=\alpha D\Big(P_1\Big\|\frac{\alpha P_1+P_2}{1+\alpha}\Big)+D\Big(P_2\Big\|\frac{\alpha P_1+P_2}{1+\alpha}\Big)\label{def:GJS}. \end{align} Given a threshold $\lambda\in\mathbb{R}_+$ and any triple $(x_1^N,x_2^N,y^n)$, Gutman's decision rule is as follows: \begin{align} \phi_n^{\rm{Gut}}(x_1^N,x_2^N,y^n) &:=\left\{ \begin{array}{ll} \mathrm{H}_1&\mathrm{if}~\mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)\leq \lambda\\ \mathrm{H}_2&\mathrm{if}~\mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)>\lambda. 
\end{array} \right.\label{gutmanrule} \end{align} To state Gutman's main result, we define the following ``exponent'' function \begin{align} F(P_1,P_2,\alpha,\lambda) &:=\min_{\substack{(Q_1,Q_2)\in\mathcal{P}(\mathcal{X})^2:\\\mathrm{GJS}(Q_1,Q_2,\alpha)\leq \lambda}} \alpha D(Q_1\|P_1)+ D(Q_2\|P_2)\label{def:FP1P2l}. \end{align} Note that $F(P_1,P_2,\alpha,\lambda)=0$ for $\lambda\geq \mathrm{GJS}(P_1,P_2,\alpha)$ and that $\lambda \mapsto F(P_1,P_2,\alpha,\lambda)$ is continuous (a consequence of \cite[Lemma~12]{Tan11_IT} in which $y\mapsto \min_{x\in \mathcal{K}}f(x,y)$ is continuous if $f$ is continuous and $\mathcal{K}$ is compact). Gutman~\cite[Lemma 2 and Theorem 1]{gutman1989asymptotically} showed that the rule in \eqref{gutmanrule} is asymptotically optimal (error exponent-wise) if the type-I error probability vanishes exponentially fast over {\em all pairs of distributions}. \begin{theorem} \label{gutmantheorem} Gutman's decision rule $\phi_n^{\rm{Gut}}$ satisfies the following two properties: \begin{enumerate} \item \emph{Asymptotic/Exponential performance}: For any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \liminf_{n\to\infty} -\frac{1}{n}\log \beta_1(\phi_n^{\rm{Gut}}|P_1,P_2)&\geq \lambda,\label{gutet1}\\* \liminf_{n\to\infty} -\frac{1}{n}\log \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)&\geq F(P_1,P_2,\alpha,\lambda).\label{gutet2} \end{align} \item \emph{Asymptotic/Exponential Optimality}: Fix a sequence of decision rules $\{\phi_n\}_{n=1}^\infty$ such that for all pairs of distributions $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \liminf_{n\to\infty}-\frac{1}{n}\log\beta_1(\phi_n|\tilde{P}_1,\tilde{P}_2)\ge \lambda, \end{align} then for any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \beta_2(\phi_n|P_1,P_2)\geq \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2), \end{align} where $\phi_n^{\rm{Gut}}$ is Gutman's test with threshold $\lambda$ defined in \eqref{gutmanrule} which 
achieves~\eqref{gutet1}--\eqref{gutet2}. \end{enumerate} \end{theorem} We remark that using Sanov's theorem~\cite[Chapter 11]{cover2012elements}, one can easily show that, for any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$ and any $\lambda>0$, Gutman's decision rule in \eqref{gutmanrule} satisfies \eqref{gutet1} as well as \begin{equation} \lim_{n\to\infty} -\frac{1}{n}\log \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)=F(P_1,P_2,\alpha,\lambda). \end{equation} Note that Theorem~\ref{gutmantheorem} is analogous to Blahut's work~\cite{blahut1974hypothesis} in which the trade-off of the error exponents for the binary hypothesis testing problem was thoroughly analyzed. \section{Binary Classification} \label{sec:results4bc} \subsection{Definitions and Motivation} \label{sec:def} In this paper, motivated by practical applications where the lengths of source sequences are finite (obtaining labeled training samples is prohibitively expensive), we are interested in approximating the non-asymptotic fundamental limits in terms of the trade-off between the type-I and type-II error probabilities of optimal tests. In particular, out of all tests whose type-I error probabilities decay exponentially fast for all pairs of distributions and whose type-II error probability is upper bounded by a constant $\varepsilon\in(0,1)$ for a particular pair of distributions, what is the largest decay rate of the sequence of type-I error probabilities? In other words, we are interested in the following fundamental limit \begin{align} \lambda^*(n,\alpha,\varepsilon|P_1,P_2) \nn:=\sup\Big\{\lambda\in\mathbb{R}_+:\exists~\phi_n~\mathrm{s.t.~}\beta_1(\phi_n|\tilde{P}_1,\tilde{P}_2)&\leq \exp(-n\lambda),~\forall~(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2,\\ \mathrm{and~}\beta_2(\phi_n|P_1,P_2)&\leq \varepsilon\Big\}\label{def:l1^*}.
\end{align} From Theorem \ref{gutmantheorem} (see also \cite[Theorem 3]{gutman1989asymptotically}), we obtain that \begin{align} \liminf_{n\to\infty}\lambda^*(n,\alpha,\varepsilon|P_1,P_2)\geq \mathrm{GJS}(P_1,P_2,\alpha)\label{firstordergutman}. \end{align} As a corollary of our result in Theorem \ref{bc:second}, we find that the result in \eqref{firstordergutman} is in fact tight and the limit exists. In this paper, we refine the above asymptotic statement and, in particular, provide second-order approximations to $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$. To conclude this section, we explain why we consider $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$ instead of characterizing a seemingly more natural quantity, namely, the largest decay rate of the type-I error probability when the type-II error probability is upper bounded by a constant $\varepsilon\in(0,1)$ for a particular pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, i.e., \begin{align} \beta_2^*(n,\alpha,\varepsilon|P_1,P_2) &:=\inf\Big\{r\in[0,1]:\exists~\phi_n~\mathrm{s.t.~}\beta_1(\phi_n|P_1,P_2)\leq r,~\beta_2(\phi_n|P_1,P_2)\leq \varepsilon\Big\}. \end{align} In the binary classification problem, when we design a test $\phi_n$, we do not know the pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$ from which the training sequences are generated. Thus, unlike in the simple hypothesis testing problem~\cite{strassen1962asymptotische,csiszar2011information}, we cannot design a test tailored to a particular pair of distributions. Instead, we are interested in designing {\em universal} tests which have good performance {\em for all} pairs of distributions for the type-I (resp.\ type-II) error probability and, at the same time, constrain the type-II (resp.\ type-I) error probability with respect to a {\em particular} pair of distributions $(P_1,P_2)$. \subsection{Main Result}\label{sec:main_res} We need the following definitions before presenting our main result.
Given any $x\in\mathcal{X}$ and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, define the following two {\em information densities} \begin{align} \imath_i(x|P_1,P_2,\alpha)&:=\log\frac{(1+\alpha)P_i(x)}{\alpha P_1(x)+P_2(x)},\quad i\in[2]\label{def:i}. \end{align} Furthermore, given any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, define the following {\em dispersion function} (linear combination of the variances of the information densities) \begin{align} \mathrm{V}(P_1,P_2,\alpha) &=\alpha\mathrm{Var}_{P_1}[\imath_1(X|P_1,P_2,\alpha)]+\mathrm{Var}_{P_2}[\imath_2(X|P_1,P_2,\alpha)]\label{def:v}. \end{align} \begin{theorem} \label{bc:second} For any $\varepsilon\in(0,1)$, any $\alpha\in\mathbb{R}_+$ and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we have \begin{align} \lambda^*(n,\alpha,\varepsilon|P_1,P_2) =\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon)+O\left(\frac{\log n}{n}\right). \label{eqn:lambda_s} \end{align} \end{theorem} Theorem \ref{bc:second} is proved in Section \ref{proof:bcsecond}. In~\eqref{eqn:lambda_s}, $\mathrm{GJS}(P_1,P_2,\alpha)$ and $\sqrt{{\mathrm{V}(P_1,P_2,\alpha)}/{n}}\,\Phi^{-1}(\varepsilon)$ are respectively known as the {\em first-} and {\em second-order terms} in the {\em asymptotic expansion} of $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$. Since $0<\varepsilon<1/2$ in most applications, $\Phi^{-1}(\varepsilon)<0$ and so the second-order term represents a {\em backoff} from the exponent $\mathrm{GJS}(P_1,P_2,\alpha)$ at finite sample sizes $n$. As shown by Polyanskiy, Poor and Verd\'u~\cite{polyanskiy2010finite} (also see~\cite{polyanskiy2010thesis}), in the channel coding context, these two terms usually constitute a reasonable approximation to the non-asymptotic fundamental limit at moderate $n$. This will also be corroborated numerically for the current problem in Section~\ref{sec:num}. Several other remarks are in order. 
First, we remark that since the achievability part is based on Gutman's test, this test in~\eqref{gutmanrule} is second-order optimal. This means that it achieves the optimal second-order term in the asymptotic expansion of $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$. Second, as a corollary of our result, we obtain that for any $\varepsilon\in(0,1)$, \begin{align} \lim_{n\to\infty} \lambda^*(n,\alpha,\varepsilon|P_1,P_2)=\mathrm{GJS}(P_1,P_2,\alpha). \end{align} In other words, a {\em strong converse} for $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$ holds. This result can be understood as the counterpart of the Chernoff-Stein lemma~\cite{chernoff1952measure} for the binary classification problem (with strong converse). In the following, we comment on the influence of the ratio of the number of training and test samples $\alpha = N/n$ in terms of the dominant term in $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$. Note that the generalized Jensen-Shannon divergence $\mathrm{GJS}(P_1,P_2,\alpha)$ admits the following properties: \begin{itemize} \item[(i)] $\mathrm{GJS}(P_1,P_2,\alpha)$ is increasing in $\alpha$; \item[(ii)] $\mathrm{GJS}(P_1,P_2,0)=0$ and $\lim_{\alpha\to\infty}\mathrm{GJS}(P_1,P_2,\alpha)=D(P_2\|P_1)$. \end{itemize} Thus, we conclude that the longer the training sequences (relative to the test sequence), the better the performance in terms of the exponential decay rate of the type-I error probabilities for all pairs of distributions. In the extreme case in which $\alpha\to 0$, i.e., the training sequences are arbitrarily short compared to the test sequence, we conclude that the type-I error probability cannot decay exponentially fast. However, in the other extreme in which $\alpha\to\infty$, we conclude that the type-I error probabilities for all pairs of distributions decay exponentially fast with the dominant (first-order) term being $D(P_2\|P_1)$.
This implies that we can achieve the optimal decay rate determined by the Chernoff-Stein lemma~\cite{chernoff1952measure} for binary hypothesis testing. Intuitively, this occurs since when $\alpha\to\infty$, we can estimate the true pair of distributions with arbitrarily high accuracy (using the large number of training samples). In fact, we can say even more. Based on the formula in \eqref{def:v}, we deduce that $\lim_{\alpha \to\infty} \mathrm{V}(P_1,P_2,\alpha)= \mathrm{Var}_{P_2} [\log(P_2(X)/P_1(X))]$, the {\em relative entropy variance}, so we recover Strassen's seminal result~\cite[Theorem~1.1]{strassen1962asymptotische} concerning the second-order asymptotics of binary hypothesis testing. Finally, we remark that the binary classification problem is closely related to the so-called two sample homogeneity testing problem~\cite[Sec.~II-C]{unnikrishnan2016weak} and the closeness testing problem~\cite{batu2013testing,acharya2014sublinear,chan2014optimal} where, given two i.i.d.\ generated sequences $X^N$ and $Y^n$, one aims to determine whether the two sequences are generated according to the same distribution or not. Thus, in this problem, we have the following two hypotheses: \begin{itemize} \item $\mathrm{H}_1$: the two sequences $X^N$ and $Y^n$ are generated according to the same distribution; \item $\mathrm{H}_2$: the two sequences $X^N$ and $Y^n$ are generated according to different distributions. \end{itemize} The task in such a problem is to design a test $\phi_n:\mathcal{X}^N\times\mathcal{X}^n\to\{\mathrm{H}_1,\mathrm{H}_2\}$.
Given any $\phi_n$ and any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, the false-alarm and miss detection probabilities for such a problem are \begin{align} \beta_{\rm{FA}}(\phi_n|P_1) &:=\mathbb{P}_{P_1}\big\{\phi_n(X^N,Y^n)=\mathrm{H}_2\big\},\\ \beta_{\rm{MD}}(\phi_n|P_1,P_2) &:=\mathbb{P}_{P_1,P_2}\big\{\phi_n(X^N,Y^n)=\mathrm{H}_1\big\}, \end{align} where in $\mathbb{P}_{P_1}\{\cdot\}$, the random variables $X^N$ and $Y^n$ are both distributed i.i.d.\ according to $P_1$ and in $\mathbb{P}_{P_1,P_2}\{\cdot\}$, $X^N$ and $Y^n$ are distributed i.i.d.\ according to $P_1$ and $P_2$, respectively. Paralleling our setting for the binary classification problem, we can study the following fundamental limit of the two sample homogeneity testing problem: \begin{align} \xi^*(n,\alpha,\varepsilon|P_1,P_2) \nn:=\sup\Big\{\lambda\in\mathbb{R}_+: \exists~\phi_n\mathrm{~s.t.~}\beta_{\rm{FA}}(\phi_n|\tilde{P}_1)&\leq \exp(-n\lambda),\forall\,\tilde{P}_1\in\mathcal{P}(\mathcal{X}),\\* \beta_{\rm{MD}}(\phi_n|P_1,P_2)&\leq \varepsilon \Big\}.\label{tsht} \end{align} \begin{corollary} \label{cor:testing} For any $\varepsilon\in(0,1)$, any $\alpha\in\mathbb{R}_+$ and any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we have \begin{align} \xi^*(n,\alpha,\varepsilon|P_1,P_2)&=\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon)+O\left(\frac{\log n}{n}\right). \end{align} \end{corollary} Since the proof is similar to that of Theorem \ref{bc:second}, we omit it. Corollary \ref{cor:testing} implies that Gutman's test is second-order optimal for the two sample homogeneity testing problem. We remark that for the binary classification problem without rejection (i.e., we are not allowed to declare that neither $\mathrm{H}_1$ nor $\mathrm{H}_2$ is true), the problem is essentially the same as the two sample homogeneity testing problem except that we have one more training sequence.
However, as shown in Theorem \ref{bc:second}, the second training sequence is not needed to obtain a second-order optimal result. This asymmetry in the binary classification problem is circumvented if one also considers a rejection option, as will be demonstrated in Section \ref{sec:result4cmr}. \begin{figure}[t] \centering \begin{tabular}{cc} \hspace{-.25in} \includegraphics[width=.5\columnwidth]{type2_err_5000.eps}& \hspace{-.4in} \includegraphics[width=.5\columnwidth]{type1_2_228_log.eps}\\ \hspace{-.25in} {\footnotesize (a) Type-II Error Probability} & \hspace{-.4in} {\footnotesize (b) Logarithm of the Maximal Type-I Error Probability} \end{tabular} \caption{(a) Type-II error probability for Gutman's test with target error probability $\varepsilon=0.2$. The error bars denote $1$ standard deviation above and below the mean over the independent experiments; (b) Natural logarithm of the maximal type-I error probability for Gutman's test. The error bars denote $10$ standard deviations above and below the mean. } \label{type14Gutman} \end{figure} \subsection{Numerical Simulation for Theorem \ref{bc:second}} \label{sec:num} In this subsection, we present a numerical example to illustrate the performance of Gutman's test in~\eqref{gutmanrule} and the accuracy of our theoretical results. We consider binary sources with alphabet $\mathcal{X}=\{0,1\}$. Throughout this subsection, we set $\alpha=2$. In Figure~\ref{type14Gutman}(a), we plot the type-II error probability $\beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)$ for a particular pair of distributions $(P_1,P_2)$ where $P_1=\mathrm{Bern}(0.2)$ and $P_2=\mathrm{Bern}(0.4)$. The threshold is chosen to be the second-order asymptotic expansion \begin{equation} \hat{\lambda}:=\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon),\label{eqn:lambda_hat} \end{equation} with the target error probability set to $\varepsilon=0.2$.
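For concreteness, the threshold $\hat{\lambda}$ in~\eqref{eqn:lambda_hat} can be evaluated directly from the definitions in~\eqref{def:GJS} and~\eqref{def:v}. The sketch below (the helper names \texttt{gjs}, \texttt{dispersion} and \texttt{lambda\_hat} are ours, not from the paper) computes $\hat{\lambda}$ for the example $P_1=\mathrm{Bern}(0.2)$, $P_2=\mathrm{Bern}(0.4)$, $\alpha=2$ and $\varepsilon=0.2$:

```python
import math
from statistics import NormalDist

def gjs(p1, p2, alpha):
    """Generalized Jensen-Shannon divergence GJS(P1, P2, alpha) of (def:GJS).
    p1, p2 are probability vectors over a common finite alphabet."""
    mix = [(alpha * a + b) / (1 + alpha) for a, b in zip(p1, p2)]
    d1 = sum(a * math.log(a / m) for a, m in zip(p1, mix) if a > 0)
    d2 = sum(b * math.log(b / m) for b, m in zip(p2, mix) if b > 0)
    return alpha * d1 + d2

def dispersion(p1, p2, alpha):
    """Dispersion V(P1, P2, alpha) of (def:v):
    alpha * Var_{P1}[i_1] + Var_{P2}[i_2]."""
    mix = [(alpha * a + b) / (1 + alpha) for a, b in zip(p1, p2)]
    def var(p):  # variance under p of the information density log(p/mix)
        dens = [math.log(px / m) if px > 0 else 0.0 for px, m in zip(p, mix)]
        mean = sum(px * d for px, d in zip(p, dens))
        return sum(px * (d - mean) ** 2 for px, d in zip(p, dens))
    return alpha * var(p1) + var(p2)

def lambda_hat(p1, p2, alpha, n, eps):
    """Second-order threshold of (eqn:lambda_hat)."""
    backoff = math.sqrt(dispersion(p1, p2, alpha) / n) * NormalDist().inv_cdf(eps)
    return gjs(p1, p2, alpha) + backoff

# Bern(0.2) and Bern(0.4) as vectors (P(0), P(1)), with alpha = 2.
P1, P2 = [0.8, 0.2], [0.6, 0.4]
print(lambda_hat(P1, P2, 2.0, 5000, 0.2))
```

Since $\Phi^{-1}(0.2)<0$, the computed $\hat{\lambda}$ lies strictly below $\mathrm{GJS}(P_1,P_2,\alpha)$, reflecting the second-order backoff discussed in Section~\ref{sec:main_res}.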
Each point in Figure~\ref{type14Gutman}(a) is obtained by estimating the average error probability in the following manner. For each length of the test sequence $n\in \{1000,1200, 1400,\ldots, 5000\}$, we estimate the type-II error probability of Gutman's test in~\eqref{gutmanrule} using $10^7$ independent experiments. From Figure~\ref{type14Gutman}(a), we observe that the simulated error probability for Gutman's test is close to the target error probability of $\varepsilon=0.2$ as the length of the test sequence $n$ increases. We believe that there is a slight bias in the results as we have not taken the third-order term, which scales as $O(\frac{\log n}{n})$, into account in the threshold in~\eqref{eqn:lambda_hat}. In Figure \ref{type14Gutman}(b), we plot the natural logarithm of the theoretical upper bound $\exp(-n \hat{\lambda})$ and the maximal empirical type-I error probability $\beta_1(\phi_n^{\rm{Gut}}|\tilde{P}_1,\tilde{P}_2)$ over all pairs of distributions $(\tilde{P}_1,\tilde{P}_2)$. We set the fixed pair of distributions $(P_1,P_2)$ to be $P_1=\mathrm{Bern}(0.2)$ and $P_2=\mathrm{Bern}(0.228)$ and choose $\varepsilon=0.2$. We ensured that the threshold $\hat{\lambda}$ in~\eqref{eqn:lambda_hat} is small enough so that even if $n$ is large, the type-I error event occurs sufficiently many times and thus the numerical results are statistically significant. From Figure~\ref{type14Gutman}(b), we observe that the simulated probability lies below the theoretical one, as expected. The gap can be explained by the fact that the method of types analysis is typically loose non-asymptotically due to a large polynomial factor. A more refined analysis based on strong large deviations~\cite[Theorem~3.7.2]{dembo2009large} would yield better estimates on exponentially decaying probabilities, but we do not pursue this here.
However, we do note that as $n$ becomes large, the slopes of the simulated and theoretical curves become increasingly close to each other (simulated slope at $n=5000$ is $\approx-0.001336$; theoretical slope at $n=5000$ is $\approx-0.001225$), showing that on the exponential scale, our estimate of the maximal type-I error probability is relatively tight. \subsection{Analysis of Gutman's Test in a Dual Setting} \label{sec:weakconvergence} In addition to analyzing $\lambda^*(n,\alpha,\varepsilon|P_1,P_2)$, one might also be interested in decision rules whose type-I error probabilities for all pairs of distributions are non-vanishing and whose type-II error probabilities for a particular pair of distributions decay exponentially fast. To be specific, for any decision rule $\phi_n$, we consider the following non-asymptotic fundamental limit: \begin{align} \tau^*(n,\alpha,\varepsilon|\phi_n,P_1,P_2) \nn&:=\sup\Big\{\tau\in\mathbb{R}_+: \beta_1(\phi_n|\tilde{P}_1,\tilde{P}_2)\leq \varepsilon,~\forall~(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2\\* &\qquad\quad\qquad\qquad\quad\mathrm{and~}\beta_2(\phi_n|P_1,P_2)\leq \exp(-n\tau)~\Big\}\label{def:l2^*}. \end{align} This can be considered a dual to the problem studied in Sections~\ref{sec:def} to~\ref{sec:num}. We characterize the asymptotic behavior of $\tau^*(n,\alpha,\varepsilon|\phi_n,P_1,P_2)$ when $\phi_n=\phi_n^{\rm{Gut}}$. To do so, we recall that the R\'enyi divergence of order $\gamma\in\mathbb{R}_+$~\cite{renyi1961measures} is defined as \begin{align} D_{\gamma}(P_1\|P_2):=\frac{1}{\gamma-1}\log \bigg(\sum_{x \in\mathcal{X}}P_1^{\gamma}(x)P_2^{ 1-\gamma}(x)\bigg). \end{align} Note that $\lim_{\gamma\downarrow 1}D_{\gamma}(P_1\|P_2)=D(P_1\|P_2)$, the usual relative entropy.
\begin{proposition} \label{weakc4bc} For any $\varepsilon\in(0,1)$, any $\alpha\in\mathbb{R}_+$ and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \lim_{n\to\infty}\tau^*(n,\alpha,\varepsilon|\phi_n^{\rm{Gut}},P_1,P_2)=D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)\label{wcresult}. \end{align} \end{proposition} The proof of Proposition \ref{weakc4bc} is provided in Section \ref{proof:weakc4bc}. Several remarks are in order. First, the performance of Gutman's test in \eqref{gutmanrule} under this dual setting is dictated by $D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)$, which is different from $\mathrm{GJS}(P_1,P_2,\alpha)$ in Theorem \ref{bc:second}. Intuitively, this is because, for the type-I error probabilities to be upper bounded by a non-vanishing constant $\varepsilon\in(0,1)$ for all pairs of distributions, one needs to choose $\lambda = \Theta(\frac{1}{n})$ (implied by the weak convergence analysis in~\cite{unnikrishnan2016weak}). Consequently, the type-II exponent satisfies \begin{align} \lim_{\lambda\downarrow 0} F(P_1,P_2,\alpha,\lambda)&=\min_{Q\in\mathcal{P}(\mathcal{X})}\alpha D(Q\|P_1)+D(Q\|P_2)=D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2). \end{align} Second, as $\alpha\to 0$, the exponent $D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)\to 0$ and thus the type-II error probability does not decay exponentially fast. However, when $\alpha\to\infty$, the exponent $D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)\to D(P_1\|P_2)$ and thus we can achieve the optimal exponential decay rate of the type-II error probability as if $P_1$ and $P_2$ were known (implied by the Chernoff-Stein lemma~\cite{chernoff1952measure}). Finally, we remark that Proposition \ref{weakc4bc} is not comparable to Theorem \ref{bc:second} since the settings are different. Furthermore, Proposition \ref{weakc4bc} applies only to Gutman's test, while Theorem \ref{bc:second} contains an optimization over {\em all} tests or classifiers.
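The limiting behavior of the exponent $D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)$ as $\alpha$ varies can be checked numerically. In the sketch below (helper names ours), the exponent increases with $\alpha$ and approaches the relative entropy $D(P_1\|P_2)$ as $\alpha\to\infty$:

```python
import math

def renyi(p1, p2, gamma):
    """Renyi divergence D_gamma(P1 || P2) for an order gamma in (0, 1)."""
    s = sum(a ** gamma * b ** (1 - gamma) for a, b in zip(p1, p2))
    return math.log(s) / (gamma - 1)

def kl(p1, p2):
    """Relative entropy D(P1 || P2)."""
    return sum(a * math.log(a / b) for a, b in zip(p1, p2) if a > 0)

p1, p2 = [0.8, 0.2], [0.6, 0.4]
# Type-II exponent of Gutman's rule in the dual setting for several values
# of the training-to-test ratio alpha; the order alpha/(1+alpha) lies in (0,1).
for alpha in (0.1, 1.0, 10.0, 1000.0):
    print(alpha, renyi(p1, p2, alpha / (1 + alpha)))
```

As $\alpha$ grows, the printed exponents increase towards $D(P_1\|P_2)$, matching the two extreme cases discussed above.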
\section{Classification of Multiple Hypotheses with the Rejection Option} \label{sec:result4cmr} In this section, we generalize our second-order asymptotic result for binary classification in Theorem~\ref{bc:second} to classification of multiple hypotheses with rejection~\cite[Theorem 2]{gutman1989asymptotically}. \subsection{Problem Formulation} Given $M$ training sequences $\{X_i^N\}_{i\in[M]}$ generated i.i.d.\ according to distinct distributions $\{P_i\}_{i\in[M]}\in\mathcal{P}(\mathcal{X})^M$, in classification of multiple hypotheses with rejection, one is asked to determine whether a test sequence $Y^n$ is generated i.i.d.\ according to a distribution in $\{P_i\}_{i\in[M]}$ or some other distribution. In other words, there are $M+1$ hypotheses: \begin{itemize} \item $\mathrm{H}_j$ for each $j\in[M]$: the test sequence $Y^n$ and the $j^{\mathrm{th}}$ training sequence $X_j^N$ are generated according to the same distribution; \item $\mathrm{H}_\mathrm{r}$: the test sequence $Y^n$ is generated according to a distribution different from those from which the training sequences are generated. \end{itemize} In the following, for simplicity, we use $\mathbf{X}^N$ to denote $(X_1^N,\ldots,X_M^N)$, $\mathbf{x}^N$ to denote $(x_1^N,\ldots,x_M^N)$ and $\mathbf{P}$ to denote $(P_1,\ldots,P_M)$. Recall from Footnote~\ref{foot} that we write $N=\alpha n$ for brevity. The main task in classification of multiple hypotheses with rejection is thus to design a test $\psi_n:\mathcal{X}^{MN}\times\mathcal{X}^n\to \{\mathrm{H}_1,\ldots,\mathrm{H}_M,\mathrm{H}_\mathrm{r}\}$.
Note that any such test $\psi_n$ partitions the sample space $\mathcal{X}^{MN}\times\mathcal{X}^n$ into $M+1$ disjoint regions: $M$ acceptance regions $\{\mathcal{A}_j(\psi_n)\}_{j\in[M]}$ where $(\mathbf{X}^N,Y^n)\in\mathcal{A}_j(\psi_n)$ favors hypothesis $\mathrm{H}_j$ and a rejection region $\mathcal{A}^\mathrm{c}(\psi_n):=\left(\cup_{j\in[M]}\mathcal{A}_j(\psi_n)\right)^\mathrm{c}$ where $(\mathbf{X}^N,Y^n)\in\mathcal{A}^\mathrm{c}(\psi_n)$ favors hypothesis $\mathrm{H}_\mathrm{r}$. Given any test $\psi_n$ and any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, we have the following $M$ error probabilities and $M$ rejection probabilities: for each $j\in[M]$, \begin{align} \beta_j(\psi_n|\mathbf{P}) &:=\mathbb{P}_j\big\{\psi_n(\mathbf{X}^N,Y^n)\notin\{\mathrm{H}_j,\mathrm{H}_\mathrm{r}\}\big\} \label{def:typejerror},\\ \zeta_j(\psi_n|\mathbf{P}) &:=\mathbb{P}_j\big\{\psi_n(\mathbf{X}^N,Y^n)=\mathrm{H}_\mathrm{r}\big\} \label{def:typejreject}, \end{align} where similarly to \eqref{def:type1err} and \eqref{def:type2err}, for $j\in[M]$, we define $\mathbb{P}_j\{\cdot\}:=\Pr\{\cdot|\mathrm{H}_j\}$ where $X_i^N$ is distributed i.i.d.\ according to $P_i$ for all $i\in[M]$. We refer to the probabilities in \eqref{def:typejerror} and \eqref{def:typejreject} as the type-$j$ error and rejection probabilities, respectively, for each $j\in[M]$. Similarly to Section \ref{sec:results4bc}, we are interested in the following question: among all tests satisfying (i) for each $j\in[M]$, the type-$j$ error probability decays exponentially fast with the exponent being at least $\lambda\in\mathbb{R}_+$ for all tuples of distributions and (ii) for each $j\in[M]$, the type-$j$ rejection probability is upper bounded by a constant $\varepsilon_j\in(0,1)$ for a particular tuple of distributions, what is the largest achievable exponent $\lambda$?
In other words, given $\bm{\varepsilon}=(\varepsilon_1,\ldots,\varepsilon_M)\in(0,1)^M$, we are interested in the following fundamental limit: \begin{align} \lambda^*(n,\alpha,\bm{\varepsilon}|\mathbf{P}) \nn:=\sup\Big\{\lambda\in\mathbb{R}_+: \exists \, \psi_n~\mathrm{s.t.~}\forall j\in[M], \beta_j(\psi_n|\tilde{\mathbf{P}})&\leq \exp(-n\lambda),\forall~\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M,\\* \zeta_j(\psi_n|\mathbf{P})&\leq \varepsilon_j \Big\}\label{def:calL}. \end{align} \subsection{Main Result}\label{sec:main_M} For brevity, let $\mathcal{M} :=\{(r,s)\in[M]^2:r\neq s\}$. Given any $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, for each $j\in[M]$, let \begin{align} \theta_j(\mathbf{P},\alpha) &:=\min_{i\in[M]:i\neq j}\mathrm{GJS}(P_i,P_j,\alpha)\label{def:thetaj}. \end{align} Consider any $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$ such that the minimizer for $\theta_j(\mathbf{P},\alpha)$ in~\eqref{def:thetaj} is unique for each $j\in[M]$ and denote the unique minimizer for $\theta_j(\mathbf{P},\alpha)$ as $i^*(j|\mathbf{P},\alpha)$. For simplicity, we use $i^*(j)$ to denote $i^*(j|\mathbf{P},\alpha)$ when the dependence on $\mathbf{P}$ and $\alpha$ is clear. From Gutman's results in \cite[Theorems 2 and 3]{gutman1989asymptotically}, we conclude that \begin{align} \liminf_{n\to\infty}\lambda^*(n,\alpha,\bm{\varepsilon}|\mathbf{P}) &\ge \min_{j\in[M]}\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)= \min_{(i,j)\in\mathcal{M}}\mathrm{GJS}(P_i,P_j,\alpha). \end{align} In this section, we refine the above asymptotic statement, and in particular, derive the second-order approximations to the fundamental limit $\lambda^*(n,\alpha,\bm{\varepsilon}|\mathbf{P})$.
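For readers who wish to experiment with these quantities, the following Python sketch evaluates $\mathrm{GJS}$ and $\theta_j(\mathbf{P},\alpha)$ on a small alphabet. It assumes the mixture form $\mathrm{GJS}(P_1,P_2,\alpha)=\alpha D(P_1\|\frac{\alpha P_1+P_2}{1+\alpha})+D(P_2\|\frac{\alpha P_1+P_2}{1+\alpha})$, which is consistent with the decomposition used later in \eqref{explain1}; the function names are ours.

```python
import math

def kl(p, q):
    """KL divergence D(p||q) between distributions on a common finite alphabet."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def gjs(p1, p2, alpha):
    """Generalized Jensen-Shannon divergence GJS(P1, P2, alpha):
    alpha * D(P1||mix) + D(P2||mix) with mix = (alpha*P1 + P2)/(1 + alpha)."""
    mix = [(alpha * a + b) / (1 + alpha) for a, b in zip(p1, p2)]
    return alpha * kl(p1, mix) + kl(p2, mix)

def theta(dists, j, alpha):
    """theta_j(P, alpha) = min over i != j of GJS(P_i, P_j, alpha)."""
    return min(gjs(dists[i], dists[j], alpha)
               for i in range(len(dists)) if i != j)

# A toy tuple of M = 3 distributions on a ternary alphabet.
P = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]]
i_star = min((i for i in range(3) if i != 0),
             key=lambda i: gjs(P[i], P[0], 1.0))  # the minimizer i*(0)
```

For $\alpha=1$ the divergence is symmetric in its first two arguments, and it vanishes exactly when the two distributions coincide, which matches the role it plays in \eqref{def:thetaj}.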
Given any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$ and any vector $\bm{\varepsilon}\in(0,1)^M$, let \begin{align} \mathcal{J}_1(\mathbf{P},\alpha) &:=\argmin_{j\in[M]}\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)\label{def:calJ1},\\ \mathcal{J}_2(\mathbf{P},\alpha) &:=\argmin_{j\in\mathcal{J}_1(\mathbf{P},\alpha)}\sqrt{\mathrm{V}(P_{i^*(j)},P_j,\alpha)}\Phi^{-1}(\varepsilon_j)\label{def:calJ2}. \end{align} \begin{theorem} \label{second:cm} For any $\alpha\in\mathbb{R}_+$, any $\bm{\varepsilon}\in(0,1)^M$ and any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$ such that the minimizer for $\theta_j(\mathbf{P},\alpha)$ is unique for each $j\in[M]$, we have \begin{align} \lambda^*(n,\alpha,\bm{\varepsilon}|\mathbf{P}) &=\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+\sqrt{\frac{\mathrm{V}(P_{i^*(j)},P_j,\alpha)}{n}}\Phi^{-1}(\varepsilon_j)+O\left(\frac{\log n}{n}\right), \label{eqn:multiple} \end{align} where \eqref{eqn:multiple} holds for any $j\in\mathcal{J}_2(\mathbf{P},\alpha)$. \end{theorem} The proof of Theorem \ref{second:cm} is given in Section \ref{proof:second:cm}. Several remarks are in order. First, in the achievability proof, we make use of a test proposed by Unnikrishnan~\cite[Theorem~4.1]{unnikrishnan2015asymptotically} and show that it is second-order optimal for classification of multiple hypotheses with rejection. Second, we remark that it is not straightforward to obtain the results in Theorem \ref{second:cm} using the same set of techniques as those used to prove Theorem \ref{bc:second}. The converse proof of Theorem \ref{second:cm} is a generalization of that for Theorem \ref{bc:second}. However, the achievability proof is more involved. As can be gleaned from our proof in Section \ref{proof:second:cm}, the test by Unnikrishnan (see~\eqref{def:unntest}) outputs rejection if the second smallest value of $\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\}_{i\in[M]}$ is smaller than a threshold $\tilde{\lambda}$.
The main difficulty lies in identifying the index of the second smallest value in $\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\}_{i\in[M]}$. Note that for each realization of $(\mathbf{x}^N,y^n)$, such an index can potentially be different. However, we show that for any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$ satisfying the condition in Theorem \ref{second:cm}, if the training sequences are generated in an i.i.d.\ fashion according to $\mathbf{P}$, with probability tending to one, the index of the second smallest value in $\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\}_{i\in[M]}$ under hypothesis $\mathrm{H}_j$ is given by $i^*(j)$. Equipped with this important observation, we establish our achievability proof by proceeding similarly to that of Theorem~\ref{bc:second}. Finally, we remark that one might also consider tests which provide inhomogeneous performance guarantees under different hypotheses in terms of the error probabilities for all tuples of distributions and, at the same time, constrain the sum of all rejection probabilities to be upper bounded by some $\varepsilon\in(0,1)$. In this direction, the fundamental limit of interest is \begin{align} \Lambda(n,\alpha,\varepsilon|\mathbf{P}) \nn:=\Big\{\bm{\lambda}=(\lambda_1,\ldots,\lambda_M)\in\mathbb{R}_+^M: \exists\ \psi_n~\mathrm{s.t.~}\forall j\in[M], \beta_j(\psi_n|\tilde{\mathbf{P}})&\leq \exp(-n\lambda_j),~\forall \, \tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M,\\* \sum_{j\in[M]}\zeta_j(\psi_n|\mathbf{P})&\leq \varepsilon \Big\}\label{def:inhomo}. \end{align} Characterizing the second-order asymptotics of the set $\Lambda(n,\alpha,\varepsilon|\mathbf{P})$ for $M\geq 3$ is challenging.
However, when $M=2$, using similar proof techniques as that for Theorem \ref{second:cm}, we can characterize the following {\em second-order region}~\cite[Chapter~6]{Tanbook} \begin{align} \mathcal{L}(\alpha,\varepsilon|P_1,P_2) \nn := \Bigg\{ (L_1,L_2)\in\mathbb{R}^2 &: \exists\ \{\psi_n\}_{n=1}^\infty \mathrm{~s.t.}~ \forall\, (\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2 ,\nn\\* &\liminf_{n\to\infty}\frac{1}{\sqrt{n}}\Big( \log \frac{1}{\beta_1(\psi_n|\tilde{P}_1,\tilde{P}_2)} - n\, \mathrm{GJS}(P_1,P_2,\alpha)\Big) \ge L_1,\nn\\* \nn&\liminf_{n\to\infty}\frac{1}{\sqrt{n}}\Big( \log \frac{1}{\beta_2(\psi_n|\tilde{P}_1,\tilde{P}_2)} - n\, \mathrm{GJS}(P_2,P_1,\alpha)\Big) \ge L_2,\\* &\limsup_{n\to\infty}\sum_{j\in[2]}\zeta_j(\psi_n| P_1,P_2)\leq \varepsilon \Bigg\}\label{def:callseregion}. \end{align} Indeed, one can consider the following generalization of Gutman's test~\cite[Theorem 2]{gutman1989asymptotically} \begin{align} \psi_n^{\rm{Gut}}(x_1^N,x_2^N,y^n)&:= \left\{ \begin{array}{ll} \mathrm{H}_1&\mathrm{if~}\mathrm{GJS}(\hat{T}_{x_2^N},\hat{T}_{y^n},\alpha)-\tilde{\lambda}_2> 0,\\ \mathrm{H}_2&\mathrm{if~}\mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)-\tilde{\lambda}_1> 0,\mathrm{GJS}(\hat{T}_{x_2^N},\hat{T}_{y^n},\alpha)-\tilde{\lambda}_2\leq 0\\ \mathrm{H}_\mathrm{r}&\mathrm{if~}\mathrm{GJS}(\hat{T}_{x_i^N},\hat{T}_{y^n},\alpha)-\tilde{\lambda}_i\leq 0,i\in[2] \end{array} \right.\label{gutmanbcr}, \end{align} where $\tilde{\lambda}_1$ and $\tilde{\lambda}_2$ are thresholds chosen so that the sum of the two rejection probabilities is upper bounded by $\varepsilon\in(0,1)$. Then, by means of a standard calculation, \begin{align} \mathcal{L}(\alpha,\varepsilon|P_1,P_2) &=\bigg\{(L_1,L_2)\in\mathbb{R}^2:\Phi\left(\frac{L_1}{\sqrt{\mathrm{V}(P_1,P_2,\alpha)}}\right)+\Phi\left(\frac{L_2}{\sqrt{\mathrm{V}(P_2,P_1,\alpha)}}\right)\leq \varepsilon\bigg\}.
\end{align} This result clearly elucidates a trade-off between $L_1$ and $L_2$ or, equivalently, the two rejection probabilities $\zeta_1(\psi_n| P_1,P_2)$ and $\zeta_2(\psi_n| P_1,P_2)$. \subsection{Analysis in a Dual Setting} \label{sec:dual:cm} Similarly to the analysis of the dual setting in Section~\ref{sec:weakconvergence}, for classification of multiple hypotheses with the rejection option, one might be interested in studying tests whose type-$j$ error probability, for each $j\in[M]$, is upper bounded by a constant for all tuples of distributions $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$ and whose type-$j$ rejection probability, for each $j\in[M]$, decays exponentially fast for a particular $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$. To be specific, given any decision rule $\Psi_n$ and any $\varepsilon\in(0,1)$, we study the following non-asymptotic fundamental limit: \begin{align} \tau^*(n,\alpha,\varepsilon|\Psi_n,\mathbf{P}) \nn:=\sup\big\{\tau\in\mathbb{R}_+: \forall j\in[M], \beta_j(\Psi_n|\tilde{\mathbf{P}})&\leq \varepsilon,\forall~\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M,\\* \zeta_j(\Psi_n|\mathbf{P})&\leq \exp(-n\tau) \big\}\label{def:caltau2}. \end{align} To analyze the fundamental limit in \eqref{def:caltau2}, given training and test sequences $(\mathbf{x}^N,y^n)$, we consider Gutman's test~\cite[Theorem 2]{gutman1989asymptotically}, which is given by the following rule \begin{align} \Psi_n^{\rm{Gut}} (\mathbf{x}^N,y^n) &:= \left\{ \begin{array}{ll} \mathrm{H}_1&\mathrm{if~}\min_{i\in[M]:i\neq 1}\mathrm{GJS}(\hat{T}_{x_i^N},\hat{T}_{y^n},\alpha)>\lambda,\\ \mathrm{H}_j&\mathrm{if~}\min_{i\in[M]:i\neq j}\mathrm{GJS}(\hat{T}_{x_i^N},\hat{T}_{y^n},\alpha)>\lambda,\mathrm{GJS}(\hat{T}_{x_j^N},\hat{T}_{y^n},\alpha)\leq \lambda,\\ \mathrm{H}_\mathrm{r}& \mathrm{otherwise} \end{array} \right.\label{gut:mwithreject} \end{align} for $j\in[2:M]$.
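A minimal numerical sketch of such a Gutman-type rule is given below (in Python, with our own function names). It accepts $\mathrm{H}_j$ when the empirical type of the test sequence is GJS-close to the $j$-th training type and GJS-far from all others, and rejects otherwise; for simplicity, this variant also requires the $j$-th score itself to be below the threshold.

```python
import math
from collections import Counter

def emp_type(seq, alphabet):
    """Empirical distribution (type) of a sequence over a fixed alphabet."""
    c = Counter(seq)
    return [c[a] / len(seq) for a in alphabet]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def gjs(p1, p2, alpha):
    mix = [(alpha * a + b) / (1 + alpha) for a, b in zip(p1, p2)]
    return alpha * kl(p1, mix) + kl(p2, mix)

def gutman_with_rejection(train_seqs, test_seq, alphabet, alpha, lam):
    """Accept hypothesis j iff the j-th GJS score is <= lam while every
    other score exceeds lam; otherwise output the rejection symbol 'r'."""
    y = emp_type(test_seq, alphabet)
    scores = [gjs(emp_type(x, alphabet), y, alpha) for x in train_seqs]
    for j, s in enumerate(scores):
        if s <= lam and all(t > lam for i, t in enumerate(scores) if i != j):
            return j
    return 'r'
```

With two well-separated training types, a test sequence matching one of them is classified correctly, while a sequence between them triggers rejection.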
The reason why, unlike in Section~\ref{sec:main_M}, we do not analyze Unnikrishnan's test~\cite{unnikrishnan2015asymptotically} (see~\eqref{def:unntest}) is that it is designed so that the $j$-th error probability $ \beta_j(\psi_n|\tilde{\mathbf{P}}) $ decays exponentially fast for every tuple of distributions $\tilde{\mathbf{P}}$. Since \eqref{def:caltau2} only requires the error probabilities to be upper bounded by the constant $\varepsilon$, and thus allows them to be non-vanishing, Unnikrishnan's test is clearly not suited to this dual regime. To present our result, we need the following definition. Given any triple of distributions $(P_1,P_2,P_3)\in\mathcal{P}(\mathcal{X})^3$ and any $\gamma\in\mathbb{R}_+$, define a generalized divergence measure between three distributions as \begin{align} D_{\gamma}(P_1,P_2,P_3) &:=\frac{1}{\gamma-1}\log\bigg(\sum_{x}P_1(x)^{1-\gamma}P_2(x)^{\frac{\gamma}{2}}P_3(x)^{\frac{\gamma}{2}}\bigg)\label{def:gdivergence}. \end{align} \begin{proposition} \label{dual:unn} For any $\alpha\in\mathbb{R}_+$, any $\varepsilon\in(0,1)$ and any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, we have \begin{align} \lim_{n\to\infty}\tau^*(n,\alpha,\varepsilon|\Psi_n^{\rm{Gut}},\mathbf{P})&=\min_{j\in[M]}\min_{(i,k)\in\mathcal{M}}D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k), \end{align} where $\mathcal{M}=\{(r,s)\in[M]^2:r\neq s\}$. \end{proposition} The proof of Proposition \ref{dual:unn} is provided in Section \ref{proof:dual:unn}. Several remarks are in order. First, the exponent of Gutman's test in~\eqref{gut:mwithreject} in the dual setting is considerably different from that in Theorem~\ref{second:cm}. Intuitively, this is because in this setting, in order to ensure that the error probability under each hypothesis is upper bounded by $\varepsilon$ for all $\tilde{\mathbf{P}}$, we need to choose $\lambda=\Theta(\frac{1}{n})$ in \eqref{gut:mwithreject}. In contrast, $\lambda$ is chosen to be $\Theta(1)$ in the proof of Theorem \ref{second:cm}.
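The divergence \eqref{def:gdivergence} appearing in Proposition~\ref{dual:unn} is simple to evaluate numerically; the sketch below is a direct transcription (the function name is ours). As a sanity check, $D_\gamma(P,P,P)=0$, and the order used in the proposition is $\gamma=\frac{2\alpha}{1+2\alpha}\in(0,1)$.

```python
import math

def d_gamma(p1, p2, p3, gamma):
    """Generalized divergence D_gamma(P1,P2,P3)
    = (1/(gamma-1)) * log( sum_x P1(x)^{1-gamma} P2(x)^{gamma/2} P3(x)^{gamma/2} )."""
    s = sum(a ** (1 - gamma) * b ** (gamma / 2) * c ** (gamma / 2)
            for a, b, c in zip(p1, p2, p3))
    return math.log(s) / (gamma - 1)

alpha = 1.0
gamma = 2 * alpha / (1 + 2 * alpha)  # the order appearing in the proposition
```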
Second, as $\alpha\to 0$, the exponent $D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k)\to 0$ for each $(j,i,k)\in[M]\times\mathcal{M}$. Thus, if the ratio of the lengths of the training to test sequences is vanishingly small, the rejection probabilities cannot decay exponentially fast. This conforms to our intuition as there are too few training samples to train effectively. Finally, when $\alpha\to\infty$, one can verify that $D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k)\to \infty$ and thus the rejection probabilities decay super-exponentially fast if the length of the training sequences $N$ scales faster than the length of the test sequence $n$, i.e., $N=\omega(n)$. In contrast, in Proposition~\ref{weakc4bc}, when $\alpha\to \infty$, the exponent of the type-II error probability for any $(P_1,P_2)$ converges to the Chernoff-Stein exponent $D(P_1\|P_2)$~\cite{chernoff1952measure}, which is finite. Why is there a dichotomy when, in both settings, $N$ is much larger than $n$ and so one can estimate the underlying distributions with arbitrarily high accuracy? The dichotomy between these two results is due to a subtle difference between the two settings, which we delineate here. In Proposition \ref{weakc4bc}, a test sequence is generated according to $P_1$ or $P_2$ and one is asked to make a decision {\em without the rejection option}. If the true pair of distributions is known, the setting basically reduces to {\em binary hypothesis testing}~\cite{chernoff1952measure} and so $D(P_1\| P_2)$ is the type-II exponent. However, in Proposition~\ref{dual:unn}, a test sequence is generated according to one of the $M$ unknown distributions in $\mathbf{P}$ and one is also {\em allowed the rejection option}.
When the true $\mathbf{P}$ is known (i.e., the case $\alpha\to\infty$ which allows one to estimate $\mathbf{P}$ accurately), the setting in Proposition~\ref{dual:unn} essentially reduces to {\em $M$-ary hypothesis testing} in which rejection is no longer permitted, which implies that the exponent of the probability of the rejection event $\tau^*(n,\alpha,\varepsilon|\Psi_n^{\mathrm{Gut}},\mathbf{P})$ tends to infinity. \section{Proof of the Main Results} \label{sec:proofs} \subsection{Proof of Theorem \ref{bc:second}} \label{proof:bcsecond} In this section, we present the proof of the second-order asymptotics for the binary classification problem. The main techniques used are the method of types, Taylor approximations of the generalized Jensen-Shannon divergence and a careful application of the central limit theorem. \subsubsection{Achievability Proof} \label{sec:ach4bc} In the achievability proof, we use Gutman's test \eqref{gutmanrule} with the threshold $\lambda$ replaced by \begin{align} \tilde{\lambda}&:=\lambda-\frac{|\mathcal{X}|\log \big((1+\alpha)n+1\big)}{n}\label{def:tildelambda}. \end{align} Given any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, the type-I and type-II error probabilities for $\phi_n^{\rm{Gut}}$ are given by \begin{align} \beta_1(\phi_n^{\rm{Gut}}|P_1,P_2) &=\mathbb{P}_1\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)>\tilde{\lambda}\Big\},\\* \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2) &=\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)\leq \tilde{\lambda}\Big\}. \end{align} We first analyze $\beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)$. Given any $P\in\mathcal{P}(\mathcal{X})$, define the following typical set: \begin{align} \mathcal{B}_n(P) &:=\bigg\{x^n\in\mathcal{X}^n:\max_{x\in\mathcal{X}}|\hat{T}_{x^n}(x)-P(x)|\leq \sqrt{\frac{\log n}{n}}\bigg\}\label{def:typical}.
\end{align} By Chebyshev's inequality (see also \cite[Lemma 22]{tan2014state}), we can show that \begin{align} \mathbb{P}_2\Big\{X_1^N\notin\mathcal{B}_N(P_1)~\mathrm{or}~Y^n\notin\mathcal{B}_n(P_2)\Big\} &\leq \frac{2|\mathcal{X}|}{N^2}+\frac{2|\mathcal{X}|}{n^2}=\frac{2(1+\alpha^2)|\mathcal{X}|}{\alpha^2 n^2}=:\tau_n\label{pofatypical}. \end{align} Recall the definitions of the information densities in \eqref{def:i}. It is easy to verify that \begin{align} \mathrm{GJS}(P_1,P_2,\alpha) &=\alpha\mathbb{E}_{P_1}\left[\imath_1(X|P_1,P_2,\alpha)\right]+\mathbb{E}_{P_2}\left[\imath_2(X|P_1,P_2,\alpha)\right]\label{averageimath=GJS}. \end{align} Furthermore, for any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$ and any $\alpha\in\mathbb{R}_+$, the derivatives of the generalized Jensen-Shannon divergence $\mathrm{GJS}(P_1,P_2,\alpha)$ are as follows: \begin{alignat}{2} \frac{\partial \mathrm{GJS}(P_1,P_2,\alpha)}{\partial P_1(x)} &=\alpha\imath_1(x|P_1,P_2,\alpha), \qquad &\forall\, x\in\mathrm{supp}(P_1)\label{firstd1},\\ \frac{\partial \mathrm{GJS}(P_1,P_2,\alpha)}{\partial P_2(x)} &=\imath_2(x|P_1,P_2,\alpha), \qquad & \forall\, x\in\mathrm{supp}(P_2)\label{firstd2},\\ \frac{\partial^2 \mathrm{GJS}(P_1,P_2,\alpha)}{\partial (P_1(x))^2} &=\frac{\alpha P_2(x)}{P_1(x)(\alpha P_1(x)+P_2(x))},\qquad &\forall\, x\in\mathrm{supp}(P_1),\\ \frac{\partial^2 \mathrm{GJS}(P_1,P_2,\alpha)}{\partial (P_2(x))^2} &=\frac{\alpha P_1(x)}{P_2(x)(\alpha P_1(x)+P_2(x))},\qquad &\forall\, x\in\mathrm{supp}(P_2),\\ \frac{\partial^2 \mathrm{GJS}(P_1,P_2,\alpha)}{\partial P_1(x)\partial P_2(x)} &=-\frac{\alpha}{\alpha P_1(x)+P_2(x)},\qquad &\forall\, x\in\mathrm{supp}(P_1)\cap\mathrm{supp}(P_2) \label{seconddlast}.
\end{alignat} Using the results in \eqref{firstd1}--\eqref{seconddlast} and applying a Taylor expansion to $\mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)$ around $(P_1,P_2)$ for any $x_1^N\in\mathcal{B}_N(P_1)$ and $y^n\in\mathcal{B}_n(P_2)$, we obtain \begin{align} \nn&\mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)\\* \nn&=\mathrm{GJS}(P_1,P_2,\alpha)+\sum_{x\in\mathcal{X}}(\hat{T}_{x_1^N}(x)-P_1(x))\alpha\imath_1(x|P_1,P_2,\alpha)+\sum_{x\in\mathcal{X}}(\hat{T}_{y^n}(x)-P_2(x))\imath_2(x|P_1,P_2,\alpha)\\* &\qquad+O\big(\|\hat{T}_{x_1^N}-P_1\|^2\big)+O\big(\|\hat{T}_{y^n}-P_2\|^2\big)\\* &=\frac{1}{n}\sum_{i\in[N]}\imath_1(x_{1,i}|P_1,P_2,\alpha) +\frac{1}{n}\sum_{i\in[n]}\imath_2(y_i|P_1,P_2,\alpha)+O\left(\frac{\log n}{n}\right)\label{taylor}, \end{align} where \eqref{taylor} follows from the fact that $N= \lceil n\alpha \rceil$ and the fact that the types in $\mathcal{B}_N(P_1)$ and $\mathcal{B}_n(P_2)$ are $O(\sqrt{\frac{\log n}{n}})$-close to the generating (underlying) distributions $P_1$ and $P_2$. Recall the definition of $\mathrm{V}(P_1,P_2,\alpha)$ in \eqref{def:v}. Let the linear combination of the third absolute moments of the information densities in \eqref{def:i} be defined as \begin{align} \mathrm{T}(P_1,P_2,\alpha) &:=\alpha\mathbb{E}_{P_1}\left[\big|\imath_1(X|P_1,P_2,\alpha)-\mathbb{E}_{P_1}[\imath_1(X|P_1,P_2,\alpha)]\big|^3\right] \nn\\* &\qquad+\mathbb{E}_{P_2}\left[\big|\imath_2(X|P_1,P_2,\alpha)-\mathbb{E}_{P_2}[\imath_2(X|P_1,P_2,\alpha)]\big|^3\right]\label{def:rmT}.
\end{align} We can upper bound the type-II error probability as follows: \begin{align} \nn&\beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)=\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)\leq \tilde{\lambda}\Big\}\\* \nn&\le\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)\leq\tilde{\lambda},X_1^N\in\mathcal{B}_N(P_1),Y^n\in\mathcal{B}_n(P_2)\Big\}\\* &\qquad+\mathbb{P}_2\Big\{X_1^N\notin\mathcal{B}_N(P_1)\mathrm{~or~}Y^n\notin\mathcal{B}_n(P_2)\Big\}\\ &\le\mathbb{P}_2\bigg\{\frac{1}{n}\sum_{i\in[N]}\imath_1(X_{1,i}|P_1,P_2,\alpha) +\frac{1}{n}\sum_{i\in[n]}\imath_2(Y_i|P_1,P_2,\alpha)+O\left(\frac{\log n}{n}\right)\leq \tilde{\lambda}\bigg\}+\tau_n\label{usetaylor&atypical}\\ \nn&=\mathbb{P}_2\bigg\{\frac{1}{n+N}\sum_{i\in[N]}\big(\imath_1(X_{1,i}|P_1,P_2,\alpha)-\mathbb{E}_{P_1}[\imath_1(X|P_1,P_2,\alpha)]\big) \nn\\* &\quad +\frac{1}{n\!+\! N}\sum_{i\in[n]}\big(\imath_2(Y_i|P_1,P_2,\alpha)\!-\!\mathbb{E}_{P_2}[\imath_2(X|P_1,P_2,\alpha)]\big)\!\leq \!\frac{\lambda-\mathrm{GJS}(P_1,P_2,\alpha)+O (\frac{ \log n}{n} ) }{1+\alpha}\bigg\}\!+\!\tau_n\label{usel&ei}\\ &\leq \Phi\left(\left(\lambda-\mathrm{GJS}(P_1,P_2,\alpha)+O\left(\frac{\log n}{n}\right)\right)\sqrt{\frac{n}{\mathrm{V}(P_1,P_2,\alpha)}}\right)+\frac{6\mathrm{T}(P_1,P_2,\alpha)}{\sqrt{n(\mathrm{V}(P_1,P_2,\alpha))^3}}+\tau_n\label{useberry}, \end{align} where \eqref{usetaylor&atypical} follows from the bound in~\eqref{pofatypical} and the Taylor expansion in~\eqref{taylor}; \eqref{usel&ei} follows from the expression for $\mathrm{GJS}(P_1,P_2,\alpha)$ in \eqref{averageimath=GJS}, the fact that $N=n\alpha$ and the definition of $\tilde{\lambda}$ in \eqref{def:tildelambda}; and \eqref{useberry} follows from the Berry-Esseen theorem~\cite{berry1941accuracy,esseen1942}.
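The Gaussian approximation above can be reproduced numerically. The sketch below (function names ours) computes $\mathrm{GJS}(P_1,P_2,\alpha)$ via \eqref{averageimath=GJS} and takes $\mathrm{V}(P_1,P_2,\alpha)=\alpha\mathrm{Var}_{P_1}[\imath_1]+\mathrm{Var}_{P_2}[\imath_2]$, the variance that makes the Berry-Esseen step \eqref{useberry} consistent with the normalization in \eqref{usel&ei}; $\Phi^{-1}$ is supplied by the standard library.

```python
import math
from statistics import NormalDist

def info_densities(p1, p2, alpha):
    """Information densities i_1, i_2: log ratios of P1, P2 to the mixture
    (alpha*P1 + P2)/(1 + alpha), coordinate by coordinate."""
    mix = [(alpha * a + b) / (1 + alpha) for a, b in zip(p1, p2)]
    i1 = [math.log(a / m) if a > 0 else 0.0 for a, m in zip(p1, mix)]
    i2 = [math.log(b / m) if b > 0 else 0.0 for b, m in zip(p2, mix)]
    return i1, i2

def gjs_and_v(p1, p2, alpha):
    """GJS(P1,P2,alpha) = alpha*E_{P1}[i_1] + E_{P2}[i_2] together with the
    dispersion V(P1,P2,alpha) = alpha*Var_{P1}[i_1] + Var_{P2}[i_2]."""
    i1, i2 = info_densities(p1, p2, alpha)
    m1 = sum(a * x for a, x in zip(p1, i1))
    m2 = sum(b * x for b, x in zip(p2, i2))
    v1 = sum(a * (x - m1) ** 2 for a, x in zip(p1, i1))
    v2 = sum(b * (x - m2) ** 2 for b, x in zip(p2, i2))
    return alpha * m1 + m2, alpha * v1 + v2

def second_order_threshold(p1, p2, alpha, eps, n):
    """GJS + sqrt(V/n) * Phi^{-1}(eps), ignoring the O(log n / n) residue."""
    g, v = gjs_and_v(p1, p2, alpha)
    return g + math.sqrt(v / n) * NormalDist().inv_cdf(eps)
```

For $\varepsilon<1/2$ the second-order term is negative, so the achievable exponent at finite $n$ falls below the first-order value $\mathrm{GJS}(P_1,P_2,\alpha)$, as the theorem predicts.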
Similarly to \eqref{def:type1err} and \eqref{def:type2err}, for $j\in[2]$, we define $\tilde{\mathbb{P}}_j\{\cdot\}:=\Pr\{\cdot|\mathrm{H}_j\}$ where $(X_1^N,X_2^N)$ are generated from the pair of distributions $(\tilde{P}_1,\tilde{P}_2)$. For all $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, the type-I error probability can be upper bounded as follows: \begin{align} \beta_1(\phi_n^{\rm{Gut}}|\tilde{P}_1,\tilde{P}_2) &=\tilde{\mathbb{P}}_1\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)>\tilde{\lambda}\Big\}\\* &=\sum_{x_1^N,y^n: \mathrm{GJS}(\hat{T}_{x_1^N},\hat{T}_{y^n},\alpha)>\tilde{\lambda}}\tilde{P}_1^N(x_1^N)\tilde{P}_1^n(y^n)\\ &=\sum_{ (Q_1,Q_2): \mathrm{GJS}(Q_1,Q_2,\alpha)> \tilde{\lambda}} \tilde{P}_1^N(\mathcal{T}^N_{Q_1})\tilde{P}_1^n(\mathcal{T}^n_{Q_2})\\ &\leq \sum_{(Q_1,Q_2) :\mathrm{GJS}(Q_1,Q_2,\alpha)\geq \tilde{\lambda}}\exp\big\{-ND(Q_1\|\tilde{P}_1)-nD(Q_2\|\tilde{P}_1)\big\}\\ &\leq \sum_{ (Q_1,Q_2): \mathrm{GJS}(Q_1,Q_2,\alpha)\geq \tilde{\lambda}}\exp(-n\tilde{\lambda})\exp\bigg\{-n(1+\alpha)D\left(\frac{\alpha Q_1+Q_2}{1+\alpha}\Big\|\tilde{P}_1\right)\bigg\}\label{explain1}\\ &\leq \exp(-n\tilde{\lambda})\sum_{Q\in\mathcal{P}_{n+N}(\mathcal{X})}\exp\big\{-(n+N)D(Q\|\tilde{P}_1)\big\}\\ &\leq \exp(-n\tilde{\lambda})\sum_{Q\in\mathcal{P}_{n+N}(\mathcal{X})}(n+N+1)^{|\mathcal{X}|}\tilde{P}_1^{n+N}(\mathcal{T}^{n+N}_{Q})\\ &\leq \exp\Big\{-n\tilde{\lambda}+|\mathcal{X}|\log \big((1+\alpha)n+1\big)\Big\}\\* &=\exp(-n\lambda),\label{upptype2} \end{align} where \eqref{upptype2} follows from the definition of $\tilde{\lambda}$ in \eqref{def:tildelambda} and \eqref{explain1} follows since \begin{align} \nn&ND(Q_1\|\tilde{P}_1)+nD(Q_2\|\tilde{P}_1)\\* &=n\alpha\mathbb{E}_{Q_1}\left[\log\frac{Q_1(X)}{\tilde{P}_1(X)}\right]+n\mathbb{E}_{Q_2}\left[\log\frac{Q_2(X)}{\tilde{P}_1(X)}\right]\\ &=n\alpha\mathbb{E}_{Q_1}\left[\log\frac{(1+\alpha)Q_1(X)}{\alpha Q_1(X)+Q_2(X)}\right] 
+n\mathbb{E}_{Q_2}\left[\log\frac{(1+\alpha)Q_2(X)}{\alpha Q_1(X)+Q_2(X)}\right]\nn\\* &\qquad+n(1+\alpha)D\left(\frac{\alpha Q_1+Q_2}{1+\alpha}\Big\|\tilde{P}_1\right)\\ &=n\mathrm{GJS}(Q_1,Q_2,\alpha)+n(1+\alpha)D\left(\frac{\alpha Q_1+Q_2}{1+\alpha}\Big\|\tilde{P}_1\right)\\ &\geq n\lambda+n(1+\alpha)D\left(\frac{\alpha Q_1+Q_2}{1+\alpha}\Big\|\tilde{P}_1\right). \end{align} For brevity, let \begin{align} \rho_n&:=\frac{6\mathrm{T}(P_1,P_2,\alpha)}{\sqrt{n(\mathrm{V}(P_1,P_2,\alpha))^3}}+\tau_n\label{def:rhon}. \end{align} Combining the results in \eqref{useberry} and \eqref{upptype2}, if we choose $\lambda\in\mathbb{R}_+$ such that \begin{align} \lambda &=\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}\left(\varepsilon-\rho_n\right)+O\left(\frac{\log n}{n}\right), \end{align} then Gutman's test with the threshold $\tilde{\lambda}$ in \eqref{def:tildelambda} satisfies (i) $\beta_1(\phi_n^{\rm{Gut}}|\tilde{P}_1,\tilde{P}_2)\leq \exp(-n\lambda)$ for all $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, and (ii) $\beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)\leq \varepsilon$. Therefore, we conclude that \begin{align} \lambda^*(n,\alpha,\varepsilon|P_1,P_2) &\geq \mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon-\rho_n)+O\left(\frac{\log n}{n}\right)\\ &=\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon)+O\left(\frac{\log n}{n}\right)\label{taylorphi}, \end{align} where \eqref{taylorphi} follows from a Taylor approximation of $\Phi^{-1}(\cdot)$ (cf. \cite[Corollary 51]{polyanskiy2010finite}). \subsubsection{Converse Proof} \label{sec:converse4bc} The following lemma relates the error probabilities of any test to those of a type-based test (i.e., a test which is a function of only the marginal types $(\hat{T}_{X_1^N}, \hat{T}_{X_2^N}, \hat{T}_{Y^n})$).
\begin{lemma} \label{anytotype} For any test $\phi_n$, given any $\kappa\in[0,1]$ and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we can construct a type-based test $\phi_n^\mathrm{T}$ such that \begin{align} \beta_1(\phi_n|P_1,P_2)&\geq \kappa \beta_1(\phi_n^\mathrm{T}|P_1,P_2),\\* \beta_2(\phi_n|P_1,P_2)&\geq (1-\kappa)\beta_2(\phi_n^\mathrm{T}|P_1,P_2). \end{align} \end{lemma} The proof of Lemma \ref{anytotype} is inspired by \cite[Lemma 2]{gutman1989asymptotically} and provided in Appendix \ref{proof:anytotype}. The following lemma shows that for any type-based test $\phi_n^\mathrm{T}$, if we constrain the type-I error probability to decay exponentially fast for all pairs of distributions, then the type-II error probability for any particular pair of distributions can be lower bounded by a certain cdf of the generalized Jensen-Shannon divergence evaluated at the marginal types of the training and test sequences. The lemma can be used to assert that Gutman's test in~\eqref{gutmanrule} is ``almost'' optimal when restricted to the class of all type-based tests. For brevity, given $\alpha\in\mathbb{R}_+$, let \begin{align} \eta_n(\alpha)&:=\frac{|\mathcal{X}|\log (n+1)}{n} + \frac{2|\mathcal{X}|\log (1+\alpha n)}{\alpha n}\label{def:etan}. \end{align} \begin{lemma} \label{typeconverse} For any $\lambda\in\mathbb{R}_+$ and any type-based test $\phi_n^\mathrm{T}$ satisfying that for all pairs of distributions $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \beta_1(\phi_n^\mathrm{T}|\tilde{P}_1,\tilde{P}_2)\leq \exp(-n\lambda),\label{typeconstraint} \end{align} we have that for any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \beta_2(\phi_n^\mathrm{T}|P_1,P_2)&\geq \mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)+\eta_n(\alpha)<\lambda\Big\}.
\end{align} \end{lemma} The proof of Lemma \ref{typeconverse} is inspired by \cite[Theorem 1]{gutman1989asymptotically} and provided in Appendix \ref{proof:typeconverse}. Combining the results in Lemmas \ref{anytotype} and \ref{typeconverse} and letting $\kappa={1}/{n}$, we obtain the following corollary. \begin{corollary} \label{converse} Given any $\lambda\in\mathbb{R}_+$, for any test $\phi_n$ satisfying the condition that for all pairs of distributions $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$ \begin{align} \beta_1(\phi_n|\tilde{P}_1,\tilde{P}_2)\leq \exp(-n\lambda)\label{testconstraint}, \end{align} we have that for any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \beta_2(\phi_n|P_1,P_2)\geq \left(1-\frac{1}{n}\right)\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)+\eta_n(\alpha)+\frac{\log n}{n}<\lambda\Big\}. \end{align} \end{corollary} Using Corollary \ref{converse}, the converse part of our second-order asymptotics can be proved similarly to the achievability part by using the result in \eqref{pofatypical}, the Taylor expansion in~\eqref{taylor}, the definition of $\rho_n$ in~\eqref{def:rhon} and applying the Berry-Esseen theorem similarly to \eqref{useberry}.
Invoking Corollary~\ref{converse}, we obtain that for any test $\phi_n$ satisfying \eqref{testconstraint} and any pair of distributions $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \nn&\beta_2(\phi_n|P_1,P_2) \geq \Big(1-\frac{1}{n}\Big)\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)+O\left(\frac{\log n}{n}\right)<\lambda\Big\}\\ &\quad\geq \Big(1-\frac{1}{n}\Big)\mathbb{P}_2\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)+O\left(\frac{\log n}{n}\right)<\lambda,X_1^N\in\mathcal{B}_N(P_1),Y^n\in\mathcal{B}_n(P_2)\Big\}\\ \nn&\quad\geq\Big(1-\frac{1}{n}\Big)\mathbb{P}_2\bigg\{\frac{1}{n}\sum_{i\in[N]}\imath_1(X_{1,i}|P_1,P_2,\alpha)+\frac{1}{n}\sum_{i\in[n]}\imath_2(Y_i|P_1,P_2,\alpha)+O\left(\frac{\log n}{n}\right)<\lambda\bigg\}\\* &\qquad-\Big(1-\frac{1}{n}\Big)\mathbb{P}_2\Big\{X_1^N\notin\mathcal{B}_N(P_1)~\mathrm{or}~Y^n\notin\mathcal{B}_n(P_2)\Big\}\\ &\quad\geq \Big(1-\frac{1}{n}\Big)\bigg\{\Phi\left(\left(\lambda-\mathrm{GJS}(P_1,P_2,\alpha)+O\left(\frac{\log n}{n}\right)\right)\sqrt{\frac{n}{\mathrm{V}(P_1,P_2,\alpha)}}\right)-\rho_n\bigg\}\label{converse:step1}. \end{align} Using \eqref{converse:step1} and the definition of $\lambda^*(\cdot|\cdot)$ in \eqref{def:l1^*}, we conclude that for any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \lambda^*(n,\alpha,\varepsilon|P_1,P_2) &\le\mathrm{GJS}(P_1,P_2,\alpha)+\sqrt{\frac{\mathrm{V}(P_1,P_2,\alpha)}{n}}\Phi^{-1}(\varepsilon)+O\left(\frac{\log n}{n}\right), \end{align} where a Taylor approximation of $\Phi^{-1}(\cdot)$ has been applied. \subsection{Proof of Proposition \ref{weakc4bc}} \label{proof:weakc4bc} \subsubsection{Preliminaries} In this subsection, we recall a weak convergence result of Unnikrishnan and Huang~\cite{unnikrishnan2016weak} and present a key lemma for the analysis of Gutman's decision rule in \eqref{gutmanrule}. 
Under $\mathrm{H}_1$, for all pairs of distributions $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, the weak convergence result of Unnikrishnan and Huang~\cite[Lemma 5]{unnikrishnan2016weak} shows that \begin{align} 2n \mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)\stackrel{\mathrm{d}}{\longrightarrow}\chi^2_{|\mathcal{X}|-1}\label{weakconjh}. \end{align} The following properties of $F(P_1,P_2,\alpha,\lambda)$, defined in~\eqref{def:FP1P2l}, play an important role in our analyses. \begin{lemma} \label{propF} The type-II exponent function $F(P_1,P_2,\alpha,\lambda)$ satisfies $F(P_1,P_2,\alpha,0)=D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)$, and the distribution achieving $F(P_1,P_2,\alpha,0)$ is $Q^*=P^{(\frac{\alpha}{1+\alpha})}$, where $P^{(\gamma)}$ is the {\em tilted distribution} \begin{equation} P^{(\gamma)}(x):=\frac{P_1^{\gamma}(x)P_2^{1-\gamma}(x)}{\sum_{a\in\mathcal{X}} P_1^{\gamma}(a)P_2^{1-\gamma}(a)}\label{def:Pgamma}. \end{equation} \end{lemma} The proof of Lemma \ref{propF} follows directly from applying the KKT conditions~\cite{boyd2004convex} to $F(P_1,P_2,\alpha,0)$, defined in \eqref{def:FP1P2l}, and so it is omitted. \subsubsection{Achievability Proof} Recall Gutman's test $\phi_n^{\rm{Gut}}$ in \eqref{gutmanrule}. Also recall that $\mathrm{G}_k^{-1}(\cdot)$ is the inverse of the complementary cdf of a chi-square random variable with $k$ degrees of freedom. If we choose \begin{align} \lambda=\frac{1}{2n}\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon)\label{lambddainwca}, \end{align} then using \eqref{weakconjh} and letting $Z\sim\chi^2_{|\mathcal{X}|-1}$, we have that for all $(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2$, \begin{align} \limsup_{n\to\infty}\beta_1(\phi_n^{\rm{Gut}}|\tilde{P}_1,\tilde{P}_2) &=\limsup_{n\to\infty}\tilde{\mathbb{P}}_1\Big\{\mathrm{GJS}(\hat{T}_{X_1^N},\hat{T}_{Y^n},\alpha)> \lambda\Big\}=\Pr\left\{Z>\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon)\right\}=\varepsilon\label{needtouse}.
\end{align} Furthermore, following similar steps as in \cite{gutman1989asymptotically}, for any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we can upper bound the type-II error probability as follows \begin{align} \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2) &\leq (n+1)^{|\mathcal{X}|}(N+1)^{|\mathcal{X}|}\exp\{-nF(P_1,P_2,\alpha,\lambda)\}\label{methodoftypeseasy}. \end{align} Using Lemma \ref{propF} and the fact that $F(P_1,P_2,\alpha,\lambda)$ is continuous in $\lambda$ \cite[Lemma~12]{Tan11_IT}, we obtain that \begin{align} \liminf_{n\to\infty}-\frac{1}{n}\log \beta_2(\phi_n^{\rm{Gut}}|P_1,P_2) &\geq \liminf_{n\to\infty} F\left(P_1,P_2,\alpha,\frac{\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon)}{2n}\right)\\ &=D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2). \end{align} \subsubsection{Converse Proof for Gutman's Test} From the result in \eqref{needtouse}, we conclude that in order for Gutman's test to satisfy that \begin{align} \limsup_{n\to\infty}\beta_1(\phi_n^{\rm{Gut}}|\tilde{P}_1,\tilde{P}_2)\leq \varepsilon,~\forall~(\tilde{P}_1,\tilde{P}_2)\in\mathcal{P}(\mathcal{X})^2, \end{align} the threshold $\lambda$ in Gutman's test in \eqref{gutmanrule} should satisfy that \begin{align} \lambda\geq \frac{1}{2n}\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon)\label{needtouse2}. \end{align} For simplicity, similar to \eqref{def:FP1P2l}, let \begin{align} F_n(P_1,P_2,\alpha,\lambda) &:=\min_{\substack{(Q_1,Q_2)\in\mathcal{P}_N(\mathcal{X})\times\mathcal{P}_n(\mathcal{X}):\\\mathrm{GJS}(Q_1,Q_2,\alpha)\leq \lambda}} \alpha D(Q_1\|P_1)+D(Q_2\|P_2)\label{def:Fn}. 
\end{align} Using the decision rule in \eqref{gutmanrule}, for any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$, we can lower bound the type-II error probability as follows: \begin{align} &\beta_2(\phi_n^{\rm{Gut}}|P_1,P_2)=\mathbb{P}_2\Big\{\phi_n^{\rm{Gut}}(Y^n,X_1^N,X_2^N)=\mathrm{H}_1\Big\}\\ &\quad=\sum_{(Q_1,Q_2) : \mathrm{GJS}(Q_1,Q_2,\alpha)\leq \lambda} P_2^n(\mathcal{T}^n_{Q_2})P_1^N(\mathcal{T}^N_{Q_1})\\ &\quad\geq \sum_{(Q_1,Q_2):\mathrm{GJS}(Q_1,Q_2,\alpha)\leq \lambda} (n+1)^{-|\mathcal{X}|}(N+1)^{-|\mathcal{X}|}\exp\big(-ND(Q_1\|P_1)-nD(Q_2\|P_2)\big)\\ &\quad\geq (n+1)^{-|\mathcal{X}|}(N+1)^{-|\mathcal{X}|}\exp(-nF_n(P_1,P_2,\alpha,\lambda))\\ &\quad\geq \exp\big(-nF_n(P_1,P_2,\alpha,0)-|\mathcal{X}|\log (n+1)-|\mathcal{X}|\log(n\alpha+1)\big)\label{decreasinl}, \end{align} where \eqref{decreasinl} follows since $\lambda\geq 0$ (see \eqref{needtouse2}) and since $F_n(P_1,P_2,\alpha,\lambda)$ is non-increasing in $\lambda$. The proof of the converse is completed by invoking the following lemma, which relates $F_n(P_1,P_2,\alpha,0)$ to $F(P_1,P_2,\alpha,0)=D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)$. For brevity, let $n':=\min\{n,N\}=\min\{n, \lceil n\alpha\rceil\}$. \begin{lemma} \label{fn<=f+} For any $(P_1,P_2)\in\mathcal{P}(\mathcal{X})^2$ and any $\alpha\in\mathbb{R}_+$, we have \begin{align} F_n(P_1,P_2,\alpha,0) &\leq D_{\frac{\alpha}{1+\alpha}}(P_1\|P_2)+\frac{(1+\alpha)|\mathcal{X}|}{n'}\log n'-\frac{\sum_x \log (P_1^\alpha(x)P_2(x))}{n'}. \end{align} \end{lemma} The proof of Lemma \ref{fn<=f+} is provided in Appendix \ref{proof:fn<=f+}. \subsection{Proof of Theorem \ref{second:cm}} \label{proof:second:cm} We present the proof of the second-order asymptotics for classification of multiple hypotheses with rejection. \subsubsection{Achievability Proof} We use a test proposed by Unnikrishnan in~\cite[Theorem~4.1]{unnikrishnan2015asymptotically}. To present this test, we need the following definitions.
Given training sequences $\mathbf{x}^N$ and a test sequence $y^n$, let \begin{align} i^*(\mathbf{x}^N,y^n) &:=\argmin_{i\in[M]} \mathrm{GJS}(\hat{T}_{x_i^N},\hat{T}_{y^n},\alpha)\label{firsti},\\* \tilde{h}(\mathbf{x}^N,y^n) &:=\min_{\substack{i\in[M]:i\neq i^*(\mathbf{x}^N,y^n)}} \mathrm{GJS}(\hat{T}_{x_i^N},\hat{T}_{y^n},\alpha)\label{def:tildeh}. \end{align} Now, given any training sequences $\mathbf{x}^N$ and test sequence $y^n$, with an appropriately chosen threshold $\tilde{\lambda}$, Unnikrishnan's test (abbreviated as Unn) operates as follows: \begin{align} \psi_n^{\rm{Unn}}(\mathbf{x}^N,y^n) &:= \left\{ \begin{array}{ll} \mathrm{H}_j&\mathrm{if}~i^*(\mathbf{x}^N,y^n)=j,~\tilde{h}(\mathbf{x}^N,y^n)\geq \tilde{\lambda}\\ \mathrm{H}_\mathrm{r}&\mathrm{if}~\tilde{h}(\mathbf{x}^N,y^n)<\tilde{\lambda}. \end{array} \right. \label{def:unntest} \end{align} Thus, given $\mathbf{P}$, the type-$j$ error and rejection probabilities for Unnikrishnan's test are \begin{align} \beta_j(\psi_n^{\rm{Unn}}|\mathbf{P}) &=\mathbb{P}_j\Big\{i^*(\mathbf{X}^N,Y^n)\neq j,\tilde{h}(\mathbf{X}^N,Y^n)\geq \tilde{\lambda}\Big\},\\* \zeta_j(\psi_n^{\rm{Unn}}|\mathbf{P}) &=\mathbb{P}_j\Big\{\tilde{h}(\mathbf{X}^N,Y^n)<\tilde{\lambda}\Big\}. \end{align} Similarly to \eqref{def:typejerror} and \eqref{def:typejreject}, for each $j\in[M]$, we define $\tilde{\mathbb{P}}_j\{\cdot\}:=\Pr\{\cdot|\mathrm{H}_j\}$ where the training sequences $\mathbf{X}^N$ are generated from $\tilde{\mathbf{P}}$.
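The decision rule in \eqref{def:unntest} is straightforward to prototype. The sketch below assumes the standard form of the GJS divergence, $\mathrm{GJS}(Q_1,Q_2,\alpha)=\alpha D(Q_1\|R)+D(Q_2\|R)$ with $R=(\alpha Q_1+Q_2)/(1+\alpha)$; the threshold, alphabet, and sequences are illustrative (and indices are 0-based), not taken from the paper.

```python
import math
from collections import Counter

def empirical_type(seq, alphabet):
    """Empirical distribution (type) of a sequence over a fixed alphabet."""
    counts = Counter(seq)
    n = len(seq)
    return [counts[a] / n for a in alphabet]

def kl(p, q):
    """KL divergence D(p || q) in nats; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def gjs(q1, q2, alpha):
    """Assumed form of the GJS divergence:
    GJS(q1, q2, alpha) = alpha*D(q1||r) + D(q2||r), r = (alpha*q1+q2)/(1+alpha)."""
    r = [(alpha * a + b) / (1 + alpha) for a, b in zip(q1, q2)]
    return alpha * kl(q1, r) + kl(q2, r)

def unnikrishnan_test(train_seqs, test_seq, alphabet, alpha, lam):
    """Sketch of the rule in the text: decide the hypothesis whose training
    type is GJS-closest to the test type, unless the second-smallest score
    falls below the threshold, in which case reject."""
    ty = empirical_type(test_seq, alphabet)
    scores = [gjs(empirical_type(x, alphabet), ty, alpha) for x in train_seqs]
    i_star = min(range(len(scores)), key=scores.__getitem__)
    h_tilde = min(s for i, s in enumerate(scores) if i != i_star)
    return i_star if h_tilde >= lam else "reject"
```

For instance, with training types $(0.8,0.2)$ and $(0.2,0.8)$ and a test type $(0.7,0.3)$, the rule decides the first hypothesis for a small threshold and rejects for a very large one.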
For each $j\in[M]$ and for all tuples of distributions $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$, we can upper bound the type-$j$ error probability as follows: \begin{align} \beta_j(\psi_n^{\rm{Unn}}|\tilde{\mathbf{P}}) &=\tilde{\mathbb{P}}_j\Big\{i^*(\mathbf{X}^N,Y^n)\neq j,~\mathrm{GJS}(X_k^N,Y^n,\alpha)\geq \tilde{\lambda},\forall~k\neq i^*(\mathbf{X}^N,Y^n)\Big\}\\* &\leq \tilde{\mathbb{P}}_j\Big\{\mathrm{GJS}(X_j^N,Y^n,\alpha)\geq \tilde{\lambda}\Big\}\\ &\leq (n(1+\alpha)+1)^{|\mathcal{X}|}\exp(-n\tilde{\lambda}),\label{ptypejerror} \end{align} where \eqref{ptypejerror} follows similarly to \eqref{upptype2}. We then upper bound the type-$j$ rejection probability with respect to a particular tuple of distributions $\mathbf{P}$ satisfying the condition in Theorem \ref{second:cm}. In the following, for brevity, we will use $\imath_1(x|i,j)$ (resp.\ $\imath_2(x|i,j)$) to denote $\imath_1(x|P_i,P_j,\alpha)$ (resp.\ $\imath_2(x|P_i,P_j,\alpha)$) in \eqref{def:i}. We will first show that with high probability, the minimizer for $\tilde{h}(\mathbf{X}^N,Y^n)$ in~\eqref{def:tildeh} is given by $i^*(j)$ (see \eqref{def:thetaj}) under hypothesis $\mathrm{H}_j$ for each $j\in[M]$.
For each $j\in[M]$, we have that \begin{align} \nn &\mathbb{P}_j\Big\{\mathrm{GJS}(X_j^N,Y^n,\alpha)>\mathrm{GJS}(X_{i^*(j)}^N,Y^n,\alpha)\Big\}\\ &\leq \mathbb{P}_j\bigg\{ -\frac{1}{n}\Big(\sum_{k\in[N]}\imath_1(X_{i^*(j),k}|i^*(j),j)+\sum_{k\in[n]}\imath_2(Y_k|i^*(j),j)\Big)<O\left(\frac{\log n}{n}\right)\bigg\}+2\tau_n\label{usetaylor}\\ &\leq \mathrm{Q}\bigg(\left(\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+O\left(\frac{\log n}{n}\right)\right)\sqrt{\frac{n}{\mathrm{V}(P_{i^*(j)},P_j,\alpha)}}\bigg)+\frac{6\mathrm{T}(P_{i^*(j)},P_j,\alpha)}{\sqrt{n(\mathrm{V}(P_{i^*(j)},P_j,\alpha))^3}}+2\tau_n\label{useberryagaina}\\ &\leq \exp\bigg(-\frac{n(\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+O(\frac{\log n}{n}))^2}{2\mathrm{V}(P_{i^*(j)},P_j,\alpha)}\bigg)+\frac{6\mathrm{T}(P_{i^*(j)},P_j,\alpha)}{\sqrt{n(\mathrm{V}(P_{i^*(j)},P_j,\alpha))^3}}+2\tau_n\label{useineqrmq}\\* &=:\mu_{1,n}(j)=O\left(\frac{1}{\sqrt{n}}\right)\label{def:mu1n}, \end{align} where \eqref{usetaylor} follows similarly to \eqref{usetaylor&atypical} and from the fact that $\imath_l(x|j,j)=0$ for $l\in[2]$; \eqref{useberryagaina} follows from the Berry-Esseen theorem similarly to \eqref{useberry}, where $\tau_n$ is defined in \eqref{pofatypical}; \eqref{useineqrmq} follows since $\mathrm{Q}(x)\leq \exp (-\frac{x^2}{2})$ for $x\geq 0$; and \eqref{def:mu1n} follows since $\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)> 0$ according to the assumption in Theorem \ref{second:cm} and thus the second term in \eqref{useineqrmq} dominates.
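The step from \eqref{useberryagaina} to \eqref{useineqrmq} uses the standard Chernoff-type bound $\mathrm{Q}(x)\leq \exp(-x^2/2)$ for $x\geq 0$, which is easy to check numerically via the identity $\mathrm{Q}(x)=\frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$ (a minimal sketch; the sample points are arbitrary):

```python
import math

def q_func(x):
    """Gaussian complementary cdf Q(x) = Pr{Z > x} for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# the Chernoff-type bound Q(x) <= exp(-x^2/2) used in the proof, for x >= 0
for x in [0.0, 0.25, 0.5, 1.0, 2.0, 5.0]:
    assert q_func(x) <= math.exp(-x * x / 2)
```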
Given any triple of distributions $(P_1,P_2,P_3)\in\mathcal{P}(\mathcal{X})^3$, let \begin{align} \tilde{\mathrm{V}}(P_1,P_2,P_3,\alpha) &:=\alpha\mathrm{Var}_{P_1}[\imath_1(X|1,3)]+\alpha\mathrm{Var}_{P_2}[\imath_1(X|2,3)]+\mathrm{Var}_{P_3}\left[\imath_2(X|1,3)-\imath_2(X|2,3)\right]\label{def:tildermv},\\ \tilde{\mathrm{T}}(P_1,P_2,P_3,\alpha) \nn&:=\alpha\mathbb{E}_{P_1}[|\imath_1(X|1,3)-\mathbb{E}_{P_1}[\imath_1(X|1,3)]|^3]+\alpha\mathbb{E}_{P_2}[|\imath_1(X|2,3)-\mathbb{E}_{P_2}[\imath_1(X|2,3)]|^3]\\* &\qquad+\mathbb{E}_{P_3}\left[|\imath_2(X|1,3)-\imath_2(X|2,3)-\mathbb{E}_{P_3}[\imath_2(X|1,3)]+\mathbb{E}_{P_3}[\imath_2(X|2,3)]|^3\right]\label{def:tildermt}. \end{align} Similarly to \eqref{def:mu1n}, for each $j\in[M]$ and any $i\in[M]$ such that $i\neq j$ and $i\neq i^*(j)$, we have \begin{align} \nn&\mathbb{P}_j\Big\{\mathrm{GJS}(X_i^N,Y^n,\alpha)<\mathrm{GJS}(X_{i^*(j)}^N,Y^n,\alpha)\Big\}\\* &\leq \mathbb{P}_j\bigg\{ \frac{1}{n}\Big(\sum_{k\in[N]}\big(\imath_1(X_{i^*(j),k}|i^*(j),j)-\imath_1(X_{i,k}|i,j)\big)+\sum_{k\in[n]}\big(\imath_2(Y_k|i^*(j),j)-\imath_2(Y_k|i,j)\big)\Big)>O\left(\frac{\log n}{n}\right)\bigg\} \nn\\* &\qquad+2\tau_n\\ \nn&\leq \mathrm{Q}\Bigg(\left(\mathrm{GJS}(P_i,P_j,\alpha)-\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+O\left(\frac{\log n}{n}\right)\right)\sqrt{\frac{n}{\tilde{\mathrm{V}}(P_{i^*(j)},P_i,P_j,\alpha)}}\Bigg)\\* &\qquad+\frac{6\tilde{\mathrm{T}}(P_{i^*(j)},P_i,P_j,\alpha)}{\sqrt{n(\tilde{\mathrm{V}}(P_{i^*(j)},P_i,P_j,\alpha))^3}} +2\tau_n\label{useberryagainb}\\ &\leq \exp\Bigg(\! -\! \frac{n\big(\mathrm{GJS}(P_i,P_j,\alpha)\! -\! \mathrm{GJS}(P_{i^*(j)},P_j,\alpha)\! +\!
O(\frac{\log n}{n})\big)^2}{2\tilde{\mathrm{V}}(P_{i^*(j)},P_i,P_j,\alpha)}\Bigg)+\frac{6\tilde{\mathrm{T}}(P_{i^*(j)},P_i,P_j,\alpha)}{\sqrt{n(\tilde{\mathrm{V}}(P_{i^*(j)},P_i,P_j,\alpha))^3}} + 2\tau_n\label{beforemu2n}\\ &=:\mu_{2,n}(i,j)=O\left(\frac{1}{\sqrt{n}}\right)\label{def:mu2n}, \end{align} where \eqref{def:mu2n} holds since $\mathrm{GJS}(P_i,P_j,\alpha)>\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)$ according to the assumption that the minimizer for $\theta_j$ (see \eqref{def:thetaj}) is unique, and thus the second term in \eqref{beforemu2n} dominates. For each $j\in[M]$, let \begin{align} \mu_n(j)&:=\mu_{1,n}(j)+\sum_{i\in[M]:i\neq j,i\neq i^*(j)}\mu_{2,n}(i,j)=O\left(\frac{1}{\sqrt{n}}\right)\label{def:mun}. \end{align} Combining \eqref{def:mu1n} and \eqref{def:mu2n}, we conclude that for each $j\in[M]$, \begin{align} \mathbb{P}_j\Big\{\tilde{h}(\mathbf{X}^N,Y^n)=\mathrm{GJS}(\hat{T}_{X_{i^*(j)}^N},\hat{T}_{Y^n},\alpha)\Big\}\geq 1-\mu_n(j). \end{align} Therefore, we have that for each $j\in[M]$, \begin{align} \zeta_j(\psi_n^{\rm{Unn}}|\mathbf{P}) &=\mathbb{P}_j\Big\{\tilde{h}(\mathbf{X}^N,Y^n)<\tilde{\lambda}\Big\}\\ &\leq \mathbb{P}_j\Big\{\mathrm{GJS}(\hat{T}_{X_{i^*(j)}^N},\hat{T}_{Y^n},\alpha)<\tilde{\lambda}\Big\}+\mu_n(j)\\ &\leq \Phi\Bigg(\left(\tilde{\lambda}-\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+O\left(\frac{\log n}{n}\right)\right)\sqrt{\frac{n}{\mathrm{V}(P_{i^*(j)},P_j,\alpha)}}\Bigg) \nn\\* &\qquad+\frac{6\mathrm{T}(P_{i^*(j)},P_j,\alpha)}{\sqrt{n(\mathrm{V}(P_{i^*(j)},P_j,\alpha))^3}}+\tau_n+\mu_n(j)\label{useberryagain}, \end{align} where \eqref{useberryagain} follows similarly to \eqref{useberry}, \eqref{useberryagaina} and \eqref{useberryagainb}.
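The Gaussian approximation in \eqref{useberryagain} also suggests how a threshold can be calibrated to a target rejection probability: setting $\Phi\big((\tilde{\lambda}-\mathrm{GJS})\sqrt{n/\mathrm{V}}\big)=\varepsilon$ and solving gives $\tilde{\lambda}=\mathrm{GJS}+\sqrt{\mathrm{V}/n}\,\Phi^{-1}(\varepsilon)$, up to the $O(\frac{\log n}{n})$ and Berry-Esseen correction terms. A minimal numerical sketch (all numbers illustrative, not taken from the paper):

```python
from statistics import NormalDist

def second_order_threshold(gjs_val, var, n, eps):
    """Calibrate a threshold so that Phi((lam - gjs)*sqrt(n/var)) = eps,
    i.e. lam = gjs + sqrt(var/n) * Phi^{-1}(eps)."""
    return gjs_val + (var / n) ** 0.5 * NormalDist().inv_cdf(eps)

# illustrative numbers only
lam = second_order_threshold(gjs_val=0.25, var=0.5, n=1000, eps=0.05)
# for eps < 1/2 the threshold backs off below the first-order term
assert lam < 0.25
```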
For each $j\in[M]$, let \begin{align} \rho_{j,n}&:=\frac{6\mathrm{T}(P_{i^*(j)},P_j,\alpha)}{\sqrt{n(\mathrm{V}(P_{i^*(j)},P_j,\alpha))^3}}+\tau_n+\mu_n(j)\label{def:rhojn}. \end{align} We choose $\tilde{\lambda}$ as \begin{align} \tilde{\lambda}&:= \min_{j\in[M]}\bigg\{\mathrm{GJS}(P_{i^*(j)},P_j,\alpha)+\sqrt{\frac{\mathrm{V}(P_{i^*(j)},P_j,\alpha)}{n}}\Phi^{-1}(\varepsilon_j-\rho_{j,n})\bigg\}+O\left(\frac{\log n}{n}\right)\label{choose:tlj}, \end{align} and let \begin{align} \lambda&:=\tilde{\lambda}-\frac{|\mathcal{X}|\log (n(1+\alpha)+1)}{n}\label{def:choose:lj}. \end{align} Invoking the results in \eqref{ptypejerror} and \eqref{useberryagain} and applying a Taylor expansion to $\Phi^{-1}(\cdot)$ (similarly to \eqref{taylorphi}), we conclude that Unnikrishnan's test $\psi_n^{\rm{Unn}}$ in \eqref{def:unntest} satisfies the following two conditions: \begin{itemize} \item for all tuples of distributions $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$ and for each $j\in[M]$, $\beta_j(\psi_n^{\rm{Unn}}|\tilde{\mathbf{P}})\leq \exp(-n\lambda)$; \item for any tuple of distributions $\mathbf{P}$ satisfying the condition in Theorem \ref{second:cm}, $\zeta_j(\psi_n^{\rm{Unn}}|\mathbf{P})\leq \varepsilon_j$. \end{itemize} The achievability proof of Theorem \ref{second:cm} is completed. \subsubsection{Converse Proof} Given any $\bm{\kappa}=(\kappa_1,\ldots,\kappa_M)\in[0,1]^M$, let \begin{align} \underline{\kappa}=\min_{t\in[M]}\kappa_t,\quad\mbox{and}\quad \kappa_{+}=\sum_{t\in[M]}\kappa_t. \end{align} Paralleling Lemma \ref{anytotype}, we relate the error and rejection probabilities of any arbitrary test to those of a type-based test (i.e., a test that is a function of only the marginal types $(\hat{T}_{X_1^N},\ldots,\hat{T}_{X_M^N},\hat{T}_{Y^n})$).
\begin{lemma} \label{anytotype:cm} Given any arbitrary test $\psi_n$ and any $\bm{\kappa}\in[0,1]^M$, for any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, we can construct a type-based test $\psi_n^\mathrm{T}$ such that for each $j\in[M]$, \begin{align} \beta_j(\psi_n|\mathbf{P})&\geq \underline{\kappa}\beta_j(\psi_n^\mathrm{T}|\mathbf{P}),\\ \zeta_j(\psi_n|\mathbf{P})&\geq (1-\kappa_{+})\zeta_j(\psi_n^\mathrm{T}|\mathbf{P}). \end{align} \end{lemma} The proof of Lemma \ref{anytotype:cm} is analogous to that of Lemma~\ref{anytotype} and is thus omitted. Paralleling Lemma \ref{typeconverse}, in the following lemma we derive a lower bound on the type-$j$ rejection probability for each $j\in[M]$ with respect to a particular tuple of distributions, valid for any type-based test whose type-$j$ error probability decays exponentially fast for each $j\in[M]$ and for all tuples of distributions. Recall the definition of $\tilde{h}(\cdot)$ in \eqref{def:tildeh}. For simplicity, let \begin{align} \eta_{n,M}&:=\frac{M|\mathcal{X}|\log (n\alpha+1)}{n\alpha}+\frac{|\mathcal{X}|\log(n+1)}{n}\label{def:etanM}. \end{align} \begin{lemma} \label{typeconverse:cm} For any $\lambda\in\mathbb{R}_+$ and any type-based test $\psi_n^\mathrm{T}$ such that for all tuples of distributions $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$, \begin{align} \beta_j(\psi_n^\mathrm{T}|\tilde{\mathbf{P}})\leq \exp(-n\lambda),\quad \forall \, j\in[M]\label{type:constraint:cm}, \end{align} we have that for any particular tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, \begin{align} \zeta_j(\psi_n^\mathrm{T}|\mathbf{P}) &\geq \mathbb{P}_j\Big\{\tilde{h}(\mathbf{X}^N,Y^n)+\eta_{n,M}<\lambda\Big\},\quad\forall\, j\in[M]. \end{align} \end{lemma} The proof of Lemma \ref{typeconverse:cm} is similar to that of Lemma \ref{typeconverse} and so it is omitted.
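Lemma \ref{anytotype:cm} reduces an arbitrary test to a type-based one, i.e., a function of the marginal types alone. The structural fact underlying this reduction is that types are invariant to reordering of the samples, so a type-based test returns the same decision on every permutation of the observed sequences; a minimal illustration with hypothetical sequences:

```python
import random
from collections import Counter

def marginal_type(seq):
    """Empirical type of a sequence: symbol relative frequencies."""
    n = len(seq)
    return {a: c / n for a, c in Counter(seq).items()}

seq = list("aabbbacab")
shuffled = seq[:]
random.shuffle(shuffled)
# a type-based test sees exactly the same statistic for any reordering
assert marginal_type(seq) == marginal_type(shuffled)
```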
Combining Lemmas \ref{anytotype:cm} and \ref{typeconverse:cm} and letting $\kappa_j=1/n$ for each $j\in[M]$, for any test $\psi_n$ satisfying that for all $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$, \begin{align} \beta_j(\psi_n|\tilde{\mathbf{P}})\leq \exp(-n\lambda),\quad\forall\, j\in[M],\label{converse:cm} \end{align} given any tuple of distributions $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, we have that for each $j\in[M]$, \begin{align} \zeta_j(\psi_n|\mathbf{P})&\geq \left(1-\frac{M}{n}\right) \mathbb{P}_j\Big\{\tilde{h}(\mathbf{X}^N,Y^n)+\eta_{n,M}+\frac{\log n}{n}<\lambda\Big\}. \end{align} The rest of the converse proof for Theorem \ref{second:cm} is completed similarly to the achievability part. \subsection{Proof of Proposition \ref{dual:unn}} \label{proof:dual:unn} The proof of Proposition \ref{dual:unn} is similar to that of Proposition \ref{weakc4bc}. Recall Gutman's test in \eqref{gut:mwithreject} and the notations in Section \ref{proof:weakc4bc}. Given any triple of distributions $(P_j,P_i,P_k)\in\mathcal{P}(\mathcal{X})^3$ and any $\alpha\in\mathbb{R}_+$, define \begin{align} K(P_j,P_i,P_k,\lambda) &:=\min_{\substack{(Q_1,Q_2,Q_3)\in\mathcal{P}(\mathcal{X})^3:\\\mathrm{GJS}(Q_2,Q_1,\alpha)\leq \lambda\\\mathrm{GJS}(Q_3,Q_1,\alpha)\leq \lambda}} \big\{D(Q_1\|P_j)+\alpha D(Q_2\|P_i)+\alpha D(Q_3\|P_k)\big\}. \end{align} Using the KKT conditions~\cite{boyd2004convex} and the definition of $D_{\gamma}(\cdot,\cdot,\cdot)$ in \eqref{def:gdivergence}, one can easily verify that \begin{align} K(P_j,P_i,P_k,0) &=\min_{Q\in\mathcal{P}(\mathcal{X})} \big\{D(Q\|P_j)+\alpha D(Q\|P_i)+\alpha D(Q\|P_k)\big\}=D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k)\label{imtouse}. \end{align} \subsubsection{Achievability Proof} In the following analysis, we choose $\lambda$ as in \eqref{lambddainwca}. 
For any $j\in[M]$ and any $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$, given any $\varepsilon\in(0,1)$, we can upper bound the type-$j$ error probability as follows: \begin{align} \limsup_{n\to\infty}\beta_j(\Psi_n^{\rm{Gut}}|\tilde{\mathbf{P}}) &\leq \limsup_{n\to\infty} \tilde{\mathbb{P}}_j \Big\{\mathrm{GJS}(\hat{T}_{X_j^N},\hat{T}_{Y^n},\alpha)>\lambda\Big\}\\* &=\limsup_{n\to\infty} \Pr\Big\{Z>\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon)\Big\}=\varepsilon\label{useweakcga}, \end{align} where \eqref{useweakcga} follows from the weak convergence analysis in Unnikrishnan and Huang~\cite{unnikrishnan2016weak}. Furthermore, for any $j\in[M]$ and for any $\mathbf{P}\in\mathcal{P}(\mathcal{X})^M$, the type-$j$ rejection probability satisfies \begin{align} \zeta_j(\Psi_n^{\rm{Gut}}|\mathbf{P}) &=\mathbb{P}_j\Big\{\exists~(i,k)\in\mathcal{M}\mathrm{~s.t.~}\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\leq\lambda,~\mathrm{GJS}(\hat{T}_{X_k^N},\hat{T}_{Y^n},\alpha)\leq\lambda\Big\}\\ &\leq \frac{M(M-1)}{2}\max_{(i,k)\in\mathcal{M}}\mathbb{P}_j\Big\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\leq\lambda,~\mathrm{GJS}(\hat{T}_{X_k^N},\hat{T}_{Y^n},\alpha)\leq\lambda\Big\}\\ &\leq\frac{M(M-1)}{2}(N+1)^{2|\mathcal{X}|}(n+1)^{|\mathcal{X}|}\exp\Big(-n\min_{(i,k)\in\mathcal{M}} K(P_j,P_i,P_k,\lambda)\Big),\label{methodoftypes} \end{align} where \eqref{methodoftypes} follows similarly to \eqref{methodoftypeseasy}.
Hence, using the choice of $\lambda$ in \eqref{lambddainwca}, the equality in \eqref{imtouse}, and the continuity of $\lambda\mapsto K(P_j,P_i,P_k,\lambda)$ at $\lambda=0$, we have that for each $j\in[M]$, \begin{align} \liminf_{n\to\infty}-\frac{1}{n}\log \zeta_j(\Psi_n^{\rm{Gut}}|\mathbf{P}) &\geq \min_{(i,k)\in\mathcal{M}}K(P_j,P_i,P_k,0)=\min_{(i,k)\in\mathcal{M}} D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k).\label{eqn:ach_M} \end{align} \subsubsection{Converse Proof for Gutman's Test} For any $j\in[2:M]$, given any $\tilde{\mathbf{P}}\in\mathcal{P}(\mathcal{X})^M$, using the union bound, we can lower bound the $j$-th error probability as follows: \begin{align} \beta_j(\Psi_n^{\rm{Gut}}|\tilde{\mathbf{P}}) &\geq \tilde{\mathbb{P}}_j\Big\{\Psi_n^{\rm{Gut}}(\mathbf{X}^M,Y^n)=\mathrm{H}_1\Big\}\\ &\geq \tilde{\mathbb{P}}_j\Big\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)> \lambda,~\forall~i\in[2:M]\Big\}\\ &\geq \tilde{\mathbb{P}}_j\Big\{\mathrm{GJS}(\hat{T}_{X_j^N},\hat{T}_{Y^n},\alpha)>\lambda\Big\}-\sum_{i\in[2:M] \setminus\{ j\}}\tilde{\mathbb{P}}_j\Big\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\leq\lambda\Big\}. \label{eqn:lwb} \end{align} Assume that $\lambda$ satisfies \begin{equation} \lambda=\frac{1}{2n}\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon+\delta) \end{equation} for some $\delta>0$. Compare this choice to~\eqref{lambddainwca}. Since $\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)$ converges in probability to $\mathrm{GJS}(\tilde{P}_i, \tilde{P}_j, \alpha)>0$ under $\tilde{\mathbb{P}}_j$ and $\lambda\downarrow 0$, all the terms in the sum in \eqref{eqn:lwb} vanish. In addition, by weak convergence (cf.~\eqref{weakconjh}), the first term in \eqref{eqn:lwb} converges to $\varepsilon+\delta>\varepsilon$, contradicting the requirement that $\max_{j\in[M]}\beta_j(\Psi_n^{\rm{Gut}}|\tilde{\mathbf{P}})\leq \varepsilon$ for all $\tilde{\mathbf{P}}$; see~\eqref{def:caltau2}. 
Therefore, to fulfil this requirement, the threshold $\lambda$ in Gutman's test in~\eqref{gut:mwithreject} must satisfy \begin{align} \lambda\ge \frac{1}{2n}\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\varepsilon),\label{eqn:choice_lam} \end{align} because $\mathrm{G}_{|\mathcal{X}|-1}^{-1}(\cdot)$ is monotonically non-increasing (cf.~Section~\ref{sec:notation}). Furthermore, for each $j\in[M]$, we can lower bound the type-$j$ rejection probability as follows: \begin{align} \zeta_j(\Psi_n^{\rm{Gut}}|\mathbf{P}) &\geq \max_{(i,k)\in\mathcal{M}}\mathbb{P}_j\Big\{\mathrm{GJS}(\hat{T}_{X_i^N},\hat{T}_{Y^n},\alpha)\leq\lambda,~\mathrm{GJS}(\hat{T}_{X_k^N},\hat{T}_{Y^n},\alpha)\leq\lambda\Big\}\\* &\geq \max_{(i,k)\in\mathcal{M}}\exp \Big(-nD_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k)-O(\log n)\Big)\label{similartech}, \end{align} where \eqref{similartech} follows from~\eqref{eqn:choice_lam} and steps similar to those that led to~\eqref{decreasinl} and Lemma \ref{fn<=f+}. Hence, for each $j\in[M]$, \begin{align} \limsup_{n\to\infty}-\frac{1}{n}\log\zeta_j(\Psi_n^{\rm{Gut}}|\mathbf{P}) \leq \min_{(i,k)\in\mathcal{M}}D_{\frac{2\alpha}{1+2\alpha}}(P_j,P_i,P_k). \end{align} This and \eqref{eqn:ach_M} complete the proof of Proposition~\ref{dual:unn}.
\section{Introduction} The emergence and persistence of cooperative behavior in populations of selfish individuals is a key problem in evolutionary biology~\cite{alexrod(1981)science, rakoff-nahoum(2016)nature, strassmann(2008)nature, taylor(2007)nature, wedekind(2000)science, hauser(2014)nature, imhof(2010)prsb, vukov(2005)pre, mcnamara(2004)nature, szolnoki(2015)prsb, li(2017)science}. The Prisoner's Dilemma game provides a convenient paradigm for addressing this problem. In the game, a cooperator brings a benefit $b$ to her coplayer at a cost $c$ to herself. A defector produces no benefit and incurs no cost. Since $b>c>0$, the best strategy for an individual is to defect irrespective of her coplayer's strategy. However, the total payoff would be maximized if both players cooperated. This dissonance between the optimal strategy for the individual and that for the group creates the social dilemma. To resolve the dilemma, a variety of mechanisms~\cite{nowak(2006)science, masuda(2007)prsb} promoting the evolution of cooperation have been proposed. Tag-based interactions have been studied intensively~\cite{riolo(2001)nature, masuda(2015)pr, roberts(2002)nature, riolo(2002)nature, tanimoto(2007)jtb, traulsen(2003)pre, traulsen(2004)pre, traulsen(2007)plosone, wute(2013)jtb, fu(2012)sr}. Their essence is that individuals recognize each other by tags, and cooperators help only those sufficiently similar to themselves. Using this simple rule, the authors of Ref.~\cite{riolo(2001)nature} showed that cooperation can arise without reciprocity. When ``never to donate'' is incorporated as a possible strategy, similarity can still breed cooperation, but only when mutations towards ``never to donate'' are not very strong~\cite{roberts(2002)nature, riolo(2002)nature}. Traulsen and his coworkers captured the essence of this model~\cite{riolo(2001)nature} by considering two tags and two levels of tolerance~\cite{traulsen(2003)pre}.
They also extended tag-based interactions to structured populations~\cite{traulsen(2004)pre} and to well-mixed populations of finite size~\cite{traulsen(2007)plosone}. Tanimoto found that a two-dimensional tag space promotes cooperation more effectively than a one-variable tag system~\cite{tanimoto(2007)jtb}. In spatial populations, Axelrod \emph{et al.} found that cooperation can coevolve with tags. Another far-reaching work~\cite{jansen(2006)nature} probed the effects of beard chromodynamics on the evolution of cooperation, revealing that a looser coupling between tag and strategy attenuates the oscillation of the population dynamics, reinforcing beard-color diversity and thereby inducing higher levels of cooperation. Our earlier work~\cite{wute(2013)jtb} showed that adaptive tag switching can reinforce the coevolution of tag diversity and contingent cooperation even when tag switching is costly. When strategy and tag mutate independently, a large body of work has addressed the evolution of cooperation under selection-mutation dynamics~\cite{antal(2009)pnas, tarnita(2009)pnas, tarnita(2011)pnas, zhang(2015)sr, ohtsuki(2006)prsb}. In Ref.~\cite{antal(2009)pnas}, the authors considered a model with two strategies, contingent cooperation and defection. Each individual possesses a phenotype, and contingent cooperators cooperate only with individuals of the same phenotype. By virtue of coalescent theory and perturbation theory, this work derived a very simple criterion, $(R-P)(1+\sqrt{3})>T-S$, for contingent cooperation to be selected. Using similar mathematical tools, Tarnita \emph{et al}. considered multiple tags~\cite{tarnita(2009)pnas}: each individual is affiliated with $n$ groups out of $m$ inhabitable groups in total. In this context, they gave the conditions under which cooperation can evolve.
They also applied this framework to study the competition of multiple strategies and gave a concise condition for a specific strategy to be selected~\cite{tarnita(2011)pnas}. Generally speaking, these models share two features. First, all individuals have equal access to each phenotype (or tag) in a pre-assigned set of available phenotypes. Second, the interaction rate is binary: when phenotypes are discrete, two individuals interact if and only if they share the same phenotype; when tags are continuous, individuals help only those whose tags differ from their own by no more than their tolerances. Recent experimental studies have demonstrated that, in phenotypic traits such as cooperativeness, antibiotic resistance, or competence, a high level of diversity can arise even in an isogenic population~\cite{acar(2008)ng, kussell(2005)genetics, wolf(2005)jtb, balaban(2004)science, ackermann(2008)nature, diard(2013)nature}. Individuals are capable of switching between phenotypes. By phenotype switching, the population can optimize fitness~\cite{ackermann(2008)nature, diard(2013)nature}, survive unpredictable environmental fluctuations~\cite{acar(2008)ng, kussell(2005)genetics, wolf(2005)jtb}, or preserve certain properties~\cite{balaban(2004)science}. The implications of these observations are threefold: individuals reserve redundant phenotypes; individuals are endowed with the ability to switch phenotypes; and the fitness of phenotypes varies with the environment. However, these studies have mainly concentrated on species-environment systems and delved into the importance of phenotype switching and its diversity for the viability of organisms, while direct interactions between individuals are left largely unconsidered~\cite{acar(2008)ng, kussell(2005)genetics, wolf(2005)jtb, balaban(2004)science, ackermann(2008)nature, diard(2013)nature}.
Furthermore, although previous studies on phenotype-similarity-based cooperation~\cite{riolo(2001)nature, masuda(2015)pr, roberts(2002)nature, riolo(2002)nature, traulsen(2003)pre, traulsen(2004)pre, traulsen(2007)plosone, wute(2013)jtb, jansen(2006)nature, antal(2009)pnas, tarnita(2009)pnas, tarnita(2011)pnas, mcavity(2013)jtb} have considered direct interactions between individuals, these models have not dealt with differences in the ability to express phenotypes, which leads to the binary interaction rate. A natural question thus arises: what are the evolutionary dynamics when individuals differ in their ability to express phenotypes? We answer this question using a model that integrates the diversity of phenotype expression with individual-individual interactions. Instead of uniformly expressing one of the potentially expressible phenotypes, our model allows each individual to express, at a cost, a subset of the potentially expressible phenotypes. The subset may vary across individuals and evolves over time. This diversity in phenotype expression necessarily induces diversity in similarity. In contrast to the zero-or-one interaction rate, two players who meet play the Prisoner's Dilemma game with a likelihood that depends positively on how many of their expressed phenotypes they share. The greater the similarity between two individuals, the more likely they are to interact. In fact, several (though not many) studies have been dedicated to investigating the effects of stochastic interactions~\cite{chen(2008)pre, traulsen(2007)jtb} on the evolution of cooperation. Traulsen \emph{et al}. considered the stochasticity of interactions between neighboring individuals~\cite{traulsen(2007)jtb}, and Chen \emph{et al}. introduced stochastic interactions into the spatial Prisoner's Dilemma game~\cite{chen(2008)pre}. In these studies~\cite{chen(2008)pre, traulsen(2007)jtb}, the interaction stochasticity is pre-assigned and thus does not evolve. \section{Model} Consider a well-mixed population of finite size $N$.
These $N$ individuals compete to survive. Reproduction is asexual and subject to mutation. Each individual $i$ is characterized by a triplet $(s_i, K_i, G_i)$. The first entry $s_i$ denotes $i$'s strategy. In the Prisoner's Dilemma game, $s_i$ can be either cooperation or defection. For ease of calculation, we let $s_i=1$ if $i$ is a cooperator and $s_i=0$ if $i$ is a defector. $G_i$ is the number of potentially expressible phenotypes individual $i$ carries. Since we are mainly concerned with the diversity of phenotype expression, we assume $G_i$ is a constant, say $G_i\equiv G$. $K_i$ represents the number of phenotypes individual $i$ has actually expressed. Each individual randomly expresses some of the potentially expressible phenotypes, so $K_i\leq G$. As the expression is random, even if two individuals express the same number of phenotypes, the specific phenotypes expressed can differ. Only expressed phenotypes are observable. Instead of assuming a zero-or-one interaction rate, we diversify the interaction rate by associating it with the phenotypes shared in common. When two individuals $i$ and $j$ meet, the probability that they play the game depends on how many identical phenotypes they possess among all the phenotypes expressed. Obviously, when both individuals $i$ and $j$ have expressed all phenotypes, they interact with probability $1$; when they share no expressed phenotype, no interaction happens. The larger the number of phenotypes they share, the more likely they are to interact. We therefore introduce a function $r(K_i, K_j)$ to denote the interaction rate. For simplicity, we first consider a linear interaction rate: the probability that two individuals interact is proportional to the number of phenotypes they share (see figure 1). At the same time, individuals need to bear the cost of expressing phenotypes.
The cost is assumed to be proportional to the number of phenotypes expressed, that is, $\kappa_i(K_i)=\theta \cdot K_i$; this is the simplest possible cost function for phenotype expression. The payoff therefore consists of two parts: the payoff resulting from game interactions and the cost of expressing phenotypes. The net payoff $\pi_i$ determines the reproductive success of individual $i$. The fitness is an exponential function of the payoff, $f_i=e^{\beta \pi_i}$, where $\beta$ is the intensity of selection, specifying the contribution of the game to fitness. The evolutionary updating is represented by a frequency-dependent Moran process. At each time step, each individual interacts with all other $N-1$ individuals depending on their expressed phenotypes and accrues a payoff, which is then mapped into fitness. An individual is chosen to reproduce an offspring with probability proportional to its fitness. Following birth, a randomly selected individual in the population dies and is replaced, so the population size remains constant throughout the evolution. Reproduction is subject to mutation: with probability $\mu$, the offspring adopts one of the two behavioral strategies with equal probability and randomly expresses a number, say $K^{'}_i$, of phenotypes at a cost $\theta K^{'}_i$. \section{Results} We start by presenting the pairwise invasion dynamics, which will help in understanding the full population dynamics. When mutant defectors attempt to invade resident defectors, game interactions, whether or not they actually happen, contribute nothing to the payoff, because a defector-defector interaction generates no benefit and incurs no cost for the defectors involved. The payoff thus consists only of the cost of expressing phenotypes. The more phenotypes defectors express, the higher the cost they pay. As a consequence, defectors who express too many phenotypes place themselves in the least competitive position.
This explains why the plot peaks at the bottom right corner and unfolds downward towards the top left corner (see figure 2d). In fact, this property can be rigorously corroborated. Suppose that the invading defectors express $K_Y$ phenotypes and the invaded defectors express $K_X$ phenotypes. Independent of the population composition and the phenotypes actually expressed, the fitness is $e^{-\beta \theta K_Y}$ for an invader and $e^{-\beta \theta K_X}$ for a resident. Denote by $\rho_{X\rightarrow Y}^{k}$ the fixation probability that a single mutant $Y$ takes over the resident population $X$ when $X$ and $Y$ share $k$ common phenotypes among their expressed phenotypes. We readily obtain the fixation probability as $\rho_{X\rightarrow Y}^{k}=\frac{1-e^{\beta \theta (K_Y-K_X)}}{1-e^{\beta \theta N(K_Y-K_X)}}$ for all admissible $k$. Using Equation $(1)$ in the Methods section, we obtain the transition rate of the population from state $(X, K_X)$ to state $(Y, K_Y)$ as $Q(X,Y; K_X, K_Y)=\frac{1-e^{\beta \theta (K_Y-K_X)}}{1-e^{\beta \theta N(K_Y-K_X)}}\cdot \sum_{k} 1$. It can be easily verified that the transition rate $Q(X,Y; K_X, K_Y)$ decreases with $K_Y$ and increases with $K_X$. The pairwise dynamics exhibit a similar cascading property when mutant cooperators attempt to invade resident defectors (see figure 2b). Expressing more phenotypes affects cooperators' evolutionary fate on three fronts. First, it reinforces reciprocity among the cooperators themselves. Second, it increases the cost of expressing these phenotypes. Third, it makes it more likely for defectors to chase after these cooperators by expressing more phenotypes, thus raising the rate of interaction between cooperators and defectors. Indeed, when resident defectors express very few phenotypes, they have very little chance to exploit cooperators.
This leaves more room for mutant cooperators to reach the invasion barrier. As $K_X$ increases, the third effect becomes more conspicuous, even though invaded defectors also incur a slightly higher cost of expressing phenotypes. These two negative effects first offset and then overwhelm the reinforced reciprocity among mutant cooperators that results from a larger $K_Y$. In other words, when resident defectors have already expressed many phenotypes, mutant cooperators are unlikely to increase their chance of invading by expressing more phenotypes. This is why the black line, corresponding to a fixation probability of $1/N$, rises approximately linearly for small $K_X$ and gradually flattens as $K_X$ further increases. Interesting scenarios arise when mutant defectors attempt to invade resident cooperators (see figure 2c). Only when both mutant defectors and resident cooperators express a very large number of phenotypes can the defectors take over the whole population with probability higher than $1/N$. This observation has an important implication: whenever resident cooperators express that many phenotypes, defectors can expand their own expressed phenotypes to exploit cooperators more severely. When this happens, the payoffs defectors gain from defector-cooperator interactions not only offset the cost of expressing more phenotypes but also put defectors in an advantageous position. The effect vanishes when resident cooperators express very few phenotypes, suggesting that cooperators can dodge defectors' exploitation by unilaterally constraining phenotype expression. The invasion dynamics show a saddle shape along the line from $(1,1)$ to $(20, 20)$ when mutant cooperators compete with resident cooperators (see figure 2a).
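These comparisons against the neutral benchmark $1/N$ are easiest to check in the defector-versus-defector case, where the fixation probability has the closed form given above. The following is a minimal Python sketch; the function name and the parameter values ($\beta$, $\theta$, $N$) are illustrative, not those used in the figures.

```python
import math

def rho(K_X, K_Y, beta=0.1, theta=0.3, N=100):
    """Fixation probability of a single mutant defector expressing K_Y
    phenotypes in a resident defector population expressing K_X
    (closed form from the text; independent of which phenotypes are shared)."""
    d = beta * theta * (K_Y - K_X)
    if d == 0:
        return 1.0 / N  # neutral limit of the expression
    return (1 - math.exp(d)) / (1 - math.exp(N * d))

# Expressing fewer phenotypes than the residents beats the neutral
# benchmark 1/N; expressing more falls below it.
assert rho(10, 5) > 1 / 100 > rho(5, 10)
# The rate decreases with K_Y and increases with K_X.
assert rho(10, 4) > rho(10, 6) and rho(8, 6) > rho(4, 6)
```

The assertions confirm the monotonicity claimed for figure 2d: invaders are favored exactly when they carry a lighter expression cost than the residents.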
No matter how many phenotypes $K_X$ resident cooperators express, mutant cooperators expressing a number $K_Y$ closest to $K_X$ are the most likely to invade, though the fixation probability is still less than $\frac{1}{N}$. The reasons for this property vary with $K_X$. For small $K_X$, when $K_Y$ is larger than $K_X$, the cost of expressing more phenotypes cannot be compensated by the combined effects of exploiting resident cooperators more heavily and the weak reciprocity among mutant cooperators. When $K_Y$ is less than $K_X$, mutant cooperators can hardly sustain one another and are thus eclipsed by resident cooperators, who always interact with each other, albeit at a low rate. For large $K_X$, a small $K_Y$ prevents mutant cooperators from exploiting resident cooperators; moreover, reciprocity between resident cooperators is then so strong that mutant cooperators are least likely to invade. Even for intermediate $K_X$, mutant cooperators cannot raise the fixation probability by expressing either more or fewer phenotypes than $K_X$. With the pairwise invasion dynamics scrutinized, we can now address the full population dynamics. Consider the competition of $2G$ strains in a finite, well-mixed population. In the absence of mutation, the population dynamics inevitably end up in a monomorphic state owing to the stochastic nature of the evolutionary dynamics and the update rule. In the limit of small mutation, at most two strains compete in the population at any time: before the next mutation occurs, the invaders either successfully take over the whole population or are wiped out. For this reason, the population dynamics can be approximated by an embedded Markov chain of size $2G$, with each homogeneous state corresponding to one possible state of the population, associated with a strategy and a given number of phenotypes expressed.
The transition rates between states are given by Equation $(1)$ in the Methods Section. The stationary distribution of this Markov chain characterizes the fraction of time the population spends in each of these $2G$ homogeneous states and can be computed analytically. Figure 3 presents the equilibrium level of these $2G$ strains as the parameter $\theta$ varies. Broadly, the population dynamics fall into three classes. For very small $\theta$, the fraction of defective strains monotonically increases with the number of phenotypes expressed (see figures 3a and b). For cooperative strains, the distribution of fractions exhibits a $U$-shaped curve. Note that the total level of defective strains is significantly higher than that of cooperative strains. As $\theta$ rises, for both cooperative and defective strains, those expressing a very large number of phenotypes are depressed, while the fractions of those expressing very few phenotypes tilt upward (see figures 3c and d). When $\theta$ is as high as $0.10$, the distributions for both defective and cooperative strains exhibit $U$-shapes, and the total level of cooperative strains becomes comparable to that of defective strains. Raising $\theta$ further induces monotonically decreasing fraction distributions for both cooperative and defective strains (see figures 3d, e and f): the more phenotypes expressed, the lower the fraction of the corresponding strain. Cooperative strains then enjoy a remarkable advantage over defective strains in the evolutionary race. Explanations for these properties follow. For small $\theta$, the more phenotypes defectors express, the more likely they are to exploit cooperators, and the higher their fractions in the long run, a feature that holds as long as $\theta$ is below a certain value. For cooperative strains, a different picture emerges.
To escape defectors' exploitation, cooperators have two choices: either reduce the number of phenotypes expressed or reinforce reciprocity by expressing more phenotypes. Either choice turns out to be effective in defending against defectors' invasion; of the two, the latter is more effective, but only to a limited degree. The worst situation arises when cooperators express a modest number of phenotypes: they can neither avoid interactions with defectors nor reinforce reciprocity among themselves strongly enough to resist invasion. As a result, the fractions of these cooperative strains are lowest. A representative evolutionary process illustrates the core of the population dynamics (see figure 4a). As $\theta$ increases to a moderate level such as $0.1$, the advantage of defective strains expressing many phenotypes is depressed, because this advantage can be enjoyed only when the competing cooperative strains also express a very large number of phenotypes. In this situation, cooperative strains can ward off invasion by defective strains by expressing fewer phenotypes. Once this happens, these cooperative strains always interact with each other at a small constant rate, whereas defective strains must decipher from scratch which phenotypes the cooperative strains have expressed. That takes time: the defectors are typically wiped out before they finish deciphering the cooperative strains' phenotypes. As $\theta$ increases further, it becomes still harder for defective strains to chase after and exploit cooperative strains. As mutation is uniform and unbiased, the evolutionary force uniquely determines the eventual fate of strains. Resident defective strains expressing a large number of phenotypes can be easily invaded by defective or cooperative strains that express not so many phenotypes.
Resident defective strains expressing very few phenotypes may from time to time attempt to exploit cooperative strains by expressing more phenotypes, but as soon as they do so they are pulled back. As a result, defective strains are most of the time entrenched at low levels of phenotype expression, which no doubt constrains their ability to decipher and exploit cooperative strains. Defective strains with low-level phenotype expression are in turn outperformed by cooperative strains with a similar level of phenotype expression, as the latter almost certainly enjoy weak mutual reciprocity. The population dynamics are graphically illustrated by a typical process (see figure 4b). So far, our results have shown that cooperation coevolves with diversity of phenotype expression under a wide range of conditions and that expressing fewer phenotypes best promotes cooperation. It pays for cooperators to express very few phenotypes, thereby improving their opportunity to establish weak reciprocity, as these few randomly expressed phenotypes serve as secret handshakes and are difficult for defectors to discover and chase after. The trade-off between these two forces leads to the coexistence of diversity of phenotype expression with the highest fraction of such cooperative strains. Our results also show that once defective strains dominate, the population dynamics are extremely unstable. In contrast, cooperative strains can prevail in the population for sustained periods, whether long or short, but only if these cooperative strains have randomly expressed very few phenotypes. A strong interdependence between the evolution of cooperation and the diversity of phenotype expression is thus formed. Further investigation should explore other forms of the cost of expressing phenotypes and of the relationship between interaction rate and degree of similarity.
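The long-run fractions reported in figure 3 are stationary distributions, i.e., left eigenvectors (for eigenvalue $1$) of the embedded chain's transition matrix. A minimal Python sketch, using power iteration on a toy two-state matrix standing in for the full $2G \times 2G$ chain; the matrix entries are illustrative, not the model's analytic rates.

```python
def stationary_distribution(A, iters=20_000):
    """Stationary distribution of a row-stochastic matrix A, i.e. the
    normalized left eigenvector for eigenvalue 1, via power iteration."""
    n = len(A)
    pi = [1.0 / n] * n
    for _ in range(iters):
        # One multiplication of the row vector pi by the matrix A.
        pi = [sum(pi[i] * A[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 2-state chain (one cooperative, one defective state).
A = [[0.9, 0.1],
     [0.3, 0.7]]
pi = stationary_distribution(A)
assert abs(pi[0] - 0.75) < 1e-9  # fraction of time spent in state 0
```

In the full model one sums the entries of this eigenvector over the cooperative states to obtain the overall level of cooperation.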
\section{Discussion} Mounting effort has been invested in exploring solutions to the evolution of cooperation~\cite{rakoff-nahoum(2016)nature, strassmann(2008)nature, wang(2010)tac, traulsen(2012)prsb, roca(2006)prl, wu(2012)pre, sasaki(2013)prsb, wu(2015)njp, zhang(2015)sr, ohtsuki(2006)prsb, perc(2006)njp, rand(2009)science, bednarik(2014)prsb}. Our study provides a possible path to establishing cooperation, in which the evolved diversity of phenotype expression plays a crucial role. Our study falls within the domain of the chromodynamics of cooperation~\cite{jansen(2006)nature, traulsen(2007)plosone} but differs decisively from preceding studies on this topic~\cite{jansen(2006)nature, traulsen(2007)plosone}. In those studies, cooperators try to ward off defectors' exploitation through secret tags; when they run fast enough, cooperators dominate the population. A variety of ways can achieve this purpose~\cite{traulsen(2003)pre, traulsen(2004)pre, traulsen(2007)plosone, jansen(2006)nature, wute(2013)jtb}, such as prescribing high levels of phenotypic diversity~\cite{traulsen(2007)plosone}, weakening the coupling of tag and strategy~\cite{jansen(2006)nature}, or introducing interaction stochasticity~\cite{chen(2008)pre}. The mechanism we propose, interaction diversity induced by evolvable phenotype expression, engenders effects similar to theirs~\cite{jansen(2006)nature}. In this sense, our mechanism parallels those of Refs.~\cite{jansen(2006)nature, traulsen(2007)plosone} and broadens the set of `other mechanisms that can accomplish the same stabilizing effect' that the authors of Ref.~\cite{jansen(2006)nature} anticipated. Moreover, our study is original in diversifying the expression of phenotypes. Previous studies dealing with interaction rates fall mainly into two classes. The first class~\cite{jansen(2006)nature, traulsen(2003)pre, traulsen(2004)pre, traulsen(2007)plosone} considers a zero-or-one interaction rate.
When two individuals share the same (or a sufficiently similar) phenotype, they play the game; otherwise no interaction happens. In the second class~\cite{wute(2013)jtb, szolnoki(2016)pre, fu(2009)preb, pacheco(2009)prl, wu(2009)epl}, interactions are contingent on the outcomes of previous interactions, which in fact forms a feedback loop. An exception arises when interaction stochasticity is considered~\cite{chen(2008)pre}, but the stochasticity itself does not evolve. Our model introduces diversity of interaction rate by associating the rate with the degree of similarity, measured by the number of identical expressed phenotypes between individuals. This diversity does not rely on information from preceding interactions, and natural selection acts entirely at the individual level. Many mechanisms have been proposed for maintaining phenotypic diversity under natural selection, such as mutant games~\cite{huang(2012)nc}, sexual selection~\cite{fisher(1930)book}, coevolving host-parasite populations~\cite{milinski(2006)book}, occasional recombination of tags and strategies~\cite{jansen(2006)nature}, and phenotype noise~\cite{acar(2008)ng, kussell(2005)genetics, balaban(2004)science, wu(2017)ploscb, solopova(2014)pnas}. Diversity of phenotype expression adds to these mechanisms. Instead of assigning the number of phenotypes to express, our model allows individuals to express an arbitrary number of the potentially expressible phenotypes; the phenotypes expressed are subject to evolutionary forces, so phenotypic diversity is an evolvable trait. The frequency-dependent evolutionary dynamics drive the population to equilibrate at low levels of phenotype expression, providing an adaptive explanation for phenotypic diversity. Two key themes in evolutionary biology, the evolution of cooperation and the diversity of phenotype expression, are thus naturally combined in our model.
Populations seek to optimize fitness in evolutionary processes, while the survival of cooperators is constrained by competition with defectors. To overcome this survival threat, cooperators may evolve a system of diverse phenotype expression through which they can distinguish potential partners. This conjecture is pertinent to many observations in biological systems~\cite{sinervo(2006)pnas, solopova(2014)pnas,corl(2010)pnas}. \emph{Salmonella Typhimurium} can either express virulence or not. Side-blotched male lizards exhibit diverse throat colors~\cite{sinervo(2006)pnas, solopova(2014)pnas}. In these examples, individuals do not differ in the potentially expressible phenotypes they carry; they can regulate the expression of phenotypes according to the competitors they face, or even to fluctuating environments. Our model is abstracted from these biological examples, has implications for field experimental studies, and awaits their confirmation. \section{Methods} \subsection{Fixation probability} We briefly illustrate the general procedure for calculating the fixation probability. In the limit of small mutation rate, the population admits at most two different types of individuals at a time, say $A$ and $B$. Denote by $K_A$ and $K_B$ the numbers of phenotypes expressed by individuals $A$ and $B$, by $s_A$ and $s_B$ their behavioral strategies, and by $i$ the number of $A$s in the population. The Moran process describing the evolutionary race has two absorbing states, $i=0$ and $i=N$; once the population arrives at either state, it stays there forever. Denote by $\phi_i$ the probability that the population is eventually absorbed into the state $i=N$ when starting from state $i$. With this preparation, we can write down the expected payoffs $P_A$ and $P_B$ for individuals $A$ and $B$:
\begin{eqnarray} P_A=r(K_A, K_A)(b-c) s_A (i-1)+r(K_A, K_B)(bs_B-cs_A) (N-i)-\theta K_A\nonumber \end{eqnarray} \begin{eqnarray} P_B=r(K_A, K_B)(bs_A-cs_B) i+r(K_B, K_B)(b-c) s_B(N-i-1)-\theta K_B \nonumber \end{eqnarray} The fitness of $A$ and $B$ is $f_A=e^{\beta P_A}$ and $f_B=e^{\beta P_B}$, respectively. We consider the linear interaction rate $r(K_A, K_B)=\frac{k}{|G|}$, with $k$ the number of identical phenotypes individuals $A$ and $B$ express and $|G|$ the number of potentially expressible phenotypes carried. In an updating event, the number of $A$s can increase by one, decrease by one, or remain unchanged, with probabilities $T_{i, i+1}=\frac{if_A}{if_A+(N-i)f_B}\cdot\frac{N-i}{N}$, $T_{i, i-1}=\frac{(N-i)f_B}{if_A+(N-i)f_B}\cdot\frac{i}{N}$, and $T_{i, i}=1-T_{i, i+1}-T_{i, i-1}$, respectively. Then we have \begin{eqnarray} \phi_i=T_{i, i+1} \phi_{i+1}+T_{i, i-1} \phi_{i-1}+T_{i, i} \phi_{i} \nonumber \end{eqnarray} Using the boundary conditions $ \phi_{0}=0$ and $ \phi_{N}=1$, we obtain the fixation probability \begin{eqnarray} \phi_{1}=\big(1+\sum_{l=1}^{N-1} \prod_{k=1}^l\frac{T_{k,k-1}}{T_{k, k+1}}\big)^{-1}\nonumber \end{eqnarray} \subsection{Transition rate for pairwise competing strains} We here compute the rate at which the population transits from state $X$ to $Y$, that is, the probability that strain $Y$ as a mutant invades and takes over a population of strain $X$. Suppose strain $X$ expresses $K_X$ phenotypes and strain $Y$ expresses $K_Y$ phenotypes. Bear in mind that since phenotype expression is random, for a given number of phenotypes to express, the actually expressed phenotypes can vary. Denote by $\rho_{X\rightarrow Y}^{k}$ the fixation probability that a single mutant $Y$ takes over the resident population $X$ when $X$ and $Y$ share $k$ common phenotypes among their expressed phenotypes.
Then the expected transition rate from state $X$ to $Y$, $Q(X, Y; K_X, K_Y)$, is given by \begin{eqnarray} Q(X, Y; K_X, K_Y)&=&\sum_{k=\max\{0,K_X+K_Y-G\}}^{\min\{K_X, K_Y\}}\frac{\binom{K_X}{k}\binom{G-K_X}{K_Y-k}}{\binom{G}{K_Y}}\rho_{X\rightarrow Y}^{k} \end{eqnarray} Here $\min\{x, y\}$ and $\max\{x, y\}$ denote the minimum and maximum of $x$ and $y$, and $\binom{K_X}{k}$ is the number of ways of choosing $k$ items from $K_X$. Some explanation of the lower and upper limits of the sum is necessary. Whenever $K_X+K_Y$ is no more than $G$, the number of expressed phenotypes strains $X$ and $Y$ share can be $0$; whenever $K_X+K_Y$ is larger than $G$, the two strains must share at least $K_X+K_Y-G$ expressed phenotypes. They can share at most $\min\{K_X, K_Y\}$ expressed phenotypes, which occurs when the phenotypes expressed by one strain are contained in those expressed by the other. We can use Equation $(1)$ to analytically derive the transition rates between different population states in the limit of rare mutations and for any intensity of selection $\beta$. \subsection{Stationary distribution} As the cost of expressing phenotypes increases linearly with the number expressed, individuals expressing too many phenotypes are easily invaded by those expressing fewer; in the long run, their fractions are negligible. It is therefore reasonable to bound the number of potentially expressible phenotypes by a number $G$. In the limit of rare mutations, the population spends most of the time in one of $2G$ homogeneous states; in other words, at most two distinct strains are simultaneously present in the population before the next mutation occurs.
Therefore, the population dynamics of the $2G$ strains can be well approximated by an embedded Markov chain over the $G$ fully defective states and the $G$ fully cooperative states. For convenience, we label cooperative strains with the even numbers $2K_C$ and defective strains with the odd numbers $2K_D-1$, for $1\leq K_C \leq G$ and $1 \leq K_D \leq G$. For strain $X$ expressing $K_X$ phenotypes and strain $Y$ expressing $K_Y$ phenotypes, the expected transition rate from state $X$ to state $Y$ is $Q(X, Y; K_X, K_Y)$, as given by Equation $(1)$. We can then easily construct the transition matrix $A$ of dimension $2G \times 2G$: the $ij$th entry of $A$ is $Q(i, j; K_i, K_j)$ for $i\neq j$, and the $ii$th entry is one minus the sum of all other entries in the $i$th row. It is worth noting that we have analytically derived the transition rates between any two competing strains and thus the full transition matrix. The normalized left eigenvector associated with the eigenvalue $1$ of the transition matrix $A$ gives the stationary distribution over these $2G$ states. The overall level of cooperation is obtained by summing all elements with even indices in the normalized eigenvector~\cite{fudenberg(2006)jet, young(1993)e}. \section*{Acknowledgements} Financial support from NSFC (61751301 and 61533001) is gratefully acknowledged. Te Wu is also supported by the Fundamental Research Funds for the Central Universities, Xidian University (JB180413). \section*{Data, code and materials} Code (C++ and Matlab) and data that can be used to replicate the results in the paper are available upon request.
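As a concrete illustration of the Methods recipe, the following Python sketch computes $\phi_1$ using the simplification $T_{i,i-1}/T_{i,i+1}=f_B/f_A$ (which follows from the transition probabilities above) and then forms the hypergeometric average of Equation $(1)$. The function names and parameter values ($N$, $G$, $b$, $c$, $\theta$, $\beta$) are illustrative, and same-strain individuals are assumed to express identical phenotype sets.

```python
import math
from math import comb

def fixation_prob(s_A, s_B, K_A, K_B, k_shared, N=50, G=20,
                  b=3.0, c=1.0, theta=0.1, beta=0.05):
    """phi_1 for a single A-mutant in a B-resident population, using
    T_{i,i-1}/T_{i,i+1} = f_B/f_A in the closed-form sum.
    Same-strain pairs are assumed to share all their phenotypes."""
    r_AA, r_BB, r_AB = K_A / G, K_B / G, k_shared / G
    total, prod = 1.0, 1.0
    for i in range(1, N):  # i = current number of A individuals
        P_A = (r_AA * (b - c) * s_A * (i - 1)
               + r_AB * (b * s_B - c * s_A) * (N - i) - theta * K_A)
        P_B = (r_AB * (b * s_A - c * s_B) * i
               + r_BB * (b - c) * s_B * (N - i - 1) - theta * K_B)
        prod *= math.exp(beta * (P_B - P_A))  # f_B / f_A at state i
        total += prod
    return 1.0 / total

def transition_rate(s_X, s_Y, K_X, K_Y, N=50, G=20):
    """Equation (1): hypergeometric average over the number k of
    phenotypes the mutant strain Y shares with the resident strain X."""
    lo, hi = max(0, K_X + K_Y - G), min(K_X, K_Y)
    return sum(comb(K_X, k) * comb(G - K_X, K_Y - k) / comb(G, K_Y)
               * fixation_prob(s_Y, s_X, K_Y, K_X, k, N=N, G=G)
               for k in range(lo, hi + 1))

# Neutral benchmark: identical strains fix with probability 1/N.
assert abs(fixation_prob(1, 1, 5, 5, 5) - 1 / 50) < 1e-12
```

These two functions are exactly the ingredients needed to fill the off-diagonal entries of the $2G \times 2G$ transition matrix.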
\section{Introduction} \label{sec1} Pulsars exhibit two varieties of rotational irregularities that are expected to be related to the dynamics of the interior fluid: spin glitches and timing noise. Glitches are sudden increases in the rotational frequency $\nu$ of the pulsar, with fractional amplitudes spanning $10^{-11}<\Delta \nu/\nu<10^{-4}$ across the pulsar population (see {\it e.g.}\,, \citealt{rad69,esp11}). The glitch event is unresolved by radio timing data, with a current upper limit of $40\,{\rm s}$ obtained from the 2000 January glitch in the Vela pulsar \citep{dod02}. Glitches are believed to arise from the global motion of superfluid vorticity in the neutron star crust that is caused by, {\it e.g.}\,, a noisy creep process \citep{and75b}, thermal heating induced by star quakes \citep{lin96,lar02}, a self-organized critical process \citep{mel08,war08} or a coherent noise process \citep{mel09}. The subsequent glitch recovery occurs over timescales ranging from days to years \citep{mcc87,mcc90,fla90,won01,dod02} and is attributed to dynamical relaxation of the neutron superfluid of the inner crust \citep{alp84b,alp93,alp96,lin14} and of the neutron-proton superfluid mixture of the core \citep{bay69a,eas79a,alp84a,van10,van14,lin14}. Distinct from glitches is {\sl timing noise}, the stochastic wander of pulse phase, frequency, and frequency derivative. This noise process might have many underlying causes and is thought to represent true variations in the star's spin rate \citep{boy72,cor80c,cor85,arz94,dal95,hob06b,hob10}. Possible contributing effects include variations in the external spin-down torque ({\it e.g.}\,, \citealt{che87a,che87b,ura06,lyn10}), variable torques exerted on the crust by the multiple fluid components \citep{alp86,jon90}, microglitches \citep{jan06}, and accretion \citep{qia03}. Variations in the interstellar medium ({\it e.g.}\,, \citealt{liu11}), could also play a role in timing noise. 
More speculatively, timing noise may be connected with underlying superfluid turbulence, which could produce stochastic variations in the pulsar spin frequency by exerting a variable viscous torque on the rigid crust \citep{mel14}. \citet{gre70} originally suggested that superfluid turbulence prevails in the core of a spinning-down neutron star. Various hydrodynamic instabilities that might lead to turbulence have been proposed as the cause of spin glitches and other timing irregularities. The outer core may be unstable to, {\it e.g.}\,, a variant of the Kelvin-Helmholtz instability occurring at the interface between the $^1S_0$-- and $^3P_2$--paired neutron superfluids \citep{mas05}. Two-stream instabilities in the interpenetrating neutron and proton condensates could be present in the rotating outer core, driven by Fermi-liquid interactions \citep{and04} and vortex-mediated processes \citep{gla09,and13}. \citet{lin12b} argued that slow slippage of vortices induced by relative flow between the neutron superfluid and crust is inherently unstable. An analogous instability was identified in the core, driven by the relative motion between the neutron superfluid and the flux tube array \citep{lin12a}. The Donnelly--Glaberson instability, studied in laboratory superfluid helium, is also expected to have a counterpart in neutron stars if the charged fluid component achieves a critical velocity along the rotation axis \citep{gla74,sid08}. Such a flow would be produced by precession of the star \citep{gla08}. \citet{mel12} has argued that if the inner core of the neutron star retains a high rotation rate from its birth, the outer core becomes susceptible to various instabilities in spherical Couette flow \citep{per06a,per08,per09}. Connecting glitches and timing noise with turbulence in the outer core presents two immediate challenges. One challenge is to identify instabilities that can grow to produce a turbulent state. 
A second, and more serious, challenge is to demonstrate how the turbulent state begins and ends. Steadily driven classical hydrodynamic systems that become unstable develop a quasi-steady turbulent cascade without global transient behavior. Some studies \citep{mas05,gla09} find instability growth times short enough to be consistent with the observed glitch rise time of $\lesssim 60$ s (Vela), but a description of how turbulence develops and produces a glitch has not been advanced. Other studies find evidence that timing irregularities are consistent with a state of underlying turbulence in the outer core \citep{mel07,mel14}, but the origin of this turbulence needs to be rigorously assessed. An interesting question is whether hydrodynamic instabilities are quenched by magnetic stresses. A general feature of magnetic equilibria is a twisted, tangled structure in which the toroidal field is greater than or equal to the poloidal field \citep{bra06,bra09}. \citet{van08} demonstrated that poloidal magnetic stresses have a stabilizing effect on a particular class of two-stream instabilities. In this paper, we evaluate the stability of the relative flow between the interpenetrating neutron and proton fluids. Relative flow would arise naturally as the crust and charged components of the star are spun down by the magnetic dipole torque, but vortex pinning prevents the neutron superfluid from corotating with the charged components. We consider pinning of neutron vortices to flux tubes in the outer core, accounting for slippage of the two lattices with respect to one another (imperfect pinning). We study the stabilizing effects of the toroidal plus poloidal magnetic field and demonstrate that the magnetic field stabilizes the unstable inertial modes for toroidal magnetic fields greater than $10^{10}\,{\rm G}$. We find that the instability of \citet{lin12a,lin12b} is not present.
Instabilities generated by flows along the rotation axis may arise from, {\it e.g.}\,, precession, for which we distinguish two instabilities. The two-stream instability identified by \citet{gla08} is stabilized by the magnetic field for wobble angles less than $0.1^{ \circ}$, as shown by \citet{van08}. Under imperfect pinning, the Donnelly--Glaberson instability occurs, which remains present for arbitrary magnetic field strength and which may be excited for wobble angles as small as $10^{-7\; \circ}$. The paper is structured as follows. In \S\ref{sec2}, we review the magnetohydrodynamic (MHD) theory of neutron star cores. We estimate the relevant hydrodynamic parameters in \S\ref{sec3}. In \S\ref{sec4}, we study two-stream instabilities driven by mutual friction, for rotating fluids (\S\ref{sec4a}) and flows along the rotation axis (\S\ref{sec4b}). Our conclusions are summarized in \S\ref{sec5}. \section{Hydrodynamics of a superfluid mixture } \label{sec2} The core of a neutron star is composed primarily of neutrons, with $\sim 5-10\%$ of the mass in protons; for the electrically neutral medium the number density of electrons is equal to that of the protons. At the supra-nuclear densities of the outer core, the Fermi energy for protons and neutrons is well above the typical temperature of a mature neutron star, and both the neutrons and protons are expected to condense into BCS superfluids, with $^3PF_2$ and $^1S_0$ Cooper pairing respectively \citep{mig59,bay69a}. To support rotation, the neutron superfluid forms an array of quantized vortices, filaments of microscopic cross section, each carrying one quantum of circulation. The superconductivity of the protons is predicted to be type II, and the magnetic field is supported by an array of quantized flux tubes, each carrying one quantum of magnetic flux. 
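The disparity between the two quantized lattices can be made concrete numerically. The following Python check reproduces the order-of-magnitude ratio $n_{vp}/n_{vn}$ quoted below in Equation (\ref{eq3}), assuming standard CGS values for the constants and the rigid-rotation relation $n_{vn}=2\Omega_n/\kappa$:

```python
import math

# Standard CGS values (assumed).
hbar = 1.0546e-27   # reduced Planck constant, erg s
m_n  = 1.6749e-24   # neutron mass, g
c    = 2.9979e10    # speed of light, cm/s
e    = 4.8032e-10   # elementary charge, esu

kappa = math.pi * hbar / m_n    # circulation quantum, cm^2/s
phi0  = math.pi * hbar * c / e  # flux quantum, G cm^2 (~2.07e-7)

B0, Omega_n = 1e12, 20 * math.pi   # field in G, spin in rad/s
n_vn = 2 * Omega_n / kappa         # vortex areal density, cm^-2
n_vp = B0 / phi0                   # flux-tube areal density, cm^-2
ratio = n_vp / n_vn
assert 7e13 < ratio < 8.5e13       # of order 8e13, as in Equation (3)
```

The result, roughly $8\times 10^{13}$, shows that flux tubes vastly outnumber vortex lines for typical pulsar parameters.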
Fermi-liquid interactions between the two condensates result in a nondissipative coupling between the mass currents of the two species \citep{and75a,cha06}, so that the neutron vortices are magnetized by entrained proton currents \citep{alp84a}. Electron scattering from magnetized vortices and flux tubes produces dissipative and non-dissipative forces on the vortices and flux tubes. The magnetic interaction at junctions between magnetized neutron vortices and flux tubes is energetic enough to produce pinning, wherein the neutron vortices pin to the dense array of flux tubes in the outer core \citep{sri90,jon91,cha92,rud98,lin12a}, similar to the predicted pinning of the vortices to the nuclear lattice of the crust \citep{and75b,alp77,eps88,don06,avo07,lin09}. Thermal fluctuations stochastically excite vortex motion, causing the neutron vortices to slip with respect to the flux tubes \citep{din93,sid09b,lin14}. In this section, we present the governing MHD equations describing the outer core of a neutron star. In \S\ref{sec2a}, we describe the equations relevant for this study of unstable inertial modes in the outer core. The perturbations of the equations about rotational equilibrium are presented in \S\ref{sec2b}. \subsection{Hydrodynamic treatment} \label{sec2a} To study the stability of flows on length scales much larger than the intervortex spacing $d_n$, it is convenient to perform a smooth averaging over many vortex lines or flux tubes on scales much larger than $d_n$ \citep{hal56a,hal56b,hal60,kha65,hil77,bay83,cha86,men91a,men91b,gla11}.
Over length scales that exceed $d_n$, the smooth-averaged vorticity of a rotating neutron condensate is \begin{eqnarray} \omegabf_n&=&n_{vn} \kappa \, \hat{\omegabf}_n=\nabla \times \vbf_n\,, \label{eq1} \end{eqnarray} where $\kappa=\pi \hbar/m$ is the quantum of circulation for neutrons of mass $m$, $n_{vn}$ is the areal density of vortex lines, $\hat{\omegabf}_n$ is the vorticity unit vector directed along the vortex lines, and $\vbf_n$ is the smooth-averaged velocity of the neutron superfluid. The smooth-averaged magnetic field $\Bbf$ in a type II superconductor is \begin{eqnarray} \Bbf&=&n_{vp} \phi_0 \, \hat{\bbf} \,, \label{eq2} \end{eqnarray} where $\phi_0=\pi \hbar c/e= m c\kappa/e $ is the quantum of magnetic flux, $n_{vp}$ is the areal density of flux tubes, and $\hat{\bbf}$ is the unit vector directed along the flux tubes. In the outer core of a neutron star rotating at angular velocity $\Omega_n$ and with magnetic field $B_0$, the flux tubes far outnumber the vortex lines: \begin{eqnarray} \frac{n_{vp}}{n_{vn}}\sim 8\times 10^{13} \left(\frac{B_0}{10^{12}\,{\rm G}}\right)\left(\frac{\Omega_n}{20\pi \, {\rm rad\,s^{-1}}}\right)^{-1}\,. \label{eq3} \end{eqnarray} Contributions to the magnetic field arising from the rotation of the proton and neutron condensates are of order $n_{vn}/n_{vp}\sim 10^{-14}$. We neglect these small corrections. In this paper, we focus our attention on the stabilizing effects of the magnetic stresses on the inertial mode instabilities. We neglect buoyancy and compressibility restoring forces by assuming constant density flows, which gives \begin{eqnarray} \nabla \cdot \vbf_x&=& 0 \,, \label{eq4} \end{eqnarray} for $x=n,p$. This assumption neglects g-modes and p-modes, which may be unstable in neutron star cores (see {\it e.g.}\,, \citealt{and04,gus13,hab16,pas16}). 
We do not study instabilities related to g-modes and p-modes in this paper, but refer the reader to the above works; we return to g-modes and p-modes in the Conclusions. We also neglect nuclear entrainment in this paper. Instabilities driven by entrainment coupling do not occur in the parameter range expected in neutron stars \citep{and03}, a result that we have verified using a more comprehensive stability analysis reported in \S\ref{secAe} and discussed further in the Conclusions. Entrainment has a small effect on the mode frequencies. The momentum equations for the neutron and proton--electron fluids in the MHD approximation are \citep{men91a,men91b,gla11} \begin{eqnarray} \frac{\partial \vbf_n}{\partial t} + \left(\nabla \times \vbf_n\right) \times \vbf_n &=&-\nabla p_n -\Tbf_n+\Fbf_n\,, \label{eq5} \\ \frac{\partial \vbf_p}{\partial t}+ \left(\nabla \times \vbf_p\right)\times \vbf_p &=&-\nabla p_p-\Tbf_p -\frac{\rho_n}{\rho_p}\Fbf_n +\nu_{ee} \nabla^2 \vbf_p + \Fbf_{dip} \,, \label{eq6} \end{eqnarray} where $\vbf_p$ is the smooth-averaged velocity of the proton--electron fluid, $\rho_{n,p}$ are the mass densities of the fluids, $p_{n,p}$ are scalar potentials related to thermodynamic variables in Equation (\ref{eqC29}), and $\Fbf_{dip}$ is the external driving force associated with the magnetic dipole torque on the star. The neutron fluid is inviscid, while the proton--electron fluid has kinematic viscosity $\nu_{ee}$ arising from electron--electron scattering. The two fluids are coupled by the mutual friction force $\Fbf_n$, which arises from electron scattering from magnetized neutron vortices and pinning interactions. 
The force acts equally and oppositely on the two fluids and is given by (see {\it e.g.}\,, \citealt{hal60,kha65,hil77,bar83,cha86,men91b,per07,gla11}), \begin{eqnarray} \Fbf_n&=& \mathcal{ B}_n \hat{\omegabf}_n \times \left[\omegabf_n \times \left( \vbf_n- \vbf_p \right)+\Tbf_n\right]+\mathcal{B}_n' \left[ \omegabf_n \times \left(\vbf_n- \vbf_p \right)+\Tbf_n\right] \,, \label{eq7} \end{eqnarray} where $\mathcal{B}_n$ and $\mathcal{B}'_n$ are the mutual friction coefficients; the first term is dissipative and the second term is nondissipative. The mutual friction coefficients are related to scattering and pinning parameters in \S\ref{sec3}. Electron scattering from flux tubes is connected with the evolution of the magnetic field and describes processes analogous to ohmic and Hall diffusion; see {\it e.g.}\,, \citet{gra15}. These effects are small compared with the inertial modes studied in this paper; see \S\ref{secB} for further discussion. The restoring force due to tension of the vortex lines is (see {\it e.g.}\,, \citealt{hal60,kha65,hil77,bay83,men91a,per07,gla11}), \begin{eqnarray} \Tbf_n&=& \frac{1}{\rho_n} \omegabf_n \times \left( \nabla \times \rho_n \nu_n \hat{\omegabf}_n \right) \,, \end{eqnarray} where $\nu_n$ is the vortex line tension parameter, defined in (\ref{eqA24}). The vortex line tension is negligible compared with other terms in (\ref{eq5}) and (\ref{eq6}); see Equation (\ref{eqC2}). We set the vortex tension to zero everywhere in this paper except in the analysis of the Donnelly--Glaberson instability in \S\ref{sec4ba}, where it determines the instability condition. In a type II superconductor the magnetic stresses arise from the tension of the array of the quantized flux tubes and are given by \citep{eas77} \begin{eqnarray} \Tbf_p= \frac{\Bbf}{4\pi \rho_p} \times \nabla \times \left( H_{c1} \hat{\bbf}\right)\,, \label{eq8} \end{eqnarray} where $H_{c1} \simeq 10^{15}\,{\rm G}$ is the lower critical field for type II superconductivity. 
The evolution of the magnetic field is determined by the induction equation \begin{eqnarray} \frac{\partial \Bbf}{\partial t}&=&\nabla \times \left(\vbf_p \times \Bbf \right)\,. \label{eq9} \end{eqnarray} The equations (\ref{eq1})--(\ref{eq9}) suffice to study the stabilizing effects of magnetic fields on the instabilities of interest. With $\Tbf_n=0$, the equations do not include vortex line tension forces that produce Kelvin waves, which are small compared with the Coriolis force. The evolution of the magnetic field is slow with respect to the timescales for oscillation modes. Magnetic stresses generated by rotation of charged fluid components, \ie, the London field, are also negligible. For a detailed discussion of the magnetohydrodynamic theory of \citet{gla11}, and a scaling analysis determining the relevant terms, the reader is referred to \S\ref{secA}--\ref{secB}. \subsection{Perturbation equations} \label{sec2b} Consider a neutron star comprising a neutron and a proton--electron fluid rotating as rigid bodies with angular velocities $\Omega_{n,p}$. The star is spinning down under a constant external torque that acts only on the proton--electron fluid over the spin-down time of the star. Meanwhile, the proton--electron fluid spins down the neutrons via the vortex-mediated mutual friction force, $\Fbf_n$. As a consequence, the neutron fluid is rotating faster than the proton--electron fluid by an amount $\Delta \Omega =\left(\Omega_n-\Omega_p\right)$. Taking $\hat{z}$ to be the rotation axis and denoting the unperturbed state with subscript $0$, we write the unperturbed velocities in the inertial frame as \begin{eqnarray} \vbf_{n0}&=& \Omega_n \hat{z} \times {\bf r}+ \Delta v_z \hat{z} \,,\\ \vbf_{p0}&=&\left(\Omega_n-\Delta \Omega\right) \hat{z} \times {\bf r} \,, \end{eqnarray} where the parameter $\Delta v_{z}$ is introduced to study the two-stream instabilities arising from relative velocity between the two fluids along the rotation axis. 
The lag $\Delta \Omega$ in the unperturbed state is determined by the momentum equations (\ref{eq5}) and (\ref{eq6}). Assuming that the spin-down rate ($\dot{\Omega}_p/\Omega_p$) is much slower than the rotation frequency, in cylindrical coordinates $(r,\phi,z)$ the azimuthal components of (\ref{eq5}) and (\ref{eq6}) give \begin{eqnarray} \dot{\Omega}_n &=& -2\Omega_n \mathcal{B}_n \Delta \Omega \,, \label{eq12aa} \\ \dot{\Omega}_p &=& \frac{2\Omega_n \rho_n \mathcal{B}_n \Delta \Omega}{\rho_p} +\frac{F_{dip,\phi}}{r}\,. \label{eq12ab} \end{eqnarray} Defining the pulsar spin-down time $\tau_{sd}=\Omega_p/(2 | \dot{\Omega}_p | )$, where $| \dot{\Omega}_n |/2 \pi = | \dot{\nu}|$ is the magnitude of the frequency derivative of the pulsar's observed spin rate, and assuming that the spin-down rates of the neutron fluid and proton--electron fluid are equal ($\dot\Omega_n=\dot\Omega_p$) and $\Delta \Omega/\Omega_n \ll 1$, we find that Equation (\ref{eq12aa}) gives the lag \begin{eqnarray} \Delta \Omega &=& \left( 4 \tau_{sd} \, \mathcal{B}_n \right)^{-1} \,. \label{eq14z} \end{eqnarray} The lag depends on the dissipative mutual friction coupling between the two fluids. The coefficient $\mathcal{B}_n$ depends on the scattering of electrons with vortices and the pinning between vortices and flux tubes in the outer core and is discussed further in \S\ref{sec3}. Combining (\ref{eq12aa}) and (\ref{eq12ab}), multiplying by $r$ and integrating over the volume of the star gives the spin-down equation \begin{eqnarray} I \dot\Omega_p &=&-N_{dip} \,, \end{eqnarray} where $I=8\pi (\rho_n+\rho_p) R^5/15$ is the moment of inertia, $N_{dip}=-\int r \rho_p F_{dip,\phi} dV =2 B^2 R^6 \Omega_p^3/3c^3$ is the external dipole torque, and $R$ is the stellar radius. We now study the stability of the state described by Equations (\ref{eq12aa}) and (\ref{eq12ab}). We use a local plane wave analysis, taking $x,y$ to be the local radial and azimuthal coordinates, respectively. 
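The quasi-steady lag (\ref{eq14z}) can be illustrated by integrating (\ref{eq12aa}) and (\ref{eq12ab}) forward in time. The sketch below uses toy dimensionless values (not neutron star parameters) chosen only so that the lag relaxes quickly; the constant `a` stands in for the azimuthally averaged torque term $F_{dip,\phi}/r$ and is an arbitrary choice:

```python
# Toy integration of the spin-down equations (eq12aa)-(eq12ab), in
# dimensionless units chosen so the lag relaxes quickly (NOT neutron-star
# values). The external torque is modeled as a constant deceleration `a`
# applied to the proton-electron fluid.
B_n      = 0.01     # dissipative mutual friction coefficient (toy value)
rho_rat  = 9.0      # rho_n/rho_p (toy value)
a        = -1.0e-4  # torque term F_dip,phi/r, azimuthally averaged (toy value)

Omega_n, Omega_p = 1.0, 1.0
dt = 0.01
for _ in range(20000):                               # integrate to t = 200
    dlag = Omega_n - Omega_p
    dOn = -2.0 * Omega_n * B_n * dlag                # Equation (eq12aa)
    dOp = 2.0 * Omega_n * rho_rat * B_n * dlag + a   # Equation (eq12ab)
    Omega_n += dOn * dt
    Omega_p += dOp * dt

lag = Omega_n - Omega_p
tau_sd = Omega_p / (2.0 * abs(dOp))    # spin-down time, as defined in text
lag_pred = 1.0 / (4.0 * tau_sd * B_n)  # quasi-steady lag, Equation (eq14z)
print(lag, lag_pred)
```

The relaxed lag agrees with the prediction (\ref{eq14z}) once the two spin-down rates equalize.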
The local plane wave analysis is adequate for wavenumbers $k R \gg 1$. Recall that the hydrodynamic approximation is valid for $k d_n \ll 1$ where $d_n$ is the neutron vortex spacing. These conditions restrict the treatment to wavenumbers in the range $d_n \ll k^{-1}\ll R$. In this coordinate system, the velocities in the inertial frame are \begin{eqnarray} \vbf_{n0}&=&R \Omega_n \hat{y}+ \Delta v_z \hat{z} \,, \label{eq12} \\ \vbf_{p0}&=&R \left(\Omega_n-\Delta \Omega \right) \hat{y} \,. \label{eq13} \end{eqnarray} The unperturbed magnetic field has poloidal $\hat{z}$ and toroidal $\hat{y}$ components and is given by \begin{eqnarray} \Bbf_0&=&B_0 \hat{\bbf}_0=B_{0y} \hat{y}+ B_{0z} \hat{z} \,. \end{eqnarray} Denoting the perturbed quantities by $\delta$, the perturbed momentum equations for the neutron and proton--electron fluids are \begin{eqnarray} \frac{\partial \delta \vbf_n}{\partial t} + 2\Omega_n \hat{z} \times \delta \vbf_n + \left(\nabla \times \delta \vbf_n\right) \times \vbf_{n0}&=&-\nabla \delta p_n - \delta \Tbf_n + \delta \Fbf_n\,, \label{eq14} \\ \frac{\partial \delta \vbf_p}{\partial t}+ 2\left(\Omega_n-\Delta \Omega \right) \hat{z} \times \delta \vbf_p + \left(\nabla \times \delta \vbf_p\right) \times \vbf_{p0} &=&-\nabla \delta p_p- \delta \Tbf_p -\frac{\rho_n}{\rho_p} \delta \Fbf_n +\nu_{ee} \nabla^2 \delta \vbf_p \,. \label{eq15} \end{eqnarray} The perturbations of the vortex line tension are \begin{eqnarray} \delta \Tbf_n &=& - \nu_n \hat{z}\cdot\nabla \left[\delta \omegabf_n - \hat{z} \left(\hat{z}\cdot \delta \omegabf_n\right)\right]\,. 
\end{eqnarray} The perturbed flux tube tension is \begin{eqnarray} \delta \Tbf_p= -\frac{H_{c1} }{4\pi \rho_p} \hat{\bbf}_0 \cdot \nabla \left[ \delta \Bbf- \hat{\bbf}_0 \left( \hat{\bbf}_0 \cdot \delta \Bbf \right) \right]\,, \end{eqnarray} and the mutual friction force is \begin{eqnarray} \delta \Fbf_n&=& \mathcal{ B}_n R\Delta \Omega \hat{x} \times \left(\nabla \times \delta \vbf_n \right) - \mathcal{ B}_n \Delta v_z \hat{z}\times \left[ \hat{z}\times \left(\nabla \times \delta \vbf_n \right) \right] +\mathcal{ B}_n 2\Omega_n \hat{z}\times\left[ \hat{z}\times \left(\delta \vbf_n -\delta \vbf_p\right) \right] + \mathcal{ B}_n \hat{z} \times \delta \Tbf_n \nonumber \\ &+& \mathcal{B}'_n \left(\nabla \times \delta \vbf_n \right)\times \left( R\Delta \Omega \hat{y} + \Delta v_z \hat{z} \right) + \mathcal{B}'_n 2\Omega_n \hat{z} \times \left(\delta \vbf_n -\delta \vbf_p\right) + \mathcal{ B}'_n \delta \Tbf_n\,. \label{eq23z} \end{eqnarray} Here we ignore dependence of $\mathcal{B}_n$ and $\mathcal{B}'_n$ on fluid velocity; see \S\ref{sec3} and \S\ref{secD} for further discussion of this point. The induction equation for the perturbations is \begin{eqnarray} \frac{\partial \delta \Bbf}{\partial t}&=&\nabla \times \left( \vbf_{p0} \times \delta \Bbf +\delta \vbf_p \times \Bbf_0 \right)\,. \label{eq18} \end{eqnarray} The spin-down rate ($\dot{\Omega}_p/\Omega_p$) is much slower than the frequency of any hydrodynamic mode in the system, and perturbations of the external torque $\Fbf_{dip}$ are negligible. To satisfy the continuity equations for the perturbations, we introduce the potential \begin{eqnarray} \delta\vbf_n=\nabla \times \left( \psi_{n x}\hat{x}+ \psi_{ny}\hat{y} + \psi_{nz}\hat{z} \right)\,, \label{eq19} \end{eqnarray} and similarly for proton--electron fluid. For the magnetic field, we write \begin{eqnarray} \delta \Bbf=\nabla \times \left( A_{x}\hat{x}+ A_{y}\hat{y}+ A_{z}\hat{z}\right)\,. 
\label{eq20} \end{eqnarray} To solve the system, we assume solutions of the form $e^{i {\bf k}\cdot{\bf x}-i \omega t}$ for all parameters. One component of the potentials in Equations (\ref{eq19}) and (\ref{eq20}) is redundant, and we take $ \psi_{nz}=A_{z}=0$. Eliminating $\delta p_{n,p}$ using the $\hat{z}$ components of (\ref{eq14}) and (\ref{eq15}), the $x$ and $y$ components of (\ref{eq14}) and (\ref{eq15}) and the induction equation (\ref{eq18}) give a matrix system of six equations in the unknowns $\psi_{nx}$, $\psi_{ny}$, $\psi_{px}$, $\psi_{py}$, $A_x$ and $A_y$. The complete dispersion relation is extremely lengthy, and we do not present it here. In \S\ref{sec4} we consider limits of the full dispersion relation that elucidate each of the instabilities present in the system. \section{Neutron star parameters and relevant terms} \label{sec3} Before solving the perturbation equations, we obtain numerical estimates of the quantities that appear. An approximate expression for the electron--electron scattering contribution to the viscosity is provided by \citet{cut87}. More recent calculations by \citet{sht08} account for transverse Landau damping in charged particle collisions and find a viscosity approximately a factor of three smaller than that of \citet{cut87}. The kinematic viscosity $\nu_{ee}$ is defined in terms of the shear viscosity $\eta$ by \begin{equation} \label{eq50} \nu_{ee}=\frac{\eta}{\rho_p}=6\times 10^{5}\, \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right) \left(\frac{x_p}{0.1}\right)^{-1} \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{-2}\, \rm{cm^{2} s^{-1} }\,. \end{equation} The relative size of the viscous forces and Coriolis force is parameterized by the Ekman number, $E=\nu_{ee}/(\Omega_n R^2)$. 
Equation (\ref{eq50}) gives \begin{eqnarray} E= 10^{-9} \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1}\left(\frac{x_p}{0.1}\right) \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{2} \left(\frac{R}{10^6\,{\rm cm}}\right)^{-2} \left(\frac{\Omega_n}{20\pi\, {\rm rad\,s^{-1}}}\right)^{-1}\,. \label{eq76} \end{eqnarray} Viscosity plays an important role in damping of high-wavenumber perturbations. To estimate the importance of magnetic stresses, we note that magnetic stresses dominate the inertial forces when the vortex-cyclotron crossing time becomes shorter than the rotational period, \ie, for wavenumbers satisfying $\left| \vbf_{vc} \cdot \kbf \right| \gg 2\Omega_n$ where $|\vbf_{vc}|=\sqrt{H_{c1} B_0/(4 \pi \rho_p)} $ is the vortex-cyclotron wave speed. The magnetic stress dominates the inertial force when \begin{eqnarray} k R & \gg& 10^2 \left(\frac{H_{c1}}{4\times 10^{14} \,{\rm G}}\right)^{-1/2} \left(\frac{B_0}{10^{12} \,{\rm G}}\right)^{-1/2} \left(\frac{x_p}{0.1}\right)^{1/2} \left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{1/2} \left(\frac{\Omega }{20 \pi \,{\rm rad\,s^{-1}}}\right)\left(\frac{R}{10^6\,{\rm cm}}\right) \,. \label{eq114a} \end{eqnarray} In this limit, the flux tube array appears infinitely rigid to the neutron fluid, and the neutron fluid decouples from the proton--electron fluid. For low wavenumbers with $k R\sim 1$, magnetic stresses are negligible. To estimate the mutual friction coefficients when the vortex lines and flux tubes are pinned together, we consider the rotational equilibrium described in \S\ref{sec2b}, for which $\dot{\Omega}_n=\dot{\Omega}_p$. Pinning forces can sustain a relative angular velocity $\Delta\Omega$ between the neutron and proton--electron fluids of up to the critical angular velocity for unpinning $\Delta \Omega_{crit}$. Numerical estimates for conditions in the outer core give $\Delta \Omega_{crit} \approx 0.1 \,{\rm rad\,s^{-1}}$ \citep{lin14}. 
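As a numerical illustration of the threshold (\ref{eq114a}), evaluating the vortex-cyclotron speed $|\vbf_{vc}|=\sqrt{H_{c1} B_0/(4\pi\rho_p)}$ for the fiducial parameters gives a critical dimensionless wavenumber of order $10^2$; the following sketch checks this:

```python
import math

# Wavenumber above which magnetic (flux-tube tension) stresses dominate the
# Coriolis force: |v_vc . k| >> 2*Omega_n, Equation (eq114a).
# Fiducial values from Section 3 of the text; CGS units.
H_c1  = 4.0e14          # G, lower critical field
B0    = 1.0e12          # G, fiducial field
rho   = 3.0e14          # g/cm^3, total density
x_p   = 0.1             # proton fraction
rho_p = x_p * rho       # proton mass density
Omega = 20 * math.pi    # rad/s
R     = 1.0e6           # cm, stellar radius

v_vc = math.sqrt(H_c1 * B0 / (4 * math.pi * rho_p))  # vortex-cyclotron speed
kR_crit = 2 * Omega * R / v_vc                       # threshold in k*R
print(f"v_vc = {v_vc:.2e} cm/s, kR threshold ~ {kR_crit:.0f}")
```

The threshold $kR \sim 10^2$ reproduces the prefactor in Equation (\ref{eq114a}).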
From Equation (\ref{eq14z}), ${\cal B}_n$ is related to $\Delta\Omega$ by \begin{eqnarray} \mathcal{B}_n &=& \left( 4 \tau_{sd} \,\Delta \Omega \right)^{-1} \label{eq33} \,. \end{eqnarray} In the microscopic treatment of thermally activated vortex motion, the mutual friction coefficients take the form \citep{lin14} [see Equation (51) therein] \begin{eqnarray} \mathcal{B}_n&=& \frac{\gamma \mathcal{R}_{n}}{1+\mathcal{R}_{n}^2} \label{eq24a} \,, \\ 1-\mathcal{B}'_n&=& \frac{\gamma}{1+\mathcal{R}_{n}^2} \,, \label{eq24} \end{eqnarray} where $\mathcal{R}_n$ is a scattering coefficient related to electron scattering from magnetized vortex lines, $\gamma={\rm e}^{-A/k_B T}\ll 1$ is the fraction of unpinned vorticity, $A$ is the activation energy for unpinning, $k_B$ is Boltzmann's constant, and $T$ is the temperature. The activation energy depends on the lag $\Delta\Omega$. For a given ${\cal R}_n$ and $T$, the value of the activation energy adjusts so that (\ref{eq33}) holds. For typical parameters of a neutron star, the equilibrium lag is very close to the critical value; see \citet{lin14} and \S\ref{secD} for a detailed calculation. We take $\Delta\Omega=\Delta\Omega_{crit}$ in (\ref{eq33}) and below when making numerical estimates. Recall that the mutual friction force takes the form (\ref{eq7}), \begin{eqnarray} \Fbf_n&=& \mathcal{ B}_n \hat{\omegabf}_n \times \left[\omegabf_n \times \left( \vbf_n- \vbf_p \right)+\Tbf_n\right]+\mathcal{B}_n' \left[ \omegabf_n \times \left(\vbf_n- \vbf_p \right)+\Tbf_n\right] \,. \label{eq7z} \end{eqnarray} In perturbing this force, we took the mutual friction coefficients to be constant; see Equation (\ref{eq23z}). Thermally activated vortex motion causes the mutual friction coefficients to depend on $\vert \omegabf_n\times (\vbf_n-\vbf_p)\vert$ through the activation energy, which must be included when perturbing (\ref{eq7z}). This is explored in detail in \S\ref{secD}. 
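Anticipating the fiducial values used in the estimates (\ref{eq32a}) and (\ref{eq32b}) below ($\Delta\Omega_{crit}=0.1\,{\rm rad\,s^{-1}}$, $\tau_{sd}=10\,{\rm kyr}$, $\mathcal{R}_n=4\times 10^{-4}$), the relations (\ref{eq33}) and (\ref{eq34}) can be evaluated directly; a numerical sketch:

```python
# Numerical check of the mutual friction coefficient estimates
# (eq32a)-(eq32b) via relations (eq33) and (eq34); CGS units,
# fiducial values from the text.
kyr = 3.156e10                  # seconds per kyr
tau_sd    = 10 * kyr            # spin-down time
dOmega_cr = 0.1                 # rad/s, critical lag for unpinning
R_n       = 4.0e-4              # electron-vortex scattering coefficient

B_n = 1.0 / (4 * tau_sd * dOmega_cr)                  # Equation (eq33)
one_minus_Bp = 1.0 / (4 * tau_sd * dOmega_cr * R_n)   # Equation (eq34)
print(f"B_n = {B_n:.1e}, 1 - B_n' = {one_minus_Bp:.1e}")
```

The values $\mathcal{B}_n\approx 8\times 10^{-12}$ and $1-\mathcal{B}'_n\approx 2\times 10^{-8}$ match the estimates quoted in (\ref{eq32a}) and (\ref{eq32b}).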
The scattering coefficient $\mathcal{R}_n$ is calculated from the relaxation time for the electron distribution function due to relativistic electron scattering from a magnetized neutron vortex. The coefficient is related to the scattering time $\tau_{sn}$ by $\mathcal{R}_{n} = ( |\omegabf_n| \tau_{sn})^{-1}$ and is given by \citep{alp84a,har86,jon87} \begin{eqnarray} \mathcal{R}_{n}&=& \frac{\rho_{p}}{ \rho_n}\left(\frac{\rho_{np} }{\rho_{pp}}\right)^2 \frac{3 \pi e^2 \phi_0^2}{64 m_p c E_{Fe} \Lambda \kappa} \label{eq26}\,, \end{eqnarray} where $E_{Fe}=\hbar c (3 \pi^2 \rho_p/m)^{1/3}$ is the Fermi energy of the electrons. Based on the results of \citet{alp84a}, \citet{men91b} obtained the approximate expression \begin{eqnarray} \mathcal{R}_{n}&=&0.011 \left(\frac{m_p^*-m_p}{m_p}\right)^2\left(\frac{m_p}{m_p^*}\right)^{1/2} \left(\frac{x_p^{7/6}}{1-x_p}\right)\left(\frac{\rho}{10^{14}\,{\rm g\,cm^{-3}}}\right)^{1/6}\,. \label{eq35z} \end{eqnarray} The scattering coefficient $\mathcal{R}_n$ is related to the drag coefficient $\eta_n$ and dissipation angle $\theta_{d}$ used by other authors \citep{alp84a,har86,jon87,lin14} by \begin{eqnarray} \mathcal{R}_{n}=\frac{\eta_n}{\rho_n \kappa} = \tan \theta_{d}\,. \end{eqnarray} From (\ref{eq33}), (\ref{eq24a}) and (\ref{eq24}), the nondissipative mutual friction coefficient is \begin{eqnarray} 1- \mathcal{B}'_n &=& \left( 4 \tau_{sd} \,\Delta \Omega \mathcal{R}_{n} \right)^{-1} \label{eq34} \,. 
\end{eqnarray} Using estimates for the critical velocity for unpinning in the outer core obtained by \citet{lin14}, we find Equations (\ref{eq33}) and (\ref{eq34}) give \begin{eqnarray} \mathcal{B}_n &=& 8\times 10^{-12} \left( \frac{\Delta \Omega_{crit}}{0.1 \,{\rm rad\,s^{-1}}} \right)^{-1} \left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{-1} \,, \label{eq32a} \\ 1-\mathcal{B}_n'&=&2\times 10^{-8} \left( \frac{ \mathcal{R}_{n} }{4\times10^{-4}} \right)^{-1}\left( \frac{\Delta \Omega_{crit} }{0.1 \,{\rm rad\,s^{-1}}} \right)^{-1}\left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{-1}\,. \label{eq32b} \end{eqnarray} We stress that these are crude estimates; for thermally activated vortex motion, these coefficients depend on the fluid velocities. \section{Two-stream instabilities driven by mutual friction} \label{sec4} \subsection{Rotational Lag during Spin-down} \label{sec4a} As a neutron star spins down under the magnetic dipole torque, pinning forces produce a rotational lag between the neutron and proton--electron fluids; see \S\ref{sec2b}. Two instabilities appear in this system: a fast two-stream instability with a growth time of seconds, and a slow two-stream instability with a growth time of days. Instabilities of this nature have been studied by \citet{gla09} and \citet{and13}, respectively, by examining selected modes in spherical geometry and neglecting the magnetic field. We consider these instabilities in \S\ref{sec4aa} and \S\ref{sec4ab} and demonstrate that both are stabilized by the toroidal component of the magnetic field. In \S\ref{sec4ac} the instabilities of \citet{lin12a,lin12b} are revisited in the full two-fluid hydrodynamic theory. An algebraic error in those papers is corrected and the system is shown to be stable. \subsubsection{A Fast Two-stream instability} \label{sec4aa} The dispersion relation derived in \S\ref{sec2b} has significant algebraic complexity and we begin by exploring the parameter space numerically. 
We identify an instability with a growth time of seconds. This instability is stabilized by the toroidal component of the magnetic field $B_{0y}$. To understand this instability, we explore the numerical solutions to the dispersion relation further. We find that approximating the mutual friction coefficients (\ref{eq32a}) and (\ref{eq32b}) by $\mathcal{B}_n=1-\mathcal{B}'_n=0$ has no significant effect on the instability. For simplicity, we set $\mathcal{B}_n=1-\mathcal{B}'_n=0$ in the calculation presented here; this is discussed further later. This approximation implies that the vortices and flux tubes move together, an approximation referred to as `perfect pinning' elsewhere. The poloidal field has no significant effect on the instability and we assume $B_{0z}=0$. Only the toroidal field $B_{0y}$ plays an essential role in this instability. We ignore the vortex line tension and take $\nu_n=0$. The dispersion relation under these assumptions reduces to \begin{eqnarray} \omega_n^2\left(A \omega_n^4+ B \omega_n^3+C \omega_n^2+D\omega_n +E\right)=0\,, \label{eq42z} \end{eqnarray} where \begin{eqnarray} A &=& |k|^4 x_p^2 \,, \nonumber \\ B &=& 2 i x_p^2 |k|^6 \nu_{ee} \,, \nonumber \\ C &=& -|k|^2\left\{ 4 k_z^2 \left[\Omega_n^2+2 x_p \Omega_n \Delta\Omega +\left(\Omega_n-\Delta\Omega\right)^2 x_p^2 \right] + x_p^2 k_y^2 \left(|k|^2+k_y^2\right)v^2_{vcy} + x_p^2 |k|^6 \nu_{ee}^2 \right\} \,, \nonumber \\ D &=& 8 k_y k_z^2|k|^2 \Omega_n \Delta\Omega R \left( \Omega_n+\Delta\Omega x_p-\Omega_n x_p\right) - \nu_{ee} |k|^4 \left[8 \Omega_n^2 k_z^2+x_p k_y^2 \left(|k|^2+k_y^2\right) v_{vcy}^2 \right] \,, \nonumber \\ E &=&4 k_z^2 \Omega_n^2 \left(4 k_z^2 \Omega_n^2+x_p k_y^4 v^2_{vcy}-\Delta \Omega^2 k_y^2 |k|^2 R^2\right) + x_p k_y^2 |k|^2 v_{vcy}^2\left(4k_z^2 \Omega_n^2+k_y^4 x_p v^2_{vcy} \right) \,, \end{eqnarray} and $v_{vcy}=\sqrt{H_{c1} B_{0y}/(4 \pi \rho_p)}$ is the speed of vortex-cyclotron waves. 
In the unperturbed state, the neutron vortex lines move with the proton--electron fluid. The frequency in this frame $\omega_n$ is related to the frequency in the inertial frame $\omega$ by \begin{eqnarray} \omega = \left( \Omega_n-\Delta\Omega\right) k_y R +\omega_n \,. \end{eqnarray} Therefore the dispersion relation (\ref{eq42z}) has two solutions that are zero in the rotating frame, which become $\left(\Omega_n-\Delta \Omega\right) k_y R $ after transforming back into the inertial frame. \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure1.pdf} \caption{\footnotesize Growth time $\tau$ of the unstable solution of (\ref{eq42z}) as a function of dimensionless wavenumber $|k| R$ for no magnetic field. Growth time is plotted for $\theta_c$ given by (\ref{eq101}). Three values of $\Delta \Omega$ are plotted: $10^{-2}\,{\rm rad\,s^{-1}}$ (dot-dashed), $10^{-3/2}\,{\rm rad\,s^{-1}}$ (dashed), and $10^{-1}\,{\rm rad\,s^{-1}}$ (solid). Viscous forces suppress the instability at high wavenumber. } \label{fig3a} \end{figure*} First, we examine the instability in the absence of the magnetic field. Writing the wavenumber in spherical coordinates as $k_x=|k| \sin\theta \cos \phi $, $k_y=|k| \sin\theta \sin\phi$ and $k_z=|k| \cos\theta$, the dispersion relation is \begin{eqnarray} \omega_n^2 \left[\omega_n^2+\left(B_+ + i B_i\right) \omega_n + C_+ \right]\left[\omega_n^2+\left(B_-+i B_i \right) \omega_n + C_-\right]=0\,, \label{eq45z} \end{eqnarray} where \begin{eqnarray} B_\pm &=& \pm \frac{2 \cos\theta }{x_p} \left(\Omega_n - \Omega_n x_p+\Delta \Omega x_p \right) \,, \nonumber \\ B_i&=& |k|^2 \nu_{ee} \,, \nonumber \\ C_\pm &=& -\frac{2 \Omega_n \cos\theta }{x_p} \left(2 \Omega_n \cos\theta \pm \Delta \Omega |k| R \sin\theta \sin\phi \right) \,. 
\end{eqnarray} Separating out the real and imaginary parts, the unstable solutions to (\ref{eq45z}) can be written \begin{eqnarray} \omega_n&=&-\frac{B_\pm}{2} \pm \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_\pm^2-B_i^2-4C_\pm\right)^2+\left(2 B_\pm B_i\right)^2}+\left(B_\pm^2-B_i^2-4C_\pm\right)} \nonumber \\ &-& i \left[ \frac{B_i}{2} - \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_\pm^2-B_i^2-4C_\pm\right)^2+\left(2 B_\pm B_i\right)^2}-\left(B_\pm^2-B_i^2-4C_\pm\right)} \right] \,. \label{eq47z} \end{eqnarray} The solution (\ref{eq47z}) is unstable when the term in the square braces is negative. This occurs for $C_\pm > 0$, yielding the instability condition \begin{eqnarray} \pm |k| R \tan\theta \sin\phi > \frac{2 \Omega_n }{ \Delta\Omega } \,. \label{eq116f} \end{eqnarray} Generally, $\Delta \Omega \ll \Omega_n$ and $x_p\ll 1$. Viscous stresses are negligible compared to the inertial forces when $B_i \ll B_\pm$, which occurs for wavenumbers satisfying $|k| \ll \sqrt{2\Omega_n/\nu_{ee} x_p}$. Using the neutron star parameters in \S\ref{sec3}, this gives \begin{eqnarray} |k| R \ll 10^5 \left(\frac{\Omega_n }{20 \pi \, {\rm rad\,s^{-1}} }\right)^{1/2} \left(\frac{R }{10^6 \, {\rm cm} }\right) \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1/2} \left(\frac{\it{T}}{10^8 \, \rm{K}}\right) \,. \label{eq43z} \end{eqnarray} Under these assumptions, the `$-$' solution of (\ref{eq47z}) reduces to \begin{eqnarray} \omega_n &=& \frac{\cos\theta}{x_p}\left[\Omega_n + i \sqrt{\Omega_n \left(2\Delta \Omega |k|R x_p \tan\theta\sin\phi -\Omega_n\right) } \right] \nonumber \\ &-&\frac{|k|^2 \nu_{ee} }{2}\left[i + \frac{ \Omega_n }{\sqrt{\Omega_n \left(2\Delta \Omega |k|R x_p \tan\theta\sin\phi -\Omega_n\right) } }\right]\,. \label{eq50z} \end{eqnarray} The solution (\ref{eq50z}) has two distinct growth times depending on the sign of the term under each square root. 
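The unstable branch can also be evaluated numerically. The sketch below (illustrative only) solves the `$-$' quadratic factor of (\ref{eq45z}) at the fastest-growing orientation (\ref{eq101}), using the fiducial parameters of \S\ref{sec3}, and recovers a growth time of order seconds:

```python
import cmath
import math

# Growth rate of the fast two-stream instability: roots of the '-' factor of
# the field-free dispersion relation (eq45z), evaluated at the orientation
# (eq101) that maximizes the growth rate. Fiducial parameters from Section 3.
Omega, dOmega, x_p = 20 * math.pi, 0.1, 0.1   # rad/s, rad/s, proton fraction
R, nu_ee = 1.0e6, 6.0e5                       # cm, cm^2/s
kR = 2 * math.pi
k = kR / R

cos_t = dOmega * kR * x_p / Omega   # cos(theta_c) in the limit of (eq101)
sin_t = math.sqrt(1 - cos_t**2)
sin_phi = 1.0                       # cos(phi) = 0 maximizes the growth rate

B_minus = -(2 * cos_t / x_p) * (Omega - Omega * x_p + dOmega * x_p)
B_i = k**2 * nu_ee
C_minus = -(2 * Omega * cos_t / x_p) * (2 * Omega * cos_t
           - dOmega * kR * sin_t * sin_phi)

# omega_n^2 + (B_minus + i*B_i)*omega_n + C_minus = 0; with the e^{-i w t}
# convention, Im(omega_n) > 0 means exponential growth.
b = B_minus + 1j * B_i
roots = [(-b + s * cmath.sqrt(b * b - 4 * C_minus)) / 2 for s in (+1, -1)]
sigma = max(r.imag for r in roots)   # growth rate, 1/s
tau = 1.0 / sigma
print(f"growth time tau = {tau:.2f} s")
```

The result, $\tau\approx 1.8\,$s, is consistent with the estimate $\tau\approx(\Delta\Omega |k| R)^{-1}$ derived in the text.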
If $0<2\Delta \Omega |k|R x_p \tan\theta\sin\phi<\Omega_n$, the term under each square root is negative, and there is an instability with growth time determined by the second term, namely $\tau\sim (|k|^2 \nu_{ee})^{-1}$. The growth time of this instability is \begin{equation} \label{eq102z} \tau =5\, \left(\frac{|k| R}{2 \pi }\right)^{-2} \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1}\left(\frac{x_p}{0.1}\right) \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{2} \, {\rm days } \,. \label{eq50g} \end{equation} If $2\Delta \Omega |k|R x_p \tan\theta\sin\phi>\Omega_n$, the term under the square root in (\ref{eq50z}) is positive and the first term dominates the growth time. The quickest growth time occurs for $\cos\phi=0$ and an angle $\theta_c$ given by \begin{eqnarray} \tan 2 \theta_c \approx \pm \frac{2 x_p \Delta\Omega |k|R }{\Omega_n } \,. \label{eq101} \end{eqnarray} Substituting this result into (\ref{eq50z}), and noting that $\sin\theta_c \approx 1$ and $\cos\theta_c=\Delta \Omega |k|R x_p/\Omega_n \ll1$, we find the growth time is approximately $\tau \approx \left( \Delta \Omega |k| R \right)^{-1}$. Typical neutron star numbers in \S\ref{sec3} give \begin{equation} \label{eq102} \tau =2\, \left(\frac{|k| R}{2 \pi }\right)^{-1} \left(\frac{\Delta \Omega}{0.1\, {\rm rad\,s^{-1}}}\right)^{-1} \, {\rm s } \,. \end{equation} The growth time of this instability is much shorter than (\ref{eq102z}). Similar arguments apply to the `$+$' solution of (\ref{eq47z}), which can be obtained by making the replacement $\theta\rightarrow -\theta$. Therefore the instability condition for the fast instability is \begin{eqnarray} \pm |k| R \tan\theta \sin\phi > \frac{\Omega_n }{ 2 x_p \Delta\Omega } \,. \label{eq51z} \end{eqnarray} For wavenumbers satisfying (\ref{eq116f}) and not (\ref{eq51z}), the slow instability with growth time (\ref{eq50g}) occurs. 
In Figure \ref{fig3a}, we plot the growth time (in seconds) of the unstable solution of (\ref{eq47z}) as a function of the dimensionless wavenumber $|k|R$. The orientation of the wave vector is chosen to give the quickest growth time, given by (\ref{eq101}). At low wavenumbers, defined by (\ref{eq43z}), the growth time is well approximated by (\ref{eq102}). In this regime, the growth time becomes faster as the wavenumber increases. At high wavenumbers, viscous forces slow the growth of the instability. The growth time becomes infinitely long as the wavenumber approaches infinity. \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure2a.pdf} \includegraphics[width=.4\linewidth]{Figure2b.pdf} \\ \caption{\footnotesize Growth time $\tau$ of the unstable solution of (\ref{eq42z}) as a function of the azimuthal field $B_{0y}$ for $\theta_c$ given by (\ref{eq101}). The left-hand panel shows three values of $\Delta \Omega$: $10^{-2}\,{\rm rad\,s^{-1}}$ (dot-dashed curve), $10^{-3/2}\,{\rm rad\,s^{-1}}$ (dashed), $10^{-1}\,{\rm rad\,s^{-1}}$ (solid), and $|k|R=2 \pi$. The right-hand panel shows three values of $|k|R$: $2 \pi$ (solid curve), $8 \pi$ (dashed curve) and $24\pi$ (dot-dashed curve) for $\Delta \Omega=10^{-1}\,{\rm rad\,s^{-1}}$. The instability is stabilized for magnetic fields above the critical value (\ref{eq53z}). } \label{fig3} \end{figure*} We now turn on the azimuthal magnetic field $B_{0y}$ and examine the growth time as a function of magnetic field strength. As before, we examine the instability when the growth time is quickest, given by (\ref{eq101}). In Figure \ref{fig3} we plot the growth time of the unstable solution of (\ref{eq42z}) as a function of $B_{0y}$. In the left-hand panel, we plot three values of $\Delta \Omega$: $10^{-2}\,{\rm rad\,s^{-1}}$ (dot-dashed curve), $10^{-3/2}\,{\rm rad\,s^{-1}}$ (dashed), and $10^{-1}\,{\rm rad\,s^{-1}}$ (solid) for $|k|R=2 \pi$. All remaining parameters correspond to those given in \S\ref{sec3}. 
The instability is present below a critical value of $B_{0y}$, at which it abruptly disappears. This panel shows that the critical value of $B_{0y}$ scales as $\Delta \Omega^2$. In the right-hand panel, we plot three values of $|k|R$: $2 \pi$ (solid curve), $8 \pi$ (dashed), and $24\pi$ (dot-dashed) for $\Delta \Omega=10^{-1}\,{\rm rad\,s^{-1}}$. This panel shows that the critical value of $B_{0y}$ is independent of the dimensionless wavenumber $|k|R$. Further exploration of the parameter space demonstrates that the critical value of $B_{0y}$ depends weakly on all other parameters except $R$. These findings suggest that the critical $B_{0y}$ scales as $B_{0y} \sim R^2 \Delta \Omega^2$. To obtain the proportionality factor, we assume that the turnover occurs when the vortex-cyclotron velocity satisfies $v_{vcy}^2=B_{0y}H_{c1}/4 \pi \rho_p\sim R^2 \Delta \Omega^2$. The critical azimuthal field is then \begin{eqnarray} B_{0ycrit}=9\times 10^{9}\, {\rm G }\left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right) \left(\frac{x_p}{0.1}\right) \left(\frac{R}{10^6\, {\rm cm}}\right)^2 \left(\frac{\Delta \Omega}{0.1\, {\rm rad\,s^{-1}}}\right)^2 \left(\frac{H_{c1}}{4 \times 10^{14}\, {\rm G}}\right)^{-1} \,. \label{eq53z} \end{eqnarray} This result agrees well with the critical values obtained numerically in Figure \ref{fig3}. Stable magnetic field configurations in a neutron star require that the toroidal field component exceed the poloidal component \citep{bra06,bra09}. For a typical neutron star magnetic field of $10^{12}\,{\rm G}$, the lag in the outer core must exceed $1\,{\rm rad\,s^{-1}}$ for instability to occur; see Equation (\ref{eq53z}). However, \citet{lin14} estimates $\Delta\Omega \lesssim 0.1 \,{\rm rad\,s^{-1}}$, so the toroidal field component will quench this instability. 
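As a check on (\ref{eq53z}), the critical field follows directly from setting $v_{vcy}^2 = R^2\Delta\Omega^2$; a numerical sketch with the fiducial parameters:

```python
import math

# Critical azimuthal field, Equation (eq53z): turnover when the
# vortex-cyclotron speed satisfies v_vcy ~ R * dOmega. CGS units.
rho, x_p = 3.0e14, 0.1        # g/cm^3, proton fraction
rho_p = x_p * rho             # proton mass density
R, dOmega = 1.0e6, 0.1        # cm, rad/s
H_c1 = 4.0e14                 # G, lower critical field

B_crit = 4 * math.pi * rho_p * (R * dOmega)**2 / H_c1
print(f"B_0ycrit = {B_crit:.1e} G")
```

The result, $\approx 9\times 10^9\,{\rm G}$, reproduces the prefactor in Equation (\ref{eq53z}).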
The instability identified here occurs when the wave vector for the perturbations has a component oriented parallel to the relative background flow, in this case the azimuthal direction. Therefore, the perturbations must be nonaxisymmetric for the inertial mode instability to operate. The instability is stabilized by a sufficiently large component of the magnetic field that is also oriented parallel to the relative flow. These generic properties for stabilizing two-stream inertial mode instabilities by magnetic stresses are also found in the later sections \S\ref{sec4ab} and \S\ref{sec4ba}. We now compare our findings with those of \citet{gla09}. In their paper, \citet{gla09} solved the governing equations (\ref{eq5}) and (\ref{eq6}) neglecting viscous and magnetic stresses. In contrast to the present plane wave analysis, \citet{gla09} solved for the unstable inertial modes in spherical geometry. By assuming a power-law radial dependence and an r-mode angular dependence for the modes, \citet{gla09} showed that the $l=m$ mode is unstable when \begin{eqnarray} m>\sqrt{\frac{\Omega_n}{2 x_p \Delta \Omega}}\,. \label{eq110f} \end{eqnarray} This instability condition agrees qualitatively with the result of this paper; see Equation (\ref{eq51z}). Because the instabilities are both solutions to the same governing equations perturbed about the same background, we expect a similar result for the instability condition. Because of the similar nature of the problem, and because we expect the stabilization of unstable inertial modes in two-fluid systems by the magnetic field to be generic, we expect that the instability studied in \citet{gla09} is also stabilized by the toroidal component of the magnetic field for realistic neutron star configurations, in which the toroidal component of the magnetic field is comparable to or larger than the poloidal field component; see {\it e.g.}\,, \citet{bra06} and \citet{bra09}. 
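For the fiducial parameters of \S\ref{sec3}, the condition (\ref{eq110f}) can be evaluated numerically; the following sketch (illustrative only) finds the smallest unstable multipole:

```python
import math

# Smallest unstable l = m mode from the instability condition (eq110f) of
# Glampedakis & Andersson, for the fiducial parameters of Section 3.
Omega, x_p, dOmega = 20 * math.pi, 0.1, 0.1   # rad/s, proton fraction, rad/s

m_threshold = math.sqrt(Omega / (2 * x_p * dOmega))
m_min = math.ceil(m_threshold)
print(f"instability requires m > {m_threshold:.1f}, i.e. m >= {m_min}")
```

Only rather high multipoles ($m \gtrsim 56$) are unstable at the fiducial lag, consistent with the short-wavelength character of the plane-wave condition (\ref{eq51z}).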
The instability in this section has been derived by approximating $\mathcal{B}_n=1-\mathcal{B}'_n=0$. Numerical solutions to the complete dispersion relation derived in \S\ref{sec2b} show that the stability criteria and growth times of the instability considered in this section are not significantly changed for the realistic neutron star parameters (\ref{eq32a}) and (\ref{eq32b}). Exploring the numerical solutions to the dispersion relation accounting for thermal activation, presented in \S\ref{secD}, we find that thermal activation of pinned vorticity does not significantly alter the instability. \subsubsection{A slow two-stream instability} \label{sec4ab} Exploring the solutions to the dispersion relation in \S\ref{sec2b} further, we find a second instability with a growth time of days. This instability is also stabilized by the toroidal component of the magnetic field $B_{0y}$. \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure3.pdf} \caption{\footnotesize Growth time of the unstable solution to (\ref{eq121g}) as a function of dimensionless wavenumber $k_y R$ for no magnetic field. Curves correspond to $\Delta\Omega=10^{-1}\,{\rm rad\,s^{-1}}$ (solid curve), $10^{-3/2}\,{\rm rad\,s^{-1}}$ (dashed curve), and $ 10^{-2}\,{\rm rad\,s^{-1}}$ (dot-dashed curve). At high wavenumbers, the instability is suppressed by viscous forces. } \label{fig4b} \end{figure*} To understand this instability, we explore the parameter space numerically and find that this instability occurs when the wavenumber is oriented in the azimuthal direction, and we take $k_x=k_z=0$. The poloidal field has no significant effect on the instability, and we take $B_{0z}=0$. Only the toroidal field $B_{0y}$ plays an essential role in this instability. We neglect the vortex tension and take $\nu_n=0$. 
The dispersion relation in \S\ref{sec2b} reduces to \begin{eqnarray} \left[ \omega_p - \Delta\Omega k_y R\left(1-\mathcal{B}_n'\right) \right] \left(\omega_p^2 + i \nu_{ee} k_y^2 \omega_p - k_y^2 v_{vcy}^2\right) \left( \omega_p^3 + B \omega_p^2 + C \omega_p + D \right)=0\,, \label{eq121g} \end{eqnarray} where \begin{eqnarray} B&=&-\Delta\Omega \left(1-\mathcal{B}'_n\right) k_y R + \frac{2 i \Omega_n }{x_p}\left(1 + x_p\right)\mathcal{B}_n + i \nu_{ee} k_y^2 \nonumber \,, \\ C&=&- 2 \Omega_n \mathcal{B}_n \nu_{ee} k_y^2 - v_{vcy}^2 k_y^2 -\frac{ 2 i \Omega_n \Delta\Omega }{x_p}\mathcal{B}_n k_y R- i \Delta\Omega \left(1-\mathcal{B}'_n \right) \nu_{ee} k_y^3 R \nonumber \,, \\ D&=& k_y^2 v_{vcy}^2 \left[ -2 i \Omega_n \mathcal{B}_n + \Delta\Omega \left(1-\mathcal{B}'_n \right) k_y R\right]\,, \end{eqnarray} and $v_{vcy}=\sqrt{H_{c1} B_{0y}/(4 \pi \rho_p)}$. The frequency in the frame rotating with the proton--electron fluid is related to the frequency in the inertial frame by \begin{eqnarray} \omega = \left( \Omega_n-\Delta\Omega\right) k_y R +\omega_p \,. \end{eqnarray} The cubic factor in (\ref{eq121g}) gives unstable modes. The instability is identified in the limit $v_{vcy}=\nu_{ee}=0$, reducing this factor to the product of $\omega_p$ and a quadratic in $\omega_p$. After separating out the real and imaginary parts, the unstable solution is \begin{eqnarray} \omega_p&=&-\frac{B_r}{2} - \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_r^2-B_i^2\right)^2+\left(2 B_r B_i-4C_i\right)^2}+\left(B_r^2-B_i^2\right)} \nonumber \\ &-& i \left[ \frac{B_i}{2} - \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_r^2-B_i^2\right)^2+\left(2 B_r B_i-4C_i\right)^2}-\left(B_r^2-B_i^2\right)} \right] \,, \label{eq126g} \end{eqnarray} where the subscripts $r$ and $i$ denote the real and imaginary components of $B$ and $C$ for $v_{vcy}=\nu_{ee}=0$.
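The closed form (\ref{eq126g}) can be verified numerically. The sketch below uses illustrative coefficients (not neutron star values), chosen so that $2B_rB_i-4C_i<0$, the branch for which the quoted expression is a root, and checks that it satisfies $\omega_p^2+B\omega_p+C=0$ with purely imaginary $C=iC_i$:

```python
import math

def omega_p(B_r, B_i, C_i):
    """Root of w^2 + B w + C = 0 with B = B_r + i B_i and C = i C_i,
    transcribed from the closed-form solution quoted in the text."""
    x = B_r**2 - B_i**2
    y = 2.0 * B_r * B_i - 4.0 * C_i
    mag = math.hypot(x, y)          # sqrt(x^2 + y^2)
    s = 2.0 * math.sqrt(2.0)
    return complex(-B_r / 2.0 - math.sqrt(mag + x) / s,
                   -(B_i / 2.0 - math.sqrt(mag - x) / s))

# Illustrative coefficients with 2 B_r B_i - 4 C_i = -8 < 0
B, C = complex(1.0, 2.0), complex(0.0, 3.0)
w = omega_p(B.real, B.imag, C.imag)
residual = abs(w * w + B * w + C)   # should vanish if w is a root
print(residual, w.imag)
```

A positive imaginary part of the returned root corresponds to a growing mode.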
The solution (\ref{eq126g}) is unstable for $C_i\left(C_i-B_r B_i\right)>0$, which gives \begin{eqnarray} \mathcal{B}_n \Omega_n \Delta \Omega k_y R \left[1 - \left(1-\mathcal{B}'_n\right)\left(1+x_p\right)\right]>0 \,. \end{eqnarray} For the neutron star parameters in \S\ref{sec3}, the solution (\ref{eq126g}) is unstable for $k_y>0$. The imaginary component in (\ref{eq126g}) is dominated by $C_i$, giving the growth time $\tau\approx \sqrt{2/|C_i|}=\sqrt{x_p/\left(\mathcal{B}_n \Omega_n \Delta \Omega k_y R\right)}$. Using the scaling (\ref{eq33}) for the dissipative mutual friction coefficient yields \begin{eqnarray} \tau &=& 0.2 \, \left(\frac{x_p}{0.1} \right)^{1/2}\left(\frac{k_y R}{2\pi }\right)^{-1/2} \left( \frac{\Omega_n}{20\pi \,{\rm rad\,s^{-1}}} \right)^{-1/2}\left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{1/2} \, {\rm days} \,. \label{eq128g} \end{eqnarray} The growth time shortens with increasing wavenumber according to (\ref{eq128g}) until viscous forces become important. Viscous stresses are negligible when the square of the imaginary component of $B$ in (\ref{eq126g}) is much less than $C_i$, or $\nu_{ee}^2 k_y^3 \ll 4 \Omega_n \Delta \Omega \mathcal{B}_n R/x_p$. Using the scalings (\ref{eq50}) and (\ref{eq33}) gives \begin{eqnarray} k_y R &\ll & 10^2 \, \left(\frac{x_p}{0.1} \right)^{1/3} \left( \frac{\Omega_n}{20\pi \,{\rm rad\,s^{-1}}} \right)^{1/3} \left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{-1/3} \left( \frac{R}{10^6 \,{\rm cm}} \right)^{4/3} \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1/3} \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{4/3} \,. \label{eq62z} \end{eqnarray} In Figure \ref{fig4b}, we plot the growth time of the unstable solution to (\ref{eq121g}) as a function of dimensionless wavenumber $k_y R$ for zero magnetic field, $v_{vcy}=0$. Three values of $\Delta \Omega$ are plotted: $10^{-1}$ (solid curve), $10^{-3/2}$ (dashed curve), and $10^{-2}$ (dot-dashed curve).
For low wavenumbers, defined by (\ref{eq62z}), the growth time is given by (\ref{eq128g}). At high wavenumbers, the instability is suppressed by viscous forces. \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure4.pdf} \caption{\footnotesize Growth time of the unstable solution to (\ref{eq121g}) as a function of azimuthal magnetic field. Curves correspond to $k_y=2\pi/R$ (solid curve), $20\pi/R$ (dashed curve), and $ 200\pi/R$ (dot-dashed curve). The instability is stabilized for magnetic fields above the critical value (\ref{eq102}), as for Figure \ref{fig3}. } \label{fig4a} \end{figure*} We now turn on the azimuthal magnetic field $B_{0y}$ and examine the growth time. Figure \ref{fig4a} shows the growth time of the unstable solution to (\ref{eq121g}) as a function of $B_{0y}$. Three values of $k_y$ are plotted: $2\pi/R$ (solid), $20\pi/R$ (dashed), and $200\pi/R$ (dot-dashed). For small $B_{0y}$, the growth time is nearly independent of $B_{0y}$ and given approximately by (\ref{eq128g}). At larger $B_{0y}$ the magnetic field begins to influence the growth time, which becomes independent of $k_y$. In this regime, the growth time is approximately $\tau \approx v_{vcy} x_p/\left(\mathcal{B}_n \Omega_n \Delta \Omega R\right)$. Using the mutual friction scaling (\ref{eq33}) yields \begin{eqnarray} \tau &=& 30 \, \left(\frac{H_{c1}}{3.8\times 10^{14} \,{\rm G}}\right)^{1/2} \left(\frac{B_{0y}}{10^{6} \,{\rm G}}\right)^{1/2} \left(\frac{x_p}{0.1}\right)^{1/2} \left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{-1/2} \nonumber \\ &\times& \left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right) \left( \frac{\Omega_n}{20\pi \,{\rm rad\,s^{-1}}} \right)^{-1}\left( \frac{R}{10^6 \,{\rm cm}} \right)^{-1} \, {\rm days} \,. \label{eq129g} \end{eqnarray} Comparing the growth times (\ref{eq128g}) and (\ref{eq129g}), we see the turnover between the two solutions for the growth times occurs at $v^2_{vcy}\sim \mathcal{B}_n \Omega_n \Delta \Omega R/\left(x_p k_y\right)$.
Using the scalings for the mutual friction coefficients derived in the pinning regime in \S\ref{sec3}, the turnover field is \begin{eqnarray} B_{0y} &=& 70 \, \left(\frac{H_{c1}}{3.8\times 10^{14} \,{\rm G}}\right)^{-1} \left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{-1/2} \left( \frac{k_y R}{2\pi } \right)^{-1} \left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{-1} \left( \frac{\Omega_n}{20\pi \,{\rm rad\,s^{-1}}} \right)\left( \frac{R}{10^6 \,{\rm cm}} \right)\, {\rm G} \,. \label{eq10g} \end{eqnarray} At a field of $\sim 10^{12}\,{\rm G}$, the growth time becomes infinite and the instability is quenched. This occurs at $v_{vcy}^2\sim R^2 \Delta \Omega^2$, identical to the result obtained in \S\ref{sec4aa}. The findings in \S\ref{sec4aa} and this section suggest that, in the absence of a magnetic field, inertial modes coupled by mutual friction become unstable when the background fluids rotate relative to each other. However, these instabilities are stabilized by the azimuthal (toroidal) magnetic field $B_{0y}$. We conclude that there are no instabilities in neutron stars when the neutron and proton--electron fluids rotate with respect to one another in realistic magnetic field configurations. These findings are verified by a thorough numerical search of the parameter space of the complete dispersion relation obtained using the equations derived in \S\ref{sec2b} and \S\ref{secAe}. For the instabilities considered in \S\ref{sec4aa} and \S\ref{sec4ab}, the mode must have a nonvanishing projection of the wavenumber in the azimuthal direction for the instability to operate. The unstable mode is stabilized for a sufficiently large component of the magnetic field oriented in the same direction. For realistic neutron star configurations, in which the toroidal field component is greater than or equal to the poloidal field component, the instabilities in \S\ref{sec4aa} and \S\ref{sec4ab} are stabilized by the toroidal field. The poloidal magnetic field has no effect on the instability.
\citet{and13} studied the unstable inertial modes in two fluids rotating with respect to each other and coupled by mutual friction, the same problem considered here but neglecting the magnetic field. \citet{and13} generalized the study of \citet{gla09} to consider arbitrary mutual friction coefficients, assuming a power-law radial dependence and an r-mode angular dependence for the modes, and focusing on the $l=m$ mode as before. In \S\ref{sec4aa}, we showed that the growth times are qualitatively similar to those found by \citet{gla09}. Similarly, the secular growth times for the instability studied in this section arise from the dissipative mutual friction in a manner similar to that of \citet{and13}. Because \citet{gla09} and \citet{and13} solve the same equations as those in this study but in a different coordinate system, we expect that the instabilities found in \citet{gla09} and \citet{and13} will also be stabilized by the toroidal magnetic field. In general, we find no unstable modes for neutron stars in which the condensates rotate relative to one another. In summary, we expect that all such instabilities are stabilized by the toroidal component of the magnetic field. In \S\ref{secD}, we show that thermal activation does not alter the results of this section. \subsubsection{\citet{lin12b,lin12a} Instabilities} \label{sec4ac} In \S\ref{sec4aa} and \S\ref{sec4ab}, we showed that all unstable inertial modes in condensates rotating relative to one another are stabilized by the toroidal magnetic field. This finding contradicts that of \citet{lin12a}, who reported an instability in the neutron superfluid when the pinned neutron vortices undergo slow slippage with respect to the rigid flux tube lattice due to thermal activation in the outer core. An analogous instability was reported in the neutron star crust, where the slow slippage of vortices with respect to the nuclear lattice was shown to be unstable \citep{lin12b}. 
We revisit the calculations of \citet{lin12a,lin12b} and show that these results are in error and that there is no instability. In \citet{lin12b,lin12a}, it was assumed that the pinned vortices in the neutron superfluid undergo slippage with respect to a rigid lattice due to thermal activation. In \citet{lin12b} the lattice is the crust; in \citet{lin12a} the lattice is the dense array of flux tubes in the outer core. To reproduce the latter calculation, we take the limit of infinite flux tube tension in the outer core, $v_{vc}\rightarrow\infty$. In this limit, the neutron superfluid decouples from the proton--electron fluid. The dispersion relation is found by solving (\ref{eq14}) and (\ref{eq23z}) and neglecting perturbations in the proton--electron fluid ($\delta \vbf_p=0$). The resulting dispersion relation is equivalent to that obtained for the neutron superfluid modes in the limit $v_{vc}\rightarrow\infty$. We take the vortex line tension to be negligible ($\Tbf_n=0$). After defining the wave-vector components $k_x=|k| \cos\phi\sin\theta$, $k_y=|k| \sin\theta\sin\phi$ and $k_z=|k| \cos\theta$, the dispersion relation is \begin{eqnarray} \omega_n^2&+&2 i \Omega_n \mathcal{B}_n \left(1+\cos ^2\theta \right) \omega_n-\left(2 \Omega_n \cos\theta\right)^2 \left[\left(1-\mathcal{B}_n'\right)^2+\mathcal{B}_n^2\right]=0\,, \label{eq109} \end{eqnarray} where $\omega_n$ is the frequency in the frame rotating with the neutron vortices, related to the frequency in the inertial frame by \begin{equation} \omega= \vbf_{Ln0}\cdot \kbf +\omega_n = \mathcal{B}_n \Delta \Omega k_x R+\left(\Omega_n - \Delta\Omega \mathcal {B}'_n \right)k_y R +\omega_n \,. 
\label{eq110} \end{equation} The solutions to (\ref{eq109}) are \begin{eqnarray} \omega_n &=& -i \Omega_n \mathcal{B}_n \left(1+\cos ^2\theta \right) \pm i \Omega_n \left\{ \mathcal{B}_n^2 \left(1+\cos ^2\theta \right) ^2 - \left(2 \cos\theta\right)^2 \left[\left(1-\mathcal{B}_n'\right)^2+\mathcal{B}_n^2\right] \right\}^{1/2} \,. \label{eq111} \end{eqnarray} The imaginary component of (\ref{eq111}) is never positive, so there are no unstable inertial modes. The error in \citet{lin12a,lin12b} can be traced to an incorrect perturbation of the neutron superfluid vorticity unit vector. We also revisit the assumption that the flux tube array provides an infinitely rigid pinning lattice for neutron vortices using the two-fluid magnetohydrodynamic theory in \S\ref{sec2}. Scaling arguments in \S\ref{sec3} demonstrate that the magnetic stresses only dominate the inertial forces for large wavenumber; see Equation (\ref{eq114a}). Therefore, the flux tube array only appears infinitely rigid to the neutron superfluid for large wavenumbers satisfying (\ref{eq114a}), and not for small wavenumbers with $k R \sim 1$. In \S\ref{secD}, we account for the effects of thermal activation. We find that no new instabilities are present. \subsection{Relative Flow along the Rotation Axis} \label{sec4b} In \S\ref{sec4a}, we studied instabilities that arise when condensates in the outer core rotate relative to one another. The condensates may also develop relative flow along the rotation axis, which may drive additional instabilities. We examine two possibilities under which this may occur: (1) the Ekman flow induced by the spin-down of the pulsar and (2) precession. First, we examine whether the spin-down of a pulsar can induce a flow along the rotation axis. If the magnetic field penetrates the entire star, the crust and the proton--electron fluid in the outer core are coupled by the magnetic field during spin-down.
However, if the magnetic field does not penetrate the outer core, the fluid there will respond via viscous forces. Rapidly rotating fluids respond to changes in the angular velocity of their container via {\it Ekman pumping}, wherein a secondary meridional flow transports angular momentum from viscous boundary layers into the interior on a timescale $E^{-1/2}\Omega_n^{-1}$, where $E$ is the Ekman number defined in (\ref{eq76}) (see, {\it e.g.}\,, \citealt{gre63,gre68,van10,van14}). The component of secondary flow along the rotation axis scales as $Ro E^{1/2} \Omega_n R$, where the Rossby number $Ro$ is a dimensionless angular velocity change of the container, typically the fractional increase in angular velocity for impulsive spin-up problems. For steady spin-down, the relevant timescale for the Rossby number is set by the external torque, and the velocity of the secondary flow scales as $ (\tau_{sd} \Omega_n)^{-1} E^{1/2} \Omega_n R$, where $\tau_{sd}$ is the spin-down time defined in (\ref{eq14z}). Compared with the rotational velocity of the star, the secondary flow along the rotation axis induced by Ekman pumping scales as \begin{eqnarray} Ro E^{1/2} & \sim & 10^{-18} \left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right)^{-1} \left(\frac{\Omega_n}{20\pi\, {\rm rad\,s^{-1}}}\right)^{-3/2} \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1/2}\left(\frac{\it{T}}{10^8 \, \rm{K}}\right) \left(\frac{R}{10^6\,{\rm cm}}\right) \,. \label{eq69z} \end{eqnarray} We show below that such a tiny Ekman flow cannot induce instability. The second possibility for developing relative flow along the rotation axis is precession of a neutron star. During precession, the neutron and proton--electron angular velocity vectors are misaligned, inducing a relative flow of the proton--electron fluid along the rotation axis of the neutron fluid that can be directly related to the wobble angle of the precession.
\citet{gla08} found an unstable mode in this context with a growth time of fractions of a second at small wavelengths; however, \citet{gla08} did not account for the magnetic field, which significantly modifies the instability. \citet{van08} included the magnetic field in their analysis. Assuming perfect pinning, they showed that the magnetic field stabilizes the instability. In the following sections, we revisit the instabilities driven by relative flow along the rotation axis. We distinguish two instabilities in this system: a two-stream instability and the Donnelly--Glaberson instability. The two-stream instability develops in both fluids and is stabilized by sufficiently large magnetic fields. This instability was studied by \citet{van08}. The second instability is the Donnelly--Glaberson instability, which is also driven by relative flow along the rotation axis, but only develops in the neutron superfluid and is unaffected by magnetic stresses. To investigate instabilities arising from relative flow along the rotation axis, we consider flows with nonzero $\Delta v_z$ in \S\ref{sec2b}. The only relevant component of the wave vector is along the vortex lines, and we take $k_x=k_y=0$. The neutron vortex line tension $\nu_n$ is retained because it plays an important role in the Donnelly--Glaberson instability.
Under these assumptions, the equations in \S\ref{sec2b} give the dispersion relation \begin{eqnarray} \left( \omega^3+A_+ \omega^2 + B_+ \omega+C_+\right)\left( \omega^3+A_- \omega^2 + B_- \omega+C_-\right)=0\,, \label{eq122g} \end{eqnarray} where \begin{eqnarray} A_\pm &=& \pm \frac{2\Omega_n}{x_p} \left(1 - x_p\right) \pm 2 \Delta \Omega+ i k_z^2 \nu_{ee} + \left[i \mathcal{B}_n \mp \left(1-\mathcal{B}'_n \right) \right] \left[\frac{2 \Omega_n}{x_p}+ \left(2 \Omega_n + k_z^2 \nu_n \pm k_z \Delta v_z \right) \right] \,, \nonumber \\ B_\pm &=& \left\{-\frac{2 \Omega_n}{x_p} + \left[i \mathcal{B}_n \mp \left(1-\mathcal{B}'_n\right) \right] \left[\mp \frac{2 \Omega_n}{x_p} \left(1 + x_p\right) \pm 2 \Delta \Omega + i k_z^2 \nu_{ee}\right] \right\} \left(2 \Omega_n + k_z^2 \nu_n \pm k_z \Delta v_z\right) - k_z^2 v_{vcz}^2 \,, \nonumber \\ C_\pm &=& -v_{vcz}^2 k_z^2 \left[i \mathcal{B}_n \mp\left(1-\mathcal{B}'_n\right)\right] \left(2 \Omega_n + k_z^2 \nu_n \pm k_z \Delta v_z\right) \,, \end{eqnarray} and $v_{vcz}=\sqrt{H_{c1} B_{0z}/(4 \pi \rho_p)}$ is the vortex-cyclotron wave speed. The `$+$' factor in (\ref{eq122g}) is identical to the dispersion relation obtained by \citet{van08} (see Appendix A therein), with the addition of the lag $\Delta \Omega$ and vortex tension $\nu_n$ terms. Analytic solutions to the cubics in (\ref{eq122g}) can be obtained but are cumbersome and uninformative, so we do not present them here. We now study the two distinct instabilities in this system in turn. \subsubsection{Two-stream instability} \label{sec4ba} To study the two-stream instability in this system, we approximate the mutual friction coefficients (\ref{eq32a}) and (\ref{eq32b}) with $\mathcal{B}_n=1-\mathcal{B}'_n=0$. This instability was studied by \citet{gla08} neglecting magnetic fields, and by \citet{van08} including magnetic fields.
To put this instability in context with additional results in this paper, we summarize the results of \citet{van08} here, expanding on the discussion of the role of viscosity and growth times. Exploring the instability numerically, we find that the vortex tension and lag are negligible, and we set $\Delta \Omega=\nu_n=0$. Assuming $\mathcal{B}_n=1-\mathcal{B}'_n=0$, the dispersion relation (\ref{eq122g}) reduces to \begin{eqnarray} \omega^2 \left[\omega^2+\left(B_+ + i B_i\right) \omega + C_+ \right]\left[\omega^2+\left(B_-+i B_i \right) \omega + C_-\right]=0\,, \label{eq122} \end{eqnarray} where \begin{eqnarray} B_\pm &=& \pm \frac{2\Omega_n}{x_p} \left(1 - x_p\right) \,, \nonumber \\ B_i&=& k_z^2 \nu_{ee} \,, \nonumber \\ C_\pm &=& -\frac{2 \Omega_n}{x_p} \left(2 \Omega_n \pm k_z \Delta v_z\right) - k_z^2 v_{vcz}^2 \,. \end{eqnarray} Separating out the real and imaginary parts, the unstable solutions to (\ref{eq122}) can be written \begin{eqnarray} \omega&=&-\frac{B_\pm}{2} \pm \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_\pm^2-B_i^2-4C_\pm\right)^2+\left(2 B_\pm B_i\right)^2}+\left(B_\pm^2-B_i^2-4C_\pm\right)} \nonumber \\ &-& i \left[ \frac{B_i}{2} - \frac{1}{2\sqrt{2}}\sqrt{\sqrt{\left(B_\pm^2-B_i^2-4C_\pm\right)^2+\left(2 B_\pm B_i\right)^2}-\left(B_\pm^2-B_i^2-4C_\pm\right)} \right] \,. \label{eq105f} \end{eqnarray} The solution is unstable for $C_\pm > 0$. Focusing on the `$-$' solution, we find (\ref{eq105f}) is unstable for wavenumbers in the range $k_-<k_z<k_+$ where \begin{eqnarray} k_\pm =\frac{\Omega_n}{v_{vcz}^2 x_p} \left[ \Delta v_z \pm \sqrt{ \Delta v_z^2 -4 x_p v_{vcz}^2 } \right] \,, \label{eq120a} \end{eqnarray} which has real and distinct bounds when \begin{eqnarray} \Delta v_z \geq 2 \sqrt{x_p} v_{vcz} \,. \label{eq2a} \end{eqnarray} This is the condition for instability, as found by \citet{van08}. 
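Since the `$-$' solution is unstable exactly where $C_->0$, the window (\ref{eq120a}) can be confirmed numerically. The sketch below uses illustrative order-unity parameters (not neutron star values) satisfying the threshold $\Delta v_z > 2\sqrt{x_p}\,v_{vcz}$:

```python
import math

def C_minus(k_z, Omega_n, x_p, dv_z, v_vcz):
    """C_- coefficient of the reduced dispersion relation quoted above."""
    return -(2.0 * Omega_n / x_p) * (2.0 * Omega_n - k_z * dv_z) - (k_z * v_vcz) ** 2

def k_bounds(Omega_n, x_p, dv_z, v_vcz):
    """Roots k_-, k_+ of C_- = 0; real and distinct when dv_z > 2 sqrt(x_p) v_vcz."""
    s = math.sqrt(dv_z ** 2 - 4.0 * x_p * v_vcz ** 2)
    pref = Omega_n / (x_p * v_vcz ** 2)
    return pref * (dv_z - s), pref * (dv_z + s)

# Illustrative parameters: dv_z = 1 exceeds 2 sqrt(0.1) ~ 0.63
Omega_n, x_p, v_vcz, dv_z = 1.0, 0.1, 1.0, 1.0
k_lo, k_hi = k_bounds(Omega_n, x_p, dv_z, v_vcz)
inside = C_minus(0.5 * (k_lo + k_hi), Omega_n, x_p, dv_z, v_vcz)
below = C_minus(0.5 * k_lo, Omega_n, x_p, dv_z, v_vcz)
above = C_minus(2.0 * k_hi, Omega_n, x_p, dv_z, v_vcz)
print(inside > 0, below < 0, above < 0)  # growth only inside the window
```

The sign pattern confirms that growth is confined to $k_-<k_z<k_+$.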
Viscous stresses are negligible when the viscous damping time is much longer than the vortex-cyclotron crossing time, \ie, $\nu_{ee} k_z^2 \ll v_{vcz} k_z$. Using the results in \S\ref{sec3}, this occurs for wavenumbers satisfying \begin{eqnarray} k_z R &\ll& 2\times 10^7 \left(\frac{H_{c1}}{4\times 10^{14} \,{\rm G}}\right)^{1/2} \left(\frac{B_0}{10^{12} \,{\rm G}}\right)^{1/2} \left(\frac{x_p}{0.1}\right)^{1/2} \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{2} \left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{-3/2} \left(\frac{R}{10^6\,{\rm cm}}\right) \label{eq77z} \,. \end{eqnarray} In this regime, we can approximate (\ref{eq105f}) by taking the limit $B_i^2 \ll B_\pm^2-4 C_\pm $. The unstable `$-$' solution is \begin{eqnarray} \omega&=&\Omega_n \left( \frac{1-x_p}{x_p} \right) + \frac{i}{x_p}\sqrt{2 \Omega_n x_p \Delta v_z k_z -\Omega_n^2\left(1+x_p\right)^2-x_p^2 v_{vcz}^2 k_z^2 } \nonumber \\ &-&\frac{i \nu_{ee} k_z^2 }{2} \left[1- \frac{ i \Omega_n \left(1-x_p\right)}{\sqrt{2 \Omega_n x_p \Delta v_z k_z -\Omega_n^2\left(1+x_p\right)^2-x_p^2 v_{vcz}^2 k_z^2 }}\right]\,. \label{eq119a} \end{eqnarray} The instability can be separated into two distinct regions depending on the sign of the quantity under the square root. For $k'_-<k_z<k'_+$ where \begin{eqnarray} k'_\pm =\frac{\Omega_n}{v_{vcz}^2 x_p} \left[ \Delta v_z \pm \sqrt{ \Delta v_z^2 -v_{vcz}^2 \left(1+x_p\right)^2} \right] \,, \label{eq120b} \end{eqnarray} the expression under the square root is positive and the second term in (\ref{eq119a}) is imaginary. The third term is negligible compared with the first and second, and the growth time is determined by the second term. For wavenumbers within the bounds given by (\ref{eq120a}) but not those given by (\ref{eq120b}), the second term is real, and the third term is imaginary and determines the growth time. These results agree with those obtained in Appendix B of \citet{van08}.
The instability criterion obtained by \citet{gla08} is recovered by taking $v_{vcz}\rightarrow 0$ in (\ref{eq119a}) and (\ref{eq120b}). We now estimate the instability condition in a neutron star using the typical neutron star parameters in \S\ref{sec3}. The instability condition (\ref{eq2a}) requires \begin{eqnarray} \frac{\Delta v_z}{\Omega_n R} & > & 10^{-2} \left(\frac{H_{c1}}{4\times 10^{14} \,{\rm G}}\right)^{1/2} \left(\frac{B_0}{10^{12} \,{\rm G}}\right)^{1/2}\left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{-1/2} \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)^{-1}\left(\frac{R}{10^6\,{\rm cm}}\right)^{-1} \,. \label{eq122b} \end{eqnarray} Therefore a relative velocity along the rotation axis greater than the vortex-cyclotron speed, or approximately one-hundredth of the equatorial velocity of the star, is required for instability. This critical velocity is too large to be achieved by Ekman pumping during spin-down, which only induces a relative flow of $10^{-18}$; see Equation (\ref{eq69z}). In a freely precessing neutron star in which the neutron condensate is strongly pinned to the flux tubes, the wobble angle is related to the relative flow along the rotation axis by \citep{gla08} \begin{eqnarray} \Delta v_z= \frac{ \theta_w \Omega_n R }{x_p}\,. \label{eq81z} \end{eqnarray} From (\ref{eq122b}) we find the critical wobble angle (in degrees) for instability is \begin{eqnarray} \theta_w & > & 0.06^{\circ} \left(\frac{H_{c1}}{4\times 10^{14} \,{\rm G}}\right)^{1/2} \left(\frac{B_0}{10^{12} \,{\rm G}}\right)^{1/2}\left(\frac{\rho}{3\times 10^{14} \,{\rm g\,cm^{-3}}}\right)^{-1/2} \left(\frac{x_p}{0.1}\right) \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)^{-1}\left(\frac{R}{10^6\,{\rm cm}}\right)^{-1} \,. \label{eq122z} \end{eqnarray} The strongest precession candidate, PSR B1828-11, has an estimated wobble angle of $3^\circ$ \citep{sta00,cut03,akg06,lin07}. 
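The thresholds (\ref{eq122b}) and (\ref{eq122z}) follow directly from the definition of $v_{vcz}$. A numerical sketch in CGS units, with the fiducial values quoted above, reproduces the quoted orders of magnitude:

```python
import math

# Fiducial CGS parameters from the scalings quoted above
H_c1  = 4e14            # G, lower critical field
B_0z  = 1e12            # G, poloidal field
rho   = 3e14            # g cm^-3, total mass density
x_p   = 0.1             # proton fraction
Omega = 20.0 * math.pi  # rad s^-1
R     = 1e6             # cm

v_vcz = math.sqrt(H_c1 * B_0z / (4.0 * math.pi * x_p * rho))  # vortex-cyclotron speed
dv_crit = 2.0 * math.sqrt(x_p) * v_vcz  # two-stream threshold from the condition above
ratio = dv_crit / (Omega * R)           # critical dv_z / (Omega_n R), ~1e-2
theta_w = math.degrees(x_p * ratio)     # critical wobble angle in degrees, ~0.06
print(ratio, theta_w)
```

Both numbers are comfortably below the $3^\circ$ wobble angle estimated for PSR B1828-11.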
Therefore this instability is likely to play a role in that object if the putative precession is real. We now estimate the growth time of the instability in a neutron star. For wavenumbers between the bounds (\ref{eq120b}), the second term in (\ref{eq119a}) yields the approximate growth time $\tau\approx\sqrt{x_p/2 \Delta v_z k_z \Omega_n}$. Using the neutron star numbers in \S\ref{sec3} this gives \begin{eqnarray} \tau & = & 1 \times 10^{-4} \, \left(\frac{\Delta v_z}{\Omega_n R} \right)^{-1/2}\left(\frac{k_z R}{10^4}\right)^{-1/2} \left(\frac{x_p}{0.1}\right)^{1/2} \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)^{1/2} \, {\rm s} \,. \label{eq15a} \end{eqnarray} For wavenumbers outside the bounds (\ref{eq120b}), but within the bounds (\ref{eq120a}), the third term in (\ref{eq119a}) yields the approximate growth time $\tau\approx 2/ \nu_{ee} k_z^2$. Using the scaling (\ref{eq50}), this gives \begin{eqnarray} \tau & = & 2 \times 10^2 \, \left(\frac{\rho}{3\times 10^{14} \, \rm{g\,cm^{-3}}}\right)^{-1} \left(\frac{x_p}{0.1}\right) \left(\frac{\it{T}}{10^8 \, \rm{K}}\right)^{2} \left(\frac{k_z R}{10^{1/2}}\right)^{-2} \, {\rm days} \,. \label{eq15b} \end{eqnarray} \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure5.pdf} \caption{Growth time for the unstable solution of (\ref{eq122}) as a function of the dimensionless wavenumber for $\Delta v_z/\Omega_n R=0.1$ (solid curve), $0.025$ (dashed curve), and $0.012$ (dot-dashed curve). } \label{fig5} \end{figure*} In Figure \ref{fig5}, we plot the growth time of the two-stream instability, as determined from the `$-$' solution (\ref {eq105f}). Curves are plotted for a poloidal field $B_{0z}=10^{12}\,{\rm G}$ and three values of relative flow along the rotation axis: $\Delta v_z/\Omega_n R=0.1$ (solid curve), $0.025$ (dashed curve), and $0.012$ (dot-dashed curve). Using the relation (\ref{eq81z}), these correspond to wobble angles of $0.57^\circ$, $0.14^\circ$ and $0.07^\circ$ respectively. 
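The fast-branch estimate (\ref{eq15a}) can be reproduced directly (a sketch assuming the solid-curve flow $\Delta v_z=0.1\,\Omega_n R$ of Figure \ref{fig5} and $k_z R=10^4$; both choices are illustrative):

```python
import math

x_p   = 0.1
Omega = 20.0 * math.pi   # rad s^-1
R     = 1e6              # cm
dv_z  = 0.1 * Omega * R  # cm s^-1, solid curve of the figure
k_z   = 1e4 / R          # cm^-1

# Fast-branch growth time tau ~ sqrt(x_p / (2 dv_z k_z Omega_n)), in seconds
tau = math.sqrt(x_p / (2.0 * dv_z * k_z * Omega))
print(tau)
```

The result is of order $10^{-4}\,{\rm s}$, consistent with the prefactor in (\ref{eq15a}).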
For wavenumbers between the bounds (\ref{eq120b}), the growth time is short and is given approximately by (\ref{eq15a}). For wavenumbers outside the bounds (\ref{eq120b}) but within the bounds (\ref{eq120a}), the growth time is determined by the viscosity and is given approximately by (\ref{eq15b}). The dot-dashed curve has $\Delta v_z< (1+x_p) v_{vcz}$, and the bounds (\ref{eq120b}) are imaginary. In this case, only the slow instability with growth time (\ref{eq15b}) operates. The instability window broadens as $\Delta v_z$ increases. Even for relatively large $\Delta v_z$, the instability window occurs for $k_z$ well below the bound (\ref{eq77z}), so viscosity has a negligible effect on the growth time. We now compare the characteristics of the instability studied in this section with those of the two-stream instability in \S\ref{sec4aa} and note some similar features. Both instabilities operate when a component of the wave vector for the perturbations is oriented parallel to the relative background flow. In this case, the wave vector is along the rotation axis, whereas in \S\ref{sec4aa} the wave vector requires an azimuthal component. In both cases, the instability is suppressed by a sufficiently large component of the magnetic field oriented in the same direction as the relative flow. We find that these are general characteristics of the two-stream instabilities considered in this paper. We emphasize again that the instability considered in this section is two-stream in nature and develops in both the neutron and proton--electron fluids. This distinguishes the instability from the Donnelly--Glaberson instability, as the latter only develops in the neutron superfluid. \subsubsection{Donnelly--Glaberson instability} \label{sec4bb} The second instability present in the dispersion relation (\ref{eq122g}) is the Donnelly--Glaberson instability. We find that, in contrast with other instabilities considered in this paper, it is not suppressed by the magnetic field.
This instability occurs only for nonvanishing mutual friction, $\mathcal{B}_n\neq 0$ and $1-\mathcal{B}'_n\neq 0$, and was not studied by \citet{van08}, who derived the general dispersion relation (\ref{eq122g}) but only studied instabilities for $\mathcal{B}_n = \left(1-\mathcal{B}'_n\right) = 0$. \citet{gla08} studied this instability, but did not consider the effects of the magnetic field. The Donnelly--Glaberson instability is present in rotating superfluids such as terrestrial helium II. The instability is excited when a normal fluid component, comprising thermal excitations, flows parallel to the vortex lines in the superfluid. For a single vortex in an external flow, the critical velocity is given by the product of the vortex line tension and the wavenumber of the perturbed Kelvin waves, \ie, $\Delta v_z>\nu_n k_z$. In the hydrodynamic limit for many vortices, the instability criterion becomes $\Delta v_z > 2\sqrt{2\Omega_n\nu_n}$ \citep{gla74,don05}. This instability has an analog in neutron stars, where the charged fluid component plays the role of the normal fluid component driving the instability \citep{sid08}. To study the Donnelly--Glaberson instability in this system, we consider the high wavenumber limit (\ref{eq114a}). In this limit, the magnetic stresses in the proton--electron fluid dominate the inertial forces, and the neutron fluid decouples from the proton--electron fluid. This is equivalent to considering the problem of a neutron fluid coupled to a rigid lattice; see also \S\ref{sec4ac}. In the limit (\ref{eq114a}), $v_{vcz}\rightarrow \infty $, and the dispersion relation for the neutron modes is a quadratic in $\omega$: \begin{eqnarray} \left(\omega+C_+\right)\left(\omega+C_-\right) = 0 \,, \label{eq85z} \end{eqnarray} where \begin{eqnarray} C_\pm = \left[\mp \left(1-\mathcal{B}'_n\right)+ i \mathcal{B}_n\right] \left[ \left( 2\Omega_n + \nu_n k_z^2\right) \pm k_z \Delta v_z\right] \,,
\label{eq86z} \end{eqnarray} Let us consider the stability of the `$-$' solution of (\ref{eq85z}), given by \begin{eqnarray} \omega= \left[\left(1-\mathcal{B}'_n\right)+ i \mathcal{B}_n\right] \left[k_z \Delta v_z - \left( 2\Omega_n + \nu_n k_z^2\right) \right]\,. \label{eq105} \end{eqnarray} Because the neutron fluid is decoupled from the proton--electron fluid in this limit, the viscosity does not affect the mode (\ref{eq105}). For instability, we require that the imaginary component of (\ref{eq105}) is positive, which occurs for $k_z$ in the range $k_-<k_z<k_+$, where \begin{eqnarray} k_\pm = \frac{1}{2\nu_n} \left( \Delta v_z \pm \sqrt{\Delta v_z^2 -8 \Omega_n \nu_n}\right) \,. \label{eq106b} \end{eqnarray} For two real, distinct bounds, we must have \begin{eqnarray} \Delta v_z > 2\sqrt{2 \Omega_n \nu_n} \,, \label{eq106c} \end{eqnarray} which recovers the condition for the Donnelly--Glaberson instability \citep{gla74}. We now estimate the instability condition in neutron stars using the numbers in \S\ref{sec3}. The condition (\ref{eq106c}) requires \begin{eqnarray} \frac{\Delta v_z}{\Omega_n R} & > & 2\times 10^{-8} \left(\frac{\nu_n}{4\times 10^{-3} \,{\rm cm^2\,s^{-1}}}\right)^{1/2} \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)^{-1/2}\left(\frac{R}{10^6\,{\rm cm}}\right)^{-1} \,. \label{eq122a} \end{eqnarray} The relative flow along the rotation axis induced by Ekman pumping during spin-down is $10^{-18}$; see Equation (\ref{eq69z}). Therefore this instability is not excited during spin-down. Next, we consider whether this relative velocity is likely to occur in a neutron star precessing with wobble angle $\theta_w$.
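Before doing so, we note that the threshold (\ref{eq106c}) and the bounds (\ref{eq106b}) are easily evaluated numerically (a sketch in CGS units; the flow $\Delta v_z=10^{-2}\,\Omega_n R$ is an illustrative value well above threshold, for which the lower bound approaches $2\Omega_n/\Delta v_z$):

```python
import math

nu_n  = 4e-3             # cm^2 s^-1, vortex line tension parameter
Omega = 20.0 * math.pi   # rad s^-1
R     = 1e6              # cm

dv_crit = 2.0 * math.sqrt(2.0 * Omega * nu_n)  # Donnelly-Glaberson threshold
ratio = dv_crit / (Omega * R)                  # ~2e-8 of the equatorial speed

dv_z = 1e-2 * Omega * R                        # illustrative flow, well above threshold
k_minus = (dv_z - math.sqrt(dv_z**2 - 8.0 * Omega * nu_n)) / (2.0 * nu_n)
print(ratio, k_minus * R, 2.0 * Omega * R / dv_z)
```

The last two printed values nearly coincide, showing that the lower bound of the unstable band is set by $2\Omega_n/\Delta v_z$ in this regime.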
Using the previous result to relate the wobble angle to the relative flow along the rotation axis (\ref{eq81z}), the critical wobble angle (in degrees) for instability is \begin{eqnarray} \theta_w & > & 10^{-7 \; \circ} \left(\frac{\nu_n}{4\times 10^{-3} \,{\rm cm^2\,s^{-1}}}\right)^{1/2} \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)^{-1/2}\left(\frac{R}{10^6\,{\rm cm}}\right)^{-1} \left(\frac{x_p}{0.1}\right) \,. \label{eq122f} \end{eqnarray} Therefore the Donnelly--Glaberson instability is likely to be relevant in precessing neutron stars with relatively small wobble angles. We now estimate the critical wavenumber and growth time for instability. Assuming the relative flow along the rotation axis greatly exceeds the critical velocity $\Delta v_z \gg 2\sqrt{2 \Omega_n \nu_n}$, we find the lower bound in (\ref{eq106b}) is approximately $2\Omega_n/\Delta v_z$, which gives \begin{eqnarray} k_- R > 2 \left(\frac{\Delta v_z}{\Omega_n R} \right)^{-1}\,. \label{eq123a} \end{eqnarray} This lower bound becomes larger as $\Delta v_z$ becomes smaller. For $\Delta v_z< 0.02 \, \Omega_n R $, the lower critical wavenumber for instability satisfies the assumption that the flux tube lattice appears infinitely rigid to the neutron superfluid, given by (\ref{eq114a}). The upper bound in (\ref{eq106b}) is approximately $\Delta v_z/\nu_n$, which is the critical wavenumber for instability on an individual vortex filament. Using the neutron star parameters in \S\ref{sec3}, we find \begin{eqnarray} k_+ R < 2\times 10^{16} \left(\frac{\Delta v_z}{\Omega_n R} \right) \left(\frac{\nu_n}{4\times 10^{-3} \,{\rm cm^2\,s^{-1}}}\right)^{-1} \left(\frac{\Omega_n }{20 \pi \,{\rm rad\,s^{-1}}}\right)\left(\frac{R}{10^6\,{\rm cm}}\right)^{2} \,. 
\label{eq123b} \end{eqnarray} The hydrodynamic approximation breaks down for wavenumbers greater than $2\pi /d_n$, which occurs for \begin{eqnarray} k_z R> 2\times 10^9 \left(\frac{\Omega}{20 \pi \,{\rm rad\,s^{-1}}}\right)^{1/2} \left(\frac{R}{10^6\,{\rm cm}}\right) \,. \label{eq94z} \end{eqnarray} Therefore the upper limit (\ref{eq123b}) is outside the range of validity of the hydrodynamic approximation. For wavenumbers greater than (\ref{eq94z}) and less than (\ref{eq123b}), individual vortex filaments are unstable to the Donnelly--Glaberson instability. The growth time for the Donnelly--Glaberson instability is $\tau\approx \left(\mathcal{B}_n \Delta v_z k_z\right)^{-1} $, yielding \begin{eqnarray} \tau &=& 2 \, \left(\frac{\Delta v_z}{\Omega_n R} \right)^{-1}\left(\frac{k_z R}{10^4}\right)^{-1} \left( \frac{\Delta \Omega_{crit}}{0.1 \,{\rm rad\,s^{-1}}} \right)\left(\frac{\tau_{sd}}{10\,{\rm kyr}}\right) \, {\rm days} \,. \label{eq10a} \end{eqnarray} \subsubsection{General Instability for Relative Flow along the Rotation Axis} \begin{figure*} \centering \includegraphics[width=.4\linewidth]{Figure6.pdf} \caption{Growth time for the unstable solution of (\ref{eq122g}) as a function of the dimensionless wavenumber for $\Delta v_z/\Omega_n R=0.1$ (solid curve), $0.025$ (dashed curve), $0.012$ (dot-dashed curve), and $0.003$ (dotted curve). The corresponding growth time for the Donnelly--Glaberson instability (\ref{eq86z}) is plotted as thin lines for comparison. At low wavenumbers the two-stream instability dominates. At larger wavenumbers the two-stream instability is stabilized by the magnetic field, and the Donnelly--Glaberson instability operates. The dotted curve has the smallest $\Delta v_z$, for which the two-stream instability is always stabilized by the magnetic field.} \label{fig6} \end{figure*} We now combine the results of \S\ref{sec4ba} and \S\ref{sec4bb} to study the unstable solution of the complete dispersion relation (\ref{eq122g}). 
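The coefficient in the hydrodynamic cutoff (\ref{eq94z}) above can be reproduced from the inter-vortex spacing. The sketch below assumes $d_n=\sqrt{\kappa/2\Omega_n}$ for a uniform vortex array, with $\kappa$ the quantum of circulation; the numerical value of $\kappa$ is an assumption here (it is not quoted in this section).

```python
import math

Omega_n = 20 * math.pi  # [rad s^-1]
R = 1e6                 # [cm]

# Inter-vortex spacing of a uniform array: d_n = sqrt(kappa / (2 Omega_n)),
# with kappa = h/(2 m_n) ~ 2e-3 cm^2 s^-1 (assumed value).
kappa = 1.99e-3
d_n = math.sqrt(kappa / (2 * Omega_n))

# Hydrodynamic cutoff k_z ~ 2 pi / d_n; compare with Eq. (94z): k_z R ~ 2e9
k_cut_R = 2 * math.pi / d_n * R
print(f"d_n = {d_n:.2e} cm, cutoff k_z R = {k_cut_R:.1e}")
```

The result, $k_z R \approx 1.6\times 10^{9}$, matches the quoted scaling $2\times 10^{9}$ for $\Omega = 20\pi\,{\rm rad\,s^{-1}}$ and $R=10^6\,{\rm cm}$.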
In Figure \ref{fig6}, the growth time of the unstable mode of (\ref{eq122g}) is plotted for the typical pulsar parameters in \S\ref{sec3}, taking the mutual friction coefficients (\ref{eq32a}) and (\ref{eq32b}) and poloidal magnetic field $B_{0z}=10^{12}\,{\rm G}$. The growth time is plotted as a function of the dimensionless wavenumber for four values of relative flow along the rotation axis: $\Delta v_z/\Omega_n R=0.1$ (heavy solid curve), $0.025$ (heavy dashed curve), $0.012$ (heavy dot-dashed curve), and $0.003$ (heavy dotted curve). Using the relation (\ref{eq81z}), these correspond to wobble angles of $0.57^\circ$, $0.14^\circ$, $0.07^\circ$, and $0.02^\circ$, respectively. Plotted for comparison in corresponding thin lines is the growth time of the unstable Donnelly--Glaberson solution (\ref{eq86z}). Comparing Figures \ref{fig5} and \ref{fig6}, we see that both instabilities in \S\ref{sec4ba} and \S\ref{sec4bb} manifest in the same unstable mode. At low wavenumbers, the two-stream instability studied in \S\ref{sec4ba} dominates the Donnelly--Glaberson instability. There is a negligible difference between the results for $\mathcal{B}_n$ and $1-\mathcal{B}'_n$ given by (\ref{eq32a}) and (\ref{eq32b}), and $\mathcal{B}_n=1-\mathcal{B}'_n=0$. At wavenumbers exceeding the upper bound (\ref{eq120a}), the two-stream instability is quenched, and the growth time of the unstable mode is determined by the Donnelly--Glaberson instability. The Donnelly--Glaberson instability is never suppressed by the magnetic field, and its growth time continues to shorten up to the wavenumber (\ref{eq94z}) at which the hydrodynamic approximation breaks down (not shown in Figure \ref{fig6}). At still higher wavenumbers, individual vortex filaments remain unstable, and the growth time continues to shorten until the critical wavenumber for instability of an individual vortex line (\ref{eq123b}) is reached. The instability window for the two-stream instability decreases as $\Delta v_z$ decreases.
For $\Delta v_z = 0.003\, \Omega_n R$ (dotted curve) and below, the two-stream instability does not operate, and the growth time is determined by the Donnelly--Glaberson instability. When the growth time of the unstable mode is determined by the Donnelly--Glaberson instability, the solution (\ref{eq105}) is a good approximation to the unstable solution of (\ref{eq122g}). A distinguishing characteristic of the Donnelly--Glaberson instability is that it only develops in the superfluid; there is no velocity perturbation in the normal fluid. Therefore the development of the Donnelly--Glaberson instability is not inhibited even when magnetic stresses in the proton--electron fluid become large. Figure \ref{fig6} demonstrates that, for relative flows along the rotation axis exceeding the critical value (\ref{eq122a}) or wobble angle (\ref{eq122f}), the outer core of a neutron star is always unstable to the Donnelly--Glaberson instability at high wavenumbers. This suggests that the dynamics in the outer core of precessing neutron stars may be very different from that in nonprecessing stars. In \S\ref{secD} we include the effects of thermal activation in the stability analysis. We find that only the Donnelly--Glaberson instability is significantly modified by this effect. The instability growth time is lengthened from days to decades in the regime where the hydrodynamic approximation is valid. The upper bound for instability (\ref{eq123b}) is also increased. The lower bound (\ref{eq123a}), critical velocity (\ref{eq122a}), and hence the critical wobble angle (\ref{eq122f}) are unchanged. A comparison of the growth time as a function of wavenumber with and without thermal activation is shown in Figure \ref{fig7}.
In this connection, transitions in and out of states of superfluid turbulence, driven by relative flow between the neutron and proton--electron fluids, have been hypothesized to be responsible for spin glitches \citep{and04,gla09,and13}. The purpose of this study was to determine whether magnetic stresses stabilize the candidate instabilities. A summary of our conclusions for instabilities driven by relative rotation and relative flow along the rotation axis is presented in Table \ref{tab1}. As a neutron star spins down due to the external torque, an angular velocity difference develops between the neutron and proton--electron fluids. Our chief conclusion is that this state possesses no unstable inertial modes. The two-stream instabilities in this system are stabilized by the toroidal magnetic field when the vortex-cyclotron speed becomes larger than the relative velocity of two condensates; this stabilization occurs for toroidal field strengths of order $10^{10}\,{\rm G}$ or higher. Calculations of magnetostatic neutron star equilibria give toroidal fields that are at least as strong as the poloidal component \citep{bra06,bra09}. Therefore we expect that a neutron star should be stable against the instabilities found by \citet{gla09} and \citet{and13}. \citet{lin12a,lin12b} erroneously found that relative flow between the neutron and proton--electron fluids is unstable when the neutron vortices slip with respect to the flux tubes through thermal activation. We have ascertained that this process is actually stable. If relative flow along the rotation axis is produced by precession, for example, there are two instabilities of possible relevance. At low wavenumber, a two-stream instability operates with growth time shorter than a second. This instability is suppressed by the magnetic field at high wavenumbers, as shown by \citet{van08}. However, at high wavenumbers the Donnelly--Glaberson instability occurs, which is not suppressed by the magnetic field. 
In contrast with the two-stream instabilities considered in this paper, the Donnelly--Glaberson instability only develops in the neutron superfluid and can therefore operate even when the magnetic stresses in the proton--electron fluid become large. In precessing neutron stars, the two-stream instability is excited for wobble angles of a fraction of a degree, while the Donnelly--Glaberson instability can be excited by wobble angles as small as $10^{-7}$ degrees. The wobble angle for PSR B1828-11 within a precession interpretation is much larger than these critical values \citep{sta00,cut03,akg06,lin07}, so hydrodynamic instability and turbulence could be important in this object. The local conditions for which instability occurs depend on the local density, and hence upon the dense-matter equation of state to some extent. For the two-stream instabilities studied in \S\ref{sec4aa} and \S\ref{sec4ab}, the critical value of the toroidal field is relatively low ($\sim 10^{10}$ G) for any reasonable equation of state. For the two-stream instability of \S\ref{sec4ba}, stability depends upon the proton fraction and the vortex-cyclotron velocity. Our estimate for the critical flow along the $z$ axis thus depends on density, but more importantly on the strength of the dipole field, which varies significantly from star to star. The critical velocity for the Donnelly--Glaberson instability (\S\ref{sec4bb}) depends only on the spin rate of the star and on the vortex line tension; the latter has a dependence on local density that is fairly weak. More accurate numbers for the onset of these instabilities could be obtained with a stellar-structure model, but we expect our estimates to be reliable. \begin{table} \begin{tabular}{llllll} \hline Relative flow & Instability type & Growth time & Toroidal field & Poloidal field & Ref.
\\ \hline $\Delta \Omega$ & Two-stream & Seconds & $R\Delta \Omega< v_{vcy}$ & No effect & \S\ref{sec4aa}\\ $\Delta \Omega$ & Two-stream & Days & $R\Delta \Omega< v_{vcy}$ & No effect & \S\ref{sec4ab}\\ $\Delta v_z$ & Two-stream & Seconds & No effect & $\Delta v_z < 2\sqrt{x_p} v_{vcz}$ & \S\ref{sec4ba}\\ $\Delta v_z$ & Donnelly--Glaberson & Days & No effect & No effect & \S\ref{sec4bb} \end{tabular} \caption{\footnotesize Summary of the results obtained in this paper. The first and second columns give the relative flow and type of instability. The third column gives the characteristic growth time. The fourth and fifth columns list the stabilization condition for the toroidal and poloidal fields. The final column gives the section of this paper in which the instability is studied. } \label{tab1} \end{table} Two-stream instabilities have also been reported for the Fermi-liquid entrainment coupling between the two fluids in the absence of mutual friction ($\rho_{np}\neq 0$ and $\Fbf_n=0$); see, {\it e.g.}\,, \citet{and04}. However, no instability is found for values of the entrainment parameter in the expected range for a realistic neutron star. We verify this result using a more general study reported in \S\ref{secAe}, and we do not present results for two-stream instabilities driven by entrainment coupling. Entrainment has a small effect on the stability analysis in this paper and modifies the inertial mode frequencies by a factor of less than two. It appears unlikely that the effects of compressibility and buoyancy would alter our conclusions. \citet{gus13} have noted that low-frequency thermal g-modes in a star composed of superfluid neutrons, superconducting protons, and normal electrons are unstable at low densities. However, \citet{pas16} have shown that this instability is weak and likely to only operate just below the crust in young neutron stars, where only very short wavelengths are unstable.
Deeper in the core, g-modes are restored by muon composition gradients and are expected to be stable \citep{kan14}. These modes have kilohertz frequencies and are unlikely to be modified by the magnetic stresses, entrainment, or mutual friction forces, which are much smaller than the buoyancy restoring forces. \citet{and04} showed that relative flow between two chemically coupled superfluids produces unstable sound modes. The instability is shown to operate in the outer core just below the crust, where the required relative flow is a significant fraction of the speed of sound of the neutron gas. Such a large relative flow, of order $10^8\,{\rm cm \, s^{-1}}$, is unlikely in a realistic neutron star, making this instability difficult to excite. For the conditions that prevail in a spinning-down neutron star, we conclude that the hydrodynamic flow is stable; in particular, hydrodynamic turbulence does not develop and therefore is not the cause of spin glitches as postulated by, {\it e.g.}\,, \citet{gla09} and \citet{and13}. Should an instability with sufficiently fast rise time exist, however, two challenges still remain. The first challenge is to demonstrate how the instability develops to produce a glitch. The second challenge is to demonstrate that this turbulent state ends and resets the system for the next glitch. Steadily driven classical systems susceptible to instability develop a quasi-steady turbulent cascade without global transient behavior; therefore, if the spin-down under an external torque is unstable, we expect the turbulent state to persist. On the other hand, hydrodynamic instabilities could indeed play a role in precessing neutron stars. The development of such an instability and its effects is an interesting problem for future study. For completeness, the full MHD theory, including entrainment, Kelvin waves, and magnetic field evolution, is presented in \S\ref{secAe}.
Stability is studied assuming constant density flow, and the dispersion relation is explored numerically over the relevant range of parameters in neutron stars. We find no additional instabilities. The effects of thermal activation are studied in \S\ref{secD}. The growth time of the Donnelly--Glaberson instability is lengthened to decades for wavenumbers in the hydrodynamic regime. The other instabilities considered in this paper are unaffected by thermal activation. \acknowledgments We thank Y. Levin for helpful comments on this work. This work was supported by NSF award AST-1211391 and NASA award NNX12AF88G.
\section{Introduction} \subsection{Basic definitions and notation} Unless otherwise stated, we shall use small letters such as $x$ to denote non-negative integers or elements of a set, capital letters such as $X$ to denote sets, and calligraphic letters such as $\mathcal{F}$ to denote \emph{families} (that is, sets whose members are sets themselves). Arbitrary sets and families are taken to be finite and may be the \emph{empty set} $\emptyset$. An \emph{$r$-element set} is a set of size $r$, that is, a set having exactly $r$ elements (also called members). The set of positive integers is denoted by $\mathbb{N}$. For $m, n \in \mathbb{N}$, the set $\{i \in \mathbb{N} \colon m \leq i \leq n\}$ is denoted by $[m,n]$. We abbreviate $[1,n]$ to $[n]$, and we take $[0]$ to be $\emptyset$. For a set $X$, the \emph{power set of $X$} (that is, $\{A \colon A \subseteq X\}$) is denoted by $2^X$, and the family $\{A \subseteq X \colon |A| = r\}$ is denoted by $X \choose r$. We say that a set $A$ \emph{$t$-intersects} a set $B$ if $A$ and $B$ have at least $t$ common elements. A family $\mathcal{A}$ is said to be \emph{$t$-intersecting} if for every $A, B \in \mathcal{A}$, $A$ $t$-intersects $B$. A $1$-intersecting family is also simply called an \emph{intersecting family}. A $t$-intersecting family $\mathcal{A}$ is said to be \emph{trivial} if its sets have at least $t$ common elements. For a family $\mathcal{F}$ and a $t$-element set $T$, the family $\{A \in \mathcal{F} \colon T \subseteq A\}$ is denoted by $\mathcal{F}(T)$ and called a \emph{$t$-star of $\mathcal{F}$}. Note that non-empty $t$-stars are trivial $t$-intersecting families. We say that $\mathcal{F}$ has the \emph{$t$-star property} if at least one of the largest $t$-intersecting subfamilies of $\mathcal{F}$ is a $t$-star of $\mathcal{F}$. 
\subsection{Intersecting families} One of the most popular endeavours in extremal set theory is that of determining the size or the structure of a largest $t$-intersecting subfamily of a given family $\mathcal{F}$. This originated in \cite{EKR}, which features the classical result referred to as the Erd\H os-Ko-Rado (EKR) Theorem. The EKR Theorem says that for $1 \leq t \leq r$ there exists an integer $n_0(r,t)$ such that for $n \geq n_0(r,t)$, the size of a largest $t$-intersecting subfamily of ${[n] \choose r}$ is ${n-t \choose r-t}$, meaning that ${[n] \choose r}$ has the $t$-star property. It also says that the smallest possible $n_0(r,1)$ is $2r$; among the various proofs of this fact (see \cite{EKR,Kat,HM,K,D,FF2}) there is a short one by Katona \cite{K}, introducing the elegant cycle method, and another one by Daykin \cite{D}, using the Kruskal-Katona Theorem \cite{Kr,Ka}. Note that ${[n] \choose r}$ itself is intersecting if $n < 2r$. The EKR Theorem inspired a sequence of results \cite{F_t1,W,FF,AK1} that culminated in the complete solution of the problem for $t$-intersecting subfamilies of ${[n] \choose r}$. The solution had been conjectured by Frankl \cite{F_t1}. It particularly tells us that the smallest possible $n_0(r,t)$ is $(t+1)(r-t+1)$; this was established by Frankl \cite{F_t1} and Wilson \cite{W}. Ahlswede and Khachatrian \cite{AK1} settled the case $n < (t+1)(r-t+1)$. The $t$-intersection problem for $2^{[n]}$ was solved by Katona \cite{Kat}. These are among the most prominent results in extremal set theory. The EKR Theorem inspired a wealth of results that establish how large a system of sets can be under certain intersection conditions; see \cite{DF,F,F2,HST,HT,Borg7,FTsurvey}. A set $B$ in a family $\mathcal{F}$ is called a \emph{base of $\mathcal{F}$} if for each $A \in \mathcal{F}$, $B$ is not a proper subset of $A$. The size of a smallest base of $\mathcal{F}$ is denoted by $\mu(\mathcal{F})$. 
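For very small parameters, the EKR bound ${n-1 \choose r-1}$ for intersecting families can be confirmed directly by exhaustive search. The sketch below is illustrative only (the helper name is ours, not from the literature) and is feasible only for tiny $n$ and $r$.

```python
from itertools import combinations
from math import comb

def max_intersecting_size(n, r):
    """Size of a largest intersecting subfamily of ([n] choose r),
    found by brute force over all subfamilies (tiny n, r only)."""
    family = [set(c) for c in combinations(range(1, n + 1), r)]
    best = 0
    # Enumerate every subfamily via a bitmask over the r-element sets.
    for mask in range(1 << len(family)):
        sub = [family[i] for i in range(len(family)) if mask >> i & 1]
        if all(a & b for a in sub for b in sub):  # pairwise intersecting
            best = max(best, len(sub))
    return best

# EKR: for n >= 2r, the maximum is C(n-1, r-1), attained by a 1-star.
for n, r in [(4, 2), (5, 2)]:
    assert max_intersecting_size(n, r) == comb(n - 1, r - 1)
print("EKR bound confirmed for (n, r) in {(4, 2), (5, 2)}")
```

For $(n,r)=(5,2)$, for instance, the star $\{\{1,2\},\{1,3\},\{1,4\},\{1,5\}\}$ attains the maximum of $4={4 \choose 1}$.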
The family of $r$-element sets in $\mathcal{F}$ is denoted by $\mathcal{F}^{(r)}$ and called the \emph{$r$th level of $\mathcal{F}$}. A family $\mathcal{F}$ is said to be \emph{hereditary} if for each $A \in \mathcal{F}$, all the subsets of $A$ are members of $\mathcal{F}$. In the literature, a hereditary family is also called an \emph{ideal}, a \emph{downset}, and an \emph{abstract simplicial complex}. Hereditary families are important combinatorial objects that have attracted much attention. The various interesting examples include the family of \emph{independent sets} of a \emph{graph} or a \emph{matroid}. The power set is the simplest example. In fact, by definition, a family is hereditary if and only if it is a union of power sets. Note that if $X_1, \dots, X_k$ are the bases of a hereditary family $\mathcal{H}$, then $\mathcal{H} = 2^{X_1} \cup \dots \cup 2^{X_k}$. The most basic result on intersecting families, also proved in the seminal EKR paper \cite{EKR}, is that the hereditary family $2^{[n]}$ has the $1$-star property. One of the central conjectures in extremal set theory, due to Chv\'atal \cite{Chv}, is that every hereditary family $\mathcal{H}$ has the $1$-star property. Several cases have been verified \cite{Chva, Sterboul, Schonheim, Miklos2, Miklos, Wang, Sn} (see also \cite{Chvatalsite}), many of which are captured by Snevily's result \cite{Sn} (\cite{Borg4b} provides a generalization obtained by means of a self-contained alternative argument). For $t \geq 2$, the $t$-star property fails already for $\mathcal{H} = 2^{[n]}$ with $n \geq t+2$; the largest $t$-intersecting subfamilies of $2^{[n]}$ were determined by Katona \cite{Kat}. However, for levels of hereditary families, we have the following generalization of the Holroyd--Talbot Conjecture \cite[Conjecture~7]{HT}. 
\begin{conj}[\cite{Borg}] \label{AK gen} If $1 \leq t \leq r$ and $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq (t+1)(r-t+1)$, then $\mathcal{H}^{(r)}$ has the $t$-star property. \end{conj} Note that if $\mathcal{H} = 2^{[n]}$, then $\mathcal{H}^{(r)} = {[n] \choose r}$ and $\mu(\mathcal{H}) = n$. It follows by the above-mentioned results for ${[n] \choose r}$ that the conjecture is true for $\mathcal{H}=2^{[n]}$ and that the condition $\mu(\mathcal{H}) \geq (t+1)(r-t+1)$ cannot be improved. The author verified the conjecture for $\mu(\mathcal{H})$ sufficiently large depending only on $r$ and $t$. \begin{theorem}[\cite{Borg}]\label{t int her} Conjecture~\ref{AK gen} is true if $\mu(\mathcal{H}) \geq (r-t){3r-2t-1 \choose t+1} + r$. \end{theorem} By \cite[Theorem~1.2 and Section~4.1]{Borgmaxprod}, Conjecture~\ref{AK gen} is also true if $\mu(\mathcal{H}) \geq (r-t)r{r \choose t} + r$. \subsection{Cross-intersecting families} A popular variant of the intersection problem described above is the cross-intersection problem. Two families $\mathcal{A}$ and $\mathcal{B}$ are said to be \emph{cross-$t$-intersecting} if each set in $\mathcal{A}$ $t$-intersects each set in $\mathcal{B}$. Cross-$1$-intersecting families are also simply called \emph{cross-intersecting families}. For $t$-intersecting subfamilies of a given family $\mathcal{F}$, the natural question to ask is how large they can be. For cross-$t$-intersecting families, two natural parameters arise: the sum and the product of sizes of the cross-$t$-intersecting families. The problem of maximizing the sum or the product of sizes of cross-$t$-intersecting subfamilies of a given family $\mathcal{F}$ has been attracting much attention (many of the results to date are referenced in \cite{Borg8, Borgmaxprod, BorgJLMS}). 
In this paper, we are concerned with the sum problem for the case where, as in Theorem~\ref{t int her}, $\mathcal{F}$ is a level of a hereditary family, but we also address the problem where the cross-$t$-intersecting families come from different levels and are non-empty. Thus, it is convenient to introduce the following notation. For two families $\mathcal{F}$ and $\mathcal{G}$, let \begin{center} $C(\mathcal{F},\mathcal{G},t) = \{(\mathcal{A},\mathcal{B}) \colon \emptyset \neq \mathcal{A} \subseteq \mathcal{F}, \emptyset \neq \mathcal{B} \subseteq \mathcal{G}, \mbox{$\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting}\},$ \end{center} \begin{center} $m(\mathcal{F},\mathcal{G},t) = \max\{|\mathcal{A}| + |\mathcal{B}| \colon (\mathcal{A},\mathcal{B}) \in C(\mathcal{F},\mathcal{G},t)\},$ \end{center} \begin{center} $M(\mathcal{F},\mathcal{G},t) = \{(\mathcal{A},\mathcal{B}) \in C(\mathcal{F},\mathcal{G},t) \colon |\mathcal{A}| + |\mathcal{B}| = m(\mathcal{F},\mathcal{G},t)\}$. \end{center} Hilton and Milner \cite{HM} showed that if $\mathcal{A}$ and $\mathcal{B}$ are non-empty cross-intersecting subfamilies of ${[n] \choose r}$ with $1 \leq r \leq n/2$, then $|\mathcal{A}| + |\mathcal{B}| \leq {n \choose r} - {n - r \choose r} + 1$. Equality holds if $\mathcal{A}$ consists of $[r]$ only and $\mathcal{B}$ consists of all the sets in ${[n] \choose r}$ that intersect $[r]$. In other words, if $1 = t \leq r \leq n/2$ and $\mathcal{F} = \mathcal{G} = {[n] \choose r}$, then $(\{[r]\},\{B \in \mathcal{G} \colon B \cap [r] \neq \emptyset\}) \in M(\mathcal{F}, \mathcal{G}, t)$. Frankl and Tokushige \cite{FT1} showed that the same holds in the more general case where $1 = t \leq r \leq s$, $n \geq r+s$, $\mathcal{F} = {[n] \choose r}$, and $\mathcal{G} = {[n] \choose s}$. Wang and Zhang \cite{WZ2} generalized this for $t \geq 1$. 
They proved that if $t < \min\{r,s\}$, $n \geq r+s-t+1$, ${n \choose r} \leq {n \choose s}$, $\mathcal{F} = {[n] \choose r}$, and $\mathcal{G} = {[n] \choose s}$, then $(\{[r]\}, \{B \in \mathcal{G} \colon |B \cap [r]| \geq t\}) \in M(\mathcal{F}, \mathcal{G}, t)$ (an independent proof for $r=s$ has been obtained by Frankl and Kupavskii \cite{FK}); they also determined the pairs in $M(\mathcal{F}, \mathcal{G}, t)$. It immediately follows that if we allow the cross-$t$-intersecting families $\mathcal{A}$ and $\mathcal{B}$ to be empty, then $|\mathcal{A}| + |\mathcal{B}|$ is maximum if $\mathcal{A} = \emptyset$ and $\mathcal{B} = {[n] \choose s}$. \subsection{The main result} As pointed out above, ${[n] \choose r} = \mathcal{H}^{(r)}$ with $\mathcal{H} = 2^{[n]}$. Thus, the theorem of Wang and Zhang deals with the $r$th level and the $s$th level of the hereditary family $2^{[n]}$. We characterize the pairs in $M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$ for any hereditary family $\mathcal{H}$ with $\mu(\mathcal{H})$ sufficiently large depending on $r$, $s$, and $t$. The paper \cite{Borg9} features the following two conjectures for $t=1$. \begin{conj}[Weak Form \cite{Borg9}] \label{conj1} If $1 \leq r \leq s$ and $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq r+s$, then for some $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},1)$, $\mathcal{A}$ is a trivial $1$-intersecting family. \end{conj} \begin{conj}[Strong Form \cite{Borg9}] \label{conj2} If $1 \leq r \leq s$ and $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq r+s$, then there exists a set $I$ in $\mathcal{H}$ such that $1 \leq |I| \leq r$ and for some $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},1)$, $\mathcal{A} = \mathcal{H}^{(r)}(I)$ and $\mathcal{B} = \{B \in \mathcal{H}^{(s)} \colon B \cap I \neq \emptyset\}$. 
\end{conj} Generalizing the above-mentioned result of Frankl and Tokushige \cite{FT1}, the main result in \cite{Borg9} tells us that for certain hereditary families $\mathcal{H}$, Conjecture~\ref{conj2} holds with $|I| = r$, in which case $\mathcal{A}$ consists of $I$ only and $\mathcal{B}$ consists of all the sets in $\mathcal{H}^{(s)}$ intersecting $I$. A question that arises immediately is whether this holds for every hereditary family. This is answered in the negative in \cite{Borg9} too; \cite[Proposition~2.1]{Borg9} tells us that for any $2 \leq r \leq s$ and $n \geq r+s$, there are hereditary families $\mathcal{H}$ such that $\mu(\mathcal{H}) = n$ and no $(\mathcal{A}, \mathcal{B})$ in $M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},1)$ satisfies Conjecture~\ref{conj2} with $|I| = r$. Throughout the paper, we take \begin{equation} c(r,s,t) = r + (s-t)\max \left\{2{s \choose t}, \; 2^r(r-t){r \choose t} + 1 \right\}. \nonumber \end{equation} Note that Conjecture~\ref{conj2} is significantly stronger than Conjecture~\ref{conj1}. In Section~\ref{nonemptysection}, we prove the following generalization for $M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$ with $\mu(\mathcal{H}) \geq c(r,s,t)$, hence verifying Conjecture~\ref{conj2} for $\mu(\mathcal{H}) \geq c(r,s,1)$. 
\begin{theorem} \label{nonemptyfam} If $1 \leq t \leq r \leq s$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq c(r,s,t)$, and $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$, then for some set $I$ in $\mathcal{H}$ with $t \leq |I| \leq r$, either \[\mbox{$\mathcal{A} = \mathcal{H}^{(r)}(I)$ and $\mathcal{B} = \{B \in \mathcal{H}^{(s)} \colon |B \cap I| \geq t\}$,}\] or \[\mbox{$r = s$, $t < |I|$, $\mathcal{A} = \{A \in \mathcal{H}^{(r)} \colon |A \cap I| \geq t\}$, and $\mathcal{B} = \mathcal{H}^{(s)}(I)$.}\] \end{theorem} It immediately follows that \begin{equation} (\mathcal{H}^{(r)}(I), \{B \in \mathcal{H}^{(s)} \colon |B \cap I| \geq t\}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t) \end{equation} (with $I$ as in Theorem~\ref{nonemptyfam}). Thus, the following holds. \begin{theorem} If $1 \leq t \leq r \leq s$ and $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq c(r,s,t)$, then \[m(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t) = |\mathcal{H}^{(r)}(I)| + |\{B \in \mathcal{H}^{(s)} \colon |B \cap I| \geq t\}|\] for some set $I$ in $\mathcal{H}$ with $t \leq |I| \leq r$. \end{theorem} \begin{problem} For $1 \leq t \leq r \leq s$, let $\eta(r,s,t)$ be the smallest integer $n$ such that for every hereditary family $\mathcal{H}$ with $\mu(\mathcal{H}) \geq n$, $(\mathcal{H}^{(r)}(I), \{B \in \mathcal{H}^{(s)} \colon |B \cap I| \geq t\}) \in M(\mathcal{H}^{(r)}, \mathcal{H}^{(s)},t)$ for some $I \in \mathcal{H}$ with $t \leq |I| \leq r$. What is the value of $\eta(r,s,t)$? \end{problem} By Theorem~\ref{nonemptyfam}, $\eta(r,s,t) \leq c(r,s,t)$. Clearly, for $\mathcal{H} = 2^{[n]}$, we have $\mu(\mathcal{H}) = n$, and $\mathcal{H}^{(r)}$ and $\mathcal{H}^{(s)}$ are cross-$t$-intersecting if and only if $n \leq r + s - t$. Thus, $\eta(r,s,t) \geq r + s - t + 1$. We conjecture that equality holds. \begin{conj} \label{nonemptyconj} For $1 \leq t \leq r \leq s$, $\eta(r,s,t) = r + s - t + 1$. 
\end{conj} A \emph{graph} $G$ is a pair $(V,\mathcal{E})$ with $\mathcal{E} \subseteq {V \choose 2}$, and a subset $S$ of $V$ is called an \emph{independent set of $G$} if $\{i,j\} \notin \mathcal{E}$ for every $i, j \in S$. Let $\mathcal{I}_G$ denote the family of all independent sets of a graph $G$. The EKR problem for $\mathcal{I}_G$ was introduced in \cite{HT} and inspired many results \cite{BH1,BH2,HST,HT,HK,Wr}. Many EKR-type results can be phrased in terms of independent sets of graphs; see \cite[page 2878]{BH2}. Clearly, $\mathcal{I}_G$ is a hereditary family. Kamat \cite{Kamat} conjectured that if $\mu(\mathcal{I}_G) \geq 2r$, and $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting subfamilies of ${\mathcal{I}_G}^{(r)}$, then $|\mathcal{A}| + |\mathcal{B}| \leq |{\mathcal{I}_G}^{(r)}|$. We suggest the following strong generalization. \begin{conj}\label{nonemptyconjcor} If $1 \leq t \leq r \leq s$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq r+s-t+1$, $\mathcal{A} \subseteq \mathcal{H}^{(r)}$, $\mathcal{B} \subseteq \mathcal{H}^{(s)}$, and $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, then $|\mathcal{A}| + |\mathcal{B}| \leq |{\mathcal{H}}^{(s)}|$. \end{conj} In other words, we conjecture that for $\mu(\mathcal{H}) \geq r+s-t+1$, if the cross-$t$-intersecting families $\mathcal{A}$ and $\mathcal{B}$ are allowed to be empty, then their sum of sizes is maximum if $\mathcal{A}$ is empty and $\mathcal{B}$ is $\mathcal{H}^{(s)}$. In Section~\ref{propertysection}, we establish some key properties of hereditary families that enable us to prove Theorem~\ref{nonemptyfam} and the following result. 
\begin{lemma}\label{nonemptylemma} If $1 \leq t \leq r \leq s$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq r+s-t+1$, $I$ is a set in $\mathcal{H}$ with $t \leq |I| \leq r$, $\mathcal{A} = \mathcal{H}^{(r)}(I)$, and $\mathcal{B} = \{B \in \mathcal{H}^{(s)} \colon |B \cap I| \geq t\}$, then $|\mathcal{A}| + |\mathcal{B}| \leq |\mathcal{H}^{(s)}|$, and equality holds only if $t= 1$ and $\mu(\mathcal{H}) = r+s$. \end{lemma} Lemma~\ref{nonemptylemma} is also proved in Section~\ref{propertysection}. It immediately gives us the following. \begin{theorem} If Conjecture~\ref{nonemptyconj} is true, then Conjecture~\ref{nonemptyconjcor} is true. \end{theorem} Together with Theorem~\ref{nonemptyfam}, Lemma~\ref{nonemptylemma} also immediately yields the following. \begin{theorem} \label{nonemptyfamcor} If $1 \leq t \leq r \leq s$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq c(r,s,t)$, $\mathcal{A} \subseteq \mathcal{H}^{(r)}$, $\mathcal{B} \subseteq \mathcal{H}^{(s)}$, and $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, then $(\mathcal{A},\mathcal{B}) = (\emptyset,\mathcal{H}^{(s)})$ or $r = s$ and $(\mathcal{A},\mathcal{B}) = (\mathcal{H}^{(r)},\emptyset)$. \end{theorem} Therefore, Conjecture~\ref{nonemptyconjcor} is true if $\mu(\mathcal{H}) \geq c(r,s,t)$, and hence Kamat's conjecture is true if $\mu(\mathcal{I}_G) \geq c(r,r,1)$. We mention that the analogous problem for cross-intersecting subfamilies of $\mathcal{H}$ is solved in \cite{Borg5}. We now start working towards proving Theorem~\ref{nonemptyfam} and Lemma~\ref{nonemptylemma}. \section{Key properties of hereditary families} \label{propertysection} Hereditary families exhibit undesirable phenomena; see, for example, \cite[Example~1]{Borg}. The complete absence of symmetry makes intersection problems like the ones described above very difficult to deal with. 
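The equality condition in Lemma~\ref{nonemptylemma} can be illustrated for the simplest hereditary family, $\mathcal{H}=2^{[n]}$. The sketch below (illustrative only; the helper name is ours) exhibits equality for $t=1$ and $\mu(\mathcal{H})=n=r+s$, and strict inequality once $\mu(\mathcal{H})>r+s$.

```python
from itertools import combinations

def check_nonempty_lemma(n, r, s, t, I):
    """For H = 2^[n], return (|A| + |B|, |H^(s)|) where
    A = H^(r)(I) and B = {B in H^(s) : |B ∩ I| >= t}."""
    ground = range(1, n + 1)
    Hr = [set(c) for c in combinations(ground, r)]
    Hs = [set(c) for c in combinations(ground, s)]
    A = [X for X in Hr if I <= X]            # the t-star of H^(r) at I
    B = [X for X in Hs if len(X & I) >= t]   # sets t-intersecting I
    return len(A) + len(B), len(Hs)

# t = 1, r = s = 2, I = {1}: equality at mu(H) = n = r + s = 4 ...
lhs, rhs = check_nonempty_lemma(4, 2, 2, 1, {1})
assert lhs == rhs   # 3 + 3 = 6 = |H^(2)|
# ... and strict inequality once mu(H) = 5 > r + s.
lhs, rhs = check_nonempty_lemma(5, 2, 2, 1, {1})
assert lhs < rhs    # 4 + 4 = 8 < 10
print("Lemma illustrated for H = 2^[n], n in {4, 5}")
```
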
Many of the well-known techniques in extremal set theory, such as the shifting technique (see \cite{F}), fail to work for hereditary families. The lemmas in this section and the next are the tools that will enable us to overcome such difficulties. The two results below establish the properties of hereditary families that are fundamental to our work. The first one is given by {\cite[Corollary~3.2]{Borg}}. \begin{lemma}[{\cite{Borg}}]\label{Spernercor} If $\mathcal{H}$ is a hereditary family and $0 \leq r \leq s \leq \mu(\mathcal{H}) - r$, then \[|\mathcal{H}^{(s)}| \geq \frac{{\mu(\mathcal{H}) - r \choose s - r}}{{s \choose s-r}}|\mathcal{H}^{(r)}|.\] \end{lemma} \begin{lemma}\label{mulemma} If $\mathcal{H}$ is a hereditary family, $X \subseteq Y$, $\mathcal{G}$ is the family $\{H \in \mathcal{H} \colon H \cap Y = X\}$, and $\mathcal{G} \neq \emptyset$, then \[\mu(\{G \backslash X \colon G \in \mathcal{G}\}) \geq \mu(\mathcal{H}) - |Y|.\] \end{lemma} \textbf{Proof.} Let $\mathcal{F} = \{G \backslash X \colon G \in \mathcal{G}\}$. Since $\mathcal{G} \neq \emptyset$, $\mathcal{F} \neq \emptyset$. Let $B$ be a base of $\mathcal{F}$ of size $\mu(\mathcal{F})$. Let $C = B \cup X$. Then $C \in \mathcal{G}$, and hence $C \in \mathcal{H}$. Let $D$ be a base of $\mathcal{H}$ such that $C \subseteq D$. Then $X \subseteq D$. Let $E = (D \backslash Y) \cup X$. Since $\mathcal{H}$ is hereditary and $E \subseteq D \in \mathcal{H}$, $E \in \mathcal{H}$. Since $E \cap Y = X$, $E \in \mathcal{G}$. Let $F = E \backslash X$. Then $F \in \mathcal{F}$. Since $C \subseteq D$ and $C \cap Y = E \cap Y = X$, $B \subseteq F$. Since $B$ is a base of $\mathcal{F}$, $B = F$. Thus, we have $\mu(\mathcal{F}) = |B| = |F| = |E| - |X| = |D \backslash Y| \geq |D| - |Y| \geq \mu(\mathcal{H}) - |Y|$.~\hfill{$\Box$} \\ For $X = Y$, the lemma above holds even if the family is not hereditary. 
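To make the content of Lemma~\ref{mulemma} concrete, the following Python sketch (our own illustration, not part of the proof; all helper names are ours) builds a small hereditary family as the downward closure of two bases, computes $\mu$ as the size of a smallest maximal set, and checks the inequality $\mu(\{G \backslash X \colon G \in \mathcal{G}\}) \geq \mu(\mathcal{H}) - |Y|$ on one choice of $X \subseteq Y$.

```python
from itertools import combinations

def hereditary_closure(bases):
    """All subsets of the given bases (a hereditary family)."""
    fam = set()
    for b in bases:
        for k in range(len(b) + 1):
            for s in combinations(sorted(b), k):
                fam.add(frozenset(s))
    return fam

def mu(fam):
    """Size of a smallest base (maximal set) of the family."""
    maximal = [f for f in fam if not any(f < g for g in fam)]
    return min(len(b) for b in maximal)

# A small hereditary family with bases {1,2,3,4} and {3,4,5,6,7}.
H = hereditary_closure([{1, 2, 3, 4}, {3, 4, 5, 6, 7}])

# Lemma mulemma: for X subset of Y with G = {H in H : H cap Y = X} non-empty,
# mu({G \ X : G in G}) >= mu(H) - |Y|.
X, Y = frozenset({3}), frozenset({1, 3})
G = [h for h in H if h & Y == X]
F = {frozenset(h - X) for h in G}
assert mu(F) >= mu(H) - len(Y)
```

Here $\mu(\mathcal{H}) = 4$, and the sets in $\mathcal{G}$ are the members of $\mathcal{H}$ containing $3$ but not $1$, so the inequality holds with equality.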
\begin{lemma} \label{mucor} If $\mathcal{F}$ is a family and $X$ is a set such that $\mathcal{F}(X) \neq \emptyset$, then \[\mu(\{F \backslash X \colon F \in \mathcal{F} ( X )\}) \geq \mu(\mathcal{F}) - |X|.\] \end{lemma} \textbf{Proof.} Let $\mathcal{G} = \{F \backslash X \colon F \in \mathcal{F} ( X )\}$. Let $B$ be a base of $\mathcal{G}$ of size $\mu(\mathcal{G})$. Then $B \cup X$ is a base of $\mathcal{F}$. Thus, $\mu(\mathcal{F}) \leq |B| + |X| = \mu(\mathcal{G}) + |X|$.~\hfill{$\Box$} \begin{lemma}\label{mainlemma2} If $0 \leq t \leq u \leq r$, $s \geq r + t - u$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq r + s - t$, and $T$ is a $t$-element subset of a $u$-element set $U$ such that $\mathcal{H}^{(r)}(U) \neq \emptyset$, then \[|\{H \in \mathcal{H}^{(s)} \colon H \cap U = T\}| \geq \frac{ {\mu(\mathcal{H}) - r \choose s + u - r - t} }{ {s - t \choose s + u - r - t} } |\mathcal{H}^{(r)}(U)|.\] \end{lemma} \textbf{Proof.} Let $\mathcal{S} = \{H \in \mathcal{H}^{(s)} \colon H \cap U = T\}$. Since $\mathcal{H}^{(r)}(U) \neq \emptyset$, $\mathcal{H}(U) \neq \emptyset$. Let $\mathcal{I} = \{H \backslash U \colon H \in \mathcal{H}(U)\}$. Since $\mathcal{H}$ is hereditary, $\mathcal{I}$ is hereditary. By Lemma~\ref{mucor}, $\mu(\mathcal{I}) \geq \mu(\mathcal{H}) - u$. Let $p = r - u$ and $q = s - t$. Since $\mu(\mathcal{H}) \geq r + s - t$, $\mu(\mathcal{I}) \geq r + s - t - u = p + q$. We have $0 \leq p \leq q \leq \mu(\mathcal{I}) - p$. Therefore, by Lemma~\ref{Spernercor}, \begin{gather} |\mathcal{I}^{(q)}| \geq \frac{ {\mu(\mathcal{I})-p \choose q - p} }{ {q \choose q-p} } |\mathcal{I}^{(p)}|. \label{mainlemma2.1} \end{gather} Clearly, $|\mathcal{I}^{(p)}| = |\mathcal{H}^{(r)}(U)|$. Consider any $A \in \mathcal{I}^{(q)}$. Since $A \cup T \subseteq A \cup U \in \mathcal{H}(U)$ and $\mathcal{H}$ is hereditary, $A \cup T \in \mathcal{H}$. Since $|A \cup T| = s$ and $(A \cup T) \cap U = T$, it follows that $A \cup T \in \mathcal{S}$. 
Thus, $|\mathcal{I}^{(q)}| \leq |\mathcal{S}|$. Therefore, by (\ref{mainlemma2.1}), \begin{align} |\mathcal{S}| &\geq \frac{ {\mu(\mathcal{I})-p \choose q - p} }{ {q \choose q - p} } |\mathcal{H}^{(r)}(U)| \geq \frac{ {(\mu(\mathcal{H}) - u) - (r - u) \choose (s-t) - (r-u)} }{ {s - t \choose (s-t) - (r-u)} } |\mathcal{H}^{(r)}(U)| = \frac{ {\mu(\mathcal{H}) - r \choose s + u - r - t} }{ {s - t \choose s + u - r - t} } |\mathcal{H}^{(r)}(U)|, \nonumber \end{align} as required.~\hfill{$\Box$}\\ \\ \textbf{Proof of Lemma~\ref{nonemptylemma}.} Let $t' = t-1$. For each $T \in {I \choose t'}$, let $\mathcal{S}_T = \{H \in \mathcal{H}^{(s)} \colon H \cap I = T\}$. Consider any $T \in {I \choose t'}$. We have $\mathcal{S}_T \cap \mathcal{B} = \emptyset$. Also, by Lemma~\ref{mainlemma2}, \[|\mathcal{S}_T| \geq \frac{ {\mu(\mathcal{H}) - r \choose s + |I| - r - t'} }{ {s - t' \choose s + |I| - r - t'} } |\mathcal{H}^{(r)}(I)| \geq \frac{ {s-t+1 \choose s + |I| - r - t + 1} }{ {s - t + 1 \choose s + |I| - r - t + 1}} |\mathcal{H}^{(r)}(I)| = |\mathcal{A}|,\] and equality holds throughout only if $\mu(\mathcal{H}) = r+s-t+1$. We have $|\mathcal{H}^{(s)}| \geq |\mathcal{B} \cup \bigcup_{T \in {I \choose t'}} \mathcal{S}_T| = |\mathcal{B}| + \sum_{T \in {I \choose t'}} |\mathcal{S}_T| \geq |\mathcal{B}| + {|I| \choose t'}|\mathcal{A}|\geq |\mathcal{A}| + |\mathcal{B}|$, and equality holds throughout only if $\mu(\mathcal{H}) = r+s-t+1$ and $t' = 0$. The result follows.~\hfill{$\Box$} \section{Proof of Theorem~\ref{nonemptyfam}}\label{nonemptysection} If a set $X$ $t$-intersects each set in a family $\mathcal{A}$, then we call $X$ a \emph{$t$-transversal of $\mathcal{A}$}. \begin{lemma}\label{transversalemma1} If $X$ is a $t$-transversal of a family $\mathcal{A}$, then \[|\mathcal{A}| \leq {|X| \choose t} |\mathcal{A}(T)|\] for some $T \in {X \choose t}$. \end{lemma} \textbf{Proof.} Let $\mathcal{X} = {X \choose t}$. 
Let $T \in {X \choose t}$ such that $|\mathcal{A}(I)| \leq |\mathcal{A}(T)|$ for each $I \in \mathcal{X}$. Since $|A \cap X| \geq t$ for each $A \in \mathcal{A}$, we clearly have $\mathcal{A} = \bigcup_{I \in \mathcal{X}} \mathcal{A}(I)$. Thus, $|\mathcal{A}| = \left| \bigcup_{I \in \mathcal{X}}\mathcal{A}(I) \right| \leq \sum_{I \in \mathcal{X}} |\mathcal{A}(I)| \leq \sum_{I \in \mathcal{X}}|\mathcal{A}(T)| = |\mathcal{X}| |\mathcal{A}(T)| = {|X| \choose t} |\mathcal{A}(T)|$.~\hfill{$\Box$} \begin{lemma} \label{transversalemma2} If $X$ is a $t$-transversal of a family $\mathcal{A}$, $T$ is a set of size $t$, and $T \nsubseteq X$, then \[\mathcal{A}(T) = \bigcup_{x \in X \backslash T} \mathcal{A}(T \cup \{x\}).\] \end{lemma} \textbf{Proof.} Obviously, $\bigcup_{x \in X \backslash T} \mathcal{A}(T \cup \{x\}) \subseteq \mathcal{A}(T)$. For each $A \in \mathcal{A}$, we have \[t \leq |A \cap X| = |A \cap (X \cap T)| + |A \cap (X \backslash T)| \leq t-1 + |A \cap (X \backslash T)|\] (as $|T| = t$ and $T \nsubseteq X$), and hence $|A \cap (X \backslash T)| \geq 1$. Thus, for each $A \in \mathcal{A}(T)$, we have $a \in A$ for some $a \in X \backslash T$, and hence $A \in \mathcal{A}(T \cup \{a\}) \subseteq \bigcup_{x \in X \backslash T} \mathcal{A}(T \cup \{x\})$. Therefore, we have $\mathcal{A}(T) \subseteq \bigcup_{x \in X \backslash T} \mathcal{A}(T \cup \{x\}) \subseteq \mathcal{A}(T)$. The result follows.~\hfill{$\Box$} \begin{lemma} \label{transversalemma3} If $\mathcal{A}$ and $\mathcal{B}$ are non-empty cross-$t$-intersecting families such that $\mathcal{A}$ is $r$-uniform, $\mathcal{B}$ is $s$-uniform, and $\mathcal{B}$ is not a trivial $t$-intersecting family, then there exist $B, X \in \mathcal{B}$ such that \[|\mathcal{A}| \leq s{s \choose t} |\mathcal{A}(T \cup \{x\})|\] for some $T \in {B \choose t}$ and some $x \in X \backslash T$. 
\end{lemma} \textbf{Proof.} Since $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, each set in $\mathcal{A}$ is a $t$-transversal of $\mathcal{B}$, and each set in $\mathcal{B}$ is a $t$-transversal of $\mathcal{A}$. Let $B \in \mathcal{B}$. By Lemma~\ref{transversalemma1}, $|\mathcal{A}| \leq {|B| \choose t} |\mathcal{A}(T)| = {s \choose t}|\mathcal{A}(T)|$ for some $T \in {B \choose t}$. Since $\mathcal{B}$ is not a trivial $t$-intersecting family, $T \nsubseteq X$ for some $X \in \mathcal{B}$. By Lemma~\ref{transversalemma2}, $\mathcal{A}(T) = \bigcup_{x \in X \backslash T} \mathcal{A}(T \cup \{x\})$, so $|\mathcal{A}(T)| \leq \sum_{x \in X \backslash T} |\mathcal{A}(T \cup \{x\})|$. Let $x^* \in X \backslash T$ such that $|\mathcal{A}(T \cup \{x\})| \leq |\mathcal{A}(T \cup \{x^*\})|$ for each $x \in X \backslash T$. Let $Y = T \cup \{x^*\}$. Thus, $|\mathcal{A}(T)| \leq \sum_{x \in X \backslash T} |\mathcal{A}(Y)| = |X \backslash T| |\mathcal{A}(Y)| \leq s |\mathcal{A}(Y)|$, and hence $|\mathcal{A}| \leq {s \choose t}s|\mathcal{A}(Y)|$.~\hfill{$\Box$} \begin{lemma} \label{transversalemma4} If $1 \leq t \leq r$, $\mathcal{H}$ is a hereditary family with $\mu(\mathcal{H}) \geq 2r-t$, $\emptyset \neq \mathcal{A} \subseteq \mathcal{H}^{(r)}$, $\mathcal{B}$ is a non-empty $s$-uniform family that is not a trivial $t$-intersecting family, and $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, then there exists a $t$-element set $T$ such that \[|\mathcal{A}| < \frac{s(r-t)}{\mu(\mathcal{H})-r}{s \choose t} |\mathcal{H}^{(r)}(T)|\] and $T \subseteq B$ for some $B \in \mathcal{B}$. \end{lemma} \textbf{Proof.} By Lemma~\ref{transversalemma3}, there exist $B, X \in \mathcal{B}$ such that $|\mathcal{A}| \leq s{s \choose t} |\mathcal{A}(T \cup \{x\})|$ for some $T \in {B \choose t}$ and some $x \in X \backslash T$.
Since $\mathcal{A} \neq \emptyset$, it follows that $\mathcal{A}(T \cup \{x\}) \neq \emptyset$, so $\mathcal{H}^{(r)}(T \cup \{x\}) \neq \emptyset$. Let $\mathcal{G} = \{H \in \mathcal{H}^{(r)} \colon H \cap (T \cup \{x\}) = T\}$. We have $|\mathcal{A}(T \cup \{x\})| \leq |\mathcal{H}^{(r)}(T \cup \{x\})| \leq \frac{r-t}{\mu(\mathcal{H})-r} |\mathcal{G}|$ by Lemma~\ref{mainlemma2}. Since $|\mathcal{H}^{(r)}(T)| = |\mathcal{G}| + |\mathcal{H}^{(r)}(T \cup \{x\})| > |\mathcal{G}|$, we obtain $|\mathcal{A}(T \cup \{x\})| < \frac{r-t}{\mu(\mathcal{H})-r} |\mathcal{H}^{(r)}(T)|$. Since $|\mathcal{A}| \leq s{s \choose t} |\mathcal{A}(T \cup \{x\})|$, the result follows.~\hfill{$\Box$} \\ We now settle a few calculations so that in the formal proof of the theorem we can focus on the combinatorial argument. \begin{prop}\label{calc} If $1 \leq t \leq r \leq s$, $(r,s) \neq (t,t)$, and $n \geq c(r,s,t)$, then \begin{flalign*} \mbox{(i) } \; &\frac{r(s-t)}{n - s} {r \choose t} < \frac{1}{2}. &\nonumber \\ \mbox{(ii) } \; &{s \choose t} \leq \frac{1}{2} \frac{{n-r \choose s-r}}{{s-t \choose s-r}} \mbox{ if } r < s. &\nonumber \end{flalign*} \end{prop} \textbf{Proof.} By straightforward induction, $2^a \geq 2a$ for every positive integer $a$. Since $t \leq r \leq s$ and $(r,s) \neq (t,t)$, either $t < r$ or $t = r < s$. If $t < r$, then, since $n \geq 2^r(r-t)(s-t){r \choose t} + r + s - t$, we have $n > 2r(s-t){r \choose t} + s$, which yields (i). If $t = r < s$, then, since $n \geq 2(s-t){s \choose t} + r$, we have $n \geq 2(s-t){t+1 \choose t} + t = 2(t+1)(s-t) + t > 2t(s-t) + s = 2r(s-t){r \choose t} + s$ (as $r = t$), which yields (i). Suppose $s > r$. Then $s > t$. Since $n \geq 2(s-t){s \choose t} + r$, we have $n - r > s-t > 0$ and ${s \choose t} \leq \frac{1}{2} \left( \frac{n-r}{s-t} \right)$. 
Thus, ${s \choose t} \leq \frac{1}{2} \prod_{i=0}^{s-r-1} \left( \frac{n-r-i}{s-t-i} \right) = \frac{1}{2} \frac{{n-r \choose s-r}}{{s-t \choose s-r}}$, which confirms (ii).~\hfill{$\Box$} \\ \\ \textbf{Proof of Theorem~\ref{nonemptyfam}.} Let $n = c(r,s,t)$. Let $\mathcal{A}$ and $\mathcal{B}$ be as in the theorem.\medskip \textit{Case 1: $\mathcal{A}$ is a trivial $t$-intersecting family.} Let $I = \bigcap_{A \in \mathcal{A}} A$, $\mathcal{C} = \mathcal{H}^{(r)}(I)$, and $\mathcal{D} = \{H \in \mathcal{H}^{(s)} \colon |H \cap I| \geq t\}$. Then $t \leq |I| \leq r$, $I \in \mathcal{H}$ (as $\mathcal{H}$ is hereditary), and $\mathcal{A} \subseteq \mathcal{C}$. Suppose $|I| = r$. Then $\mathcal{A} = \{I\}$ and, since $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, $\mathcal{B} \subseteq \mathcal{D}$. Since $\{I\}$ and $\mathcal{D}$ are cross-$t$-intersecting, and since $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$, we obtain $\mathcal{B} = \mathcal{D}$, as required. Now suppose $|I| < r$. Let $\mathcal{A}' = \{A \backslash I \colon A \in \mathcal{A}\}$, $\mathcal{I} = \{H \backslash I \colon H \in \mathcal{H}(I)\}$, and $r' = r-|I|$. Then $\mathcal{A}' \subseteq \mathcal{I}^{(r')}$, $\mathcal{I}$ is hereditary, and, by Lemma~\ref{mucor}, $\mu(\mathcal{I}) \geq \mu(\mathcal{H}) - |I|$. By the definition of $I$, $\bigcap_{E \in \mathcal{A}'} E = \emptyset$. Thus, $\mathcal{A}'$ is not a trivial $1$-intersecting family. For each $i \in \{0\} \cup [t-1]$, let $\mathcal{B}_i = \{B \in \mathcal{B} \colon |B \cap I| = i\}$. Let $\mathcal{B}_t = \{B \in \mathcal{B} \colon |B \cap I| \geq t\}$. Then $\mathcal{B} = \bigcup_{i=0}^t \mathcal{B}_i$. Let $J = \{i \in \{0\} \cup [t-1] \colon \mathcal{B}_i \neq \emptyset\}$. Suppose $J = \emptyset$. Then $\mathcal{B} = \mathcal{B}_t$. Hence $\mathcal{B} \subseteq \mathcal{D}$. 
Thus, as required, we obtain $\mathcal{A} = \mathcal{C}$ and $\mathcal{B} = \mathcal{D}$, because $\mathcal{A} \subseteq \mathcal{C}$, $\mathcal{C}$ and $\mathcal{D}$ are cross-$t$-intersecting, and $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$. We now show that indeed $J = \emptyset$. Suppose $J \neq \emptyset$. Consider any $j \in J$. For any $S \in {I \choose j}$, let $\mathcal{B}_{j,S} = \{B \in \mathcal{B}_j \colon B \cap I = S\}$. Then $\mathcal{B}_j = \bigcup_{S \in {I \choose j}} \mathcal{B}_{j,S}$. Let $\mathcal{S}_j = \{S \in {I \choose j} \colon \mathcal{B}_{j,S} \neq \emptyset\}$. Since $\mathcal{B}_j \neq \emptyset$, $\mathcal{S}_j \neq \emptyset$. Consider any $S \in \mathcal{S}_j$. Let $\mathcal{B}_{j,S}' = \{B \backslash S \colon B \in \mathcal{B}_{j,S}\}$, $\mathcal{H}_{j,S} = \{H \in \mathcal{H} \colon H \cap I = S\}$, $\mathcal{J}_{j,S} = \{H \backslash S \colon H \in \mathcal{H}_{j,S}\}$, $s_j = s - j$, and $t_j = t - j$. Then $\emptyset \neq \mathcal{B}_{j,S}' \subseteq {\mathcal{J}_{j,S}}^{(s_j)}$, $\mathcal{J}_{j,S}$ is hereditary, and, by Lemma~\ref{mulemma}, \[\mu(\mathcal{J}_{j,S}) \geq \mu(\mathcal{H}) - |I| > n - r \geq 2(s-t){s \choose t} \geq 2s(s-t) \geq 2s > 2s_j - t_j\] (note that $s > t$ as $t \leq |I| < r \leq s$). Since $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, $\mathcal{A}'$ and $\mathcal{B}_{j,S}'$ are cross-$t_j$-intersecting. Since $t_j \geq 1$ and $\mathcal{A}'$ is not a trivial $1$-intersecting family, $\mathcal{A}'$ is not a trivial $t_j$-intersecting family. By Lemma~\ref{transversalemma4}, there exists a $t_j$-element set $X_{j,S}$ such that \[|\mathcal{B}_{j,S}'| < \frac{r'(s_j-t_j)}{\mu(\mathcal{J}_{j,S}) - s_j} {r' \choose t_j} |{\mathcal{J}_{j,S}}^{(s_j)}(X_{j,S}) |\] and $X_{j,S} \subseteq E_{j,S}$ for some $E_{j,S} \in \mathcal{A}'$. We have $|\mathcal{B}_{j,S}'| = |\mathcal{B}_{j,S}|$. Let $T_{j,S} = S \cup X_{j,S}$. 
Then $|{\mathcal{J}_{j,S}}^{(s_j)}(X_{j,S})| = |{\mathcal{H}_{j,S}}^{(s)}(T_{j,S})|$. Thus, \begin{align} |\mathcal{B}_{j,S}| &< \frac{r'(s_j-t_j)}{\mu(\mathcal{J}_{j,S}) - s_j} {r' \choose t_j}|{\mathcal{H}_{j,S}}^{(s)}(T_{j,S})| \leq \frac{(r-|I|)(s-t)}{\mu(\mathcal{H}) - |I| + j - s}{r-|I| \choose t-j}|{\mathcal{H}_{j,S}}^{(s)}(T_{j,S})|. \nonumber \end{align} Since $\mathcal{A}'$ and $\mathcal{B}_{j,S}'$ are cross-$t_j$-intersecting, we have $r' \geq t_j$, that is, $r - |I| \geq t - j$. Since $0 \leq j \leq t-1$, $t \leq |I| \leq r-1$, and $\mu(\mathcal{H}) \geq n$, we therefore have \begin{align} |\mathcal{B}_{j,S}| &< \frac{(r-t)(s-t)}{n + t - r - s} {r-j \choose t-j} |{\mathcal{H}_{j,S}}^{(s)}(T_{j,S})| \leq \frac{1}{2^r} |{\mathcal{H}_{j,S}}^{(s)}(T_{j,S})| \nonumber \end{align} as $n \geq (r-t)(s-t)2^r{r \choose t} + r + s - t \geq (r-t)(s-t)2^r{r-j \choose t-j} + r + s - t$. Let $j^* \in J$ and $S^* \in \mathcal{S}_{j^*}$ such that for each $j \in J$, $|{\mathcal{H}_{j,S}}^{(s)} ( T_{j,S} )| \leq |{\mathcal{H}_{j^*,S^*}}^{(s)} ( T_{j^*,S^*} )|$ for each $S \in \mathcal{S}_j$. We have \begin{align} |\mathcal{B}| &= |\mathcal{B}_t| + \sum_{j \in J} |\mathcal{B}_j| \leq |\mathcal{D}| + \sum_{j \in J} \sum_{S \in \mathcal{S}_j}|\mathcal{B}_{j,S}| < |\mathcal{D}| + \sum_{j \in J} \sum_{S \in \mathcal{S}_j} \frac{1}{2^r} |{\mathcal{H}_{j,S}}^{(s)} ( T_{j,S} ) | \nonumber \\ &\leq |\mathcal{D}| + \sum_{j \in J} \sum_{S \in \mathcal{S}_j} \frac{1}{2^r} |{\mathcal{H}_{j^*,S^*}}^{(s)} ( T_{j^*,S^*} ) | \leq |\mathcal{D}| + \frac{1}{2^r} |{\mathcal{H}_{j^*,S^*}}^{(s)} ( T_{j^*,S^*} ) |\sum_{j \in J} \sum_{S \in \mathcal{S}_j} 1 \nonumber \end{align} and $\sum_{j \in J} \sum_{S \in \mathcal{S}_j} 1 = \sum_{j \in J} |\mathcal{S}_j| < \sum_{j = 0}^{|I|} {|I| \choose j} = 2^{|I|} \leq 2^{r-1}$. Thus, \begin{equation} |\mathcal{B}| < |\mathcal{D}| + \frac{1}{2}|{\mathcal{H}_{j^*,S^*}}^{(s)} ( T_{j^*,S^*} )|. 
\label{18} \end{equation} For convenience, let $j = j^*$ and $S = S^*$. Let $B' \in \mathcal{B}_{j,S}'$. Recall that $\mathcal{A}'$ and $\mathcal{B}_{j,S}'$ are cross-$t_j$-intersecting, so $B'$ is a $t_j$-transversal of $\mathcal{A}'$. By Lemma~\ref{transversalemma1}, $|\mathcal{A}'| \leq {|B'| \choose t_j}|\mathcal{A}'(X^*)|$ for some $X^* \in {B' \choose t_j}$. Thus, we have \begin{align} 0 < |\mathcal{A}| &= |\mathcal{A}'| \leq {s-j \choose t-j}|\mathcal{I}^{(r')}(X^*)| \leq {s \choose t}|\mathcal{I}^{(r')}(X^*)|. \label{20} \end{align} Let $\mathcal{K} = \{E \backslash X^* \colon E \in \mathcal{I} ( X^* )\}$, $p = r' - |X^*|$, and $q = s_j - |X^*|$. We have $p = r - |I| - t_j = r - |I| - t + j \leq r-t-1$ and $q = s_j - t_j = s - t \geq r-t \geq p+1$. Since $\mathcal{I}$ is hereditary, $\mathcal{K}$ is hereditary. Since $|\mathcal{I}(X^*)| \geq |\mathcal{I}^{(r')}(X^*)|$, $|\mathcal{I}(X^*)| > 0$ by (\ref{20}). Thus, by Lemma~\ref{mucor}, $\mu(\mathcal{K}) \geq \mu(\mathcal{I}) - |X^*| \geq \mu(\mathcal{H}) - |I| - t_j \geq n - |I| - t_j > r+s - |I| - t_j = p + s > p + q$. By Lemma~\ref{Spernercor}, \begin{equation} |\mathcal{K}^{(q)}| \geq \frac{{\mu(\mathcal{K}) - p \choose q-p}}{{q \choose q-p}}|\mathcal{K}^{(p)}| = |\mathcal{K}^{(p)}|\prod_{i = 0}^{q-p-1} \frac{\mu(\mathcal{K}) - p - i}{q-i} \geq |\mathcal{K}^{(p)}|\left(\frac{\mu(\mathcal{K}) - p}{q}\right)^{q-p}. \nonumber \end{equation} Since $q - p \geq 1$ and $\mu(\mathcal{K}) \geq n - |I| - t_j = n - r + p \geq p + 2(s-t){s \choose t} = p + 2q{s \choose t}$, $|\mathcal{K}^{(q)}| \geq 2{s \choose t}|\mathcal{K}^{(p)}|$. Thus, since $|\mathcal{K}^{(p)}| = |\mathcal{I}^{(r')}(X^*)|$ and $|\mathcal{K}^{(q)}| = |\mathcal{I}^{(s_j)}(X^*)|$, \begin{equation} {s \choose t} |\mathcal{I}^{(r')}(X^*)| \leq \frac{1}{2}|\mathcal{I}^{(s_j)}(X^*)|. \label{22} \end{equation} Let $\mathcal{L} = \mathcal{H}^{(|I| + s_j)}(I \cup X^*)$. Then $\mathcal{I}^{(s_j)}(X^*) = \{H \backslash I \colon H \in \mathcal{L}\}$. 
Let $\mathcal{L}' = \{L \backslash (I \backslash S) \colon L \in \mathcal{L}\}$. Since $\mathcal{H}$ is hereditary, $\mathcal{L}' \subseteq \mathcal{H}$. For each $H \in \mathcal{L}'$, we have $|H| = s_j + |I| - (|I|-|S|) = s$, $H \cap I = S$, and $S \cup X^* \subseteq H$. Thus, $\mathcal{L}' \subseteq {\mathcal{H}_{j,S}}^{(s)}(S \cup X^*)$. Let $T_1 = S \cup X^*$. We have $|\mathcal{I}^{(s_j)}(X^*)| = |\mathcal{L}| = |\mathcal{L}'| \leq |{\mathcal{H}_{j,S}}^{(s)}(T_1)|$. Together with (\ref{20}) and (\ref{22}), this gives us \begin{equation} |\mathcal{A}| \leq \frac{1}{2}|{\mathcal{H}_{j,S}}^{(s)}(T_1)|. \label{24} \end{equation} Let $T_2 = T_{j,S}$. Let $\mathcal{E}$ be a member of $\{{\mathcal{H}_{j,S}}^{(s)}(T_1), {\mathcal{H}_{j,S}}^{(s)}(T_2)\}$ of maximum size. Recall that above we set $j = j^*$ and $S = S^*$. By (\ref{18}) and (\ref{24}), \begin{equation} |\mathcal{A}| + |\mathcal{B}| < \frac{1}{2}|{\mathcal{H}_{j,S}}^{(s)}(T_1)| + |\mathcal{D}| + \frac{1}{2}|{\mathcal{H}_{j,S}}^{(s)}(T_2)| \leq |\mathcal{D}| + |\mathcal{E}|. \label{25} \end{equation} Let \[X' = \left\{ \begin{array}{ll} X^* & \mbox{if $\mathcal{E} = {\mathcal{H}_{j,S}}^{(s)}(T_1)$;}\\ X_{j,S} & \mbox{if $\mathcal{E} = {\mathcal{H}_{j,S}}^{(s)}(T_2)$.} \end{array} \right.\] Let $F = I \cup X'$. Let $\mathcal{F} = \mathcal{H}^{(r)}(F)$ and $\mathcal{G} = \mathcal{D} \cup \mathcal{E}$. If $X' = X^*$, then, since $|\mathcal{F}| = |\mathcal{H}^{(|I|+r')}( I \cup X^* )| = |\mathcal{I}^{(r')}( X^* )|$, $|\mathcal{F}| > 0$ by (\ref{20}). If $X' = X_{j,S}$, then, since $X_{j,S} \subseteq E_{j,S} \in \mathcal{A}'$, we have $F \subseteq I \cup E_{j,S} \in \mathcal{A}$, and hence $I \cup E_{j,S} \in \mathcal{F}$. Therefore, $\mathcal{F} \neq \emptyset$. By (\ref{25}), $\mathcal{G} \neq \emptyset$. For each $G \in \mathcal{D}$, $|G \cap F| \geq |G \cap I| \geq t$. 
For some $i \in [2]$, $\mathcal{E} = {\mathcal{H}_{j,S}}^{(s)}(T_i)$ and $T_i = S \cup X'$; thus, for each $G \in \mathcal{E}$, $|G \cap F| \geq |T_i \cap F| = |S| + |X'| = j + t_j = t$. For every $G \in \mathcal{G}$ and every $H \in \mathcal{F}$, $|G \cap H| \geq |G \cap F|$, so $|G \cap H| \geq t$. Thus, $\mathcal{F}$ and $\mathcal{G}$ are cross-$t$-intersecting. For each $H \in \mathcal{E}$, $|H \cap I| = |S| = j < t$. Thus, $\mathcal{D} \cap \mathcal{E} = \emptyset$, and hence $|\mathcal{G}| = |\mathcal{D}| + |\mathcal{E}|$. Bringing all the pieces together, we have that $\emptyset \neq \mathcal{F} \subseteq \mathcal{H}^{(r)}$, $\emptyset \neq \mathcal{G} \subseteq \mathcal{H}^{(s)}$, $\mathcal{F}$ and $\mathcal{G}$ are cross-$t$-intersecting, and, by (\ref{25}), \[|\mathcal{A}| + |\mathcal{B}| < |\mathcal{G}| < |\mathcal{F}| + |\mathcal{G}|,\] contradicting $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$.\medskip \textit{Case 2: $\mathcal{A}$ is not a trivial $t$-intersecting family.} If $t = s$, then $t=r=s$ and $n = r = 2s-t$. If $t < s$, then $n > 2s$. Thus, $\mu(\mathcal{H}) \geq 2s-t$. By Lemma~\ref{transversalemma4}, there exists a $t$-element set $T_{\mathcal{B}}$ such that \begin{equation} |\mathcal{B}| < \frac{r(s-t)}{\mu(\mathcal{H}) - s} {r \choose t} |\mathcal{H}^{(s)}(T_{\mathcal{B}})|. \label{14} \end{equation} Suppose $r < s$. Let $D \in \mathcal{B}$. Since $\mathcal{A}$ and $\mathcal{B}$ are cross-$t$-intersecting, $D$ is a $t$-transversal of $\mathcal{A}$. By Lemma~\ref{transversalemma1}, \begin{equation} |\mathcal{A}| \leq {|D| \choose t}|\mathcal{A}(T_D)| \leq {s \choose t}|\mathcal{H}^{(r)} (T_D)| \label{15} \end{equation} for some $T_D \in {D \choose t}$. Let $\mathcal{G} = \{H \backslash T_D \colon H \in \mathcal{H}( T_D )\}$. Then $\mathcal{G}$ is hereditary. 
Since $0 < |\mathcal{A}| \leq {s \choose t}|\mathcal{H}^{(r)}(T_D)| \leq {s \choose t}|\mathcal{H}(T_D)| = {s \choose t}|\mathcal{G}|$, $\mathcal{G} \neq \emptyset$. Thus, by Lemma~\ref{mucor}, $\mu(\mathcal{G}) \geq \mu(\mathcal{H}) - |T_D| = \mu(\mathcal{H}) - t$. By Lemma~\ref{Spernercor}, \[|\mathcal{G}^{(s-t)}| \geq \frac{{\mu(\mathcal{G}) - (r-t) \choose (s-t) - (r-t)}}{{s-t \choose (s-t) - (r-t)}}|\mathcal{G}^{(r-t)}| = \frac{{\mu(\mathcal{G}) + t - r \choose s-r}}{{s-t \choose s-r}}|\mathcal{G}^{(r-t)}|.\] Clearly, $|\mathcal{H}^{(r)} ( T_{D} )| = |\mathcal{G}^{(r-t)}|$ and $|\mathcal{H}^{(s)} ( T_{D} )| = |\mathcal{G}^{(s-t)}|$. Let $T' \in \mathcal{H}^{(t)}$ such that $|\mathcal{H}^{(s)}( T )| \leq |\mathcal{H}^{(s)}( T' )|$ for all $T \in \mathcal{H}^{(t)}$. Since $\mathcal{A} \neq \emptyset$, $|\mathcal{H}^{(r)} ( T_{D} )| > 0$ by (\ref{15}). Since $\mathcal{H}$ is hereditary and $T_D$ is a $t$-element subset of every member of $\mathcal{H}^{(r)}(T_{D})$, we have $T_D \in \mathcal{H}^{(t)}$, and hence $|\mathcal{H}^{(s)}(T_{D})| \leq |\mathcal{H}^{(s)}(T')|$. Thus, we have \begin{align} 0 < \frac{{\mu(\mathcal{H}) - r \choose s-r}}{{s-t \choose s-r}}|\mathcal{H}^{(r)} ( T_{D} )| &\leq \frac{{\mu(\mathcal{G}) + t - r \choose s-r}}{{s-t \choose s-r}}|\mathcal{H}^{(r)} ( T_{D} )| = \frac{{\mu(\mathcal{G}) + t - r \choose s-r}}{{s-t \choose s-r}}|\mathcal{G}^{(r-t)}| \nonumber \\ &\leq |\mathcal{G}^{(s-t)}| = |\mathcal{H}^{(s)} ( T_{D} )| \leq |\mathcal{H}^{(s)}(T')|. \label{16a} \end{align} Thus, $\mathcal{H}^{(s)}(T') \neq \emptyset$. Since $\mathcal{H}$ is hereditary and every set in $\mathcal{H}^{(s)}(T')$ has an $r$-element subset containing $T'$, $\mathcal{H}^{(r)}(T') \neq \emptyset$. By (\ref{14}), $|\mathcal{H}^{(s)}(T_{\mathcal{B}})| > 0$. Thus, $T_{\mathcal{B}} \in \mathcal{H}^{(t)}$ as $\mathcal{H}$ is hereditary and $T_{\mathcal{B}}$ is a $t$-element subset of every set in $\mathcal{H}^{(s)}(T_{\mathcal{B}})$. 
Hence \begin{equation} |\mathcal{H}^{(s)}(T_{\mathcal{B}})| \leq |\mathcal{H}^{(s)}(T')|. \label{16b} \end{equation} We have \begin{align}|\mathcal{A}| + |\mathcal{B}| &< {s \choose t} |\mathcal{H}^{(r)}(T_D)| + \frac{r(s-t)}{\mu(\mathcal{H}) - s} {r \choose t} |\mathcal{H}^{(s)}(T_{\mathcal{B}})| \quad \mbox{(by (\ref{14}) and (\ref{15}))} \nonumber \\ &< \frac{1}{2} \frac{{\mu(\mathcal{H}) - r \choose s-r}}{{s-t \choose s-r}}|\mathcal{H}^{(r)}(T_{D})| + \frac{1}{2} |\mathcal{H}^{(s)}(T_{\mathcal{B}})| \quad \mbox{(by Proposition~\ref{calc} (i) and (ii))} \nonumber \\ &\leq \frac{1}{2}|\mathcal{H}^{(s)}(T')| + \frac{1}{2}|\mathcal{H}^{(s)}(T')| \quad \mbox{(by (\ref{16a}) and (\ref{16b}))} \nonumber \\ &= |\mathcal{H}^{(s)}(T')| < |\mathcal{H}^{(r)}(T')| + |\mathcal{H}^{(s)}(T')|,\nonumber \end{align} which is a contradiction since $\emptyset \neq \mathcal{H}^{(r)}(T') \subseteq \mathcal{H}^{(r)}$, $\emptyset \neq \mathcal{H}^{(s)}(T') \subseteq \mathcal{H}^{(s)}$, $\mathcal{H}^{(r)}(T')$ and $\mathcal{H}^{(s)}(T')$ are cross-$t$-intersecting, and $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)},\mathcal{H}^{(s)},t)$. Therefore, $r = s$. Suppose that $\mathcal{B}$ is not a trivial $t$-intersecting family. By Lemma~\ref{transversalemma4}, there exists a $t$-element set $T_{\mathcal{A}}$ such that \[|\mathcal{A}| < \frac{s(r-t)}{\mu(\mathcal{H})-r}{s \choose t} |\mathcal{H}^{(r)}(T_{\mathcal{A}})|.\] Thus, $r - t > 0$. Let $T'$ be as defined above (for the case $r < s$). 
We have \begin{align}|\mathcal{A}| + |\mathcal{B}| &< \frac{s(r-t)}{\mu(\mathcal{H})-r}{s \choose t} |\mathcal{H}^{(r)}(T_{\mathcal{A}})| + \frac{r(s-t)}{\mu(\mathcal{H}) - s} {r \choose t} |\mathcal{H}^{(s)}(T_{\mathcal{B}})| \nonumber \\ & = \frac{r(r-t)}{\mu(\mathcal{H})-r}{r \choose t} \left( |\mathcal{H}^{(r)}(T_{\mathcal{A}})| + |\mathcal{H}^{(r)}(T_{\mathcal{B}})| \right) \quad \mbox{(as $r=s$)} \nonumber \\ &< \frac{1}{2} \left( |\mathcal{H}^{(r)}(T_{\mathcal{A}})| + |\mathcal{H}^{(r)}(T_{\mathcal{B}})| \right) \quad \mbox{(by Proposition~\ref{calc} (i))} \nonumber \\ &< |\mathcal{H}^{(r)}(T')| + |\mathcal{H}^{(r)}(T')|,\nonumber \end{align} which is a contradiction because, as in the case $r < s$ above, $\emptyset \neq \mathcal{H}^{(r)}(T') \subseteq \mathcal{H}^{(r)}$, $\mathcal{H}^{(r)}(T')$ and $\mathcal{H}^{(r)}(T')$ are cross-$t$-intersecting, and $(\mathcal{A}, \mathcal{B}) \in M(\mathcal{H}^{(r)}, \mathcal{H}^{(r)},t)$. Therefore, $\mathcal{B}$ is a trivial $t$-intersecting family. Thus, since $r = s$, we can apply the argument in Case~1 to obtain that there exists some $I \in \mathcal{H}$ such that $t \leq |I| \leq r$, $\mathcal{B} = \mathcal{H}^{(r)}(I)$, and $\mathcal{A} = \{H \in \mathcal{H}^{(r)} \colon |H \cap I| \geq t\}$. Since $\mathcal{A}$ is not a trivial $t$-intersecting family, $t < |I|$. It remains to show that $(\mathcal{A}, \mathcal{B}) \neq (\mathcal{H}^{(r)}(I), \{H \in \mathcal{H}^{(r)} \colon |H \cap I| \geq t\})$ (as the theorem states that the two possibilities resulting from it are mutually exclusive.) Since $t < |I|$, $t < r$. Let $T \in {I \choose t}$. Let $B$ be a base of $\mathcal{H}$ such that $I \subseteq B$. Since $\mu(\mathcal{H}) \geq c(r,r,t) \geq r + 2{r \choose t} \geq 3r$, $|B| \geq 3r$. Since $|I| \leq r$, $|B \backslash I| \geq 2r$. Let $X \in {B \backslash I \choose r-t}$. Since $\mathcal{H}$ is hereditary and $T \cup X \subseteq B \in \mathcal{H}$, $T \cup X \in \mathcal{H}$. 
Thus, $T \cup X \in \mathcal{A} \backslash \mathcal{H}^{(r)}(I)$, and hence $\mathcal{A} \neq \mathcal{H}^{(r)}(I)$. Therefore, $(\mathcal{A}, \mathcal{B}) \neq (\mathcal{H}^{(r)}(I), \{H \in \mathcal{H}^{(r)} \colon |H \cap I| \geq t\})$, as required.~\hfill{$\Box$} \footnotesize
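As a quick numerical sanity check of Proposition~\ref{calc}, the following Python sketch verifies inequalities (i) and (ii) over a range of admissible triples $(r,s,t)$. Since the definition of $c(r,s,t)$ appears earlier in the paper, we use here a hypothetical reading of it, namely the larger of the two lower bounds on $n$ that the proof of Proposition~\ref{calc} actually invokes; this is an assumption of the sketch, not the paper's definition.

```python
from math import comb

def c(r, s, t):
    """Hypothetical threshold c(r,s,t): the larger of the two lower
    bounds on n used in the proof of Proposition calc (assumption)."""
    return max((r - t) * (s - t) * 2**r * comb(r, t) + r + s - t,
               2 * (s - t) * comb(s, t) + r)

def check(r, s, t):
    n = c(r, s, t)
    # (i): r(s-t)/(n-s) * C(r,t) < 1/2, cleared of denominators.
    ok_i = 2 * r * (s - t) * comb(r, t) < n - s
    # (ii): C(s,t) <= (1/2) C(n-r, s-r) / C(s-t, s-r), for r < s.
    ok_ii = True
    if r < s:
        ok_ii = 2 * comb(s, t) * comb(s - t, s - r) <= comb(n - r, s - r)
    return ok_i and ok_ii

# Verify over a small grid of triples with 1 <= t <= r <= s, (r,s) != (t,t).
assert all(check(r, s, t)
           for t in range(1, 5)
           for r in range(t, 7)
           for s in range(r, 9)
           if (r, s) != (t, t))
```

Both inequalities hold throughout the grid, consistent with the proposition.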
\section{Introduction}\label{sec:intro} The search for the primordial B-mode polarization of the cosmic microwave background (CMB) radiation at large angular scales in the sky is one of the most exciting challenges of modern cosmology, because such a signal would be the direct signature of the primordial gravitational waves predicted by inflation~\cite{inf}. The amplitude of the primordial B-mode signal, termed the tensor-to-scalar ratio, $r$, will determine the energy scale of inflation, ${E_{\rm inf} \simeq (r/0.008)^{1\over 4}\,10^{16}}$\,GeV. CMB satellite concepts (\emph{LiteBIRD}~\cite{ltb}, \emph{CORE}~\cite{core}, \emph{PIXIE}~\cite{pixie}, \emph{PICO}~\cite{pico}) are being proposed to detect large-scale CMB B-modes at ${r \lesssim 10^{-3}}$. This is a real challenge because the signal is extremely faint ($\lesssim 50$\,nK r.m.s. fluctuations in the sky) and obscured by polarized Galactic foreground emission that is brighter by many orders of magnitude. In addition, gravitational lensing by large-scale structures transforms CMB E-modes into noise-like B-modes, while spurious B-modes are created by instrumental systematic effects. In this context, component separation methods are critical to subtract the foregrounds and extract the CMB B-mode signal, since the residual foreground contamination will set the ultimate uncertainty limit with which $r$ can be measured. In this article, we report on recent B-mode detection forecasts with the CMB satellite concept \emph{CORE}~\cite{core2}, and briefly discuss the problem of foregrounds and component separation for B-modes, highlighting subtle issues that arise in this context. \section{B-mode component separation forecasts for \emph{CORE}}\label{sec:core} The proposed space mission \emph{CORE}~\cite{core} is designed to observe the full sky with high sensitivity through 19 frequency bands, ranging from $60$ to $600$\,GHz.
We report on the results~\cite{core2} of component separation and primordial CMB B-mode reconstruction, based on \emph{CORE} sky simulations. \subsection{Sky simulations}\label{subsec:prod} Using the {\sc PSM} (Planck Sky Model) software~\cite{psm}, we have simulated full-sky polarization maps for the $19$ frequency bands ($60$ to $600$\,GHz) of \emph{CORE}. Our simulated sky maps~\cite{core2} include: CMB E- and B-mode polarization, with an optical depth to reionization $\tau=0.055$ and a tensor-to-scalar ratio ranging from $r=10^{-3}$ to $10^{-2}$; lensing E- and B-modes; Galactic and extra-galactic foreground polarization. Galactic foregrounds consist of thermal dust emission, based on the \emph{Planck} {\sc GNILC} dust template~\cite{gnilc} at $353$\,GHz, with an average polarization fraction of $5$--$10$\% over the sky; polarized Galactic synchrotron emission, as observed by \emph{WMAP} at $23$\,GHz~\cite{mamd}; and Galactic anomalous microwave emission (AME) with 1\% polarization fraction. Extra-galactic foregrounds include compact radio and infrared sources with mean polarization fractions of $3$\%--$5$\% and $1$\%, respectively. The dust map is interpolated across the \emph{CORE} frequency bands through a modified blackbody (MBB) emission law having variable spectral index and temperature over the sky, with mean values $\langle\beta_d\rangle=1.6$ and $\langle T_d\rangle=19.4$\,K, as measured by \emph{Planck}~\cite{gnilc}. The synchrotron map is extrapolated across frequencies through a power-law with an average spectral index of ${\langle\beta_s\rangle=-3}$ varying over the sky~\cite{mamd}. The emission law for extrapolating the AME component is modelled by assuming a Cold Neutral Medium~\cite{ame}. Compact source templates are extrapolated across \emph{CORE} frequencies by assuming random steep or flat power-laws for radio sources, and both modified blackbodies and power-laws for infrared sources.
The component maps at each frequency are coadded, convolved with a Gaussian beam using the \emph{CORE} FWHM values, and instrumental white noise is added to each frequency map using the sensitivities quoted by \emph{CORE}~\cite{core}. \subsection{Component separation methods}\label{subsec:methods} We have applied four independent component separation algorithms~\cite{core2} to the \emph{CORE} sky simulations to perform foreground removal, reconstruction of the CMB B-mode power spectrum, and estimation of the tensor-to-scalar ratio: {\sc Commander}~\cite{compsep}, a Bayesian parametric method for a multi-component pixel-by-pixel spectral fit using MCMC Gibbs sampling; {\sc Smica}~\cite{compsep}, a blind method for a power-spectra fit in harmonic space; {\sc Nilc}~\cite{compsep}, a blind method for minimum-variance internal linear combination in wavelet space; and {\sc xForecast}~\cite{xforecast}, an alternative parametric fitting approach in pixel space. The first three algorithms already have a strong heritage from real \emph{Planck} data analysis~\cite{compsep}. Parametric methods are limited only by the accuracy with which the foregrounds are modelled in the fit, while blind methods do not rely on any assumptions about the foregrounds but are limited by the overall variance of the foregrounds and by the number of frequency channels and multipole modes available to minimize this variance. Since the variance of the foregrounds is much larger at the reionization scales ($\ell \simeq 10$), parametric fitting was preferred to reconstruct CMB B-modes at low multipoles $\ell < 50$ (reionization peak), while blind methods were used to reconstruct the signal at large multipoles $\ell \geq 50$ (recombination peak).
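The minimum-variance combination underlying blind methods such as {\sc Nilc} reduces, in its simplest pixel-space form, to the internal linear combination (ILC) weights $w = C^{-1}a/(a^{\top}C^{-1}a)$, where $C$ is the empirical frequency-frequency covariance and $a$ is the CMB mixing vector. The following toy sketch (synthetic Gaussian maps and a made-up foreground SED, not the \emph{CORE} pipeline) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_pix = 6, 20000
a = np.ones(n_freq)                        # CMB: equal response in all channels
cmb = rng.normal(0.0, 1.0, n_pix)
dust = rng.normal(0.0, 1.0, n_pix)
dust_sed = np.linspace(0.2, 3.0, n_freq)   # toy foreground scaling across channels
noise = 0.3 * rng.normal(size=(n_freq, n_pix))
maps = np.outer(a, cmb) + np.outer(dust_sed, dust) + noise

C = np.cov(maps)                # empirical frequency-frequency covariance (6 x 6)
w = np.linalg.solve(C, a)
w /= a @ w                      # enforce unit response to the CMB: w . a = 1
cmb_hat = w @ maps              # minimum-variance map with the CMB preserved

print("residual rms:", np.std(cmb_hat - cmb))
```

The residual is dominated by noise and foreground leakage; with more channels (and, as in {\sc Nilc}, a wavelet-localized covariance) this leakage is further reduced.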
\subsection{Results}\label{subsec:results} The left panel of Fig.~\ref{fig:core} shows the reconstruction of the primordial CMB B-mode after foreground cleaning with {\sc Commander} and {\sc Smica} for a fiducial tensor-to-scalar ratio of ${r = 5\times 10^{-3}}$, in the absence of lensing. The broad frequency range of \emph{CORE} allows us to recover the primordial B-mode signal at both the reionization and recombination peaks, and to recover ${r = 5\times 10^{-3}}$ from the posterior distribution without bias, at $12\sigma$ significance (right panel of Fig.~\ref{fig:core}), after foreground cleaning. In the presence of lensing contamination, a shortcut was adopted to perform delensing. Instead of correcting for the lensing variance in the foreground-cleaned CMB B-mode map, as real delensing approaches would do, we left $40$\% of the lensing B-mode power in the CMB map realization of the simulation, then performed foreground cleaning on the modified simulation. This is equivalent to performing foreground cleaning and $60$\% delensing, which is the delensing capability quoted by \emph{CORE}~\cite{core3}.
In the presence of lensing, ${r = 5\times 10^{-3}}$ is detected at $4\sigma$ significance after foreground cleaning and $60$\% delensing~\cite{core2}, putting \emph{CORE} in an excellent position to constrain the energy scale of inflation for Starobinsky's $R^2$ inflation model~\cite{inf}. \begin{figure} \vspace{-1cm} \centering \includegraphics[width=0.4\linewidth]{final_commander-smica_r0-005.png} \includegraphics[width=0.43\linewidth]{likelihood_r_model18v5_r5em3_hybrid_joint-smica.png}~\\ \caption[]{Primordial B-mode reconstruction at $r=5\times 10^{-3}$ (\emph{left}) and estimate of $r$ (\emph{right}) for \emph{CORE}.} \label{fig:core} \end{figure} For a tensor-to-scalar ratio as low as $r=10^{-3}$, the residual foreground contamination in the CMB B-mode power spectrum after component separation is significant at all angular scales for all the methods~\cite{core2}, resulting in a $3\sigma$ bias on the measurement of $r=10^{-3}$ by \emph{CORE}. The bias is attributed to the available frequency range, $60$-$600$\,GHz, of \emph{CORE}, for which the minimized variance of the foregrounds achieved by blind methods ({\sc Nilc} and {\sc Smica}) still exceeds $r=10^{-3}$ in power, while being lower than $r=5\times 10^{-3}$. For parametric methods ({\sc Commander}), the absence of frequencies below $60$\,GHz prevents the synchrotron spectral index, $\beta_s$, from being constrained at the level of precision required for $r=10^{-3}$: while the recovered distribution of $\beta_s$ over the sky has the same mean and standard deviation as the actual distribution, it is more Gaussian-distributed, which results in a $2$\% mismatch on $\beta_s$. This error on $\beta_s$ is large enough to cause excess B-mode power at a level of $r\approx 2.5 \times 10^{-3}$ when extrapolating synchrotron B-modes to CMB frequencies~\cite{core2}.
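The sensitivity to a synchrotron spectral-index error can be illustrated with the extrapolation lever arm $(\nu/\nu_0)^{\Delta\beta_s}$; the sketch below (pivot and target frequencies are illustrative choices, not the exact channels of the analysis) shows how a small $\Delta\beta_s$ grows into a percent-level amplitude error at CMB frequencies:

```python
def extrapolation_error(nu, nu0, delta_beta):
    """Fractional amplitude error from a spectral-index mismatch delta_beta
    when scaling a power-law SED from nu0 to nu (same units)."""
    return (nu / nu0) ** delta_beta - 1.0

nu0, nu = 23.0, 100.0  # GHz: a synchrotron pivot and a typical CMB channel
for dbeta in (0.01, 0.03, 0.06):  # 0.06 is 2% of |beta_s| = 3
    amp = extrapolation_error(nu, nu0, dbeta)
    print(f"delta_beta = {dbeta:.2f} -> amplitude error {100 * amp:.1f}%, "
          f"power error {100 * ((1 + amp) ** 2 - 1):.1f}%")
```

Since the B-mode power spectrum scales as the amplitude squared, the power-spectrum error is roughly twice the amplitude error for small $\Delta\beta_s$.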
Subpercent precision on foreground spectral indices is thus required to measure $r=10^{-3}$ without bias, which can be achieved with broader frequency ranges (Sect.~\ref{sec:discussion}). \section{Concluding remarks: subtle issues for B-mode component separation}\label{sec:discussion} {\bf On the importance of a broad frequency range.} The CMB satellite concept \emph{PICO}~\cite{pico} benefits from a broader frequency range ($21$-$800$\,GHz) than \emph{CORE}. The reconstruction of the CMB B-mode power spectrum at $r=10^{-3}$ with \emph{PICO} is shown in the left panel of Fig.~\ref{fig:pico}, for the same sky simulation. This broader frequency range allows {\sc Commander} to control the foreground contamination at the accuracy required to measure ${r=10^{-3}}$ with $2.5\sigma$ significance, without any bias, from low multipoles $2\leq \ell \leq 50$. Conversely, narrowing the baseline frequency range of \emph{PICO} to $43$-$462$\,GHz (right panel of Fig.~\ref{fig:pico}) introduces a bias at large angular scales on the recovered B-mode power spectrum because of residual dust contamination. In the absence of high frequencies $\gtrsim 400$\,GHz, the dust MBB temperature is constrained with lower accuracy (inset in the left corner of the right panel of Fig.~\ref{fig:pico}), which results in spectral degeneracies in the fit and translates into a bias on the reconstructed CMB B-mode at $r=10^{-3}$. {\bf Foreground mismodelling.} Due to the very large dynamic range between foregrounds and CMB B-mode fluctuations, component separation for polarization is much more sensitive to foreground modelling uncertainties than for temperature. Mismodelling two MBB dust components as a single MBB dust component in the {\sc Commander} fit was shown to bias $r=5\times 10^{-2}$ by more than $3\sigma$ for any CMB satellite concept~\cite{bias}.
Most importantly, CMB experiments with narrower frequency ranges show no chi-square evidence for incorrect dust modelling~\cite{bias}: the fit of the overall sky emission remains accurate over a narrow frequency range even while it suffers from spectral degeneracies. Frequencies below $60$\,GHz and above $400$\,GHz are thus critical for CMB B-mode experiments to obtain chi-square evidence of incorrect foreground modelling, and thereby to guard against false detections of $r$. It could be argued that increasing the frequency range of observations will introduce additional foregrounds. However, Galactic foregrounds are not fully decorrelated across frequencies, so the increase in foreground complexity (extra degrees of freedom) should be more than compensated by the increase in information (extra frequencies) for component separation. {\bf Spectral averaging effects.} Foreground spectral indices vary in the sky from line-of-sight to line-of-sight, but sky map observations are pixelized and do not have infinite resolution, so that different spectral indices are averaged within pixels or beams~\cite{moments}. The average of power-laws with different spectral indices in a pixel is no longer a power-law; instead, it introduces spurious curvature in the effective emission law across frequencies~\cite{core2}. Put differently, the effective emission laws of the foregrounds in the pixelized maps may differ from the real emission laws in the sky. Averaging effects are critical for parametric fitting methods in the context of B-modes. Ignoring in the parametric fit a spurious dust curvature of $0.05$ caused by averaging effects results in a bias of $\Delta r \gtrsim 10^{-3}$ on the tensor-to-scalar ratio~\cite{core2}.
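That a pixel average of power laws is not itself a power law is quickly verified by computing the effective (running) spectral index $d\ln I/d\ln\nu$ of a two-component mixture; the indices and frequencies below are made up for illustration:

```python
import numpy as np

nu = np.array([30.0, 100.0, 300.0])    # GHz, illustrative
nu0, beta1, beta2 = 30.0, -3.2, -2.8   # two lines of sight mixed in one pixel

mixed = 0.5 * (nu / nu0) ** beta1 + 0.5 * (nu / nu0) ** beta2
# Effective spectral index between consecutive frequency pairs.
eff_index = np.diff(np.log(mixed)) / np.diff(np.log(nu))
print(eff_index)  # not constant: the mixture has spurious running (curvature)
```

The effective index drifts toward the shallower component at high frequency, which is precisely the spurious curvature that a single-power-law parametric fit would miss.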
To tackle this issue, moment-expansion approaches~\cite{moments}, rather than astrophysical model fitting, might provide an interesting avenue. \begin{figure} \vspace{-1cm} \centering \includegraphics[width=0.4\linewidth]{pico.png} \includegraphics[width=0.4\linewidth]{descoped_pico.png}~\\ \caption[]{CMB B-mode reconstruction for \emph{PICO} 21-800 GHz (\emph{left}) versus \emph{descoped PICO} 43-462 GHz (\emph{right}).} \label{fig:pico} \end{figure} \section*{Acknowledgments} The author acknowledges funding from the ERC Consolidator Grant {\it CMBSPEC} (No.~725456). \section*{References}
\section{Introduction} Let $p \geq 2$ be a large prime, and let $N \leq p$ be a number. Standard analytic methods demonstrate the existence of primitive roots in any short interval \begin{equation} \label{eq175.03} \left [M, M+N \right ] \end{equation} for any number $N \gg p^{1/2+\varepsilon} $, where $M \geq 2$ is a fixed number, and $ \varepsilon>0$ is a small number, see \cite{ES57}, \cite{DH37}, \cite{CL53}, \cite{PS90}. More elaborate exponential sums methods can reduce the size of the interval to $N \gg p^{1/4+\varepsilon}$, see \cite{BD67}. Further, an explicit upper bound shows that the least primitive root $g(p) \geq 2$ satisfies the inequality \begin{equation} \label{eq175.13} g(p) <\sqrt{p}-2 \end{equation} for all primes $p >409$, see \cite{CT15} and \cite{MT15}. Assuming the GRH, it was proved that $g(p) =O\left ( \log ^6 p\right )$, and that the average value is $\overline{g(p)} =O\left ( (\log \log p)^2 \right )$, see \cite{SV92} and \cite{BE93} respectively. \\ Almost all of these results are based on the standard indicator function in Lemma \ref{lem333.2}. This note introduces a new technique, based on the indicator function in Lemma \ref{lem333.3}, to improve the results for primitive roots in short intervals. \begin{thm} \label{thm1.1} Given a small number $ \varepsilon>0$, and a sufficiently large prime \(p \geq 2\), let $N \gg (\log p)^{1+\varepsilon}$. Then, the short interval \begin{equation} \label{el03} \left [ M, M+ N\right ] \end{equation} contains a primitive root for any fixed $M \geq 2$. In particular, the least primitive root satisfies $g(p) =O\left ( (\log p)^{1+\varepsilon} \right )$ unconditionally.
\end{thm} As the density of primitive roots modulo $p$ can be as small as $O(1/\log \log p)$, this result is nearly optimal; see Section \ref{s222} for a discussion.\\ The existence of prime primitive roots in a short interval $[M,M+N]$ with $N < p^{1/2}$ and any fixed $M \geq 2$ requires information about primes in such short intervals, which is not available in the literature. But, for the long interval $[2, x]$, it is feasible. Recently, it was proved that the least prime primitive root satisfies $ g^*(p)= O\left ( p^{\varepsilon} \right )$, unconditionally, see \cite{CN17}. Moreover, assuming standard conjectures, the least prime primitive root is expected to be $g^{*}(p) =O\left ( (\log p) (\log \log p)^2 \right )$, see \cite{BE97}. A very close upper bound is provided here. \begin{thm} \label{thm1.2} If \(p \geq 2\) is a sufficiently large prime, then the least prime primitive root satisfies \begin{equation} \label{el05} g^{*}(p) =O\left ( (\log p)^{1+\varepsilon} \right ) \end{equation} for any small number $ \varepsilon>0$, unconditionally. \end{thm} \begin{thm} \label{thm1.3} Let \(p \geq 2\) be a sufficiently large prime, and let $N \gg p^{0.525}$. Then, the short interval \begin{equation} \label{el07} \left [ M, M+ N\right ] \end{equation} contains a prime primitive root for any fixed $M \geq 2$, unconditionally. \end{thm} The fundamental background material is discussed in the early sections. Section \ref{s887} presents a proof of Theorem \ref{thm1.1}, the penultimate section presents a proof of Theorem \ref{thm1.2}, and the last section presents a proof of Theorem \ref{thm1.3}. \\ \section{Primitive Roots Test} \label{969} For a prime $p \geq 2$, the multiplicative group of the finite field $\mathbb{F}_p$ is cyclic. \begin{dfn} { \normalfont The order $\min \{k \in \mathbb{N}: u^k \equiv 1 \bmod p \}$ of an element $u \in \mathbb{F}_p$ is denoted by $\ord_p(u)$. An element is a \textit{primitive root} if and only if $\ord_p(u)=p-1$.
} \end{dfn} The Euler totient function counts the number of relatively prime integers, \(\varphi (n)=\#\{ k:\gcd (k,n)=1 \}\). This counting function is compactly expressed by the analytic formula \(\varphi (n)=n\prod_{p \mid n}(1-1/p),n\in \mathbb{N} .\) \begin{lem} {\normalfont (Fermat-Euler)} \label{lem2.1} If \(a\in \mathbb{Z}\) is an integer such that \(\gcd (a,n)=1,\) then \(a^{\varphi (n)}\equiv 1 \bmod n\). \end{lem} \begin{lem} \label{lem969.05} {\normalfont (Primitive root test)} An integer $u \in \Z$ is a primitive root modulo an integer $n \in \N$ if and only if \begin{equation*}\label{eq969.52} u^{\varphi (n)/p} -1\not \equiv 0 \mod n \end{equation*} for all prime divisors $p \mid \varphi (n)$. \end{lem} The primitive root test is a special case of the Lucas primality test, introduced in \cite[p.\ 302]{LE78}. A more recent version appears in \cite[Theorem 4.1.1]{CP05}, and similar sources. \begin{lem} \label{lem969.21} {\normalfont (Complexity of the primitive root test)} Given a prime $p \geq 2$, and the squarefree part $p_1 p_2 \cdots p_v \mid p-1$, a primitive root modulo $p$ can be determined in deterministic polynomial time $O(\log ^c p)$, for some constant $c >1$. \end{lem} \begin{proof} The mechanics of the deterministic polynomial time algorithm are specified in \cite[Chapter 11]{SV08}. By Theorem \ref{thm1.2}, the algorithm is repeated at most $O\left ( (\log p)^{1+\varepsilon} \right )$ times, once for each candidate $u=O\left ( (\log p)^{1+\varepsilon} \right )$. This proves the claim. \end{proof} \section{Representations of the Characteristic Functions} \label{s333} The characteristic function \(\Psi :G\longrightarrow \{ 0, 1 \}\) of primitive elements is one of the standard analytic tools employed to investigate the various properties of primitive roots in cyclic groups \(G\). Many equivalent representations of the characteristic function $\Psi $ of primitive elements are possible. Several of these representations are studied in this section.
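The test of Lemma \ref{lem969.05} translates directly into a short program; the sketch below uses naive trial-division factoring, which is adequate for small $p$ but is not the polynomial-time algorithm referenced in Lemma \ref{lem969.21}:

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division (adequate for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(u, p):
    """u is a primitive root mod p iff u^((p-1)/q) != 1 (mod p)
    for every prime divisor q of p - 1."""
    if u % p == 0:
        return False
    return all(pow(u, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

# Least primitive root g(p) for a few small primes.
for p in (7, 11, 13, 23):
    g = next(u for u in range(2, p) if is_primitive_root(u, p))
    print(p, g)
```

For these primes the least primitive roots are $3, 2, 2, 5$, in agreement with standard tables.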
\subsection{Divisors Dependent Characteristic Function} A representation of the characteristic function dependent on the orders of the cyclic groups is given below. This representation is sensitive to the prime decompositions $q=p_1^{e_1}p_2^{e_2}\cdots p_t^{e_t}$, with $p_i$ prime and $e_i\geq1$, of the orders of the cyclic groups $q=\# G$. \\ \begin{lem} \label{lem333.2} Let \(G\) be a finite cyclic group of order \(p-1=\# G\), and let \(0\neq u\in G\) be an invertible element of the group. Then \begin{equation} \label{eq333.02} \Psi (u)=\frac{\varphi (p-1)}{p-1}\sum _{d \mid p-1} \frac{\mu (d)}{\varphi (d)}\sum _{\ord(\chi ) = d} \chi (u)= \left \{\begin{array}{ll} 1 & \text{ if } \ord_p (u)=p-1, \\ 0 & \text{ if } \ord_p (u)\neq p-1. \\ \end{array} \right . \end{equation} \end{lem} \begin{proof} Assume that $u=\tau^{qm}$ is a $q$th power residue modulo $p$, where $q\mid p-1$ and $\gcd(m,p-1)=1$. Then, the inner sum is \begin{equation} \sum _{ \ord(\chi) = q} \chi (u)= \sum _{ \ord(\chi) = q} \chi (\tau^{qm})=\sum _{ \ord(\chi) = q} \chi (\tau^{m})^q=\varphi(q)=q-1, \end{equation} since $\chi(v)^q=1$. Substituting this into the product \begin{eqnarray} \frac{\varphi(p-1)}{p-1} \sum_{d \mid p-1}\frac{\mu(d)}{\varphi(d)} \sum_{\ord(\chi)=d}\chi(u) &=&\frac{\varphi(p-1)}{p-1} \prod_{q \mid p-1} \left (1- \frac{\sum_{\ord(\chi)=q}\chi(u)}{q-1} \right ) \nonumber \\ &=&\frac{\varphi(p-1)}{p-1} \prod_{q \mid p-1} \left (1- \frac{q-1}{q-1} \right )=0 \end{eqnarray} shows that both sides of the equation vanish whenever the element $u \in G$ has order $\ord_p(u) =q \mid p-1$ with $q < p-1$. Now, assume that $u=\tau^{m}$ is not a $q$th power residue modulo $p$ for any $q \mid p-1$, where $\gcd(m,p-1)=1$. Then, the inner sum is \begin{equation} \sum _{ \ord(\chi) = q} \chi (u)= \sum _{ \ord(\chi) = q} \chi (\tau^{m})=-1.
\end{equation} Substituting this into the product \begin{eqnarray} \frac{\varphi(p-1)}{p-1} \sum_{d \mid p-1}\frac{\mu(d)}{\varphi(d)} \sum_{\ord(\chi)=d}\chi(u) &=&\frac{\varphi(p-1)}{p-1} \prod_{q \mid p-1} \left (1- \frac{\sum_{\ord(\chi)=q}\chi(u)}{q-1} \right ) \nonumber \\ &=&\frac{\varphi(p-1)}{p-1} \prod_{q \mid p-1} \left (1- \frac{-1}{q-1} \right )=1 . \end{eqnarray} These computations verify that both sides of the equation equal $1$ if the element $u \in G$ is a primitive root, and vanish if it has order $\ord_p(u) =q \mid p-1$ with $q < p-1$. \end{proof} The precise source of formula \eqref{eq333.02} is not clear. The authors in \cite{DH37} and \cite{WR01} attribute this formula to Vinogradov, and other authors have attributed it to Landau. The proof and other details on the characteristic function are given in \cite[p.\ 863]{ES57}, \cite[p.\ 258]{LN97}, \cite[p.\ 18]{MP07}. The characteristic function for multiple primitive roots is used in \cite[p.\ 146]{CZ98} to study consecutive primitive roots. In \cite{DS12} it is used to study the gap between primitive roots with respect to the Hamming metric. In \cite{WR01} it is used to prove the existence of primitive roots in certain small subsets \(A\subset \mathbb{F}_p\). In \cite{DH37} it is used to prove that some finite fields do not have primitive roots of the form $a\tau+b$, with $\tau$ primitive and $a,b \in \mathbb{F}_p$ constants. In addition, the Artin primitive root conjecture for polynomials over finite fields was proved in \cite{PS95} using this formula. \subsection{Divisors Free Characteristic Function} It is often difficult to derive any meaningful result using the usual divisors-dependent characteristic function of primitive elements given in Lemma \ref{lem333.2}. This difficulty is due to the large number of terms that can be generated by the divisors, for example \(d\mid p-1\), involved in the calculations; see \cite{ES57}, \cite{DS12} for typical applications and \cite[p.\ 19]{MP04} for a discussion.
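For small primes, the character-sum formula of Lemma \ref{lem333.2} can be checked numerically against a brute-force order computation. The characters of $\mathbb{F}_p^{*}$ are realized as $\chi_a(\tau^k)=e^{2\pi i ak/(p-1)}$, with $\ord(\chi_a)=(p-1)/\gcd(a,p-1)$; a small verification sketch:

```python
import cmath
from math import gcd

def mobius(n):
    """Mobius function by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def phi(n):
    """Euler totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def psi(u, p, tau):
    """Characteristic function of primitive roots via the character sum
    in Lemma 333.2; tau must itself be a primitive root mod p."""
    m = p - 1
    k = next(i for i in range(m) if pow(tau, i, p) == u % p)  # discrete log of u
    total = 0.0 + 0.0j
    for a in range(m):                  # character chi_a has order m // gcd(a, m)
        d = m // gcd(a, m)
        total += mobius(d) / phi(d) * cmath.exp(2j * cmath.pi * a * k / m)
    return round((phi(m) / m * total).real)

p, tau = 7, 3  # 3 is a primitive root modulo 7
print([u for u in range(1, p) if psi(u, p, tau) == 1])  # the primitive roots mod 7
```

For $p=7$ the formula selects exactly the two primitive roots, $\{3,5\}$, matching $\varphi(p-1)=\varphi(6)=2$.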
\\ A new \textit{divisors-free} representation of the characteristic function of primitive elements is developed here. This representation can overcome some of the limitations of its counterpart in certain applications. The \textit{divisors-dependent representation} of the characteristic function of primitive roots, Lemma \ref{lem333.2}, detects the order \(\ord_p (u)\) of the element \(u\in \mathbb{F}_p\) by means of the divisors of the totient \(p-1\). In contrast, the \textit{divisors-free representation} of the characteristic function, Lemma \ref{lem333.3}, detects the order \(\text{ord}_p(u) \geq 1\) of the element \(u\in \mathbb{F}_p\) by means of the solutions of the equation \(\tau ^n-u=0\) in \(\mathbb{F}_p\), where \(u,\tau\) are constants, and \(1\leq n<p-1\), with \(\gcd (n,p-1)=1\), is a variable. \begin{lem} \label{lem333.3} Let \(p\geq 2\) be a prime, and let \(\tau\) be a primitive root mod \(p\). If \(u\in\mathbb{F}_p\) is a nonzero element, and \(\psi \neq 1\) is a nonprincipal additive character of order \(\ord \psi =p\), then \begin{equation} \Psi (u)=\sum _{\gcd (n,p-1)=1} \frac{1}{p}\sum _{0\leq m\leq p-1} \psi \left ((\tau ^n-u)m\right)=\left \{ \begin{array}{ll} 1 & \text{ if } \ord_p(u)=p-1, \\ 0 & \text{ if } \ord_p(u)\neq p-1. \\ \end{array} \right . \end{equation} \end{lem} \begin{proof} As the index \(n\geq 1\) ranges over the integers relatively prime to \(p-1\), the element \(\tau ^n\in \mathbb{F}_p\) ranges over the primitive roots \(\text{mod } p\). Ergo, the equation \begin{equation}\label{eq33.30} \tau ^n- u=0 \end{equation} has a solution if and only if the fixed element \(u\in \mathbb{F}_p\) is a primitive root. Next, replace \(\psi (z)=e^{i 2\pi z/p }\) to obtain \begin{equation} \Psi(u)=\sum_{\gcd (n,p-1)=1} \frac{1}{p}\sum_{0\leq m\leq p-1} e^{i 2\pi (\tau ^n-u)m/p }=\left \{ \begin{array}{ll} 1 & \text{ if } \ord_p (u)=p-1, \\ 0 & \text{ if } \ord_p (u)\neq p-1. \\ \end{array} \right.
\end{equation} This follows from the geometric series identity $\sum_{0\leq m\leq N-1} w^{ m }=(w^N-1)/(w-1)$, with $w \ne 1$, applied to the inner sum. \end{proof} \section{Prime Number Results} \label{s533} Some prime number results focusing on the local minima of the ratio \begin{equation}\label{eq533.30} \frac{\varphi(n)}{n}=\prod_{p \mid n}\left( 1- \frac{1}{p} \right)> \frac{1}{e^{\gamma} \log \log n+5/(2 \log \log n)} \end{equation} are recorded in this section. The conditional results are studied in \cite{NJ12}, and the unconditional results are proved by various authors; see \cite[Theorem 7 and Theorem 15]{RS62} and \cite[Theorem 2.9]{MV07}. \begin{lem} \label{lem533.01} Let \(n\geq 1\) be a large integer, and let $\omega(n)$ be the number of prime divisors $p \mid n$. Then \begin{enumerate}[font=\normalfont, label=(\roman*)] \item $\displaystyle\omega(n) \ll \log \log n,$ \tabto{8cm}the average number of prime divisors. \item $\displaystyle \omega(n) \ll \log n/ \log \log n,$\tabto{8cm}the maximal number of prime divisors. \end{enumerate} \end{lem} \begin{proof} These are standard results in analytic number theory, see \cite[Theorem 2.6]{MV07}. \end{proof} \begin{lem} \label{lem533.21} Let \(x\geq 2\) be a large number. Then \begin{enumerate}[font=\normalfont, label=(\roman*)] \item $\displaystyle \prod_{p \leq x}\left( 1- \frac{1}{p} \right) =\frac{1}{e^{\gamma} \log x}+ O\left (e^{-c_0 \sqrt{ \log x}}\right ),$ \tabto{8cm} unconditionally. \item $\displaystyle \prod_{p \leq x}\left( 1- \frac{1}{p} \right) =\frac{1}{e^{\gamma} \log x}+\Omega_{\pm} \left (\frac{\log \log \log x}{x^{1/2}} \right ),$\tabto{8cm}unconditional oscillation. \item $\displaystyle \prod_{p \leq x}\left( 1- \frac{1}{p} \right) =\frac{1}{e^{\gamma} \log x}+ O\left (\frac{\log x}{ x^{1/2}} \right ),$\tabto{8cm}conditional on the RH. \end{enumerate} The symbol $\gamma$ denotes the Euler constant, and $c_0>0$ is an absolute constant.
\end{lem} The explicit estimates are given in \cite[Theorem 7]{RS62}, and the results for products over arithmetic progressions are proved in \cite{LZ07}, et alii. The nonquantitative unconditional oscillations of the error term of the product over the primes are implied by the work of Phragm\'en, refer to equation (\ref{eq533.8}), and \cite[p.\ 182]{NW00}. Since then, various authors have developed quantitative versions, see \cite{RS62}, \cite{DP09}, et alii. \section{Basic Statistics For Primitive Roots} \label{s222} \subsection{Probability Of Primitive Roots} The probability of primitive roots in a finite field $\F_p$ has the closed form $\varphi(p-1)/(p-1) \leq 1/2$. The maximal probability $\varphi(p-1)/(p-1) = 1/2$ occurs on the subset of Fermat primes \begin{equation} \mathcal{F}=\{p=2^{2^n}+1: n \geq 0\}=\{3,5,17,257, 65537, \ldots \}. \end{equation} This is followed by the subset of Germain primes \begin{equation} \mathcal{S}=\{p=2^aq+1: q \geq 2 \text{ is prime, and } a \geq 1 \}=\{5,7,11, 13, 23, 29, \ldots \}, \end{equation} which has $\varphi(p-1)/(p-1) =(1/2)(1-1/q)$, et cetera. Some basic questions, such as the sizes of these subsets of primes, are open problems. In contrast, the minimal probabilities occur on the various subsets of primes with highly composite totients $p-1$. For example, the subset \begin{equation} \mathcal{R}=\{p \geq 2: p-1=2^{v_2}\cdot 3^{v_3}\cdot 5^{v_5}\cdots q^{v_q}, \text{ and } v_i \geq 1\}=\{3,7,31, 211, \ldots \}. \end{equation} In these cases, the probability function can have a complicated expression such as \begin{equation} \label{eq222.8} \frac{\varphi(p-1)}{p-1}\asymp\prod_{q \ll \log p}\left( 1- \frac{1}{q} \right) =\frac{1}{e^{\gamma} \log \log p}+\Omega_{\pm} \left (\frac{\log \log \log \log p}{(\log p)^{1/2}} \right ). \end{equation} This is derived from the standard results in Lemma \ref{lem533.01} and Lemma \ref{lem533.21}.
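The extremal values of $\varphi(p-1)/(p-1)$ quoted above are easy to confirm numerically; a brief sketch:

```python
from math import gcd

def phi(n):
    """Euler totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Fermat primes attain the maximal probability 1/2 ...
print([phi(p - 1) / (p - 1) for p in (3, 5, 17, 257)])

# ... while primes with highly composite totients give small probabilities.
print([round(phi(p - 1) / (p - 1), 4) for p in (7, 31, 211)])
```

For $p=211$, where $p-1=2\cdot3\cdot5\cdot7$, the probability has already dropped to $48/210\approx0.2286$.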
Further, the average probability over all the primes $p \leq x$ is a well known constant \begin{equation} \label{eq222.21} a_0=\frac{1}{\pi(x)} \sum_{p \leq x}\frac{\varphi(p-1)}{p-1}=\prod_{p >2}\left( 1- \frac{1}{p(p-1)} \right) +o(1)= 0.3739558136 \ldots. \end{equation} The analysis of the average appears in \cite{GM68}, \cite{SP69}, and an early numerical calculation is given in \cite{WJ61}. The distribution of primitive roots for highly composite totients $p-1$ is approximately a Poisson distribution with parameter $\lambda>0$. For $k \geq 0$, and $1 \leq t \leq \delta \log \log p$, with $\delta >0$, the probability function has the asymptotic formula \begin{equation} \label{eq222.33} P_k(t) \sim e^{-\lambda} \frac{\lambda^k}{k!}, \end{equation} confer \cite[Theorem 2]{CZ98} for the finer details. \subsection{Average Gap Between Primitive Roots} Let $p\geq2 $ be a prime, and let $g_1, g_2, \ldots, g_t$ be the sequence of primitive roots in increasing order, with $t=\varphi(p-1)$. Given a fixed prime $p\geq 2$, the average gap between consecutive primitive roots is \begin{equation} \label{eq222.35} \overline{d_n}=\frac{p-1}{\varphi(p-1)} \ll \log \log p. \end{equation} \begin{lem} \label{lem222.41} Let \(x\geq 1\) be a large number. Then, the average gap between consecutive primitive roots over all the primes $p \leq x$ is bounded by a constant. In particular, for any constant $c>2$, \begin{equation} \label{eq222.37} \sum_{p \leq x}\frac{p-1}{\varphi(p-1)}=\prod_{p \geq 2}\left( 1+ \frac{1}{(p-1)^2} \right) \li(x)+ O\left (\frac{x}{\log^{c-1} x}\right ). \end{equation} \end{lem} \begin{proof} The identity $n/\varphi(n)=\sum_{d\mid n}\mu^2(d)/\varphi(d)$ is used here to compute the average over all the primes $p \leq x$: \begin{eqnarray} \label{eq222.74} \sum_{p \leq x}\frac{p-1}{\varphi(p-1)} &=&\sum_{p \leq x} \sum_{d\mid p-1}\frac{\mu^2(d)}{\varphi(d)} \\ &=& \sum_{d\leq x}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1\nonumber.
\end{eqnarray} To apply the prime number theorem to the inner sum, use a dyadic partition \begin{equation} \label{eq222.76} \sum_{d\leq x}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1= \sum_{d\leq \log^c x}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1+ \sum_{d\geq \log^cx}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1, \end{equation} where $c>0$ is an arbitrary constant. The first sum has the asymptotic expression \begin{eqnarray} \label{eq222.78} \sum_{d\leq \log^c x}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1 &=& \sum_{d\leq \log^c x}\frac{\mu^2(d)}{\varphi(d)} \left( \frac{\li(x)}{\varphi(d)}+ O\left (\frac{x}{\log^b x}\right ) \right ) \\ &=&\li(x) \sum_{d\geq 1}\frac{\mu^2(d)}{\varphi(d)^2}+ O\left (\frac{x}{\log^b x}\right ) \nonumber, \end{eqnarray} where $b>c+1$. The second sum has the asymptotic expression \begin{equation} \label{eq222.80} \sum_{d\geq \log^cx}\frac{\mu^2(d)}{\varphi(d)}\sum_{\substack{p \leq x\\ p \equiv 1 \bmod d}} 1\ll \frac{x}{\log^c x} \sum_{d\geq \log^cx}\frac{1}{\varphi(d)}= O\left (\frac{x}{\log^{c-1} x}\right ). \end{equation} Combining the last two expressions (\ref{eq222.78}) and (\ref{eq222.80}) completes the proof. \end{proof} The average gap between consecutive primitive roots is precisely the value of the constant \begin{equation} \label{eq222.82} \prod_{p \geq 2}\left( 1+ \frac{1}{(p-1)^2} \right) =2.82638409425598556075406 \ldots . \end{equation} \begin{lem} \label{lem222.47} Let \(p\geq 1\) be a large prime, and let $t\leq\varphi(p-1)$ be a large number. Then, the primitive roots $g_1, g_2, \ldots, g_t$, listed in increasing order, are uniformly distributed over the interval $[2, p-2]$. \end{lem} \begin{proof} Apply Theorem \ref{thm3.4} to the Weyl criterion \begin{equation} \frac{1}{p} \sum_{1\leq n \leq t} e^{i2\pi g_n/p} =o(1) , \end{equation} where $p^{1/2}< t\leq \varphi(p-1)$.
\end{proof} \section{Estimates Of Exponential Sums} \label{s4} This section provides simple estimates for the exponential sums of interest in this analysis. There are two objectives: to determine an upper bound, proved in Theorem \ref{thm3.4}, and to show that \begin{equation} \label{eq3.201} \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} =\sum_{\gcd(n,p-1)=1} e^{i2\pi \tau^n/p}+E(p), \end{equation} where $E(p)$ is an error term; this is proved in Lemma \ref{lem333.22}. The proofs of these results are entirely based on established results and elementary techniques. \subsection{Incomplete And Complete Exponential Sums} Let $f: \C \longrightarrow \C$ be a function, and let $q \in \N$ be a large integer. The finite Fourier transform \begin{equation} \label{eq3.370} \hat{f}(t)=\frac{1}{q} \sum_{0 \leq s\leq q-1} f(s)\, e^{-i 2\pi st/q} \end{equation} and its inverse are used here to derive a summation kernel function, which is almost identical to the Dirichlet kernel. \begin{dfn} \label{dfn3.23} {\normalfont Let $ p$ and $ q $ be primes, and let $\omega=e^{i 2 \pi/q}$, and $\zeta=e^{i 2 \pi/p}$ be roots of unity. The \textit{finite summation kernel} is defined by the finite Fourier transform identity \begin{equation} \label{eq3.373} \mathcal{K}(f(n))=\frac{1}{q} \sum_{0 \leq t\leq q-1} \sum_{0 \leq s\leq p-1} \omega^{t(n-s)}f(s)=f(n).\end{equation} } \end{dfn} This simple identity is very effective in computing upper bounds of some exponential sums \begin{equation} \sum_{ n \leq x} f(n)= \sum_{ n \leq x} \mathcal{K}(f(n)), \end{equation} where $x \leq p < q$. Two applications are illustrated here. \begin{thm} \label{thm3.2} {\normalfont (\cite{SR73}, \cite{ML72}) } Let \(p\geq 2\) be a large prime, and let \(\tau \in \mathbb{F}_p\) be an element of large multiplicative order $\ord_p(\tau) \mid p-1$. Then, for any $b \in [1, p-1]$, and $x\leq p-1$, \begin{equation} \sum_{ n \leq x} e^{i2\pi b \tau^{n}/p} \ll p^{1/2} \log p.
\end{equation} \end{thm} \begin{proof} Let $q=p+o(p)>p$ be a large prime, and let $f(n)=e^{i 2 \pi b\tau^{n} /p}$, where $\tau$ is a primitive root modulo $p$. Applying the finite summation kernel in Definition \ref{dfn3.23} yields \begin{equation} \label{eq3.372} \sum_{ n \leq x} e^{i2\pi b \tau^{n}/p}= \sum_{ n \leq x}\frac{1}{q} \sum_{0 \leq t\leq q-1} \sum_{1 \leq s\leq p-1} \omega^{t(n-s)}e^{i2\pi b \tau^{s}/p} . \end{equation} The term $t=0$ contributes $-x/q$, and rearranging yields \begin{eqnarray} \label{eq3.374} \sum_{ n \leq x} e^{i2\pi b \tau^{n}/p} &=&\frac{1}{q} \sum_{ n \leq x} \sum_{1 \leq t\leq q-1} \sum_{1 \leq s\leq p-1} \omega^{t(n-s)}e^{i2\pi b \tau^{s}/p}-\frac{x}{q} \\ &=&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left (\sum_{1 \leq s\leq p-1} \omega^{-ts}e^{i2\pi b \tau^{s}/p} \right ) \left (\sum_{ n \leq x}\omega^{tn} \right )-\frac{x}{q}\nonumber. \end{eqnarray} Taking absolute values, and applying Lemma \ref{lem333.20} and Lemma \ref{lem333.27}, yields \begin{eqnarray} \label{eq3.376} \left | \sum_{ n \leq x} e^{i2\pi b \tau^{n}/p} \right | &\leq&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left | \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{i2\pi b \tau^{s}/p} \right | \cdot \left | \sum_{ n \leq x}\omega^{tn} \right |+ \frac{x}{q}\nonumber \\ &\ll&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( 2q^{1/2} \log q \right ) \cdot \left ( \frac{2q}{\pi t} \right )+\frac{x}{q}\\ &\ll& p^{1/2} \log^2 p\nonumber . \end{eqnarray} The last summation in (\ref{eq3.376}) uses the estimate \begin{equation} \label{eq3.392} \sum_{1 \leq t\leq q-1}\frac{1}{t}\ll \log q\ll \log p \end{equation} since $q=p+o(p)>p$, and $x/q\leq 1$. \end{proof} This appears to be the best possible upper bound. The above proof generalizes the sum of resolvents method used in \cite{ML72}. Here, it is reformulated as a finite Fourier transform method, which is applicable to a wide range of functions. A similar upper bound for composite moduli $m$ is also proved, [op. cit., equation (2.29)].
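The square-root cancellation asserted by Theorem \ref{thm3.2} can be observed numerically for a small prime (an illustration, not a proof); note that the complete sum over $n \leq p-1$ collapses to $-1$ exactly, since $\tau^n$ then runs over all nonzero residues:

```python
import cmath
from math import sqrt, log

def prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def least_primitive_root(p):
    qs = prime_factors(p - 1)
    return next(u for u in range(2, p)
                if all(pow(u, (p - 1) // q, p) != 1 for q in qs))

def S(p, tau, b, x):
    """Partial sum  sum_{1 <= n <= x} exp(2*pi*i * b * tau^n / p)."""
    return sum(cmath.exp(2j * cmath.pi * b * pow(tau, n, p) / p)
               for n in range(1, x + 1))

p, b = 1009, 1
tau = least_primitive_root(p)
for x in (p // 4, p // 2, p - 1):
    print(f"x = {x:4d}: |S| = {abs(S(p, tau, b, x)):8.3f}"
          f"   (sqrt(p)*log(p) = {sqrt(p) * log(p):.1f})")
```

The incomplete sums stay far below the $p^{1/2}\log p$ envelope, consistent with the theorem.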
\begin{thm} \label{thm3.4} Let \(p\geq 2\) be a large prime, and let $\tau $ be a primitive root modulo $p$. Then, \begin{equation} \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} \ll p^{1-\varepsilon} \end{equation} for any $b \in [1, p-1]$, and any arbitrarily small number $\varepsilon \in (0, 1/2)$. \end{thm} \begin{proof} Let $q=p+o(p)>p$ be a large prime, and let $f(n)=e^{i 2 \pi b\tau^{n} /p}$, where $\tau$ is a primitive root modulo $p$. Start with the representation \begin{equation} \label{eq3.393} \sum_{ \gcd(n,p-1)=1} e^{\frac{i2\pi b \tau^n}{p}}= \sum_{ \gcd(n,p-1)=1}\frac{1}{q} \sum_{0 \leq t\leq q-1,} \sum_{1 \leq s\leq p-1} \omega^{t(n-s)}e^{\frac{i2\pi b \tau^s}{p}} , \end{equation} see Definition \ref{dfn3.23}. Use the inclusion-exclusion principle to rewrite the exponential sum as \begin{equation}\label{eq3.346} \sum_{\gcd(n,p-1)=1} e^{ \frac{i2\pi b \tau^n}{p}} = \sum_{ n \leq p-1}\frac{1}{q} \sum_{0 \leq t\leq q-1,} \sum_{1 \leq s\leq p-1} \omega^{t(n-s)}e^{\frac{i2\pi b \tau^s}{p}} \sum_{\substack{d \mid p-1 \\ d \mid n}}\mu(d) . \end{equation} The term $t=0$ contributes $-\varphi(p-1)/q$, and rearranging it yields \begin{eqnarray}\label{eq3.348} &&\sum_{\gcd(n,p-1)=1} e^{ \frac{i2\pi b \tau^n}{p}} \\ &=& \sum_{ n \leq p-1}\frac{1}{q} \sum_{1 \leq t\leq q-1,} \sum_{1 \leq s\leq p-1} \omega^{t(n-s)}e^{\frac{i2\pi b \tau^s}{p}} \sum_{\substack{d \mid p-1 \\ d \mid n}}\mu(d) -\frac{\varphi(p-1)}{q} \nonumber \\ &=&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi b \tau^s}{p}}\right )\left (\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right ) -\frac{\varphi(p-1)}{q} \nonumber.
\end{eqnarray} Taking absolute value, and applying Lemma \ref{lem333.24} and Lemma \ref{lem333.27}, yield \begin{eqnarray} \label{eq3.379} && \left | \sum_{ \gcd(n, p-1)=1} e^{\frac{i2\pi b \tau^n}{p}} \right | \\ &\leq&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left | \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{i2\pi b \tau^{s}/p} \right | \cdot \left |\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right | +\frac{\varphi(p-1)}{q}\nonumber \\ &\ll&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( 2q^{1/2} \log q \right ) \cdot \left ( \frac{4q \log \log p}{\pi t} \right )+\frac{\varphi(p-1)}{q}\nonumber\\ &\ll& p^{1/2} \log^3 p \nonumber. \end{eqnarray} The last summation in (\ref{eq3.379}) uses the estimate \begin{equation} \label{eq3.394} \sum_{1 \leq t\leq q-1}\frac{1}{t}\ll \log q\ll \log p \end{equation} since $q=p+o(p)>p$, and $\varphi(p-1)/q \leq 1$. This is restated in the simpler notation $p^{1/2}\log ^3 p \leq p^{1-\varepsilon}$ for any arbitrarily small number $\varepsilon \in (0,1/2)$. \end{proof} The upper bound given in Theorem \ref{thm3.4} appears to be optimal. A different proof, which has a weaker upper bound, appears in \cite[Theorem 6]{FS00}, and related results are given in \cite{CC09}, \cite{FS01}, \cite{GZ05}, and \cite[Theorem 1]{GK05}. \subsection{Equivalent Exponential Sums} For any fixed $ 0 \ne b \in \mathbb{F}_p$, the map $ \tau^n \longrightarrow b \tau^n$ is one-to-one in $\mathbb{F}_p$. Consequently, the subsets \begin{equation} \label{eq3.220} \{ \tau^n: \gcd(n,p-1)=1 \}\quad \text { and } \quad \{ b\tau^n: \gcd(n,p-1)=1 \} \subset \mathbb{F}_p \end{equation} have the same cardinalities. As a direct consequence, the exponential sums \begin{equation} \label{3.330} \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} \quad \text{ and } \quad \sum_{\gcd(n,p-1)=1} e^{i2\pi \tau^n/p}, \end{equation} have the same upper bound up to an error term. An asymptotic relation for the exponential sums (\ref{3.330}) is provided in Lemma \ref{lem333.22}.
This result expresses the first exponential sum in (\ref{3.330}) as a simpler exponential sum plus an error term. \begin{lem} \label{lem333.22} Let \(p\geq 2\) be a large prime. If $\tau $ is a primitive root modulo $p$, then \begin{equation} \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} = \sum_{\gcd(n,p-1)=1} e^{i2\pi \tau^n/p} + O(p^{1/2} \log^3 p), \end{equation} for any $ b \in [1, p-1]$. \end{lem} \begin{proof} For $b\ne 1$, the exponential sum has the representation \begin{eqnarray} \label{eq3.320} && \sum_{\gcd(n,p-1)=1} e^{\frac{i2\pi b \tau^n}{p}} \\ &=&\frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi b \tau^s}{p}}\right )\left (\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right ) -\frac{\varphi(p-1)}{q}\nonumber, \end{eqnarray} see equation (\ref{eq3.348}) for details. And, for $b=1$, \begin{eqnarray} \label{eq3.321} && \sum_{\gcd(n,p-1)=1} e^{\frac{i2\pi \tau^n}{p}} \\ &=& \frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi \tau^s}{p}}\right )\left (\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right ) -\frac{\varphi(p-1)}{q}\nonumber, \end{eqnarray} respectively, see (\ref{eq3.348}). Differencing (\ref{eq3.320}) and (\ref{eq3.321}) produces \begin{eqnarray} \label{eq3.90} & & \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} -\sum_{\gcd(n,p-1)=1} e^{i2\pi \tau^n/p} \\ &=& \frac{1}{q} \sum_{0 \leq t\leq q-1} \left ( \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi b \tau^s}{p}}-\sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi \tau^s}{p}}\right ) \nonumber \\ && \times \left (\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right ) \nonumber.
\end{eqnarray} By Lemma \ref{lem333.24}, the relatively prime summation kernel is bounded by \begin{eqnarray} \label{eq3.93} \left |\sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1, \\ d \mid n}} \omega^{tn} \right | &=& \left | \sum_{\gcd(n, p-1)=1}\omega^{tn} \right | \nonumber \\ &\leq & \frac{4 q \log \log p} {\pi t}, \end{eqnarray} and by Lemma \ref{lem333.27}, the difference of two Gauss sums is bounded by \begin{eqnarray} \label{eq3.95} && \left | \sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi b \tau^s}{p}}-\sum_{1 \leq s\leq p-1} \omega^{-ts}e^{\frac{i2\pi \tau^s}{p}}\right | \nonumber \\ &=& \left | \sum_{1 \leq s\leq p-1} \chi(s) \psi_b(s) - \sum_{1 \leq s\leq p-1} \chi(s) \psi_1(s) \right | \nonumber \\ &\leq & 4 q^{1/2} \log q, \end{eqnarray} where $\chi(s)=\omega^{-ts}=e^{-i 2\pi ts/q}$, and $ \psi_b(s)=e^{i2\pi b \tau^s/p}$. Taking absolute value in (\ref{eq3.90}) and replacing (\ref{eq3.93}) and (\ref{eq3.95}) return \begin{eqnarray} \label{388} && \left| \sum_{\gcd(n,p-1)=1} e^{i2\pi b \tau^n/p} -\sum_{\gcd(n,p-1)=1} e^{i2\pi \tau^n/p} \right| \nonumber\\ & \leq & \frac{1}{q} \sum_{1 \leq t\leq q-1} \left ( 4q^{1/2} \log q \right ) \cdot \left ( \frac{4 q \log \log p} {t} \right ) \\ &\leq & 16q^{1/2} (\log q)(\log q)( \log \log p )\nonumber\\ &\leq & 16p^{1/2} \log^3 p \nonumber, \end{eqnarray} where $q=p+o(p)$. \end{proof} The same proof works for many other subsets of elements $\mathcal{A} \subset \mathbb{F}_p$. For example, \begin{equation} \sum_{n \in \mathcal{A}} e^{i2\pi b \tau^n/p} = \sum_{n \in \mathcal{A}} e^{i2\pi \tau^n/p} + O(p^{1/2} \log^c p), \end{equation} for some constant $c>0$. \subsection{Finite Summation Kernels And Gaussian Sums} \begin{lem} \label{lem333.20} Let \(p\geq 2\) and $q=p+o(p)>p$ be large primes. Let $\omega=e^{i2 \pi/q} $ be a $q$th root of unity, and let $t \in [1, p-1]$.
Then, \begin{enumerate}[font=\normalfont, label=(\roman*)] \item $\displaystyle\sum_{n \leq p-1} \omega^{tn} = \frac{\omega^{t}-\omega^{tp}}{1-\omega^{t}},$ \item $\displaystyle \left | \sum_{n \leq p-1} \omega^{tn} \right |\leq \frac{2q }{\pi t}.$ \end{enumerate} \end{lem} \begin{proof} (i) Use the geometric series to compute this simple exponential sum as \begin{eqnarray} \label{eq3.340} \sum_{n \leq p-1} \omega^{tn} &=& \frac{\omega^{t}-\omega^{tp}}{1-\omega^{t}} \nonumber. \end{eqnarray} (ii) Observe that the parameter $q=p+o(p)>p$ is prime, $\omega=e^{i2 \pi/q}$, and the integer $t \in [1, p-1]$ satisfies $t< q-1$. This implies that $\pi t/q\ne k \pi $ with $k \in \mathbb{Z}$, so the sine function $\sin(\pi t/q)\ne 0$ is well defined. Using standard manipulations, and $z/2 \leq \sin(z) <z$ for $0<|z|<\pi/2$, the last expression becomes \begin{equation} \left |\frac{\omega^{t}-\omega^{tp}}{1-\omega^{t}} \right |\leq \left | \frac{2}{\sin( \pi t/ q)} \right | \leq \frac{2q}{\pi t}. \end{equation} \end{proof} \begin{lem} \label{lem333.24} Let \(p\geq 2\) and $q=p+o(p)>p$ be large primes, and let $\omega=e^{i2 \pi/q} $ be a $q$th root of unity. Then, \begin{enumerate}[font=\normalfont, label=(\roman*)] \item $\displaystyle\sum_{\gcd(n,p-1)=1} \omega^{tn} = \sum_{d \mid p-1} \mu(d) \frac{\omega^{dt}-\omega^{dt((p-1)/d+1)}}{1-\omega^{dt}},$ \item $\displaystyle \left | \sum_{\gcd(n,p-1)=1} \omega^{tn} \right |\leq \frac{4q \log \log p}{\pi t},$ \end{enumerate} where $\mu(k)$ is the M\"{o}bius function, for any fixed pair $d \mid p-1$ and $t \in [1, p-1]$.
\end{lem} \begin{proof} (i) Use the inclusion-exclusion principle to rewrite the exponential sum as \begin{eqnarray} \label{eq360} \sum_{\gcd(n,p-1)=1} \omega^{tn}&=& \sum_{n \leq p-1} \omega^{tn} \sum_{\substack{d \mid p-1 \\ d \mid n}}\mu(d) \nonumber \\ &=& \sum_{d \mid p-1} \mu(d) \sum_{\substack{n \leq p-1 \\ d \mid n}} \omega^{tn}\nonumber \\ & =&\sum_{d\mid p-1} \mu(d) \sum_{m \leq (p-1)/ d} \omega^{dtm} \\ &=& \sum_{d \mid p-1} \mu(d) \frac{\omega^{dt}-\omega^{dt((p-1)/d+1)}}{1-\omega^{dt}} \nonumber. \end{eqnarray} (ii) Observe that the parameter $q=p+o(p)>p$ is prime, $\omega=e^{i2 \pi/q}$, the integers $t \in [1, p-1]$, and $d \leq p-1<q-1$. This implies that $\pi dt/q\ne k \pi $ with $k \in \mathbb{Z}$, so the sine function $\sin(\pi dt/q)\ne 0$ is well defined. Using standard manipulations, and $z/2 \leq \sin(z) <z$ for $0<|z|<\pi/2$, each summand becomes \begin{equation} \left |\frac{\omega^{dt}-\omega^{dt((p-1)/d+1)}}{1-\omega^{dt}} \right |\leq \left | \frac{2}{\sin( \pi dt/ q)} \right | \leq \frac{2q}{\pi dt} \end{equation} for $1 \leq d \leq p-1$. Finally, the upper bound is \begin{eqnarray} \left| \sum_{d \mid p-1} \mu(d) \frac{\omega^{dt}-\omega^{dt((p-1)/d+1)}}{1-\omega^{dt}} \right| &\leq&\frac{2q}{\pi t} \sum_{d \mid p-1} \frac{1}{d} \\ &\leq& \frac{4q \log \log p}{\pi t} \nonumber. \end{eqnarray} The last inequality uses the elementary estimate $ \sum_{d \mid n} d^{-1} \leq 2 \log \log n$. \end{proof} \begin{lem} {\normalfont (Gauss sums)} \label{lem333.27} Let \(p\geq 2\) and $q$ be large primes. Let $\chi(t)=e^{i2 \pi t/q} $ and $\psi(t)=e^{i2\pi \tau^t/p}$ be a pair of characters. Then, the Gaussian sum has the upper bound \begin{equation} \label{eq3-355} \left |\sum_{1 \leq t \leq q-1} \chi(t) \psi(t) \right | \leq 2 q^{1/2} \log q.
\end{equation} \end{lem} \section{Maximal Error Term} \label{s899} The upper bounds for exponential sums over subsets of elements in finite fields $\mathbb{F}_p$ studied in Section \ref{s4} are used to estimate the error terms $E(x,y)$ and $E(x,\Lambda)$ in the proofs of Theorem \ref{thm1.1} and Theorem \ref{thm1.2}, respectively. \subsection{Short Intervals} \begin{lem} \label{lem899.06} Let \(p\geq 2\) be a large prime, let \(\psi \neq 1\) be an additive character, and let \(\tau\) be a primitive root mod \(p\). If the element \(u\ne 0\) is not a primitive root, then, \begin{equation} \label{el899.00} \frac{1}{p}\sum_{x \leq u\leq y,} \sum_{\gcd(n,p-1)=1,} \sum_{ 0<m \leq p-1} \psi \left((\tau ^n-u)m\right)\ll \frac{y-x }{p^{\varepsilon}} \end{equation} for all sufficiently large numbers $1 \leq x< y\leq p$, and an arbitrarily small number \(\varepsilon >0\). \end{lem} \begin{proof} By hypothesis $\tau ^n-u\ne 0$, so $\sum_{ 0<m\leq p-1} \psi \left((\tau ^n-u)m\right)= -1$. Since $ \varphi(p-1)/p\leq 1/2$, it suffices to establish the nontrivial bound \begin{equation} \left | E(x,y) \right | < \left |-\frac{\varphi(p-1)}{p}(y-x)\right | \leq \frac{y-x}{2} . \end{equation} Toward this end, let $\psi(z)=e^{i 2 \pi z/p}$, and rearrange the triple finite sum in the form \begin{eqnarray} \label{eq899.05} E(x,y)&=&\frac{1}{p}\sum_{x \leq u \leq y,} \sum_{ 0<m\leq p-1,} \sum_{\gcd(n,p-1)=1} \psi ((\tau ^n-u)m) \\ &= & \frac{1}{p}\sum_{x \leq u \leq y} \left (\sum_{ 0<m\leq p-1,} e^{-i 2 \pi um/p} \right ) \left ( \sum_{\gcd(n,p-1)=1} e^{i 2 \pi m\tau ^n/p} \right )\nonumber \\ &= & \frac{1}{p}\sum_{x \leq u \leq y} \left (\sum_{ 0<m\leq p-1,} e^{-i 2 \pi um/p} \right ) \left ( \sum_{\gcd(n, p-1)=1} e^{i2\pi \tau^{n}/p} + O(p^{1/2} \log^3 p) \right )\nonumber \\ &= & \frac{1}{p}\sum_{x \leq u \leq y} U_p \cdot V_p \nonumber. \end{eqnarray} The third line in equation (\ref{eq899.05}) follows from Lemma \ref{lem333.22}.
The first exponential sum $U_p$ has the exact evaluation \begin{equation}\label{eq899.13} | U_p| = \left |\sum_{ 0<m\leq p-1} e^{-i 2 \pi um/p} \right |=1, \end{equation} where $\sum_{ 0<m\leq p-1} e^{i 2 \pi um/p}=-1$ for any $u \in [x,y]$, with $1\leq x < y<p$. The second exponential sum $V_p$ has the upper bound \begin{eqnarray} \label{eq899.15} |V_p|&=& \left |\sum_{\gcd(n,p-1)=1} e^{i2 \pi \tau ^n/p}+ O\left (p^{1/2} \log^3 p \right ) \right |\nonumber \\ &\ll &\left |\sum_{\gcd(n,p-1)=1} e^{i2 \pi \tau ^n/p} \right |+p^{1/2} \log^3 p \\ &\ll& p^{1-\varepsilon} \nonumber, \end{eqnarray} where \(\varepsilon <1/2 \) is an arbitrarily small number, see Theorem \ref{thm3.4}. \\ Taking absolute value in (\ref{eq899.05}), and replacing the estimates (\ref{eq899.13}) and (\ref{eq899.15}) return \begin{eqnarray} \label{el89991} \frac{1}{p}\sum_{x \leq u \leq y} \left | U_p \cdot V_p \right | &\leq & \frac{1}{p} \sum_{x \leq u \leq y} \left | U_p \right | \cdot |V_p| \nonumber \\ &\ll &\frac{1}{p} \sum_{x \leq u \leq y} (1) \cdot p^{1-\varepsilon } \\ &\ll & \frac{ 1}{p^{\varepsilon }}\sum_{x \leq u \leq y} 1 \nonumber \\ &\ll & \frac{y-x} {p^{\varepsilon}}\nonumber. \end{eqnarray} This completes the verification. \end{proof} \subsection{Long Intervals} The results available in the literature for primes in small intervals of the form $[x, x+y]$ with $y < x^{1/2}$ are not uniform. In light of this fact, only the error term for the simpler intervals $[2,x]$ can be computed effectively. \begin{lem} \label{lem899.16} Let \(p\geq 2\) be a large prime, let \(\psi \neq 1\) be an additive character, and let \(\tau\) be a primitive root mod \(p\).
If the element \(u\ne 0\) is not a primitive root, then, \begin{equation} \label{el899.10} \frac{1}{p}\sum_{ u\leq x,} \sum_{\gcd(n,p-1)=1,} \sum_{ 0<m\leq p-1} \psi \left((\tau ^n-u)m\right) \Lambda(u) \ll \frac{x }{p^{\varepsilon}} \end{equation} for all sufficiently large numbers $x \geq 1$, and an arbitrarily small number \(\varepsilon >0\). \end{lem} \begin{proof} The proof is the same as that of Lemma \ref{lem899.06}. \end{proof} \section{Asymptotics For The Main Terms} \label{s999} The notation $f(x)\asymp g(x)$ is defined by $af(x)<g(x)<bf(x)$ for some constants $a,b >0$. \subsection{Short Intervals For Primitive Root} \begin{lem} \label{lem999.76} Let \(p\geq 2\) be a large prime, and let $1 \leq x <y<p$ be a pair of numbers. Then, \begin{equation} \label{el999.38} \sum _{x \leq u\leq y} \frac{1}{p}\sum_{\gcd(n,p-1)=1}1\gg \frac{y-x}{\log \log p}\left (1+O\left((\log \log p) e^{-c_0 \sqrt{ \log \log p}} \right ) \right ) . \end{equation} \end{lem} \begin{proof} The maximal number $\omega(p-1)$ of prime divisors of highly composite totients $p-1$ satisfies $\omega(p-1) \gg \log p/ \log \log p$. This implies that $z \asymp \log p$. An application of Lemma \ref{lem533.21} to the ratio returns \begin{eqnarray} \frac{\varphi(p-1)}{p}&=&\frac{p-1}{p} \prod_{q \mid p-1}\left( 1- \frac{1}{q} \right)\nonumber\\ &\geq& \prod_{q \leq z}\left( 1- \frac{1}{q} \right)\nonumber\\ &=&\frac{1}{e^{\gamma} \log z}+ O\left (e^{-c_0 \sqrt{ \log z}}\right )\\ &\gg&\frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\nonumber. \end{eqnarray} Substituting this, the main term reduces to \begin{eqnarray} \label{el999.53} M(x,y)&=&\sum _{x \leq u\leq y} \frac{1}{p}\sum_{\gcd(n,p-1)=1}1 \nonumber\\ &=& \frac{\varphi(p-1)}{p}\left ( y-x \right ) \\ &\gg& \left ( \frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\right )\left ( y-x\right ) \nonumber . \end{eqnarray} This proves the claim.
\end{proof} \subsection{Long Intervals For Prime Primitive Root} \begin{lem} \label{lem999.86} Let \(p\geq 2\) be a large prime, and let \(x < p \) be a number. Then, \begin{equation} \label{el999.48} \sum _{u\leq x} \frac{1}{p}\sum_{\gcd(n,p-1)=1}\Lambda(u)\gg \frac{x}{\log \log p}\left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \end{equation} for some constant $c_0>0$. \end{lem} \begin{proof} The maximal number $\omega(p-1)$ of prime divisors of highly composite totients $p-1$ satisfies $\omega(p-1) \gg \log p/ \log \log p$. This implies that $z \asymp \log p$. An application of Lemma \ref{lem533.21} to the ratio returns \begin{eqnarray} \frac{\varphi(p-1)}{p}&=&\frac{p-1}{p} \prod_{q \mid p-1}\left( 1- \frac{1}{q} \right)\nonumber\\ &\geq& \prod_{q \leq z}\left( 1- \frac{1}{q} \right)\nonumber\\ &=&\frac{1}{e^{\gamma} \log z}+ O\left (e^{-c_0 \sqrt{ \log z}}\right )\\ &\gg&\frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\nonumber. \end{eqnarray} In addition, using the prime number theorem in the form $\sum_{n \leq x}\Lambda(n)=x +O\left (xe^{-c_0 \sqrt{ \log x}}\right )$, the main term reduces to \begin{eqnarray} \label{el999.66} M(x,\Lambda)&=&\sum _{u\leq x} \frac{1}{p}\sum_{\gcd(n,p-1)=1}\Lambda(u) \nonumber\\ &=& \frac{\varphi(p-1)}{p}\sum _{u\leq x}\Lambda(u) \nonumber\\ &=& \frac{\varphi(p-1)}{p}\left ( x+ O\left (xe^{-c_0 \sqrt{ \log x}}\right )\right ) \\ &\gg& \left ( \frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\right )\left ( x+ O\left (xe^{-c_0 \sqrt{ \log x}}\right )\right ) \nonumber \\ &\gg& \frac{x}{\log \log p}\left (1+O\left((\log \log p) e^{-c_0 \sqrt{ \log \log p}} \right ) \right ) \left ( 1+ O\left (e^{-c_0 \sqrt{ \log x}}\right )\right ) \nonumber\\ &\gg& \frac{x}{\log \log p}\left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \nonumber. \end{eqnarray} This proves the claim.
\end{proof} \subsection{Short Intervals For Prime Primitive Root} \begin{lem} \label{lem999.96} Let \(p\geq 2\) be a large prime, and let $1 \leq p^{.525} <N<p$ be a pair of numbers. Then, for any number $M <p$, \begin{equation} \label{el999.39} \sum _{M \leq u\leq M+N} \frac{1}{p}\sum_{\gcd(n,p-1)=1}\Lambda(u)\gg \frac{N}{e^{\gamma} \log \log p} \left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) . \end{equation} \end{lem} \begin{proof} The maximal number $\omega(p-1)$ of prime divisors of highly composite totients $p-1$ satisfies $\omega(p-1) \gg \log p/ \log \log p$. This implies that $z \asymp \log p$. An application of Lemma \ref{lem533.21} to the ratio returns \begin{eqnarray} \frac{\varphi(p-1)}{p}&=&\frac{p-1}{p} \prod_{q \mid p-1}\left( 1- \frac{1}{q} \right)\nonumber\\ &\geq& \prod_{q \leq z}\left( 1- \frac{1}{q} \right)\nonumber\\ &=&\frac{1}{e^{\gamma} \log z}+ O\left (e^{-c_0 \sqrt{ \log z}}\right )\\ &\gg&\frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\nonumber. \end{eqnarray} Let $x=M$, and $y=M+N$. Substituting this, the main term reduces to \begin{eqnarray} \label{el999.51} M(x,y,\Lambda)&=&\sum _{x \leq u\leq y} \frac{1}{p}\sum_{\gcd(n,p-1)=1}\Lambda(u) \nonumber\\ &=& \frac{\varphi(p-1)}{p}\sum _{x \leq u\leq y} \Lambda(u)\\ &\gg& \left ( \frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\right )\sum _{x \leq u\leq y} \Lambda(u) \nonumber . \end{eqnarray} Applying the prime number theorem in short intervals $\sum _{x \leq n\leq y} \Lambda(n) \gg y-x=N$, see \cite{BP01}, to the last inequality yields \begin{eqnarray} \label{el999.52} M(x,y,\Lambda)&\gg& \left ( \frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\right )\left ( y-x\right ) \\ &\gg& \frac{N}{e^{\gamma} \log \log p} \left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \nonumber .
\end{eqnarray} This proves the claim. \end{proof} \section{Primitive Roots In Short Intervals} \label{s887} The previous sections provide sufficient background material to assemble the proof of the existence of primitive roots in a short interval $\left [M, M+N \right ]$ for any sufficiently large prime $p \geq 2$, a number $N \gg (\log p)^{1+\varepsilon} $, and the fixed parameters $ M \geq 2$ and $\varepsilon >0$. \\ The analysis below indicates that the local minima of the ratio $\varphi(p-1)/p$ at the highly composite totients $p-1$ are the primary factor determining the size of the short interval. \begin{proof} (Theorem \ref{thm1.1}) Suppose that the short interval $\left [M, M+N \right ]=[x,y]$, with $1 \leq x <y<p$, does not contain a primitive root modulo a large prime \(p\geq 2\), and consider the sum of the characteristic function over the short interval, that is, \begin{equation} \label{el887.40} 0=\sum _{x \leq u\leq y} \Psi (u). \end{equation} Replacing the characteristic function, Lemma \ref{lem333.3}, and expanding the nonexistence equation (\ref{el887.40}) yield \begin{eqnarray} \label{el887.50} 0&=&\sum _{x \leq u\leq y} \Psi (u) \nonumber \\ &=&\sum _{x \leq u\leq y} \left (\frac{1}{p}\sum_{\gcd(n,p-1)=1,} \sum_{ 0\leq m\leq p-1} \psi \left((\tau ^n-u)m\right) \right ) \\ &=& \frac{c_p}{p} \sum _{x \leq u\leq y,} \sum_{\gcd(n,p-1)=1} 1+\frac{1}{p}\sum _{x \leq u\leq y,} \sum_{\gcd(n,p-1)=1,} \sum_{ 0<m\leq p-1} \psi \left((\tau ^n-u)m\right)\nonumber\\ &=&M(x,y) + E(x,y)\nonumber, \end{eqnarray} where $c_p \geq 0$ is a local correction constant depending on the fixed prime $p\geq 2$.
The main term $M(x,y)$ is determined by a finite sum over the trivial additive character \(\psi =1\), and the error term $E(x,y)$ is determined by a finite sum over the nontrivial additive characters \(\psi(t) =e^{i 2\pi t /p}\neq 1\).\\ An application of Lemma \ref{lem999.76} to the main term, and an application of Lemma \ref{lem899.06} to the error term yield \begin{eqnarray} \label{el887.60} \sum _{x \leq u\leq y} \Psi (u) &=&M(x,y) + E(x,y) \nonumber\\ &\gg& \left ( \frac{1}{e^{\gamma} \log \log p}+ O\left (e^{-c_0 \sqrt{ \log \log p}}\right )\right ) (y-x)+O\left(\frac{y-x}{p^{\varepsilon}} \right ) \nonumber \\ &\gg& \frac{y-x}{\log \log p}\left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \nonumber \\ &>&0 \nonumber, \end{eqnarray} where the implied constant $d_p=e^{-\gamma}a_pc_p \geq 0$ depends on local information and the fixed prime $p\geq 2$. However, a short interval $[x,y]$ of length $y-x=N \gg (\log p)^{1+\varepsilon} > 0$ contradicts the hypothesis (\ref{el887.40}) for all sufficiently large primes $p \geq 2$. Ergo, the short interval $\left [M, M+N \right ]$ contains a primitive root for any sufficiently large prime $p \geq 2$ and the fixed parameters $ M \geq 2$ and $\varepsilon >0$. \end{proof} \section{Least Prime Primitive Roots} \label{s888} A modified version of the previous result demonstrates the existence of prime primitive roots in an interval $\left [2,x \right ]$ for any sufficiently large prime $p \geq 2$. The analysis below indicates that the local minima of the ratio $\varphi(p-1)/p$ at the highly composite totients $p-1$, and the number of primes $\sum_{p \leq x}\Lambda(n)$ are the primary factors determining the size of the interval $\left [2,x \right ]$.
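Before turning to the proof, the quantity in question is easy to inspect numerically. The following Python sketch (illustrative only, with arbitrarily chosen sample primes; it plays no role in the argument) locates the least prime primitive root modulo a few primes $p$, which is the quantity that the interval $\left[2,x\right]$ must accommodate.

```python
def prime_factors(n):
    """Set of prime divisors of n, found by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_primitive_root(u, p):
    """u generates (Z/pZ)* iff u^((p-1)/q) != 1 mod p for every prime q | p-1."""
    return u % p != 0 and all(pow(u, (p - 1) // q, p) != 1
                              for q in prime_factors(p - 1))

def least_prime_primitive_root(p):
    return next(u for u in range(2, p) if is_prime(u) and is_primitive_root(u, p))

for p in (101, 1009, 10007):
    print(p, least_prime_primitive_root(p))
```

For instance, $u=2$ is already a primitive root modulo $p=101$, since $2^{50}\equiv -1$ and $2^{20}\not\equiv 1 \pmod{101}$.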
\begin{proof} (Theorem \ref{thm1.2}) Suppose that the interval $[2,x]$, with $1 \leq x <p$, does not contain a prime primitive root modulo a large prime \(p\geq 2\), and consider the sum of the weighted characteristic function over the integers $u \leq x$, that is, \begin{equation} \label{el887.80} 0=\sum _{ u\leq x} \Psi (u) \Lambda(u). \end{equation}\\ Replacing the characteristic function, Lemma \ref{lem333.3}, and expanding the nonexistence equation (\ref{el887.80}) yield \begin{eqnarray} \label{el887.55} 0&=&\sum _{ u\leq x} \Psi (u) \Lambda(u)\nonumber \\ &=&\sum _{u\leq x} \left (\frac{1}{p}\sum_{\gcd(n,p-1)=1,} \sum_{ 0\leq m\leq p-1} \psi \left((\tau ^n-u)m\right) \right ) \Lambda(u)\\ &=& \frac{c_p}{p} \sum _{ u\leq x} \Lambda(u) \sum_{\gcd(n,p-1)=1} 1+\frac{1}{p}\sum _{ u\leq x} \Lambda(u) \sum_{\gcd(n,p-1)=1,} \sum_{ 0<m\leq p-1} \psi \left((\tau ^n-u)m\right)\nonumber\\ &=&M(x,\Lambda) + E(x,\Lambda)\nonumber, \end{eqnarray} where $c_p \geq 0$ is a local correction constant depending on the fixed prime $p\geq 2$. The main term $M(x,\Lambda)$ is determined by a finite sum over the trivial additive character \(\psi =1\), and the error term $E(x,\Lambda)$ is determined by a finite sum over the nontrivial additive characters \(\psi(t) =e^{i 2\pi t /p}\neq 1\).\\ An application of Lemma \ref{lem999.86} to the main term, and an application of Lemma \ref{lem899.16} to the error term yield \begin{eqnarray} \label{el887.65} \sum _{ u\leq x} \Psi (u)\Lambda(u) &=&M(x,\Lambda) + E(x,\Lambda) \nonumber\\ &\gg& \frac{x}{\log \log p}\left (1+O\left((\log \log p) e^{-c_0 \sqrt{ \log \log p}} \right ) \right ) +O\left(\frac{x}{p^{\varepsilon}} \right ) \nonumber \\ &\gg& \frac{x}{\log \log p}\left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \nonumber \\ &>&0 \nonumber, \end{eqnarray} where the implied constant $d_p=e^{-\gamma}a_pc_p \geq 0$ depends on local information and the fixed prime $p\geq 2$.
But, an interval $[2,x]$ of length $x-2 \gg (\log p)^{1+\varepsilon} > 0$ contradicts the hypothesis (\ref{el887.80}) for all sufficiently large primes $p \geq 2$. Ergo, the short interval $\left [2,x \right ]$ contains a prime primitive root for any sufficiently large prime $p \geq 2$ and a fixed parameter $\varepsilon >0$. \end{proof} \section{Prime Primitive Roots In Short Intervals} \label{s1088} The prime number theorem in short intervals gives $\sum _{M \leq n\leq M+N} \Lambda(n) \gg N$; see \cite{BP01}. A modified version of the previous result will prove the existence of prime primitive roots in a short interval $\left [M,M+N \right ]$ for any sufficiently large prime $p \geq 2$, $N \gg p^{.525}$, and any $M<p$. The analysis below indicates that the number of primes $\sum_{M\leq n \leq M+N}\Lambda(n)$ in a short interval $\left [M,M+N \right ]$ is the primary factor determining the size of the interval $N$. The local minima of the ratio $\varphi(p-1)/p$ at the highly composite totients $p-1$ have a minor impact on the analysis. \begin{proof} (Theorem \ref{thm1.3}) Suppose that the short interval $[M,M+N]$, with $1 \leq M<M+N <p$, does not contain a prime primitive root modulo a large prime \(p\geq 2\), and consider the sum of the weighted characteristic function over the integers $u \in [M, M+N]$, that is, \begin{equation} \label{el1088.80} 0=\sum _{ M\leq u\leq M+N} \Psi (u) \Lambda(u).
\end{equation}\\ Replacing the characteristic function, Lemma \ref{lem333.3}, and expanding the nonexistence equation (\ref{el1088.80}) yield\\ \begin{eqnarray} \label{el1088.55} 0&=&\sum _{ M\leq u\leq M+N} \Psi (u) \Lambda(u)\nonumber \\ &=&\sum _{ M\leq u\leq M+N} \left (\frac{1}{p}\sum_{\gcd(n,p-1)=1,} \sum_{ 0\leq m\leq p-1} \psi \left((\tau ^n-u)m\right) \right ) \Lambda(u)\\ &=&\frac{c_p}{p} \sum _{ M\leq u\leq M+N}\Lambda(u) \sum_{\gcd(n,p-1)=1} 1+\frac{1}{p}\sum _{ M\leq u\leq M+N} \Lambda(u) \sum_{\gcd(n,p-1)=1,} \sum_{ 0<m\leq p-1} \psi \left((\tau ^n-u)m\right)\nonumber\\ &=&M(N,\Lambda) + E(N,\Lambda)\nonumber, \end{eqnarray} where $c_p \geq 0$ is a local correction constant depending on the fixed prime $p\geq 2$. The main term $M(N,\Lambda)$ is determined by a finite sum over the trivial additive character \(\psi =1\), and the error term $E(N,\Lambda)$ is determined by a finite sum over the nontrivial additive characters \(\psi(t) =e^{i 2\pi t /p}\neq 1\).\\ An application of Lemma \ref{lem999.96} to the main term, and an application of Lemma \ref{lem899.16} to the error term yield \begin{eqnarray} \label{el1088.65} \sum _{ M\leq u\leq M+N} \Psi (u)\Lambda(u) &=&M(N,\Lambda) + E(N,\Lambda) \nonumber\\ &\gg& \frac{N}{\log \log p}\left (1+O\left((\log \log p) e^{-c_0 \sqrt{ \log \log p}} \right ) \right ) +O\left(\frac{N}{p^{\varepsilon}} \right ) \nonumber \\ &\gg& \frac{N}{\log \log p}\left (1+ O\left ( \frac{e^{\gamma} \log \log p}{e^{c_0 \sqrt{ \log \log p}}}\right )\right ) \nonumber \\ &>&0 \nonumber, \end{eqnarray} where the implied constant $d_p=e^{-\gamma}a_pc_p \geq 0$ depends on local information and the fixed prime $p\geq 2$. But, an interval $[M,M+N]$ of length $N \gg p^{.525} > 0$ contradicts the hypothesis (\ref{el1088.80}) for all sufficiently large primes $p \geq 2$. Ergo, the short interval $\left [M,M+N \right ]$ contains a prime primitive root for any sufficiently large prime $p \geq 2$ and a fixed parameter $M\geq 0$.
\end{proof} \section{Problems} \begin{exe} { \normalfont Determine an explicit interval $\left [M, M+N \right ]$, where $N \geq c_0(\log \log p)^{1+\varepsilon}$, $c_0>0$ is a constant, and $\varepsilon \leq 2$, such that the interval contains a primitive root for any prime $p\geq p_0$, and $ M \geq 2$.} \end{exe} \begin{exe} { \normalfont Let $a_0=\prod_{p >2}\left( 1- 1/p(p-1) \right) = 0.3739558136 \ldots$ be the average probability of a primitive root modulo a prime $p \geq 2$. Determine the length $N \geq2$ of the average short interval $\left [M, M+N \right ]$ that contains $N\cdot (0.3739\ldots )^k (1-0.3739\ldots )^{N-k}\geq k$ primitive roots, where $N \geq (\log \log p)^{1+\varepsilon}\geq k$, $k\geq 1$, and $\varepsilon=1$.} \end{exe} \begin{exe} { \normalfont Show that the distribution of primitive roots modulo a large Germain prime $p=2^aq+1$ with $q \geq 2$ prime, and $a \geq 1$, has a normal approximation with mean $\mu\approx 2^{a-1}q(1-1/q)$ and standard deviation $\sigma \approx \sqrt{2^{a-2}q(1-1/q^2)}$.} \end{exe} \begin{exe} { \normalfont Estimate the number of highly composite totients $p-1$ in a short interval, that is, $$\sum_{\substack{x \leq p \leq x+y\\ \omega(p-1)\gg \log p/\log \log p}}1, $$ where $x \geq 1$ is a large number, and $1 <y<x$. } \end{exe}
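The first problem asks for an explicit admissible interval length. As a small numerical aid (illustrative only, for the arbitrarily chosen modulus $p=1009$), the following Python sketch counts the primitive roots modulo $p$ and measures the largest gap between consecutive ones, which any explicit interval $\left[M, M+N\right]$ must dominate.

```python
from math import log

def prime_factors(n):
    """Set of prime divisors of n, found by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def primitive_roots(p):
    """All primitive roots modulo the prime p, via the order test."""
    qs = prime_factors(p - 1)
    return [u for u in range(1, p)
            if all(pow(u, (p - 1) // q, p) != 1 for q in qs)]

p = 1009
roots = primitive_roots(p)
max_gap = max(b - a for a, b in zip(roots, roots[1:]))
# the count equals phi(p-1); compare the largest gap with (log p)^2
print(len(roots), max_gap, log(p) ** 2)
```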
\section{Introduction} \label{Before} The conjecture of \emph{M-theory} \cite{Townsend95, Witten95I} (see \cite{Duff99B, BeckerBeckerSchwarz06}) says, roughly, that there exists a non-perturbative physical theory, which makes the following schematic diagram commute: \begin{equation} \label{TheMillionDollarQuestion} \raisebox{55pt}{ \xymatrix@C=10pt{ & \fbox{\footnotesize M-Theory} \ar[dl]_{ \mbox{ \tiny \begin{tabular}{c} double dimensional reduction \\ along $S^1$-fibration \end{tabular} } } \ar[dr]^{ \tiny \begin{tabular}{c} low energy \\ approximation \end{tabular} } \\ \fbox{\footnotesize \begin{tabular}{c} Perturbative \\ type IIA string theory \end{tabular} } \ar[dr]_{ \tiny\begin{tabular}{c} low energy \\ approximation \end{tabular} } && \fbox{\footnotesize \begin{tabular}{c} 11d supergravity \end{tabular} } \ar[dl]^-{~~ \mbox{ \tiny \begin{tabular}{c} dimensional KK-reduction \\ along $S^1$-fibration \end{tabular} } } \\ & \fbox{\footnotesize \begin{tabular}{c} 10d type IIA \\ supergravity \end{tabular} } } } \end{equation} Here both \emph{perturbative string theory} as well as \emph{higher-dimensional quantum supergravity} may, with some effort, be well-defined as \emph{perturbative} S-matrix theories (e.g. \cite[Sec. 12.5]{Polchinski01}\cite{Witten12} and \cite{Donoghue95}\cite{BFR13}); and the conjecture is that there is a joint \emph{non-perturbative} completion of 11-dimensional supergravity and of IIA string theory, hence of any effective quantum field theory approximating the latter. Even though the actual nature of M-theory has remained an open problem \cite[Sec. 12]{Moore14}\footnote{At least in mathematics it is not uncommon that a theory is conjectured to exist before its actual nature is known---famous examples of this include the theory of \emph{motives}, which has meanwhile been discovered, and the \emph{field with one element.}}, there is a huge and steadily growing network of hints supporting the M-theory conjecture. 
This should be regarded in light of the situation of perturbative quantum field theory as used in the Standard Model of particle physics, where the identification of any aspect of its non-perturbative completion is a crucial but wide-open problem, referred to as one of the ``millennium problems'' \cite{ClayInstitute}. \medskip In a series of articles \cite{FSS13, FSS16a, FSS16b, HuertaSchreiber17}, we have shown that a systematic analysis of the \emph{Green--Schwarz sigma-models} (which define \emph{fundamental} super $p$-branes, such as the fundamental membrane that gives M-theory its name) from the point of view of \emph{super homotopy theory} provides a concrete handle on some previously elusive aspects of M-theory; see \cite{Schreiber17c, FSS19} for exposition of this perspective. Specifically, in the companion article \cite{ADE} it is shown that the existence and classification of \emph{black M-branes at real ADE singularities} can be systematically derived and analyzed in the supergeometric enhancement of \emph{equivariant homotopy theory} (see \cite{Blu17}). However, the M-theory folklore suggests (\cite[Sec. 2]{Sen97}, see e.g. \cite[Sec. 6.3.3]{IbanezUranga12}, also \cite{AcharyaGukov04}) that understanding black branes at ADE-singularities also holds the key to the all-important, widely expected yet still mysterious phenomenon of \emph{gauge enhancement} in M-theory. This suggests that the super homotopy theoretic analysis may also shed light on the true nature of gauge enhancement in M-theory. Here we show that this is indeed the case. \medskip In the remainder of this introduction we review the issue of gauge enhancement in various guises, survey what is known, what is conjectured, and which problems remain essentially unsolved. After establishing some mathematical results in Sec. \ref{HomotopyTheory} and Sec. \ref{TheATypeOrbispaceOfThe4Sphere}, we explain our solution to the gauge enhancement problem in Sec. \ref{TheMechanism}. 
\medskip \noindent {\bf Double dimensional reduction.} At the heart of the matter is \emph{double dimensional reduction}, originally due to \cite{DuffHoweInamiStelle87, Townsend95b} and whose rigorous formulation from \cite[Sec. 3]{FSS16a} \cite[Sec. 3]{FSS16b} we recall and expand upon below in Sec. \ref{TheMechanism}. Going back to ideas of Kaluza and Klein a century ago, in ordinary \emph{dimensional reduction}, we take spacetime to be a pseudo-Riemannian $S^1$-fibration over a $(D-1)$-dimensional base space and consider the limit in which the circle fiber becomes infinitesimal. In this limit, we obtain a field theory on the $(D-1)$-dimensional base space of the circle fibration, hence in a lower-dimensional spacetime, which has a larger space of field species including the Fourier modes of the original fields on the circle fiber. \begin{equation} \label{DD} \raisebox{44pt}{ \scalebox{.9}{ \xymatrix@C=-32pt{ Y \ar[dd]^{ \pi_{{}_{S^1}} } &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,& \fbox{ \begin{tabular}{c} 11-dimensional \\ spacetime \end{tabular} } \ar[dd]|{ \mbox{ \footnotesize \begin{tabular}{c} \tiny circle bundle \\ \tiny projection \end{tabular} } } &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,& \fbox{ \begin{tabular}{c} $D = 11$ supergravity/ \\ M-theory\end{tabular} } \ar@{|->}[dd]|{ \vphantom{\big(}\mbox{ \tiny dimensional reduction } } &\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,& & \fbox{ \begin{tabular}{c} $p$-brane \end{tabular} } \ar@{|->}[ddl]|{ \phantom{A \atop A} } \ar@{|->}[ddr]|{ \phantom{ A \atop A } } \ar@{}[dd]|{ \mbox{ \begin{tabular}{c} \tiny double dimensional reduction \end{tabular} } } \\ \\ X = Y /\!\!/ S^1 &\,\,\,& \fbox{ \begin{tabular}{c} 10-dimensional \\ spacetime \end{tabular} } &\,\,\,& \fbox{ \begin{tabular}{c} $D=10$ supergravity / \\ type IIA string theory \end{tabular} } &\,\,\,& \fbox{ $(p-1)$-brane } && \fbox{ 
$p$-brane } }} } \end{equation} Secondly, one finds higher-dimensional analogs of black holes in higher-dimensional supergravity, with $(p+1)$-dimensional singularities, called \emph{black} (or ``solitonic'') \emph{$p$-branes} \cite{DIPSS88}. Under dimensional reduction of the ambient supergravity theory, the singularity of a black $p$-brane may or may not extend along the circle fiber that is being shrunk away. If it does not, then the result is again a black $p$-brane solution, now in the lower-dimensional supergravity theory. But if it does, then along with reduction in spacetime dimension from $D$ to $(D -1)$, the black $p$-brane singularity effectively appears as a black $(p-1)$-brane solution in the lower-dimensional supergravity theory; whence ``double dimensional reduction''. \medskip \noindent {\bf Chan--Paton gauge enhancement.} In its most immediate (albeit naive) form, the formulation of the problem of \emph{gauge enhancement} in M-theory proceeds from the expected double dimensional reduction \eqref{DD} of the black branes of M-theory (see \cite{ADE} for a precise account), to those of type IIA string theory, which for black branes looks as follows \cite{Townsend95, Townsend95b, FSa, FS}: $$ \mathpalette\mathclapinternal{ \xymatrix@C=-1pt@R=1.2em{ \fbox{\hspace{-3mm} \footnotesize \begin{tabular}{c} Black brane species \\ in M-theory \end{tabular} \hspace{-3mm} } \ar[dd]|{\mbox{\tiny double dimensional reduction }} & & \mathrm{MW} \ar@{|->}[ddl] & & \mathrm{M2} \ar@{|->}[ddl] \ar@{|->}[ddr] && && \mathrm{M5} \ar@{|->}[ddl] \ar@{|->}[ddr] && \mathrm{MK6} \ar@{|->}[ddr] &&& & \mathrm{MO9} \ar@{|->}[ddl] & \fbox{ \begin{tabular}{c} \multirow{2}{*}{ \color{gray} unknown } \\ $\phantom{A}$ \end{tabular} } \ar[dd] \\ \\ \fbox{\hspace{-3mm} \footnotesize \begin{tabular}{c} Black brane species in \\ type IIA string theory \end{tabular} \hspace{-3mm} } & \mathrm{D0} && \mathrm{NS1} && \mathrm{D2} && \mathrm{D4} && \mathrm{NS5} && {\mathrm{D6}} && {\mathrm{O8}} 
&& \fbox{\hspace{-2mm}\footnotesize \begin{tabular}{c} Chan--Paton \\ gauge enhancement \end{tabular} \hspace{-3mm}} }} $$ From perturbative string theory one finds that open fundamental strings ending on the D-branes behave as quanta for an abelian $U(1)$-gauge theory (i.e. electromagnetism) on the worldvolume of the D-brane. A widely accepted but informal\footnote{ In \cite[first line on p. 8]{Witten96} the argument was introduced as an ``obvious guess''. Most subsequent references cite this as a fact, e.g. the review \cite[Sec. 3]{Myers03}, despite the lack of a formal argument. } argument \cite[Sec. 3]{Witten96} indicates that if $N$ such D-branes are \emph{coincident} then the gauge group \emph{enhances} from the abelian group $(U(1))^N $ to the non-abelian group $U(N)$. The idea is that massless open fundamental strings stretch in $N \times N$ possible ways between the $N$ coincident D-branes, thus constituting gauge bosons that organize in $N \times N$ unitary matrices, called \lq\lq Chan--Paton factors''. However, it is non-trivial to check that scattering of these open strings reproduces the scattering amplitudes of gauge bosons for non-abelian gauge theory (Yang-Mills theory). First approximate numerical checks of this idea are due to \cite{ColettiSigalovTaylor03} and similar numerical checks as well as an exact derivation under simplifying assumptions are given in \cite{BerkovitsSchnabl03}; a full derivation was claimed in \cite{Lee17}. \medskip This phenomenon of \emph{gauge enhancement on D-branes} is of paramount importance for string theory, in particular as a candidate for a theory of realistic physics. The fundamental gauge fields observed in nature, per the Standard Model of particle physics, do of course involve non-abelian gauge groups corresponding to the weak and strong nuclear forces. 
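\medskip \noindent Schematically, the dimension count behind this expected enhancement is elementary:
$$
  \underbrace{U(1) \times \cdots \times U(1)}_{N\;\mathrm{factors}}
  \;\longrightarrow\;
  U(N)
  \,,
  \qquad
  \dim\, U(N) \;=\; N^2
  \,.
$$
The $N$ diagonal Chan--Paton entries correspond to open strings beginning and ending on the same D-brane, recovering the abelian factors $(U(1))^N$, while the $N^2 - N$ off-diagonal entries correspond to strings stretched between distinct D-branes; it is these that become massless precisely in the coincidence limit, completing the adjoint representation of $U(N)$.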
While Kaluza--Klein dimensional reduction from 11-dimensional supergravity may exhibit such non-abelian gauge groups, this happens in a manner incompatible with realistic coupling of fermionic matter fields \cite{Witten81}. Instead, realistic gauge fields must arise from gauge enhancement on coincident D-branes (see \cite[Sec. 10]{IbanezUranga12}). Moreover, all discussion of modern string theoretic topics such as \emph{AdS/CFT duality} (see \cite{AGMOO99}) or \emph{geometric engineering of gauge theories} (going back to \cite{HananyWitten97}, see e.g. \cite{Fazzi17}), such as for classification of 6d SCFTs (as in \cite{ZHTV15}), depend crucially on gauge enhancement on coincident D-branes. But, under the M-theory conjecture, double dimensional reduction should exhibit an equivalence (``duality'') between (strongly coupled) type IIA string theory and M-theory, in particular between the full non-perturbative theory of D-branes and their M-brane pre-images. Hence if M-theory exists, then gauge enhancement on coincident D-branes must correspond to, and is potentially explained by, a corresponding phenomenon on M-branes. The most immediate incarnation of the problem of gauge enhancement in M-theory can, therefore, be succinctly phrased as: \vspace{.2cm} \noindent {\bf Open Problem, version 1:} \vspace{-2mm} \begin{quote} {\it What is the lift to M-theory of the non-abelian Chan--Paton gauge field degrees of freedom on coincident D-branes?} \end{quote} \noindent Since the string theory literature tends to blur the distinction between what is known and what is conjectured, we briefly highlight what the folklore on this problem (e.g. \cite[Sec. 
6.3.3]{IbanezUranga12}, \cite{AcharyaGukov04}) does and does not achieve: \begin{itemize} \vspace{-2mm} \item A celebrated recent result (see \cite{BLMP13} for a review) shows the existence of a class of non-abelian gauge field theories that are plausible candidates for the worldvolume theories expected to live on black M2-branes sitting at ADE singularities (the \lq\lq BLG-model'' \cite{BL, Gu} and, more generally, the \lq\lq ABJM-model'' \cite{ABJM08}). However, a derivation of these field theories from M-theoretic degrees of freedom is missing; the argument rests solely on consistency checks. \vspace{0mm} \item On the other hand, a conjectural sketch of an explicit derivation does exist for the M-brane species $\mathrm{MK6}$, whose image in the low-energy approximation provided by 11-dimensional supergravity is supposed to be the \emph{Kaluza--Klein monopole spacetime} and which becomes the black D6-brane under dimensional reduction \cite{Townsend95}, \cite[Sec. 1]{Sen97}. For an \emph{abelian} gauge field on the D6-brane, a straightforward analysis shows that it is sourced by doubly dimensionally reduced M2-branes ending on the KK-monopole \cite[Sec. 2]{GomezManjarin02}, \cite[Sec. 5.4]{Manjarin04}. But more generally, KK monopoles may be argued to be the fixed point loci of spacetime orbifolds locally of the form $\mathbb{R}^{6,1} \times \mathbb{C}^2 /\!\!/ G_{\mathrm{ADE}}$ for a finite subgroup $G_{\mathrm{ADE}} \subset SU(2)$.
A classical theorem \cite{DuVal} (see \cite{Reid} for a review) implies that such singularities are canonically resolved by spheres touching along the shape of a simply-laced Dynkin diagram: \begin{center} \includegraphics[width=.47\textwidth]{Dynkin-with-labels-1} \hspace{-1mm} \includegraphics[width=.37\textwidth]{Dynkin-with-labels-2} \end{center} If one here imagines that M2-branes wrap these \emph{vanishing 2-cycles} then, under double dimensional reduction, this situation looks like an M-theoretic lift of the strings stretching between several coincident D6-branes, as indicated in the figure above, and hence like an M-theoretic lift of the Chan--Paton gauge enhancement mechanism discussed above. This argument goes back to \cite[Sec. 2]{Sen97}; for review see \cite[Sec. 6.3.3]{IbanezUranga12}. While this story is appealing, it is unsatisfactory that it has to treat the membrane in 11-dimensional spacetime as a direct analogue of the fundamental string in 10-dimensional spacetime; after all, the very term \lq\lq M-theory'' instead of \lq\lq Membrane theory'' was chosen as a reminder that this direct analogy is too naive.\footnote{\cite{HoravaWitten96a}: ``{\it As it has been proposed that [this] theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committedly call it the $M$-theory, leaving to the future the relation of $M$ to membranes.}'' } \end{itemize} In short, the traditional attempts to understand gauge enhancement on M-branes suffer from the lack of any handle on the actual nature of M-theory. But what is worse, these stories argue for a picture that has meanwhile come to be regarded as inaccurate: \medskip \noindent {\bf K-Theoretic gauge enhancement.} A well-known series of arguments \cite{MinasianMoore97, Witten98, FreedWitten99, FreedHopkins00, MooreWitten00} shows that the gauge fields carried by D-branes do not actually have well-defined existence in themselves.
Instead, one must view all the $\mathrm{D}p$-branes taken together and then their gauge fields serve as representatives for classes in \emph{twisted K-theory} \cite{BouwknegtMathai00, Witten00, Freed00, BCMMS02, BEV03, MathaiSati04, EvslinSati06, Evslin06}. It is only these twisted K-theory classes that are supposed to have an intrinsic meaning. In other words, non-abelian gauge fields on separate D-brane species are much like coordinate charts on spacetime: a convenient but non-invariant means of presenting a specific structure. In actual reality, the D-brane species $\mathrm{D}p$, $p \in \{0,2,4,6,8\}$ only have a unified joint existence and gauge enhancement on separate D-brane species is only one presentation, out of many, of a unified \emph{higher gauge field}: a cocycle in twisted K-theory. In view of this, the problem of gauge enhancement in M-theory is really the following:\footnote{ While \emph{a derivation of K-theory from M-theory} is suggested by the title of \cite{ADerivationofK}, that article only checks that the behavior of the partition function of the 11d supergravity $C$-field is compatible with the \emph{a priori} K-theory classification of D-branes. Seeking a generalized cohomology describing the M-field and M-branes was originally advocated for in \cite{S1, S2, S3, S4}. } \vspace{.2cm} \noindent {\bf Open Problem, version 2:} \vspace{-2mm} \begin{quote} {\it What is the cohomology theory classifying M-brane charge, and how does double dimensional reduction reduce it to the classification of D-brane charge in twisted K-theory?} \end{quote} \noindent Hence the refined perspective of twisted K-theory shifts the focus away from Chan--Paton-like gauge enhancement on one particular D-brane species (which seems to have no invariant meaning in the full theory) and instead highlights the problem of how the full list of D-brane species arises and carries a unified charge in twisted K-theory.
However, this is in conflict with the M-theoretic origin of the D-branes in the traditional story recalled above: only the $\mathrm{M2}$-brane and $\mathrm{M5}$-brane exist as \emph{fundamental branes}, meaning that they have corresponding Green--Schwarz type sigma-models (we recall this in detail below in Sec. \ref{FundamentalpBranes}). The double dimensional reduction of these yields the fundamental brane species in type IIA string theory, \emph{except} the $\mathrm{D6}$ and the $\mathrm{D8}$ (while the $\mathrm{D0}$ now encodes the circle fibration itself): $$ \xymatrix@C=6pt@R=1.2em{ \fbox{ \begin{tabular}{c} \footnotesize Fundamental brane species \\ \footnotesize in M-theory \end{tabular} } \ar[dd]|{\mbox{\tiny double dimensional reduction }} & && & \mathrm{M2} \ar@{|->}[ddl] \ar@{|->}[ddr] && && \mathrm{M5} \ar@{|->}[ddl] \ar@{|->}[ddr] && \\ \\ \fbox{ \begin{tabular}{c} \footnotesize Fundamental brane species \\ \footnotesize in type IIA string theory \end{tabular} } & ~~~~~~ \mathrm{D0} && \mathrm{F1} && \mathrm{D2} && \mathrm{D4} && \mathrm{NS5} && {\color{gray} \mathrm{D6}} && {\color{gray} \mathrm{D8}} } $$ On the other hand, up to now the M-theoretic origin of the $\mathrm{D6}$-brane has only been argued in its \emph{black brane} incarnation, which is supposed to be given by the $\mathrm{MK6}$ as recalled above. The nature of the $\mathrm{D8}$-brane is yet more subtle \cite{BRGPT96}---an M-theory lift has been proposed in \cite{Hull98}, but the proposal is not among the usual list of expected black M-branes. Meanwhile, the reduction of the $\mathrm{MO9}$-brane, whose existence in M-theory is solid (see \cite{ADE}), is not the black $\mathrm{D}8$-brane, but rather the $\mathrm{O}8$-plane (e.g. \cite[Sec. 3]{GKST01}).
This highlights that the core of the open problem, from the refined K-theoretic perspective, is really in the appearance of the D6- and D8-brane: \vspace{.2cm} \noindent {\bf Open Problem, version 3:} \vspace{-2mm} \begin{quote} \noindent {\it What is the lift to M-theory of the fundamental D6-branes and D8-branes in type IIA string theory, such that the unified $\mathrm{D}p$-branes are jointly classified by twisted K-theory?} \end{quote} \vspace{.2cm} \noindent {\bf K-Theoretic gauge enhancement in rational approximation}. Twisted K-theory is a comparatively complicated structure, and the fine detail of which of its various variants really applies to D-branes is still the subject of discussion (\cite{KS2, DFM09, S4, GS19}). Of course the glaring problem here is, once more, that the non-perturbative theory that ought to answer this question is missing. It is worth highlighting that the issue of gauge enhancement is visible, and has remained unresolved, already in the \emph{rational} approximation (i.e. ignoring all torsion-group effects), where a cocycle in twisted K-theory reduces to a cocycle in \emph{twisted de Rham cohomology}. Associated to each brane species is a differential form on spacetime---its \emph{flux form}---which corresponds to the brane in analogy to the correspondence between the \emph{Faraday tensor} and charged particles (0-brane) in electromagnetism. The double dimensional reduction of these flux forms is as follows (see \cite[Sec 4.2]{MathaiSati04}), parallel to the pattern of the double dimensional reduction of the fundamental branes. 
\begin{equation} \label{Gysin} \hspace{-5mm} \mathpalette\mathclapinternal{ \raisebox{37pt}{ \xymatrix@C=-.1em@R=1.2em{ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Flux forms in \\ \footnotesize $D = 11$, $\mathcal{N} = 1$ supergravity \end{tabular} \hspace{-3mm}} \ar[dd]|{ \vphantom{\big(}\mbox{\tiny double dimensional reduction }} & && & G_4 \ar@{|->}[ddl] \ar@{|->}[ddr] && && G_7 \ar@{|->}[ddl] \ar@{|->}[ddr] && &&&& \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Cocycle on $D = 11$ spacetime \\ \footnotesize in rational cohomotopy \end{tabular} \hspace{-3mm}} \ar[dd]|-{ \vphantom{\big(}\mbox{ \tiny Gysin sequence } } \\ \\ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Flux forms in \\ \footnotesize $D =10$, $\mathcal{N} = (1,1)$ supergravity \end{tabular} \hspace{-3mm}} & F_2 && H_3 && F_4 && F_6 && H_7 && {\color{gray} F_8} && {\color{gray} F_{10}} & \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Cocycle on $D = 10$ spacetime \\ \footnotesize in rational 6-truncated \\ \footnotesize twisted K-theory \end{tabular} \hspace{-3mm}} }}} \end{equation} \noindent Here the right-hand sides have been recognized in \cite[Sec. 2.5]{S-top} \cite{cohomotopy} for the top part and derived in \cite{FSS16a, FSS16b} for the bottom part; we discuss this in detail below in Sec. \ref{With}. Notice that the $F_2$-contribution does arise under plain double dimensional reduction, though not directly from $(G_4,G_7)$: rather, it is a rational image of the first Chern class of the circle fibration itself.
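\medskip \noindent Explicitly, if $\theta$ denotes an Ehresmann connection 1-form on the circle fibration, with curvature $d\theta = \pi^\ast F_2$, then, schematically and up to convention-dependent signs (cf. \cite[Sec 4.2]{MathaiSati04}), the double dimensional reduction of the M-theory flux forms reads
$$
  G_4 \;=\; \pi^\ast F_4 \,+\, \pi^\ast H_3 \wedge \theta
  \,,
  \qquad
  G_7 \;=\; \pi^\ast H_7 \,+\, \pi^\ast F_6 \wedge \theta
  \,,
$$
so that $F_2$ enters not as a component of $(G_4, G_7)$ but only through the curvature of the fibration itself.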
Moreover, these differential forms satisfy relations (``twisted Bianchi identities'') that identify them, via the de Rham theorem, as cocycles in the rationalization of a generalized cohomology theory: \begin{center} \begin{tabular}{|c||c|c|} \hline \begin{tabular}{c} Flux \\ forms \end{tabular} & \begin{tabular}{c} \bf Twisted \\ \bf Bianchi identity \end{tabular} & \begin{tabular}{c} \bf Rational \\ \bf cocycle in \end{tabular} \\ \hline \hline {\bf M-theory} & $ \begin{aligned} d G_4 & = 0 \\ d G_7 & = -\tfrac{1}{2}G_4 \wedge G_4 \end{aligned} $ & \begin{tabular}{c} cohomotopy \\ in degree 4 \end{tabular} \\ \hline \begin{tabular}{c} \bf Type IIA \\ \bf string theory \end{tabular} & $ \begin{aligned} d H_3 & =0 \\ d F_{2p + 4} & = H_3 \wedge F_{2 p + 2} \end{aligned} $ & \begin{tabular}{c} twisted K-theory \\ in even degree \end{tabular} \\ \hline \end{tabular} \end{center} In this rational approximation, the core of the gauge enhancement problem is still fully visible: \vspace{.2cm} \label{FirstRationalProblem} \noindent \hypertarget{FirstRational}{{\bf Open Problem, rational version 1:}} \vspace{-2mm} \begin{quote} {\it What is the origin of the RR-flux forms $F_8$ and $F_{10}$ in M-theory, such that these unify with the double dimensional reduction of the M-flux $(G_4, G_7)$ to an un-truncated cocycle in rational twisted K-theory (i.e. in twisted de Rham cohomology)?} \end{quote} \noindent \rm We present a solution to this version of the problem in Sec. \ref{With}. Though working in the rationalized setting means that we are disregarding all torsion \footnote{This torsion is in the sense of cohomology or homotopy classes. In the following paragraph we use torsion in the sense of differential (super)geometry. We hope that the distinction will be clear from the context.} information for the time being, it has the striking advantage that these rationalized relations follow rigorously from a \emph{first principles}-definition of M-branes (recalled below in Sec. 
\ref{FundamentalpBranes}), and hence serve as a starting point for a systematic analysis of the problem of gauge enhancement. \medskip To properly take local supersymmetry into account, one has to consider the refinement of the plain flux forms to super-flux forms on super-spacetime. The torsion-freeness constraints of supergravity geometrically require \cite{Lott90, EE} the bifermionic component of these super-flux forms to be covariantly constant on each super tangent space (see \cite[Sec. 1]{Higher-T}), where they correspond to those cocycles $\mu_{{}_{p+2}}$ in the supersymmetry super Lie algebra cohomology defining the Green--Schwarz-type sigma-models for the fundamental $p$-branes \cite{AETW87, AzTo89}. This identification locates the problem in the precise context of \emph{super homotopy theory} of super-spacetimes. The beauty of this is that homotopy theory is governed by \emph{universal constructions}, which, roughly, means that it exhibits the emergence of \lq\lq god-given'' structures from a minimum of input. In fact, in super homotopy theory the super-cocycles $\mu_{{}_{M2}}$ and $\mu_{{}_{M5}}$ witnessing the fundamental M-branes emerge by a universal construction (a kind of equivariant Whitehead tower) from nothing but the superpoint \cite{FSS13, HuertaSchreiber17}, and their double dimensional reduction is reflected by another universal construction \cite[Sec. 3]{FSS16a} \cite[Sec. 3]{FSS16b}---the \emph{$\mathrm{Ext}/\mathrm{Cyc}$-adjunction} (discussed in detail in Sec. \ref{TheAdjunction} below). 
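\medskip \noindent For reference, the dictionary between the twisted Bianchi identities displayed above and rational homotopy theory proceeds via minimal Sullivan models: in the normalization used above, the minimal model of the rationalized 4-sphere is the free differential graded-commutative algebra
$$
  \mathrm{CE}\big(\mathfrak{l}S^4\big)
  \;=\;
  \Big(
    \mathbb{R}[\, g_4, g_7 \,]
    \,,\;
    d\, g_4 = 0
    \,,\;
    d\, g_7 = -\tfrac{1}{2}\, g_4 \wedge g_4
  \Big)
  \,,
  \qquad
  \deg(g_4) = 4
  \,,\;
  \deg(g_7) = 7
  \,,
$$
so that a pair $(G_4, G_7)$ satisfying the M-theory Bianchi identities is equivalently a dg-algebra homomorphism from this model to the de Rham complex, sending $g_4 \mapsto G_4$ and $g_7 \mapsto G_7$. This is the precise sense in which such flux forms constitute cocycles in rational cohomotopy in degree 4.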
\begin{equation} \label{DDCoc} \hspace{-3mm} \mathpalette\mathclapinternal{ \raisebox{45pt}{ \xymatrix@C=-.1pt@R=1.2em{ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Fundamental branes \\ \footnotesize in M-theory \end{tabular} \hspace{-3mm}} \ar[dd]|{ \vphantom{\big(}\mbox{\tiny double dimensional reduction }} & && & \mu_{{}_{\mathrm{M2}}} \ar@{|->}[ddl] \ar@{|->}[ddr] && && \mu_{{}_{\mathrm{M5}}} \ar@{|->}[ddl] \ar@{|->}[ddr] && &&&& \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Super-cocycle \\ \footnotesize on super-spacetime \\ \footnotesize in rational cohomotopy \end{tabular} \hspace{-3mm}} \ar[dd]|{ \vphantom{\big(} \mbox{ \tiny cyclification } } \\ \\ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Fundamental branes \\ \footnotesize in type IIA string theory \end{tabular} \hspace{-4mm} } & \mu_{{}_{\mathrm{D0}}} && \mu_{{}_{\mathrm{F1}}} && \mu_{{}_{\mathrm{D2}}} && \mu_{{}_{\mathrm{D4}}} && \mu_{{}_{\mathrm{NS5}}} && {\color{gray} \mu_{{}_{\mathrm{D6}}}} && {\color{gray} \mu_{{}_{\mathrm{D8}}}} & \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Super-cocycle \\ \footnotesize on super-spacetime \\ \footnotesize in rational 6-truncated \\ \footnotesize twisted K-theory \end{tabular} \hspace{-3mm}} }}} \end{equation} \begin{center} \begin{tabular}{|c||c|c|} \hline \begin{tabular}{c} Fundamental brane \\ super-cocycle \end{tabular} & \begin{tabular}{c} \bf Cocycle \\ \bf condition \end{tabular} & \begin{tabular}{c} \bf in rational \\ \bf cohomology theory \end{tabular} \\ \hline \hline {\bf M-theory} & $ \begin{aligned} d \mu_{{}_{M2}} & = 0 \\ d \mu_{{}_{M5}} & = -\tfrac{1}{2} \mu_{{}_{M2}} \wedge \mu_{{}_{M2}} \end{aligned} $ & \begin{tabular}{c} cohomotopy \\ in degree 4 \end{tabular} \\ \hline \begin{tabular}{c} \bf Type IIA \\ \bf string theory \end{tabular} & $ \begin{aligned} d \mu_{{}_{F1}} & =0 \\ d \mu_{{}_{2p+2}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{2p}} \end{aligned} $ & \begin{tabular}{c} twisted K-theory \\ in even degree 
\end{tabular} \\ \hline \end{tabular} \end{center} \vspace{1mm} \noindent Here (rational) \emph{cohomotopy} in degree 4 is the generalized \emph{non-abelian} cohomology theory represented by the (rationalized) 4-sphere, meaning that the joint $\mathrm{M2}/\mathrm{M5}$-brane cocycle is a morphism in the rational super homotopy category of the form \cite{cohomotopy,FSS16a} $$ \xymatrix{ \mathbb{R}^{10,1\vert \mathbf{32}} \ar[rr]^-{ \mu_{{}_{M2/M5}} } && S^4 } \phantom{AAA} \in \mathrm{Ho}\left( \mathrm{SuperSpaces}_{\mathbb{R}} \right). $$ That cohomotopy governs the M-brane charges this way, at least rationally, was first proposed and highlighted in \cite[Sec. 2.5]{S-top}. \emph{A priori} there are many homotopy types that look like the 4-sphere in the rational approximation; however in \cite{ADE} further precise evidence was provided to demonstrate the sense in which the 4-sphere is the correct coefficient for M-brane charge. Namely, the 4-sphere coefficient is naturally identified with the 4-sphere around a black M5-brane singularity in $D = 11$ supergravity and this identification induces a real structure on the 4-sphere together with actions of the ADE subgroups of $SU(2)$ that are compatible with the corresponding BPS actions on super-spacetime. It is therefore natural to ask for enhancements of the $\mathrm{M2}/\mathrm{M5}$-brane cocycle to \emph{equivariant} cohomotopy $$ \xymatrix{ \mathbb{R}^{10,1\vert \mathbf{32}} \ar@(ul,ur)[]^{ G_{\mathrm{ADE}} \times G_{\mathrm{HW}} } \ar[rr]^-{ \widehat{\mu}_{{}_{M2/M5}} } && S^4 \ar@(ul,ur)[]^{ G_{\mathrm{ADE}} \times G_{\mathrm{HW}} } } \phantom{AA} \in \mathrm{Ho}\big( \left( G_{\mathrm{ADE}} \times G_{\mathrm{HW}} \right) \mathrm{\mbox{-}SuperSpaces}_{\mathbb{R}} \big). 
$$ Here $G_{\mathrm{ADE}} \subset SU(2)$ is a finite subgroup as per the ADE-classification that acts by orientation-preserving super-spacetime automorphisms, while $G_{\mathrm{HW}} = \mathbb{Z}_2$ is an orientation-reversing reflection as in Ho{\v r}ava--Witten theory. \medskip It is shown in \cite{ADE} that such an equivariant enhancement exists and makes the \emph{black branes} at ADE singularities appear. This results in a unified framework for black and fundamental M-branes. In particular, the corresponding A-series actions on the 4-sphere factor through the $U(1)$-action $$ \xymatrix{ S^4 \ar@(ul,ur)[]^{ S^1} } \;:=\; S( \mathbb{R} \oplus \!\!\!\!\xymatrix{\mathbb{C}^2 \ar@(ul,ur)[]^{\rm U(1) } }\!\!\! ) \,\subset\, S( \mathbb{R} \oplus \!\!\!\!\xymatrix{\mathbb{C}^2 \ar@(ul,ur)[]^{\rm SU(2) } }\!\!\!\! ) $$ obtained as the suspension of the circle action on the complex Hopf fibration $H_\mathbb{C}\colon S^3 \to S^2$. The projection to the corresponding homotopy quotient is identified with the M-theory $S^1$-fibration in the near horizon geometry of an M5-brane: \hspace{-15mm} \begin{equation} \label{SpacetimeAndATypeOrbispace} \mathpalette\mathclapinternal{ \raisebox{60pt}{\xymatrix@C=1em@R=1.3em{ & \ar@{}[rr]|{ \fbox{ M2/M5-brane cocycle } } && \\ \fbox{ \hspace{-3mm} \begin{tabular}{c} 11d super-spacetime \end{tabular} \hspace{-3mm} } & \mathbb{R}^{10,1\vert \mathbf{32}} \ar[dd] \ar[rr]^-{ \mu_{{}_{M2/M5}} } && S^4 \ar[dd] & \fbox{\hspace{-1mm} 4-sphere coefficient \hspace{-1mm}} \\ \\ \fbox{\hspace{-1mm} 10d super-spacetime \hspace{-1mm}} & \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \simeq \mathbb{R}^{10,1\vert \mathbf{32}} /\!\!/ S^1 && S^4 /\!\!/ S^1 & \fbox{\hspace{-3mm} \begin{tabular}{c} A-type orbispace \\ of the 4-sphere \end{tabular} \hspace{-3mm} } }}} \end{equation} \medskip\noindent In conclusion, the problem of gauge enhancement in M-theory in the rational approximation, but otherwise proceeding from \emph{first principles}, 
reads as follows: \vspace{.2cm} \label{OpenRationalPage} \noindent \hypertarget{OpenRational}{{\bf Open Problem, rational version 2:}} \vspace{-2mm} \begin{quote} {\it Which universal construction in rational super homotopy theory enhances the cyclification of the $\mathrm{M2}/\mathrm{M5}$-cocycle from 6-truncated to un-truncated rational twisted K-theory? } \end{quote} \vspace{.2cm} \noindent {\bf Gauge enhancement explained.} \rm It is this version of the problem to which we present a solution. First we explain and analyze two relevant universal constructions in homotopy theory: \begin{enumerate}[{\bf (i)}] \item \emph{Fiberwise stabilization} (in Sec. \ref{RationalParameterizedStableHomotopyTheory}) and \item \emph{the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction} (in Sec. \ref{TheAdjunction}). \end{enumerate} We then consider (in Sec. \ref{RationalHomotopyTypeOfATypeOrbispace}) the rational homotopy type of the \emph{A-type orbispace of the 4-sphere}, as in \eqref{SpacetimeAndATypeOrbispace} above, and we apply the two aforementioned universal constructions to it (in Sec. \ref{RationalUnitOnAType}). Our main result, Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}, shows that rational untruncated twisted K-theory appears as a direct summand in the fiberwise stabilization of the $\mathrm{Ext}/\mathrm{Cyc}$-unit on the A-type orbispace of the 4-sphere. \medskip Since homotopy theory is immensely rich and computationally demanding (see, e.g., \cite{Ravenel03, HHR09}), one often simplifies calculations by working in successive approximations, such as in the filtration by \emph{chromatic layers}. The first of these approximations is \emph{rational homotopy theory} (e.g. \cite{Hess06}), obtained by disregarding all torsion elements in homotopy and cohomology groups. The model for \emph{rational parametrized stable homotopy theory} that we use in our computations had been conjectured in \cite[p.
20]{FSS16a} and was subsequently worked out in \cite{Bra18}. This model allows us to reveal a deeper meaning behind a curious dg-algebraic observation due to \cite{RS} (recalled as Prop. \ref{MinimalDGModels} below), culminating in our main Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}, below. However, since the gauge enhancement mechanism that we present is obtained by \emph{universal constructions} (specifically the \emph{derived adjunctions} discussed in Sec. \ref{HomotopyTheory}), the lift of the mechanism beyond the rational approximation certainly exists, but is just much harder to analyze. \medskip Finally, in Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8} we apply Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} to the double dimensional reduction of the fundamental M-brane cocycles. We show that this solves \hyperlink{OpenRational}{\bf Open Problem, rational version 2} by making the D6- and D8-brane cocycles appear and by exhibiting a single unified super-cocycle in rational un-truncated twisted K-theory: $$ \hspace{-0.1cm} \xymatrix@C=.5pt@R=18pt{ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Fundamental branes \\ \footnotesize in M-theory \end{tabular} \hspace{-3mm}} \ar[dd]|{\mbox{\tiny \begin{tabular}{c} {\color{blue} enhanced} \\ double dimensional reduction \end{tabular} }} & && & \mu_{{}_{\mathrm{M2}}} \ar@{|->}[ddl] \ar@{|->}[ddr] && && \mu_{{}_{\mathrm{M5}}} \ar@{|->}[ddl] \ar@{|->}[ddr] && &&&& \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Super-cocycle \\ \footnotesize on super-spacetime \\ \footnotesize in rational cohomotopy \end{tabular} \hspace{-4mm} } \ar[dd]|{ \mbox{\tiny \begin{tabular}{c} {\color{blue} fiberwise stabilized } \\ \tiny cyclification adjunction \end{tabular} } } \\ \\ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Fundamental branes \\ \footnotesize in type IIA string theory \end{tabular} \hspace{-3mm}} & 
\mu_{{}_{\mathrm{D0}}} && \mu_{{}_{\mathrm{F1}}} && \mu_{{}_{\mathrm{D2}}} && \mu_{{}_{\mathrm{D4}}} && \mu_{{}_{\mathrm{NS5}}} && {\color{blue} \mu_{{}_{\mathrm{D6}}}} && {\color{blue} \mu_{{}_{\mathrm{D8}}}} & \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Super-cocycle \\ \footnotesize on super-spacetime \\ \footnotesize in rational {\color{blue} un-}truncated \\ \footnotesize twisted K-theory \end{tabular} \hspace{-3mm}} } $$ Notice how all the folkloric ingredients recalled above do appear in this rigorous result, albeit in a somewhat subtle way. First of all, the fact that fundamental branes and black branes are closely related, while still crucially different (particularly in the matter of gauge enhancement), is reflected by how super-cocycles interact with spacetime ADE singularities in the data specifying a real ADE-equivariant cohomotopy class \cite{ADE}. Second, the claim that gauge enhancement in M-theory is connected to the appearance of ADE singularities in spacetime is reflected here in the fact that the untruncated rational twisted K-theory spectrum (which, as we have discussed, is the true rational coefficient for the gauge enhanced brane charges), only appears from fiberwise stabilization of the \emph{equivariant} 4-sphere coefficient. This equivariant coefficient also induces the appearance of singular fixed point strata in spacetime via equivariant enhancement: $$ \xymatrix@C=5em{ & \fbox{ \hspace{-4mm} \begin{tabular}{c} \footnotesize A-type $S^1$-action \\ \footnotesize on coefficient 4-sphere \\ \footnotesize of fundamental M-brane cocycle \end{tabular} \hspace{-4mm} } \ar@{|->}[ddr]|{ \mbox{ \tiny \begin{tabular}{c} fiberwise stabilized \\ $\mathrm{Ext}/\mathrm{Cyc}$-adjunction \\ (Thm. \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}, Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8}) \end{tabular} } } \ar@{|->}[ddl]|{ \mbox{ \tiny \begin{tabular}{c} equivariant \\ enhancement \\ (\cite[Thm. 6.1, Sec. 
2.2]{ADE}) \end{tabular} } } \\ \\ \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Black branes \\ \footnotesize at A-type singularities \end{tabular} \hspace{-3mm}} \ar@{<~>}[rr]^{ \mbox{\tiny M-theory folklore } } && \fbox{\hspace{-3mm} \begin{tabular}{c} \footnotesize Gauge enhancement \\ \footnotesize on M-branes \end{tabular} \hspace{-3mm}} } $$ Of course it remains to lift our result beyond rational homotopy theory. However, we suggest that the rational derivation of gauge enhancement in Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} and Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8} points to its own non-rational refinement. The reason is that the universal constructions that we have used also make sense non-rationally -- they are just much harder to compute. More concretely, we observe that the manner in which the rational version of twisted K-theory appears below is via a twisted, rational version of \emph{Snaith's theorem} (see Rem. \ref{InterpretationOfFiberwiseSuspensionOfATypeOrbispace}). This theorem says that the K-theory spectrum $\mathrm{KU}$ is obtained from the suspension spectrum $\Sigma^\infty_{+} B S^1$ of the classifying space $B S^1$ by adjoining a multiplicative inverse of the Bott generator $\beta$: $$ \Sigma^\infty_{+} B S^1 [\beta^{-1}] \;\simeq_{\mathrm{swhe}}\; \mathrm{KU} \,. $$ Rationally, Snaith's theorem is rather immediate, as is its rational twisted version (Ex. \ref{RationalSnaithTheorem} below) that underlies our identification of rational twisted K-theory in Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}. Since rationalization is the coarsest non-trivial approximation to full homotopy theory (and in this regard quite similar to taking the first derivative of a non-linear function at a single point) it loses plenty of information. 
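\medskip To indicate why the rational statement is essentially immediate (a sketch of the mechanism behind Ex. \ref{RationalSnaithTheorem} below): since $H_\bullet(B S^1;\mathbb{Q}) \simeq \mathbb{Q}[\beta]$ with the Bott generator $\beta$ in degree 2, the rationalized suspension spectrum splits as a sum of even suspensions of the Eilenberg--MacLane spectrum $H\mathbb{Q}$, and inverting $\beta$ extends this sum over all even degrees:
$$
\big( \Sigma^\infty_{+} B S^1 \big)_{\mathbb{Q}}
\;\simeq\;
\bigoplus_{n \geq 0} \Sigma^{2n} H \mathbb{Q}
\qquad \mbox{and hence} \qquad
\big( \Sigma^\infty_{+} B S^1 [\beta^{-1}] \big)_{\mathbb{Q}}
\;\simeq\;
\bigoplus_{n \in \mathbb{Z}} \Sigma^{2n} H \mathbb{Q}
\;\simeq\;
\mathrm{KU}_{\mathbb{Q}}
\,,
$$
where the last equivalence is rationalized Bott periodicity, $\pi_\bullet(\mathrm{KU}) \otimes \mathbb{Q} \simeq \mathbb{Q}[\beta^{\pm 1}]$.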
\medskip \emph{A priori}, what looks like twisted K-theory in the rational approximation could correspond non-rationally to different twisted cohomology theories (see \cite{S4, GS17, GS19} for discussions in this context). However, we do not just see rational twisted K-theory in isolation, but rather appearing after applying universal constructions to the A-type orbispace of the 4-sphere. For our main theorem on gauge enhancement, these universal constructions are what really matter. Therefore, any non-rational lift of our gauge enhancement mechanism should arise by the same universal construction, applied non-rationally, possibly in conjunction with other universal constructions that are rationally invisible. This considerably constrains the possibilities. The conclusion we draw is that (fiberwise) inversion of the Bott generator is a good candidate for lifting our gauge enhancement mechanism beyond the rational approximation. \medskip It is quite plausible that the gauge enhancement mechanism presented in Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8} generalizes beyond the rational approximation to a derivation of full twisted K-theory from degree 4 cohomotopy. We may state this as the remaining part of the problem of gauge enhancement in M-theory: \vspace{.2cm} \noindent {\bf Open Problem, remaining part:} \begin{quote} {\it What is the non-rational lift of the gauge enhancement mechanism, by universal constructions in super homotopy theory, from Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8}?} \end{quote} \vspace{.2cm} \noindent We will return to this open problem elsewhere. \newpage \section{Two universal constructions in homotopy theory} \label{HomotopyTheory} Here we discuss two universal constructions in homotopy theory (see e.g. \cite{Schreiber17a, Schreiber17b}), which when applied to the A-type orbispace of the 4-sphere (Sec. \ref{TheATypeOrbispaceOfThe4Sphere}) reveal the mechanism of gauge enhancement on M-branes (Sec. 
\ref{TheMechanism}). Firstly, in Sec. \ref{RationalParameterizedStableHomotopyTheory} we recall \emph{parametrized stable homotopy theory} and review an algebraic model for its rationalization from \cite{Bra18,Bra19b}, which enables us to compute effectively in this setting. The main results of this section are Theorem \ref{RationalParameterizedSpectradgModel}, which establishes differential-graded modules as rational models for parametrized spectra, and Prop. \ref{FiberwiseSuspensionSpectrumdgModel}, which characterizes the fiberwise stabilization adjunction in terms of these models. In Sec. \ref{TheAdjunction} we demonstrate that forming cyclic loop spaces is one part of a homotopy-theoretic adjunction, and we characterize the unit of the adjunction (Theorem \ref{GCycExtAdjunction}). This universal construction is used in our formulation of double dimensional reduction. \medskip This section may be read independently of the rest of the article and is of interest in its own right, beyond its application to M-brane phenomena. Conversely, readers interested only in the application to M-theory and willing to accept our homotopy-theoretic machinery as a black box may prefer to skip this section. \subsection{Fiberwise stabilization} \label{RationalParameterizedStableHomotopyTheory} In \cite[p. 20]{FSS16a}, we found that the super $L_\infty$-algebraic F1/D$p$-brane cocycles organize into a diagram as shown in (\hyperlink{fig:rattwistk}{b}) below. We further indicated that this ought to be thought of as the image in supergeometric rational homotopy theory of a cocycle in twisted K-theory realized as a morphism of parametrized spectra, as shown in (\hyperlink{fig:fulltwistk}{a}) below.
\begin{figure}[H] \centering \begin{subfigure}{0.35\textwidth} \fbox{$ \xymatrix{ && \mathrm{KU} \ar[d]^{\mathrm{hofib}(p_\rho)} \\ X \ar[dr]_\tau^{\ }="t" \ar@{-->}[rr]^c_{\ }="s" && \mathrm{KU} /\!\!/ BU(1) \ar[dl]^{p_\rho} \\ & B^2 U(1) % } $} \caption{\hypertarget{fig:fulltwistk}A twisted K-theory cocycle according to \cite{AndoBlumbergGepner10, NSS12}.} \end{subfigure} \;\;\;\;\;\;\;\;\;\;\;\; \begin{subfigure}{0.38\textwidth} \fbox{$ \xymatrix{ && \mathfrak{l}(\mathrm{ku}) \ar[d]^{\mathrm{hofib}(\phi)} \\ \mathbb{R}^{9,1\vert \mathbf{16}+ \overline{\mathbf{16}}} \ar[rr]^{\mu_{{}_{F1/D}}^{\mathrm{IIA}}} \ar[dr]_{\mu_{{}_{F1}}} && \mathfrak{l}( \mathrm{ku} /\!\!/ BU(1) ) \ar[dl]^{\phi} \\ & b^2 \mathbb{R} } $} \caption{\hypertarget{fig:rattwistk}The descended IIA F1/D$p$-brane cocycle according to \cite[Theorem 4.16]{FSS16a}.} \end{subfigure} \end{figure} Roughly, a \emph{spectrum} is a kind of \emph{linearized} or \emph{abelianized} version of a topological space. By the classical Brown Representability Theorem, maps into spectra represent cocycles in generalized cohomology theories, such as K-theory. A \emph{parametrized spectrum} is a family of spectra that is parametrized in a homotopy-coherent manner by a topological space (see \cite{MSi}). Roughly, these are equivalent to bundles of spectra over the parameter space, where maps into the total space of such a bundle represent cocycles in a \emph{twisted} generalized cohomology theory. $$ \xymatrix{ {\mbox{Spectra}} \; \ar@{^{(}->}[rrr]^-{\mbox{ \tiny \begin{tabular}{c}Parametrized \\ over the point\end{tabular}}} &&& {\mbox{\begin{tabular}{c}Parametrized \\ spectra\end{tabular}}} \ar@{->>}[rrr]^-{\mbox{ \tiny \begin{tabular}{c}Underlying \\ parameter space \end{tabular}}} &&& {\mbox{Spaces}}} $$ Under Koszul duality, the conjecture of \cite{FSS16a} means, roughly, that there ought to be highlighted entries as in the following table. 
These entries unify Quillen--Sullivan's DG-models for rational homotopy theory of topological spaces (the central result is recalled as Prop. \ref{SullivanEquivalence} below) with chain complex models for stable rational homotopy theory (recalled as Prop. \ref{SchwedeShipleyEquivalence} below). \vspace{3mm} \hspace{-11mm} \begin{tabular}{|c||c|c|c|} \hline {\bf Homotopy theory} & {\bf Stable} & {\bf Parametrized stable } & {\bf Plain} \\ \hline \hline {\bf Plain} & Spectra & Parametrized spectra & Spaces \\ \hline {\bf Rational} & Cochain complexes & {\it \color{blue} DG-modules} & DG-algebras \\ \hline {\bf Super rational} & Super cochain complexes & {\it \color{blue} Super DG-modules} & Super DG-algebras (``FDA''s) \\ \hline \end{tabular} \vspace{3mm} This conjecture has been proven recently in \cite{Bra18} (see also the forthcoming articles \cite{Bra19a, Bra19b}). In this paper we review those parts of the resulting \emph{rational parametrized stable homotopy theory} that we need for the proof of Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}. The main results in this section are Theorem \ref{RationalParameterizedSpectradgModel}, together with Prop. \ref{FiberwiseSuspensionSpectrumdgModel}, which provide differential graded models for fiberwise suspension spectra. To fix notation and conventions, we briefly recall some background on homotopy theory: \begin{defn}[Classical homotopy theory (see e.g. \cite{Schreiber17a})] \label{ClassicalHomotopyCategories} We write \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}(\mathrm{Spaces})$ for the homotopy category of topological spaces, called the \emph{classical homotopy category}, which is the localization of the category of topological spaces at the \emph{weak homotopy equivalences} (those maps inducing isomorphisms on all homotopy groups, which we will denote by $\simeq_{\mathrm{whe}}$). 
\item $\mathrm{Ho}(\mathrm{Spaces})_{\mathbb{Q}, \mathrm{ft}}$ for the full subcategory on homotopy types of \emph{finite rational type}, namely those spaces $X$ for which the homotopy groups $\pi_{k\geq 1}(X)$ are uniquely divisible (i.e., torsion-free and divisible), and $H^1(X,\mathbb{Q})$ and $\pi_{k\geq 2}(X) \otimes \mathbb{Q}$ are finite-dimensional $\mathbb{Q}$-vector spaces; \item $\mathrm{Ho}(\mathrm{Spaces})_{\mathbb{Q},\mathrm{nil},\mathrm{ft}}\subset \mathrm{Ho}(\mathrm{Spaces})_{\mathbb{Q}, \mathrm{ft}}$ for the full subcategory of homotopy types that are moreover \emph{nilpotent}, meaning those $X$ that are connected and whose fundamental group $\pi_1(X)$ is a nilpotent group with each $\pi_n(X)$ a nilpotent $\pi_1(X)$-module. \end{enumerate} \end{defn} An action of a group on an abelian group is nilpotent if its lower central series terminates after finitely many steps; in particular, simply-connected spaces are nilpotent. Similar constructions hold in the parametrized case (see \cite{Crab, MSi}): \begin{defn}[Parametrized homotopy theory (e.g. {\cite[Sec. 1.1]{Bra18}})] \label{ParamaterizedHomotopyTheory} For any topological space $X$, we write \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}\big( \mathrm{Spaces}_{/X} \big)$ for the homotopy category of spaces over $X$, that is, of spaces equipped with a map to $X$. We denote such objects by $[Y\xrightarrow{\pi} X]$; this is the \emph{$X$-parametrized classical homotopy category}; \item $\mathrm{Ho}\big(\mathrm{Spaces}_{/\!\!/ X}\big)$ for the homotopy category of spaces over $X$ that are equipped with a section. We denote such objects by $[X\xrightarrow{\sigma}Y\xrightarrow{\pi} X]$, where it is understood that the composite $\pi\circ \sigma = \mathrm{id}_X$; morphisms are maps of spaces over $X$ respecting the sections up to homotopy.
\end{enumerate} \end{defn} \begin{remark} In the special case that $X \simeq \ast$ is the point, we simply have that \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}\big( \mathrm{Spaces}_{/\ast}\big) \simeq \mathrm{Ho}\big( \mathrm{Spaces}\big)$ is the classical homotopy category (Def. \ref{ClassicalHomotopyCategories}); and \item $\mathrm{Ho}\big( \mathrm{Spaces}_{/\!\!/ \ast}\big)$ is the homotopy category of \emph{pointed} spaces. \end{enumerate} In general, it is sensible to think of $\mathrm{Ho}\big(\mathrm{Spaces}_{/\!\!/ X}\big)$ as the homotopy category of \emph{$X$-parametrized pointed spaces}. \end{remark} Recall that there are two fundamental constructions associated to any pointed space $Y$: its based loop space $\Omega_\ast Y$ and its (reduced) suspension $\Sigma_\ast Y$. There are analogous constructions in the parametrized setting over a fixed base space $X$ (see \cite{Crab, MSi}). These constructions compute loop spaces and reduced suspensions fiberwise over $X$: \begin{prop}[Looping and suspension] \label{ReducedSuspension} If $Y$ is parametrized over a space $X$ (Def. \ref{ParamaterizedHomotopyTheory}), we can form its fiberwise loop space $\Omega_X Y$. This construction is functorial and admits a left adjoint $\Sigma_X$, called the \emph{fiberwise reduced suspension}: $$ \xymatrix{ \mathrm{Ho} \left( \mathrm{Spaces}_{/\!\!/ X} \right) \; \ar@{<-}@<+6pt>[rr]^-{\Omega_X} \ar@{->}@<-6pt>[rr]_-{\Sigma_X}^{\top} && \; \mathrm{Ho} \left( \mathrm{Spaces}_{/\!\!/ X} \right). } $$ \end{prop} A fiberwise loop space $\Omega_X Y$ carries the structure of a fiberwise homotopical group (or, more precisely, a fiberwise \emph{grouplike $A_\infty$-space}) given by concatenating loops. In the case of a fiberwise \emph{double} loop space $\Omega^2_X Y = \Omega_X \Omega_X Y$, the Eckmann--Hilton argument implies an additional fiberwise first-order homotopy commutativity structure given by twisting based loops around each other. 
As the fiberwise loop order $n$ goes to infinity, the fiberwise homotopical group structure on $\Omega^n_X Y$ becomes increasingly homotopy-commutative. The $n\to \infty$ limit therefore provides a useful heuristic for obtaining \emph{$X$-parametrized homotopical abelian groups}. \medskip One way to formalize the idea that increasing loop order \emph{stabilizes} to abelian homotopy theory is to exhibit a homotopy category for which the adjunction of Prop. \ref{ReducedSuspension} is an equivalence. In the unparametrized setting, the homotopy category obtained in this manner is the \emph{stable homotopy category}, the objects of which are called \emph{spectra}. By definition, a spectrum $P$ is a sequence of pointed topological spaces $\{P_n\}_{n\in \mathbb{N}}$ equipped with structure maps $\Sigma P_n \to P_{n+1}$. To each spectrum $P$ is naturally assigned a sequence of abelian groups $\{\pi_k(P)\}_{k\in \mathbb{Z}}$ called the \emph{stable homotopy groups} of $P$. There is a natural notion of a morphism of spectra with respect to which the assignment of stable homotopy groups is functorial. A map of spectra is a \emph{stable weak equivalence} if it induces an isomorphism on all stable homotopy groups, and localizing the category of spectra at the class of stable weak equivalences produces the \emph{stable homotopy category}. Every spectrum is stably weakly equivalent to a spectrum $P$ for which the adjuncts of the structure maps are weak homotopy equivalences $P_n \simeq_{\mathrm{whe}} \Omega P_{n+1}$. Spectra of this special type are called \emph{$\Omega$-spectra}, and for an $\Omega$-spectrum $P$ there are weak homotopy equivalences $P_k \simeq_{\mathrm{whe}} \Omega^n P_{n+k}$ for all $n,k\geq 0$, exhibiting each $P_k$ as an infinite loop space (or \lq\lq homotopical abelian group'').
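Concretely, the stable homotopy groups of a spectrum $P$ may be presented as the colimit
$$
\pi_k(P)
\;\simeq\;
\underset{n \to \infty}{\mathrm{colim}}\; \pi_{n+k}(P_n)
\,,
$$
taken over the composites $\pi_{n+k}(P_n) \xrightarrow{\;\Sigma\;} \pi_{n+k+1}(\Sigma P_n) \longrightarrow \pi_{n+k+1}(P_{n+1})$ of suspension followed by the map induced by the structure map. By the Freudenthal suspension theorem these groups are abelian, and they may be non-trivial also for negative $k$.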
Finally, the looping and suspension operations on spaces prolong to spectra and for any spectrum $P$ we have natural isomorphisms \[ \pi_{k+1}(\Sigma P)\cong \pi_k (P) \cong \pi_{k-1}(\Omega P) \] for all $k\in \mathbb{Z}$. Spectra are primarily of interest since they represent generalized cohomology theories on topological spaces. It is a consequence of the previously-mentioned Brown representability theorem that \emph{all} generalized cohomology theories arise from spectra in this way. See \cite{Schreiber17b} for a review of classical stable homotopy theory. \medskip The above notions for spaces have analogues for spectra. \begin{defn} [Stable homotopy theory] \label{StableHomotopyTheory} We write \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}( \mathrm{Spectra} )$ for the \emph{homotopy category of spectra}, also called the \emph{stable homotopy category}; \item $\mathrm{Ho}(\mathrm{Spectra})_{\mathbb{Q}}$ for the \emph{rational stable homotopy category}, hence the localization of the category of spectra at the maps inducing isomorphisms on rationalized stable homotopy groups $\pi_\ast \otimes \mathbb{Q}$; \item $\mathrm{Ho}(\mathrm{Spectra})_{\mathbb{Q},\mathrm{ft}}$ for the full subcategory on those spectra $P$ which are of \emph{finite rational type}, meaning that $\pi_k(P)\otimes \mathbb{Q}$ is a finite-dimensional $\mathbb{Q}$-vector space for all $k\in\mathbb{Z}$; \item $\mathrm{Ho}(\mathrm{Spectra})_{\mathbb{Q},\mathrm{bbl}}$ for the full subcategory of those spectra which are \emph{rationally bounded below}, hence whose rationalized stable homotopy groups all vanish below some given dimension. \end{enumerate} \end{defn} We are mainly concerned here with \emph{parametrized} spectra, which are families of spectra parametrized by a topological space. Equivalently, these are bundles of spectra over that base space.
Given a family of spectra $P$ parametrized by a topological space $X$, for any point $x\in X$ we can extract a \emph{stable homotopy fiber} $\mathbb{R}x^\ast P$ in a way that depends functorially on $P$ (and on $x$ in the appropriate homotopy-coherent sense). There is a natural notion of a map of $X$-parametrized spectra, and we declare a map $P\to Q$ to be a \emph{fiberwise stable equivalence} if the induced map $\mathbb{R}x^\ast P\to \mathbb{R}x^\ast Q$ is a stable weak equivalence for all $x\in X$. For a rigorous approach to the theory using simplicial homotopy theory see \cite{Bra18, Bra19a}. \begin{defn}[Parametrized stable homotopy theory] \label{ParamerizedStableHomotopyTheory} For a fixed parameter space $X$, we write \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}\left( \mathrm{Spectra}_{X} \right)$ for the \emph{homotopy category of spectra parametrized by $X$}, hence the localization of the category of $X$-parametrized spectra at the fiberwise stable equivalences; \item $\mathrm{Ho}\left( \mathrm{Spectra}_{X} \right)_{\mathbb{Q}}$ for the \emph{rational homotopy category of spectra parametrized by $X$}, hence the localization of the category of $X$-parametrized spectra at the maps inducing isomorphisms on rationalized stable homotopy groups on all homotopy fiber spectra; \item $\mathrm{Ho}\left( \mathrm{Spectra}_{X} \right)_{\mathbb{Q}, \mathrm{ft},\mathrm{bbl}}\subset \mathrm{Ho}\left( \mathrm{Spectra}_{X} \right)_{\mathbb{Q}}$ for the full subcategory on those $X$-spectra $P$ whose homotopy fiber spectra $\mathbb{R}x^\ast(P)$ are \emph{of finite rational type} and \emph{rationally bounded below} for all $x\in X$. \end{enumerate} \end{defn} A key point about parametrized spectra (Def. \ref{ParamerizedStableHomotopyTheory}) is that they represent \emph{twisted} generalized cohomology theories, generalizing the fact that plain spectra represent generalized cohomology theories (e.g. \cite{ABGHR14}).
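Concretely, in one standard formulation the twisted cohomology theory represented by an $X$-parametrized spectrum $P$ assigns the homotopy groups of the spectrum of (derived) sections of the corresponding bundle of spectra:
$$
P^{\,n}(X)
\;:=\;
\pi_{-n}\, \Gamma_X(P)
\,.
$$
For the trivial bundle with fiber a plain spectrum $E$ this recovers ordinary (untwisted) $E$-cohomology of $X$.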
We will be particularly interested in the (rational version of the) twisted cohomology theory called twisted K-theory (Lemma \ref{TwistedKModel} below). The fundamental relation between unstable and stable homotopy theory in the parametrized setting is captured by the following: \begin{prop}[Fiberwise stabilization adjunction] \label{AdjunctionStabilization} For any space $X$ there are pairs of adjoint functors \begin{equation} \label{FiberwiseStabilizationAdjunction} \xymatrix{ \mathrm{Ho}\big( \mathrm{Spaces}_{/X}\big) \ar@/^1.4pc/@{<-}@<+8pt>[rrrr]^{\Omega^\infty_{X}} \ar@/_1.4pc/@{->}@<-8pt>[rrrr]_{\Sigma^\infty_{+,X}} \ar@{<-}@<+6pt>[rr] \ar@{->}@<-6pt>[rr]_-{(-)_{+,X}}^-{\bot} && \mathrm{Ho} \big(\mathrm{Spaces}_{/\!\!/ X} \big) \ar@{<-}@<+6pt>[rr]^-{\Omega^\infty_X} \ar@{->}@<-6pt>[rr]_-{\Sigma^\infty_X}^-{\bot} && \mathrm{Ho}\big( \mathrm{Spectra}_{X} \big) } \end{equation} between the classical parametrized homotopy categories (Def. \ref{ParamaterizedHomotopyTheory}) and the parametrized stable homotopy category (Def. \ref{ParamerizedStableHomotopyTheory}). Here $(-)_{+,X}$ adjoins a copy of $X$, e.g. $[Y\to X]\mapsto[X\to X\coprod Y\to X]$, while $\Omega^\infty_{X}$ sends an $X$-parametrized spectrum to its \emph{fiberwise infinite loop space}, and the operations $\Sigma^\infty_X$ and/or $\Sigma^\infty_{+,X}$ are called forming \emph{fiberwise suspension spectra}. Moreover, the adjunction \eqref{FiberwiseStabilizationAdjunction} stabilizes the looping/suspension adjunction from Prop. 
\ref{ReducedSuspension} in that there is a diagram, commuting up to natural isomorphism, as follows $$ \xymatrix@R=1.6em@C=5em{ \mathrm{Ho}\big(\mathrm{Spaces}_{/\!\!/ X}\big) \ar@{<-}@<+6pt>[rr]^-{\Omega_X} \ar@{->}@<-6pt>[rr]_-{\Sigma_X}^-{\top} \ar@{<-}@<+6pt>[dd]^-{\Omega^\infty_X} \ar@{->}@<-6pt>[dd]_-{\Sigma^\infty_X}^-{ \dashv } && \mathrm{Ho}\big(\mathrm{Spaces}_{/\!\!/ X}\big) \ar@{<-}@<+6pt>[dd]^-{\Omega^\infty_X} \ar@{->}@<-6pt>[dd]_-{\Sigma^\infty_X}^-{ \dashv } \\ \\ \mathrm{Ho}\big(\mathrm{Spectra}_X\big) \ar@{<-}@<+6pt>[rr]^-{\Omega_X} \ar@{->}@<-6pt>[rr]_-{\Sigma_X}^-{\simeq} && \mathrm{Ho}\big(\mathrm{Spectra}_X\big). } $$ \end{prop} \begin{remark}[Units] We denote the unit morphism of the adjunction \eqref{FiberwiseStabilizationAdjunction} on $Y \in \mathrm{Ho}\big( \mathrm{Spaces}_{/X} \big)$ by \begin{equation} \label{UnitOfStabilizationAdjunction} \xymatrix@R=1em{ Y \ar[dr] \ar[rr]^-{\mathrm{st}_X(Y)} && \Omega^\infty_X \Sigma^\infty_{+,X}(Y)\;. \ar[dl] \\ & X } \end{equation} \end{remark} Both the classical and stable homotopy categories are extremely rich mathematical settings. In order to get a better handle on these categories, one may filter them in various ways so as to study (stable) homotopy types in controlled approximations. A particularly useful approximation of this sort is provided by rational homotopy theory, which discards all torsion information carried by the homotopy groups. The main reason that rational homotopy theory is so tractable is that both the unstable and stable variants can be completely described in terms of algebraic data. We recall this as Prop. \ref{SullivanEquivalence} and Prop. 
\ref{SchwedeShipleyEquivalence} below, but first we must recall some terminology: \begin{defn}[DG-algebraic homotopy theory] \label{dgAlgebrasAnddgModules} We write \begin{enumerate}[{\bf (i)}] \item $\mathrm{Ho}(\mathrm{DGCAlg})$ for the \emph{homotopy category of connective differential graded (unital) commutative algebras (DG-algebras)} over $\mathbb{Q}$. The connectivity condition means that the underlying cochain complex vanishes identically in negative degree, and working in the homotopy category means that we localize the category of DG-algebras with respect to the class of quasi-isomorphisms; \item $\mathrm{Ho}(\mathrm{DGCAlg})_{\mathrm{cn}}$ for the full subcategory on the \emph{(cohomologically) connected} DG-algebras; those $A$ for which the algebra unit $\mathbb{Q} \to A$ induces an isomorphism $\mathbb{Q} \simeq H^0(A)$; \item $\mathrm{Ho}\left(\mathrm{DGCAlg}\right)_{\mathrm{ft}}$ for the full subcategory of DG-algebras $A$ of \emph{finite type}, so that $A$ is cohomologically connected and quasi-isomorphic to a DG-algebra that is degreewise finitely generated. \end{enumerate} \end{defn} \begin{remark}[Differentials] {\bf (i)} We write \lq\lq DG'' throughout to indicate that we are working with \emph{co}homological grading conventions, so that in particular all differentials increase degree by $+1$. \item {\bf (ii)} We will write free graded commutative algebras (without differentials) as polynomial algebras $$ \mathbb{Q}[\alpha_{k_1}, \beta_{k_2}, \dotsc] \,, $$ where the subscript on the generator will always indicate its degree. Differentials on such free algebras are fully determined by their actions on generators by the graded Leibniz rule, and so we will denote DG-algebras obtained this way by $$ \mathbb{Q}[\alpha_{k_1}, \beta_{k_2}, \dotsc]\Bigg/ \left( \begin{aligned} d \alpha_{k_1} & = \cdots \\ d \beta_{k_2} & = \cdots \\ & \;\;\vdots \end{aligned} \right). 
$$ \end{remark} The main result of Sullivan's approach to rational homotopy theory \cite{Su} (a detailed treatment is the subject of the monograph \cite{BG}) is a characterization of certain well-behaved rational homotopy types in terms of DG-algebras: \newpage \begin{prop}[DG-models for rational homotopy theory (e.g. \cite{BG,Hess06})] \label{SullivanEquivalence} \item {\bf (i)} There is an adjunction $$ \xymatrix{ \mathrm{Ho}(\mathrm{Spaces}) \ar@{->}@<+6pt>[rr]^-{\mathcal{O}} \ar@{<-}@<-6pt>[rr]_-{\mathcal{S}}^-{\bot} && \mathrm{Ho}( \mathrm{DGCAlg})^{\mathrm{op}} } $$ between the classical homotopy category of topological spaces (Def. \ref{ClassicalHomotopyCategories}) and the opposite of the homotopy category of DG-algebras (Def. \ref{dgAlgebrasAnddgModules}), where $\mathcal{O}$ denotes the derived functor of forming the DG-algebra of rational polynomial differential forms. \item {\bf (ii)} This adjunction restricts to an equivalence of categories \begin{equation} \label{SullivanEquivalenceAdjunction} \xymatrix{ \mathrm{Ho}(\mathrm{Spaces})_{\mathbb{Q}, \mathrm{nil}, \mathrm{ft}} \ar@{->}@<+6pt>[rr]^-{\mathcal{O}} \ar@{<-}@<-6pt>[rr]_-{\mathcal{S}}^-{\simeq} && \mathrm{Ho}( \mathrm{DGCAlg})^{\mathrm{op}}_{\mathrm{ft}} } \end{equation} between the rational homotopy category of nilpotent spaces of finite type (Def. \ref{ClassicalHomotopyCategories}) and the homotopy category of DG-algebras of finite type (Def. \ref{dgAlgebrasAnddgModules}). \item {\bf (iii)} The rational cohomology of a space is computed by the cochain cohomology of any one of its DG-algebra models: $$ H^\bullet(X,\mathbb{Q}) \;\simeq\; H^\bullet(\mathcal{O}(X)) \,. 
$$ \item {\bf (iv)} Under the equivalence of \eqref{SullivanEquivalenceAdjunction}, every space $X$ on the left has a model by a \emph{minimal} DG-algebra, whose underlying graded algebra is the free graded commutative algebra on the dual rational homotopy groups of $X$: $$ \mathcal{O}(X) \;\simeq\; \mathbb{Q}\left[ (\pi_\bullet(X) \otimes \mathbb{Q} )^\ast\right] \big/ \big(d(\,\cdots) = (\,\cdots)\big) \,. $$ Minimal models are unique up to isomorphism, with the isomorphism between any two minimal models unique up to homotopy. \end{prop} \begin{example}[Minimal model for the 3-sphere] \label{MinimalDgcAlgebraModelFor3Sphere} The minimal model for the 3-sphere is given by $ \mathcal{O}(S^3) \;\simeq\; \mathbb{Q}[h_3]\,/ \left( d h_3 = 0 \right). $ Observe that $S^3$ is rationally indistinguishable from a $K(\mathbb{Z},3)$. This is true for all odd-dimensional spheres, since $\pi_\ast (S^{2k+1})\otimes \mathbb{Q}$ is a one-dimensional graded vector space concentrated in dimension $2k+1$. \end{example} \begin{example}[Minimal model for the 4-sphere] \label{MinimalDgcAlgebraModelFor4Sphere} In contrast, the minimal model for the 4-sphere is not free, and is given by $$ \mathcal{O}(S^4) \;\simeq\; \mathbb{Q}[ \omega_4, \omega_7 ]\Big/ \left( {\begin{aligned} d \omega_4 & = 0 \ \\[-2mm] d \omega_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned}} \right). $$ \end{example} \begin{example}[Minimal model for $B S^1$] \label{MinimalDGCAlgebraModelForClassifyingSpace} The minimal model for the classifying space $B S^1$ of the circle is given by $ \mathcal{O}(B S^1) \;\simeq\; \mathbb{Q}[\omega_2]\,\big/ \left( d \omega_2 = 0 \right). $ Note that this is decidedly not a minimal DGC-algebra model for the $2$-sphere, since $\pi_2(S^2)\cong \mathbb{Z}\cong\pi_3(S^2)$. 
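Indeed, in accord with Prop. \ref{SullivanEquivalence} {\bf (iv)}, the minimal model of the 2-sphere requires a further generator in degree 3 (with the sign conventions of Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}):
$$
\mathcal{O}(S^2)
\;\simeq\;
\mathbb{Q}[ \omega_2, \omega_3 ]\Big/
\left(
{\begin{aligned}
d \omega_2 & = 0 \ \\[-2mm]
d \omega_3 & = -\tfrac{1}{2} \omega_2 \wedge \omega_2
\end{aligned}}
\right).
$$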
\end{example} We will model the rational homotopy theory of parametrized spectra in terms of DG-modules over DG-algebras: \begin{defn}[DG-modular homotopy theory] \label{RatioaldgModules} Given a DG-algebra $A$ (Def. \ref{dgAlgebrasAnddgModules}), we write \begin{enumerate}[{\bf (i)}] \item $A\mbox{-}\mathrm{Mod}$ for the category of unbounded DG-modules over $A$; \item $A\mbox{-}\mathrm{Mod}_{\mathrm{ft}}$ for the subcategory of those DG-$A$-modules of \emph{finite type}, hence those whose cochain cohomology is finite-dimensional in each degree; \item $A\mbox{-}\mathrm{Mod}_{\mathrm{bbl}}$ for the subcategory of \emph{bounded below} DG-$A$-modules, namely those whose cochain cohomology vanishes identically below some degree; \item $\mathrm{DGCAlg}^{A/}$ for the slice category of \emph{$A$-algebras}. The objects of this category are simply DG-algebras equipped with an algebra morphism from $A$, which we will frequently denote $B \leftarrow A\colon \pi^\ast$. \item $\mathrm{DGCAlg}_{/\!\!/ A}$ for the category of \emph{augmented $A$-algebras}, whose objects are diagrams of DG-algebras \[ \xymatrix{ A\ar@{<-}[r]^{\;\;\sigma^\ast} & B \ar@{<-}[r]^{\;\;\pi^\ast} & A} \] such that $\sigma^\ast \circ \pi^\ast$ is the identity on $A$. The morphism $\sigma^\ast$ is called the \emph{augmentation}, and its kernel $\mathrm{ker}(\sigma^\ast) \in A\mbox{-}\mathrm{Mod}$ is the \emph{augmentation ideal}. \item Passing to the homotopy category in any of the above cases {\bf(i)}-{\bf(v)} (e.g. $A\mathrm{\mbox{-}Mod}\mapsto \mathrm{Ho}\big(A\mathrm{\mbox{-}Mod}\big)$) means that we localize with respect to the class of quasi-isomorphisms. \end{enumerate} \end{defn} \begin{remark} We emphasise that we always work with \emph{connective} DG-algebras; modules over these algebras may however be unbounded. \end{remark} The stable analogue of Sullivan's rational homotopy theory equivalence Prop.
\ref{SullivanEquivalence} {\bf (i)} in the unparametrized context is the following: \begin{prop}[DG-models for rational stable homotopy theory] \label{SchwedeShipleyEquivalence} There is an equivalence $$ \xymatrix{ \mathrm{Ho}(\mathrm{Spectra})_{\mathbb{Q},\mathrm{ft}} \ar@{->}@<+6pt>[rr]^-{} \ar@{<-}@<-4pt>[rr]_{}^-{\simeq} && \mathrm{Ho}(\mathrm{Ch}(\mathbb{Q}))^{\mathrm{op}}_{\mathrm{ft}} } $$ between the rational homotopy category of spectra of finite type (Def. \ref{StableHomotopyTheory}) and the opposite homotopy category of rational cochain complexes of finite type (as in Def. \ref{RatioaldgModules} for $A=\mathbb{Q}$). \end{prop} \begin{proof}[Sketch of proof.] This is a well-known fact in stable homotopy theory (and used extensively in differential cohomology, see \cite{GS17}), but we sketch a proof for completeness. Passing to the rational stable homotopy category is implemented by smashing with the Eilenberg--MacLane spectrum $H\mathbb{Q}$, so that $ \mathrm{Ho}(\mathrm{Spectra})_\mathbb{Q} \cong \mathrm{Ho}(H\mathbb{Q}\mathrm{\mbox{-}Mod})$. But the latter homotopy category is equivalent to the homotopy category of rational \emph{chain} complexes $\mathrm{Ho}(H\mathbb{Q}\mathrm{\mbox{-}Mod})\cong \mathrm{Ho}(\mathrm{ch}(\mathbb{Q}))$ (see \cite{Shipley07}). Under the assumption of finite type, dualizing then gives \[ \mathrm{Ho}(\mathrm{Spectra})_{\mathbb{Q},\mathrm{ft}} \cong \mathrm{Ho}(\mathrm{Ch}(\mathbb{Q}))^\mathrm{op}_\mathrm{ft}. \] An alternative proof which does not appeal to dualization is given in \cite[Sec. 2.7.4]{Bra18}; see also \cite{Bra19b}. \end{proof} \begin{remark}[Operations] \label{SuspensionandShifting} Under the above equivalence, suspension of spectra corresponds to shifting the corresponding cochain complex up by one. Looping corresponds to shifting the corresponding cochain complex down by one. \end{remark} The unification of Prop. \ref{SullivanEquivalence} with Prop.
\ref{SchwedeShipleyEquivalence} established in \cite{Bra18, Bra19b} is: \begin{theorem}[DG-models for rational parametrized spectra] \label{RationalParameterizedSpectradgModel} For any connective DG-algebra $A$ (Def. \ref{dgAlgebrasAnddgModules}), there is a pseudo-natural transformation $$ \xymatrix{ \mathrm{Ho}\big( \mathrm{Spectra}_{\mathcal{S}(A)}\big) \ar[rr]^-{\mathcal{M}_A} && \mathrm{Ho}\left( A\mbox{-}\mathrm{Mod}\right)^{\mathrm{op}} } $$ from the homotopy category of parametrized spectra (Def. \ref{ParamerizedStableHomotopyTheory}) parametrized by the rational space $\mathcal{S}(A)$ (Prop. \ref{SullivanEquivalence}) to the opposite homotopy category of DG-modules over $A$ (Def. \ref{RatioaldgModules}) with the following properties: \begin{enumerate} \item There is a factorization over the rational homotopy category of parametrized spectra: $$ \xymatrix{ \mathrm{Ho}\big( \mathrm{Spectra}_{\mathcal{S}(A)} \big) \ar[dr]_{\mathcal{M}_A} \ar[r] & \mathrm{Ho}\big( \mathrm{Spectra}_{\mathcal{S}(A)} \big)_{\mathbb{Q}} \ar[d] \\ & \mathrm{Ho}\left( A\mbox{-}\mathrm{Mod} \right)^{\mathrm{op}}. } $$ \item If the space $\mathcal{S}(A)$ is simply-connected, then $\mathcal{M}_A$ restricts to an equivalence of rational homotopy categories $$ \xymatrix{ \mathrm{Ho}\big( \mathrm{Spectra}_{\mathcal{S}(A)} \big)_{\mathbb{Q},\mathrm{ft}, \mathrm{bbl}} \ar[rr]^-{\mathcal{M}_A}_-\simeq && \mathrm{Ho}\big( A\mbox{-}\mathrm{Mod} \big)^{\mathrm{op}}_{\mathrm{ft}, \mathrm{bbl}} }\, $$ between finite-type, bounded below objects. \item For $A = \mathbb{Q}$, this extends to the equivalence of rational stable homotopy theory from Prop. \ref{SchwedeShipleyEquivalence}: $$ \xymatrix{ \mathrm{Ho}\big( \mathrm{Spectra} \big)_{\mathbb{Q},\mathrm{ft}} \ar[rr]^-{\mathcal{M}_A}_-\simeq && \mathrm{Ho}\big( \mathrm{Ch}\left(\mathbb{Q}\right) \big)^{\mathrm{op}}_{\mathrm{ft}}. } $$ \end{enumerate} \end{theorem} \begin{proof}[Sketch of proof.] 
The functor $\mathcal{M}_A$ is constructed by stabilizing the Sullivan--de Rham adjunction of Prop. \ref{SullivanEquivalence} {\bf (i)}. The various properties of the stabilized functor are established in \cite{Bra18, Bra19b}. Specifically, pseudo-naturality is \cite[Cor. 2.7.26]{Bra18}, the first item is \cite[Cor. 2.7.31]{Bra18}, the second item is \cite[Theorem 2.7.42]{Bra18}, and the final item is \cite[Rem. 2.7.44]{Bra18}. \end{proof} \begin{remark}[Fiberwise operations] \label{SuspensionandShiftingParam} The obvious parametrized analogue of Rem. \ref{SuspensionandShifting} holds, namely \emph{fiberwise} suspension of a parametrized spectrum corresponds to shifting the corresponding DG-module up by one, and forming fiberwise loop spaces corresponds to shifting the DG-module down by one. \end{remark} In the proof of Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} below, we will need to know explicitly how the functor $\mathcal{M}_A$ behaves on fiberwise suspension spectra. \begin{prop}[DG-models for fiberwise suspension spectra {\cite{Bra18, Bra19b}}] \label{FiberwiseSuspensionSpectrumdgModel} Let $A$ be a DG-algebra (Def. \ref{dgAlgebrasAnddgModules}) and let $$ \big[ \xymatrix{ Y\ar[r]^-{\pi} & \mathcal{S}(A) } \big] \;\in\; \mathrm{Ho}\big( \mathrm{Spaces}_{/\mathcal{S}(A)} \big)_{\mathbb{Q}} $$ be a space over (Def. \ref{ParamaterizedHomotopyTheory}) the rational space $\mathcal{S}(A)$ determined by $A$ via Prop. \ref{SullivanEquivalence}. Then, after passing to DG-models via $\mathcal{M}_A$ (Theorem \ref{RationalParameterizedSpectradgModel}), the fiberwise suspension spectrum (Prop. \ref{AdjunctionStabilization}) is modeled by $\mathcal{O}(Y)$ (according to Prop. \ref{SullivanEquivalence}), regarded as an $A$-module via $\pi^\ast$; that is, \begin{equation} \label{FormulaFiberwiseSuspensionSpectra} \mathcal{M}_A \big( \Sigma^\infty_{+,\mathcal{S}(A)}(Y) \big) \;\simeq\; \mathcal{O}(Y) \,. 
\end{equation} \end{prop} \begin{proof}[Sketch of proof.] In the proof of Theorem \ref{RationalParameterizedSpectradgModel} (see \cite[Lemma 2.7.25]{Bra18}, also \cite{Bra19b}), we encounter the commutative diagram of left Quillen functors: $$ \xymatrix@R=1.2em{ \mathrm{Spaces}_{/\mathcal{S}(A)} \ar[dd]_{\mathcal{O}_A} \ar@/^2pc/[rrrrrr]^-{\Sigma^\infty_{+,\mathcal{S}(A)}} \ar[rrr]^-{(-)_{+,\mathcal{S}(A)}} &&& \mathrm{Spaces}_{/\!\!/ \mathcal{S}(A)} \ar[dd]_-{ \mathcal{O}_A } \ar[rrr]^-{ \Sigma^\infty_{\mathcal{S}(A)} } &&& \mathrm{Spectra}_{\mathcal{S}(A)} \ar[dd]^-{\mathcal{M}_A} \\ \\ \big( \mathrm{DGCAlg}^{A/} \big)^{\mathrm{op}} \ar[rrr]_-{ (-)\oplus A } &&& \big(\mathrm{DGCAlg}_{/\!\!/ A} \big)^{\mathrm{op}} \ar[rrr]_-{\mathrm{aug}_A} &&& A\mbox{-}\mathrm{Mod}^{\mathrm{op}} } $$ Here the left and bottom horizontal functors send $$ { \left[\hspace{-2mm} \raisebox{20pt}{ \xymatrix@C=4pt{ Y \ar[d]^\pi \\ \mathcal{S}(A) } } \right] } \xymatrix{\ar@{|->}[r]^-{\mathcal{O}_A} &} { \left[\hspace{-2mm} { \raisebox{40pt}{ \xymatrix@C=4pt{ \mathcal{O}(Y) \\ \mathcal{O}(\mathcal{S}(A)) \ar[u]^{\pi^{\ast}} \\ A \ar[u]^\eta } }} \right] } \; \xymatrix{\ar@{|->}[r]^-{(-)\oplus A} &} \; { \left[\hspace{-2mm} { \raisebox{40pt}{ \xymatrix@C=4pt{ A \\ \mathcal{O}(Y) \oplus A \ar[u]^{0 \oplus \mathrm{id}} \\ A \ar[u]^{ (\pi^\ast \circ \eta) \oplus \mathrm{id}} } }} \right] } \xymatrix{\ar@{|->}[r]^-{\mathrm{aug}_A} &} \mathrm{ker}(0 \oplus \mathrm{id}) = \mathcal{O}(Y)\;, $$ from which the assertion follows. \end{proof} Theorem \ref{RationalParameterizedSpectradgModel} allows us to make use of the established theory of \emph{minimal DG-modules} in order to obtain models for parametrized spectra in the rational approximation: \begin{defn}[Minimal DG-modules (see {\cite{Roig94}\cite[Sec. 1]{RS}})] \label{MinimalDGModule} Let $A$ be a DG-algebra. 
Write $$ A[n] \simeq A \otimes \langle c_n \rangle \in A\mbox{-}\mathrm{Mod} $$ for the DG-module over $A$ that is freely generated by a single generator $c_n$ in degree $n\in \mathbb{Z}$. The underlying graded vector space is simply $A$ shifted up (or down, if $n$ is negative) in degree by $|n|$ and $c_n$ is identified with the shifted algebra unit. The differential is the same as that of $A$, shifted in degree, and the module structure is given by left multiplication with elements in $A$. \item {\bf (i)} Given $N \in A\mbox{-}\mathrm{Mod}$ and an element $\alpha \in N$ of degree $n+1$ such that $d \alpha = 0$, we construct a new DG-module $$ N \oplus_{\alpha} \left( A\otimes \langle c_n\rangle \right) \;\in\; A\mbox{-}\mathrm{Mod} $$ whose underlying graded vector space is the direct sum $N \oplus (A\otimes \langle c_n\rangle)$, equipped with the evident $A$-module structure. The differential is induced from the differentials of $N$ and $A$, with the additional condition that $ d c_n := \alpha. $ Hence, the differential is specified by $$ d(m + a \otimes c_n) \;=\; d_N m + (d_A a) \otimes c_n + (-1)^{\mathrm{deg}(a)} a \cdot \alpha $$ for all $m \in N$ and $a \in A$. An $A$-module of the form $N \oplus_{\alpha} (A\otimes \langle c_n\rangle)$ is called an \emph{$n$-cell attachment} of $N$. \item {\bf (ii)} A \emph{relative cell complex over $A$} is an inclusion $ N \hookrightarrow \widehat N $ of $A$-modules such that $\widehat{N}$ arises from $N$ via a countable sequence of cell attachments as in {\bf (i)}. If $N = 0$ is the zero module, then such a $\widehat N$ is called a \emph{cell complex over $A$}. A cell complex over $A$ is therefore equivalent to the data of \begin{itemize} \item a graded vector space $V$ equipped with a countable, ordered linear basis $\{e^{(i)}\}_{i\in \mathbb{N}}$; and \item a differential on $A\otimes V$ making this graded vector space a DG-$A$-module and such that \[ de^{(i)} \in A\otimes \langle e^{(j)}\rangle_{j\leq i}\;.
\] \end{itemize} \item {\bf (iii)} Furthermore, a cell complex over $A$ is \emph{minimal} if the ordered basis $\{e^{(i)}\}_{i\in \mathbb{N}}$ satisfies the additional condition that $i\leq j$ if and only if $\mathrm{deg}(e^{(i)})\leq \mathrm{deg}(e^{(j)})$. \end{defn} \begin{remark}[Existence of minimal models {\cite{Roig94}}] \label{MinimalModelsExist} For a (necessarily connective) DG-algebra $A$, any $A$-module $M$ admits a \emph{minimal model}. This means that there is a minimal $A$-module $N$ together with a quasi-isomorphism of $A$-modules $N\to M$. Any two minimal models of $M$ are isomorphic, and this isomorphism is unique up to homotopy. \end{remark} \begin{example}[Minimal cochain complexes have vanishing differential] \label{MinimalModelsForCochainComplexes} If $A = \mathbb{Q}$, so that $A\mbox{-}\mathrm{Mod} \simeq \mathrm{Ch}(\mathbb{Q})$ is the category of rational cochain complexes, then the minimal DG-modules (Def. \ref{MinimalDGModule}) are precisely the cochain complexes of finite type with \emph{vanishing} differential. This is because a non-vanishing differential would have to take a generator $e^{(i)}$ in some degree $n$ to a $\mathbb{Q}$-linear combination of generators $e^{(j)}$, $j \leq i$, in degree $n+1$, but this is ruled out by the degree condition on the generators. \end{example} According to Theorem \ref{RationalParameterizedSpectradgModel}, minimal DG-modules over a DG-algebra $A$ determine rational $\mathcal{S}(A)$-spectra. Just as for minimal DG-algebras, the fiberwise rational stable homotopy groups of a parametrized spectrum can be read off directly from a minimal module: \begin{lemma} \label{RationalizedSpectraModeledByRationalizedHomotopyGroups} For $E\in \mathrm{Ho}\left(\mathrm{Spectra}\right)_{\mathbb{Q},\mathrm{ft}}$ any spectrum of finite rational type, the minimal DG-module model for its rationalization (Rem.
\ref{MinimalModelsExist}) is the cochain complex \[ (\pi_\bullet(E)\otimes \mathbb{Q})^\ast \] equipped with vanishing differential. \end{lemma} \begin{lemma}[Minimal models for fiber spectra (see {\cite[Rem. 2.7.35]{Bra18}; also \cite{Bra19b}})] \label{MinimalDgModulesForFiberSpectra} Let $X \in \mathrm{Ho}\left(\mathrm{Spaces}\right)$ be simply-connected of finite rational type, and let $E \in \mathrm{Ho}\left(\mathrm{Spectra}_X\right)_{\mathbb{Q},\mathrm{ft}, \mathrm{bbl}}$ be a bounded-below $X$-spectrum of finite rational type (Def. \ref{ParamerizedStableHomotopyTheory}). If, moreover, $$ A \simeq \mathcal{O}(X) \;\;\; \in \mathrm{DGCAlg} $$ is any DG-algebra model for $X$ under Prop. \ref{SullivanEquivalence}, and if $$ A \otimes V \;\simeq\; \mathcal{M}_A(E) \;\;\; \in A\mbox{-}\mathrm{Mod} $$ is a \emph{minimal} DG-module model (Def. \ref{MinimalDGModule}) for $E$ under Theorem \ref{RationalParameterizedSpectradgModel}, then for every $x\in X$ the cochain complex with vanishing differential $$ V \;\simeq\; \mathcal{M}_{\mathbb{Q}}(E_x) \;\;\; \in \mathrm{Ch}(\mathbb{Q}) $$ is a minimal model for the fiber spectrum $E_x$. \end{lemma} \begin{example}[Minimal model for suspension spectrum of $B S^1$] \label{MinimalModelForSuspensionSpectrumOfCircleClassifyingSpace} The minimal model for the suspension spectrum (Prop. \ref{AdjunctionStabilization}) of the classifying space $B S^1$ (viewed as a parametrized spectrum over the point) is the graded vector space spanned by one generator in every non-negative even degree: \begin{equation} \label{BS1MinimalModule} \mathcal{M}_{\mathbb{Q}}\left(\Sigma^\infty_+ B S^1 \right) \;\simeq\; \mathbb{Q}[\beta_2] = \langle 1, \beta_2, \beta_2^2 ,\dotsc\rangle \;\;\; \in \mathrm{Ch}(\mathbb{Q}) \,.
\end{equation} Indeed, a minimal DG-algebra model for $B S^1$ is the symmetric graded algebra on a single generator in degree 2: $$ \mathbb{Q}[ \beta_2 ] \;\simeq\; \mathcal{O}(B S^1) \;\;\;\in \mathrm{DGCAlg} \,, $$ which necessarily has trivial differential; this implies the claim with Prop. \ref{FiberwiseSuspensionSpectrumdgModel}. Note that in \eqref{BS1MinimalModule} the generator in degree $2n$ is $\beta^n_2$ (however, this notation should be taken with a grain of salt since we have forgotten the algebra structure at this point). \end{example} \begin{example}[Rational Snaith theorem] \label{RationalSnaithTheorem} Let $\mathrm{KU} \in \mathrm{Ho}\left(\mathrm{Spectra}\right)$ be the spectrum representing complex K-theory and write $\mathrm{ku} \in \mathrm{Spectra}$ for its connective cover (obtained by killing negative-dimensional homotopy groups). Minimal DG-module models for $\mathrm{KU}$ and $\mathrm{ku}$ are, by Lemma \ref{RationalizedSpectraModeledByRationalizedHomotopyGroups}, given by $$ \mathcal{M}_{\mathbb{Q}}\left( \mathrm{ku} \right) \;\simeq\; \mathbb{Q}[\beta_2] \; \in \mathrm{Ch}(\mathbb{Q}) \qquad \text{ and} \qquad \mathcal{M}_{\mathbb{Q}}\left( \mathrm{KU} \right) \;\simeq\; \mathbb{Q}[\beta_2, \beta_2^{-1}] \; \in \mathrm{Ch}(\mathbb{Q})\;. $$ In particular, rationally there is no difference between $\mathrm{ku}$ and $\Sigma^\infty_+ BS^1$: $$ \Sigma^\infty_+ B S^1 \;\simeq_{\mathbb{Q}}\; \mathrm{ku}. $$ On the other hand, if we remember the algebra structure on $\mathbb{Q}[\beta_2]$, the full non-connective K-theory is obtained by multiplicatively inverting the element $\beta_2$: $$ \Sigma^\infty_+ B S^1[\beta_2^{-1}] \;\simeq_{\mathbb{Q}}\; \mathrm{KU} \,. $$ \end{example} \begin{remark}[Full Snaith theorem] As a matter of fact, the last statement in the previous example is still true \emph{non-}rationally: this is the content of Snaith's theorem \cite{Snaith81}.
After rationalization, Snaith's theorem essentially reduces to a triviality; however, keeping in mind that this is a \lq\lq rational shadow'' may help identify the non-rational situation approximated by our main Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} below. We conjecture that Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} remains true non-rationally by a generalization of Snaith's theorem to twisted K-theory obtained by \emph{fiberwise} inversion of the Bott generator. We will return to this point elsewhere. \end{remark} We have seen in Prop. \ref{FiberwiseSuspensionSpectrumdgModel} how stabilization -- the process of passing from spaces to spectra -- works in terms of rational models by taking augmentation ideals. Conversely, the \emph{de}stabilization process that extracts an infinite loop space from a spectrum also has a straightforward incarnation in terms of algebraic models. For connective parametrized spectra, extracting fiberwise infinite loop spaces is represented by taking the free algebra of the corresponding DG-module: \begin{prop}[Rational models for fiberwise infinite loop spaces (see {\cite[Sec. 2.7]{Bra18}; \cite{Bra19b}})] \label{RationalInfiniteLoopSpace} Let $A$ be a DG-algebra of finite type such that $\mathcal{S}(A)$ is simply-connected. If $M$ is a connective $A$-module, then under the inverse equivalence of $$ \xymatrix{ \mathrm{Ho}\big( \mathrm{Spectra}_{\mathcal{S}(A)} \big)_{\mathbb{Q},\mathrm{ft}, \mathrm{bbl}} \ar[rr]^-{\mathcal{M}_A}_-\simeq && \mathrm{Ho}\big( A\mbox{-}\mathrm{Mod} \big)^{\mathrm{op}}_{\mathrm{ft}, \mathrm{bbl}} }, $$ the fiberwise infinite loop space (Prop. \ref{AdjunctionStabilization}) is modelled by the augmented $A$-algebra $\mathrm{Sym}_A (N)$ where $N$ is a minimal model of $M$ (Rem. \ref{MinimalModelsExist}).
\end{prop} \begin{lemma}[Minimal model for twisted connective K-theory] \label{TwistedKModel} Denote the parametrized spectrum representing general twisted connective K-theory (e.g. \cite{AndoBlumbergGepner10}) by $$ \xymatrix{ \mathrm{ku} /\!\!/ \mathrm{GL}_1(\mathrm{ku}) \ar[r]& B \mathrm{GL}_1( \mathrm{ku} ) } $$ and its restriction to the twist by ordinary degree-3 cohomology by \footnote{Here and elsewhere, ``(pb)'' denotes a (homotopy-)pullback square.} $$ \xymatrix{ \mathrm{ku} /\!\!/ BS^1 \ar[r] \ar@{}[dr]|{\mbox{\emph{\footnotesize{(pb)}}}} \ar[d] & \mathrm{ku} /\!\!/ \mathrm{GL}_1(\mathrm{ku}) \ar[d] \\ K(\mathbb{Z},3) \ar[r] & B \mathrm{GL}_1(\mathrm{ku}). } $$ The minimal DG-algebra model for the base space is the graded symmetric algebra freely generated by a single generator $h_3$ in degree 3, with vanishing differential: $$ \mathcal{O}\big(K(\mathbb{Z},3)\big) \simeq \mathbb{Q}[h_3] \,. $$ The corresponding minimal DG-module model (Def. \ref{MinimalDGModule}) for the rationalization of $\mathrm{ku}/\!\!/ BS^1$ is \begin{equation} \label{MinimaldgModelForTwistedK} \mathcal{M}_{\mathbb{Q}[h_3]}\left( \mathrm{ku} /\!\!/ BS^1 \right) \;\simeq\; \mathbb{Q}[h_3] \otimes \left\langle \omega_{2k} \,\vert\, k \in \mathbb{N} \right\rangle \Big/ \left(\!\! \begin{array}{c} \hspace{-6mm} d \omega_0 = 0 \\ d \omega_{2k + 2} = h_3 \wedge \omega_{2k} \end{array} \!\! \right). \end{equation} The module structure over $\mathbb{Q}[h_3]$ is given by the evident action on this tensor factor. Moreover, for each $n\geq 0$ the fiberwise infinite loop space of the fiberwise suspension $\Sigma_{B^3\mathbb{Q}}^{2n} (\mathrm{ku}/\!\!/ BS^1)$ has minimal DG-model \begin{equation} \label{OmegaInfinityOfTwistedk} \mathcal{O}\big( \Omega^{\infty-2n}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ BS^1 \right) \big) \;\simeq\; \mathbb{Q}[h_3, \omega_{2k+2n} \,\vert\, k \in \mathbb{N}] \Big/ \left(\!\!
\begin{array}{c} \hspace{-6mm} d \omega_{2n} = 0 \\ d \omega_{2k + 2n+2} = h_3 \wedge \omega_{2k + 2n} \end{array} \!\! \right). \end{equation} \end{lemma} \begin{proof} For the structure as a $\mathbb{Q}[h_3]$-module, we appeal to Lemma \ref{MinimalDgModulesForFiberSpectra} and Ex. \ref{RationalSnaithTheorem}. It only remains to determine the differential. By \cite[Prop. 3.9]{AtiyahSegal05} the degree-3 twist on complex K-theory is non-trivial in every degree. But by the degrees of the generators in \eqref{MinimaldgModelForTwistedK}, the given differential is degreewise and up to isomorphism the only possible non-trivial differential. The minimal models \eqref{OmegaInfinityOfTwistedk} are obtained by using Rem. \ref{SuspensionandShiftingParam} and Prop. \ref{RationalInfiniteLoopSpace}. \end{proof} A form of this statement also appears as \cite[Ex. 12.5]{BunkeNikolaus14}, \cite[Sec. 3.1]{GS17}. \subsection{The $\mathrm{Ext}$/$\mathrm{Cyc}$-adjunction} \label{TheAdjunction} Our mathematical formalization of double dimensional reduction and gauge enhancement in Sec. \ref{TheMechanism} involves at its core a particular universal construction -- the \emph{$\mathrm{Ext}$/$\mathrm{Cyc}$-adjunction}. While we ultimately work with the rational homotopy theory version of this construction, we would like to amplify that this adjunction can be formulated much more generally.
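To fix intuition, we preview the simplest instance of this adjunction (a special case spelled out here for orientation only): for the constant map $\tau\colon X \to \mathbf{B}G$ at the base point, the $G$-extension is the trivial $G$-bundle over $X$, and the unit of the adjunction sends each point of $X$ to the loop winding once around the fiber over it:
\[
  \mathrm{Ext}_G(\tau) \;\simeq\; X \times G\,,
  \qquad
  x \;\longmapsto\; \big[\, g \mapsto (x,g) \,\big]
  \;\in\; \mathrm{Maps}\big(G,\, X \times G\big) /\!\!/ G
  \;=\; \mathrm{Cyc}_G\,\mathrm{Ext}_G(\tau)\,.
\]
For $G = S^1$ this fiberwise winding map is the geometric picture underlying double dimensional reduction.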
In particular, for $\mathbf{H}$ any \lq\lq good'' homotopy theory (for instance an $\infty$-topos \cite{Lurie06}) and for $G$ a strong homotopy group in $\mathbf{H}$ (namely, a grouplike $A_\infty$-monoid) with delooping $\mathbf{B}G$, there is a duality (an $\infty$-adjunction) $$ \xymatrix{ \mathbf{H} \ar@{<-}@<+6pt>[rr]^-{ \mathrm{Ext}_G } \ar@<-6pt>[rr]_-{ \mathrm{Cyc}_G }^-{\bot} && \mathbf{H}_{/\mathbf{B}G} } $$ \vspace{-3mm} \noindent between \vspace{-2mm} \begin{enumerate}[{\bf (i)}] \item the operation $\mathrm{Ext}_G$ of forming $G$-extensions; and \item the operation of \emph{$G$-cyclification}; the result of first forming the space of maps out of $G$ and then taking the homotopy quotient by the $G$-action rigidly reparametrizing these maps. \end{enumerate} In terms of abstract homotopy theory, this adjunction turns out to be right base change along the essentially unique point inclusion map $\ast \to \mathbf{B}G$. For the reader familiar with abstract homotopy theory this fully defines the adjunction, and the only point to check is that this right adjoint is indeed obtained by forming cyclifications as claimed. \medskip For any object $X\in \mathbf{H}$, specifying a $G$-principal bundle on $X$ is equivalent to the data of a map \[ \tau\colon X\longrightarrow \mathbf{B}G. \] The $G$-principal bundle associated to $\tau$ is obtained by computing the homotopy fiber at the essentially unique point in $\mathbf{B}G$: \[ \xymatrix@C=5em{ \mathrm{Ext}_G (\tau)\ar[r]\ar[d] \ar@{}[dr]|{\mbox{\footnotesize{(pb)}}} & \ast\ar[d] \\ X\ar[r]^-{\tau} & \mathbf{B}G, } \] (see \cite{NSS12} for an exposition of the general theory). Importantly for our purposes, for each map $\tau$ the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction provides us with a natural morphism -- the unit of the adjunction -- which fits into a (homotopy) commutative diagram: \[ \xymatrix@R=1em{ X\ar[rr]\ar[dr]_-{\tau} && \mathrm{Cyc}_G\mathrm{Ext}_G (\tau). 
\ar[dl] \\ &\mathbf{B}G& } \] The map $X\to \mathrm{Cyc}_G\mathrm{Ext}_G (\tau)$ is the operation that takes any point in $X$ to the map $G\to \mathrm{Ext}_G(\tau)$ which winds identically around the extension fiber over that point. This is only well-defined up to a choice of base point in the fiber, but this is precisely the ambiguity that is quotiented out in the definition of $\mathrm{Cyc}_G$. \begin{remark}[Notation] We will often abuse notation and write $\mathrm{Ext}_G(X)$ instead of $\mathrm{Ext}_G(\tau)$ when it is understood that we are considering a particular map $\tau\colon X\to \mathbf{B}G$. \end{remark} We now describe the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction in some detail in the setting of classical homotopy theory (Def. \ref{ClassicalHomotopyCategories}). More precisely, for any strict topological group $G$ we exhibit an ordinary adjunction (Theorem \ref{GCycExtAdjunction} below) between categories of topological spaces which for $G=S^1$ presents the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction in homotopy (see Rem. \ref{ExtCycAdjunctiononHomotopyCats}). Let us first recall some preliminaries to establish our conventions: \begin{defn}[Group action on topological space] \label{GSpace} For $X$ a topological space and $G$ a topological group, a \emph{(right) action} of $G$ on $X$ is a continuous function \begin{align*} X\times G &\longrightarrow X\\ (x,g)&\longmapsto x\cdot g \end{align*} such that $ x\cdot e = x $ and $(x\cdot g_1)\cdot g_2 = x\cdot (g_1\cdot g_2) $ for all $x\in X$ and $g_1, g_2\in G$. One also refers to this situation by saying that $X$ is a \emph{$G$-space}. \end{defn} \begin{defn}[Quotients by group actions] \label{Quotients} For a $G$-space $X$, we write $$ X/G \;:=\; X/( x \sim x \cdot g ) $$ for the (ordinary) quotient space, which comes with the quotient projection \begin{equation} \label{QuotientProjection} \xymatrix@R=1.5em{ X \ar[r]^-{\pi_{G}} & X/G \,.
} \end{equation} \item For $G$-spaces $X$ and $Y$, we write \begin{equation} \label{QuotientByDiagonalAction} X \times_G Y := (X \times Y)/G := \big(X \times Y\big)/\big( (x,y) \sim (x \cdot g , y \cdot g) \big) \end{equation} for the quotient by the diagonal action. \end{defn} \begin{remark}[Comparison map for free actions] \label{ComparisonMapForFreeActions} Recall that the $G$-action on $X$ is called \emph{free} if for every pair of points $(x_1,x_2) \in X \times X$ there is at most one $g \in G$ such that $x_2 = x_1 \cdot g$. For a free action there is a well-defined \emph{comparison map} which we suggestively write as \begin{align} \label{ComparisonMap} X\times_{X/G} X&\longrightarrow G\\ \notag [x_1, x_2] &\longmapsto x_1^{-1} \cdot x_2 \end{align} such that $ y_1 \cdot (x_1^{-1} \cdot x_2) = y_2 $ whenever $[y_1, y_2 ] = [x_1, x_2]$. The comparison map determines a homeomorphism $X\times_{X/G}X\to X\times G$ via $[x_1, x_2]\mapsto (x_1, x_1^{-1}\cdot x_2)$. \end{remark} \begin{defn}[$G$-Extension functor] \label{GExt} For any topological group $G$, there exists a topological space $EG$ such that $EG$ is a free $G$-space which is weakly contractible: the map $EG\to \ast$ is a weak homotopy equivalence. The quotient space \begin{equation} \label{ClassifyingSpace} B G := (E G)/G \end{equation} is the \emph{classifying space} of $G$, and the quotient projection \begin{equation} \label{UniversalGPrincipal} \xymatrix@R=1.2em{ E G \ar[r]^-{\pi_G} & B G } \end{equation} is called the \emph{universal $G$-principal bundle}. The $G$-bundle $EG\to BG$ is determined by this specification uniquely up to homotopy equivalence.
Given any topological space $X$ equipped with a map $X \overset{\phi}{\to} B G$, we can pull back the universal bundle \eqref{UniversalGPrincipal} to obtain a space $X\times_{BG} E G$ with free $G$-action whose quotient space is $X$: \begin{equation} \label{GBundleByPullback} \raisebox{20pt}{ \xymatrix@C=5em{ X \underset{B G}{\times} E G \ar@{}[dr]|{\mbox{\footnotesize{(pb)}}}\ar[d]_-{\pi_G} \ar[r] & E G \ar[d]^-{\pi_G} \\ X \ar[r]^-{\phi} & B G \,. } } \end{equation} This is the $G$-principal bundle \emph{classified by $\phi$}. For our purposes, it is useful to think of $X\times_{BG} EG$ as the \emph{extension} that is classified by the \emph{cocycle} $\phi$. Therefore, we write $\mathrm{Ext}_G$ for the functor that computes these fiber products: \begin{align} \label{GExtF} \mathrm{Ext}_G\colon \mathrm{Spaces}_{/BG} & \longrightarrow \mathrm{Spaces}\\ \notag (Y\to BG) &\longmapsto Y\underset{BG}{\times} EG. \end{align} \end{defn} \begin{defn}[Homotopy quotient] \label{HomotopyQuotient} Given a $G$-space $X$ (Def. \ref{GSpace}), write $X /\!\!/ G$ for the \emph{homotopy} quotient space. This is specified up to weak homotopy equivalence by the \emph{Borel construction} \begin{equation} \label{BorelConstruction} X/\!\!/ G := X \times_G E G \,, \end{equation} which we take as our definition. \end{defn} \begin{example}[Homotopy quotient of trivial $G$-actions] \label{HomotopyQuotientOfTrivialGAction} The homotopy quotient of the (unique) $G$-action on the point $\ast$ is the classifying space \eqref{ClassifyingSpace}: $$ \ast /\!\!/ G := (\ast \times E G)/G = (E G)/G = B G \,. $$ More generally, for a \emph{trivial} $G$-space $X$ (so that $x\cdot g = x$ for all $g$), the homotopy quotient is simply $$ X /\!\!/ G \;\simeq\; X \times B G \,. $$ \end{example} \begin{remark}[Maps related to the homotopy quotient] \label{MapsRelatedToHomotopyQuotient} The homotopy quotient (Def.
\ref{HomotopyQuotient}) is naturally equipped with the following maps of interest: \item {\bf (i)} The ordinary quotient projection \eqref{QuotientProjection} factors canonically up to homotopy via the homotopy quotient as $$ \pi_G \;:\; \xymatrix{ X \ar[r] & X /\!\!/ G \ar[r] & X / G }\!. $$ Indeed, choosing any point $p \in E G$ (which is unique up to homotopy since $EG$ is contractible), the factorization is obtained by the sequence of maps on the left-hand side of the diagram \begin{equation} \label{HomotopyQuotientReceiving} \raisebox{40pt}{ \xymatrix{ X \ar[rr]^-{ x \mapsto (x,p) } \ar@/_1.6pc/[dd]_{\pi_G} \ar[d]^-{ x \mapsto [x,p] } && X \times E G \ar[d] \\ X /\!\!/ G \ar@{=}[rr] \ar[d] && X \times_G E G \ar[d] \\ X/G \ar@{=}[rr] && X \times_G \ast } } \end{equation} On the right-hand side we first take the quotient projection by the diagonal $G$-action on $X\times EG$ and then project out the $EG$ factor via the ($G$-equivariant) map $EG\to \ast$. If the $G$-action on $X$ is free, the comparison map $X/\!\!/ G\to X/G$ is a weak homotopy equivalence (see e.g. \cite{Kor}). \item {\bf (ii)} The homotopy quotient $X/\!\!/ G$ is equipped with a canonical map to the classifying space \eqref{ClassifyingSpace} \begin{equation} \label{CanonicalCocycleOnHomotopyQuotient} X/\!\!/ G \longrightarrow B G, \end{equation} obtained via the map $X\to \ast$ as \begin{equation} \label{CanonicalCocycle} X/\!\!/ G = X \times_{G} E G \xymatrix{\ar[r]&} \ast \times_G\, E G = (E G)/G = B G \,. \end{equation} \end{remark} \begin{prop}[Extension of homotopy quotient] \label{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace} Any $G$-space $X$ is weakly homotopy equivalent to the $G$-extension (Def. \ref{GExt}) of its homotopy quotient $X/\!\!/ G$ (Def. \ref{HomotopyQuotient}) along the canonical map \eqref{CanonicalCocycleOnHomotopyQuotient}: \begin{equation} \label{ExtOfHomotopyQuotient} \xymatrix{ \mathrm{Ext}_G( X /\!\!/ G ) \ar[rr]^-{}^-{ \simeq_{\mathrm{whe}} } && X. 
} \end{equation} \end{prop} \begin{proof} Unwinding the definitions, the extension in question is obtained as the pullback \begin{equation} \label{PullbackDescriptionOfExtOfHomotopyQuotient} \mathrm{Ext}_G( X /\!\!/ G ) = (X \times_G E G) \underset{B G}{\times} E G, \end{equation} which is homeomorphic \emph{as a $G$-space} to $X \times E G$, via the map \begin{align} \label{ExtOfHomotopyQuotientForm} (X \times_G E G) \underset{B G}{\times} E G &\longrightarrow X\times EG \\ \notag \big([x,e_1], e_2\big) &\longmapsto \big(x\cdot (e_1^{-1}\cdot e_2), e_2 \big) \end{align} where we have used the comparison map \eqref{ComparisonMap} fiberwise over $BG$. Thus we have a map \[ \mathrm{Ext}_G(X/\!\!/ G)\cong X\times EG \longrightarrow X \] which is a weak homotopy equivalence by weak contractibility of $EG$. \end{proof} \begin{remark}[$\mathrm{Ext}_G(X/\!\!/ G)$ as a free resolution] The induced action on $X/\!\!/ G\times_{BG} E G$ in \eqref{PullbackDescriptionOfExtOfHomotopyQuotient} is always free, even if the action on $X$ is not. Additionally, the isomorphism \eqref{ExtOfHomotopyQuotientForm} is manifestly $G$-equivariant for the diagonal $G$-action on $X \times E G$. Hence Prop. \ref{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace} is saying that $\mathrm{Ext}_G(X/\!\!/ G)$ is a \emph{resolution} of $X$ by a free $G$-space. For example, if $X = \ast$ then we have $\ast/\!\!/ G \times_{BG} E G \cong E G$. \end{remark} We now turn to the description of the $G$-cyclification functor, which extends the cyclification functor from \cite{FSS17, Higher-T}. \begin{defn}[Mapping space out of $G$] \label{MappingSpaceOutOfG} For a topological space $Y$, write $\mathrm{Maps}(G,Y)$ for the space of continuous maps $G\to Y$. 
\footnote{We will always assume that all topological spaces are compactly generated, so that $\mathrm{Maps}(G,Y)$ is the exponential object in the category of compactly generated spaces---this completely specifies the topology.} This mapping space is regarded as equipped with the $G$-action \begin{align*} \mathrm{Maps}(G,Y)\times G&\longrightarrow \mathrm{Maps}(G,Y)\\ (f,g)&\longmapsto \big[(f\cdot g)\colon h\mapsto f(hg^{-1})\big], \end{align*} which, equivalently, is the conjugation action on maps of $G$-spaces where $Y$ is regarded as having the trivial $G$-action. \end{defn} \begin{defn}[$G$-Cyclification] \label{GCyc} For $G$ a topological group and $Y$ a topological space, the \emph{$G$-cyclification} of $Y$ is the map \[ \mathrm{Maps}(G,Y)/\!\!/ G\longrightarrow \ast /\!\!/ G =BG \] obtained by forming the homotopy quotient (Def. \ref{HomotopyQuotient}) of the mapping space $\mathrm{Maps}(G,Y)$ (Def. \ref{MappingSpaceOutOfG}). \item {\bf (i)} This assignment extends to a functor \begin{align*} \mathrm{Cyc}_G\colon \mathrm{Spaces}&\longrightarrow \mathrm{Spaces}_{/BG} \\ Y &\longmapsto \mathrm{Maps}(G,Y)\times_G EG. \end{align*} \item {\bf (ii)} For the special case that $G = S^1$ is the circle group we omit the subscript \lq\lq $G$'' and write simply \begin{equation} \label{Cyclic} \mathrm{Cyc}(X) = \mathcal{L}(X) := \mathrm{Maps}(S^1, X) /\!\!/ S^1. \end{equation} This is the homotopy quotient of the \emph{free loop space} of $X$ by the rigid rotation action on loops. The cohomology of $\mathrm{Cyc}(X)$ is the \emph{cyclic cohomology} of $X$, whence the terminology and notation. \end{defn} The key fact relating the functors $\mathrm{Ext}_G$ and $\mathrm{Cyc}_G$ in this setting is the following: \begin{theorem}[The $\mathrm{Ext}$/$\mathrm{Cyc}$-adjunction] \label{GCycExtAdjunction} Let $G$ be a topological group. Then \item {\bf (i)} The functors $\mathrm{Ext}_G$ (Def. \ref{GExt}) and $\mathrm{Cyc}_G$ (Def.
\ref{GCyc}) are adjoints, with $\mathrm{Ext}_G$ the left and $\mathrm{Cyc}_G$ the right adjoint: \vspace{-3mm} $$ \xymatrix{ \mathrm{Spaces} \ar@{<-}@<+6pt>[rr]^-{ \mathrm{Ext}_G } \ar@<-6pt>[rr]_-{ \mathrm{Cyc}_G }^-{\bot} && \mathrm{Spaces}_{/B G}\;. } $$ \vspace{-3mm} \item {\bf (ii)} The unit of the adjunction $$ \xymatrix{ X \ar[r]^-{\eta_{{}_X} } & \mathrm{Cyc}_G(\mathrm{Ext}_G(X)) } $$ is the map that sends $x\in X$ to the equivalence class of the map $G\to \mathrm{Ext}_G(X)|_x$ obtained by choosing any image of the neutral element $e\in G$ and extending to all of $G$ by the group action: $$ \eta_{{}_X} \;:\; x \longmapsto \big[ G \overset{\simeq}{\longrightarrow} \mathrm{Ext}_G(X)\vert_x \big]. $$ \item {\bf (iii)} The counit of the adjunction $$ \xymatrix@R=1.5em{ \mathrm{Ext}_G(\mathrm{Cyc}_G(Y)) \ar[dr]_-{ \simeq_{\mathrm{whe}} } \ar[rr]^-{ \epsilon_{{}_Y} } && Y \\ & \mathrm{Maps}(G,Y) \ar[ur]_-{\mathrm{ev}_e} } $$ is the composite of a weak homotopy equivalence to $\mathrm{Maps}(G,Y)$ followed by evaluation at the neutral element. \end{theorem} \begin{proof} To show {\bf (i)}, given $(c\colon X \to B G) \in \mathrm{Spaces}_{/BG}$ and $Y \in \mathrm{Spaces}$, we must produce a natural bijection of sets \begin{equation} \label{eqn:CycExtCondition} \mathrm{Hom} \Big( X \underset{B G}{\times} E G \,,\, Y \Big) \cong \mathrm{Hom}_{/BG} \big(X, \mathrm{Maps}(G,Y)\times_G EG\big). \end{equation} On the one hand, given a map \[ \xymatrix@R=1em{ X \ar[rr]^-{\phi} \ar[dr]_-{c}&& \mathrm{Maps}(G,Y)\times_G EG\ar[dl] \\ &BG& } \] that sends $x\mapsto [f_x, p_x]$, we define the map $\Phi(\phi)\colon X\underset{BG}{\times} EG \longrightarrow Y$ via $ (x,p) \longmapsto f_x (p^{-1}\cdot p_x)$. It is easy to check that this assignment does not depend on the choice of representative of the class $[f_x, p_x]$, and the argument of $f_x$ is determined by the comparison map \eqref{ComparisonMap}, since $p, p_x$ lie in the same fiber of $EG$ over $BG$.
Conversely, given a map \begin{align*} \Psi\colon X\underset{BG}{\times} EG & \longrightarrow Y\\ (x,p)&\longmapsto y_{x,p}, \end{align*} we define the map $\psi(\Psi)\colon X \to \mathrm{Maps}(G,Y)\times_G EG$ via the assignment $ x\mapsto [y_{x,p_x\cdot (-)^{-1}}, p_x] $, where $p_x \in EG$ is such that $(x,p_x)\in X\times_{BG} EG$ and $y_{x,p_x\cdot (-)^{-1}}$ denotes the map $G\to Y$ given by the assignment $g \mapsto y_{x,p_x\cdot g^{-1}}$. It is straightforward to see that $\psi(\Psi)$ is well-defined and indeed determines a map of spaces over $BG$. We now check that the assignments $\phi\mapsto \Phi(\phi)$ and $\Psi\mapsto \psi(\Psi)$ are inverses of each other. If $\phi\colon x\mapsto [f_x, p_x]$ is as above, then we have \[ \psi(\Phi(\phi))\colon x\longmapsto \Big[f_x\big((-)\cdot (p'_x)^{-1} \cdot p_x\big), p'_x\Big] = [f_x, p_x], \] where $p'_x\in EG$ is any point such that $(x,p'_x)\in X\times_{BG} \, EG$. Similarly, $\Phi(\psi(\Psi))= \Psi$ for all maps of spaces $\Psi \colon X\times_{BG} EG \to Y$. Indeed, writing $\Psi\colon (x,p)\mapsto y_{x,p}$ as above, we have \[ \Phi(\psi(\Psi))\colon (x,p) \longmapsto y_{x, p_x\cdot (p_x^{-1}\cdot p)} = y_{x,p}. \] This gives us a bijection on hom-sets of the desired form \eqref{eqn:CycExtCondition}, which is manifestly natural in $(c\colon X\to BG)$ and $Y$. This completes the proof of {\bf (i)}. As to {\bf (ii)}, we recall that the component of the unit at $(X\to BG)$ is the adjunct of the identity map on $\mathrm{Ext}_G(X)$. By the above, this map is described by the assignment $$ x \longmapsto [ (x,p_x \cdot (-)^{-1}), p_x ]\;, $$ where $p_x\in EG$ is any point such that $(x,p_x)\in X\times_{BG} EG$. But this is precisely of the claimed form: for each $x$ we choose a point $(x,p_x)$ in the fiber of $\mathrm{Ext}_G(X)$ over $X$ which is the image of the neutral element of $G$.
Then an arbitrary element $g\in G$ is sent to $(x, p_x\cdot g^{-1})$, which determines a homeomorphism $G\cong \mathrm{Ext}_G(X)|_x$. Passing to the homotopy quotient removes the choice ambiguity. For {\bf (iii)}, the component of the counit at $Y$ is the adjunct of the identity on $\mathrm{Cyc}_G(Y)$. By the above, this is simply the map \begin{align*} \big(\mathrm{Maps}(G,Y)\times_G EG\big)\underset{BG}{\times} EG &\longrightarrow Y\\ \big([f,p], p'\big) &\longmapsto f\big((p')^{-1}\cdot p\big). \end{align*} In the proof of Prop. \ref{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace}, we saw that \begin{align*} \kappa\colon \big(\mathrm{Maps}(G,Y)\times_G EG\big)\underset{BG}{\times} EG &\longrightarrow \mathrm{Maps}(G,Y)\times EG\\ \big([f,p], p'\big) & \longmapsto \big(f\cdot (p^{-1}\cdot p'), p' \big) \end{align*} is a homeomorphism of $G$-spaces, so that the counit factors as \[ \xymatrix{ \big(\mathrm{Maps}(G,Y)\times_G EG\big)\underset{BG}{\times} EG \ar[rr]^-{\simeq_{\mathrm{whe}}} && \mathrm{Maps}(G,Y) \ar[rr]^-{\mathrm{ev}_e} && Y, } \] where we have used that $EG\to \ast$ is a weak homotopy equivalence. This completes the proof. \end{proof} \begin{remark}[Extension to the homotopy categories] \label{ExtCycAdjunctiononHomotopyCats} The result we have just proven establishes the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction as an ordinary adjunction between categories of topological spaces. However, as we are interested in the corresponding adjunction between the \emph{homotopy} categories, some additional points are in order. \begin{itemize} \item Since the universal $G$-principal bundle $EG\to BG$ is always a (Serre) fibration, taking fiber products with this map preserves weak homotopy equivalences. In particular, the functor $\mathrm{Ext}_G$ is \emph{homotopical} and so descends to a functor on homotopy categories $\mathrm{Ho}(\mathrm{Spaces}_{/BG})\to \mathrm{Ho}(\mathrm{Spaces})$.
\item The homotopical properties of the cyclification functor are more involved. Indeed, $\mathrm{Cyc}_G$ may fail to be a homotopical functor in general since $Y\mapsto \mathrm{Maps}(G,Y)$ need not preserve weak homotopy equivalences, though this problem evaporates if $G$ is a CW complex. In this article, we are primarily interested in $G= S^1$, in which case $\mathrm{Cyc}=\mathrm{Cyc}_{S^1}$ \emph{is} a homotopical functor and so does determine a functor between the corresponding homotopy categories. \end{itemize} In summary, we have that Theorem \ref{GCycExtAdjunction} presents the adjunction between homotopy categories \[ \xymatrix{ \mathrm{Ho}\big(\mathrm{Spaces}\big) \ar@{<-}@<+6pt>[rr]^-{ \mathrm{Ext} } \ar@<-6pt>[rr]_-{ \mathrm{Cyc} }^-{\bot} && \mathrm{Ho}\big(\mathrm{Spaces}_{/BS^1}\big) } \] and, therefore (upon further localization), between \emph{rational} homotopy categories, which is our primary focus in this article. For more general $G$, the adjunction of Theorem \ref{GCycExtAdjunction} may fail to descend to homotopy categories. There are various ways of remedying this issue, but this takes us beyond the scope of the present article. \end{remark} Below we will be mainly concerned with the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction in the \emph{rational} approximation. To see how this works, we first establish good models for the rationalization of the cyclification functor: \begin{remark}[Minimal models for cyclic loop spaces {\cite{VigueBurghelea}}] \label{MinimalDGCModelForCyclicLoopSpace} Let $X$ be a simply-connected topological space of finite rational type (Def. \ref{ClassicalHomotopyCategories}) and let $(\mathbb{Q}[ \{\omega_i\} ], d_X)$ be a corresponding minimal DG-algebra (Prop. \ref{SullivanEquivalence} {\bf{(iv)}}).
Then \item {\bf (i)} a minimal DG-algebra model for the free loop space $\mathcal{L}(X) := \mathrm{Maps}(S^1,X)$ is obtained by adjoining a second copy $\{s \omega_i\}$ of the generators, where the degrees of these additional \lq\lq looped generators'' satisfy $\mathrm{deg}(s\omega_i) = \mathrm{deg}(\omega_i)-1 = i-1$, and with differential given by $$ d_{{}_{\mathcal{L}(X)}} \omega_i := d_{{}_X} \omega_i\,, \qquad \quad d_{{}_{\mathcal{L}(X)}} s \omega_i := - s ( d_{{}_{X}} \omega_i ) \,, $$ where on the right $s$ is uniquely extended from a linear map on generators to a graded derivation of degree $-1$. \item {\bf (ii)} a minimal DG-algebra model for the cyclic loop space $\mathrm{Cyc}(X)$ (Def. \ref{GCyc}) is obtained from this by adjoining one more generator $\widetilde{\omega}_2$ in degree 2, and taking the differential to be given by $$ d_{{}_{\mathrm{Cyc}(X)}} \widetilde{\omega}_2 = 0 \,, \qquad \quad d_{{}_{\mathrm{Cyc}(X)}} w = d_{{}_{\mathcal{L}(X)}} w + \widetilde{\omega}_2 \wedge s w \,, $$ where on the right $w \in \{\omega_i, s \omega_i\}$. \end{remark} \begin{example}[Minimal model for $\mathrm{Cyc}(S^4)$ {\cite[Example 3.3]{FSS16a}}] \label{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere} By Remark \ref{MinimalDGCModelForCyclicLoopSpace}, a minimal DG-algebra model for the cyclification of the 4-sphere is given by $$ \mathcal{O}(\mathrm{Cyc}(S^4)) \;\simeq\; \mathbb{Q}[h_3, h_7, \omega_2, \omega_4, \omega_6] \Bigg/ \left( \begin{aligned} d h_3 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 + \omega_6 \wedge \omega_2 \\[-1mm] d \omega_2 & = 0 \\[-1mm] d \omega_4 & = h_3 \wedge \omega_2 \\[-1mm] d \omega_6 & = h_3 \wedge \omega_4 \end{aligned} \right). $$ This exhibits the structure of a DG-algebra over $\mathbb{Q}[h_3]/(d h_3= 0)$, hence exhibiting a rational model for a map \begin{equation} \label{OverS3CycS4} \mathrm{Cyc}(S^4)\longrightarrow S^3. 
\end{equation} \end{example} \begin{example}[$\mathrm{Cyc}(S^4)$ covers 6-truncated twisted K-theory, rationally] \label{CyclificationOf4SphereReceives6TruncationOfTwistedK} Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere} reveals a close relationship between $\mathrm{Cyc}(S^4)$ and the 6-truncation of $\Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right)$ in the rational approximation. In terms of the minimal models of Lemma \ref{TwistedKModel}, the 6-truncation $\Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right)\langle 6 \rangle$ is obtained simply by setting all $\omega_{\bullet>6}$ to zero. We then have the following morphisms in the rational homotopy category: $$ \xymatrix@R=-2pt{ \Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right) \ar@{->}[r]^-{\tau_6} & \Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right)\langle 6 \rangle & \mathrm{Cyc}(S^4) \ar[l]_-{p} \\ h_3 & h_3 \ar@{|->}[l] \ar@{|->}[r] & h_3 \\ \omega_2 & \omega_2 \ar@{|->}[l] \ar@{|->}[r] & \omega_2 \\ \omega_4 & \omega_4 \ar@{|->}[l] \ar@{|->}[r] & \omega_4 \\ \omega_6 & \omega_6 \ar@{|->}[l] \ar@{|->}[r] & \omega_6 \\ \omega_8 && h_7 \\ \vdots } $$ In Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} we encounter lifts $\widehat \phi$ of morphisms of rational homotopy types $\phi\colon X \longrightarrow \mathrm{Cyc}(S^4)$ through this zig-zag, i.e., maps $\widehat{\phi}$ making the following diagram of maps of rational homotopy types commute: \begin{equation} \label{LiftsThroughZigZag} \raisebox{45pt}{ \xymatrix@C=6em{ && \Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right) \ar[d]^{ \tau_6 } \\ && \Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1 \right)\langle 6\rangle \\ X_{\mathbb{Q}} \ar[rr]^-{\phi} \ar@{-->}@/^1pc/[uurr]^{\widehat \phi} && \mathrm{Cyc}(S^4).
\ar[u]_-{p} } } \end{equation} \end{example} \begin{prop}[Minimal DG-module for fiberwise stabilization of $\mathrm{Cyc}(S^4)$] \label{MinimaldgModuleForFiberwiseStabilisationOfCyclicSpaceOf4Sphere} \item {\bf (i)} A minimal DG-module (Def. \ref{MinimalDGModule}) modelling the fiberwise stabilization (Prop. \ref{AdjunctionStabilization}) of the cyclic loop space of the 4-sphere (Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere}) over $S^3$ (via \eqref{OverS3CycS4}) is $$ \mathcal{M}_{\mathbb{Q}[S^3]} \left( \Sigma^\infty_{+,S^3} \mathrm{Cyc}(S^4) \right) \;\simeq\; \frac{ \mathbb{Q}[ h_3, \omega_2, \omega_4, \omega_6 ] } { (\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4 \wedge \omega_4) } \Bigg/ \left( \begin{aligned} d h_3 & = 0 \\[-1mm] d \omega_2 & = 0 \\[-1mm] d \omega_4 & = h_3 \wedge \omega_2 \\[-1mm] d \omega_6 & = h_3 \wedge \omega_4 \end{aligned} \right) \;\in\; \mathbb{Q}[h_3]\mathrm{\mbox{-}Mod}\;. $$ Here the module structure is the evident one induced by multiplication in $\mathbb{Q}[h_3]$. \item {\bf (ii)} There is a quasi-isomorphism from this minimal model to the DG-module underlying the minimal DG-algebra from Ex. 
\ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere} $$ \frac{ \mathbb{Q}[ h_3, \omega_2, \omega_4, \omega_6 ] } { (\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4 \wedge \omega_4) } \Bigg/ \left( {\begin{aligned} d h_3 & = 0 \\[-1mm] d \omega_2 & = 0 \\[-1mm] d \omega_4 & = h_3 \wedge \omega_2 \\[-1mm] d \omega_6 & = h_3 \wedge \omega_4 \end{aligned}} \right) \xrightarrow{\;\;\simeq_{\mathrm{qi}}\;\;} \mathbb{Q}[ h_3, h_7, \omega_2, \omega_4, \omega_6 ] \Bigg/ \left( {\begin{aligned} d h_3 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2}\omega_4 \wedge \omega_4 + \omega_2 \wedge \omega_6 \\[-1mm] d \omega_2 & = 0 \\[-1mm] d \omega_4 & = h_3 \wedge \omega_2 \\[-1mm] d \omega_6 & = h_3 \wedge \omega_4 \end{aligned}} \right) $$ given by any choice of linear splitting of the underlying quotient map of graded algebras, for example, by the map sending equivalence classes on the left to their unique representatives on the right that are at most linear in $\omega_4$. \end{prop} \begin{proof} For {\bf(i)}: by Prop. \ref{FiberwiseSuspensionSpectrumdgModel} and Lemma \ref{MinimalDgModulesForFiberSpectra} the underlying graded $\mathbb{Q}[h_3]$-module is the free $\mathbb{Q}[h_3]$-module on the cohomology of the homotopy cofiber of $$ \mathbb{Q}[h_3] \longrightarrow \mathbb{Q}[ h_3, h_7, \omega_2, \omega_4, \omega_6 ] \Bigg/ \left( {\begin{aligned} d h_3 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2}\omega_4 \wedge \omega_4 + \omega_2 \wedge \omega_6 \\[-1mm] d \omega_2 & = 0 \\[-1mm] d \omega_4 & = h_3 \wedge \omega_2 \\[-1mm] d \omega_6 & = h_3 \wedge \omega_4 \end{aligned}} \right), $$ where the minimal DG-algebra on the right is from Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere}.
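To see where the relation in the claimed minimal model arises, note that upon base change along the augmentation $\mathbb{Q}[h_3]\to \mathbb{Q}$ (i.e., upon setting $h_3$ to zero) the differential of the minimal DG-algebra of Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere} reduces to \[ d\omega_2 = 0\,, \qquad d\omega_4 = 0\,, \qquad d\omega_6 = 0\,, \qquad d h_7 = -\tfrac{1}{2}\omega_4\wedge\omega_4 + \omega_2\wedge\omega_6\,, \] so that the only relation imposed in cochain cohomology is the trivialization of $\omega_2\wedge\omega_6 -\tfrac{1}{2}\omega_4\wedge\omega_4$ witnessed by $h_7$.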
This cofiber cohomology is evidently the graded-commutative algebra $$ \frac{ \mathbb{Q}[\omega_2, \omega_4, \omega_6 ] } { (\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4 \wedge \omega_4) }\;, $$ obtained as the quotient by the two-sided tensor ideal generated by $\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4\wedge\omega_4$. The graded vector space underlying the minimal DG-module is therefore $$ \mathbb{Q}[h_3] \otimes \frac{ \mathbb{Q}[\omega_2, \omega_4, \omega_6 ] } { (\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4 \wedge \omega_4) } \;\simeq\; \frac{ \mathbb{Q}[ h_3, \omega_2, \omega_4, \omega_6 ] } { (\omega_6 \wedge \omega_2 -\tfrac{1}{2} \omega_4 \wedge \omega_4) }\;. $$ The differential on this must be such that fiberwise stabilization does not change the cohomology (by Prop. \ref{FiberwiseSuspensionSpectrumdgModel}). This completely determines the differential, fixing it as claimed. The second point {\bf (ii)} now follows at once. \end{proof} \section{The A-type orbispace of the 4-sphere} \label{TheATypeOrbispaceOfThe4Sphere} In this section we consider a particular circle action on the 4-sphere, as well as the induced homotopy quotient, which we call the \emph{A-type orbispace of the 4-sphere} (see Def. \ref{ATypeOrbispaceOf4Sphere} and Rem. \ref{OrbispaceTerminology} below). We first provide an informal string-theoretic motivation for considering this space in Rem. \ref{TheATypeQuotientFromSpacetime}, and then substantiate this by a more formal mathematical analysis. After establishing some results on the rational homotopy type of the A-type orbispace in Sec. \ref{RationalHomotopyTypeOfATypeOrbispace}, our main result Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} shows that, rationally, there is a copy of twisted K-theory in the fiberwise stabilization of the A-type orbispace, fibered over the 3-sphere. In Sec. 
\ref{TheMechanism}, we demonstrate how this result witnesses the phenomenon of gauge enhancement of M-branes. \begin{defn}[The A-type orbispace of the 4-sphere] \label{ATypeOrbispaceOf4Sphere} Writing $S^4$ as the unit sphere in $\mathbb{R}^5 =\mathbb{R}\oplus \mathbb{C}^2$, the identification \begin{equation} \label{SU2ActionOn4Sphere} S^4 \;=\; S( \mathbb{R} \oplus \!\!\!\!\! \xymatrix{ \mathbb{C}^2 \ar@(ul,ur)[]^{ {\rm SU}(2)_L } } \!\!\!\!\! ) \end{equation} shows that $S^4$ inherits an action of ${\rm SU}(2)$. Specifically, on the right-hand side above we are referring to the defining linear representation of ${\rm SU}(2)$ on $\mathbb{C}^2$, regarded as a \emph{left} action. This restricts along the canonical inclusion $S^1 \simeq {\rm U}(1)\hookrightarrow {\rm SU}(2)$ to define an $S^1$-action on $S^4$. We refer to the corresponding homotopy quotient \eqref{BorelConstruction} \begin{equation} \label{ATypeOrbispace} S^4 /\!\!/ S^1 \;\simeq\; S^4 \times_{S^1} E S^1 \end{equation} as the \emph{A-type orbispace of the 4-sphere}. \end{defn} \begin{remark}[$A$-series vs. $S^1$] \label{OrbispaceTerminology} The terminology in Def. \ref{ATypeOrbispaceOf4Sphere} is motivated as follows: the finite subgroups of ${\rm SU}(2)$ have a famous ADE-classification, corresponding to the simply-laced Dynkin diagrams. The finite subgroups in the A-series are cyclic and, up to conjugation, are all subgroups of the canonical copy of $S^1$ inside ${\rm SU}(2)$: $$ \xymatrix{ \mathbb{Z}_{n+1} \; \ar@{^{(}->}[r]& S^1 \simeq {\rm U}(1) \; \ar@{^{(}->}[r] & {\rm SU}(2) }. $$ The $S^1$-action considered in Def. \ref{ATypeOrbispaceOf4Sphere} is thus the limiting case (as $n \to \infty$) of the A-series actions. Now the homotopy quotient of the smooth 4-sphere by such a finite group action is an \emph{orbifold}, hence an \emph{A-type orbifold} for an A-series group action (see \cite{ADE} for further discussion).
More generally, homotopy quotients by (possibly non-finite) topological groups are \emph{orbispaces} \cite{HenriquesGepner07}, whence our terminology. \end{remark} The following result is immediate: \begin{prop}[Quotient and fixed points of the A-type orbispace] \label{SystemOfFixedPointsAndQuotientsOfATypeActionOn4Sphere} For the A-type $S^1$-action on $S^4$ (Def. \ref{ATypeOrbispaceOf4Sphere}), it holds that: \item {\bf (i)} The ordinary quotient space is the 3-sphere: $ S^4 / S^1 \;\simeq\; S^3 $. Hence, via \eqref{HomotopyQuotientReceiving} there is a canonical map from the A-type orbispace of the 4-sphere to the 3-sphere: \begin{equation} \label{MapFromATypeOrbispaceTo3Sphere} \raisebox{12pt}{\xymatrix@R=6pt{ S^4 /\!\!/ S^1 \ar[rr] && S^3 \\ S^4 \times_{S^1} E S^1 \ar[rr]_{ \mathrm{id}\times_{S^1} p } \ar@{=}[u] && S^4 \times_{S^1} \ast \ar@{=}[u] }} \end{equation} \vspace{-3mm} \item {\bf (ii)} The space of $S^1$-fixed points is the 0-sphere, included as two antipodal points $$ S^0 \;=\; \left(S^4\right)^{S^1} \xymatrix{\ar@{^{(}->}[r]&} S^4 \,. $$ In summary, we have the following system of spaces over $S^3$: \begin{equation} \label{QuotientOfS4ByS1OverS3} \raisebox{20pt}{\xymatrix@R=1pt@C=4em{ \overset{ \mbox{ \tiny \begin{tabular}{c} \emph{Fixed}\;\; \\ \emph{points}\;\; \end{tabular}}} {\overbrace{S^0 = \big(S^4\big)^{S^1}}} \ar[dddrr] \;\ar@{^{(}->}[r] & \overset{ \mbox{ \tiny \begin{tabular}{c} \emph{4-sphere}\;\;\; \end{tabular}}} {\overbrace{S^4}} \ar[dddr] \ar[r] & \overset{ \mbox{ \tiny \begin{tabular}{c} \emph{Homotopy}\;\; \\ \emph{quotient}\;\; \end{tabular}}} {\overbrace{S^4/\!\!/ S^1}} \ar[ddd] \ar[r] & \overset{ \mbox{ \tiny \begin{tabular}{c} \emph{Naive}\;\; \\ \emph{quotient}\;\; \end{tabular}}} {\overbrace{S^4/ S^1}} \ar@{=}[dddl] \\ \\ \\ && S^3 }} \end{equation} \end{prop} Below in Sec. \ref{TheMechanism} we regard the (rationalization of the) A-type orbispace of the 4-sphere as the \emph{coefficient} of a generalized cohomology theory. 
However, as explained in \cite[Sec. 2.2]{ADE}, the 4-sphere coefficient here ultimately originates as a factor in a black M5-brane spacetime $\sim \mathrm{AdS}_7 \times S^4$. With this in mind, the spaces appearing in \eqref{QuotientOfS4ByS1OverS3} readily explain those spaces appearing in the string theory literature. \begin{remark}[The A-type orbispace from black M5-brane geometry] \label{TheATypeQuotientFromSpacetime} \item {\bf{(i)}} The near-horizon geometries of black M2-brane and black M5-brane solutions of 11-dimensional supergravity are given by $\mathrm{AdS_4} \times S^7$ and $\mathrm{AdS}_7 \times S^4$, respectively \cite{Gueven92}. Both of the spherical factors admit natural maps to the four-sphere, namely the quaternionic Hopf fibration $H_\mathbb{H}\colon S^7 \to S^4$ and the identity map $S^4 \to S^4$, and these maps generate the torsion-free homotopy of $S^4$. It is natural to posit that $S^4$ is the coefficient for a \emph{nonabelian} cohomology theory (in this case \emph{cohomotopy}) that measures M-brane charge in the spirit of Dirac charge quantization \cite{Freed00}, at least rationally \cite{S-top, cohomotopy, FSS16a}. \begin{equation} \label{SpacetimeMaps} \raisebox{50pt}{\xymatrix@R=1em@C=5em{ \big[ \underset{ \tiny \begin{tabular}{c} black M2-brane \\ spacetime \end{tabular} }{\underbrace{ \mathrm{AdS}_4 \times S^7}} \ar[r]^-{{\rm pr}_2} \ar@/^2pc/[rr]^{ \mbox{ \tiny \begin{tabular}{c} one unit of \\ M2-brane charge \end{tabular} } } & \underset{ \!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mbox{ \tiny \begin{tabular}{c} sphere around \\ M2-brane \\ singularity \end{tabular} } \!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\! 
}{ \underbrace{ S^7 }} \ar[r]^-{ H_{\mathbb{H}} } & S^4 \big] & \in \big[Y, S^4\big] \\ && & \mbox{ \footnotesize \begin{tabular}{c} cohomotopy classes \\ of $Y$ in degree 4 \end{tabular} } \\ \big[ \underset{ \tiny \begin{tabular}{c} black M5-brane \\ spacetime \end{tabular} }{\underbrace{ \mathrm{AdS}_7 \times S^4}} \ar[r]^-{{\rm pr}_2} \ar@/^2pc/[rr]^{ \mbox{ \tiny \begin{tabular}{c} one unit of \\ M5-brane charge \end{tabular} } } & \underset{ \!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mbox{ \tiny \begin{tabular}{c} sphere around \\ M5-brane \\ singularity \end{tabular} } \!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\! }{ \underbrace{ S^4 }} \ar[r]^{ \mathrm{id}} & S^4 \big] & \in \big[Y, S^4\big] }} \end{equation} \item {\bf (ii)} More generally, the black M5-brane may sit inside an $\mathrm{MK6}$, which itself is located at the singular locus of a global orbifold $$ \mathrm{AdS}_7 \times S^4 /\!\!/ G_{\mathrm{ADE}} $$ (see \cite[Ex. 2.7]{ADE} for a precise statement and for pointers to the literature), where $G_{\mathrm{ADE}} \subset {\rm SU}(2)$ is a finite subgroup acting on the 4-sphere via the identification $$ \xymatrix{S^4 \ar@(ul,ur)[]^{ G_{\mathrm{ADE}} }} \;\simeq\; S( \mathbb{R} \oplus \!\!\!\xymatrix{ \mathbb{C}^2 \ar@(ul,ur)[]^{ G_{\mathrm{ADE}} } } \!\!\! ) \,. $$ In order for the 4-sphere charge coefficient to be able to measure the unit charges of such M5-branes sitting at ADE singularities in a manner generalizing \eqref{SpacetimeMaps}, it must be equipped with that same group action. The resulting \emph{equivariant} cohomotopy theory for M-branes is the subject of \cite{ADE}. \item {\bf (iii)} Our current focus is on the A-series subgroups, which up to conjugation are the cyclic subgroups $ \mathbb{Z}_{n+1} \hookrightarrow S^1 = {\rm U}(1) \hookrightarrow {\rm SU}(2) $, as in Rem. \ref{OrbispaceTerminology}. By analogy with the case for M2-branes as in \cite[p.
3]{ABJM08}, we may interpret the $S^1$-action as being that of the M-theory circle fibration over 10d type IIA supergravity. With this interpretation in mind, passage to the finite A-type orbifold quotient $$ \mathrm{AdS}_7 \times S^4 /\!\!/ \mathbb{Z}_{n+1} $$ corresponds to shrinking the M-theory circle fiber, and hence the coupling constant of non-perturbative type IIA string theory, by the factor $n+1$. The limit $n \to \infty$, in which the cyclic groups $\mathbb{Z}_{n+1}$ exhaust the group $S^1$, corresponds to the limit of perturbative type IIA string theory. Via the maps of \eqref{HomotopyQuotientReceiving}: $$ \scalebox{.9}{ \xymatrix@R=7pt{ \fbox{ \begin{tabular}{c} M-theoretic \\ near horizon spacetime \\ of black M5-brane \end{tabular} } & \mathrm{AdS}_7 \times S^4 \hspace{-.75mm} \ar[d] \\ \fbox{ \begin{tabular}{c} M-theoretic \\ near horizon spacetime \\ of black M5-brane at A-type singularity \\ for coupling $g/(n+1)$ \end{tabular} } & \mathrm{AdS}_7 \times S^4 /\!\!/ \mathbb{Z}_{n+1} \ar[d] \\ \fbox{ \begin{tabular}{c} Type IIA string-theoretic \\ near horizon spacetime \\ of black NS5-brane inside black D6-brane \end{tabular} } & \mathrm{AdS}_7 \times S^4 /\!\!/ S^1 \ar[d] \\ \fbox{ \begin{tabular}{c} Type IIA string-theoretic \\ near horizon spacetime \\ of black NS5-brane \end{tabular} } & \mathrm{AdS}_7 \times S^3. } } $$ Applying the same logic as before, we might expect that the A-type orbispace $S^4 /\!\!/ S^1$ serves as the charge quantization coefficient when M-branes are identified with their dual incarnations as D-branes in type IIA string theory. That this is indeed the case, up to a subtlety related to fiberwise stabilization, is essentially our result on gauge enhancement. \item {\bf (iv)} While the A-type orbispace $S^4 /\!\!/ S^1$ has not previously featured in the string theory literature, the \emph{ordinary} quotient space $S^4 / S^1 \simeq S^3$ has been considered in this context.
We briefly survey the related literature: \begin{itemize} \item The dimensional reduction of 11-dimensional supergravity on the 4-sphere factor yields a maximal ${\rm SO}(5)$-gauged supergravity in seven dimensions \cite{PNT}. The consistency of this reduction is established in \cite{NVvN} and a systematic classification of such reductions is given in \cite{FS2}. On the other hand, the reduction of type IIA supergravity on $S^3$ leads to an ${\rm SO}(4)$-gauged supergravity in seven dimensions. To compare these two gauged supergravity theories, one needs a means of breaking the ${\rm SO}(5)$ gauge symmetry. In \cite{CLPST} the comparison between the two reductions is achieved using the singular scaling limit of $S^4$ opening up to $S^3 \times \ensuremath{\mathbb R}$, based on earlier arguments \cite{HW,CLLP}. The consistency of such reductions is studied and established in \cite{CLP}. \item Reductions with less symmetry are also possible, for instance by gauging only a left-acting ${\rm SU}(2)$ subgroup of ${\rm Spin}(4) \cong {\rm SU}(2)_L \times {\rm SU}(2)_R$ \cite{CS}. In \cite{NV}, this was achieved using a singular limit of the $S^4$ reduction of 11d supergravity. In \cite[Sec. 2.2]{ADE} it is explained how the distinction between these actions relates to the 4-sphere detecting black M5-branes as well as black M2-branes at singularities. \end{itemize} \end{remark} \begin{remark}[Other circle actions on the 4-sphere] The study of circle actions on spheres is a central problem in the theory of transformation groups. For a compact connected topological group $G$ acting non-trivially on the 4-sphere, requiring orbits to be of dimension $\leq 1$ immediately forces $G$ to be the circle group. However, the action need not be equivalent to a differentiable one \cite{MZ}. Furthermore, there are infinitely many nonlinear circle actions on $S^4$ \cite{Pao}.
Since $S^4$ is compact, it follows (by applying \cite[Theorem 7.33]{FOT}) that in any case the fixed point set $F$ will have the same Euler characteristic as $S^4$, namely 2. Since the sum of dimensions of the cohomology groups of $F$ is always at most as large as the corresponding sum for the space $S^4$ \cite[Theorem 7.37]{FOT}, this forces $\dim H^{\rm ev}(F; \ensuremath{\mathbb Q})=2$ and $\dim H^{\rm odd}(F; \ensuremath{\mathbb Q})=0$. Away from the trivial case ($F\simeq_\mathbb{Q} S^4$), this implies that there are only two possibilities for the rational homotopy type of the fixed point space: either $F$ is rationally a 0-sphere (union of two points), or $F$ is a rational homology 2-sphere. The latter case is described in \cite[Ex. 7.39]{FOT}; in this article we deal exclusively with the former case. \end{remark} \subsection{Rational homotopy type of the A-type orbispace} \label{RationalHomotopyTypeOfATypeOrbispace} In this section we study the rational homotopy type of the A-type orbispace of the 4-sphere (Def. \ref{ATypeOrbispaceOf4Sphere}) and apply the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction to it. The main result is Prop. \ref{MinimalDGCAlgebraModelForATypeOrbispace} below, but first we establish some preliminary results: \begin{lemma}[Rational homotopy and cohomology of $S^4 /\!\!/ S^1$] \label{HomotopyGroupsOfS4OverS1} For \emph{every} $S^1$-action on the 4-sphere, the resulting homotopy quotient $S^4 /\!\!/ S^1$ has the following properties: \item {\bf (i)} Its rational homotopy groups are \begin{equation} \label{ATypoeOrbispaceRationalHomotopyGroups} \pi_\bullet^{\mathbb{Q}}(S^4 /\!\!/ S^1) := \pi_\bullet(S^4 /\!\!/ S^1)\otimes \mathbb{Q}\simeq \begin{cases} \mathbb{Q} & \mbox{\emph{in dimensions 2, 4 and 7}}\\ \,0& \mbox{\emph{otherwise}}. 
\end{cases} \end{equation} \item {\bf (ii)} Its rational cohomology groups are \begin{equation} \label{ATypeOrbispaceRationalCohomology} H^\bullet(S^4 /\!\!/ S^1,\mathbb{Q}) \;\simeq\; \langle \widetilde \omega_0 \rangle \oplus \langle \omega_2 \rangle \oplus \underset{k \in \mathbb{N}}{\bigoplus}\, \big\langle \, \omega_2^{\wedge(k+2)} \,,\, \omega_4 \wedge \omega_2^{\wedge k} \, \big\rangle \,, \end{equation} so that $\dim H^{2k}(S^4/\!\!/ S^1,\mathbb{Q})$ is $1$ for $k=0,1$, is $2$ for $k\geq 2$, and the odd cohomology vanishes. \end{lemma} \begin{proof} The first statement follows with the long exact sequence of rational homotopy groups induced by the homotopy fiber sequence \[ S^1 \longrightarrow S^4 \longrightarrow S^4 /\!\!/ S^1\,. \] The second statement follows with the corresponding multiplicative Serre spectral sequence in rational cohomology (though we make no claims regarding the \emph{algebra} structure on cohomology---the notation is merely suggestive of the manner in which these classes arise in the Serre spectral sequence). \end{proof} \begin{lemma}[DG-algebra model for general $S^4 /\!\!/ S^1$] \label{dgcAlgebraModelForATypeOrbispaceOf4Sphere} For \emph{every} $S^1$-action on the 4-sphere, the minimal DG-algebra (via Prop. \ref{SullivanEquivalence}) of the resulting homotopy quotient (Def. \ref{HomotopyQuotient}) is of the form \begin{equation} \label{dgcModelForHomotopyQuotientOf4SphereByCircleAction} \mathcal{O}\left( S^4 /\!\!/ S^1 \right) \;\simeq\; \mathbb{Q}[ \omega_2, \omega_4, h_7 ]\bigg/ \left( \begin{aligned} d \omega_2 & = 0 \\[-1mm] d \omega_4 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 \\ & \phantom{=} + c_1 \, \omega_2^{\wedge 4} + c_2 \, \omega_2^{\wedge 2} \wedge \omega_4 \end{aligned} \right) \;\in\; \mathrm{DGCAlg} \end{equation} for some coefficients $c_1, c_2 \in \mathbb{R}$. \end{lemma} \begin{proof} By Prop. 
\ref{SullivanEquivalence}, the minimal DG-algebra model of $S^4 /\!\!/ S^1$ has the following properties: \begin{enumerate} \item as a graded algebra, it is generated by the rational homotopy groups; \item the differential on the minimal model is such that the cochain cohomology reproduces the rational cohomology of $S^4/\!\!/ S^1$. \end{enumerate} By Lemma \ref{HomotopyGroupsOfS4OverS1}, we therefore have that the underlying graded commutative algebra of the minimal model of \emph{any} $S^4/\!\!/ S^1$ is $\mathbb{Q}[ \omega_2, \omega_4, h_7 ]$. By the second item in Lemma \ref{HomotopyGroupsOfS4OverS1} and for degree reasons, the differential is necessarily of the form \[ d\omega_2 =0, \qquad\quad d\omega_4 =0, \qquad\quad dh_7 \neq 0. \] There is a homotopy fiber sequence \[ S^4 \longrightarrow S^4 /\!\!/ S^1 \longrightarrow BS^1, \] which in the rational models is reflected by the requirement that setting $\omega_2$ to zero in $\mathcal{O}(S^4/\!\!/ S^1)$ produces a (necessarily minimal) DG-algebra model for $S^4$. Comparing with Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}, this means that \[ dh_7 = -\tfrac{1}{2} \omega_4\wedge \omega_4 + \text{terms of degree 8 at most linear in $\omega_4$}, \] which completes the proof. \end{proof} \begin{lemma}[Rational Ext/Cyc-adjunction unit at $S^4/\!\!/ S^1$] \label{ExtCycAdjunctionForATypeOrbispace} For any $S^1$-action on $S^4$, the composite of the Ext/Cyc-adjunction unit (Theorem \ref{GCycExtAdjunction}) at $S^4 /\!\!/ S^1$ with the cyclification of the equivalence of Prop. 
\ref{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace} is presented by the map of minimal DG-algebra models \begin{equation} \label{AdjunctionUnitOnRationalATypeOrbispaceISH} \raisebox{45pt}{\xymatrix@R=-3pt@C=4em{ S^4 /\!\!/ S^1 \ar[r]^-{\eta_{S^4 /\!\!/ S^1}} & \mathrm{Cyc}\,\mathrm{Ext}(S^4 /\!\!/ S^1) \ar[r]^-{\simeq_{\mathrm{whe}}} & \mathrm{Cyc}(S^4) \\ \omega_2 \vphantom{ \omega_2^{\wedge 3} } && \omega_2 \ar@{|->}[ll] \\ \omega_4 \vphantom{ \omega_2^{\wedge 3} } && \omega_4 \ar@{|->}[ll] \\ { c_1 \, \omega_2^{\wedge 3} + c_2 \, \omega_2 \wedge \omega_4 } && \omega_6 \ar@{|->}[ll] \\ 0 \vphantom{ \omega_2^{\wedge 3} } && h_3 \ar@{|->}[ll] \\ h_7 \vphantom{ \omega_2^{\wedge 3} } && h_7 \ar@{|->}[ll] }} \end{equation} where $c_1, c_2 \in \mathbb{R}$ are the constants as in \eqref{dgcModelForHomotopyQuotientOf4SphereByCircleAction}. \end{lemma} \begin{proof} We determine what the map $S^4/\!\!/ S^1 \to \mathrm{Cyc}(S^4)$ does on generators of the rational homotopy groups. With this we can posit a map on minimal models, which turns out to be uniquely specified for degree reasons. To begin, we observe that the degree-3 generator $h_3$ must be sent to zero (since $S^4 /\!\!/ S^1$ has no free homotopy in dimension $3$). The degree-2 generator $\omega_2$ on the right of \eqref{AdjunctionUnitOnRationalATypeOrbispaceISH} is sent to the generator of the same name on the left: all morphisms considered are over $BS^1$, and $\omega_2$ generates the minimal model of this space (Ex. \ref{MinimalDGCAlgebraModelForClassifyingSpace}). 
The zig-zag identity of the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction gives us a commuting diagram $$ \xymatrix{ S^4 \ar@{}[r]|-{\simeq_{\mathrm{whe}}} & \mathrm{Ext}(S^4/\!\!/ S^1) \ar[drrr]_{\mathrm{id }} \ar[rrr]^-{ \mathrm{Ext}(\eta_{S^4/\!\!/ S^1}) } &&& \mathrm{Ext}\,\mathrm{Cyc}\,\mathrm{Ext}(S^4/\!\!/ S^1) \ar[d]^{ \epsilon_{\mathrm{Ext}(S^4 /\!\!/ S^1)} } \ar@{}[r]|-{ \simeq_{\mathrm{whe}} } & \mathrm{Maps}(S^1, S^4) \ar[d]^{ \mathrm{ev}_0 } \\ && && \mathrm{Ext}(S^4/\!\!/ S^1) \ar@{}[r]|-{\simeq_{\mathrm{whe}}} & S^4 } $$ with weak homotopy equivalences as shown due to Prop. \ref{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace}. Examining the DG-algebra model of the free loop space (Remark \ref{MinimalDGCModelForCyclicLoopSpace}), this means that the unit $\eta$ sends non-shifted (non-looped) algebra generators to themselves. These generators are $\omega_4$ and $h_7$. So far we have defined the desired map of DG-algebras on the generators $\omega_2$, $\omega_4$, $h_3$, and $h_7$. This map respects the differentials on $\omega_2$, $\omega_4$ and $h_3$, whereas respect for the differential on $h_7$ $$ \xymatrix{ {\begin{aligned} & -\tfrac{1}{2} \omega_4 \wedge \omega_4 \\ & + \omega_2 \wedge \big( c_1 \omega_2^{\wedge 3} + c_2 \omega_2\wedge\omega_{4} \big) \end{aligned}} && {\begin{aligned} & -\tfrac{1}{2} \omega_4 \wedge \omega_4 \\ & + \omega_2 \wedge \omega_6 \end{aligned}} \ar@{|-->}[ll] \\ h_7 \ar@{|->}[u]^-d && h_7 \ar@{|->}[ll] \ar@{|->}[u]_-d } $$ forces $\omega_6 \mapsto c_1 \omega_2^{\wedge 3} + c_2 \omega_2 \wedge \omega_4$. This completely determines the map of minimal models \eqref{AdjunctionUnitOnRationalATypeOrbispaceISH}. \end{proof} \begin{example}[DG-algebra model for homotopy quotient of the trivial action] \label{dgcAlgebraModelForHomotopyQuotientOfTrivialAction} The homotopy quotient of the trivial $S^1$-action on $S^4$ is $S^4 /\!\!/ S^1 \simeq S^4 \times BS^1$ (Ex.\ref{HomotopyQuotientOfTrivialGAction}). 
The minimal DG-algebra model of this product space is obtained by setting $c_1=c_2 =0$ in Lemma \ref{dgcAlgebraModelForATypeOrbispaceOf4Sphere}: $$ \mathcal{O}\left( S^4 \times B S^1 \right) \;\simeq\; \mathbb{Q}[ \omega_2, \omega_4, h_7 ]\bigg/ \left( \begin{aligned} d \omega_2 & = 0 \\[-1mm] d \omega_4 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned} \right) \;\in\; \mathrm{DGCAlg} \,. $$ \end{example} \begin{lemma}[A-type action on 4-sphere is rationally trivial] \label{ATypeActionIsRationallyTrivial} The A-type circle action on the 4-sphere (Def. \ref{ATypeOrbispaceOf4Sphere}) is rationally trivial. That is, the action is represented in rational homotopy theory by the coprojection map of DG-algebras \begin{align*} \mathcal{O}(S^4) &\longrightarrow \mathcal{O}(S^1)\otimes \mathcal{O}(S^4)\\ \eta &\longmapsto 1\otimes \eta. \end{align*} In particular, the A-type orbispace $S^4 /\!\!/ S^1$ is equivalent to the rationalization of the trivial action (Ex. \ref{dgcAlgebraModelForHomotopyQuotientOfTrivialAction}). \end{lemma} \begin{proof} It is sufficient to argue on minimal models: the action $\mu\colon S^1\times S^4 \to S^4$ determines a dual map in the category of DG-algebras $\mu^\ast\colon \mathcal{O}(S^4) \to \mathcal{O}(S^1)\otimes \mathcal{O}(S^4)$ via Prop. \ref{SullivanEquivalence}. Since minimal models are cofibrant and fibrant in the model structure on DG-algebras (see \cite{Hess06} for a review), the map $\mu^\ast$ is homotopic to a map between minimal DG-algebra models: \[ \mathbb{Q}[\omega_4, h_7] \bigg/ \left( \begin{aligned} d \omega_4 & = 0 \\[-1mm] dh_7 &= -\tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned} \right) \xrightarrow{\quad \nu \quad} \mathbb{Q}[\theta_1, \omega_4, h_7] \bigg/ \left( \begin{aligned} d \theta_1 & = 0 \\[-1mm] d \omega_4 & = 0 \\[-1mm] dh_7 &= -\tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned} \right)\!. \] The source of this map is the minimal DG-algebra model of the 4-sphere (Ex.
\ref{MinimalDgcAlgebraModelFor4Sphere}), and the extra degree-1 generator $\theta_1$ appearing in the target corresponds to $\pi_1 (S^1)= \mathbb{Z}$. The map $\nu$ is completely determined by the images of the generators $\omega_4$ and $h_7$. Now, ${\rm SU}(2)$ acts on $S^4$ via the inclusion ${\rm SU}(2)\hookrightarrow {\rm SO}(5)$ (compare with \eqref{SU2ActionOn4Sphere}). In particular, the ${\rm SU}(2)$-action preserves the round volume form on $S^4$. Restricting along $S^1 \hookrightarrow {\rm SU}(2)$ and observing that the generator $\omega_4$ represents the round volume form in cohomology forces $\nu(\omega_4) = \omega_4$. Up to non-zero scaling, the only way to define $\nu$ on the degree-7 generator $h_7$ that respects the differential is $\nu(h_7) = h_7$. \end{proof} \begin{remark} Some remarks on the above results are in order: \item {\bf (i)} In the above proof, we refer to the degree-7 generator in the minimal model of $S^4$ as $h_7$, in line with the notation used throughout this section and in Sec. \ref{TheMechanism}. This generator was called $\omega_7$ in Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}. \item {\bf (ii)} In interpreting expression \eqref{dgcModelForHomotopyQuotientOf4SphereByCircleAction} it may be worthwhile to view passing to rational homotopy theory as a homotopical analogue of forming first derivatives: that the A-type action on the 4-sphere is rationally trivial is analogous to finding that the derivative of some non-trivial function on the real line vanishes at the origin. That is to say, the A-type orbispace does \emph{not} itself split as a product, but it does in the rational approximation. This turns out to be crucial for our gauge enhancement mechanism. In the companion article \cite{ADE} we go further and work in \emph{equivariant} rational homotopy theory, which captures a great deal more information.
\end{remark} In summary, we have the following: \begin{prop}[Minimal DG-algebra model for the A-type orbispace] \label{MinimalDGCAlgebraModelForATypeOrbispace} The minimal DG-algebra model of the A-type orbispace of the 4-sphere (Def. \ref{ATypeOrbispaceOf4Sphere}) is $$ \mathcal{O}\left( S^4 /\!\!/ S^1 \right) \;\simeq\; \mathbb{Q}[ \omega_2, \omega_4, h_7 ]\bigg/ \left( \begin{aligned} d \omega_2 & = 0 \\[-1mm] d \omega_4 & = 0 \\[-1mm] d h_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned} \right) \;\in\; \mathrm{DGCAlg} \,. $$ Furthermore, the unit of the $\mathrm{Ext}\dashv\mathrm{Cyc}$-adjunction (Theorem \ref{GCycExtAdjunction}) on the A-type orbispace, composed with the equivalence \eqref{CanonicalCocycle} from Prop. \ref{ExtensionOfHomotopyQuotientEquivalentToOriginalSpace}, pulls back the generators of the DG-algebra model for $\mathrm{Cyc}(S^4)$ (Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere}) as follows: \begin{equation} \label{AdjunctionUnitOnRationalATypeOrbispace} \raisebox{40pt}{ \xymatrix@R=-2pt@C=4em{ S^4 /\!\!/ S^1 \ar[r]^-{\eta_{S^4 /\!\!/ S^1}} & \mathrm{Cyc}\,\mathrm{Ext}(S^4 /\!\!/ S^1) \ar[r]^-{ \mathrm{Cyc}(\kappa) }_-{\simeq_{\mathrm{whe}}} & \mathrm{Cyc}(S^4) \\ \omega_2 && \omega_2 \vphantom{h_7}\ar@{|->}[ll] \\ \omega_4 && \omega_4\vphantom{h_7} \ar@{|->}[ll] \\ 0 && \omega_6 \vphantom{h_7}\ar@{|->}[ll] \\ 0 && h_3 \vphantom{h_7}\ar@{|->}[ll] \\ h_7 && h_7 \ar@{|->}[ll] } } \end{equation} \end{prop} \begin{proof} By Lemma \ref{ATypeActionIsRationallyTrivial}, the minimal DG-algebra model for the A-type homotopy quotient $S^4 /\!\!/ S^1$ coincides with that of the trivial action, given by Ex. \ref{dgcAlgebraModelForHomotopyQuotientOfTrivialAction}, hence is given by setting $c_1= c_2= 0$ in \eqref{dgcModelForHomotopyQuotientOf4SphereByCircleAction}. We conclude with Lemma \ref{ExtCycAdjunctionForATypeOrbispace}.
\end{proof} This concludes our discussion of the rational homotopy type of the A-type orbispace of the 4-sphere. Next, we discuss the fiberwise stabilization of $S^4 /\!\!/ S^1 \to S^4/S^1= S^3$ in rational homotopy theory. \subsection{Fiberwise stabilized $\mathrm{Ext}$/$\mathrm{Cyc}$-unit of the A-type orbispace } \label{RationalUnitOnAType} In Prop. \ref{MinimalDGCAlgebraModelForATypeOrbispace} we described the rationalization of the unit $\eta_{S^4 /\!\!/ S^1}$ of the $\mathrm{Ext}$/$\mathrm{Cyc}$-adjunction (Theorem \ref{GCycExtAdjunction}) on the A-type orbispace of the 4-sphere (Def. \ref{ATypeOrbispaceOf4Sphere}). In the rational approximation, we may regard this map as lying over the classifying space $B^2 S^1 \simeq B^3 \mathbb{Z} \simeq_{\mathbb{Q}} B^3 \mathbb{Q} \simeq_{\mathbb{Q}} S^3$ and discuss its \emph{fiberwise stabilization} (Prop. \ref{AdjunctionStabilization}) in rational parametrized stable homotopy theory (Theorem \ref{RationalParameterizedSpectradgModel}). \medskip The main result of this section is Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}, which states that the fiberwise stabilization of the A-type orbispace contains two summands of twisted connective K-theory, and characterizes lifts through the fiberwise stabilization adjunction unit in terms of lifting from 6-truncated to untruncated twisted K-theory (cf. Ex. \ref{CyclificationOf4SphereReceives6TruncationOfTwistedK}). We interpret these lifts as being the rational image of gauge enhancement of M-branes in Sec. \ref{TheMechanism} below. \medskip The next result appears in \cite{RS}. We spell out the proof in some detail, since we will need certain details in the proof of our main Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}. 
In order to make certain features more apparent, we use a different naming convention for algebra generators than is used in \cite{RS}: $$ \begin{tabular}{c||ccccccc} \cite{RS}: & $a$ & 1 & $c_{2n}$ & $c_{2n+1}$ & $e$ & $\gamma_{2n}$ & $\gamma_{2n+1}$ \\ \hline here: & $h_3$ & $\widetilde\omega_0\vphantom{\Big(}$ & $\omega^L_{2n+2}$ & $\omega^R_{2n+4}$ & $\omega_2$ & $\widetilde\omega_{2n}$ & $\omega_{2n}$ \end{tabular} $$ \begin{prop}[Minimal DG-module for fiberwise stabilization of the A-type orbispace] \label{MinimalDGModels} We have the following table of minimal DG-module models (Def. \ref{MinimalDGModule}) describing fiberwise stabilizations of the spaces \eqref{QuotientOfS4ByS1OverS3} over the 3-sphere: \vspace{.3cm} \begin{center} \begin{tabular}{|c||c|c|} \hline {\bf Fibration} & {\begin{tabular}{c} Vector space underlying \\ minimal DG-model \end{tabular}} & {\begin{tabular}{c} Differential of \\ minimal DG-model \end{tabular}} \\ \hline \hline $ \raisebox{20pt}{ \xymatrix{ S^0 = \left( S^4\right)^{S^1} \ar[d] \\ S^3 }} $ & $ \mathbb{Q}[h_3] \otimes \left\langle \omega^{L}_{2 p}, \omega^R_{2 p } \,\vert\, p \in \mathbb{N} \right\rangle $ & $ d \;:\; \left\{ \begin{aligned} \omega^L_0 & \mapsto 0 && \multirow{2}{*}{ \bigg\}\,\footnotesize $(\mathrm{ku} /\!\!/ BS^1)$ } \\ \omega^L_{2p + 2} & \mapsto h_3 \otimes \omega^L_{2 p } \\ \omega^R_0 & \mapsto 0 && \multirow{2}{*}{ \bigg\}\,\footnotesize $(\mathrm{ku} /\!\!/ BS^1)$ } \\ \omega^R_{2p + 2} & \mapsto h_3 \otimes \omega^R_{2p} \end{aligned} \right.
$ \\ \hline $ \raisebox{20pt}{ \xymatrix{ S^4 \ar[d] \\ S^3 }} $ & $ \mathbb{Q}[h_3] \otimes \left\langle \widetilde\omega_{2 p}, \omega_{2 p+ 4} \,\vert\, p \in \mathbb{N} \right\rangle $ & $ d \;:\; \left\{ \begin{aligned} \widetilde\omega_0 & \mapsto 0 && \multirow{2}{*}{ \bigg\}\,\footnotesize $(\mathrm{ku} /\!\!/ BS^1)$ } \\ \widetilde\omega_{2p + 2} & \mapsto h_3 \otimes \widetilde\omega_{2 p } \\ \omega_4 & \mapsto 0 && \multirow{2}{*}{ \bigg\}\,\footnotesize $(\Sigma^4 \mathrm{ku} /\!\!/ BS^1)$ } \\ \omega_{2p + 6} & \mapsto h_3 \otimes \omega_{2p+4} \end{aligned} \right. $ \\ \hline $ \raisebox{20pt}{ \xymatrix{ S^4/\!\!/ S^1 \ar[d] \\ S^3 }} $ & $ \mathbb{Q}[h_3, \omega_2] \otimes \left\langle \widetilde\omega_{2 p}, \omega_{2 p + 4} \,\vert\, p \in \mathbb{N} \right\rangle $ & $ d \;:\; \left\{ \begin{aligned} \widetilde\omega_0 & \mapsto 0 && \multirow{2}{*}{\bigg\}\,\footnotesize $(\mathrm{ku} /\!\!/ BS^1)$ } \\ \widetilde\omega_{2p + 2} & \mapsto h_3 \otimes \widetilde\omega_{2 p } \\ \omega_2 & \mapsto 0 && \multirow{3}{*}{\Bigg\}\, \footnotesize $(\Sigma^2 \mathrm{ku} /\!\!/ BS^1)$ } \\ \omega_4 & \mapsto h_3 \wedge \omega_2 \otimes \widetilde \omega_0 \\ \omega_{2p + 6} & \mapsto h_3 \otimes \omega_{2p + 4} \end{aligned} \right. $ \\ \hline \end{tabular} \end{center} \end{prop} \vspace{1mm} \noindent Beware of the special placement of the element $\omega_2$ in the last line, as a generator of (the graded vector space underlying) a whole graded-commutative algebra. On the far right of the table we are highlighting that there are two sequences of differentials in each case, differing only in the degrees in which they start, and that each is of the same form as those for the minimal DG-model of the labelled shifted twisted K-theory spectrum (Lemma \ref{TwistedKModel}). Observe, moreover, that the second sequence in the last line really starts with the element $\omega_2 \otimes \widetilde \omega_0$.
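The pattern of these differential sequences can be made more vivid by a heuristic rewriting (not needed in what follows): assembling one sequence of generators into a single formal even-degree sum turns its differential into the $h_3$-twisted differential, which is exactly the shape of the rational twisted K-theory model of Lemma \ref{TwistedKModel}.

```latex
% Heuristic rewriting (not used in the proofs): collect one sequence of
% generators, say the "left" sequence of the first table row, into a formal sum.
$$
  \omega \;:=\; \sum_{p \in \mathbb{N}} \omega^L_{2p}
  \qquad \Longrightarrow \qquad
  d\,\omega
    \;=\; \sum_{p \in \mathbb{N}} h_3 \otimes \omega^L_{2p}
    \;=\; h_3 \otimes \omega \,,
$$
% so \omega is a single cocycle for the twisted differential d - h_3 \otimes (-),
% the rational avatar of the twist in the labelled spectrum ku // BS^1.
```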
\begin{proof} To determine the minimal DG-models, in each case we use that: \begin{itemize} \item by the particular form of the minimal DG-algebra model of $S^3$ (Ex. \ref{MinimalDgcAlgebraModelFor3Sphere}), the minimal DG-modules in question are necessarily \emph{free} modules over $\mathbb{Q}[h_3]$; and \item according to \eqref{FormulaFiberwiseSuspensionSpectra}, the cochain cohomology of the minimal DG-module must coincide with the rational cohomology of the total space of the corresponding fibration. \end{itemize} In all present cases of interest, these two constraints have a unique solution. We begin by adjoining additional closed generators $\omega_k$ to capture the cohomology of the total space. But by the free module structure, this also makes the element $h_3 \otimes \omega_k$ appear, which must be killed off in cohomology. To do this, we introduce new generators with prescribed differential, which produce new spurious elements that need to be killed off, and so on. Explicitly, we have: \item {\bf (i)} We start with the case $S^0 \to S^3$. Write the 0-sphere as the disjoint union of a \lq\lq left'' and a \lq\lq right'' point $$ S^0 = \ast^L \coprod \ast^R \,. $$ In cohomology, the map $S^0 \xrightarrow{\;\;\pi\;\;} S^3$ is $$ \xymatrix@C=3pt{ H^\bullet(S^0, \mathbb{Q}) = \left\langle [\omega^L_0], [\omega^R_0] \right\rangle \ar@{<-}[d]^{\pi^\ast} &&& [\omega_0^L] + [\omega_0^R] && 0 \\ H^\bullet(S^3, \mathbb{Q}) = \left\langle [1], [h_3] \right\rangle &&& [1] \ar@{|->}[u] && [h_3].
\ar@{|->}[u] } $$ Thus, if the DG-model for the fiberwise stabilization of $S^0$ over $S^3$ is to be of the form $$ \xymatrix@C=3pt@R=1.6em{ \left\langle 1, h_3\right\rangle \otimes \left\langle \omega_0^L, \omega_0^R, \cdots \right\rangle &&& 1 \otimes ( \omega^L_0 + \omega^R_0 ) && h_3 \otimes ( \omega^L_0 + \omega^R_0 ) \\ \underset{ =\mathbb{Q}[h_3] }{ \underbrace{ \left\langle 1, h_3\right\rangle }} \ar[u]_{\pi^\ast} &&& 1 \ar@{|->}[u] && h_3 \ar@{|->}[u] } $$ with $$ d 1 = 0, \qquad d h_3 = 0 , \qquad d \omega_0^{L/R} = 0 \,, $$ then arguing by induction proves the claim; firstly, there must be additional generators $\omega_2^{L/R}$ in order to remove the elements $h_3 \otimes \omega_0^{L/R}$ from cohomology: $$ d \omega_2^{L/R} = h_3 \otimes \omega_0^{L/R} \,. $$ But this means that the $h_3 \otimes \omega_2^{L/R}$ are closed, since $$ \begin{aligned} d\big( h_3 \otimes \omega_2^{L/R} \big) & = \underset{ = 0 }{ \underbrace{ (d h_3) }} \otimes \omega^{L/R}_2 - h_3 \otimes d \omega^{L/R}_2 \\ & = - \underset{ = 0 }{ \underbrace{ h_3 \wedge h_3 }} \otimes \omega^{L/R}_0\;. \end{aligned} $$ In order to remove these elements from cohomology, we need to introduce new elements $\omega_4^{L/R}$ with differential $$ d \omega^{L/R}_4 = h_3 \otimes \omega^{L/R}_2 \,, $$ and so on. \item {\bf (ii)} We now consider the case $S^4 \xrightarrow{\;\;\pi\;\;} S^3$, which in cohomology is the assignment $$ \xymatrix@C=3pt{ H^\bullet(S^4, \mathbb{Q}) = \left\langle [\widetilde\omega_0], [\omega_4] \right\rangle \ar@{<-}[d]^{\pi^\ast} &&& [\widetilde\omega_0] && 0 \\ H^\bullet(S^3, \mathbb{Q}) = \left\langle [1], [h_3] \right\rangle &&& [1] \ar@{|->}[u] && [h_3]. 
\ar@{|->}[u] } $$ Hence, if the minimal DG-model is to be of the form $$ \xymatrix@C=3pt@R=1.5em{ \left\langle 1, h_3\right\rangle \otimes \left\langle \widetilde\omega_0, \omega_4, \cdots \right\rangle &&& 1 \otimes \widetilde\omega_0 && h_3 \otimes \widetilde\omega_0 \\ \left\langle 1, h_3\right\rangle \ar[u]_{\pi^\ast} &&& 1 \ar@{|->}[u] && h_3 \ar@{|->}[u] } $$ with $$ d 1 = 0, \qquad d \widetilde\omega_0 = 0, \qquad d \omega_4 = 0\;, $$ then there need to be elements $\widetilde\omega_2$ and $\omega_6$ that remove $h_3 \otimes \widetilde\omega_0$ and $h_3 \otimes \omega_4$ from cohomology via $$ d \widetilde\omega_2 = h_3 \otimes \widetilde\omega_0 \,, \qquad\quad d \omega_6 = h_3 \otimes \omega_4 \,. $$ But this implies that $$ d( h_3 \otimes \widetilde\omega_2 ) = 0 \,, \qquad\quad d( h_3 \otimes \omega_6 ) = 0, $$ so that we must introduce additional generators $\widetilde\omega_4$ and $\omega_8$ to make these elements exact. But then $h_3\otimes \widetilde\omega_4$ and $h_3\otimes \omega_8$ are closed, so that we must add $\widetilde\omega_6$ and $\omega_{10}$ to remove them from cohomology. The result follows by induction. \item {\bf (iii)} Now consider the main case of interest, the A-type orbispace $S^4 /\!\!/ S^1$. The cohomology of the total space was determined in Lemma \ref{HomotopyGroupsOfS4OverS1}, so that in cohomology the map $S^4/\!\!/ S^1 \to S^3$ is of the form $$ \xymatrix@C=3pt{ H^\bullet(S^4 /\!\!/ S^1 , \mathbb{Q}) = \langle [\widetilde \omega_0], [\omega_2], [\omega_4], [\omega_2 \wedge \omega_2], \cdots \rangle \ar@{<-}[d]^{\pi^\ast} &&& [\widetilde\omega_0] && 0 \\ H^\bullet(S^3, \mathbb{Q}) = \left\langle [1], [h_3] \right\rangle &&& [1] \ar@{|->}[u] && [h_3] \ar@{|->}[u] } $$ First consider $\widetilde \omega_0$ and $\omega_2 (\equiv \omega_2\otimes \widetilde\omega_0)$: analogously to {\bf (ii)} above, these induce sequences of additional generators $\widetilde \omega_{2p}$ and $\omega_{2p+4}$ with differentials as claimed.
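As a quick sanity check (spelled out here for convenience, using only the differentials just introduced for the generators $\widetilde\omega_{2p}$ and $\omega_{2p+4}$), the mixed differential on $\omega_4$ is compatible with $d^2 = 0$ precisely because $h_3$ is of odd degree:

```latex
$$
  d^2 \omega_4
    \;=\; d\big( h_3 \wedge \omega_2 \otimes \widetilde\omega_0 \big)
    \;=\; 0\,,
  \qquad
  d^2 \omega_6
    \;=\; d\big( h_3 \otimes \omega_4 \big)
    \;=\; - h_3 \wedge h_3 \wedge \omega_2 \otimes \widetilde\omega_0
    \;=\; 0\,,
$$
% using d h_3 = d\omega_2 = d\widetilde\omega_0 = 0 and h_3 \wedge h_3 = 0.
```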
But adding in these generators already implies the existence of a further cohomology class in degree four, exhibited by the cocycle \begin{equation} \label{SecretDegree4Class} \widehat \omega_4 := 1 \otimes \omega_4 - \omega_2 \otimes \widetilde \omega_2 \,. \end{equation} For degree reasons, the only potential primitives of $\widehat{\omega}_4$ are non-zero multiples of $h_3 \otimes \widetilde\omega_0$, which is closed. Thus we have already found the correct cohomology in dimensions $\leq 5$. Extending in powers of $\omega_2$, we find that the cocycles \[ \omega_2^{\wedge(k+2)}\otimes \widetilde\omega_0\;\;\mbox{ and }\;\;\omega_2^{\wedge k}\otimes \widehat{\omega}_4 \] recover the correct $(2k+4)$-dimensional cohomology of $S^4/\!\!/ S^1$, and no new non-trivial cohomology classes arise in odd degrees. This completes the proof. \end{proof} At this point, let us pause to provide some intuition for what is going on in Prop. \ref{MinimalDGModels}: \begin{remark}[Interpretation of fiberwise stabilization of A-type orbispace] \label{InterpretationOfFiberwiseSuspensionOfATypeOrbispace} Due to the rational homotopy equivalence $ S^3 \simeq_{\mathrm{whe},\mathbb{Q}} B^2 S^1 $, the homotopy fiber of any point inclusion $\ast \to S^3$ is, rationally, the classifying space $B S^1$ \eqref{ClassifyingSpace}: $$ \mathrm{hofib}\big( \xymatrix{ \ast \ar[r] & S^3 } \big) \;\simeq_{\mathrm{whe},\mathbb{Q}}\; B S^1 \,. $$ Accordingly, the homotopy fiber of $S^0 \to S^3$ is, rationally, the disjoint union of two copies of $B S^1$: $$ \mathrm{hofib}\Big( \! \xymatrix{ \big(S^4\big)^{S^1} = \ast\displaystyle\coprod\ast \ar[r] & S^3 } \! \Big) \;\simeq_{\mathrm{whe},\mathbb{Q}}\; B S^1 \coprod B S^1 \,. $$ Forming fiberwise suspension spectra (Prop.
\ref{AdjunctionStabilization}) really means that we stabilize these homotopy fibers, so that the fiberwise suspension spectrum of $S^0 \to S^3$ is a parametrized spectrum whose fiber over any point is $$ \Sigma^\infty_+ B S^1 \oplus \Sigma^\infty_+ B S^1 \;\simeq_{\mathrm{swhe}, \mathbb{Q}}\; \mathrm{ku} \oplus \mathrm{ku} $$ (cf. Ex. \ref{RationalSnaithTheorem}). Hence, the fiberwise suspension spectrum $\Sigma^\infty_{+, S^3} S^0$ is rationally equivalent to the direct sum of two copies of twisted connective K-theory, which by comparison in Lemma \ref{TwistedKModel} is precisely what item {\bf (i)} in Prop. \ref{MinimalDGModels} asserts. The second item in Prop. \ref{MinimalDGModels} can also be heuristically understood in similar terms: as the cartoon picture \begin{center} \includegraphics[width=.4\textwidth]{pic1-circle-action} \end{center} for the suspended Hopf action from Def. \ref{ATypeOrbispaceOf4Sphere} indicates, the homotopy fiber is now some mixture of the two copies of $B S^1$ attached to the fixed points, and a copy of $S^1$ attached to all the other points. Prop. \ref{dgModelForS0Inclusion} below shows that, accordingly, there is one unshifted copy of $\Sigma^\infty_{+} B S^1$ in the fiber spectrum of $\Sigma^\infty_{+,S^3} S^4$ that pulls back diagonally into the direct sum of the suspension spectra associated with the two fixed points. \end{remark} We record the following useful consequences of Prop. \ref{MinimalDGModels}: \begin{lemma}[Comparison map between DG-module models of A-type orbispace] \label{ComparisonMapBetweenModelsForFiberwiseStabilizationOfATypeOrbispace} A quasi-isomorphism of DG-modules over $\mathbb{Q}[h_3]$ from the minimal DG-model of $\Sigma^\infty_{+,S^3}\left( S^4 /\!\!/ S^1 \right)$ (Prop. \ref{MinimalDGModels}) to the (non-minimal) DG-model underlying the DG-algebra model of Prop.
\ref{MinimalDGCAlgebraModelForATypeOrbispace} is determined on generators as follows: \begin{gather} \label{ComparisonMapBetweenModels} \mathpalette\mathclapinternal{\raisebox{70pt}{\xymatrix@R=-2pt{ \mathbb{Q}[\omega_2,\omega_4, h_7]\Bigg/ \left( {\begin{aligned} d \omega_2 & = 0 \\ d \omega_4 & = 0 \\ d h_7 & = - \tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned}} \right) & \mathbb{Q}[h_3, \omega_2] \otimes \langle \widetilde \omega_{2p}, \omega_{2p + 4} \rangle\Bigg/ \left( {\begin{aligned} d \widetilde \omega_0 & = 0 \\ d \widetilde \omega_{2p + 2} & = h_3 \otimes \widetilde \omega_{2 p} \\ d \omega_2 & = 0 \\ d \omega_4 & = h_3 \wedge \omega_2 \otimes \widetilde \omega_0 \\ d \omega_{2 p+ 6} & = h_3 \otimes \omega_{2p + 4} \end{aligned}} \right) \ar[l]_-{ \simeq_{\mathrm{qi}} } \\ \\ 0 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{h_3\vphantom{\widetilde \omega_{2p+2}}} \\ 1 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{\widetilde \omega_0\vphantom{\widetilde \omega_{2p+2}}} \\ 0 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{\widetilde \omega_{2p+2}} \\ \omega_2 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{\omega_2\vphantom{\widetilde \omega_{2p+2}}} \\ \omega_4 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{\omega_4\vphantom{\widetilde \omega_{2p+2}}} \\ 0 \ar@{<-|}[r] & \mathpalette\mathrlapinternal{\omega_{2p+6}\vphantom{\widetilde \omega_{2p+2}}} }}} \end{gather} where both wedge products as well as tensor products (including powers of $\omega_2$) on the right are sent to wedge products on the left. \end{lemma} \begin{proof} The only non-evident point to note is that this map is indeed an isomorphism on cohomology in degree 4. But this follows from \eqref{SecretDegree4Class}, since $\widehat{\omega}_4\mapsto \omega_4$. 
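Explicitly (a routine check, recorded here for convenience), the class $\widehat{\omega}_4$ of \eqref{SecretDegree4Class} is closed and is sent to the generator $\omega_4$:

```latex
$$
  d\,\widehat{\omega}_4
    \;=\; d\big( 1 \otimes \omega_4 \big) - d\big( \omega_2 \otimes \widetilde\omega_2 \big)
    \;=\; h_3 \wedge \omega_2 \otimes \widetilde\omega_0
      \;-\; \omega_2 \wedge h_3 \otimes \widetilde\omega_0
    \;=\; 0\,,
$$
% since \omega_2 has even degree, so that \omega_2 \wedge h_3 = h_3 \wedge \omega_2;
% while, using \widetilde\omega_0 \mapsto 1 and \widetilde\omega_2 \mapsto 0 from
% \eqref{ComparisonMapBetweenModels},
$$
  \widehat{\omega}_4
    \;=\; 1 \otimes \omega_4 - \omega_2 \otimes \widetilde\omega_2
    \;\longmapsto\; \omega_4 - \omega_2 \wedge 0
    \;=\; \omega_4 \,.
$$
```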
\end{proof} \begin{prop} [DG-model for $S^0$-$S^3$-$S^4$ system] \label{dgModelForS0Inclusion} The commutative diagram \eqref{QuotientOfS4ByS1OverS3} $$ \xymatrix@R=1em{ S^0 \ar[dr] \; \ar@{^{(}->}[rr] && S^4 \ar[dl] \\ & S^3 } $$ is represented on rational fiberwise suspension spectra in terms of the DG-modules of Prop. \ref{MinimalDGModels} by the commuting diagram \begin{gather*} \mathpalette\mathclapinternal{ \xymatrix@C=3.3em{ \left(\hspace{-1mm} {\begin{array}{l} d \omega^{L/R}_0 = 0 \\ d \omega^{L/R}_{2p + 2} = h_3\otimes \omega^{L/R}_{2p} \end{array}} \hspace{-1mm} \right) &&&& \left(\hspace{-1mm} {\begin{array}{l} d \widetilde{\omega}_{0} = 0\,, d \widetilde{\omega}_{2p+2} = h_3 \otimes \widetilde{\omega}_{2p} \\ d \omega_{4} = 0\,, d \omega_{2p+6} = h_3 \otimes \omega_{2p+4} \end{array}} \hspace{-1mm} \right) \ar[llll]_-{\tiny \left(\hspace{-2mm} {\begin{array}{l} \;\;\;\;\widetilde{\omega}_{2p} \mapsto (\omega^{L}_{2p} + \omega^R_{2p}) \\ \omega_{2p + 4} \mapsto 0 \end{array}} \hspace{-2mm} \right) } \\ && \left(\hspace{-1mm} {\begin{array}{l} d 1 =0 \\ d h_3 = 0 \end{array}} \hspace{-1mm} \right) \ar@{->}[urr]_{\hspace{1cm}\tiny \left( \hspace{-2mm} \begin{array}{l} \;\; 1 \mapsto 1 \otimes \widetilde{\omega}_0 \\ h_3 \mapsto h_3 \otimes \widetilde{\omega}_0 \end{array} \hspace{-2mm}\right) } \ar@{->}[ull]^{\hspace{-2.4cm}\tiny \left( \hspace{-2mm} \begin{array}{l} \;\; 1 \mapsto 1 \otimes ( \omega^L_0 + \omega^R_0 ) \\ h_3 \mapsto h_3 \otimes ( \omega^L_0 + \omega^R_0 ) \end{array} \hspace{-2mm}\right) } }\!\!.} \end{gather*} \end{prop} \begin{proof} We know that the various cohomology generators are mapped as follows: $$ \xymatrix@R=1.6em{ 1 \otimes ([\omega^L_0] + [\omega^R_0]) && 1 \otimes [\widetilde{\omega}_0] \ar@{|->}[ll] \\ & [1] \ar@{|->}[ur] \ar@{|->}[ul] } $$ This implies the statement along the lines of the proof of Prop. \ref{MinimalDGModels}. 
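The compatibility of the top map with the differentials of Prop. \ref{MinimalDGModels}, which underlies this conclusion, can be checked directly on generators (a routine verification, spelled out here for convenience):

```latex
$$
  d\,\widetilde\omega_{2p+2}
    \;=\; h_3 \otimes \widetilde\omega_{2p}
    \;\longmapsto\; h_3 \otimes \big( \omega^L_{2p} + \omega^R_{2p} \big)
    \;=\; d\big( \omega^L_{2p+2} + \omega^R_{2p+2} \big)\,,
$$
$$
  d\,\omega_{2p+6}
    \;=\; h_3 \otimes \omega_{2p+4}
    \;\longmapsto\; 0
    \;=\; d\,(0)\,,
  \qquad
  d\,\omega_4 \;=\; 0 \;\longmapsto\; 0\,,
$$
% consistent with the stated assignments
% \widetilde\omega_{2p} \mapsto \omega^L_{2p} + \omega^R_{2p} and \omega_{2p+4} \mapsto 0.
```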
\end{proof} So far, we have shown that copies of rational twisted K-theory appear as summands in the rational fiberwise stabilization of the A-type orbispace. Our next aim is to analyze how the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction unit compares these copies to the copy of rational 6-truncated twisted K-theory to be found inside the rational cyclification of the 4-sphere (Ex. \ref{CyclificationOf4SphereReceives6TruncationOfTwistedK}). For this, we will need to know how the adjunction unit is represented in terms of minimal DG-modules: \begin{prop}[Fiberwise stabilization of $\mathrm{Ext}/\mathrm{Cyc}$-unit at A-type orbispace via minimal DG-models] \label{FiberwiseStabilizationOfUnitOnMinimalModels} The fiberwise stabilization of the component of the $\mathrm{Ext}$/$\mathrm{Cyc}$-unit at the A-type orbispace of the 4-sphere $$ \xymatrix{ \Sigma^\infty_{+,S^3} \left( S^4/\!\!/ S^1 \right) \ar[rr]^-{ \Sigma^\infty_{+,S^3}(\eta) } && \Sigma^\infty_{+,S^3} \mathrm{Cyc}(S^4) } $$ is represented on minimal DG-modules (Prop. \ref{MinimaldgModuleForFiberwiseStabilisationOfCyclicSpaceOf4Sphere} and Prop. \ref{MinimalDGModels}) by a homomorphism of DG-modules over $\mathbb{Q}[h_3]$, shown as a dotted map in \eqref{TheMap}, which has the following properties: \item \hypertarget{FirstThreeGeneratorsAreMappedCorrectly}{{\bf (i)}} The generators $\omega_2$, $\omega_4$ and $\omega_6$ from Prop. \ref{MinimaldgModuleForFiberwiseStabilisationOfCyclicSpaceOf4Sphere} are sent to the generators of the same name in Prop.
\ref{MinimalDGModels}: \begin{equation} \label{AdjunctionUnitFiberwiseStabilizedOnATypeOrbispace} \raisebox{15pt}{ \xymatrix@R=.5pt{ \omega_2 \otimes \tilde \omega_0 \ar@{<-|}[rr]^-{ \Sigma^\infty_{+,S^3}(\eta)^\ast } && \omega_2 \\ 1 \otimes \omega_4 \ar@{<-|}[rr]^-{ } && \omega_4 \\ 1 \otimes \omega_6 \ar@{<-|}[rr]^-{ } && \omega_6 } } \end{equation} \item \hypertarget{NotInImage}{{\bf (ii)}} The elements $$ 1 \otimes \omega_{2p + 8} \,,\;\; p\in \mathbb{N} $$ are \emph{not} in its image. \item \hypertarget{RelativeCellInclusion}{{\bf (iii)}} It is a minimal relative DG-module inclusion (Def. \ref{MinimalDGModule}). \end{prop} \begin{proof} A general representative for $\Sigma^\infty_{+,S^3}(\eta)$ on minimal DG-models is a dotted morphism that makes the following diagram commute \emph{up to homotopy}, but we claim that a representative with the stated properties exists that makes this diagram even \emph{strictly} commutative: \begin{gather} \label{TheMap} \mathpalette\mathclapinternal{ \hspace{0cm} \scalebox{.85}{ \xymatrix@R=2em@C=-2em{ \Sigma^\infty_{+,S^3}\left( S^4 /\!\!/ S^1 \right) \ar[r]^-{ \Sigma^\infty_{+,S^3}\left(\eta_{S^4 /\!\!/ S^1}\right) } & \Sigma^\infty_{+,S^3} \mathrm{Cyc}\,\mathrm{Ext}\left( S^4 /\!\!/ S^1 \right) \ar[r]^-{\Sigma^\infty_{S^3} \mathrm{Cyc}(\kappa)}_-{\simeq_{\mathrm{whe}}} & \Sigma^\infty_{+,S^3} \mathrm{Cyc}(S^4) \\ \mathbb{Q}[\omega_2,\omega_4, h_7]\Bigg/\! \left( {\begin{aligned} d \omega_2 & = 0 \\ d \omega_4 & = 0 \\ d h_7 & = - \tfrac{1}{2} \omega_4 \wedge \omega_4 \end{aligned}} \right) && \mathbb{Q}[ h_3, h_7, \omega_2, \omega_4, \omega_6 ] \Bigg/\! 
\left( {\begin{aligned} d h_3 & = 0 \\ d h_7 & = -\tfrac{1}{2} \omega_4 \wedge \omega_4 + \omega_2 \wedge \omega_6 \\ d \omega_2 & = 0 \\ d \omega_4 & = h_3 \wedge \omega_2 \\ d \omega_6 & = h_3 \wedge \omega_4 \end{aligned}} \right) \ar[ll]|-{\footnotesize \begin{aligned} h_3 & \mapsto 0 \\ h_7 & \mapsto h_7 \\ \omega_2 & \mapsto \omega_2 \\ \omega_4 & \mapsto \omega_4 \\ \omega_6 & \mapsto 0 \end{aligned} } \\ \\ \\ \\ \mathbb{Q}[h_3, \omega_2] \otimes \langle \widetilde \omega_{2p}, \omega_{2p + 4} \rangle\Bigg/\! \left( {\begin{aligned} d h_3 & = 0 \\ d \widetilde \omega_0 & = 0 \\ d \widetilde \omega_{2p + 2} & = h_3 \otimes \widetilde \omega_{2 p} \\ d \omega_2 & = 0 \\ d \omega_4 & = h_3 \wedge \omega_2 \otimes \widetilde \omega_0 \\ d \omega_{2 p+ 6} & = h_3 \otimes \omega_{2p + 4} \end{aligned}} \right) \ar@{-> }[uuuu]^-{\simeq_{\mathrm{qi}}}_-{\footnotesize \begin{aligned} h_3 & \mapsto 0 \\ \widetilde \omega_0 & \mapsto 1 \\ \widetilde \omega_{2p + 2} & \mapsto 0 \\ \omega_2 & \mapsto \omega_2 \\ \omega_4 & \mapsto \omega_4 \\ \omega_{2p+6} & \mapsto 0 \end{aligned} } && \frac{ \mbox{$\mathbb{Q}[ h_3, \omega_2, \omega_4, \omega_6 ]$} }{ \mbox{$\left( \omega_2 \wedge \omega_6 -\tfrac{1}{2}\omega_4 \wedge \omega_4 \right)$} }\Bigg/\! \left( {\begin{aligned} d h_3 & = 0 \\ d \omega_2 & = 0 \\ d \omega_4 & = h_3 \wedge \omega_2 \\ d \omega_6 & = h_3 \wedge \omega_4 \end{aligned}} \right). \ar@{-> }[uuuu]^-{\simeq_{\mathrm{qi}}}_-{\footnotesize \begin{aligned} h_3 & \mapsto h_3 \\ \omega_2 & \mapsto \omega_2 \\ \omega_4 & \mapsto \omega_4 \\ \omega_6 & \mapsto \omega_6 \end{aligned} } \ar@{..>}[ll]|-{\footnotesize \color{gray} \begin{aligned} h_3 & \mapsto h_3 \\ \omega_2 & \mapsto \omega_2 \otimes \widetilde \omega_0 \\ \omega_4 & \mapsto 1 \otimes \omega_4 \\ \omega_6 & \mapsto 1 \otimes \omega_6 \\ &\quad \vdots \end{aligned} } } }} \end{gather} Here the top horizontal morphism is the map from Prop. 
\ref{MinimalDGCAlgebraModelForATypeOrbispace}, regarded as a map between underlying DG-modules, while in the bottom row we have the corresponding minimal models. The left-hand vertical morphism is \eqref{ComparisonMapBetweenModels} from Lemma \ref{ComparisonMapBetweenModelsForFiberwiseStabilizationOfATypeOrbispace}, while the right-hand vertical morphism is provided by Prop. \ref{MinimaldgModuleForFiberwiseStabilisationOfCyclicSpaceOf4Sphere}. Firstly, we show that if such a dotted morphism exists, then it must satisfy (\hyperlink{FirstThreeGeneratorsAreMappedCorrectly}{i}): To start with, the dotted morphism necessarily sends $h_3 \mapsto h_3$, since it is a homomorphism of free $\mathbb{Q}[h_3]$-modules. Next, in order for the underlying linear maps to commute, the dotted morphism needs to send $$ \omega_2 \mapsto \omega_2 \otimes \widetilde \omega_0 + \cdots \,, $$ where the ellipsis indicates a term of degree 2 that vanishes under the left vertical map. The only possibility for such a term is a scalar multiple of $1 \otimes \widetilde \omega_2$, but then the respect for the differential $$ \xymatrix@R=1.5em{ c \, h_3 \otimes \widetilde \omega_0 && 0 \ar@{|->}[ll] \\ \omega_2 \otimes \widetilde \omega_0 + c \, 1 \otimes \widetilde \omega_2 \ar[u]^-{d} && \omega_2 \ar@{|->}[u]_-d \ar@{|->}[ll] } $$ enforces $c = 0$. A similar argument applies to $\omega_4$: we must have $$ \omega_4 \mapsto 1 \otimes \omega_4 + \cdots $$ where the ellipsis indicates a term of degree 4 that vanishes under the left vertical map, which must be of the form $c_1 \, \omega_2 \otimes \widetilde \omega_2 + c_2 \, 1 \otimes \widetilde \omega_4$ for some $c_1, c_2 \in \mathbb{R}$.
But then respect for the differentials $$ \xymatrix{ h_3 \wedge \omega_2 \otimes \widetilde \omega_0 + c_1 \, \omega_2 \wedge h_3 \otimes \widetilde \omega_0 + c_2 \, h_3 \otimes \widetilde \omega_2 && h_3 \wedge \omega_2 \ar@{|->}[ll] \\ 1 \otimes \omega_4 + c_1 \, \omega_2 \otimes \widetilde \omega_2 + c_2 \, 1 \otimes \widetilde \omega_4 \ar@{|->}[u]^d && \omega_4 \ar@{|->}[u]_d \ar@{|->}[ll] } $$ implies $c_1 = 0$ and $c_2 = 0$. Finally, we must have that $$ \omega_6 \mapsto 0 + \cdots, $$ where the ellipsis is a term of degree 6 that vanishes under the left vertical map. Hence $$ \omega_6 \mapsto c_1 \, 1 \otimes \omega_6 + c_2 \, 1 \otimes \widetilde \omega_6 + c_3 \, \omega_2 \otimes \widetilde \omega_4 + c_4 \, \omega_2^{\wedge 2}\otimes \widetilde \omega_2 $$ for some $c_1, c_2, c_3, c_4 \in \mathbb{R}$. Now the respect for the differentials $$ \xymatrix{ {\begin{aligned} c_1 \, h_3 \otimes \omega_4 + c_2 \, h_3 \otimes \widetilde \omega_4 + c_3 \, \omega_2 \wedge h_3 \otimes \widetilde \omega_2 + c_4 \, \omega_2^{ \wedge 2} \wedge h_3 \otimes \widetilde \omega_0 \end{aligned}} && h_3 \wedge \omega_4 \ar@{|->}[ll] \\ {\begin{aligned} c_1 \, 1 \otimes \omega_6 + c_2 \, 1 \otimes \widetilde \omega_6 + c_3 \, \omega_2 \otimes \widetilde \omega_4 + c_4 \, \omega_2 \wedge \omega_2 \otimes \widetilde \omega_2 \end{aligned}} \ar@{|->}[u] && \omega_6 \ar@{|->}[u] \ar@{|->}[ll] } $$ implies $c_1 = 1$ and $c_2 = c_3 =c_4 = 0$. This establishes (\hyperlink{FirstThreeGeneratorsAreMappedCorrectly}{{i}}), provided that the dotted morphism actually exists. We now prove that a dotted morphism in \eqref{TheMap} does exist as claimed. Indeed, we have just seen that the images of the generators $h_3$, $\omega_2$, $\omega_4$, and $\omega_6$ are predetermined. Extending this map as a $\mathbb{Q}[h_3,\omega_2]$-module homomorphism, we have defined the dotted morphism on all terms that are at most linear in $\omega_4$ and $\omega_6$. 
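To extend to terms of higher order in $\omega_4$ and $\omega_6$, the basic computation is the following (a direct check from the differentials displayed in \eqref{TheMap}, using the relation $\tfrac{1}{2}\omega_4 \wedge \omega_4 = \omega_2 \wedge \omega_6$ and the fact that $h_3$ is of odd degree while the $\omega_{2k}$ are of even degree):
$$
d(\omega_6 \wedge \omega_4)
\;=\;
(h_3 \wedge \omega_4) \wedge \omega_4 + \omega_6 \wedge (h_3 \wedge \omega_2)
\;=\;
2\, h_3 \wedge \omega_2 \wedge \omega_6 + h_3 \wedge \omega_2 \wedge \omega_6
\;=\;
3\, h_3 \wedge \omega_2 \wedge \omega_6\,.
$$
This is the source of the coefficient $3$ appearing in the base case of the induction below.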
We must now extend this map to higher order terms in these generators---but by the relation $\tfrac{1}{2}\omega_4 \wedge \omega_4 = \omega_2 \wedge \omega_6$, which holds in the minimal DG-module in the bottom right of \eqref{TheMap}, we may consider those unique representatives which are at most linear in $\omega_4$. Hence we need only find consistent images for all elements of the form $$ (\omega_6)^{\wedge n}, \,\;\;\;\; (\omega_6)^{\wedge(n-1)} \wedge \omega_4 \phantom{AAAA} \mbox{ for $n \geq 2$ } \,. $$ We claim that there are unique coefficients $a_n, b_n \in \mathbb{N} \subset \mathbb{Q}$ such that the assignment \begin{equation} \label{ExtComp} \raisebox{12pt}{ \xymatrix@R=1pt{ b_n \, \omega_2^{\wedge (n-1)} \otimes \omega_{4n} && \mathpalette\mathrlapinternal{(\omega_6)^{\wedge(n-1)} \wedge \omega_4} \ar@{|->}[ll] \\ a_n \, \omega_2^{\wedge (n-1)} \otimes \omega_{4n+2} && \mathpalette\mathrlapinternal{(\omega_6)^{\wedge n} } \ar@{|->}[ll] } } \phantom{AAAAAAAAAAA} \mbox{for $n \geq 2$}. \end{equation} respects the differentials and makes the underlying linear maps of \eqref{TheMap} commute. We argue by induction: for the initial case $n=2$, respect for the differentials on $\omega_6 \wedge \omega_4$ means that the following square has to commute \begin{equation} \label{FirstSquare} \raisebox{20pt}{\xymatrix{ 3 h_3 \wedge \omega_2 \otimes \omega_6 && 3 h_3 \wedge \omega_2 \wedge \omega_6 \ar@{|->}[ll] \\ b_2 \, \omega_2 \otimes \omega_8 \ar@{|->}[u]^d && \omega_6 \wedge \omega_4 \,, \ar@{|->}[ll] \ar@{|->}[u]_d }} \end{equation} where the top left entry shows the image of the top right element. To see this, notice that: \vspace{-2mm} \begin{enumerate}[{\bf (a)}] \item in computing the differential on the right of \eqref{FirstSquare}, we are using the relation $\tfrac{1}{2} \omega_4 \wedge \omega_4 = \omega_2 \wedge \omega_6$ in the minimal DG-module model from Prop. 
\ref{MinimaldgModuleForFiberwiseStabilisationOfCyclicSpaceOf4Sphere}, to uniquely represent all differentials by representatives that are at most linear in $\omega_4$; and \vspace{-1mm} \item the definition \eqref{ExtComp} applies only to terms at least quadratic in $\omega_6, \omega_4$. For the image of the top right element in \eqref{FirstSquare} we instead use \eqref{AdjunctionUnitFiberwiseStabilizedOnATypeOrbispace} and respect for the $\mathbb{Q}[h_3,\omega_2]$-module structure. \end{enumerate} \vspace{-2mm} \noindent But the differential of the bottom left element in \eqref{FirstSquare} is $b_2 \, \omega_2 \wedge h_3 \otimes \omega_6$, and hence the square \eqref{FirstSquare} commutes precisely if $ b_2 = 3 $. In an analogous manner, we see that the respect for the differential on $\omega_6 \wedge \omega_6$ means that the following square has to commute: $$ \xymatrix{ 2 b_2\, h_3 \wedge \omega_2 \otimes \omega_8 && 2 h_3 \wedge \omega_6 \wedge \omega_4 \ar@{|->}[ll] \\ a_2 \, \omega_2 \otimes \omega_{10} \ar@{|->}[u]^d && \omega_6 \wedge \omega_6 \,, \ar@{|->}[ll] \ar@{|->}[u]_d } $$ where again the top left element shown is the image of the top right element, now obtained via \eqref{ExtComp}. We see that the differential on the left has this same image precisely if $$ a_2 = 2 b_2 = 6 \,. $$ For the inductive argument, we assume that the claim \eqref{ExtComp} holds for $(n-1)$, with $n \geq 3$. 
Then we need to ensure the commutativity of the squares $$ \xymatrix{ n b_n \, h_3 \wedge \omega_2^{\wedge(n-1)} \otimes \omega_{4n} && n\, h_3\wedge (\omega_6)^{\wedge(n-1)} \wedge \omega_4 \ar@{|->}[ll] \\ a_n \, \omega_2^{\wedge (n-1)} \otimes \omega_{4n+2} \ar@{|->}[u]^d && (\omega_6)^{\wedge n} \ar@{|->}[u]_d \ar@{|->}[ll] } $$ and $$ \xymatrix{ \left( 2n-1\right) a_{n-1} \, h_3 \wedge \omega_2^{\wedge (n-2)}\otimes \omega_{4n-2} && \left( 2n-1\right)\, h_3 \wedge \omega_2 \wedge (\omega_6)^{\wedge(n-1)} \ar@{|->}[ll] \\ b_n \, \omega_2^{\wedge(n-1)} \otimes \omega_{4n} \ar@{|->}[u]^d && (\omega_6)^{\wedge(n-1)} \wedge \omega_4\,. \ar@{|->}[ll] \ar@{|->}[u]_d } $$ By the inductive hypothesis, the second square above implies that $b_n =(2n-1) a_{n-1}$, whereas the first square implies that $a_n = n\cdot b_n$. By induction, this gives a map of DG-$\mathbb{Q}[h_3, \omega_2]$-modules, as per the dotted arrow in \eqref{TheMap}, via the specification \eqref{ExtComp}. Let us note for completeness that the coefficients in \eqref{ExtComp} are given recursively by \[ \frac{a_n}{a_{n-1}} = 2n^2-n, \qquad\quad b_n = \frac{a_n}{n}\phantom{AAAAA} \mbox{for $n > 2$}, \] with $a_2 =6$, $b_2 =3$. This completes the construction of the dotted morphism in \eqref{TheMap}, and it is now easy to see that the diagram commutes. By construction, it is clear that the elements $1 \otimes \omega_{2p+8}$, $p\geq 0$, are not in the image of the dotted morphism. This proves item (\hyperlink{NotInImage}{{ii}}). Finally, the construction of the dotted morphism via (\hyperlink{FirstThreeGeneratorsAreMappedCorrectly}{{i}}) and \eqref{ExtComp} is a map between minimal DG-modules (over $\mathbb{Q}[h_3]$) which is moreover manifestly an injective cell map, in that it injectively maps generators to generators. It follows that this map is a relative cell complex inclusion, in fact a minimal DG-module inclusion. This completes the proof.
\end{proof} \begin{remark} The proof of (\hyperlink{RelativeCellInclusion}{iii}) depends crucially upon the fact that the elements $\omega_2^{\wedge n}$ for varying $n$ are \emph{distinct} as generators for the minimal DG-modules appearing along the bottom of Diagram \eqref{TheMap}. Despite being a potential source of great confusion, we persist in using this notation because of the utility of being able to work \lq\lq multiplicatively in $\omega_2$'' in the argument above. \end{remark} We are now in a position to state and prove our main technical result: \begin{theorem} [K-theory \lq\lq detruncation'' by lifting against fiberwise stabilization of A-type orbispace] \label{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} Regarding the A-type orbispace of the 4-sphere (Def. \ref{ATypeOrbispaceOf4Sphere}) as fibered over the 3-sphere via the canonical map \eqref{QuotientOfS4ByS1OverS3} from Prop. \ref{SystemOfFixedPointsAndQuotientsOfATypeActionOn4Sphere}, \begin{equation} \label{ATypeOrbispaceOverB3Q} \xymatrix{ S^4 /\!\!/ S^1 \ar[r] & S^3 \ar[r]^-{\simeq_\mathbb{Q}} & B^3 \mathbb{Q}\,, } \end{equation} we have that: \noindent {\bf (i)} Rationally, the fiberwise stabilization (Prop. \ref{AdjunctionStabilization}) of \eqref{ATypeOrbispaceOverB3Q} contains a copy of the 2-fold suspension of twisted connective K-theory as a direct summand $$ \xymatrix{ \Sigma^2\, \mathrm{ku} /\!\!/ B S^1\; \ar@{^{(}->}[rr]^-{\iota} && \Sigma^\infty_{+, S^3 }\left( S^4 /\!\!/ S^1 \right) }.
$$ \noindent {\bf (ii)} A map $\phi$ of rational homotopy types over $B^3 \mathbb{Q}$ \[ \xymatrix@R=1em{ X\ar[rr]^-{\phi} \ar[dr]_-{\mu_3} &&\mathrm{Cyc}(S^4) \ar[dl] \ar[r]^-{p}& \Omega^{\infty-2}_{B^3 \mathbb{Q}} \left( \mathrm{ku} /\!\!/ B S^1\right)\langle 6\rangle \ar@/^0.5pc/[dll] \\ & B^3 \mathbb{Q} & } \] lifts from rational 6-truncated twisted K-theory to rational untruncated twisted K-theory if and only if $\phi$ admits a lift through the fiberwise stabilization of the $\mathrm{Ext}/\mathrm{Cyc}$-unit $\eta_{S^4/\!\!/ S^1}$: \[ \mathpalette\mathclapinternal{ \xymatrix@R=1.6em@C=2.5em{ && \Omega^{\infty-2}_{B^3 \mathbb{Q}}\left( \mathrm{ku} /\!\!/ B S^1\right)\; \ar@{->}[d]^-{\;\tau_6} \ar@{^{(}->}[rr]^-{\iota} & & \Omega^\infty_{S^3} \Sigma^\infty_{+,S^3} (S^4/\!\!/ S^1) \ar[dd]^{ \Omega^\infty_{S^3} \Sigma^\infty_{+,S^3}\big( \eta_{{S^4 /\!\!/ S^1}} \! \big) } \\ && \Omega^{\infty-2}_{B^3 \mathbb{Q}}\left( \mathrm{ku} /\!\!/ B S^1\right)\langle 6\rangle \\ X \ar[dr]_{\mu_3} \ar@{-->}@/^1pc/[rruu]|{\;\; \widehat{\phi} \;\;} \ar@{..>}@/^6.3pc/[rrrruu]|{ \;\;\widehat{ \mathrm{st}\circ \phi } \;\;} \ar[rr]^{\;\;\;\phi} && \mathrm{Cyc}(S^4) \ar[u]_-{\; p} \ar[r]^-{ \mathrm{st} } \ar[dl] & \Omega^\infty_{S^3} \Sigma^\infty_{+,S^3} \mathrm{Cyc}(S^4) \ar[r]^-{\simeq} & \Omega^\infty_{S^3} \Sigma^\infty_{+,S^3} \mathrm{Cyc}\,\mathrm{Ext}(S^4/\!\!/ S^1) \\ & B^3 \mathbb{Q} }} \] That is, specifying a dotted lift $\widehat{\phi}$ is equivalent to specifying a dotted lift $\widehat{\mathrm{st}\circ \phi}$, where $\mathrm{st}$ is the unit of the fiberwise stabilization adjunction. \end{theorem} \begin{proof} The rational parametrized spectra appearing on the right-hand side of the diagram have minimal DG-module models as in Prop. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere} and Prop. \ref{MinimalDGModels}. On these minimal models, the vertical map on the right-hand side is represented by the dotted morphism in \eqref{TheMap}.
The fiberwise stabilization map $\mathrm{st}$ is represented in algebra by the right-hand vertical map in \eqref{TheMap}. The claim {\bf (i)} was already established in Prop. \ref{MinimalDGModels}. To prove {\bf (ii)}, we argue as follows. Recall that by Prop. \ref{RationalInfiniteLoopSpace}, rational models for fiberwise infinite loop spaces are obtained by taking free relative algebras, which allows us to argue on module generators (equivalently, we argue on $\Sigma^\infty_{+, S^3}\dashv \Omega^\infty_{S^3}$-adjuncts). By item (\hyperlink{FirstThreeGeneratorsAreMappedCorrectly}{i}) of Prop. \ref{FiberwiseStabilizationOfUnitOnMinimalModels}, a strict lifting of $\mathrm{st}\circ \phi$ through the stabilized $\mathrm{Ext}/\mathrm{Cyc}$-unit implies a strict lifting of $\phi$ through the zig-zag morphism $\tau_6$---this is because the images of $\omega_2$, $\omega_4$ and $\omega_6$ are already specified in the rational model of $X$. Since untruncated twisted K-theory includes as a direct summand via $\iota$, the converse is also true: a strict lifting of $\phi$ through the zig-zag $\tau_6$ extends by zero to the complementary direct summand in $\Omega^\infty_{S^3}\Sigma^\infty_{+, S^3} (S^4/\!\!/ S^1)$. To complete the proof, we argue that strict lifts of $p\circ \phi$ and $\mathrm{st}_{S^3}\circ \phi$ are equivalent to up-to-homotopy lifts. Ultimately, this follows from item (\hyperlink{RelativeCellInclusion}{{\bf iii}}) in Prop. \ref{FiberwiseStabilizationOfUnitOnMinimalModels}, by using a standard argument in homotopy theory. For completeness, let us recall how this works: homotopy commutative squares in a homotopical category \begin{equation} \label{HomotopyCommSquare} \raisebox{20pt}{ \xymatrix@C=5em@R=1.5em{ X_1 \ar[r]^-{ \hat f }_>{\ }="s" \ar[d]_{p} & Y_1 \ar[d]^{q} \\ X_2 \ar[r]_-{f}^<{\ }="t" & Y_2 \ar@{=>} "s"; "t" } } \end{equation} are equivalent to morphisms in the homotopy category of the corresponding arrow category.
When working with a cofibrantly generated model category $\mathcal{C}$, the homotopy theory of the arrow category can be presented by the projective model structure on functors $\mathrm{Fun}(\Delta[1], \mathcal{C})$ (see e.g. \cite[Sec. A.2.8]{Lurie06}). General model category theory then implies that \emph{strictly} commutative squares are equivalent to homotopy commutative squares \eqref{HomotopyCommSquare} as soon as \begin{itemize} \item the source morphism $p\colon X_1 \to X_2$ is projectively cofibrant; and \item the target morphism $q\colon Y_1 \to Y_2$ is projectively fibrant. \end{itemize} In terms of maps of DG-$\mathbb{Q}[h_3]$-modules, the fibrancy condition is always satisfied, whereas a map of DG-modules is projectively cofibrant if and only if it is a cofibration between cofibrant objects. In the case at hand, observe that minimal DG-modules are cofibrant, and the dotted morphism in \eqref{TheMap} is a cofibration by Prop. \ref{FiberwiseStabilizationOfUnitOnMinimalModels} item (\hyperlink{RelativeCellInclusion}{iii}), as is the 6-truncation map of Ex. \ref{CyclificationOf4SphereReceives6TruncationOfTwistedK}. Working with our rational models, this means that up-to-homotopy lifts of $p\circ \phi$ through $\tau_6$ are equivalent to strict lifts, and likewise for lifts of $\mathrm{st}\circ \phi$ through the stabilization of the adjunction unit. This completes the proof. \end{proof} \section{Gauge enhancement of M-branes} \label{TheMechanism} In this section, we explain how Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} provides a mechanism implementing gauge enhancement of fundamental M-branes. Firstly, in Sec. \ref{With}, we explain the construction without taking local supersymmetry into account.
We present a solution to \hyperlink{FirstRational}{\bf Open Problem, rational version 1} within the framework of rational homotopy theory that explains the core of the gauge enhancement mechanism via double dimensional reduction of flux forms as in \cite{MathaiSati04}. This focuses the problem on the enhancement from 6-truncated to untruncated twisted de Rham cohomology but falls short of exhibiting the existence and uniqueness of cocycles and their lifts, which requires local supersymmetry. Then, in Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8}, we pass to rational \emph{super} homotopy theory to explain the gauge enhancement of super M-branes from first principles. To be self-contained, we first recall how super-spacetimes and super $p$-branes propagating in them are incarnated as super-cocycles in super-homotopy theory. We then explain how Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} implements gauge enhancement of fundamental M-branes under double dimensional reduction to type IIA D-branes in the rational approximation. \subsection{The mechanism without supersymmetry} \label{With} We first discuss the aspects of the gauge enhancement mechanism that are already apparent in rational homotopy theory, disregarding for the moment the crucial interaction with local supersymmetry. Since local supersymmetry is a necessary ingredient for the very existence and classification of fundamental $p$-branes and their charges, we will incorporate it into our discussion in Sec. \ref{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8} below. For the moment, however, we will simply assume that a collection of flux-like forms is provided to us on some spacetime without commenting on their origin in string or M-theory.
\medskip We take as input data a smooth manifold $Y$ (to be thought of as our 11-dimensional background spacetime), equipped with a map to the 4-sphere \begin{equation} \label{Input} \xymatrix{ Y \ar[rr]^-{ {\color{blue}(G_4, G_7)} } && S^4\,, } \end{equation} to be thought of as measuring M-brane charge in $Y$. In the approximation of rational homotopy theory (Def. \ref{ClassicalHomotopyCategories}), the datum of such a map \eqref{Input} is encoded by a differential 4-form $G_4$ and a differential 7-form $G_7$ on $Y$ satisfying the relations \begin{equation} \label{SugraTopologicalSector} (G_4,G_7) \;\sim \; \left\{ \begin{aligned} d G_4 & = 0 \\ d G_7 & = - \tfrac{1}{2} G_4 \wedge G_4 \,. \end{aligned} \right. \end{equation} This follows from the equivalence Prop. \ref{SullivanEquivalence}, using the minimal DG-algebra model for the 4-sphere from Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}. The relations \eqref{SugraTopologicalSector} have the form of the ``topological sector'' of the equations of motion in 11-dimensional supergravity for the \emph{C-field strength} $G_4$ and its dual flux $G_7$, which we have identified with a rational homotopy map to the 4-sphere (this was first highlighted in \cite[Sec. 2.5]{S-top}). \medskip Observe that the contribution of the \lq\lq dual flux'' $G_7$ can be disentangled from that of the \lq\lq C-field strength'' $G_4$ by taking the homotopy pullback along the quaternionic Hopf fibration $H_{\mathbb{H}}\colon S^7 \to S^4$. This is an ${\rm SU}(2)$-principal bundle over $S^4$, hence classified by a map $\phi_\mathbb{H}\colon S^4 \to B {\rm SU}(2)$ to the classifying space, as in \eqref{GBundleByPullback}. 
The cohomotopy cocycle \eqref{Input} decomposes along the quaternionic Hopf fibration as follows: \begin{equation} \label{Decompo} \raisebox{50pt}{ \xymatrix@R=1.2em{ \widehat Y \ar[dd] \ar[rr]^-{ \color{blue} G_7} \ar@{}[ddrr]|{\binom{\rm homotopy}{\rm pullback}} && S^7 \ar[dd]^-{ H_{\mathbb{H}} } \\ \\ Y \ar[rr]|-{ \;(G_4, G_7)\; } \ar[ddr]_-{ \color{blue} G_4 } && S^4 \ar[d]^-{ \phi_{\mathbb{H}} } \\ && B {\rm SU}(2) \ar[dl]^{ \mathrm{c_2} } \\ & { B^4 \mathbb{Q}, } }} \end{equation} where $\mathrm{c}_2 \colon B{\rm SU}(2)\to B^4 \mathbb{Q}$ is the second Chern class of the universal ${\rm SU}(2)$-bundle. We are interested in the \emph{double dimensional reduction} of these data, assuming that $Y$ is a circle fibration as in \eqref{DD}. This means that we assume $Y$ sits in a homotopy fiber sequence \begin{equation} \label{YFib} \raisebox{20pt}{ \xymatrix@R=1.3em{ *+[r]{ \color{blue} Y \simeq_{\mathrm{whe}} \mathrm{Ext}(X) } \ar[d]_{ \pi } \\ X := Y /\!\!/ S^1 \ar[dr] \\ & B S^1 \,, }} \end{equation} hence that $Y$ is an \emph{$S^1$-extension} of $X$ as in Sec. \ref{TheAdjunction}. Consequently, our cohomotopy cocycle as in \eqref{Decompo} is now exhibited equivalently as a map out of a space in the image of the $\mathrm{Ext}$-functor \eqref{GExtF}: $$ \xymatrix@R=1.4em{ {\color{blue}\mathrm{Ext}( X ) } \ar[rr]^-{ (G_4, G_7) } \ar[dr]_-{G_4} && S^4 \ar[dl]^{ c_2(\phi_\mathbb{H})} \\ & B^4 \mathbb{Q}\,. } $$ By the adjunction of Theorem \ref{GCycExtAdjunction}, maps out of the image of $\mathrm{Ext}$ are equivalent to maps into the image of the right adjoint $\mathrm{Cyc}$, where this equivalence is exhibited by first applying $\mathrm{Cyc}$ and then precomposing with the adjunction unit map $\eta_X\colon X \to \mathrm{Cyc}\,\mathrm{Ext}(X)$ (see e.g. \cite[Sec. 3]{Borceux94}).
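In formulas, the $\mathrm{Ext}/\mathrm{Cyc}$ hom-equivalence just invoked may be sketched as follows (suppressing the maps to $B S^1$ from the notation): a map $f \colon \mathrm{Ext}(X) \to A$ corresponds to the composite
$$
\xymatrix{ X \ar[rr]^-{\eta_X} && \mathrm{Cyc}\,\mathrm{Ext}(X) \ar[rr]^-{\mathrm{Cyc}(f)} && \mathrm{Cyc}(A)\,, }
$$
so that, for $f = (G_4, G_7)$ and $A = S^4$, the adjunct is the composite $\mathrm{Cyc}(G_4, G_7) \circ \eta_X$.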
In summary, this means that the cohomotopy cocycle on $Y$ (with rational components $(G_4, G_7)$) is naturally identified with a $\mathrm{Cyc}(S^4)$-valued cocycle on $X$ \begin{equation} \label{RatDDRed} \raisebox{30pt}{ \xymatrix@C=20pt@R=1.5em{ {\color{blue}X} \ar@[white]@/^1.25pc/[rrrr]^-{ \mbox{ \tiny \begin{tabular}{c} double dimensional reduction \\ of $(G_4, G_7)$ \end{tabular}} } \ar@[blue]@/^1pc/[drrrr]|-{\,{\color{blue}\widetilde{(G_4, G_7)}}\, } \ar[d]_{\color{blue}\eta_X} &&&& \\ {\color{blue}\mathrm{Cyc}}\,\mathrm{Ext}(X) \ar[rrrr]|{\; {\color{blue}\mathrm{Cyc}}(G_4,G_7)\; } \ar[drr] &&&& {\color{blue}\mathrm{Cyc}}(S^4)\;, \ar[dll] \\ && {\color{blue}\mathrm{Cyc}}(B^4 \mathbb{Q}) \ar[d] \\ && B^3 \mathbb{Q} }} \end{equation} where the map $\mathrm{Cyc}(B^4 \mathbb{Q})\to B^3 \mathbb{Q}$ is obtained using the infinite loop space structure of $B^4\mathbb{Q}$; in terms of minimal DG-algebra models it is simply the map \[ \xymatrix@R=.5pt@C=4em{ \mathrm{Cyc}(B^4 \mathbb{Q}) \ar[rr] && B^3 \mathbb{Q}\;. \\ \omega_2\phantom{\,.} && \\ \omega_3\phantom{\,.} && \omega_3 \ar@{|->}[ll] \\ \omega_4 && } \] Using the minimal DG-algebra model for $\mathrm{Cyc}(S^4)$ (Ex. \ref{MinimalDGCAlgebraModelForCyclicSpaceOfFourSphere}), we have that the $\mathrm{Ext}/\mathrm{Cyc}$-adjunct of the cohomotopy cocycle $(G_4, G_7)$ is rationally determined by a collection of differential forms on $X$ satisfying the following relations: \begin{equation} \label{DDRed} \widetilde{(G_4,G_7)} \;\;=\;\; \left\{ \begin{aligned} d H_3 & = 0\;, \phantom{AAA}\phantom{BB}\phantom{CC} d H_7 = F_2 \wedge F_6 - \tfrac{1}{2} F_4 \wedge F_4\;, \\ \\ d F_2 & = 0\;, \\ d F_4 & = H_3 \wedge F_2\;, \\ d F_6 & = H_3 \wedge F_4 \,. \end{aligned} \right. \end{equation} These differential forms can also be seen as arising from the data $(G_4, G_7)$ via the Gysin sequence \cite[Sec 4.2]{MathaiSati04}. 
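As a sanity check, the system \eqref{DDRed} is consistent with $d^2 = 0$: since $d H_3 = 0$ and $H_3$ is of odd degree (so that $H_3 \wedge H_3 = 0$), while the $F_{2k}$ are of even degree (so that they commute with all forms), we have
$$
d (d F_6) \;=\; - H_3 \wedge d F_4 \;=\; - H_3 \wedge H_3 \wedge F_2 \;=\; 0\,,
\qquad
d (d H_7) \;=\; F_2 \wedge d F_6 - d F_4 \wedge F_4 \;=\; H_3 \wedge F_2 \wedge F_4 - H_3 \wedge F_2 \wedge F_4 \;=\; 0\,,
$$
and similarly for the remaining relations.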
If we interpret $G_4$ and $G_7$ as M-brane flux forms in 11-dimensional supergravity, corresponding to the M2-brane and the M5-brane respectively, then the forms in \eqref{DDRed} are naturally interpreted as flux forms of brane species in 10-dimensional type IIA supergravity, together with their Bianchi identities. We notice that not all of the expected brane flux forms appear: we have the $\mathrm{NS1}$-brane flux $H_3$, the RR-flux forms $F_{(2p+2\leq 6)}$ for the $\mathrm{D}(2p \leq 4)$-branes, as well as the $\mathrm{NS5}$-brane flux $H_7$,\footnote{The identity $d H_7 = F_2 \wedge F_6 - \tfrac{1}{2} F_4 \wedge F_4$ for the type IIA $\mathrm{NS5}$-brane flux does not hold after fiberwise stabilization in \eqref{DDRedEnhc}---indeed, the flux form $H_7$ is part of the obstruction to completing the zig-zag truncation map $\tau_6$ in Ex. \ref{CyclificationOf4SphereReceives6TruncationOfTwistedK} to an actual homomorphism.} but the flux forms for the D6- and D8-branes are missing. \medskip So far we have merely recapitulated the mechanism of double dimensional reduction formalized via the $\mathrm{Ext}/\mathrm{Cyc}$-adjunction in rational homotopy theory \cite{FSS13,FSS16b}. We are now interested in how the collection of flux forms in \eqref{DDRed} may be naturally enhanced so as to contain flux forms $F_8$ and $F_{10}$. As discussed in Rem. \ref{TheATypeQuotientFromSpacetime}, operations on spacetime $Y$ should be accompanied by corresponding operations on the 4-sphere brane charge coefficient.
This means that we should consider the A-type circle action on the 4-sphere: \begin{equation} \label{SphereExt} \raisebox{35pt}{\xymatrix@R=1.4em{ & *+[l]{\color{blue} \mathrm{Ext}(S^4 /\!\!/ S^1) \simeq_{\mathrm{whe}} S^4} \ar[d] \\ & S^4 /\!\!/ S^1 \ar[dl] \\ B S^1 }} \end{equation} Diagram \eqref{RatDDRed} can now be cast in a more symmetric form: \begin{equation} \label{SymmetricAtLast} \raisebox{35pt}{ \xymatrix@C=25pt@R=1.5em{ {\color{blue}X} \ar[d]_-{\color{blue}\eta_{X}} \ar@[blue]@/^1pc/[drrrr]|-{\,{\color{blue}\widetilde{(G_4, G_7)}}\, } && && {\color{blue} S^4 /\!\!/ S^1} \ar[d]^{\color{blue}\eta_{{}_{S^4 /\!\!/ S^1}}} \\ {\color{blue}\mathrm{Cyc}}\,\mathrm{Ext}(X) \ar[rrrr]|{\; {\color{blue}\mathrm{Cyc}}(G_4,G_7)\; } \ar[drr] &&&& {\color{blue}\mathrm{Cyc}}( \mathrm{Ext}(S^4 /\!\!/ S^1) )\;. \ar[dll] \\ && {\color{blue}\mathrm{Cyc}}(B^4 \mathbb{Q}) \ar[d] \\ && B^3 \mathbb{Q} } } \end{equation} Diagram \eqref{SymmetricAtLast} poses the natural question of whether we can find a lift through the $\mathrm{Ext}/\mathrm{Cyc}$-unit: \begin{equation} \label{IsIt} \raisebox{35pt}{ \xymatrix@C=25pt@R=1.5em{ X \ar@[blue]@{-->}@/^1.5pc/[rrrr]^-{ \mbox{ \tiny \begin{tabular}{c} enhanced \\ double dimensional reduction \\ of $(G_4,G_7)$ {\color{blue} ??} \end{tabular} } } \ar[d]_{\eta_X} && && {S^4 /\!\!/ S^1} \ar[d]^{\eta_{{}_{S^4 /\!\!/ S^1}}} \\ { \mathrm{Cyc}}\,\mathrm{Ext}(X) \ar[rrrr]|-{\;\mathrm{Cyc}(G_4,G_7)\; } \ar[drr]_{H_3} &&&& { \mathrm{Cyc}}( \mathrm{Ext}(S^4 /\!\!/ S^1) )\;. \ar[dll] \\ && B^3 \mathbb{Q}\,. } } \end{equation} In general, there are strong obstructions to such a lift: Prop. \ref{MinimalDGCAlgebraModelForATypeOrbispace} implies, via \eqref{AdjunctionUnitOnRationalATypeOrbispace}, that for such a lift to exist, the cocycle data in \eqref{DDRed} needs to satisfy the conditions that $F_6 = 0$ and $H_3 \wedge F_2 = 0$ (so that $d F_4 = 0$). 
Once we take local supersymmetry into account, we will see that these conditions are too restrictive and that a direct lift as in \eqref{IsIt} simply does not exist. \begin{remark}[Goodwillie calculus] \label{GoodwillieCalculus} At this point it behooves us to highlight that similarly to string theory and quantum field theory, homotopy theory also has a concept of \emph{perturbative approximation}. As we have touched upon at varied points, homotopy theory is an immensely rich and computationally demanding area of mathematics (see \cite{Ravenel03, HHR09} for good examples of this). However, in striking analogy to Taylor series expansions in differential calculus, mapping spaces between homotopy types can often be approximated by a sequence of increasingly accurate approximations called the \emph{Goodwillie--Taylor tower} (see \cite{Kuhn04} and \cite[Ch. 6]{Lurie09}). The first-order approximation in this \emph{Goodwillie perturbation theory} is provided by stabilization adjunction between spaces and spectra, as discussed in Sec. \ref{RationalParameterizedStableHomotopyTheory}. \end{remark} In view of Rem. \ref{GoodwillieCalculus}, as we encounter strong obstructions in homotopy theory as in \eqref{IsIt}, we are led to ask whether the obstruction persists ``perturbatively'', hence stage-wise in the Goodwillie--Taylor approximation. To first order in Goodwillie perturbation theory, this means that we should ask whether enhanced double dimensional reduction exists after \emph{fiberwise stabilization} (Prop. \ref{AdjunctionStabilization}) over $S^3 \simeq_\mathbb{Q} B^3\mathbb{Q}$: \begin{equation} \label{Enhc} \raisebox{40pt}{ \xymatrix@R=1.6em@C=4em{ X \ar@[blue]@{-->}@/^2pc/[rrrr]^-{ \mbox{ \tiny \begin{tabular}{c} {\color{blue} perturbatively} enhanced \\ double dimensional reduction \\ of $(G_4,G_7)$ {\color{blue} ?? 
} \end{tabular} } } \ar[d]_{\eta_{{X}}} && && {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \left( S^4 /\!\!/ S^1 \right) \ar@[blue][d]^{ {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \big( \eta_{{S^4 /\!\!/ S^1}} \hspace{-.7mm} \big) } \\ { \mathrm{Cyc}}\,\mathrm{Ext}(X) \ar[rr]^-{\mathrm{Cyc}(G_4,G_7)\; } \ar[drr]_{H_3} && \mathrm{Cyc}( S^4 ) \ar[d] \ar@[blue][rr]^-{ \color{blue} \mathrm{st}} && {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \mathrm{Cyc}(S^4) \ar[dll] \\ && B^3 \mathbb{Q} } } \end{equation} But this is precisely the question that is addressed by Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere}: a dotted morphism as in \eqref{Enhc} exists precisely if the cocycle lifts to untruncated twisted K-theory, as in the diagram: \begin{equation} \label{FactorEnhc} \hspace{-10mm} \mathpalette\mathclapinternal{ \raisebox{40pt}{ \xymatrix@R=1.8em@C=40pt{ X \ar@{-->}@/^2.4pc/@[blue][rrrr]^-{ \mbox{ \tiny \begin{tabular}{c} perturbatively enhanced \\ double dimensional reduction \\ of $(G_4,G_7)$ {\color{blue} ! } \end{tabular} } } \ar@{-->}@[blue][rr]^{ \color{blue} \widehat{(G_4,G_7)} } \ar[drr]|-{\;{\widetilde{(G_4, G_7)}}\; } \ar[d]_{\eta_{{X}}} && {\color{blue} \Omega^{\infty-2}_{B^3 \mathbb{Q}}\left( \mathrm{ku} /\!\!/ B S^1 \right) \,} \ar@[blue]@{..}[d]|-{\color{blue} \tau_6} \ar@{^{(}->}[rr]^-{\iota} && \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3} {S^4 /\!\!/ S^1} \ar[d]^{ \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3} \big( \eta_{{}_{S^4 /\!\!/ S^1}} \hspace{-.7mm}\big) } \\ { \mathrm{Cyc}}\,\mathrm{Ext}(X) \ar[rr]|-{\;\mathrm{Cyc}(G_4,G_7)\; } \ar[drr]_{H_3} && \mathrm{Cyc}( S^4 ) \ar[d] \ar[rr]^-{\mathrm{st}} && \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}\mathrm{Cyc}(S^4) \ar[dll] \\ && B^3 \mathbb{Q}\,.
} }} \end{equation} That is, exhibiting a \emph{perturbatively} enhanced double dimensional reduction cocycle is equivalent to specifying an extension of the cocycle data \eqref{DDRed} to a cocycle of the form \begin{equation} \label{DDRedEnhc} \widehat{(G_4,G_7)} \;\;=\;\; \left\{ \begin{aligned} d H_3 & = 0 \\ \\ d F_2 & = 0 \\ d F_4 & = H_3 \wedge F_2 \\ d F_6 & = H_3 \wedge F_4 \\ {\color{blue} d F_8 } & {\color{blue} = H_3 \wedge F_6 } \\ {\color{blue} d F_{10} } & {\color{blue} = H_3 \wedge F_8 } \\ &\;{\color{blue} \vdots } \end{aligned} \right. \end{equation} According to the discussion in Sec. \ref{Before}, such a cocycle exhibits the full gauge enhancement mechanism in rational homotopy theory. That is, \begin{quotation} \noindent A {\bf solution} to \hyperlink{FirstRational}{\bf Open Problem, rational version 1} (p.~\pageref{FirstRationalProblem}) is obtained as follows: \begin{itemize} \item Double dimensional reduction of the M-theory flux data $(G_4, G_7)$ of \eqref{SugraTopologicalSector} is given by the $\mathrm{Ext}/\mathrm{Cyc}$-adjunct along the M-theory spacetime extension \eqref{YFib}; and \item Its \emph{perturbative gauge enhancement}, making all RR flux forms appear, is exhibited by lifting against the fiberwise stabilization of the $\mathrm{Ext}/\mathrm{Cyc}$-unit on the A-type orbispace of the 4-sphere \eqref{FactorEnhc}, \eqref{DDRedEnhc}. \end{itemize} \end{quotation} \begin{remark}[Copies of twisted K-theory] Let us briefly pause to highlight a curious aspect of our result on gauge enhancement. By Prop. \ref{MinimalDGModels}, there are in fact \emph{two} direct summands of rationalized twisted K-theory inside the rational fiberwise stabilization of the A-type orbispace. Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} assigns a special role to the shifted copy $\Sigma^2 \mathrm{ku} /\!\!/ B S^1$, since it is this copy that obstructs the lifting problem.
The 2-fold suspension here reflects the fact that RR-flux forms in type IIA string theory ordinarily start with the D0-flux $F_2$ in degree 2. But there is also a rational copy of \emph{un}shifted twisted K-theory; a cocycle landing in this copy corresponds to a twisted de Rham cocycle whose lowest component is a 0-form instead of a 2-form. The subtle issue of considering this possibility in the context of the K-theory classification of RR-flux is discussed in \cite{MoS, MathaiSati04, BV} and revisited systematically, and in a refined form, in \cite{GS19}. \end{remark} \medskip \subsection{The mechanism with local supersymmetry} \label{TheAppearanceFromMTheoryOfTheFundamentalD6AndD8} \label{FundamentalpBranes} In this section, we finally present our solution to the problem of gauge enhancement for super M-branes in its formulation as \hyperlink{OpenRational}{\bf Open Problem, rational version 2} (p.~\pageref{OpenRationalPage}). We proceed as per the solution to \hyperlink{FirstRational}{\bf Open Problem, rational version 1} detailed in the previous section, with the important caveat that we now work in the proper context for super $p$-branes, namely locally supersymmetric supergeometry. Before presenting our solution, we briefly review some of the necessary background material (for a detailed introduction with an emphasis on the aspects of relevance here, see \cite{Schreiber16}). \medskip While there is no known mechanism in string theory that would enforce -- or even prefer -- \emph{global} supersymmetry,\footnote{Models of particle physics obtained from dimensional reductions of M-theory on singular manifolds of $G_2$-holonomy (see \cite{Kane17}) are among the globally supersymmetric extensions of the Standard Model of particle physics that are, so far, still consistent with experimental constraints \cite{BGK18}.
If and when supersymmetric extensions of the Standard Model are ruled out, then this will also rule out dimensional reductions of M-theory on singular fiber manifolds of $G_2$-holonomy as realistic models for particle physics. However, this particular type of dimensional reduction is in no way dictated by the theory, is certainly not generic amongst all possibilities, and was motivated by the expectation of global supersymmetry in the first place. What \emph{is} dictated by the theory is \emph{local} supersymmetry, which is already present as soon as fermions are.} what is predicted by perturbative string theory, and what is at the heart of its non-perturbative description as in \cite{ADE}, is \emph{local} supersymmetry, in particular \emph{supergravity}. An elegant way to understand supergravity is via \emph{Cartan geometry}, a powerful generalization of Riemannian geometry (see \cite[Ch. 1]{CapSlovak09} for an introduction). In Cartan geometry, one fixes a local model space $V$ and a subgroup $G \subset \mathrm{Aut}(V)$ of its automorphism group, and then studies spaces that look \emph{locally}, namely on the infinitesimal neighborhood of every point, like $V$ equipped with a reduction of the structure group to $G$: $$ \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for \\ Cartan geometry \end{tabular} & \raisebox{-9pt}{ \xymatrix{ V \ar@(ul,ur)[]^{ G } } } \\ \hline \end{tabular} } $$ In the physics literature, this is known as the method of \emph{local gauging via moving frames} (following \cite{Cartan1923}), while mathematically this is the study of \emph{(torsion-free) $G$-structures} \cite{Guillemin65}.
If one chooses the local model to be Minkowski spacetime \begin{equation} \label{MinkowskiSpacetime} \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for off-shell \\ gravity \end{tabular} & \raisebox{-9pt}{ \xymatrix{ \mathbb{R}^{d,1} \ar@(ul,ur)[]^{{\rm SO}(d,1) } } } \\ \hline \end{tabular} } \end{equation} then the resulting Cartan geometries are precisely the pseudo-Riemannian manifolds modelling configurations in general relativity. However, not all of these configurations are physically admissible (they are not all \lq\lq on-shell'') and we need to impose the Einstein field equations to pick out the physically admissible geometries. In this Cartan-geometric formulation, the passage to supergravity is immediate: taking the local model space to be instead \emph{super} Minkowski spacetime \begin{equation} \label{SuperMinkowskiSpacetime} \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for off-shell \\ supergravity \end{tabular} & \raisebox{-9pt}{ \xymatrix{ \mathbb{R}^{d,1 \vert \mathbf{N}} \ar@(ul,ur)[]^{ {\rm Spin}(d,1)} } } \\ \hline \end{tabular} } \end{equation} then the resulting Cartan geometries are supergravity super-spacetimes. This is spelled out explicitly in \cite{Lott90, EE}, and is implicit in much of the physics literature on supergravity, following \cite{CDF}. \medskip In the spirit of Kleinian geometry \cite{Klein1872}, the group $G$ acting on the local model space $V$ reflects extra structure on $V$. For plain Minkowski spacetime \eqref{MinkowskiSpacetime} with its Lorentz group action, this extra structure is the Minkowski metric of special relativity. For the super-Minkowski spacetime \eqref{SuperMinkowskiSpacetime} with its Lorentzian Spin group action, this structure is the translational \emph{supersymmetry} super Lie algebra structure on $\mathbb{R}^{d,1\vert \mathbf{N}}$. 
Recall that the only non-trivial component of the super Lie bracket is the odd-odd spinor-to-vector pairing: $$ \xymatrix@R=-2pt{ \mathbf{N} \otimes \mathbf{N} \ar[rr]^{ \overline{(-)}\Gamma(-) } && \mathbb{R}^{d,1} \\ \psi_1, \psi_2 \ar@{|->}[rr] && \left(\overline{\psi}_1 \Gamma^a \psi_2\right)_{a=0}^d\,. } $$ The Chevalley--Eilenberg algebra of this super-Minkowski super Lie algebra is a super DG-algebra \begin{equation} \label{CEAlgebraSuperMinkowski} \mathrm{CE}\big( \mathbb{R}^{d,1\vert \mathbf{N}} \big) \;:=\; \Bigg( \mathbb{R}\big[\!\! \underset{ \mathrm{deg} = (1,\mathrm{even}) }{\underbrace{(e^a)_{a =0}^d}} \!\!, \; \underset{ (1, \mathrm{odd}) }{\underbrace{ ( \psi^\alpha )_{\alpha = 1}^{N} }} \; \big]\Big/ {\small \left( \begin{aligned} d e^a & = \overline{\psi}\Gamma^a \psi \\[-1mm] d \psi^\alpha & = 0 \end{aligned} \right)} \Bigg) \;\; \in \mathrm{DGCSuperAlg} \,, \end{equation} like those used in rational homotopy theory in Def. \ref{dgAlgebrasAnddgModules}, but equipped with additional $\mathbb{Z}_2$-grading. In view of the equivalence of Prop. \ref{SullivanEquivalence} we may think of \emph{super} DG-algebras of finite type as the formal duals of rational \emph{superspaces} (see \cite[Sec. 3.2]{ADE} for details in this context): \begin{equation} \label{RationalSuperSpaces} \mathrm{Ho}\left( \mathrm{SuperSpaces}_{\mathbb{R}, \mathrm{cn}, \mathrm{ft}, \mathrm{nil}} \right) \;:=\; \mathrm{Ho} \left( \mathrm{DGCSuperAlg}_{\mathrm{cn}, \mathrm{ft}}^{\mathrm{op}} \right). 
\end{equation} All of ordinary (henceforth \lq\lq bosonic'') rational homotopy theory embeds into rational super homotopy theory by \begin{itemize} \item replacing the ground field by $\mathbb{R}$ via $A \mapsto A\otimes_\mathbb{Q}\mathbb{R}$;\footnote{From the point of view of homotopy theory, there is little difference between working over $\mathbb{Q}$ or over $\mathbb{R}$.} and \item regarding ordinary DG-algebras as having an additional $\mathbb{Z}_2$-grading in which all elements have trivial degree. \end{itemize} For instance, a morphism of the form \vspace{-3mm} $$ \xymatrix{ \mathbb{R}^{d,1\vert \mathbf{N}} \ar@(ul,ur)[]^{{\rm Spin}(d,1)} \ar[rr]^-{\mu_p} && {B}^{p+1} S^1 } $$ in rational super homotopy theory is, equivalently, a ${\rm Spin}(d,1)$-invariant super Lie algebra cocycle of $\mathbb{R}^{d,1\vert \mathbf{N}}$ in degree $p+2$. The requirement of ${\rm Spin}(d,1)$-invariance forces such a cocycle to be a product of elements of the form $$ \mu_{p} \;:=\; \left( \overline{\psi} \Gamma_{a_1 \cdots a_p} \psi \right) \wedge e^{a_1} \wedge \cdots \wedge e^{a_p} \;\;\in \mathrm{CE}( \mathbb{R}^{d,1\vert \mathbf{N}} ) \,. $$ Remarkably, when the local model space is taken to be the $D = 11$, $\mathcal{N} = 1$ super-Minkowski spacetime $$ \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for \emph{on-shell} \\ $D = 11$, $\mathcal{N} = 1$ supergravity \end{tabular} & \raisebox{-9pt}{ \xymatrix{ \mathbb{R}^{10,1 \vert \mathbf{32}} \ar@(ul,ur)[]^{{\rm Spin}(10, 1)} } } \\ \hline \end{tabular} } $$ the condition of (super-)torsion freeness is already equivalent to the supergravity equations of motion \cite{CandielloLechner94, Howe97, FOS}. That is, (super-)torsion-free $D = 11$, $\mathcal{N} =1$ (super-)Cartan geometries are precisely on-shell configurations of supergravity.
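To indicate what closure of such an invariant element amounts to, note that, using $d e^a = \overline{\psi}\Gamma^a\psi$ and $d\psi^\alpha = 0$ from \eqref{CEAlgebraSuperMinkowski}, one computes schematically (a sketch; signs and combinatorial prefactors are suppressed):

```latex
\begin{equation*}
  d \mu_{p}
  \;\propto\;
  \left( \overline{\psi}\, \Gamma_{a\, a_2 \cdots a_p} \psi \right)
  \wedge
  \left( \overline{\psi}\, \Gamma^{a} \psi \right)
  \wedge e^{a_2} \wedge \cdots \wedge e^{a_p} \,.
\end{equation*}
```

Hence $d\mu_p = 0$ holds precisely when the quartic spinor identity $\left(\overline{\psi}\,\Gamma_{a\,a_2\cdots a_p}\psi\right)\wedge\left(\overline{\psi}\,\Gamma^a\psi\right) = 0$ is satisfied, a Fierz identity that singles out special pairs $(d,p)$; for $d = 11$, $\mathbf{N} = \mathbf{32}$ and $p = 2$ it is the identity responsible for $d\mu_{{}_{M2}} = 0$ in \eqref{M2M5Coc} below.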
In fact, more is true: the supergravity equations of motion imply that the bifermionic components of the supergravity $C$-field curvature 4-form $G_4$ and its Hodge dual 7-form $G_7$ are covariantly constant on each super tangent space $\mathbb{R}^{10,1\vert \mathbf{32}}$ and constrained there to be of the form \begin{equation} \label{M2M5Cochains} (G_4,G_7)_{\mathrm{fermionic}} = \left( \begin{aligned} \mu_{M2} &:= \tfrac{i}{2} \left(\overline{\Psi} \Gamma_{a_1 a_2} \Psi\right) \wedge E^{a_1} \wedge E^{a_2}\,, \\ \mu_{M5}&:= \tfrac{1}{5!} \left(\overline{\Psi} \Gamma_{a_1 \cdots a_5} \Psi \right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_5} \end{aligned} \right). \end{equation} In this expression, $(E^a, \Psi^\alpha)$ is the \emph{super-vielbein} field on super-spacetime, hence the supergeometric analog of the vielbein field determining a Cartan geometry. Torsion-freeness implies that on the infinitesimal neighborhood of each point of super-spacetime, the super-vielbein reproduces the super left-invariant 1-form generators $(e^a, \psi^\alpha)$ of $\mathrm{CE}\big( \mathbb{R}^{10,1\vert \mathbf{32}} \big)$ as in \eqref{CEAlgebraSuperMinkowski}. Due to the \emph{Fierz identities} of Spin-representation theory \cite{DF}, the super-forms of \eqref{M2M5Cochains} satisfy the relations \begin{equation} \label{M2M5Coc} \mu_{{}_{M2/M5}} \;=\; \left\{ \begin{aligned} d \mu_{{}_{M2}} & = 0 \\ d \mu_{{}_{M5}} & = -\tfrac{1}{2} \mu_{{}_{M2}} \wedge \mu_{{}_{M2}} \end{aligned} \right. \end{equation} on each super tangent space $\mathbb{R}^{10,1\vert \mathbf{32}}$. Moreover, this is precisely the super tangent space-wise condition that witnesses propagation of fundamental super M2-branes \cite[(15)]{BST2} and the fundamental super M5-branes \cite[(5)]{LT}, \cite[(6)]{BLNPST97} in the background super-spacetime (see \cite[Sec. 2.1]{ADE} for further background on fundamental super $p$-branes). 
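For the comparison made in the next paragraph, it is useful to have the minimal DG-algebra model of the 4-sphere (Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}) in hand; writing $\omega_4, \omega_7$ for its two generators (our notation here, chosen to avoid clashing with the flux forms $G_4, G_7$), it is:

```latex
\begin{equation*}
  \mathbb{R}\big[\!\!
    \underset{\mathrm{deg}=4}{\underbrace{\omega_4}}\!,\;
    \underset{\mathrm{deg}=7}{\underbrace{\omega_7}}
  \big]\Big/
  {\small
  \left(
  \begin{aligned}
    d \omega_4 & = 0 \\
    d \omega_7 & = -\tfrac{1}{2}\, \omega_4 \wedge \omega_4
  \end{aligned}
  \right)}\,,
\end{equation*}
```

so that, by \eqref{M2M5Coc}, the assignment $\omega_4 \mapsto \mu_{{}_{M2}}$, $\omega_7 \mapsto \mu_{{}_{M5}}$ is a homomorphism of DG-algebras, which is exactly the cocycle property exhibited in \eqref{LocalM} below.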
\medskip After comparing the $\mathrm{M2}/\mathrm{M5}$-brane Fierz identity \eqref{M2M5Coc} with the minimal DG-algebra model for the 4-sphere (Ex. \ref{MinimalDgcAlgebraModelFor4Sphere}, also \eqref{SugraTopologicalSector}), we may summarize the state of affairs using the language of (rational) super homotopy theory as follows: \begin{quote}{ 11-dimensional super-Minkowski spacetime carries an exceptional super-cocycle in rational cohomotopy of degree four \cite{FSS16a, cohomotopy}, and 11-dimensional supergravity super-spacetimes together with fundamental M2/M5-branes propagating in them are the \emph{higher} Cartan geometries \cite{Schreiber15, Wellen} locally modelled on the higher geometric data:} \begin{equation} \label{LocalM} \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for \\ fundamental $\mathrm{M2}/\mathrm{M5}$-branes \\ in $D = 11$, $\mathcal{N} = 1$ supergravity \\ \end{tabular} & \raisebox{-9pt}{ \xymatrix{ \mathbb{R}^{10,1\vert \mathbf{32}} \ar@(ul,ur)[]^{{\rm Spin}(10,1) } \ar[rr]^-{ \mu_{{}_{M2/M5}} } && S^4 } } \\ \hline \end{tabular} } \end{equation} \end{quote} \medskip \noindent Analogous statements hold for all fundamental branes that appear in string theory \cite{FSS13}.
In particular, the super-spacetimes of $D = 10$, $\mathcal{N} = (1,1)$ supergravity (that is, type IIA supergravity) are Cartan geometries locally modelled on the type IIA super-Minkowski spacetime $$ \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for \\ $D = 10$, $\mathcal{N} = (1,1)$ supergravity \end{tabular} & \raisebox{-9pt}{ \xymatrix{ \mathbb{R}^{9,1 \vert \mathbf{16} + \overline{\mathbf{16}}} \ar@(ul,ur)[]^{ {\rm Spin}(9,1)} } } \\ \hline \end{tabular} } $$ The presence of the fundamental $\mathrm{F1}$- and $\mathrm{D}p$-branes propagating in these super-spacetimes is captured, as for the M2/M5-brane cocycles above, by the non-trivial super Lie algebra cocycles \cite[(3.9)]{CGNSW97}: \begin{equation} \label{DCoch} \begin{aligned} \mu_{{}_{F1}^{\mathrm{IIA}}} & = i \left( \overline \Psi \Gamma_{a} \Gamma_{10} \Psi\right) \wedge E^a \\ \mu_{{}_{D0}} & = \left(\overline{\Psi} \Gamma_{10} \Psi\right) \\ \mu_{{}_{D2}} & = \tfrac{i}{2} \left( \overline{\Psi} \Gamma_{a_1 a_2} \Psi \right) \wedge E^{a_1} \wedge E^{a_2} \\ \mu_{{}_{D4}} & = \tfrac{1}{4!} \left( \overline{\Psi} \Gamma_{a_1 \cdots a_4} \Gamma_{10} \Psi \right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_4} \\ \mu_{{}_{D6}} & = \tfrac{i}{6!} \left( \overline{\Psi} \Gamma_{a_1 \cdots a_6} \Psi \right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_6} \\ \mu_{{}_{D8}} & = \tfrac{1}{8!} \left( \overline{\Psi} \Gamma_{a_1 \cdots a_8} \Gamma_{10} \Psi \right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_8} \\ \mu_{{}_{D10}} & = \tfrac{i}{10!} \left( \overline{\Psi} \Gamma_{a_1 \cdots a_{10}} \Psi \right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_{10}} \\ \big(\mu_{{}_{NS5}} &= \tfrac{1}{5!} \left(\overline{\Psi} \Gamma_{a_1 \dotsb a_5}\Psi\right) \wedge E^{a_1}\wedge \dotsb \wedge E^{a_5}\big)\;, \end{aligned} \end{equation} where we have also included here the NS5-brane cocycle $\mu_{{}_{NS5}}$ for completeness. 
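A quick degree count on \eqref{DCoch} is worth recording: each spinorial generator $\Psi$ and each vielbein $E^a$ carries cohomological degree 1, so

```latex
\begin{equation*}
  \mathrm{deg}\, \mu_{{}_{Dp}}
  \;=\;
  \underset{\text{two } \Psi\text{'s}}{\underbrace{2}}
  \;+\;
  \underset{p \text{ vielbeins}}{\underbrace{p}}
  \;=\;
  p + 2 \,,
  \qquad
  p \in \{0, 2, 4, 6, 8, 10\}\,,
\end{equation*}
```

while $\mu_{{}_{F1}^{\mathrm{IIA}}}$ has degree 3. The D$p$-brane cocycles thus populate precisely the even degrees $2, 4, \ldots, 12$ of the RR-flux forms $F_{p+2}$ in \eqref{DDRedEnhc}, with $\mu_{{}_{F1}}$ playing the role of the degree-3 twist $H_3$.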
Using the Fierz identities for ${\rm Spin}(9,1)$-representations, one finds that these expressions satisfy the following relations: \begin{equation} \label{DCoc} ( \mu_{{}_{F1/Dp}} ) \;=\; \left\{ \begin{aligned} d \mu_{{}_{F1}} & = 0 \phantom{AAAABBBCCC} \big(d\mu_{{}_{NS5}} = \mu_{{}_{D0}}\wedge \mu_{{}_{D4}} -\tfrac{1}{2}\mu_{{}_{D2}}\wedge \mu_{{}_{D2}}\big) \\ d \mu_{{}_{D(p+2)}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{Dp}}\,, \end{aligned} \right. \end{equation} (\cite[App.~A]{CGNSW97}, \cite[(6.8), (6.9)]{CAIB00}, see also \cite[Theorem 4.16]{FSS16a}, \cite[Prop. 4.8]{FSS16b}). This is the supersymmetric version of the cocycle \eqref{DDRedEnhc} that we encountered in the previous section. \medskip By analogy with \eqref{LocalM}, using Lemma \ref{TwistedKModel} we may concisely summarize this in the language of rational super homotopy theory as follows: \begin{quote} { 10-dimensional type IIA super-Minkowski spacetime carries an exceptional cocycle in rational 2-shifted twisted connective K-theory, and super-spacetimes of 10-dimensional type IIA supergravity together with fundamental $\mathrm{F1}/\mathrm{D}p$-branes propagating inside them are the \emph{higher} Cartan geometries locally modelled on the higher geometric data:} \begin{equation} \label{LocalIIA} \mbox{ \begin{tabular}{|c|c|} \hline \begin{tabular}{c} Local model for \\ fundamental $\mathrm{F1}/\mathrm{D}p$-branes \\ in $D =10$, $\mathcal{N} = (1,1)$ supergravity \end{tabular} & \raisebox{10pt}{ \xymatrix@R=1.7em{ \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \ar@(ul,ur)[]^{{\rm Spin}(9,1) } \ar[rr]^-{ \mu_{{}_{F1/Dp}} } \ar[dr]_{\mu_{{}_{F1}}} && \mathrm{ku} /\!\!/ B S^1 \ar[dl] \\ & B^3 \mathbb{Q} } } \\ \hline \end{tabular} } \end{equation} \end{quote} Working with higher Cartan geometry in this way, one finds large parts of the string/M-theory literature appearing in cocycle incarnations (see \cite[Sec. 2]{ADE} for a more detailed account).
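As a consistency check on \eqref{DCoc}, note that the relations automatically satisfy $d \circ d = 0$: each is of the form $d\mu = \mu_{{}_{F1}} \wedge \mu'$ with $\mu'$ itself satisfying a relation of the same form (or being closed), so that

```latex
\begin{equation*}
  d \big( \mu_{{}_{F1}} \wedge \mu' \big)
  \;=\;
  \underset{= \, 0}{\underbrace{d \mu_{{}_{F1}}}} \wedge \mu'
  \;-\;
  \mu_{{}_{F1}} \wedge d \mu'
  \;=\;
  -\, \mu_{{}_{F1}} \wedge \mu_{{}_{F1}} \wedge \mu''
  \;=\;
  0 \,,
\end{equation*}
```

where the last step uses that $\mu_{{}_{F1}}$ is of odd cohomological degree 3 (and of even super-degree), so that $\mu_{{}_{F1}} \wedge \mu_{{}_{F1}} = 0$ by graded commutativity. The same argument applies verbatim to the bosonic relations \eqref{DDRedEnhc}, with $H_3$ in place of $\mu_{{}_{F1}}$.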
The upshot is that all information about brane species and their behavior in string/M-theory is encoded in \emph{cohomological data}, depending on the local model space. The actual brane dynamics are provided by the higher-super-Cartan-geometric globalization of these local data. \medskip The problem of gauge enhancement for M-branes, therefore, reduces to the question of how double dimensional reduction turns the local model \eqref{LocalM} for $D = 11$, $\mathcal{N} = 1$ super-spacetime with its $\mathrm{M2}/\mathrm{M5}$-brane cocycle into the local model \eqref{LocalIIA} for $D = 10$, $\mathcal{N} = (1,1)$ super-spacetime with its unified D-brane cocycle. This works via the mechanism of Sec. \ref{With}: our input datum \eqref{Input} is now specifically the fundamental $\mathrm{M2}/\mathrm{M5}$-brane super cocycle \eqref{LocalM} \begin{equation} \label{FundamentalM2M5} \xymatrix{ \mathbb{R}^{10,1\vert \mathbf{32}} \ar[rr]^{ \color{blue} \mu_{{}_{M2/M5}} } && S^4\,. } \end{equation} The supersymmetric version of the spacetime circle extension \eqref{YFib} is the extension of type IIA super-Minkowski spacetime to $D =11$ super-Minkowski spacetime classified by the D0-brane cocycle \cite[Prop. 4.5]{FSS13} (this extension implements \lq\lq D0-brane condensation'') \begin{equation} \label{11dExt} \xymatrix@R=1.4em{ *+[r]{ \color{blue} \mathbb{R}^{10,1\vert \mathbf{32}} \;\simeq\; \mathrm{Ext}\left( \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \right) } \ar[d] \\ \mathbb{R}^{9,1 \vert \mathbf{16} + \overline{\mathbf{16}}} \ar[dr]_{ \mu_{{}_{D0}} } \\ & B S^1\;.
} \end{equation} Thus, \eqref{FundamentalM2M5} can be recast in the form $$ \xymatrix@R=1.3em{ {\color{blue} \mathrm{Ext}\left(\mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \right)} \ar[rr]^-{ \mu_{{}_{M2/M5}} } \ar[dr]_{\mu_{{}_{M2}}} && S^4\;, \ar[dl] \\ & B^4 \mathbb{Q} } $$ so that its double dimensional reduction is given as in \eqref{RatDDRed} by the $\mathrm{Ext}/\mathrm{Cyc}$-adjunct as in the following diagram: \begin{equation} \label{SuperRatDDRed} \raisebox{35pt}{ \xymatrix@C=30pt@R=1.5em{ {\color{blue} \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} } \ar@{}[rrrr]^-{ \mbox{ \tiny \begin{tabular}{c} double dimensional reduction \\ of $\mu_{{}_{M2/M5}}$\\{} \end{tabular}} } \ar@[blue]@/^1pc/[drrrr]|-{{\color{blue}\;\widetilde{\mu}_{M2/M5}\;} } \ar[d]_-{\color{blue}\eta} &&&& \\ {\color{blue}\mathrm{Cyc}}\mathrm{Ext}\big( \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \big) \ar[rrrr]|{\; {\color{blue}\mathrm{Cyc}}( \mu_{{}_{M2/M5}})\; } \ar[drr] &&&& {\color{blue}\mathrm{Cyc}}(S^4)\;. \ar[dll] \\ && {\color{blue}\mathrm{Cyc}}(B^4 \mathbb{Q}) \ar[d] \\ && B^3 \mathbb{Q} }} \end{equation} In \cite[Prop. 3.8]{FSS16a} and \cite[Theorem 3.8]{FSS16b}, this double dimensional reduction cocycle $\widetilde{\mu}_{{}_{M2/M5}}$ was computed as: \begin{equation} \label{SuperDDRed} \widetilde{\mu}_{{}_{M2/M5}} \;\;=\;\; \left\{ \begin{aligned} d \mu_{{}_{F1}} & = 0 \phantom{AAA}\phantom{BB}\phantom{CC} d \mu_{{}_{NS5}} = \mu_{{}_{D0}} \wedge \mu_{{}_{D4}} - \tfrac{1}{2} \mu_{{}_{D2}} \wedge \mu_{{}_{D2}}\;. \\ \\ d \mu_{{}_{D0}} & = 0 \\ d \mu_{{}_{D2}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{D0}} \\ d \mu_{{}_{D4}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{D2}} \,. \end{aligned} \right. \end{equation} We therefore obtain the truncation of \eqref{DCoch}, \eqref{DCoc} that contains the fundamental F1-brane cocycle $\mu_{F1}$, as well as the fundamental D$p$-brane cocycles $\mu_{Dp}$ for $p\in \{0,2,4\}$, together with Bianchi identities.
We also obtain the NS5-brane cocycle $\mu_{NS5}$. To enhance this double dimensional reduction picture, we could ask for a lift as in \eqref{IsIt}: \vspace{-3mm} \begin{equation} \label{SuperIsIt} \raisebox{35pt}{ \xymatrix@C=30pt@R=1.5em{ X \ar@{-->}@/^2pc/[rrrr]^{ \mbox{ \tiny \begin{tabular}{c} enhanced \\ double dimensional reduction \\ of $\mu_{{}_{M2/M5}}$ {\color{blue} ??} \end{tabular} } } \ar[d]_{\eta_{{}_{ \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} }}} && && {S^4 /\!\!/ S^1} \ar[d]^{\eta_{{}_{S^4 /\!\!/ S^1}}} \\ { \mathrm{Cyc}}\mathrm{Ext}\big( \mathbb{R}^{9,1 \vert \mathbf{16} + \overline{\mathbf{16}}} \big) \ar[rrrr]|-{\;\mathrm{Cyc}( \mu_{{}_{M2/M5}})\; } \ar[drr]_{ \mu_{{}_{F1}} } &&&& { \mathrm{Cyc}}( \mathrm{Ext}(S^4 /\!\!/ S^1) )\;. \ar[dll] \\ && B^3 \mathbb{Q} } } \end{equation} But a dashed lift in \eqref{SuperIsIt} does \emph{not} exist: Prop. \ref{MinimalDGCAlgebraModelForATypeOrbispace} requires that for such a lift to exist, the double dimensionally reduced cocycle data in \eqref{SuperDDRed} needs to satisfy the extra conditions \begin{enumerate}[{\bf (i)}] \item $\mu_{{}_{D4}} = 0$; and \item $\mu_{{}_{F1}} \wedge \mu_{{}_{D0}} = 0$, \end{enumerate} both of which fail, the first by \eqref{DCoch} the second by \eqref{DCoc}. If we now approach the problem with homotopy theoretic perturbation theory (Rem. \ref{GoodwillieCalculus}), we might instead ask for a \emph{first-order} lift as in \eqref{Enhc}, hence a lift in the following diagram: \begin{equation} \label{SuperEnhc} \hspace{-6mm} \mathpalette\mathclapinternal{ \raisebox{48pt}{ \xymatrix@C=40pt{ \mathbb{R}^{9,1 \vert \mathbf{16} + \overline{\mathbf{16}} } \ar@[blue]@{-->}@/^2pc/[rrrr]^{ \mbox{ \tiny \begin{tabular}{c} \color{blue} perturbatively enhanced \\ double dimensional reduction \\ of $\mu_{{}_{M2/M5}}$ {\color{blue} ?? 
} \end{tabular} } } \ar[d]_{\eta_{{\mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}} }}}} && && {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \left( S^4 /\!\!/ S^1 \right) \ar[d]^{ {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \big( \eta_{{}_{S^4 /\!\!/ S^1}} \hspace{-.7mm} \big) } \\ { \mathrm{Cyc}}\,\mathrm{Ext}\big( \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}}} \big) \ar[rr]^-{ \mathrm{Cyc}( \mu_{{}_{M2/M5}} ) } \ar[drr]_{ \mu_{{}_{F1}} } && \mathrm{Cyc}( S^4 ) \ar[d] \ar@[blue][rr]^-{\color{blue} \mathrm{st}} && {\color{blue} \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}} \mathrm{Cyc}(S^4) \ar[dll] \\ && B^3 \mathbb{Q} } }} \end{equation} As was the case for \eqref{FactorEnhc}, Theorem \ref{TwistedKTheoryInsideFiberwiseStabilizationOfATypeOrbispaceOf4Sphere} says that the lift in \eqref{SuperEnhc} exists precisely if the lift $\widehat{\mu}_{{}_{M2/M5}}$ in the following diagram exists: \begin{equation} \label{SuperFactorEnhc} \hspace{-7mm} \mathpalette\mathclapinternal{ \raisebox{47pt}{ \xymatrix@C=30pt{ \mathbb{R}^{9,1 \vert \mathbf{16} + \overline{\mathbf{16}} } \ar@[blue]@{-->}@/^2.4pc/[rrrr]^{ \mbox{ \tiny \begin{tabular}{c} perturbatively enhanced \\ double dimensional reduction \\ of $\mu_{{}_{M2/M5}}$ {\color{blue} ! 
} \end{tabular} } } \ar@[blue]@{-->}[rr]^{ \color{blue} \widehat{\mu}_{{}_{M2/M5}} } \ar[drr]|-{\;{\widetilde{\mu}_{{}_{M2/M5}}} \;} \ar[d]_{\eta_{{}_{ \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}} } }}} && {\color{blue} \Omega^{\infty-2}_{B^3 \mathbb{Q}}\left( \mathrm{ku} /\!\!/ B S^1 \right) }\, \ar@[blue]@{..}[d]|-{\color{blue} \tau_6} \ar@{^{(}->}[rr]^-{\iota} && \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}\left( S^4 /\!\!/ S^1 \right) \ar[d]^{ \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3} \left( \eta_{{}_{S^4 \; /\!\!/ \; S^1}} \hspace{-.7mm} \right) } \\ { \mathrm{Cyc}}\,\mathrm{Ext} \big( \mathbb{R}^{9,1\vert \mathbf{16} + \overline{\mathbf{16}} } \big) \ar[rr]|-{\;\mathrm{Cyc}( \mu_{{}_{M2/M5}})\; } \ar[drr]_{ \mu_{{}_{F1}} } && \mathrm{Cyc}( S^4 ) \ar[d] \ar[rr]^-{\mathrm{st}} && \Omega^\infty_{S^3}\Sigma^\infty_{+,S^3}\mathrm{Cyc}(S^4) \ar[dll] \\ && B^3 \mathbb{Q}\,. } }} \end{equation} As in \eqref{DDRedEnhc}, such a lift $\widehat{\mu}_{{}_{M2/M5}}$ is equivalent to specifying an extension of the truncated data in \eqref{SuperDDRed} to a cocycle: \begin{equation} \label{SuperDDRedEnhc} \widehat{\mu}_{{}_{M2/M5}} \;\;=\;\; \left\{ \begin{aligned} d \mu_{{}_{F1}} & = 0 \\ \\ d \mu_{{}_{D0}} & = 0 \\ d \mu_{{}_{D2}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{D0}} \\ d \mu_{{}_{D4}} & = \mu_{{}_{F1}} \wedge \mu_{{}_{D2}} \\ {\color{blue} d \mu_{{}_{D6}} } & {\color{blue} = \mu_{{}_{F1}} \wedge \mu_{{}_{D4}} } \\ {\color{blue} d \mu_{{}_{D8}} } & {\color{blue} = \mu_{{}_{F1}} \wedge \mu_{{}_{D6}} } \\ {\color{blue} d \mu_{{}_{D10}} } & {\color{blue} = \mu_{{}_{F1}} \wedge \mu_{{}_{D8}} } \,, \end{aligned} \right. \end{equation} (after discarding the NS5-brane cocycle $\mu_{{}_{NS5}}$). 
By \eqref{DCoch} and \eqref{DCoc}, such an extension exists and is precisely the required enhancement of the double dimensional reduction of the fundamental M2/M5-brane cocycle by the missing $\mathrm{D}(p \geq 6)$-brane cocycles to the full $\mathrm{F1}/\mathrm{D}p$-brane cocycle of type IIA string theory with coefficients in rational twisted K-theory. This is exactly the required gauge enhancement: \begin{quotation} \noindent A {\bf solution} to \hyperlink{OpenRational}{\bf Open Problem, rational version 2} (p.~\pageref{OpenRationalPage}) is given as follows: \begin{itemize} \item Double dimensional reduction of the fundamental M2/M5-brane cohomotopy 4-cocycle $\mu_{{}_{M2/M5}}$ \eqref{M2M5Coc} is obtained as the $\mathrm{Ext}/\mathrm{Cyc}$-adjunct \eqref{SuperDDRed} along the M-theoretic super-spacetime extension \eqref{11dExt}; and \item Its \emph{perturbative gauge enhancement}, making the full combined F1/D$p$-brane cocycle appear, is obtained by lifting through the fiberwise stabilization of the $\mathrm{Ext}/\mathrm{Cyc}$-unit on the A-type orbispace of the 4-sphere \eqref{SuperEnhc}. \end{itemize} \end{quotation} \begin{remark}[Higher-dimensional branes] An often neglected point is that, with local supersymmetry taken into account, the system of relations \eqref{SuperDDRedEnhc} is indeed non-trivial up to the degree shown. The purely bosonic part of the D8-flux form $F_{10}$ is necessarily closed (being a 10-form on a 10-dimensional manifold). However, its fermionic component, which is proportional to $\left(\overline{\Psi} \Gamma_{a_1 \cdots a_8}\Gamma_{10} \Psi\right) \wedge E^{a_1} \wedge \cdots \wedge E^{a_8}$, is only of bosonic degree 8, so that its differential as a super-form need not vanish (and, indeed, does not vanish \cite[(6.9)]{CAIB00}).
For the same reason, we include in \eqref{DCoch} and \eqref{SuperDDRedEnhc} the non-trivial component $\mu_{{}_{D10}}$ which ought to correspond to a D10-brane, were it not for the fact that there is no bosonic aspect of a 10-brane in a 10-dimensional spacetime. One way to see the \lq\lq D10-brane contribution'' $\mu_{{}_{D10}}$ arise is to consider the type IIB D-brane super cocycles $\mu_{{}_{D1}}, \mu_{{}_{D3}}, \mu_{{}_{D5}}, \mu_{{}_{D7}}, \mu_{{}_{D9}}$ \cite[Sec. 2]{IIBAlgebra} and then apply super-geometric T-duality \cite[Theorem 5.3]{FSS16b}. The existence of the charge structure for would-be D10 branes was also noticed in \cite[p.~30]{CallisterSmith09} by different means. \end{remark} \medskip \medskip \noindent {\bf Acknowledgements.} We are grateful to August{\'i} Roig and Martintxo Saralegi-Aranguren for discussion of \cite{RS}, as well as to David Corfield, Ted Erler, Domenico Fiorenza, and David Roberts for useful comments. We also thank the anonymous referee for their careful reading and helpful suggestions. VBM acknowledges partial support of SNF Grant No. 200020\_172498/1. This research was partly supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation, and by the COST Action MP1405 QSPACE, supported by COST (European Cooperation in Science and Technology). \medskip
\section*{Introduction} We present our motivation and main results, after which we give an overview of the paper's content and structure. The overview is followed by a discussion of earlier results by Carlen and Maas for certain finite-dimensional cases.\newline\par \textbf{Motivation.} Our motivation is to conduct geometric analysis on noncommutative spaces. For this, an understanding of noncommutative curvature is essential. Achieving such an understanding remains an ongoing challenge, one prominent approach being modular curvature for the special case of noncommutative tori, developed in \cite{CoMoModCur2T}, \cite{KhalkaliScalCurv4Tori} and \cite{LeMoModCurvME}. In general, an inability to access \textit{local} information prevents a straightforward generalisation of classical definitions to the noncommutative setting. An example of this is the lack of elementary ODE theory, or even of the notion of a chart, in the proper noncommutative setting. On the other hand, fruitful geometric analysis is possible if we only have curvature bounds. A fundamental example is Bochner's inequality; see Li and Yau \cite{LiYauParKer}. Such bounds encode \textit{global} information about the underlying space and can be expressed synthetically, hence we expect them to have a noncommutative analogue.\par Rather than understanding curvature directly, we therefore seek to establish a noncommutative analogue of Ricci curvature bounds. Synthetic Ricci curvature bounds for metric measure spaces in the form of entropic curvature bounds were introduced by Sturm \cite{SturmGMMSI}. Utilising $L^{2}$-Wasserstein distances, this approach leads to a rich metric geometry for metric measure spaces, beginning in \cite{SturmGMMSI} and \cite{SturmGMMSII}. The paper \cite{ErKuStEquivalence} by Erbar, Kuwada and Sturm shows the equivalence of various curvature bound conditions, as well as an analogue of Bochner's inequality for specific metric measure spaces.
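For orientation, recall the classical statement on a Riemannian manifold $(M,g)$ with Laplace--Beltrami operator $\Delta$: for smooth $f$, Bochner's formula reads

```latex
\begin{align*}
  \tfrac{1}{2}\Delta|\nabla f|^{2}
  \;=\;
  |\mathrm{Hess}\,f|^{2}
  \;+\;
  \langle\nabla f,\nabla\Delta f\rangle
  \;+\;
  \mathrm{Ric}(\nabla f,\nabla f)\,,
\end{align*}
```

so that a lower bound $\mathrm{Ric}\geq K g$ yields Bochner's inequality $\tfrac{1}{2}\Delta|\nabla f|^{2}-\langle\nabla f,\nabla\Delta f\rangle\geq K|\nabla f|^{2}$, a formulation that makes no direct reference to local coordinates and is therefore a natural candidate for synthetic and, eventually, noncommutative generalisation.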
Other examples of Bochner's inequality applied to the geometric analysis of singular spaces are \cite{GiKuwHeatAlex}, \cite{OhStBeWeitz} and \cite{ZhaZhuAlexandrov}.\newline\par \textbf{Main Results.} Our main results are twofold. Firstly, we introduce $L^{2}$-Wasserstein distances on the space of densities of a tracial $W^{*}$-algebra. A $C^{*}$-algebra $A$ equipped with a l.s.c.~semi-finite trace $\tau$ on $A$ yields a tracial $W^{*}$-algebra $L^{\infty}(A,\tau)$ represented over $L^{2}(A,\tau)$. Given a suitable type of $A$-derivation $\partial$ from $L^{2}(A,\tau)$ to a submodule of a sum $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$, we follow a Benamou-Brenier approach to define the $L^{2}$-Wasserstein distance. We will call such derivations symmetric gradients. The distance is given on the space of densities $\mathcal{D}:=\{p\in L_{+}^{1}(A,\tau)\ |\ \tau(p)=1\}$ by the minimisation problem \begin{align*} \mathcal{W}_{2}(p,q):=\inf_{(\rho_{t},v_{t})\in\mathcal{A}(p,q)}\sqrt{\frac{1}{2}\int_{0}^{1}||v_{t}||_{\rho_{t}}^{2}dt} \end{align*} \noindent where $\rho_{t}\in\mathcal{D}$, $\rho_{0}=p$, $\rho_{1}=q$, while $v_{t}$ lies in a tangent space constructed over each $\rho_{t}$. Furthermore, we demand that a continuity equation $\frac{d}{dt}\tau(\rho_{t}a)=\langle v_{t},a\rangle_{\rho_{t}}$ be satisfied. Here, $a$ is an element of an appropriate $^{*}$-subalgebra $\mathfrak{A}\subset A\cap D(\partial)$ with \begin{align*} ||a||_{\rho_{t}}^{2}=\sum_{k=1}^{m}\int_{0}^{1}\tau\big(\rho_{t}^{1-\alpha}(\partial a)_{k}\,\rho_{t}^{\alpha}\,(\partial a)_{k}\big)d\alpha \end{align*} \noindent yielding the tangent space over $\rho_{t}$ via Hausdorff completion of $\mathfrak{A}$.
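For comparison, the classical Benamou--Brenier formula \cite{BenBreW2Mech} characterises the $L^{2}$-Wasserstein distance of probability densities $\rho_{0},\rho_{1}$ on $\mathbb{R}^{n}$ as (up to normalisation conventions)

```latex
\begin{align*}
  W_{2}^{2}(\rho_{0},\rho_{1})
  \;=\;
  \inf_{(\rho_{t},v_{t})}\int_{0}^{1}\int_{\mathbb{R}^{n}}|v_{t}|^{2}\rho_{t}\,dx\,dt\,,
  \qquad
  \partial_{t}\rho_{t}+\nabla\cdot(\rho_{t}v_{t})=0\,.
\end{align*}
```

The definition above mirrors this: the weighted norm $||v_{t}||_{\rho_{t}}$ replaces $\int|v_{t}|^{2}\rho_{t}\,dx$, and the continuity equation is imposed weakly against test elements $a\in\mathfrak{A}$.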
Our choice of inner product arises naturally from a noncommutative chain rule if we wish to retain classical relationships between finiteness of $\mathcal{W}_{2}$ on bounded densities, the heat flow and relative entropy.\par Secondly, we consider examples of the form $C_{0}(X,\mathcal{K}(H))$ and give conditions for disintegrating an $L^{2}$-Wasserstein distance into $L^{2}$-Wasserstein distances for $(\mathcal{K}(H),\textrm{tr})$. Let $X$ be a locally compact Hausdorff space with Radon measure $\nu$ such that $(X,\mathcal{B}(X))$ is a separable measure space. Then $A=C_{0}(X,\mathcal{K}(H))$ equipped with the trace \begin{align*} (\nu\otimes\textrm{tr})(F):=\int_{X}\textrm{tr}(F(x))d\nu \end{align*} \noindent provides a setting giving rise to well-behaved $L^{2}$-Wasserstein distances. A sufficiently regular decomposition $\partial=(\partial_{x})_{x\in X}$ into symmetric gradients $\partial_{x}$ for $(\mathcal{K}(H),\textrm{tr})$ induces a distance for which minimisers disintegrate. The disintegration theorem reads as follows: \setcounter{section}{4} \setcounter{thm}{0} \begin{thm} Let $\partial$ be a vertical gradient such that $\partial_{x}$ has continuous dependence of minimisers on start- and endpoints for a.e.~$x\in X$. For all $P,Q\in\mathcal{D}$ with finite distance, we have \begin{align*} \mathcal{W}_{2}^{2}(P,Q)=\int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P} \end{align*} \noindent and there exists a minimiser $\mu_{t}$ of $\mathcal{W}_{2}(P,Q)$ such that $\theta_{P}(x)^{2}\mu_{t}(x)\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$ for a.e.~$x\in X$. \end{thm} \setcounter{section}{0} \setcounter{thm}{0} \noindent In the theorem's formulation, $P,Q\in\mathcal{D}\subset L^{1}(X,\mathcal{S}_{1}(H))$, $d\nu_{P}=\textrm{tr}(P(x))d\nu$ and $\mathcal{W}_{2,x}$ is the $L^{2}$-Wasserstein distance induced by $\partial_{x}$ on density matrices.
Furthermore, we have \begin{align*} \theta_{P}(x):= \begin{cases}\big(\textrm{tr}(P(x))\big)^{-\frac{1}{2}} & \textrm{if}\ P(x)\neq 0 \\ 0 & \textrm{else} \end{cases} \end{align*} \noindent for each $P\in\mathcal{D}$ and $\mathcal{M}(p,q)$ is the set of minimisers for density matrices w.r.t.~the appropriate fibre-geometry. If $H$ is finite-dimensional, Theorem \ref{THM.Disint} implies existence of minimisers between all $P,Q\in\mathcal{D}$ with $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$. We extend the theorem to $\mathcal{K}(H)$-bundles and give sufficient conditions for viewing an arbitrary $L^{2}$-Wasserstein distance as one that disintegrates in the above sense. Deciding whether a disintegration is possible for a given $L^{2}$-Wasserstein distance is its \textit{disintegration problem}.\newline\par \textbf{Content overview.} A Benamou-Brenier approach to $L^{2}$-Wasserstein distances \cite{BenBreW2Mech} requires us to introduce a concept of noncommutative gradient. In our case, these are symmetric derivations on a $C^{*}$-algebra $A$ taking values in a symmetric Hilbert $A$-bimodule. Such derivations appear in \cite{CiDrchltFrmsNCS} and \cite{CiSaDrivSqRtsDrchltFrms} as the natural noncommutative extension of gradients induced by Dirichlet forms. The closely related notion of derivation on a $W^{*}$-algebra was studied in detail in \cite{WeavDerI} and \cite{WeavDerII}, with \cite{WeavDerII_Err} as a list of errata. Symmetry and the Leibniz rule give rise to a noncommutative chain rule, yielding \begin{align*} \partial\log x =\big((L_{x}\otimes R_{x})(D\log)\big)(\partial x). \end{align*} \noindent Here, $x$ is an appropriate element of the domain $D(\partial)$, $D\log$ is the quantum derivative of the logarithm, and $L_{x}\otimes R_{x}$ is a $C^{*}$-representation of $C(\spec(x)\times\spec(x))$ over $H$ induced by the bimodule action of $A$.
In general, we expect \begin{align*} (L_{x}\otimes R_{x})(D\log)=x^{-1} \end{align*} \noindent to be true if and only if $L_{x}=R_{x}$ holds. If we wish to maintain classical relations between relative entropy, heat flow and finiteness of $L^{2}$-Wasserstein distances on bounded densities, we are led to replace multiplication by a density with multiplication by an operator involving functional calculus and the representation above. More precisely, we define \begin{align*} M_{p}=(L_{p}\otimes R_{p})(M_{lm}) \end{align*} \noindent for each density $p\in L^{\infty}(A,\tau)$. Here, $M_{lm}$ is the logarithmic mean. If $p$ is invertible, this implies $M_{p}=(L_{p}\otimes R_{p})(D\log)^{-1}$, ensuring \begin{align*} M_{\rho_{t}}(\partial\log \rho_{t}) = \partial \rho_{t} \end{align*} \noindent remains true for $\rho_{t}:=e^{-t\Delta}p_{0}$. Here, $\Delta=\partial^{*}\partial$ is the induced Laplace operator and $p_{0}$ a bounded density. Having constructed our multiplication operator, we define the tangent space at a bounded density analogous to the commutative case. From this, an $L^{2}$-Wasserstein distance on the space of \textit{bounded} densities $\mathcal{D}_{b}$ follows via minimisation over admissible paths.\par Our finiteness result requires a setting similar to well-behaved commutative ones, for example a compact Riemannian manifold. Setting $P_{t}:=e^{-t\Delta}$, we first prove \setcounter{section}{2} \begin{thm} If $\partial$ satisfies a Poincar\'e-type inequality, the distance between any two invertible bounded densities is finite. If $A$ is unital, $p\in\mathcal{D}_{b}$ and $P_{t}$ is regularity improving, the distance between $p$ and $\rho_{t}:=P_{t}(p)$ is finite for each $t\in [0,1]$. \end{thm} \noindent We thus recover the classical case and thereby justify our initial definitions.
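The relation $M_{p}=\big((L_{p}\otimes R_{p})(D\log)\big)^{-1}$ for invertible $p$ rests on a pointwise fact about the spectrum: the logarithmic mean is the reciprocal of the quantum derivative of the logarithm. A minimal numerical sketch (in Python with NumPy, purely for illustration; the function names are ours, not the paper's):

```python
import numpy as np

def quantum_derivative_log(s, t):
    # D log(s, t) = (log s - log t)/(s - t) off the diagonal, 1/s on it
    return 1.0 / s if np.isclose(s, t) else (np.log(s) - np.log(t)) / (s - t)

def logarithmic_mean(s, t):
    # M_lm(s, t) = (s - t)/(log s - log t) off the diagonal, s on it
    return s if np.isclose(s, t) else (s - t) / (np.log(s) - np.log(t))

# M_lm is the pointwise reciprocal of D log on (0, ∞)², mirroring
# M_p = ((L_p ⊗ R_p)(D log))^{-1} for invertible densities p.
for s, t in [(0.5, 2.0), (1.3, 1.3), (0.1, 7.0)]:
    assert np.isclose(logarithmic_mean(s, t) * quantum_derivative_log(s, t), 1.0)
```

Since $M_{lm}$ extends continuously to $\mathbb{R}_{\geq 0}^{2}$, the same recipe defines $M_{p}$ even for non-invertible $p$.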
Moreover, we lift a theorem first proved by Simon \cite{SiPosImpr} to the noncommutative setting showing \begin{thm} If $e^{tL}$ is a semigroup of self-adjoint, positivity preserving operators on $L^{2}(A,\tau)$, it is positivity improving if and only if it is ergodic. \end{thm} \setcounter{section}{0} \noindent Ergodicity therefore becomes a necessary condition for $P_{t}$ to be regularity improving. Note that so far we have restricted ourselves to bounded densities.\par In order to extend the distance to unbounded densities, we require additional assumptions on domain and codomain of our symmetric gradients. To begin with, we assume $\partial$ to take values in a symmetric Hilbert $L^{\infty}(A,\tau)$-subbimodule of a sum $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$ equipped with the canonical $L^{\infty}(A,\tau)$-bimodule action and Hilbert space structure. Next, we assume existence of an extension algebra $\mathfrak{A}\subset A\cap D(\partial)$ which is dense and such that $\alpha\longmapsto p^{\alpha}\partial ap^{1-\alpha}$ is an element of $L^{1}([0,1],L^{1}(A,\tau))$ for each $a\in\mathfrak{A}$. Under these assumptions, the first statement of Proposition \ref{PRP.dlog} is sufficient to extend $\mathcal{W}_{2}$ to unbounded densities. Multiplication operators will be of the form \begin{align*} \big(M_{p}(x)\big)_{k}=\int_{0}^{1}p^{\alpha}x_{k}p^{1-\alpha}d\alpha\in L^{1}(A,\tau) \end{align*} \noindent for $p\in\mathcal{D},x\in\bigoplus_{k=1}^{m}L^{\infty}(A,\tau)$ and $k\in\{1,...,m\}$. This yields the tangent space norm on elements of $\mathfrak{A}$ we wrote down at the very beginning. Knowing this, extending the distance becomes a straightforward task.\par Turning to vertical gradients, our first task is to understand $L^{p}$-spaces that are defined by $(C_{0}(X,\mathcal{K}(H)),\nu\otimes\textrm{tr})$. Proposition \ref{PRP.Prd_Tr_LP} shows these to equal $L^{p}(X,\mathcal{S}_{p}(H))$ for each $p\in[1,\infty]$.
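The integral formula for $M_{p}$ above can be made concrete on matrix algebras: in an eigenbasis of $p$, the operator $\int_{0}^{1}p^{\alpha}xp^{1-\alpha}\,d\alpha$ multiplies the $(i,j)$ entry by the logarithmic mean of the eigenvalues $\lambda_{i},\lambda_{j}$, and its trace reduces to that of $px$. A numerical sketch (Python/NumPy, illustrative only; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive matrix p (an invertible density up to normalisation)
# and a Hermitian test element x.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
p = B @ B.conj().T + 0.1 * np.eye(4)
C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = (C + C.conj().T) / 2

lam, U = np.linalg.eigh(p)
p_pow = lambda a: (U * lam**a) @ U.conj().T  # p^a via functional calculus

# Gauss-Legendre quadrature of ∫_0^1 p^α x p^(1-α) dα over [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(40)
alphas, ws = (nodes + 1) / 2, weights / 2
M_quad = sum(w * p_pow(a) @ x @ p_pow(1 - a) for a, w in zip(alphas, ws))

# In an eigenbasis of p, the same operator acts entrywise by the
# logarithmic mean M_lm(λ_i, λ_j) of pairs of eigenvalues.
s, t = np.meshgrid(lam, lam, indexing="ij")
with np.errstate(divide="ignore", invalid="ignore"):
    lm = np.where(np.isclose(s, t), s, (s - t) / (np.log(s) - np.log(t)))
M_eig = U @ (lm * (U.conj().T @ x @ U)) @ U.conj().T

assert np.allclose(M_quad, M_eig, atol=1e-8)
# Trace identity: tr(p^α x p^(1-α)) = tr(p x) for every α.
assert np.isclose(np.trace(M_quad), np.trace(p @ x))
```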
We construct vertical gradients as $(\partial F)(x)=\partial_{x}F(x)$ for each $F\in D(\partial)$, where $\partial_{x}$ is a symmetric gradient for $(\mathcal{K}(H),\textrm{tr})$ mapping to some $\bigoplus_{k=1}^{m}\mathcal{S}_{2}(H)$ with $m\in\mathbb{N}$ fixed. We demand $C_{c}(X)\odot\FinRk(H)$ to be an extension algebra. As outlined above, this allows us to extend multiplication operators to densities. We have \begin{align*} \big(M_{P}(F)\big)_{k}(x)=\int_{0}^{1}P(x)^{\alpha}F_{k}(x)P(x)^{1-\alpha}d\alpha\in L^{1}(X,\mathcal{S}_{1}(H)) \end{align*} \noindent for each $P\in\mathcal{D}, F\in\bigoplus_{k=1}^{m}L^{\infty}(X,\mathcal{B}(H))$ and $k\in\{1,...,m\}$. We impose other conditions ensuring mass preservation in each fibre, in particular assuming each $\partial_{x}$ to be a fibre gradient. The latter is a notion introduced at the very beginning of Subsection 3.2. Mass preservation in almost every fibre is necessary for showing the disintegration theorem and proved in Proposition \ref{PRP.Mass_Prsv_Unbd}. First consequences are given in Corollary \ref{COR.Mass_Prsv_Unbd}. For example, $\mathcal{D}$ disintegrates into subspaces whose elements have the same mass in almost every fibre. This gives rise to $L^{2}$-Wasserstein distances that do not metrise the $w^{*}$-topology even when the underlying $C^{*}$-algebra is unital, see Remark \ref{REM.Mass_Prsv_Unbd}.\par Before proving the disintegration theorem, we concern ourselves with symmetric gradients on $(\mathcal{K}(H),\textrm{tr})$. These provide our fibre-geometries. Proposition \ref{PRP.Mass_Prsv} yields the aforementioned conditions for mass preservation along fibres. Furthermore, we introduce the notion of continuous dependence on start- and endpoints necessary for a measurable selection theorem used in the proof of Theorem \ref{THM.Disint}. The proof itself is divided into two parts. In the first, we show every admissible path to have a representative inducing an admissible path on almost every fibre.
In the second, we utilise a measurable selection theorem to show existence of an integrable choice of fibre-wise minimisers. Key for the second step is approximation of marginals by well-chosen step functions and utilisation of continuous dependence of minimisers on start- and endpoints. This will enable us to show a condition required for applying the measurable selection theorem.\par As an application, we consider mean entropic curvature bounds. If $H$ is finite-dimensional, the relative entropy for any density $p$ in a fibre is $\textrm{tr}(p\log p)$. For bounded densities $P$ on a compact $X$, the noncommutative relative entropy becomes \begin{align*} \textrm{Ent}_{m}(P|\nu\otimes\textrm{tr})=\int_{X}\textrm{tr}(P(x)\log P(x))d\nu \end{align*} \noindent which also makes sense for unbounded densities. Declaring it to be the mean relative entropy, we consider synthetic Ricci curvature bounds in analogy to the commutative case. Doing so, we obtain global curvature bounds both on the fibres and the whole geometry. Adapting the notion of continuous dependence of minimisers on start- and endpoints for the proof, we obtain \setcounter{section}{4} \setcounter{thm}{1} \begin{thm} If $\partial$ is a vertical gradient for $(C_{0}(X,M_{n}(\mathbb{C})),\nu\otimes\textrm{tr})$, then \begin{align*} \mcurv(\nu\otimes\textrm{tr},\partial)\geq \essinf_{x\in X}\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial_{x}). \end{align*} \end{thm} \setcounter{section}{0} \setcounter{thm}{0} \noindent This shows that global curvature bounds are controlled by those of the fibre-geometries. Theorems \ref{THM.Disint} and \ref{THM.NC_MCrv} taken together indicate reasonable control of the global geometry by that of the fibres. In the fifth section, we extend vertical gradients and Theorem \ref{THM.Disint} to the general $\mathcal{K}(H)$-bundle case. This will present no great challenge, as most of the work occurs locally.\par Lastly, we introduce the disintegration problem.
We have $A\cong\Gamma(\textrm{End}(V))$ for a finite-dimensional hermitian vector bundle $V$ over a compact Hausdorff space $X$ if and only if $A$ is Morita equivalent to $C(X)$. Given a symmetric gradient $\partial$ for $(A,\tau)$, we ask if it is possible to find a $C^{*}$-algebra isomorphism from $A$ to some $\Gamma(\textrm{End}(V))$ such that $\partial$ is a vertical gradient after push-forward. While we can show $\tau$ to have the form $\nu\otimes\textrm{tr}$ locally, the same cannot be said for symmetric gradients. If, for example, $A=C(X)$ with a non-zero gradient, $V$ must be one-dimensional and $1_{\mathbb{C}}$ the sole density in each fibre. If $\partial$ were vertical, it would vanish by the Leibniz rule. Thus vertical gradients are a purely noncommutative phenomenon. We are able to provide sufficient conditions for disintegration in Corollary \ref{COR.Grd_Dcp_I}, after which we briefly outline plans to extend results to fields of elementary $C^{*}$-algebras and beyond.\par A last word regarding our choice of Wasserstein distance is in order. While easier to handle, $L^{1}$-Wasserstein distances are unsuitable for our purposes. Even in the commutative case and irrespective of the underlying metric measure space, their geodesics are convex combinations of the marginal states. Thus their metric geometry on states is independent of the underlying space's metric geometry. We nevertheless recommend the discussion in \cite{MartiViewOptTrnspNCG} as an introduction and point to \cite{LatrQuanLocCmpct} for a noncommutative $L^{1}$-Wasserstein distance based on ideas first formulated by Rieffel in his paper on compact quantum metric spaces \cite{RiefQuanMetrSp}.
Relations between noncommutative $L^{1}$- and $L^{2}$-Wasserstein distances for finite-dimensional $C^{*}$-algebras mirroring the commutative setting have recently been announced \cite{RouNilRelL2andL1LogSobolev}.\newline\par The noncommutative continuity equation first presented by Carlen and Maas in \cite{CaMaW2RiemI} for the special case of the CAR algebra generated by $n$ bounded operators on a Hilbert space of dimension $n^{2}$, as well as its generalisation to other finite-dimensional cases in \cite{CaMaW2RiemII}, was essentially derived by the same reasoning we apply here. One minor difference is that Carlen and Maas aimed to replace multiplication by $p^{-1}$, rather than $p$, with a noncommutative analogue. While the noncommutative chain rule was not mentioned in either publication, the multiplication operator produced in both is indeed $(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)$. This can be seen by applying Proposition \ref{PRP.dlog}, i.e.~Pedersen's calculus, to the multiplication operators defined in \cite{CaMaW2RiemI} or \cite{CaMaW2RiemII}. As this replacement procedure is key to our approach, we view both papers as foundational.\par Maas' own work \cite{MaGrdtFlwsEntrpFin} concerning an analogue of $L^{2}$-Wasserstein distances for discrete spaces already utilised a similar replacement procedure involving the logarithmic mean. This is something we expect to see since noncommutative geometry aims, among other things, to unify continuous and discrete geometries. Many of the difficulties we face in the noncommutative setting already arise in the discrete case.
Various alternative techniques were used in \cite{ErMaRiCuFinMarkCh} and \cite{ErMaRiCuBL} to obtain analogues of classical curvature bounds, as well as Gromov-Hausdorff convergence for a discrete commutative setting in \cite{GiMaGromHausConvgDiscrtTrnsprt}.\par Finally, we wish to point out that Wirth is developing an $L^{2}$-Wasserstein distance based on the same replacement procedure we engage in here. We stress that both Wirth and the author developed their approach independently of one another, only realising their ideas' similarity after they had matured. Notes were exchanged. In particular, Lemma \ref{LEM.L2_Pst_Prj} is a slight adaptation of a lemma proved by Wirth in future work of his.\newline\par \noindent\textbf{Structure.} This paper is divided into two major parts. In the first, we introduce noncommutative $L^{2}$-Wasserstein distances and prove finiteness results on bounded densities. These are the first two sections. The second part comprises the last three sections. In the third section, we introduce fibre-geometries in preparation for the vertical gradient case. We deal with vertical gradients on trivial $\mathcal{K}(H)$-bundles and prove Theorems \ref{THM.Disint} and \ref{THM.NC_MCrv} in the fourth section. We extend to the general bundle case and introduce the disintegration problem in the fifth section. \smallskip \noindent\textbf{Notation and conventions.} $\mathcal{B}(H)$ is the space of bounded linear operators on a Hilbert space, while $\mathcal{K}(H)$ denotes the compact operators. Furthermore, $\mathcal{S}_{p}(H)$ denotes the Schatten ideals for $p\in [1,\infty]$, with $\mathcal{S}_{\infty}(H)=\mathcal{B}(H)$. For a suitable measure space $(X,\nu)$ and Banach space $E$, $L^{p}(X,E,d\nu)$ denotes the Bochner-$L^{p}$-space. We often drop $\nu$ from our notation. Given a $C^{*}$-algebra $A$, let $A_{h}$ be its self-adjoint and $A_{+}$ its positive elements.
We call $\tau$ a trace on $A$ if it is a l.s.c.\@, semi-finite trace according to Definition 6.1.1 in \cite{DixC*Alg} and set $D(\tau):=\{a\in A\ |\ \tau(|a|)<\infty\}$. In this case, we call $(A,\tau)$ a tracial $C^{*}$-algebra. By default, we consider its n.s.f. extension to $L^{\infty}(A,\tau)$ which we again denote by $\tau$. We write $A\subset L^{p}(A,\tau)$ when considering the image of $A$ under the canonical inclusion. \smallskip \noindent\textbf{Standard references.} General references concerning $C^{*}$- and $W^{*}$-algebras are \cite{DixC*Alg}, \cite{BK.PedC*Auto}, \cite{BK.SakC*W*}, and \cite{TakTOAI}. For noncommutative integration theory, the original paper \cite{SegNCInt} and its correction \cite{SegNCIntCorr} provide a detailed introduction, while both \cite{BK.HndbkGeomBS} and \cite{NelNotesNCInt} give streamlined ones. Broad introductions to noncommutative geometry are \cite{CoNCG} and \cite{KhalBasicNCG}, with \cite{VarilAnIntroNCG} focusing on differential geometric aspects from a functional analytic point of view. We recommend \cite{VilOptTrsp} as a reference for $L^{p}$-Wasserstein distances in the commutative case. For a Benamou-Brenier approach in the commutative setting, we refer to \cite{AmbGreenBook}. Results concerning the Bochner integral can be found in the usual works \cite{TreLocConvSpaces} and \cite{ZeiNLFAIIb}. A reference for vector bundles is \cite{BK.LeeSmthMfds}. \smallskip \noindent\textbf{Acknowledgements.} The author's position at time of writing was funded by the ERC Advanced Grant \textit{Metric measure spaces and Ricci curvature - analytic, geometric, and probabilistic challenges} awarded to K.-T. Sturm. The author wishes to express gratitude to K.-T. Sturm for his continued advice and support.
\section{Preliminaries} We define symmetric gradients as noncommutative analogues of gradients into $L^{2}$-sections, provide a noncommutative chain rule, and describe a differential calculus developed by Pedersen \cite{PedOpDiffFct} useful when dealing with Fr\'echet derivatives on $\mathcal{B}(H)$ involving the continuous functional calculus. This provides a standard representation of our multiplication operator in case $H=L^{2}(A,\tau)$. \subsection{Gradients into bimodules over $C^{*}$-algebras} We define bimodules over $C^{*}$-algebras and introduce symmetric gradients. A primary reference and source of examples is \cite{CiDrchltFrmsNCS}. In it, derivation rather than gradient is the preferred terminology. \begin{dfn} Let $A$ be a $C^{*}$-algebra. We define a bimodule over $A$, or simply $A$-bimodule, to be a $C^{*}$-representation $\pi$ of $A\otimes_{max}A^{op}$ over a Hilbert space. \end{dfn} \begin{ntn} A representation $\pi$ induces an $A$-bimodule structure on $H$ in the algebraic sense, with both actions bounded w.r.t. the Hilbert space topology. We use $\pi$ and $H$ interchangeably. \end{ntn} \begin{dfn}\label{DFN.NCGrad} Let $(A,\tau)$ be a tracial $C^{*}$-algebra. Furthermore, let $H$ be a bimodule over $A$. A gradient $\partial$ for $(A,\tau)$ is a densely defined, closed linear operator from $L^{2}(A,\tau)$ to $H$ such that \begin{itemize} \item[1)] $D(\partial)$ is closed under the $^{*}$-operation on $L^{2}(A,\tau)$, \item[2)] $A_{\partial}:=A\cap D(\partial)$ is a dense $^{*}$-subalgebra of $A$ and a core of $\partial$, \item[3)] $\partial$ is an algebra derivation from $A_{\partial}$ to $H$. \end{itemize} \end{dfn} \begin{ntn} We write $(A,\tau,\partial)$ for a tracial $C^{*}$-algebra $A$ with trace $\tau$ and gradient $\partial$. \end{ntn} We wish to make sense of expressions $\partial f(a)$ for self-adjoint $a\in A_{\partial}$ and sufficiently regular $f\in C(\spec(a))$. By necessity, we require $f(a)\in D(\partial)$ to hold.
If this is true, we expect a noncommutative chain rule to apply. Such a chain rule exists if we have an involution on $H$ compatible with the gradient. \begin{dfn} A bimodule $H$ over $A$ is called symmetric if there exists an isometric, anti-linear involution $J$ on $H$ such that $J(ahb)=b^{*}ha^{*}$ for each $a,b\in A$ and $h\in H$. \end{dfn} \noindent We required the domain of our gradient to be closed under adjoining in $A$ by definition, hence compatibility of $\partial$ and $J$ as defined next makes sense. \begin{dfn} If $\partial$ is a gradient for $(A,\tau)$, $\partial$ is symmetric if $\partial(a^{*})=J(\partial a)$ for each $a\in A_{\partial}$. \end{dfn} \begin{rem}\label{REM.Cmplxfy} Replacing $A$ by $A_{h}$, we consider Definition \ref{DFN.NCGrad} for the real case by demanding $H$ to be a real Hilbert space with an $A_{h}$-bimodule structure. In this case, tensoring with $\mathbb{C}$ yields a symmetric gradient $\partial_{\mathbb{C}}:=\partial+i\partial$. \end{rem} For $a\in A_{h}$, $C(\spec(a))$ is unital and thus \textit{not} equal to $C^{*}(a)\subset A$ for non-unital $A$. We thus cannot obtain a representation of $C(\spec(a))\otimes C(\spec(a))$ over $H$ by restricting $\pi$ to $C(\spec(a))\otimes C(\spec(a))^{op}$ if $A$ is non-unital. Instead, we first consider the left representation $L_{a}$ of $C(\spec(a))$ over $H$ uniquely determined by \begin{align*} L_{a}(f)(x) := \begin{cases} f(a).x & \textrm{if}\ f(0)=0 \\ x & \textrm{if}\ f=1 \end{cases} \end{align*} \noindent We construct a right representation $R_{a}$ of $C(\spec(a))^{op}$ over $H$ analogously, replacing left by right action of $A$ on $H$ in our definition above. Tensoring both $L_{a}$ and $R_{a}$, we have \begin{align*} L_{a}\otimes R_{a}:C(\spec(a))\otimes C(\spec(a))^{op}\longrightarrow\mathcal{B}(H) \end{align*} \noindent with $L_{a}\otimes R_{a}$ depending on the bimodule $\pi$ by construction. This causes no difficulty, as $H$ will remain fixed once chosen.
Commutativity of $C(\spec(a))$ implies $C(\spec(a))\otimes C(\spec(a))$ and $C(\spec(a)\times\spec(a))$ to be isomorphic. \begin{prp}\label{PRP.LR_Rpr} Let $H$ be a symmetric bimodule over $A$, $a\in A_{h}$ and $I\subset\mathbb{R}$ a closed interval containing $\spec(a)$. Then for all $f\in C(I\times I)$, we have \begin{align*} ||(L_{a}\otimes R_{a})(f)||_{\mathcal{B}(H)}\leq ||f||_{C(I\times I)} \end{align*} \end{prp} \begin{proof} $L_{a}\otimes R_{a}$ is a representation, hence a homomorphism of $C^{*}$-algebras. Thus its norm is less than or equal to one and therefore $||(L_{a}\otimes R_{a})(f)||_{\mathcal{B}(H)}\leq ||f||_{C(\spec(a)\times\spec(a))}$, while $||f||_{C(\spec(a)\times\spec(a))}\leq ||f||_{C(I\times I)}$ follows at once from $\spec(a)\subset I$. \end{proof} We are ready to discuss the noncommutative chain rule. It will involve the quantum derivative of a function. Outside of our immediate context, the quantum derivative is a natural analogue of the classical derivative in the discrete setting. Examples are discrete gradients on graphs. A useful introduction is provided by \cite{KacQuantumCalc}. Here, the quantum derivative is simply the correct object when searching for a chain rule involving gradients. \begin{dfn} Let $I\subset\mathbb{R}$ be a closed interval. For $f\in C^{1}(I)$, its quantum derivative is \begin{align*} Df(s,t):=\begin{cases} \frac{f(s)-f(t)}{s-t} & \textrm{if}\ s\neq t\\ f'(s) & \textrm{if}\ s=t \end{cases} \end{align*} \noindent with $(s,t)\in I\times I$. \end{dfn} \noindent $Df$ is continuous by hypothesis on $f$. \begin{prp} Let $\partial$ be a symmetric gradient for $(A,\tau)$ and $a\in A_{h}\cap A_{\partial}$. If $f\in C^{1}(\spec(a))$ such that $f(0)=0$, then \begin{itemize} \item[1)] $f(a)\in D(\partial)$ with $\partial(f(a))=(L_{a}\otimes R_{a})(Df)(\partial(a))$, \item[2)] $||\partial(f(a))||_{H}\leq ||f'||_{C(\spec(a))}||\partial(a)||_{H}$.
\end{itemize} \noindent If we know $A$ and $H$ to be commutative, then $(L_{a}\otimes R_{a})(Df)(h)=f'(a).h$ for each $h\in H$. \end{prp} \begin{proof} The first and second statements are proved in \cite{CiDrchltFrmsNCS}, while the third can be checked immediately on polynomials vanishing at the origin. This extends to all $f$ we consider by density of such polynomials. \end{proof} \begin{rem}\label{REM.QD_Intvl} If $f=g$ on $I$, then $Df=Dg$ in $C(\spec(a)\times \spec(a))$. We are thus able to compute the chain rule for elements $f\in C^{1}(\spec(a))$ even if they are not continuously differentiable outside of $I$, or in case $f(0)\neq 0$ but $0\notin I$. To do so, we simply replace $f_{|I}$ with an appropriate extension $g$ defined on $\mathbb{R}$. The result is independent of our choice. \end{rem} \subsection{Pedersen's differential calculus} In \cite{PedOpDiffFct}, Pedersen developed a differential calculus based on usual Fr\'echet differentiation yet well-behaved with respect to functional calculus. For a more thorough treatment, we refer to the original paper. \begin{dfn} Let $H$ be a separable Hilbert space, $I\subset\mathbb{R}$ a closed interval. We denote the space of all self-adjoint, bounded operators over $H$ with spectra in $I$ by $B(H)_{h}^{I}$. We call a function $f:I\longrightarrow\mathbb{R}$ operator differentiable if the map \begin{align*} f:B(H)_{h}^{I}\longrightarrow B(H),\ T\mapsto f(T) \end{align*} \noindent is Fr\'echet differentiable, and denote its Fr\'echet derivative at $T$ by $df_{T}$. \end{dfn} Pedersen showed operator differentiable maps to form a Banach $^{*}$-algebra, denoted by $C_{op}^{1}(I)$. This notation is justified because every operator differentiable function $f$ is continuously Fr\'echet differentiable, cf.~Theorem 2.6 in \cite{PedOpDiffFct}.
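For a finite-dimensional illustration of the chain rule $\partial(f(a))=(L_{a}\otimes R_{a})(Df)(\partial(a))$, take the inner derivation $\partial=i[y,\,\cdot\,]$ on a matrix algebra: in an eigenbasis of $a$, the operator $(L_{a}\otimes R_{a})(Df)$ acts entrywise by $Df(\lambda_{i},\lambda_{j})$, which is the classical Daleckii--Krein formula. A sketch (Python/NumPy, illustrative only; names are ours), using $f(\lambda)=\lambda^{3}$ so that $Df$ needs no diagonal special case:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hermitian a and the inner derivation ∂ = i[y, ·] with y Hermitian.
A_ = rng.standard_normal((5, 5))
a = (A_ + A_.T) / 2
Y_ = rng.standard_normal((5, 5))
y = (Y_ + Y_.T) / 2
d = lambda z: 1j * (y @ z - z @ y)

lam, U = np.linalg.eigh(a)
f_a = (U * lam**3) @ U.T  # f(a) for f(λ) = λ³ via functional calculus

# Quantum derivative Df(s, t) = (s³ - t³)/(s - t) = s² + st + t².
s, t = np.meshgrid(lam, lam, indexing="ij")
Df = s**2 + s * t + t**2

# Chain rule ∂ f(a) = (L_a ⊗ R_a)(Df)(∂ a): in an eigenbasis of a,
# the operator multiplies the (i, j) entry by Df(λ_i, λ_j).
lhs = d(f_a)
rhs = U @ (Df * (U.T @ d(a) @ U)) @ U.T
assert np.allclose(lhs, rhs)
```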
\begin{prp}\label{PRP.dlog} If $H$ is a separable Hilbert space and $I\subset\mathbb{R}_{>0}$ a closed interval, $d\hspace{0.05cm}\log_{\hspace{0.05cm}T}(S)$ is the unique solution $X$ of the integral equation \begin{align*} \int_{0}^{1}T^{s}XT^{1-s}ds=S \end{align*} \noindent for each $S,T\in B(H)_{h}^{I}$. Let furthermore $(A,\tau)$ be a tracial $C^{*}$-algebra represented over $H$ and $L^{1}(A,\tau)$ separable. If $T\in L^{\infty}(A,\tau)\cap\mathcal{B}(H)_{h}^{I}$ and $S\in\mathcal{B}(H)_{h}^{I}$ such that $d\log_{T}(S)\in L^{1}(A,\tau)$, then \begin{align*} \tau(Td\log_{\hspace{0.05cm}T}(S))=\tau(S). \end{align*} \end{prp} \begin{proof} The first statement is proved on p.~155 of \cite{PedOpDiffFct}. For the second one, note $||T^{\alpha}||_{\infty}||T^{1-\alpha}||_{\infty}=||T||_{\infty}$ since $I\subset\mathbb{R}_{>0}$ is closed by hypothesis while $\lambda\longmapsto\lambda^{\alpha}$ increases monotonically for $\lambda>0$, $\alpha\in (0,1]$. For all $R\in L^{1}(A,\tau)$, we therefore know \begin{align*} ||T^{\alpha}RT^{1-\alpha}||_{L^{1}(A,\tau)}\leq ||T^{\alpha}||_{\infty}||T^{1-\alpha}||_{\infty}||R||_{L^{1}(A,\tau)}=||T||_{\infty}||R||_{L^{1}(A,\tau)}. \end{align*} \noindent Multiplication in $L^{\infty}(A,\tau)$ is $||.||_{\infty}$-continuous. Thus $\alpha\longmapsto\tau(T^{\alpha}RT^{1-\alpha}X)=\tau(RT^{1-\alpha}XT^{\alpha})$ is continuous, hence measurable, for arbitrary $X\in L^{\infty}(A,\tau)$. Hence $T^{\alpha}RT^{1-\alpha}$ is Bochner-integrable as a path from $[0,1]$ to $L^{1}(A,\tau)$ by separability of the latter. Using continuity of $\tau$ w.r.t.~the $||.||_{L^{1}(A,\tau)}$-topology and $d\log_{T}(S)\in L^{1}(A,\tau)$, we are now able to calculate \begin{align*} \tau(Td\log_{\hspace{0.05cm}T}(S)) &= \int_{0}^{1}\tau(Td\log_{\hspace{0.05cm}T}(S))ds\\ &=\int_{0}^{1}\tau(T^{s}d\log_{\hspace{0.05cm}T}(S)T^{1-s})ds\\ &=\tau(\int_{0}^{1}T^{s}d\log_{\hspace{0.05cm}T}(S)T^{1-s}ds)\\ &=\tau(S).
\end{align*} \end{proof} \begin{rem}\label{REM.dlog} By definition of $d\log$ as Fr\'echet derivative, $d\log_{T}(S)\in L^{\infty}(A,\tau)$ if $T,S\in L^{\infty}(A,\tau)$. Hence finiteness of $\tau$ implies $d\log_{T}(S)\in L^{1}(A,\tau)$ whenever $S,T\in L^{\infty}(A,\tau)$. \end{rem} We assume $A$ to be a $C^{*}$-algebra representable over a separable Hilbert space for the remainder of this section. Pedersen developed a noncommutative chain rule involving his differential calculus and derivations on $A$. \begin{prp} If $\partial:A\longrightarrow A$ is a closed derivation, then $f(a)\in D(\partial)$ and $\partial(f(a))=df_{a}(\partial(a))$ for each $f\in C_{op}^{1}(I)$ with $\spec(a)\subset I$. \end{prp} \begin{proof} This is Theorem 3.7 in \cite{PedOpDiffFct}. \end{proof} \noindent If $H=L^{2}(A,\tau)$, $(L_{a}\otimes R_{a})(Df)$ reduces to Pedersen's derivative $df_{a}$. As such, Pedersen derived a special case of the noncommutative chain rule we discussed above. \begin{prp} Let $L^{2}(A,\tau)$ be equipped with the canonical $A$-bimodule structure. If $\partial$ is a symmetric gradient such that $\partial_{|A}$ is a closed derivation on $A$, then $(L_{a}\otimes R_{a})(Df)=df_{a}$ for each $f\in C_{op}^{1}(I)$. \end{prp} \begin{proof} This is checked immediately on polynomials vanishing at the origin, and the general statement follows by density. \end{proof} The case of $H=L^{2}(A,\tau)$ yields a most canonical setting for our extension problem as it will provide an integral representation of our multiplication operator $M_{p}$ given by \begin{align*} M_{p}(h)=\int_{0}^{1}p^{\alpha}hp^{1-\alpha}d\alpha. \end{align*} \noindent Here, $h\in L^{2}(A,\tau)$ and $p$ is a bounded density. $C^{*}$-dynamical systems induce gradients of this form. As such, even \textit{bounded} symmetric gradients arise canonically in infinite dimensions. All $i[y,\hspace{0.05cm}.\hspace{0.05cm}]$ with $y\in A_{h}$ are of this form. 
We obtain them by differentiating $\alpha_{t}(x):=e^{tiy}xe^{-tiy}$ at the origin. \section{$L^{2}$-Wasserstein distances on noncommutative densities} Starting from noncommutative relative entropy for unital $C^{*}$-algebras, we motivate our notion of multiplication operator. Natural definitions of tangent space, energy functional and $L^{2}$-Wasserstein distance for bounded densities follow. Our justification is completed by finiteness results emulating the compact Riemannian case. We extend $L^{2}$-Wasserstein distances to unbounded densities for symmetric gradients mapping into symmetric Hilbert $L^{\infty}(A,\tau)$-subbimodules and having an extension algebra. \subsection{Noncommutative relative entropy} For this subsection, we assume $(A,\tau)$ to be a unital tracial $C^{*}$-algebra with $\tau(1_{A})=1$ such that both $L^{1}(A,\tau)$ and $L^{2}(A,\tau)$ are separable. This occurs if $A$ is separable. Unitality is required when using Petz's variational description of Araki's noncommutative relative entropy. Set $M:=L^{\infty}(A,\tau)$. \begin{dfn}\label{DFN.Bd_Dst_Untl} $\mathcal{D}_{b}:=\{p\in M_{+}\ |\ \tau(p)=1\}$ is the space of bounded densities. \end{dfn} The noncommutative relative entropy is defined as a Legendre transform. In the commutative case, this reduces to the familiar representation of the relative entropy as the Legendre dual of the logarithmic Laplace transform. \begin{dfn} For all $p\in\mathcal{D}_{b}$, the noncommutative relative entropy is defined as \begin{align*} \textrm{Ent}(p|\tau):=\underset{x\in M_{h}}{\sup}\{\tau(xp)-\log\tau(e^{x})\} \end{align*} \end{dfn} \noindent We refer to Petz's paper \cite{PetzVarRelEnt} for the variational description we make use of here. See \cite{PetzPrpRelEnt} for a more general description of its operator algebraic properties. Convexity and hence lower semicontinuity of the noncommutative relative entropy in all relevant operator algebraic topologies follow immediately from the definition.
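On a matrix algebra with normalised trace $\tau=\textrm{tr}/n$, the variational formula can be tested directly: the functional $x\mapsto\tau(xp)-\log\tau(e^{x})$ is maximised at $x=\log p$ with value $\tau(p\log p)$ (cf.~Proposition \ref{PRP.Entrp_Rpr} below). A numerical sketch (Python/NumPy, illustrative only; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
tau = lambda z: np.trace(z).real / n  # normalised trace, τ(1) = 1

# Random invertible density p ≥ 0 with τ(p) = 1.
B = rng.standard_normal((n, n))
p = B @ B.T + 0.1 * np.eye(n)
p /= tau(p)

lam, U = np.linalg.eigh(p)
log_p = (U * np.log(lam)) @ U.T

def exp_h(x):
    # matrix exponential of a symmetric matrix via its eigenbasis
    mu, V = np.linalg.eigh(x)
    return (V * np.exp(mu)) @ V.T

functional = lambda x: tau(x @ p) - np.log(tau(exp_h(x)))
ent = tau(p @ log_p)  # τ(p log p)

# The supremum is attained at x = log p ...
assert np.isclose(functional(log_p), ent)
# ... and every self-adjoint competitor stays below it (Gibbs inequality).
for _ in range(20):
    C = rng.standard_normal((n, n))
    assert functional(C + C.T) <= ent + 1e-9
```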
The noncommutative relative entropy additionally takes the expected form $\tau(p\log p)$ on bounded densities. \begin{prp}\label{PRP.Entrp_Rpr} If $p\in\mathcal{D}_{b}$, then $\textrm{Ent}(p|\tau)=\tau(p\log p)<\infty$. \end{prp} \begin{proof} Assume $p$ to be invertible. In \cite{PetzVarRelEnt}, the relative entropy $S(\varphi,\omega)$ after Araki is discussed in full generality. Our case reduces to $\varphi=\tau$ and $\omega=\tau(\hspace{0.05cm}.\hspace{0.05cm}p)$. Both are faithful normal states. Traciality of $\tau$ implies equality of the modular operator $\mathit{\Delta}$ and the identity. In the notation of \cite{PetzVarRelEnt}, this ensures $\varphi^{h}=\tau(\hspace{0.05cm}.\hspace{0.05cm}e^{h})$. Using this, the first proposition in \cite{PetzVarRelEnt} implies \begin{align*} S(\varphi,\omega)=\underset{x\in M_{h}}{\sup}\{\tau(xp)-\log\tau(e^{x})\}. \end{align*} \noindent The same proposition also tells us that the supremum is reached if and only if \begin{align*} \tau(\hspace{0.025cm}.\hspace{0.05cm}p)=\tau(\hspace{0.05cm}.\hspace{0.05cm}\frac{e^{x}}{\tau(e^{x})}) \end{align*} \noindent holds, which is true for $x=\log p$. Thus $\tau(p\log p)-\log(\tau(p))=\tau(p\log p)$ equals the supremum.\par For arbitrary $p$, we have $p\log p\in M$. This follows by continuity of $\lambda\log\lambda$ on $\mathbb{R}_{\geq 0}$, allowing us to apply Borel functional calculus. For any sequence of invertible operators $p_{i}$ converging to $p$ in the strong operator topology, we obtain \begin{align*} \textrm{Ent}(p|\tau)\leq\liminf_{i}\textrm{Ent}(p_{i}|\tau)=\tau(p\log p) \end{align*} \noindent by lower semi-continuity of the noncommutative relative entropy, as well as continuity of the functional calculus under the strong operator topology. For the converse, define $x_{\varepsilon}:=\max\{\log p,-\varepsilon\}$. Then $\tau(px_{\varepsilon})$ converges to $\tau(p\log p)$, resp.~$\tau(e^{x_{\varepsilon}})$ to $1$, as $\varepsilon\longrightarrow\infty$.
\end{proof} Let $\partial$ be a symmetric gradient for $(A,\tau)$ and $\Delta:=\partial^{*}\partial$ its Laplacian. We examine an interaction between the noncommutative relative entropy and the heat semigroup $P_{t}:=e^{-t\Delta}$. For our following statements, we require the heat semigroup to regularise elements in $M_{+}$ sufficiently well. The derivative in the upcoming definition is the Fr\'echet derivative w.r.t.~the $||.||_{M}$-topology. \begin{dfn} Set $D_{Fr}(\Delta):=\{x\in D(\Delta)\ |\ \frac{d}{dt}_{|t=0}P_{t}(x)\ \textrm{exists} \}$. We call $P_{t}$ regularity improving if for all $x\in M_{+}$ and all $t\in (0,1]$, we have $P_{t}(x)\in GL(M)\cap D_{Fr}(\Delta)$. \end{dfn} \noindent By the semigroup property, $x\in D_{Fr}(\Delta)$ if and only if $t\longmapsto P_{t+s}(x)$ is Fr\'echet differentiable at the origin for each $s\in (0,1)$. An example from commutative geometry is the heat semigroup on a compact Riemannian manifold. Uniform convergence of the heat kernel yields the required property, see Chapter 8 of \cite{GrigHeatKernel}. In the finite-dimensional case, $\Delta$ having one-dimensional kernel implies the heat semigroup to be regularity improving. We will prove this at the end of Section 2.3. The next lemma shows the logarithm's quantum derivative appearing when differentiating the relative entropy evaluated at the heat semigroup. \begin{lem}\label{LEM.Entrp_Mlt} Let $p\in\mathcal{D}_{b}$ and set $\rho_{t}:=P_{t}(p)$. If $P_{t}$ is regularity improving, then \begin{align*} \frac{d}{dt}_{|t=s}\textrm{Ent}(\rho_{t}\hspace{0.05cm}|\hspace{0.05cm}\tau)=-\langle \big((L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)\big)(\partial \rho_{s}),\partial\rho_{s}\rangle_{H} \end{align*} \noindent for each $s\in (0,1)$. \end{lem} \begin{proof} For all $t\in (0,1]$, we have $\frac{d}{dt}\rho_{t}=-\Delta\rho_{t}$ and we know the limit on the left-hand side to lie in $M$.
Moreover, $\rho_{t}$ is a bounded \textit{density} for each $t\in [0,1]$ since $P_{t}$ preserves mass by unitality of $A$. Finally, $\rho_{t}$ is invertible for each $t\in (0,1]$. Taken together, this implies $\log(\rho_{t})$ to be Fr\'echet differentiable on $(0,1)$. Application of the chain rule allows us to express its derivative using Pedersen's differential calculus.\par We calculate \begin{align*} \frac{d}{dt}_{|t=s}\textrm{Ent}(\rho_{t}\hspace{0.05cm}|\hspace{0.05cm}\tau)=-\tau(\Delta\hspace{0.05cm}\rho_{s}\log \rho_{s})+\tau(\rho_{s}d\log_{\rho_{s}}(-\Delta\hspace{0.05cm}\rho_{s})) \end{align*} \noindent where we used Pedersen's chain rule and the derivative of a bounded bilinear map. The second summand equals $-\tau(\Delta\hspace{0.05cm}\rho_{s})$ by the second statement of Proposition \ref{PRP.dlog} and Remark \ref{REM.dlog}. Once more, $\tau(\Delta\rho_{s})=0$ as $1_{A}\in\ker\partial$ by unitality. We obtain \begin{align*} \frac{d}{dt}_{|t=s}\textrm{Ent}(\rho_{t}\hspace{0.05cm}|\hspace{0.05cm}\tau)=-\tau(\Delta\hspace{0.05cm}\rho_{s}\log\rho_{s}) \end{align*} \noindent for each $s\in (0,1)$. We have $\tau(\Delta\hspace{0.05cm}\rho_{s}\log\rho_{s})=\langle\Delta\hspace{0.05cm}\rho_{s},\log\rho_{s}\rangle_{L^{2}(A,\tau)}=\langle \partial\rho_{s},\partial\log\rho_{s}\rangle_{H}$. From this and $\spec(\rho_{s})\subset\mathbb{R}_{>0}$ being bounded from below by invertibility of $\rho_{s}$, the noncommutative chain rule for symmetric gradients shows \begin{align*} \frac{d}{dt}_{|t=s}\textrm{Ent}(\rho_{t}\hspace{0.05cm}|\hspace{0.05cm}\tau)=-\langle\partial\rho_{s},\big((L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)\big)(\partial\rho_{s})\rangle_{H}. \end{align*} \noindent $D\log$ is real-valued, hence $(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)$ is self-adjoint. The statement follows by shifting the operator to the left-hand side of the inner product. 
\end{proof} \begin{rem} If $H$ is commutative, acting by $(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)$ reduces to multiplication by $\rho_{s}^{-1}$. \end{rem} If $P_{t}$ is regularity improving and $\rho_{t}$ is as above, $\rho_{t}$ should not only solve a noncommutative equivalent of the continuity equation but also have finite energy. Under this condition, Lemma \ref{LEM.Entrp_Mlt} shows how noncommutativity leads us to replace $\rho_{t}^{-1}$ by $(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)$. We seek to generalise multiplication by $\rho_{s}$, thus we consider the inverse of $(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)$. As $\rho_{s}$ is invertible for $s>0$, this implies \begin{align*} (L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)^{-1}=(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log^{-1})=(L_{\rho_{s}}\otimes R_{\rho_{s}})(M_{lm}). \end{align*} \noindent Here, $M_{lm}:=D\log^{-1}$ is the logarithmic mean. It is defined on all of $\mathbb{R}_{\geq 0}^{2}$, hence we obtain a candidate for a multiplication operator even if $p$ is not invertible. This presents our starting point for defining the noncommutative $L^{2}$-Wasserstein distance. \subsection{Energy functional and definition for bounded densities} For the remainder of this section, let $(A,\tau)$ be a tracial $C^{*}$-algebra and set $M:=L^{\infty}(A,\tau)$. We demand the actions of $A$ on $H$ to extend to bounded actions of $M$. Such actions will be classified as part of future work by Wirth. For direct summands of $L^{2}(A,\tau)$, the canonical $M$-actions clearly extend those of $A$.\par We define a multiplication operator given an element in $M_{+}$. The logarithmic mean $M_{lm}$ is continuous on all of $\mathbb{R}_{\geq 0}^{2}$, vanishing at the boundary. For motivation, we refer to the previous subsection. \begin{dfn} If $x\in M_{+}$, then $M_{x}:=(L_{x}\otimes R_{x})(M_{lm})$ is the multiplication operator for $x$. 
\end{dfn} \begin{bsp} If $H=L^{2}(A,\tau)$, then $M_{x}(h)=\int_{0}^{1}x^{\alpha}hx^{1-\alpha}d\alpha$ by Proposition \ref{PRP.dlog}. This extends to direct sums of $L^{2}(A,\tau)$, as well as appropriate submodules defined in Subsection 2.4. \end{bsp} \noindent Since $M_{lm}$ is positive, we have $M_{x}\in B(H)_{+}$ in general. Furthermore, $M_{x}$ is invertible if $x$ is, and for all $x\in M_{+}\cap GL(M)$, we have \begin{align*} M_{x}=(L_{x}\otimes R_{x})(M_{lm})=(L_{x}\otimes R_{x})(D\log)^{-1} \end{align*} \noindent because $L_{x}\otimes R_{x}$ is an algebra homomorphism. If we are in the situation of Lemma \ref{LEM.Entrp_Mlt} and set $v_{s}:=(L_{\rho_{s}}\otimes R_{\rho_{s}})(D\log)(\partial\rho_{s})$, the same lemma implies \begin{align*} -\frac{d}{dt}_{|t=s}\textrm{Ent}(\rho_{t}\hspace{0.05cm}|\hspace{0.05cm}\tau)=\langle v_{s},M_{\rho_{s}}v_{s}\rangle_{H}=||M_{\rho_{s}}^\frac{1}{2}v_{s}||_{H}^{2}. \end{align*} \noindent We view $||M_{p}^{\frac{1}{2}}h||_{H}$ as our analogue of the tangent space norm. Before defining admissible paths and the energy functional, we prove a crucial statement bounding the norm of $M_{x}\in B(H)$ by that of $x\in M$. \begin{prp}\label{PRP.Nrm_Mlt_Op} For $x\in M_{+}$, we have $||M_{x}||_{B(H)}\leq ||x||_{M}$. Equality holds if $\pi$ is faithful. \end{prp} \begin{proof} For $C>0$, consider $f(s,t):=M_{lm}(s,t)$ on $[0,C]\times [0,C]$. We claim $||f||_{\infty}=C$. To see this, first observe $f(C,C)=C$ since $D\log(C,C)=C^{-1}$. For the converse, note how $f(s,0)=0$ for each $s\in [0,\infty)$. Thus finding and comparing maxima of the differentiable functions $f_{s}(t):=f(s,t)$ on $(0,C)$ for each fixed $s\in (0,C)$ is sufficient for our purposes. A calculation shows \begin{align*} \frac{d}{dt}f_{s}(t)=0 \iff t=f_{s}(t) \end{align*} \noindent to hold. Since $s,t>0$, the right-hand side is equivalent to $\log(\frac{s}{t})=\frac{s}{t}-1$. 
Yet, $\log(x)=x-1$ implies $x=1$ as the functions $\log(x)$ and $x-1$ intersect tangentially while $\log$ is concave. Hence $f_{s}$ has an extremum on $(0,C)$ if and only if $s=t$. Knowing $s<C$, this implies $f_{s}(s)=f(s,s)=s>0=f_{s}(0)$ to be an extreme point. It must therefore be a global maximum as $f_{s}$ is continuous on $[0,C]$. If $s=C$, there is no extreme point on $(0,C)$ but $f_{C}(C)=C>0=f_{C}(0)$ still holds. Thus the global maximum of $f$ is given by $C$.\par The claim is trivial for $x=0$, hence let $x\in M_{+}$ be non-zero. As $M_{x}$ is given by the image of $f_{|\spec(x)\times\spec(x)}$ under $\pi$, we have \begin{align*} ||M_{x}||_{B(H)}\leq\sup_{s,t\in\spec(x)}|f(s,t)|\leq ||x||_{M} \end{align*} \noindent where the first inequality stems from $\pi$ being a $^{*}$-homomorphism of $C^{*}$-algebras. We used the first part of this proof to obtain the second estimate. However, $||x||_{M}\in\spec(x)$ holds by self-adjointness of $x$. The statement follows from $f(s,s)=s$. \end{proof} \begin{dfn}\label{DFN.Dst} Let $\mathcal{D}:=\{p\in L_{+}^{1}(A,\tau)\ |\ \tau(p)=1\}$ and $\mathcal{D}_{b}:=\{p\in\mathcal{D}\ |\ p\in L^{\infty}(A,\tau)\}$ be the space of densities, resp. the space of bounded densities. \end{dfn} \begin{rem} We already defined $\mathcal{D}_{b}$ in Definition \ref{DFN.Bd_Dst_Untl}; repeating it here is merely a matter of exposition. \end{rem} We are ready to define tangent spaces, admissible paths, the energy functional and finally the $L^{2}$-Wasserstein distance. In the following, assume $\partial$ to be a symmetric gradient for $(A,\tau)$. \begin{dfn} For all $p\in\mathcal{D}_{b}$ and all $a,b\in A_{\partial}$, set \begin{align*} \langle a,b\rangle_{p}:=\langle M_{p}\partial a,\partial b\rangle_{H}. \end{align*} \end{dfn} \begin{rem} Each $\langle \ ,\hspace{0.05cm} \rangle_{p}$ is a positive semi-definite bilinear form on $A_{\partial}$ by positivity of $M_{p}$. 
\end{rem} \begin{dfn} The tangent space $T_{p}\mathcal{D}_{b}$ at $p\in\mathcal{D}_{b}$ is defined to be the Hausdorff completion of $A_{\partial}$ w.r.t.~$\langle \ ,\hspace{0.05cm} \rangle_{p}$. The tangent bundle is defined as $T\mathcal{D}_{b}:=\underset{p\in\mathcal{D}_{b}}{\coprod}\ \{p\}\times T_{p}\mathcal{D}_{b}$. \end{dfn} \begin{ntn} A path $\mu_{t}$ in $T\mathcal{D}_{b}$ splits into a pair of paths $\mu_{t}=(\rho_{t},v_{t})$ with unique $\rho_{t}\in\mathcal{D}_{b}$ and $v_{t}\in T_{\rho_{t}}\mathcal{D}_{b}$. We always use this or analogous notation when decomposing a path in the tangent bundle. \end{ntn} \begin{dfn}\label{DFN.W2_Bd} Let $\mu_{t}:[0,1]\longrightarrow T\mathcal{D}_{b}$ such that $t\longmapsto\tau(\rho_{t}a)$ is absolutely continuous for each $a\in A_{\partial}$. We say that $\mu_{t}$ satisfies the noncommutative continuity equation if \begin{align*} \frac{d}{dt}\tau(\rho_{t}a)=\langle v_{t},a\rangle_{\rho_{t}} \end{align*} \noindent for each $a\in A_{\partial}$ and a.e.~$t\in [0,1]$. \end{dfn} \begin{ntn} We drop the adjective ``noncommutative'' in the future. \end{ntn} We are able to represent any $v\in T_{p}\mathcal{D}_{b}$ in $H$. Given $v$, choose a sequence of $a_{i}\in A_{\partial}$ converging to $v$. From this, we obtain \begin{align*} M_{p}^{\frac{1}{2}}\partial a_{i}\longrightarrow w \end{align*} \noindent in $H$. In the above, $w\in H$ is independent of our choice of $a_{i}$ by definition of the inner product. This defines a bounded linear map from $(T_{p}\mathcal{D}_{b},||.||_{p})$ to $H$, sending $v$ to $w$. It is an isometry by construction. In particular, the image of $T_{p}\mathcal{D}_{b}$ in $H$ is closed. We thereby view each $T_{p}\mathcal{D}_{b}$ as a closed subspace of $H$, and $T\mathcal{D}_{b}$ as a subspace of $\mathcal{D}_{b}\times H$. Using this, we rewrite the continuity equation as \begin{align*} \frac{d}{dt}\tau(\rho_{t}a)=\langle w_{t},M_{\rho_{t}}^{\frac{1}{2}}\partial a\rangle_{H}. 
\end{align*} \begin{ntn} For a given path $\mu_{t}$ satisfying the continuity equation, we consider $v_{t}$ and $w_{t}$ interchangeably from now on. Furthermore, we denote the projection from $H$ to $T_{\rho_{t}}\mathcal{D}_{b}$ by $R_{t}$. \end{ntn} \begin{dfn}\label{DFN.Bd_Adm} Let $p,q\in\mathcal{D}_{b}$. An admissible path from $p$ to $q$ is a $\mu_{t}:[0,1]\longrightarrow T\mathcal{D}_{b}$ such that \begin{itemize} \item[1)] $\mu_{t}$ satisfies the continuity equation, \item[2)] $\rho_{0}=p$ and $\rho_{1}=q$, \item[3)] $t\longmapsto ||v_{t}||_{\rho_{t}}^{2}=||w_{t}||_{H}^{2}\in L^{1}([0,1])$. \end{itemize} \noindent We denote the set of all admissible paths between $p$ and $q$ by $\mathcal{A}(p,q)$. \end{dfn} \noindent Let $\varphi$ be a linear reparametrisation and let $\mu_{t}$ satisfy the continuity equation. We decompose $\mu_{\varphi(t)}$ into $\mu_{\varphi(t)}=(\rho_{\varphi(t)},v_{\varphi(t)}\dot{\varphi }(t))$. Hence precomposition by $t\longmapsto 1-t$ maps admissible paths to admissible paths. The decomposition additionally shows that concatenating two admissible paths, in the canonical topological sense, again yields an admissible path. From this we obtain symmetry, resp.~the triangle-inequality for our distance candidate once we have defined the latter. \begin{dfn}\label{DFN.Bd_L2WDst} We define the energy functional on admissible paths as \begin{align*} E(\mu_{t}):=\frac{1}{2}\int_{0}^{1}||v_{t}||_{\rho_{t}}^{2}dt \end{align*} \noindent and the noncommutative $L^{2}$-Wasserstein distance on bounded densities by \begin{align*} \mathcal{W}_{2}(p,q)=\inf_{\mu_{t}\in\mathcal{A}(p,q)}\sqrt{E(\mu_{t})}. \end{align*} \end{dfn} \begin{ntn} As before, we drop ``noncommutative'' in the above description. \end{ntn} We prove $\mathcal{W}_{2}$ to be a distance. By the discussion just prior to Definition \ref{DFN.Bd_L2WDst} and $E\geq 0$, we only need to check definiteness. 
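In a matrix algebra the multiplication operator can be computed explicitly: with $\tau=\textrm{tr}$ and $H=L^{2}(A,\tau)$ the Hilbert--Schmidt matrices, $M_{x}$ acts in an eigenbasis of $x$ by entrywise multiplication with $M_{lm}(\lambda_{i},\lambda_{j})$. The following numerical sketch (assuming Python with NumPy is available; all names are illustrative) checks this against the integral formula $\int_{0}^{1}x^{\alpha}hx^{1-\alpha}d\alpha$ and against the norm bound of Proposition \ref{PRP.Nrm_Mlt_Op}.

```python
import numpy as np

rng = np.random.default_rng(0)

def logmean(s, t):
    # logarithmic mean M_lm(s, t) = (s - t)/(log s - log t), M_lm(s, s) = s
    return s if np.isclose(s, t) else (s - t) / (np.log(s) - np.log(t))

# random positive invertible x acting on C^3
a = rng.standard_normal((3, 3))
x = a @ a.T + np.eye(3)
lam, u = np.linalg.eigh(x)

def M_x(h):
    # (L_x (x) R_x)(M_lm): entrywise multiplication by M_lm(lam_i, lam_j)
    # in the eigenbasis of x
    k = np.array([[logmean(s, t) for t in lam] for s in lam])
    return u @ (k * (u.T @ h @ u)) @ u.T

def frac_pow(p):
    return u @ np.diag(lam ** p) @ u.T

# M_x(h) agrees with the integral formula (midpoint rule on [0, 1])
h = rng.standard_normal((3, 3))
n = 2000
alphas = (np.arange(n) + 0.5) / n
integral = sum(frac_pow(al) @ h @ frac_pow(1 - al) for al in alphas) / n
assert np.allclose(M_x(h), integral, atol=1e-5)

# ||M_x|| = ||x||: the largest logarithmic mean over spec(x)^2 is the
# largest eigenvalue of x
assert np.isclose(max(logmean(s, t) for s in lam for t in lam),
                  np.linalg.norm(x, 2))
```

The equality case in the last assertion matches the proposition, since the standard representation on Hilbert--Schmidt matrices is faithful.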
To check definiteness, we assume existence of a function $g$ allowing control of $\langle M_{p}\partial a,\partial a\rangle_{H}$ on a sufficiently large subset $S\subset A_{\partial}$. \begin{dfn}\label{DFN.Spfct} Let $S\subset A_{\partial}$ and $g:S\longrightarrow\mathbb{R}_{\geq 0}$ such that for all $p,q\in\mathcal{D}_{b}$, we have \begin{itemize} \item[1)] $\tau(pa)=\tau(qa)$ for each $a\in S$ if and only if $p=q$, \item[2)] $||a||_{p}^{2}\leq g(a)$ for each $a\in S$. \end{itemize} \noindent Then $g$ is called a separating function. \end{dfn} \begin{prp} If there exists a separating function $g$, then $\mathcal{W}_{2}$ is a distance. \end{prp} \begin{proof} We only need to show definiteness. For all admissible paths $\mu_{t}$ and all $a\in S$, we have \begin{align*} \tau((\rho_{1}-\rho_{0})a)&=\int_{0}^{1}\frac{d}{dt}\tau(\rho_{t}a)dt\\ &=\int_{0}^{1}\langle v_{t},a\rangle_{\rho_{t}}dt\\ &\leq \sqrt{2g(a)E(\mu_{t})} \end{align*} \noindent where we used $2)$ in Definition \ref{DFN.Spfct} and the Cauchy--Schwarz inequality for the last estimate. Hence $\mathcal{W}_{2}(p,q)=0$ forces $\tau(pa)=\tau(qa)$ for each $a\in S$, so $1)$ in the same definition yields $p=q$. \end{proof} \begin{ntn} In metric geometry, distances are allowed to attain the value infinity. We use this convention; distances with infinite values are also called extended distances, or extended metrics. \end{ntn} \begin{rem} Our definition is compatible with the commutative case if the underlying metric measure space $(X,d,m)$ satisfies the reduced curvature-dimension condition $CD^{*}(K,N)$, see 2) of Theorem 1.2 in \cite{RajLInfBound}. Other examples are measured-length spaces, defined in \cite{GiHaContMetrBdDens}. We will see in Example \ref{BSP.Unbd_Comm_Case} how to recover the $C^{\infty}$-manifold setting in general after having extended to all densities for particular gradients in Subsection 2.4. \end{rem} \begin{bsp}\label{BSP.Bd_Comm_Case} Let $(X,h)$ be a smooth Riemannian manifold with density $d|\omega|$ and connection $\nabla$. 
Set $A=C_{0}(X)$, $\tau=d|\omega|\otimes\mathbb{C}$, $\partial:=\nabla\otimes\mathbb{C}$ and $H$ to be the space of $L^{2}$-sections of $TX\otimes\mathbb{C}$ w.r.t.~$hd|\omega|$. If $S:=C_{c}^{\infty}(X)$, we have \begin{align*} ||a||_{p}^{2}=\int_{X}ph(\partial a,\partial a)d|\omega|\leq ||h(\partial a,\partial a)||_{\infty} \end{align*} \noindent for each $p\in\mathcal{D}_{b}$ and $a\in S$. Hence $g(a):=||h(\partial a,\partial a)||_{\infty}$ is a separating function. \end{bsp} \begin{bsp}\label{BSP.SF_Fct_I} Let $H$ be separable. Consider $A=\mathcal{K}(H)$ and $\tau=\theta\textrm{tr}$ for fixed $\theta>0$. Then $\theta^{-1}L^{1}(A,\tau)$ equals $\mathcal{S}_{1}(H)$, thus $||x||_{M}\leq\theta^{-1}||x||_{L^{1}(A,\tau)}$ for each $x\in L^{1}(A,\tau)$. For $S=A_{\partial}$, we have \begin{align*} \langle M_{p}\partial a,\partial a\rangle_{H}\leq\theta^{-1}||\partial a||_{H}^{2} \end{align*} \noindent by Proposition \ref{PRP.Nrm_Mlt_Op} and $p\in\mathcal{D}_{b}$. Hence $g(a):=\theta^{-1}||\partial a||_{H}^{2}$ is a separating function. The fourth section deals with a wide generalisation of this example. \end{bsp} \begin{bsp} All symmetric gradients of the type considered in Subsection 2.4 have a canonical separating function, see Proposition \ref{PRP.Unbd_Sep}. \end{bsp} We end this subsection with a lemma useful when discussing vertical gradients. \begin{lem}\label{LEM.Msrbl_Bd} Assume there exists a separating function and let $\mu_{t}$ be an admissible path. If $L^{1}(A,\tau)$ is separable, then $\rho_{t}\in L^{1}([0,1],L^{1}(A,\tau))$. \end{lem} \begin{proof} $A_{\partial}$ lies dense in $A$, hence dense in $M$ w.r.t.~the $w^{*}$-operator topology. We already know $\tau(\rho_{t}a)\in C([0,1])$ for each $a\in A_{\partial}$ by hypothesis. If on the other hand $a_{i}\in A_{\partial}$ converges to $x\in M$ in the $w^{*}$-topology, we know that $\tau(\rho_{t}a_{i})$ converges to $\tau(\rho_{t}x)$. 
Thus $\tau(\rho_{t}x)$ is approximated pointwise by the measurable functions $\tau(\rho_{t}a_{i})$, where $x\in M$ was arbitrary but fixed. Since $L^{1}(A,\tau)$ is separable and $L^{1}(A,\tau)^{*}=M$, Pettis' theorem shows strong measurability of $\rho_{t}$. Thus Bochner-integrability follows from $||\rho_{t}||_{L^{1}(A,\tau)}=1$. \end{proof} \subsection{Finiteness on bounded densities for unital $C^{*}$-algebras} For this subsection, let $\partial$ be a symmetric gradient for $(A,\tau)$ and assume existence of a separating function $g$. We show finiteness of $\mathcal{W}_{2}$ if $A$ is unital, $\partial$ satisfies a Poincar\'e-type inequality and the heat semigroup $P_{t}:=e^{-t\Delta}$ is regularity improving. For the latter, we show ergodicity to be a necessary condition. \begin{dfn} We say that $\partial$ satisfies a Poincar\'e-type inequality if there exists some $C>0$ such that $||a||_{L^{2}(A,\tau)}\leq C||\partial a||_{H}$ for each $a\in(\ker\partial)^{\bot}\cap A_{\partial}$. \end{dfn} \begin{prp}\label{PRP.Pncr_Inq} Let $\partial$ satisfy a Poincar\'e-type inequality. For all $x\in M\cap L_{sa}^{2}(A,\tau)\cap\ker\tau$, there exists an $h\in H$ such that $\tau(xa)=\langle h,\partial a\rangle_{H}$ for each $a\in A_{\partial}$. \end{prp} \begin{proof} This is proved in Theorem 9.2 of \cite{ZaevTopicsNCSpaces} for general $x\in L^{2}(A,\tau)$. \end{proof} To show finiteness, we first prove that a Poincar\'e-type inequality suffices to have finite distance between invertible elements. After this, we use the regularity improving property of the heat semigroup to connect non-invertible elements to invertible ones. Finite energy of these paths will follow from Lemma \ref{LEM.Entrp_Mlt}. \begin{thm}\label{THM.Fin_Dst} If $\partial$ satisfies a Poincar\'e-type inequality, the distance between any two invertible bounded densities is finite. 
If $A$ is unital, $p\in\mathcal{D}_{b}$ and $P_{t}$ regularity improving, the distance between $p$ and $\rho_{t}:=P_{t}(p)$ is finite for each $t\in [0,1]$. \end{thm} \begin{proof} We begin with the first statement. Thus let $p,q$ be invertible bounded densities and set $C:=\min\{\inf\spec (p),\inf\spec (q)\}$. We have $C>0$ because $p$ and $q$ are invertible in $M$. Writing $\rho_{t}:=(1-t)p+tq$, we have $C\langle x,x\rangle_{L^{2}(A,\tau)}\leq \langle\rho_{t}x,x\rangle_{L^{2}(A,\tau)}$ for each $x\in L^{2}(A,\tau)$. Hence $\rho_{t}$ is invertible for each $t\in [0,1]$. As $\partial$ satisfies a Poincar\'e-type inequality, Proposition \ref{PRP.Pncr_Inq} allows us to choose an $h\in H$ such that \begin{align*} \tau((q-p)a)=\langle h,\partial a\rangle_{H}=\langle M_{\rho_{t}}^{-\frac{1}{2}}h,M_{\rho_{t}}^{\frac{1}{2}}\partial a\rangle_{H} \end{align*} \noindent for each $a\in A_{\partial}$. By Proposition \ref{PRP.LR_Rpr}, $ ||M_{\rho_{t}}^{-1}||_{B(H)}\leq ||D\log||_{C([C,||\rho_{t}||_{M}]\times [C,||\rho_{t}||_{M}])}$ with the right-hand term bounded on $[0,1]$ by continuity of $\rho_{t}$. Hence the maps $a\longmapsto \tau((q-p)a)=\tau(\dot{\rho}_{t}a)$ are bounded linear functionals on $T_{\rho_{t}}\mathcal{D}_{b}$ for each $t\in [0,1]$, represented by a unique $v_{t}\in T_{\rho_{t}}\mathcal{D}_{b}$. As an element in $H$, $v_{t}$ is given by \begin{align*} w_{t}=R_{t}(M_{\rho_{t}}^{-\frac{1}{2}}h). \end{align*} \noindent $M_{\rho_{t}}^{-\frac{1}{2}}h$ is continuous in $t$ by the $||.||_{M}$-continuity of $\rho_{t}$, and $R_{t}$ is a projection for each $t\in [0,1]$. Thus $w_{t}$ is strongly measurable in $H$, and $||w_{t}||^{2}$ lies in $L^{1}([0,1])$. Hence $\mu_{t}:=(\rho_{t},v_{t})$ is an admissible path from $p$ to $q$. Since $p$ and $q$ were arbitrary, the first statement follows.\par For the second statement, let $p\in\mathcal{D}_{b}$ and note that we now assume $A$ to be unital. Without loss of generality, we normalise $\tau$ to one. 
By Proposition \ref{PRP.Entrp_Rpr}, $p$ has finite relative entropy. Since $P_{t}$ is regularity improving, $\rho_{t}:=P_{t}(p)$ is an invertible bounded density for each $t\in (0,1]$. To see \begin{align*} -\frac{d}{dt}\tau(\rho_{t}a)=\tau(\Delta\rho_{t}a)=\langle \partial\rho_{t},\partial a\rangle_{H}=\langle M_{\rho_{t}}^{\frac{1}{2}}\partial\log \rho_{t},M_{\rho_{t}}^{\frac{1}{2}}\partial a\rangle_{H} \end{align*} \noindent we expand by $M_{\rho_{t}}^{-1}$ and apply the noncommutative chain rule as in the proof of Lemma \ref{LEM.Entrp_Mlt}. Analogous to the first statement's proof, this induces a bounded linear functional represented by some $v_{t}$, for each $t\in (0,1]$. In $H$, $v_{t}$ is given by \begin{align*} w_{t}=-M_{\rho_{t}}^{\frac{1}{2}}\partial\log \rho_{t}, \end{align*} \noindent which gives a vector field on $(0,1]$. Fr\'echet differentiability of $\rho_{t}$ on $(0,1)$ implies $||.||_{M}$-continuity of $\rho_{t}$, thus $w_{t}$ is strongly measurable in $H$ as $\partial$ is linear. To show $||w_{t}||^{2}\in L^{1}([0,1])$, observe that $||R_{t}||=1$ implies \begin{align*} ||v_{t}||_{\rho_{t}}\leq ||M_{\rho_{t}}^{\frac{1}{2}}\partial\log \rho_{t}||_{H} \end{align*} \noindent for each $t\in (0,1]$. Using this, we estimate \begin{align*} \int_{0}^{1}||v_{t}||_{\rho_{t}}^{2}dt \leq \int_{0}^{1}||M_{\rho_{t}}^{\frac{1}{2}}\partial\log \rho_{t}||_{H}^{2}dt=\textrm{Ent}(p|\tau)-\textrm{Ent}(\rho_{1}|\tau)<\infty. \end{align*} \noindent We applied Lemma \ref{LEM.Entrp_Mlt} for the last equality. It follows that $\mu_{t}:=(\rho_{t},v_{t})$ is an admissible path. \end{proof} \begin{cor} Let $A$ be unital. If $\partial$ satisfies a Poincar\'e-type inequality and has regularity improving heat semigroup, $\mathcal{W}_{2}$ is finite. \end{cor} To end this subsection, we provide a necessary condition for $P_{t}$ to be regularity improving. In this, we lift Simon's original proof \cite{SiPosImpr} to the noncommutative setting. 
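A finite-dimensional commutative sanity check may be helpful here. For the Laplacian of a connected graph, $\ker\Delta$ is spanned by the constant function $1$, and the heat semigroup has strictly positive matrix entries, i.e.~it is positivity improving. The following numerical sketch (assuming Python with NumPy and SciPy; the path graph on four vertices is an arbitrary choice) illustrates this.

```python
import numpy as np
from scipy.linalg import expm

# path graph on 4 vertices: A = C(X) commutative, Delta = graph Laplacian
n = 4
adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
lap = np.diag(adj.sum(axis=1)) - adj

# ker(Delta) is spanned by the constant vector, i.e. 1 is a simple eigenvector
evals = np.linalg.eigvalsh(lap)
assert np.isclose(evals[0], 0.0) and evals[1] > 1e-10

# the heat semigroup e^{-t Delta} is positivity improving: the density
# concentrated at a single vertex becomes strictly positive for t > 0
p = np.zeros(n)
p[0] = 1.0
rho = expm(-0.1 * lap) @ p
assert (rho > 0).all()
# mass is preserved, as the constant vector lies in ker(Delta)
assert np.isclose(rho.sum(), 1.0)
```

Connectedness of the graph plays the role of ergodicity; for a disconnected graph, the kernel of the Laplacian has higher dimension and the first assertion fails.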
We make use of notations and results immediately leading up to and found on pp.~204--205 in \cite{CiDrchltFrmsNCS}. \begin{lem}\label{LEM.L2_Pst_Prj} If $x\in L^{2}(A,\tau)$ is self-adjoint, then $\max\{x,0\}$ is given by the metric projection $x_{+}$ of $x$ onto the self-polar cone $L_{+}^{2}(A,\tau)$ in $L^{2}(A,\tau)$. \end{lem} \begin{proof} We know $\max\{x,0\}\in L_{+}^{2}(A,\tau)$ by construction of the positive elements. A general metric projection $P_{C}$ onto a closed convex set $C$ in a Hilbert space is characterised uniquely by satisfying $\textrm{Re}\langle x-P_{C}(x),y-P_{C}(x)\rangle_{H}\leq 0$ for each $y\in C$. A calculation in our setting using any $y\geq 0$ yields \begin{align*} \tau((x-\max\{x,0\})(y-\max\{x,0\}))&=\tau((-\min\{x,0\})^{\frac{1}{2}}(\max\{x,0\}-y)(-\min\{x,0\})^{\frac{1}{2}})\\ &\leq \tau((-\min\{x,0\})^{\frac{1}{2}}\max\{x,0\}(-\min\{x,0\})^{\frac{1}{2}})\\ &=-\tau(\min\{x,0\}\max\{x,0\})\\ &=0. \end{align*} \end{proof} \begin{rem} Using the above characterisation of the metric projection to show the result was pointed out to the author by Wirth in a personal communication, as a derivative of a lemma in future work of his. \end{rem} \noindent We turn to a second lemma, closely modelled on Lemma 3.4 of Simon's proof. \begin{lem}\label{LEM.Sim_NC} Let $T$ be a positivity preserving operator on $L^{2}(A,\tau)$. If $x,y\in L_{+}^{2}(A,\tau)$ with $\langle x,y\rangle_{L^{2}(A,\tau)}\neq 0$, then $\langle Tx,Ty\rangle_{L^{2}(A,\tau)}\neq 0$. \end{lem} \begin{proof} $L^{2}(A,\tau)=L^{2}(M,\tau)$ is a standard form of $M$ with cyclic vector $1_{A}$. We are thus able to use analogues of the pointwise supremum and infimum operations. Assume $x\wedge y=0$, where $x\wedge y$ is our analogue of the infimum of $x$ and $y$.\par Point five of Lemma 2.50 in \cite{CiDrchltFrmsNCS} yields $x+y=|x-y|$. The fifth and third points of the same lemma together give $|x-y|=y+(x-y)_{+}$. 
From this, $x+y=y+(x-y)_{+}$ follows by Lemma \ref{LEM.L2_Pst_Prj} above. All in all, $x=\max\{x-y,0\}\in L_{+}^{2}(A,\tau)$ holds. Invoking Lemma 2.50 one last time, we have $y=-\min\{x-y,0\}$ and thus $xy=yx=0$, i.e.~$\langle x,y\rangle_{L^{2}(A,\tau)}=0$. By contraposition, $x\wedge y\neq 0$ holds for all $x,y\geq 0$ with $\langle x,y\rangle_{L^{2}(A,\tau)}\neq 0$. By definition, $x\wedge y\leq x,y$ holds for positive $x$ and $y$. From here on, we proceed nearly verbatim as Simon does in the first lemma of \cite{SiPosImpr}. We only need to replace the minimum of $x$ and $y$ by $x\wedge y$. \end{proof} \begin{dfn} A positive semigroup $e^{tL}$ is ergodic if for all $x,y\in L^{2}_{+}(A,\tau)$ with $x,y\neq 0$, there exists a $t>0$ such that $\tau(xe^{tL}y)>0$. A semigroup $e^{tL}$ on $L^{2}(A,\tau)$ is called positivity improving if $e^{tL}x$ has strictly positive spectrum for each non-zero $x\in L_{+}^{2}(A,\tau)$ and each $t\in (0,\infty)$. \end{dfn} \begin{rem} An operator $T$ has strictly positive spectrum if $\spec (T)\subset\mathbb{R}_{>0}$. This does not imply existence of a uniform lower bound. We follow the commutative terminology in this, where a function $f$ is strictly positive if $f>0$ almost everywhere. \end{rem} \begin{thm}\label{THM.Erg} If $e^{tL}$ is a semigroup of self-adjoint, positivity preserving operators on $L^{2}(A,\tau)$, it is positivity improving if and only if it is ergodic. \end{thm} \begin{proof} After replacing Simon's first lemma with Lemma \ref{LEM.Sim_NC}, the proof is verbatim that of Theorem 1 in \cite{SiPosImpr}. \end{proof} In \cite{CiDrchltFrmsNCS}, Cipriani provides necessary and sufficient conditions for ergodicity of a semigroup. If $A$ is unital, Corollary 2.48 in \cite{CiDrchltFrmsNCS} implies the heat semigroup to be ergodic if and only if $1_{A}$ is a \textit{simple} eigenvector of $\Delta$. \begin{cor}\label{COR.Erg} If $A$ is unital, then $P_{t}$ is positivity improving if and only if $1_{A}$ is a simple eigenvector of $\Delta$. 
\end{cor} \begin{proof} In the notation of \cite{CiDrchltFrmsNCS}, $(M,L^{2}(A,\tau),L_{+}^{2}(A,\tau),^{*})$ is a standard form of $M$ with cyclic vector $1_{A}$. Applying the equivalence between the first and third statement of Corollary 2.48 in \cite{CiDrchltFrmsNCS}, as well as Theorem \ref{THM.Erg} above, we obtain the statement. \end{proof} \begin{bsp}\label{BSP.FinDim_RgImprSG} Let $A$ be a finite-dimensional $C^{*}$-algebra. If $\partial$ has one-dimensional kernel, so does $\Delta$. Moreover, $\partial$ satisfies a Poincar\'e-type inequality since $\Delta$ becomes a positive invertible operator on the orthogonal complement of the kernel. Then Corollary \ref{COR.Erg} implies $P_{t}$ to be positivity improving. Since all relevant operator topologies are equivalent and strictly positive spectrum implies invertibility in finite dimensions, $P_{t}$ is regularity improving. \end{bsp} \subsection{Extending to unbounded densities} So far, we required densities to be bounded. We now extend $\mathcal{W}_{2}$ to all densities. To do so, we impose conditions on the domain and codomain of the gradient. In particular, we consider multiplication operators given by \begin{align*} M_{p}(x)=\int_{0}^{1}p^{\alpha}xp^{1-\alpha}d\alpha \end{align*} \noindent on summands. For this, $\partial$ will have to take values in some $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$ equipped with the canonical symmetric $L^{\infty}(A,\tau)$-bimodule structure and Hilbert space norm. All $L^{\infty}(A,\tau)$-subbimodules $H\subset\bigoplus_{k=1}^{m}L^{2}(A,\tau)$ are assumed to be closed subspaces throughout the paper. Furthermore, we assume $L^{1}(A,\tau)$ and $L^{2}(A,\tau)$ to be separable in this subsection. \begin{dfn}\label{DFN.Can_Set} If $H\subset\bigoplus_{k=1}^{m}L^{2}(A,\tau)$ is an $L^{\infty}(A,\tau)$-bimodule closed under taking adjoints, we call $H$ a symmetric Hilbert $L^{\infty}(A,\tau)$-subbimodule. 
\end{dfn} \begin{rem} Note the important assumptions made at the beginning of this subsection. \end{rem} \noindent For the remainder of the subsection, let $\partial$ map into a symmetric Hilbert $L^{\infty}(A,\tau)$-subbimodule $H$. Morally, we view $H$ as a module of $L^{2}$-sections embedded in the $L^{2}$-sections of the trivial $m$-bundle over the space $A$ models. We do not assume $H$ to be a finitely generated, projective module. \begin{ntn}\label{NTN.Dcp_Grd_Std} As $\partial$ maps into $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$, we view each $\partial_{k}$ as a symmetric gradient in itself. \end{ntn} We will have to replace $A_{\partial}$ by a more suitable $^{*}$-subalgebra $\mathfrak{A}$. One can think of $\mathfrak{A}$ as playing a r\^ole similar to that of smooth functions with compact support, though the analogy should not be taken too strictly. \begin{dfn} Let $\mathfrak{A}\subset A_{\partial}$ be a dense $^{*}$-subalgebra of $A$ such that it is again a core for $\partial$. If furthermore $\partial a\in\bigoplus_{k=1}^{m}\big(L^{2}(A,\tau)\cap L^{\infty}(A,\tau)\big)$ for each $a\in\mathfrak{A}$, we call $\mathfrak{A}$ an extension algebra. \end{dfn} \begin{rem} In the above definition, $L^{2}(A,\tau)\cap L^{\infty}(A,\tau)$ is viewed as an $L^{\infty}(A,\tau)$-bimodule in the algebraic sense. No topology is being considered. \end{rem} \begin{bsp}\label{BSP.Unbd_Comm_Case} In Example \ref{BSP.Bd_Comm_Case}, let $X$ be embedded isometrically into some $\mathbb{R}^{m}$. Then $TX\otimes\mathbb{C}\subset X\times\mathbb{C}^{m}$ and we set $\mathfrak{A}:=C_{c}^{\infty}(X)$ as an extension algebra. Thus we capture the smooth Riemannian setting with our formalism. \end{bsp} \begin{bsp} By Definition \ref{DFN.Vrt_Grd}, $C_{c}(X)\odot\FinRk(H)$ is an extension algebra for each vertical gradient. 
\end{bsp} \begin{bsp} Let $(A,\mathbb{R},\alpha_{t})$ be a $C^{*}$-dynamical system such that the $^{*}$-algebra \begin{align*} \mathfrak{A}:=\{x\in A\ |\ \partial(x):=\frac{d}{dt}_{|t=0}\alpha_{t}(x)\in A\cap L^{2}(A,\tau)\} \end{align*} \noindent lies dense in $A$ and is a core for $\partial$. If $A$ is unital and $\alpha_{t}(x)$ Fr\'echet differentiable at the origin for each $x\in A$, $\mathfrak{A}=A$ is an extension algebra. \end{bsp} \begin{lem}\label{LEM.Ext_Mlt} For all $p\in L_{+}^{1}(A,\tau)$ and all $x\in L^{\infty}(A,\tau)$, we have \begin{align*} ||p^{\alpha}xp^{1-\alpha}||_{L^{1}(A,\tau)}\leq ||p||_{L^{1}(A,\tau)}||x||_{L^{\infty}(A,\tau)} \end{align*} \noindent and $\alpha\longmapsto p^{\alpha}xp^{1-\alpha}$ lies in $L^{1}([0,1],L^{1}(A,\tau))$. \end{lem} \begin{proof} We show $p^{\alpha}xp^{1-\alpha}\in L^{1}(A,\tau)$ by applying the generalised H\"older inequality twice. Since $\alpha\in [0,1]$, we know $\alpha^{-1},(1-\alpha)^{-1}\in [1,\infty]$. Using H\"older for $1=\alpha+(1-\alpha)$ and $\alpha=\alpha+0$, we obtain \begin{align*} ||p^{\alpha}xp^{1-\alpha}||_{1}&\leq ||p^{\alpha}x||_{\alpha^{-1}}||p^{1-\alpha}||_{(1-\alpha)^{-1}}\\ &\leq ||p^{\alpha}||_{\alpha^{-1}}||x||_{\infty}||p^{1-\alpha}||_{(1-\alpha)^{-1}}\\ &=\tau(p)^{\alpha}||x||_{\infty}\tau(p)^{1-\alpha}\\ &=||p||_{L^{1}(A,\tau)}||x||_{\infty}. \end{align*} Once we know $\alpha\longmapsto p^{\alpha}xp^{1-\alpha}$ to be strongly measurable, the above yields Bochner-integrability. Measurability is clear if $p$ is bounded. Choose a strictly monotonically increasing sequence of $C_{i}\geq 1$ diverging to infinity, and set $p_{i}:=\min\{p,C_{i}\}$. Arguing by functional calculus shows $p_{i}^{\alpha}$ to approximate $p^{\alpha}$ in $L^{\alpha^{-1}}(A,\tau)$ for each $\alpha\in (0,1]$.\par We claim that $p_{i}^{\alpha}xp_{i}^{1-\alpha}$ $||.||_{L^{1}(A,\tau)}$-converges to $p^{\alpha}xp^{1-\alpha}$ for each fixed $\alpha\in [0,1]$. 
To see this, we have to show convergence of $p^{\alpha}xp^{1-\alpha}-p_{i}^{\alpha}xp_{i}^{1-\alpha}$ to zero. We do so by using the triangle inequality and then applying H\"older as above to $p^{\alpha}x(p^{1-\alpha}-p_{i}^{1-\alpha})$, resp. $(p^{\alpha}-p_{i}^{\alpha})xp_{i}^{1-\alpha}$. Hence our map is a pointwise limit of strongly measurable ones, therefore strongly measurable itself. \end{proof} \begin{dfn} For all $p\in\mathcal{D}$ and $x\in\bigoplus_{k=1}^{m}L^{\infty}(A,\tau)$, we set \begin{align*} M_{p}(x):=\Big(\int_{0}^{1}p^{\alpha}x_{k}p^{1-\alpha}d\alpha\Big)_{k=1}^{m}\in\bigoplus_{k=1}^{m}L^{1}(A,\tau). \end{align*} \end{dfn} \begin{prp}\label{PRP.Mlt_Ctrc_Apprx} For each $p\in\mathcal{D}$, the linear operator \begin{align*}M_{p}:\bigoplus_{k=1}^{m}L^{\infty}(A,\tau)\longrightarrow\bigoplus_{k=1}^{m}L^{1}(A,\tau) \end{align*} \noindent is a contraction, and for all $x\in\bigoplus_{k=1}^{m}L^{\infty}(A,\tau)$, $M_{p_{i}}(x)$ $||.||_{L^{1}(A,\tau)}$-converges to $M_{p}(x)$ if $(p_{i})_{i\in\mathbb{N}}\subset L_{+}^{\infty}(A,\tau)$ is defined as in the proof of Lemma \ref{LEM.Ext_Mlt}. \end{prp} \begin{proof} This immediately follows from Lemma \ref{LEM.Ext_Mlt} above, resp. the last part of its proof. \end{proof} Assuming existence of an extension algebra $\mathfrak{A}$, we define the norm of a tangent space in analogy to the bounded case. Indeed, each summand of $\partial a$ lies in $L^{\infty}(A,\tau)$. Hence we are able to apply $M_{p}$ by Proposition \ref{PRP.Mlt_Ctrc_Apprx}. Furthermore, we have \begin{align*} \sum_{k=1}^{m}\tau((M_{p}\partial a)_{k}^{*}y_{k})=\int_{0}^{1}\sum_{k=1}^{m}\tau(p^{1-\alpha}(\partial a)_{k}^{*}p^{\alpha}y_{k})d\alpha \end{align*} \noindent for each $y\in\bigoplus_{k=1}^{m}L^{\infty}(A,\tau)$ by boundedness of $\tau$ on $L^{1}(A,\tau)$, as well as continuity of multiplication by $y$ from the right viewed as a linear operator on $L^{1}(A,\tau)$. For bounded $p$ and $y=\partial a$, we recover $||a||_{p}$ by construction. 
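The trace-norm bound of Lemma \ref{LEM.Ext_Mlt} is easy to test numerically in a matrix algebra. The following sketch (assuming Python with NumPy; matrix sizes and sample points are arbitrary choices) checks $||p^{\alpha}xp^{1-\alpha}||_{1}\leq\tau(p)\hspace{0.05cm}||x||_{\infty}$ for a random density $p$ and several values of $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_norm(m):
    # ||m||_1 = sum of singular values
    return np.linalg.svd(m, compute_uv=False).sum()

# random density p (positive, trace one) and bounded x on C^4
b = rng.standard_normal((4, 4))
p = b @ b.T
p /= np.trace(p)
x = rng.standard_normal((4, 4))
x_inf = np.linalg.norm(x, 2)        # operator norm ||x||_inf

lam, u = np.linalg.eigh(p)
frac_pow = lambda a: u @ np.diag(lam ** a) @ u.T

# Hoelder bound ||p^a x p^(1-a)||_1 <= tr(p) ||x||_inf for a in [0, 1]
for alpha in np.linspace(0.0, 1.0, 11):
    val = trace_norm(frac_pow(alpha) @ x @ frac_pow(1.0 - alpha))
    assert val <= np.trace(p) * x_inf + 1e-9
```

The two endpoint cases $\alpha=0$ and $\alpha=1$ reduce to the standard estimate $||px||_{1}\leq ||p||_{1}||x||_{\infty}$; the interior values exercise the double H\"older application in the proof.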
Approximation by $M_{p_{i}}$ shows the formula above to define a positive semi-definite sesquilinear form on $\mathfrak{A}$. \begin{dfn} Let $\mathfrak{A}$ be an extension algebra. For all $p\in\mathcal{D}$ and $a,b\in\mathfrak{A}$, we define \begin{align*} \langle a,b\rangle_{p}:=\int_{0}^{1}\sum_{k=1}^{m}\tau(p^{1-\alpha}(\partial a)_{k}^{*}p^{\alpha}(\partial b)_{k})d\alpha \end{align*} \noindent and let $T_{p}\mathcal{D}$ be the Hausdorff completion of $\mathfrak{A}$ w.r.t.~$\langle \ ,\hspace{0.05cm} \rangle_{p}$. The tangent bundle is defined as before by $T\mathcal{D}:=\underset{p\in\mathcal{D}}{\coprod}\ \{p\}\times T_{p}\mathcal{D}$. \end{dfn} \begin{rem} Each $T_{p}\mathcal{D}$ is a Hilbert space by construction. \end{rem} We extend the rest of our relevant notions, beginning with admissible paths on all densities. Compatibility with the bounded case has to be proved since we replace $A_{\partial}$ by a potentially smaller $^{*}$-subalgebra $\mathfrak{A}$. \begin{dfn} Let $\mu_{t}:[0,1]\longrightarrow T\mathcal{D}$ be such that $t\longmapsto\tau(\rho_{t}a)$ is absolutely continuous for each $a\in\mathfrak{A}$. We say that $\mu_{t}$ satisfies the (noncommutative) continuity equation if \begin{align*} \frac{d}{dt}\tau(\rho_{t}a)=\langle v_{t},a\rangle_{\rho_{t}} \end{align*} \noindent for each $a\in\mathfrak{A}$ and a.e.~$t\in [0,1]$. \end{dfn} \begin{dfn}\label{DFN.Unbd_Adm} Let $p,q\in\mathcal{D}$. An admissible path from $p$ to $q$ is a $\mu_{t}:[0,1]\longrightarrow T\mathcal{D}$ such that \begin{itemize} \item[1)] $\mu_{t}$ satisfies the continuity equation, \item[2)] $\rho_{0}=p$ and $\rho_{1}=q$, \item[3)] $t\longmapsto ||v_{t}||_{\rho_{t}}^{2}\in L^{1}([0,1])$. \end{itemize} \noindent We denote the set of all admissible paths between $p$ and $q$ by $\mathcal{A}(p,q)$.
\end{dfn} \begin{prp}\label{PRP.Ext_Cmptbl} If $\mu_{t}:[0,1]\longrightarrow T\mathcal{D}_{b}$, then $\mu_{t}$ is an admissible path w.r.t.~Definition \ref{DFN.Bd_Adm} if and only if it is one w.r.t.~Definition \ref{DFN.Unbd_Adm}. \end{prp} \begin{proof} If $\rho_{t}$ is bounded, $M_{\rho_{t}}$ reduces to the multiplication operator of the bounded case. Thus density of $\mathfrak{A}\subset A_{\partial}$ w.r.t.~$||.||_{\partial}$, which we have by $\mathfrak{A}$ being a core, implies both constructions of $||.||_{p}$ to yield the same tangent space at $\rho_{t}$. We are left to show that absolute continuity w.r.t.~$\mathfrak{A}$ implies absolute continuity w.r.t.~$A_{\partial}$ as well. This follows from $\tau(p)=1$ and $\mathfrak{A}\subset A$ being dense. \end{proof} \noindent We copy Definition \ref{DFN.Bd_L2WDst} verbatim to define the $L^{2}$-Wasserstein distance on $\mathcal{D}$ associated to $(A,\tau,\partial)$ and $\mathfrak{A}$, using the wider class of admissible paths defined just above. Proposition \ref{PRP.Ext_Cmptbl} shows this to be compatible with our previous definition on bounded densities. \begin{ntn} We denote the $L^{2}$-Wasserstein distance on $\mathcal{D}$ obtained from the above extension procedure by $\mathcal{W}_{2}$ in analogy to the bounded case. \end{ntn} \begin{prp}\label{PRP.Unbd_Sep} $\mathcal{W}_{2}$ defines a distance on $\mathcal{D}$. \end{prp} \begin{proof} Setting $B:=\mathfrak{A}$ and $g(a):=||\partial a||_{\infty}$, we obtain a separating function as in the bounded case by density of $\mathfrak{A}$ and the second statement of Lemma \ref{LEM.Ext_Mlt}. Definiteness follows exactly as in the bounded case. \end{proof} \section{Symmetric gradients for $(\mathcal{K}(H),\textnormal{tr})$} We discuss symmetric gradients for $(\mathcal{K}(H),\textrm{tr})$ in preparation for the fourth section, in particular fibre gradients.
Significance of Theorem \ref{THM.Disint} is ensured by showing continuous dependence of minimisers on start- and endpoints if $H$ is finite-dimensional. This includes existence of minimisers in finite dimensions. \subsection{Existence of minimisers for finite-dimensional $H$} In this subsection, we assume $\partial$ to be a symmetric gradient for $(M_{n}(\mathbb{C}),\textrm{tr})$ with $n\in\mathbb{N}$ arbitrary. Without loss of generality, we assume $\partial$ to map into a finite-dimensional space. The first step is to show finiteness of $\mathcal{W}_{2}$. Example \ref{BSP.FinDim_RgImprSG} shows this to be true if $\partial$ has one-dimensional kernel. For the general case, we introduce invertible operators $S_{p}$ associated to each $p\in\mathcal{D}_{b}$. This will allow us to write $\dot{\rho}_{t}=S_{\rho_{t}}v_{t}$ for each admissible path. Continuous dependence of $S_{p}$ on $p$ will imply $\rho_{t}:=(1-t)p+tq$ to be an admissible path even between non-invertible densities.\par Choose $p\in\mathcal{D}_{b}$ and decompose \begin{align*} M_{n}(\mathbb{C})=T_{p}\mathcal{D}_{b}\bigoplus\ker M_{p}^{\frac{1}{2}}\partial \end{align*} \noindent orthogonally. By finite-dimensionality, we avoid a completion procedure when constructing the tangent space. We have $\im (\partial^{*}M_{p}\partial)_{|T_{p}\mathcal{D}_{b}}\subset T_{p}\mathcal{D}_{b}$, where $(\partial^{*}M_{p}\partial)_{|T_{p}\mathcal{D}_{b}}$ is injective and positive by construction of the tangent space. Hence $(\partial^{*}M_{p}\partial)_{|T_{p}\mathcal{D}_{b}}$ is an invertible operator on $T_{p}\mathcal{D}_{b}$ . Let $R_{p}$ be the projection onto $T_{p}\mathcal{D}_{b}$ in $M_{n}(\mathbb{C})$. We construct an operator on $M_{n}(\mathbb{C})$ by \begin{align*} S_{p}:=(\partial^{*}M_{p}\partial)_{|T_{p}\mathcal{D}_{b}}R_{p}\oplus (1_{M_{n}(\mathbb{C})}-R_{p}). \end{align*} \noindent $S_{p}$ is invertible by construction and depends continuously on its base point since $M_{p}$ and $R_{p}$ do. 
The continuity equation and $v_{t}\in T_{\rho_{t}}\mathcal{D}_{b}$, as well as boundedness of $\partial$, imply $\dot{\rho}_{t}=(\partial^{*}M_{\rho_{t}}\partial)_{|T_{\rho_{t}}\mathcal{D}_{b}}v_{t}$ for any admissible path. We summarise our construction in a lemma. \begin{lem} For all $p\in\mathcal{D}_{b}$, there exists a positive invertible operator $S_{p}\in \mathcal{B}(M_{n}(\mathbb{C}))$ depending continuously on $p$ such that $\dot{\rho}_{t}=S_{\rho_{t}}(v_{t})$ for each $\mu_{t}=(\rho_{t},v_{t})\in\mathcal{A}(p,q)$. \end{lem} For all $p\in\mathcal{D}_{b}$ and all $x,y\in M_{n}(\mathbb{C})$, we consider $S_{p}^{-1}(x-y)$. This expression is jointly continuous in all three variables. Thus if $x$ and $y$ lie in a bounded set $K$, then $||S_{p}^{-1}(x-y)||$ is bounded on $\mathcal{D}_{b}\times K\times K$ by continuity. \begin{prp}\label{PRP.FinDim_WkMetr} If $p,q\in\mathcal{D}_{b}$, then $((1-t)p+tq,S_{(1-t)p+tq}^{-1}(q-p))\in\mathcal{A}(p,q)$. Furthermore, $\mathcal{W}_{2}$ has finite diameter and metrises the $w^{*}$-topology on $\mathcal{D}_{b}$. \end{prp} \begin{proof} The path $\rho_{t}:=(1-t)p+tq$ is continuously differentiable with $\dot{\rho}_{t}=q-p=S_{\rho_{t}}(S_{\rho_{t}}^{-1}(q-p))$. Moreover, we have $||v_{t}||_{\rho_{t}}=||S_{\rho_{t}}^{-1}(q-p)||_{H}<C$ for some $C>0$ independent of $p,q$, by continuity and $\mathcal{D}_{b}\subset M_{n}(\mathbb{C})$ being bounded. Thus $(\rho_{t},v_{t})\in\mathcal{A}(p,q)$, and therefore $\mathcal{W}_{2}$ is finite.\par It is immediate that convergence in $\mathcal{W}_{2}$ implies convergence in the weak topology. For the converse, we use the uniform bound $C>0$ above and continuous dependence on $p$. This allows use of dominated convergence to obtain \begin{align*} \lim_{i}E((1-t)p_{i}+tp)=\lim_{i}\frac{1}{2}\int_{0}^{1}||S_{(1-t)p_{i}+tp}^{-1}(p-p_{i})||_{H}^{2}dt=0 \end{align*} \noindent for each sequence $p_{i}\in\mathcal{D}_{b}$ weakly convergent to $p$.
\end{proof} \begin{prp}\label{PRP.FinDim_Min} For all $p,q\in\mathcal{D}_{b}$, there exists a $\mu_{t}\in\mathcal{A}(p,q)$ such that $\mathcal{W}_{2}(p,q)=\sqrt{E(\mu_{t})}$. \end{prp} \begin{proof} $\mathcal{A}(p,q)\neq\emptyset$ for all $p,q\in\mathcal{D}_{b}$ by Proposition \ref{PRP.FinDim_WkMetr}. Let $\mu_{t}^{i}\in\mathcal{A}(p,q)$ be a sequence such that $\sqrt{E(\mu_{t}^{i})}$ strictly decreases to $\mathcal{W}_{2}(p,q)$. $(E(\mu_{t}^{i}))_{i\in\mathbb{N}}$ is bounded, and we select a weakly convergent subsequence of $w_{t}^{i}:=M_{\rho_{t}^{i}}^{\frac{1}{2}}v_{t}^{i}\in L^{2}([0,1],\mathcal{H})$ by Banach-Alaoglu. By compactness of $\mathcal{D}_{b}$ in the $w^{*}$-topology, absolute continuity of $\rho_{t}^{i}$, and again boundedness of $(E(\mu_{t}^{i}))_{i\in\mathbb{N}}$, we choose a subsequence $\rho_{t}^{i}$ that $w^{*}$-converges uniformly to a path $\rho_{t}\in\mathcal{D}_{b}$ between $p$ and $q$ using Arzel\'a-Ascoli.\par Finite-dimensionality of $H$ implies uniform convergence of $\rho_{t}^{i}$ to $\rho_{t}$ in norm. This implies \begin{align*} \lim_{i}||M_{\rho_{t}^{i}}^{\frac{1}{2}}-M_{\rho_{t}}^{\frac{1}{2}}||_{\mathcal{B}(\mathcal{H})}=0 \end{align*} \noindent for each $t\in [0,1]$. Since all $\rho_{t}^{i},\rho_{t}$ are densities, Proposition \ref{PRP.Nrm_Mlt_Op} shows $||M_{\rho_{t}^{i}}^{\frac{1}{2}}-M_{\rho_{t}}^{\frac{1}{2}}||_{\mathcal{B}(\mathcal{H})}\leq 2$ and we apply dominated convergence to obtain \begin{align*} \lim_{i}\Big|\int_{0}^{t}||(M_{\rho_{s}^{i}}^{\frac{1}{2}}-M_{\rho_{s}}^{\frac{1}{2}})\partial a||_{\mathcal{H}}^{2}ds\Big|=0 \end{align*} \noindent for each $a\in M_{n}(\mathbb{C})$. Using this and that $E(\mu_{t}^{i})$ is strictly decreasing, we calculate \begin{align*} \lim_{i}\Big|\int_{0}^{t}\langle w_{s}^{i},(M_{\rho_{s}^{i}}^{\frac{1}{2}}-M_{\rho_{s}}^{\frac{1}{2}})\partial a\rangle_{\mathcal{H}}ds\Big|\leq \sqrt{2E(\mu_{t}^{0})}\lim_{i}\Big(\int_{0}^{t}||(M_{\rho_{s}^{i}}^{\frac{1}{2}}-M_{\rho_{s}}^{\frac{1}{2}})\partial a||_{\mathcal{H}}^{2}ds\Big)^{\frac{1}{2}}=0.
\end{align*} \noindent This proves \begin{align*} \lim_{i}\int_{0}^{t}\langle w_{s}^{i},M_{\rho_{s}^{i}}^{\frac{1}{2}}\partial a\rangle_{\mathcal{H}}ds=\int_{0}^{t}\langle w_{s},M_{\rho_{s}}^{\frac{1}{2}}\partial a\rangle_{\mathcal{H}}ds \end{align*} \noindent for each $a\in M_{n}(\mathbb{C})$. Here, $w_{t}$ is the weak limit of $w_{t}^{i}$ in $L^{2}([0,1],\mathcal{H})$. All of this implies \begin{align*} \textrm{tr}(\rho_{t}a)=\lim_{i}\textrm{tr}(\rho_{t}^{i}a)=\lim_{i}\Big(\int_{0}^{t}\langle w_{s}^{i},M_{\rho_{s}^{i}}^{\frac{1}{2}}\partial a\rangle_{\mathcal{H}}ds\Big)+\textrm{tr}(pa)=\Big(\int_{0}^{t}\langle w_{s},M_{\rho_{s}}^{\frac{1}{2}}\partial a\rangle_{\mathcal{H}}ds\Big)+\textrm{tr}(pa) \end{align*} \noindent for each $a\in M_{n}(\mathbb{C})$. Thus $\mu_{t}:=(\rho_{t},w_{t})\in\mathcal{A}(p,q)$. Moreover, lower semi-continuity of $||.||_{L^{2}([0,1],\mathcal{H})}$ coupled with weak convergence of $w_{t}^{i}$ to $w_{t}$ yields $E(\mu_{t})\leq\liminf E(\mu_{t}^{i})=\mathcal{W}_{2}^{2}(p,q)$. Hence $\mu_{t}$ is a minimiser. \end{proof} \subsection{Fibre gradients and mass preservation} In order to have mass preservation along fibres when dealing with vertical gradients, we require the latter to decompose into symmetric gradients for $(\mathcal{K}(H),\textrm{tr})$ with additional properties. The notion of fibre gradient encompasses precisely these properties. Proposition \ref{PRP.Mass_Prsv} is the result we need to show mass preservation in the fourth section. \begin{dfn}\label{DFN.Fbr_Grd} A symmetric gradient $\partial$ for $(\mathcal{K}(H),\textrm{tr})$ mapping to $\mathcal{S}_{2}(H)$ is a fibre gradient if \begin{itemize} \item[1)] $\mathcal{S}_{1}(H)\subset D(\partial)$ and $\FinRk(H)$ is a core, \item[2)] $\partial(\mathcal{S}_{1}(H))\subset\mathcal{S}_{1}(H)$, \item[3)] $\partial^{*}=-\partial$, \item[4)] $\partial$ extends to a bounded operator on $\mathcal{K}(H)$. \end{itemize} \end{dfn} \begin{rem}\label{REM.Curveball} In our setting, $\FinRk(H)$ being a core implies that it is an extension algebra.
This will be relevant in the fourth section exactly once [Link!]. \end{rem} \begin{bsp} For all $T\in\mathcal{B}(H)_{h}$, $i\textrm{Ad}_{T}$ is a fibre gradient. In particular, all symmetric gradients are fibre gradients if $H$ is finite-dimensional. \end{bsp} \begin{prp}\label{PRP.Bd_S1toS1} If $\partial\in\mathcal{B}(\mathcal{S}_{2}(H))$ is a symmetric gradient for $(\mathcal{K}(H),\textrm{tr})$, then $\partial(\mathcal{S}_{1}(H))\subset\mathcal{S}_{1}(H)$. \end{prp} \begin{proof} Without loss of generality, we assume $x\in\mathcal{S}_{1}(H)_{h}$ by symmetry of $\partial$. Any element of $\mathcal{S}_{1}(H)_{h}$ can be split into positive and negative parts again lying in $\mathcal{S}_{1}(H)_{h}$. We therefore reduce to the case of positive $x\in\mathcal{S}_{1}(H)$ and write $x=y^{2}$ for a $y\in\mathcal{S}_{2}(H)_{+}$. By construction, $||x||_{\mathcal{S}_{1}(H)}=||y||_{\mathcal{S}_{2}(H)}^{2}$. For all $z\in\mathcal{S}_{2}(H)$, we have \begin{align*} |\textrm{tr}(\partial x z)|&\leq |\textrm{tr}(y\partial y z)|+|\textrm{tr}(\partial y yz)|\\ &=|\langle\partial y,zy\rangle_{\mathcal{S}_{2}(H)}|+|\langle\partial y,yz\rangle_{\mathcal{S}_{2}(H)}|\\ &\leq ||\partial y||_{\mathcal{S}_{2}(H)}\big(\sqrt{\textrm{tr}(xz^{*}z)}+\sqrt{\textrm{tr}(xzz^{*})}\ \big)\\ &\leq ||\partial||_{\mathcal{B}(\mathcal{S}_{2}(H))}\sqrt{||x||_{\mathcal{S}_{1}(H)}}\cdot 2\sqrt{||x||_{\mathcal{S}_{1}(H)}}\ ||z||_{\mathcal{K}(H)}\\ &=2||\partial||_{\mathcal{B}(\mathcal{S}_{2}(H))}||x||_{\mathcal{S}_{1}(H)}||z||_{\mathcal{K}(H)}. \end{align*} \noindent Since $\mathcal{S}_{2}(H)\subset\mathcal{K}(H)$ densely and $\mathcal{K}(H)^{*}=\mathcal{S}_{1}(H)$, $\partial x\in\mathcal{S}_{1}(H)$. \end{proof} \begin{rem}\label{REM.Mass_Prsv} If $\partial\in\mathcal{B}(\mathcal{S}_{2}(H))$ is a bounded symmetric gradient for $(\mathcal{K}(H),\textrm{tr})$, then $1)$ in Definition \ref{DFN.Fbr_Grd} is satisfied by hypothesis and $2)$ is satisfied by Proposition \ref{PRP.Bd_S1toS1}.
Thus $\partial$ is a fibre gradient if and only if $3)$ and $4)$ are satisfied. If $H$ is finite-dimensional, all symmetric gradients on $\mathcal{S}_{2}(H)$ are fibre gradients. In general, $\textrm{Ad}_{T}$ is a fibre gradient for each $T\in\mathcal{K}(H)$. \end{rem} \begin{prp}\label{PRP.Mass_Prsv} If $\partial$ is a fibre gradient and $(\eta_{i})_{i\in\mathbb{N}}\subset\mathcal{S}_{1}(H)$ an approximate identity in $\mathcal{B}(H)$, then $\eta_{i}\longrightarrow 0$ weakly in $T_{p}\mathcal{D}_{b}$ for each $p\in\mathcal{D}_{b}$. \end{prp} \begin{proof} Let $p\in\mathcal{D}_{b}$ be fixed but arbitrary. From Lemma \ref{LEM.Ext_Mlt}, we know $p^{\alpha}\partial xp^{1-\alpha}\in\mathcal{S}_{1}(H)$ for each $x\in A_{\partial}$ and each $\alpha\in [0,1]$. Using the integral representation of $M_{p}$, which we have since $\partial$ maps into $L^{2}(\mathcal{K}(H),\textrm{tr})=\mathcal{S}_{2}(H)$, we obtain \begin{align*} \langle \eta_{i},\eta_{i}\rangle_{p}\leq\int_{0}^{1}||p||_{\mathcal{S}_{1}(H)}||\partial\eta_{i}||_{\mathcal{B}(H)}^{2}d\alpha\leq(\sup_{i} ||\partial\eta_{i}||_{\mathcal{B}(H)})^{2}. \end{align*} \noindent As $\eta_{i}$ is an approximate identity, it $w^{*}$-converges in $\mathcal{B}(H)$. Thus $\sup_{i}||\eta_{i}||_{\mathcal{K}(H)}$ is finite. Hence $\sup_{i}||\partial\eta_{i}||_{\mathcal{K}(H)}\leq||\partial||_{\mathcal{B}(\mathcal{K}(H))}\sup_{i}||\eta_{i}||_{\mathcal{K}(H)}$ is finite as well, where we used $4)$. Using our estimate just above, we see that $\sup_{i}||\eta_{i}||_{p}$ is finite.\par We know $\partial(p^{\alpha}\partial xp^{1-\alpha})\in\mathcal{S}_{1}(H)$ by $2)$, implying $\textrm{tr}(p^{\alpha}\partial x p^{1-\alpha}\partial\eta_{i})=-\textrm{tr}(\partial(p^{\alpha}\partial xp^{1-\alpha})\eta_{i})$ by $3)$, and thus $\lim_{i}\textrm{tr}(p^{\alpha}\partial x p^{1-\alpha}\partial\eta_{i})=-\textrm{tr}(\partial(p^{\alpha}\partial xp^{1-\alpha}))$ since $\eta_{i}$ is an approximate identity. We claim the last term vanishes. By the Leibniz rule and $3)$, $\textrm{tr}(\partial(TS))=0$ for each $T,S\in\mathcal{S}_{1}(H)$.
Since $\mathcal{S}_{1}(H)_{h}=\mathcal{S}_{1}(H)_{+}-\mathcal{S}_{1}(H)_{+}$, the claim follows. We apply dominated convergence to obtain \begin{align*} \langle x,\eta_{i}\rangle_{p}=\int_{0}^{1}\textrm{tr}(p^{\alpha}\partial x p^{1-\alpha}\partial\eta_{i})d\alpha\longrightarrow 0 \end{align*} \noindent for each $x\in D(\partial)$. The latter is a dense subset of $T_{p}\mathcal{D}_{b}$ and $\sup_{i\in\mathbb{N}}||\eta_{i}||_{p}$ is finite. Together, this implies the statement. \end{proof} \subsection{Continuous dependence of minimisers on start- and endpoints} We introduce the notion of continuous dependence of minimisers on start- and endpoints, a property we require of almost every fibre in order to apply a measurable selection theorem in our proof of Theorem \ref{THM.Disint}. For the remainder of this section, we assume $H$ to be separable, identify $H=\mathcal{S}_{2}(H)$ and let $\partial$ be a symmetric gradient for $(\mathcal{K}(H),\textrm{tr})$.\par Choose countable $T_{k}\in\FinRk(H)\cap B_{\leq 1}(\mathcal{K}(H))$ lying densely in $B_{\leq 1}(\mathcal{K}(H))$. Then \begin{align*} d(S,R):=\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}|\textrm{tr}((S-R)T_{k})| \end{align*} \noindent metrises the $w^{*}$-topology on $\mathcal{S}_{cl}(\mathcal{K}(H)):=\overline{\mathcal{S}(\mathcal{K}(H))}=B_{\leq 1}(\mathcal{S}_{1}(H),||.||_{\mathcal{S}_{1}(H)})$. The latter is compact in $(\mathcal{S}_{1}(H),w^{*})$ by Banach-Alaoglu, hence $(\mathcal{S}_{cl}(\mathcal{K}(H)),d)$ is a compact metric space. For finite-dimensional $H$, $\mathcal{S}_{cl}(\mathcal{K}(H))$ is the unit sphere. \begin{ntn} $\mathcal{S}_{cl}(\mathcal{K}(H)):=(\mathcal{S}_{cl}(\mathcal{K}(H)),d)$ \end{ntn} \noindent We define a distance on $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ by setting \begin{align*} D(f,g):=\sup_{t\in [0,1]}d(f(t),g(t)) \end{align*} \noindent turning $(C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))),D)$ into a complete, separable metric space.
\begin{ntn} $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))):=(C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))),D)$. \end{ntn} \begin{ntn} Let $\otimes_{\varepsilon}$ denote the injective tensor product of locally convex topological vector spaces. \end{ntn} \begin{rem} To see separability, first note that $[0,1]$ is a Kelley space and $(\mathcal{S}_{1}(H),w^{*})$ a locally convex Hausdorff space. Thus $C([0,1],(\mathcal{S}_{1}(H),w^{*}))\cong C([0,1])\otimes_{\varepsilon} (\mathcal{S}_{1}(H),w^{*})$ w.r.t.~the topology of uniform convergence. The latter space is immediately seen to be separable by separability of $C([0,1])$ and $(\mathcal{S}_{1}(H),w^{*})$. We have \begin{align*} C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))\subset C([0,1],(\mathcal{S}_{1}(H),w^{*})) \end{align*} \noindent and uniform convergence is equivalent to convergence w.r.t.~the distance $D$. As a subspace of a separable space, we thereby know $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ to be separable itself. \end{rem} \begin{dfn}\label{DFN.MinSet} For all $p,q\in\mathcal{D}_{b}$, $\mathcal{M}(p,q):=\{\mu_{t}\in\mathcal{A}(p,q)\ |\ \mathcal{W}_{2}(p,q)=\sqrt{E(\mu_{t})}\}$ is the set of minimisers between $p$ and $q$. \end{dfn} \begin{lem}\label{LEM.FinDim_CntDpd_I} If $H$ is finite-dimensional, then $\mathcal{M}(p,q)\neq\emptyset$ and $\mathcal{M}(p,q)\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ is closed for each $p,q\in\mathcal{D}_{b}$. \end{lem} \begin{proof} Let $p_{i}\in\mathcal{D}_{b}$ be a sequence $||.||_{\mathcal{S}_{1}(H)}$-approximating $p$, resp.~$q_{i}\in\mathcal{D}_{b}$ a sequence $||.||_{\mathcal{S}_{1}(H)}$-approximating $q$. Choose minimisers $\mu_{t}^{i}\in\mathcal{A}(p_{i},q_{i})$, which exist by Proposition \ref{PRP.FinDim_Min}. We know that $\mathcal{W}_{2}$ metrises the $w^{*}$-topology by Proposition \ref{PRP.FinDim_WkMetr}, hence $\lim_{i}\sqrt{E(\mu_{t}^{i})}=\mathcal{W}_{2}(p,q)$. In particular, we obtain boundedness of $(E(\mu_{t}^{i}))_{i\in\mathbb{N}}\subset\mathbb{R}$.
Then the argument used in our proof of Proposition \ref{PRP.FinDim_Min} for extracting a minimiser works the same for varying but $||.||_{\mathcal{S}_{1}(H)}$-converging start- and endpoints, modulo obvious minor modifications. We thus extract a subsequence $\mu_{t}^{i}$ of minimisers in order to obtain a minimiser from $p$ to $q$. This shows $\mathcal{M}(p,q)$ to be non-empty.\par Given a converging sequence $\mu_{t}^{i}\in\mathcal{M}(p,q)$ in $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$, $\sqrt{E(\mu_{t}^{i})}=\mathcal{W}_{2}(p,q)$ allows us to extract a subsequence converging to a minimiser $\mu_{t}\in\mathcal{M}(p,q)$ as before. As $\mu_{t}^{i}$ converges by hypothesis, $\mu_{t}$ must be the limit of the whole sequence. \end{proof} \begin{dfn}\label{DFN.Cnt_Dpd} A symmetric gradient $\partial$ for $(\mathcal{K}(H),\textrm{tr})$ has continuous dependence of minimisers on start- and endpoints if for all $p,q\in\mathcal{D}_{b}$ and all $(p_{i})_{i\in\mathbb{N}},(q_{i})_{i\in\mathbb{N}}\subset\mathcal{D}_{b}$ with $p_{i}\longrightarrow p$, resp.~$q_{i}\longrightarrow q$ in the $||.||_{\mathcal{S}_{1}(H)}$-topology, we know that \begin{itemize} \item[1)] there exist $\mu_{t}\in\mathcal{M}(p,q)$ and $\mu_{t}^{i_{k}}\in\mathcal{M}(p_{i_{k}},q_{i_{k}})$ with $\lim_{k\in\mathbb{N}}D(\mu_{t}^{i_{k}},\mu_{t})=0$, \item[2)] the limit of each $D$-converging sequence of $\mu_{t}^{i}\in\mathcal{M}(p_{i},q_{i})$ lies in $\mathcal{M}(p,q)$. \end{itemize} \end{dfn} \begin{rem} We expect continuous dependence of minimisers on start- and endpoints if $E$ is lower semi-continuous. Moreover, if we know $2)$ and have existence of a $D$-converging sequence of $\mu_{t}^{i}$, $1)$ follows immediately. \end{rem} \begin{prp}\label{PRP.Cnt_Dpd} If $\partial$ has continuous dependence of minimisers on start- and endpoints, then $\mathcal{M}(p,q)$ is non-empty and closed w.r.t.~$D$ for each $p,q\in\mathcal{D}_{b}$.
\end{prp} \begin{proof} Set $p_{i}=p, q_{i}=q$ and apply $1)$ to see non-emptiness, $2)$ for closedness. \end{proof} \begin{lem}\label{LEM.FinDim_CntDpd_II} If $H$ is finite-dimensional, all symmetric gradients have continuous dependence of minimisers on start- and endpoints. \end{lem} \begin{proof} Our argument proving non-emptiness of $\mathcal{M}(p,q)$ in Lemma \ref{LEM.FinDim_CntDpd_I} makes no assumption on the sequences $(p_{i})_{i\in\mathbb{N}},(q_{i})_{i\in\mathbb{N}}\subset\mathcal{D}_{b}$ used. As we extract a minimising subsequence by Arzel\'a-Ascoli, $\lim D(\mu_{t}^{i_{k}},\mu_{t})=0$ follows. We thus have $1)$. If we already have $D$-convergence, we argue as in the proof of Lemma \ref{LEM.FinDim_CntDpd_I} \textit{after} having applied Arzel\'a-Ascoli to show $2)$. \end{proof} \section{Vertical gradients for trivial $\mathcal{K}(H)$-bundles} We establish our setting, prove the disintegration theorem and consider mean entropic curvature bounds as an application. For the remainder of this section, let $X$ be a locally compact Hausdorff space, $\mathcal{B}(X)$ its Borel $\sigma$-algebra, $(X,\mathcal{B}(X))$ a separable measure space and $H$ a separable Hilbert space. Separability of $(X,\mathcal{B}(X))$ ensures $L^{p}(X,\nu)$ to be separable as a Banach space for each Radon measure $\nu$. \subsection{Product traces, their $L^{p}$-spaces and vertical gradients} As before, $\otimes_{\varepsilon}$ denotes the injective tensor product. We have $C_{c}(X,E)=C_{c}(X)\otimes_{\varepsilon}E$ for each Banach space $E$ since $X$ is a Kelley space by local compactness. Thus $C_{c}(X)\odot E\subset C_{c}(X,E)$ densely, while $C_{c}(X,E)\subset C_{0}(X,E)$ holds in any case.
\begin{dfn} If $\tau$ is a trace on $C_{0}(X,\mathcal{K}(H))$ such that \begin{itemize} \item[1)]$C_{c}(X,\mathcal{S}_{1}(H))\subset D(\tau)$, \item[2)]$T\longmapsto \tau(f\odot T)=:\tau_{f}(T)$ is a bounded linear functional on $\mathcal{S}_{1}(H)$ for each $f\in C_{c}(X)$, \end{itemize} \noindent then $\tau$ is called a product trace. \end{dfn} \begin{prp}\label{PRP.Prd_Tr} If $\nu$ is a Radon measure on $X$, the functional $\nu\odot\textrm{tr}$ on $C_{c}(X)\odot\mathcal{S}_{1}(H)$ induces a unique product trace denoted by $\nu\otimes\textrm{tr}$. Moreover, for each product trace $\tau$ there exists a unique Radon measure $\nu$ on $X$ such that $\tau=\nu\otimes\textrm{tr}$. \end{prp} \begin{proof} Consider $L^{1}(X,\mathcal{S}_{1}(H),d\nu)$ and define the subspace \begin{align*} D(\nu\otimes\textrm{tr}):=C_{0}(X,\mathcal{S}_{1}(H))_{+}\cap L^{1}(X,\mathcal{S}_{1}(H),d\nu). \end{align*} \noindent We set \begin{align*} (\nu\otimes\textrm{tr})(F) := \begin{cases} \int_{X}\textrm{tr}(F(x))d\nu & \textrm{if}\ F\in D(\nu\otimes\textrm{tr}) \\ \infty & \textrm{else} \end{cases} \end{align*} \noindent yielding a trace $\nu\otimes\textrm{tr}$ on $C_{0}(X,\mathcal{K}(H))$. As $C_{c}(X,\mathcal{S}_{1}(H))\subset D(\nu\otimes\textrm{tr})$ holds by construction, $\nu\otimes\textrm{tr}$ is a product trace. Furthermore, we have $(\nu\otimes\textrm{tr})(|F|)=\int_{X}\textrm{tr}(|F(x)|)d\nu$ for each $F\in D(\nu\otimes\textrm{tr})$ because $|F|(x)=|F(x)|$ for all $x\in X$. $C_{c}(X)\odot\mathcal{S}_{1}(H)$ lies dense in both $L^{1}(C_{0}(X,\mathcal{K}(H)),\nu\otimes\textrm{tr})$ and $L^{1}(X,\mathcal{S}_{1}(H),d\nu)$ since it lies dense in $D(\nu\otimes\textrm{tr})$ w.r.t.~either topology. Thus the $L^{1}$-space defined by $\nu\otimes\textrm{tr}$ is $L^{1}(X,\mathcal{S}_{1}(H),d\nu)$ by construction, and $\nu\odot\textrm{tr}$ uniquely determines the product trace $\nu\otimes\textrm{tr}$.\par Let $\tau$ be a product trace.
For all positive $f\in C_{c}(X)$, let $S_{f}\in \mathcal{B}(H)=\mathcal{S}_{1}(H)^{*}$ be the unique element such that $\tau_{f}=\textrm{tr}(S_{f}\hspace{0.05cm}.\hspace{0.05cm})$. Positivity and traciality of $\tau$ imply the same to hold for each $\tau_{f}$. Thus $S_{f}=L(f)1_{\mathcal{B}(H)}$ for a unique $L(f)\in [0,\infty)$. Here, positivity of $L(f)$ follows from positivity of $\tau_{f}$. A similar argument applies for negative $f$, with $L(f)\in(-\infty,0]$. We decompose $f=f_{+}+f_{-}$, for $f_{+}=\max\{f,0\}$ and $f_{-}=\min\{f,0\}$. Linearity of $\tau$ implies $\tau_{f}=(L(f_{+})+L(f_{-}))\textrm{tr}$ and $\tau_{f+g}=(L(f)+L(g))\textrm{tr}$. We obtain $L(f)=L(f_{+})+L(f_{-})$ and $L(f+g)=L(f)+L(g)$. Hence $L$ is a positive linear functional on $C_{c}(X)$, and there exists a unique Radon measure $\nu$ on $X$ representing $L$. We have $\tau_{|C_{c}(X)\odot\mathcal{S}_{1}(H)}=\nu\odot\textrm{tr}$ by construction of $L$. The second statement follows by uniqueness of $\nu\otimes\textrm{tr}$. \end{proof} \begin{cor}\label{COR.Prd_Tr} Let $H$ be finite-dimensional. If $\tau$ is a finite trace on $C_{0}(X,\mathcal{K}(H))$, then it is a product trace with finite Radon measure. \end{cor} \begin{proof} Since $H$ is finite-dimensional, we only need to show $C_{c}(X,\mathcal{S}_{1}(H))\subset D(\tau)$. This is true by hypothesis, as $\tau$ is defined on all of $C_{0}(X,\mathcal{K}(H))$. Finiteness of $\nu$ is implied by finiteness of $\tau=\nu\otimes\textrm{tr}$. \end{proof} Our proof of Proposition \ref{PRP.Prd_Tr} shows each product trace $\tau=\nu\otimes\textrm{tr}$ to have $C_{0}(X,\mathcal{S}_{1}(H))_{+}\cap L^{1}(X,\mathcal{S}_{1}(H),d\nu)$ as domain in $C_{0}(X,\mathcal{K}(H))$. Furthermore, we saw the $L^{1}$-space of $\tau$ to equal $L^{1}(X,\mathcal{S}_{1}(H),d\nu)$. We generalise this relation to arbitrary $p\in [1,\infty]$.
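For instance, the above characterisation specialises to the Lebesgue measure as follows; the particular choice of $X$ and $\nu$ merely serves as an illustration. \begin{bsp} Let $X=[0,1]$ and let $\lambda$ be the Lebesgue measure. By Proposition \ref{PRP.Prd_Tr}, \begin{align*} (\lambda\otimes\textrm{tr})(F)=\int_{0}^{1}\textrm{tr}(F(x))dx \end{align*} \noindent for each $F\in D(\lambda\otimes\textrm{tr})$ defines a product trace on $C([0,1],\mathcal{K}(H))$. For $H=\mathbb{C}$, it reduces to integration against $\lambda$. \end{bsp}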
\begin{ntn} We fix a product trace $\tau=\nu\otimes\textrm{tr}$ for the remainder of this section and drop all references to $\nu$ from our $L^{p}$-space notation in the future. \end{ntn} \begin{prp}\label{PRP.Prd_Tr_LP} If $\tau$ is a product trace, then $L^{p}(C_{0}(X,\mathcal{K}(H)),\tau)=L^{p}(X,\mathcal{S}_{p}(H))$ for each $p\in [1,\infty]$. \end{prp} \begin{proof} Let $p\in [1,\infty)$. For all $F\in D(\tau)$, we know $|F|^{p}\in D(\tau)$. Then $\tau=\nu\otimes\textrm{tr}$ implies \begin{align*} \tau(|F|^{p})=\int_{X}\textrm{tr}(|F(x)|^{p})d\nu \end{align*} \noindent for each $F\in D(\tau)$. We argue by density to obtain our statement for general $p\in [1,\infty)$ in direct analogy to Proposition \ref{PRP.Prd_Tr}.\par Let $p=\infty$. We have $L^{\infty}(X,\mathcal{B}(H))\subset L^{1}(X,\mathcal{S}_{1}(H))^{*}$ isometrically via the map \begin{align*} F\longmapsto\Big(G\longmapsto\int_{X}\textrm{tr}(F(x)G(x))d\nu\hspace{0.05cm}\Big). \end{align*} \noindent We already saw $L^{1}(X,\mathcal{S}_{1}(H))=L^{1}(C_{0}(X,\mathcal{K}(H)),\tau)$, thus \begin{align*} C_{0}(X,\mathcal{K}(H))\subset L^{\infty}(X,\mathcal{B}(H))\subset L^{1}(C_{0}(X,\mathcal{K}(H)),\tau)^{*}=L^{\infty}(C_{0}(X,\mathcal{K}(H)),\tau) \end{align*} \noindent where the last object is the $W^{*}$-algebra generated by $C_{0}(X,\mathcal{K}(H))$ represented over $L^{2}(X,\mathcal{S}_{2}(H))$. Moreover, we used $L^{1}(A,\omega)^{*}=L^{\infty}(A,\omega)$ for each trace $\omega$ on any $C^{*}$-algebra $A$. Our statement follows from density of $C_{0}(X,\mathcal{K}(H))$ in $L^{\infty}(C_{0}(X,\mathcal{K}(H)),\tau)$ w.r.t.~the strong operator-topology, provided $L^{\infty}(X,\mathcal{B}(H))$ is closed w.r.t.~the strong operator-topology.\par Choose a countable subset $N\subset\mathcal{B}(H)$ dense in the $w^{*}$-topology and let $T\in L^{\infty}(X,\mathcal{B}(H))^{\prime}$. $T$ commutes with every $1_{X}\otimes S\in L^{\infty}(X,\mathcal{B}(H))$, $S\in N$ arbitrary.
As $N$ is countable, we have $T(x)\in N^{\prime}$ for a.e.~$x\in X$. By density of $N$ and $\mathcal{B}(H)$ being a factor, $T\in L^{\infty}(X)$. Hence $L^{\infty}(X,\mathcal{B}(H))^{\prime}=L^{\infty}(X)$. Theorem IV.7.10 in \cite{TakTOAI} states that $L^{\infty}(X)^{\prime}=L^{\infty}(X,\mathcal{B}(H))$, and $L^{\infty}(X,\mathcal{B}(H))^{\prime\prime}=L^{\infty}(X,\mathcal{B}(H))$ is a $W^{*}$-algebra. In particular, it is closed in the strong operator-topology. \end{proof} \begin{rem}\label{REM.Prd_Tr_LP} We have $L^{1}(X,\mathcal{S}_{1}(H))\cong L^{1}(X)\otimes_{\pi}\mathcal{S}_{1}(H)$. Furthermore, $L^{1}(X)$ and $\mathcal{S}_{1}(H)$ are separable. Thus $L^{1}(X,\mathcal{S}_{1}(H))$ is separable. The same holds true for $L^{2}(X,\mathcal{S}_{2}(H))$, where we use the tensor product of Hilbert spaces rather than the projective tensor product $\otimes_{\pi}$ above. \end{rem} \begin{cor}\label{COR.L1_Pstv} Each class in $L_{+}^{1}(X,\mathcal{S}_{1}(H))$ can be represented by an integrable function $F$ such that $F(x)\geq 0$ for every $x\in X$. \end{cor} \begin{proof} Each $F\in L_{+}^{1}(X,\mathcal{S}_{1}(H))$ can be expressed as $F=G^{*}G$ by unbounded Borel functional calculus. Any representative of $G$ thus induces a representative of $F$ as required. \end{proof} We establish an appropriate setting for a symmetric gradient. To do so, we have to define an action of $L^{\infty}(X,\mathcal{B}(H))\otimes_{max} L^{\infty}(X,\mathcal{B}(H))^{op}$ on a Hilbert space equipped with an appropriate involution $J$. In light of Definition \ref{DFN.Fbr_Grd} and Proposition \ref{PRP.Mass_Prsv}, we focus on $\bigoplus_{k=1}^{m}L^{2}(X,\mathcal{S}_{2}(H))=L^{2}(X,\bigoplus_{k=1}^{m}\mathcal{S}_{2}(H))$ as our Hilbert space. This will enable us to prove Proposition \ref{PRP.Mass_Prsv_Unbd}, i.e.~mass preservation in almost every fibre.
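The role of the approximate identity in Proposition \ref{PRP.Mass_Prsv} can be sketched as follows. \begin{rem} If the algebra at hand were unital with unit contained in the extension algebra, choosing $a=1$ in the continuity equation would yield \begin{align*} \frac{d}{dt}\tau(\rho_{t})=\langle v_{t},1\rangle_{\rho_{t}}=0 \end{align*} \noindent for a.e.~$t\in [0,1]$ since $\partial 1=0$, i.e.~mass preservation along the path. As $\mathcal{K}(H)$ has no unit for infinite-dimensional $H$, the approximate identity $(\eta_{i})_{i\in\mathbb{N}}$ takes the place of the unit. \end{rem}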
\begin{rem} Each $\bigoplus_{k=1}^{m}L^{2}(X,\mathcal{S}_{2}(H))=L^{2}(X,\bigoplus_{k=1}^{m}\mathcal{S}_{2}(H))$ is equipped with the canonical left and right action of $L^{\infty}(X,\mathcal{B}(H))$ induced by pointwise multiplication and pointwise adjoining. For each $F\in L^{\infty}(X,\mathcal{B}(H))$ and $G\in\bigoplus_{k=1}^{m}L^{2}(X,\mathcal{S}_{2}(H))$, we thus have $(F.G)_{k}(x)=F(x).G_{k}(x)$ and $(G.F)_{k}(x)=G_{k}(x).F(x)$. \end{rem} We next discuss vertical gradients. Set $\mathcal{H}:=\bigoplus_{k=1}^{m}\mathcal{S}_{2}(H)$. We consider a family $(\partial_{x})_{x\in X}$ of symmetric gradients for $(\mathcal{K}(H),\textrm{tr})$ mapping to $\mathcal{H}$ such that \begin{itemize} \item[$\bullet$] $(\partial_{x})_{k}$ is a fibre gradient for all $k\in\{1,...,m\}$ for a.e.~$x\in X$, \item[$\bullet$] $x\longmapsto\partial_{x}T$ is measurable for each $T\in\FinRk(H)$. \end{itemize} \noindent For $f\odot T\in C_{c}(X)\odot\FinRk(H)$, set $\partial(f\odot T)(x):=f(x)\partial_{x}T$ and consider \begin{align*} D_{cb}(\partial):=\{F\in C_{c}(X)\odot\FinRk(H)\ |\ x\longmapsto||\partial_{x}F(x)||_{\mathcal{H}}\in L^{2}(X)\cap L^{\infty}(X)\} \end{align*} \noindent to obtain a densely defined, unbounded operator $(\partial,D_{cb}(\partial))$ from $L^{2}(X,\mathcal{S}_{2}(H))$ to $L^{2}(X,\mathcal{H})$. The operator is closable since each $\partial_{x}$ is closable and $L^{2}$-convergence implies a.e.~pointwise convergence for a subsequence. We denote this closure by $\partial$ as well. Observe that $(\partial F)(x)=\partial_{x}F(x)$ a.e.~for each $F\in D(\partial)$ by construction as each $\partial_{x}$ is closed. \begin{dfn}\label{DFN.Vrt_Grd} Let $(\partial_{x})_{x\in X}$ be a family as above.
We call $\partial$ the induced operator of $(\partial_{x})_{x\in X}$ and call it a vertical gradient if \begin{itemize} \item[1)] $C_{c}(X)\odot\FinRk(H)=D_{cb}(\partial)$, \item[2)] there exists an approximate identity $(\eta_{i})_{i\in\mathbb{N}}\subset\FinRk(H)$ such that $\sup_{x\in K,i\in\mathbb{N}}||\partial_{x}\eta_{i}||_{\mathcal{B}(H)}$ is finite for each compact set $K\subset X$. \end{itemize} \end{dfn} \begin{prp}\label{PRP.Vrt_Grd_Prop} If $\partial$ is a vertical gradient, it is a symmetric gradient for $(C_{0}(X,\mathcal{K}(H)),\tau)$ such that $(\partial F)(x)=\partial_{x}F(x)$ a.e.~for each $F\in D(\partial)$. Furthermore, $C_{c}(X)\odot\FinRk(H)$ is an extension algebra. \end{prp} \begin{proof} By construction, $\partial$ is a closed, densely defined, unbounded operator such that $(\partial F)(x)=\partial_{x}F(x)$ for each $F\in D(\partial)$. The latter implies $\partial$ to be a derivation since each $\partial_{x}$ is one. As $C_{c}(X)\odot\FinRk(H)\subset D(\partial)$, $\partial$ is a gradient. Symmetry follows from $(\partial F)(x)=\partial_{x}F(x)$ and $\partial_{x}$ being symmetric. We already used density of $C_{c}(X)\odot\FinRk(H)$ in $C_{0}(X,\mathcal{K}(H))$, while it is a core by construction of $\partial$. It is therefore an extension algebra by definition of $D_{cb}(\partial)$. \end{proof} \begin{bsp} If $H$ is finite-dimensional and $x\longmapsto||\partial_{x}||$ is locally bounded, then $\partial$ is a vertical gradient by Remark \ref{REM.Mass_Prsv}. \end{bsp} \begin{bsp} Let $\partial_{0}$ be a symmetric gradient for $(\mathcal{K}(H),\textrm{tr})$ mapping to $\mathcal{H}$ such that each $(\partial_{0})_{k}$ is a fibre gradient. For all $f\in L_{loc}^{\infty}(X,\mathcal{K}(H))$, the measurable family given by $\partial_{x}:=f(x)\partial_{0}$ induces a vertical gradient. \end{bsp} \begin{rem} All derivations from $M_{n}(\mathbb{C})$ to itself are of the form $\textrm{Ad}_{T}$.
Hence if $H$ is finite-dimensional, then $\partial_{x}=(\textrm{Ad}_{T_{1}(x)},\hdots,\textrm{Ad}_{T_{m}(x)})$ for a measurable family with $(T_{1}(x),...,T_{m}(x))\in M_{n}(\mathbb{C})^{m}$. \end{rem} For the remainder of this section, let $\partial$ be a vertical gradient. As described in Subsection 2.4, having an extension algebra allows us to extend to unbounded densities. It furthermore implies existence of a separating function suitable for our extension. Here, $\mathfrak{A}:=C_{c}(X)\odot\FinRk(H)$ is the extension algebra we consider. From what we have seen at the beginning of this subsection, a density is an element $P\in L_{+}^{1}(X,\mathcal{S}_{1}(H))$ such that $\int_{X}\textrm{tr}(P(x))d\nu=1$. \begin{dfn}For all $P\in\mathcal{D}$, we define \begin{align*} \theta_{P}(x):= \begin{cases}\big(\textrm{tr}(P(x))\big)^{-\frac{1}{2}} & \textrm{if}\ P(x)\neq 0 \\ 0 & \textrm{else} \end{cases} \end{align*} \noindent and set $\nu_{P}:=\textrm{tr}(P(x))d\nu$. \end{dfn} \noindent If $A\in L^{1}([0,1],L^{1}(X,\mathcal{S}_{1}(H)))$, then $(\int_{0}^{1}A_{t}dt)(x)=\int_{0}^{1}A_{t}(x)dt$. If $m=1$, we have \begin{align*} M_{P}(F)(x)=\Big(\int_{0}^{1}P^{\alpha}FP^{1-\alpha}d\alpha\Big)(x)=\int_{0}^{1}P(x)^{\alpha}F(x)P(x)^{1-\alpha}d\alpha=M_{P(x)}(F(x)) \end{align*} \noindent for each $P\in\mathcal{D}$ and $F\in L^{\infty}(X,\mathcal{B}(H))$. $G^{\alpha}(x)=G(x)^{\alpha}$ is immediate if $G\in L_{+}^{1}(X,\mathcal{S}_{1}(H))$ is already bounded. The unbounded case follows by approximating $G$ in $L^{\alpha}(X,\mathcal{S}_{1}(H))$ with $\min\{G,C_{i}\}(x):=\min\{G(x),C_{i}\}$, where $C_{i}\geq 0$ is a strictly increasing sequence. 
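As a sanity check of the identity $M_{P}(F)(x)=M_{P(x)}(F(x))$ above (our illustration, again for $m=1$), note that the mean multiplication reduces to ordinary multiplication on commuting arguments.

\begin{bsp} If $P\in\mathcal{D}$ and $F\in L^{\infty}(X,\mathcal{B}(H))$ satisfy $[P(x),F(x)]=0$ for a.e.~$x\in X$, then \begin{align*} M_{P}(F)(x)=\int_{0}^{1}P(x)^{\alpha}F(x)P(x)^{1-\alpha}d\alpha=\int_{0}^{1}P(x)F(x)d\alpha=P(x)F(x) \end{align*} \noindent for a.e.~$x\in X$. In particular, $M_{P}$ acts as multiplication by $P$ on operators commuting with $P$. \end{bsp}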
The tangent space inner product at $P$ is thus given by \begin{align*} \langle F,G\rangle_{P}&=\int_{X}\textrm{tr}(M_{P(x)}(\partial_{x}F(x)^{*})\partial_{x}G(x))d\nu\\ &=\int_{X}\textrm{tr}(M_{P(x)}^{\frac{1}{2}}(\partial_{x}F(x)^{*})M_{P(x)}^{\frac{1}{2}}(\partial_{x}G(x)))d\nu\\ &=\int_{X}\langle F(x),G(x)\rangle_{\theta_{P}^{2}(x)P(x)}d\nu_{P} \end{align*} \noindent for each $F,G\in\mathfrak{A}$, where we used $P(x)\in\mathcal{B}_{+}(H)$ a.e.~to ensure that $M_{P(x)}^{\frac{1}{2}}$ is defined. The case of general $m\in\mathbb{N}$ follows at once as the above describes the situation on each summand.\par For all $P\in\mathcal{D}$ and all $F\in\mathfrak{A}$, set $(M_{P}^{\frac{1}{2}}\partial F)_{k}(x):=M_{P(x)}^{\frac{1}{2}}(\partial_{x})_{k}F(x)$. Then $M_{P}^{\frac{1}{2}}\partial F$ is strongly measurable by Lemma 7.5 in \cite{TakTOAI}, and lies in $L^{2}(X,\mathcal{H})$ by what we showed just above. Furthermore, we have \begin{align*} ||F||_{P}=||M_{P}^{\frac{1}{2}}\partial F||_{L^{2}(X,\mathcal{H})} \end{align*} \noindent and are therefore able to identify $T_{P}\mathcal{D}$ isometrically with a subspace of $L^{2}(X,\mathcal{H})$ in direct analogy to the bounded case. \begin{ntn} For an admissible path in the setting of vertical gradients, we write $\mu_{t}=(P_{t},V_{t})=(P_{t},W_{t})$ instead of $\mu_{t}=(\rho_{t},v_{t})=(\rho_{t},w_{t})$. Here, $W_{t}$ is the unique vector in $L^{2}(X,\mathcal{H})$ associated to $V_{t}$ such that $||V_{t}||_{P_{t}}=||W_{t}||_{L^{2}(X,\mathcal{H})}$. \end{ntn} \begin{prp}\label{PRP.Mass_Prsv_Unbd} If $\mathcal{A}(P,Q)\neq\emptyset$, then $\textrm{tr}(P(x))=\textrm{tr}(P_{t}(x))$ for a.e.~$x\in X$ for each $t\in [0,1]$. \end{prp} \begin{proof} Let $\eta_{i}\in\FinRk(H)$ be an approximate identity. Consider $f\odot\eta_{i}\in C_{c}(X)\odot\FinRk(H)$.
Then for all $P\in\mathcal{D}$ and all $F=g\odot T\in\mathfrak{A}$, we have \begin{align*} \langle F,f\odot\eta_{i}\rangle_{P}=\sum_{k=1}^{m}\int_{X}f(x)(\langle F(x),\eta_{i}\rangle_{\theta_{P}^{2}(x)P(x)})_{k}d\nu_{P} \end{align*} \noindent and furthermore estimate \begin{align*} |(\langle F(x),\eta_{i}\rangle_{\theta_{P}^{2}(x)P(x)})_{k}|\leq ||\partial_{x}F(x)||_{\mathcal{B}(H)}||\partial_{x}\eta_{i}||_{\mathcal{B}(H)}\leq ||\partial_{x}F(x)||_{\mathcal{S}_{2}(H)}||\partial_{x}\eta_{i}||_{\mathcal{B}(H)}. \end{align*} \noindent We have $\sup_{x\in X}||\partial_{x}F(x)||_{\mathcal{S}_{2}(H)}<\infty$ by $1)$ and $\sup_{x\in\textrm{supp}\hspace{0.05cm}f,i\in\mathbb{N}}||\partial_{x}\eta_{i}||_{\mathcal{B}(H)}<\infty$ by $2)$ in Definition \ref{DFN.Vrt_Grd}. Since $P$ was fixed and Proposition \ref{PRP.Mass_Prsv} shows pointwise convergence to zero, we are able to apply dominated convergence to show convergence to zero of the integral above. Setting $g=f$ and $T=\eta_{i}$, we see $||f\odot\eta_{i}||_{P}$ to be bounded uniformly in $P$ and $i\in\mathbb{N}$. In particular, $f\odot\eta_{i}$ converges weakly to zero in each $T_{P}\mathcal{D}$ by density of $\mathfrak{A}$.\par Let $\mu_{t}\in\mathcal{A}(P,Q)$. Observe that since $\eta_{i}$ converges to the identity in the $w^{*}$-topology, $||\eta_{i}||_{\mathcal{B}(H)}$ must be uniformly bounded. For all $t\in [0,1]$ and all $f\in C_{c}(X)$, we therefore have \begin{align*} \int_{X}f(x)\textrm{tr}((P-P_{t})(x))d\nu=\lim\int_{X}f(x)\textrm{tr}((P-P_{t})(x)\eta_{i})d\nu=\lim\int_{0}^{t}\langle V_{s},f\odot\eta_{i}\rangle_{P_{s}}ds. \end{align*} \noindent The right-hand side is zero since $\sup_{i,P} ||f\odot\eta_{i}||_{P}<\infty$ and $||V_{t}||_{P_{t}}^{2}\in L^{1}([0,1])$. Since $f$ was arbitrary, $\textrm{tr}(P(x))=\textrm{tr}(P_{t}(x))$ almost everywhere. \end{proof} \begin{dfn} Let $\mathcal{D}(X,\nu)$ be the set of densities on $(X,\nu)$.
For $f\in\mathcal{D}(X,\nu)$, let $\mathcal{D}_{f}$ be the set of all $P\in\mathcal{D}$ with $\textrm{tr}(P(x))=f(x)$ almost everywhere. \end{dfn} \begin{cor}\label{COR.Mass_Prsv_Unbd}\hspace{1cm} \begin{itemize} \item[1)] $(\mathcal{D},\mathcal{W}_{2})=\underset{f\in\mathcal{D}(X,\nu)}{\coprod}\ (\mathcal{D}_{f},\mathcal{W}_{2})$. \item[2)] Admissible paths starting at an (un-)bounded density remain (un-)bounded. \item[3)] $\mathcal{W}_{2}$ does not metrise the $w^{*}$-topology. \end{itemize} \end{cor} \begin{proof} Any $P\in\mathcal{D}$ induces a density on $(X,\nu)$ by $f(x):=\textrm{tr}(P(x))$. Given $f\in \mathcal{D}(X,\nu)$, choose some density matrix $p\in\mathcal{S}_{1}(H)$ and set $P(x):=f(x)p$ to obtain a $P\in\mathcal{D}$ such that $\textrm{tr}(P(x))=f(x)$. This and Proposition \ref{PRP.Mass_Prsv_Unbd} imply the first and second statement. The third statement follows from the first, as $(\mathcal{D},w^{*})$ is connected. \end{proof} \begin{rem}\label{REM.Mass_Prsv_Unbd} If $X$ is compact and $H$ finite-dimensional, any vertical gradient induces a $\mathcal{W}_{2}$ \textit{not} metrising the $w^{*}$-topology on $\mathcal{D}$ even though the underlying $C^{*}$-algebra is unital. This is a departure from the commutative case. \end{rem} \begin{lem}\label{LEM.Msrbl_Unbd} If $\mu_{t}$ is an admissible path, then $P_{t}\in L^{1}([0,1],L^{1}(X,\mathcal{S}_{1}(H)))$. \end{lem} \begin{proof} When proving Lemma \ref{LEM.Msrbl_Bd}, we did not require boundedness of $\rho_{t}$. Hence our argument remains applicable to $P_{t}$ since $L^{1}(A,\tau)=L^{1}(X,\mathcal{S}_{1}(H))$ in our current setting. \end{proof} \subsection{Proving the disintegration theorem} We first show all $\mu_{t}=(P_{t},W_{t})\in\mathcal{A}(P,Q)$ to be rectifiable, by which we mean the following. We fix a representative of $W_{t}\in L^{2}(X,\mathcal{H})$ for each $t\in [0,1]$, which we again denote by $W_{t}$.
Then there exists a representative $P_{t}^{rct}$ of $P_{t}\in L^{1}([0,1]\times X,\mathcal{S}_{1}(H))$ such that \begin{align*} \mu_{t}^{rct}(x):=(\theta_{P}^{2}(x)P_{t}^{rct}(x),\theta_{P}(x)W_{t}(x))\in\mathcal{A}(\theta_{P}^{2}(x)P(x),\theta_{P}^{2}(x)Q(x)) \end{align*} \noindent for a.e.~$x\in X$. By construction, we have $\mu_{t}=(P_{t}^{rct},W_{t})$. This implies a mean energy representation of the energy functional.\par Assuming continuous dependence of minimisers on start- and endpoints for a.e.~fibre, the second step is application of a measurable selection theorem. The latter is used to find an integrable collection of fibre-wise minimisers $(\xi_{t}(x))_{x\in X}\in\mathcal{A}(\theta_{P}^{2}(x)P(x),\theta_{P}^{2}(x)Q(x))$ integrating to an element of $\mathcal{A}(P,Q)$ after normalising with $\theta_{P}^{-2}$. This yields a minimiser by the mean energy representation. \begin{rem} $L^{1}(X,\mathcal{S}_{1}(H))$ and $L^{2}(X,\mathcal{S}_{2}(H))$ are separable, see Remark \ref{REM.Prd_Tr_LP}. \end{rem} \begin{lem}\label{LEM.Rctf_NS} Let $(X,\nu)$ and $(Y,\eta)$ be locally compact Hausdorff spaces equipped with Radon measures. Moreover, let $E$ be a Banach space. If $F$ is a representative of $0\in L^{1}(X\times Y,E)$, then $N_{x}:=\{y\in Y\ |\ F(x,y)\neq 0\}$ is a nullset for a.e.~$x\in X$. \end{lem} \begin{proof} For fixed $x\in X$, set $g_{x}(y):=||F(x,y)||_{E}$. Then $N_{x}=g_{x}^{-1}(\mathbb{R}\setminus\{0\})$, hence $N_{x}$ is measurable. We know $\int_{X}\int_{Y}||F(x,y)||_{E}\hspace{0.025cm}d\eta\hspace{0.025cm}d\nu=0$, hence $\int_{Y}g_{x}(y)\hspace{0.025cm}d\eta=0$ for a.e.~$x\in X$. The claim follows from definiteness of the norm. \end{proof} \begin{ntn} If we fix a representative of $W_{t}\in L^{2}(X,\mathcal{H})$, we again denote it by $W_{t}$.
\end{ntn} \begin{lem}\label{LEM.Rcft_Msrbl} If $\mu_{t}$ is an admissible path and we fix a representative of $W_{t}\in L^{2}(X,\mathcal{H})$ for each $t\in [0,1]$, then \begin{itemize} \item[1)] $\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}F(x)\rangle_{\mathcal{H}}$ is measurable on $[0,1]\times X$ for each $F\in\mathfrak{A}$, \item[2)] $T\longmapsto\int_{0}^{t}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}ds$ defines a unique $\tilde{P}_{t}(x)\in\mathcal{S}_{1}(H)$ for each $t\in [0,1]$ for a.e.~$x\in X$, \item[3)] $t\longmapsto\tilde{P}_{t}(x)$ is $w^{*}$-continuous on $[0,1]$ for a.e.~$x\in X$, \item[4)] $x\longmapsto\tilde{P}_{t}(x)\in\mathcal{S}_{1}(H)$ is strongly measurable w.r.t.~the $||.||_{\mathcal{S}_{1}(H)}$-topology for each $t\in [0,1]$. \end{itemize} \end{lem} \begin{proof} From $W_{s}\in L^{2}(X,\mathcal{H})$, $P_{s}\in\mathcal{D}$ and existence of a separating function, we know \begin{align*} x\longmapsto\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}F(x)\rangle_{\mathcal{H}} \end{align*} \noindent to lie in $L^{1}(X)$ for each $s\in [0,1]$. Hence we need $s\longmapsto \langle W_{s}(\hspace{0.05cm}.\hspace{0.05cm}),M_{P_{s}(\hspace{0.05cm}.\hspace{0.05cm})}^{\frac{1}{2}}\partial_{\hspace{0.05cm}.\hspace{0.05cm}}F(\hspace{0.05cm}.\hspace{0.05cm})\rangle_{\mathcal{H}}$ to be strongly measurable w.r.t.~the $||.||_{L^{1}(X)}$-topology to obtain the first statement. To see this, we test on continuous bounded functions and extend to $L^{\infty}$-functions. For $g\in C_{b}(X)$ and $F\in\mathfrak{A}$, $gF\in\mathfrak{A}$. We thus have \begin{align*} \int_{X}g(x)\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}F(x)\rangle_{\mathcal{H}}d\nu=\langle W_{s},M_{P_{s}}^{\frac{1}{2}}\partial (gF)\rangle_{L^{2}(X,\mathcal{H})}=\frac{d}{dt}_{|t=s}\tau(P_{t}gF) \end{align*} \noindent which is a measurable map on $[0,1]$.
For $g\in L^{\infty}(X)$, we use density of $C_{b}(X)\subset L^{\infty}(X)$ to approximate pointwise by measurable maps. This proves the first statement.\par For the second one, we assume $P_{t}(x)\geq 0$ for each $(t,x)\in [0,1]\times X$ without loss of generality by Corollary \ref{COR.L1_Pstv} and Lemma \ref{LEM.Msrbl_Unbd}. Using Proposition \ref{PRP.Mass_Prsv_Unbd} and letting $E=\mathbb{C}$ in Lemma \ref{LEM.Rctf_NS}, we know $\textrm{tr}(P_{t}(x))=\textrm{tr}(P_{0}(x))$ to hold for each $t\in [0,1]$ for a.e.~$x\in X$. Given arbitrary $T\in\FinRk(H)$ and $x\in X$, choose $f\in C_{c}(X)$ with $f(x)=1$. Then \begin{align*} s\longmapsto\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}=\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}(f\odot T)(x)\rangle_{\mathcal{H}} \end{align*} \noindent is measurable on $[0,1]$. Furthermore, we estimate \begin{align*} |\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}|\leq ||W_{s}(x)||_{\mathcal{H}}\textrm{tr}(P_{0}(x))||\partial_{x}||\hspace{0.025cm}||T||_{\mathcal{K}(H)} \end{align*} \noindent for each $s\in [0,1]$. Moreover, we have $||W_{\hspace{0.025cm}.\hspace{0.025cm}}(\hspace{0.025cm}.\hspace{0.025cm})||_{\mathcal{H}}\in L^{2}([0,1],L^{2}(X))=L^{2}([0,1]\times X)$ since $\mu_{t}$ is admissible. Thus $W_{t}(x)$ is measurable in $t$ on $[0,1]$ for a.e.~$x\in X$. Using this and our previous estimate, we see that \begin{align*} T\longmapsto\int_{0}^{t}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}ds \end{align*} \noindent defines a unique element in $\mathcal{K}(H)^{*}=\mathcal{S}_{1}(H)$ by density of $\FinRk(H)\subset\mathcal{K}(H)$. Continuity when tested on finite rank operators holds by construction, while the estimate just above shows $\sup_{t\in [0,1]}||\tilde{P}_{t}(x)||_{\mathcal{S}_{1}(H)}$ to be finite. From this, the third statement follows.\par We turn to the last statement.
Let $f_{i}\in C_{c}(X)$ be an approximation of $1_{X}$ in $C_{b}(X)$ and choose arbitrary $T\in\FinRk(H)$. Then $\textrm{tr}(\tilde{P}_{t}(x)T)=\lim_{i}\textrm{tr}(\tilde{P}_{t}(x)f_{i}(x)T)$ for each $x\in X$. The first two statements of this lemma yield \begin{align*} x\longmapsto\textrm{tr}(\tilde{P}_{t}(x)f_{i}(x)T)=f_{i}(x)\int_{0}^{t}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}ds. \end{align*} \noindent In particular, the map above is measurable. Using this and density of $\FinRk(H)$ w.r.t.~the strong operator-topology, our last claim follows from pointwise approximation by measurable maps. \end{proof} \begin{lem}\label{LEM.Rctf} If $\mu_{t}\in\mathcal{A}(P,Q)$ and we fix a representative of $W_{t}\in L^{2}(X,\mathcal{H})$ for each $t\in [0,1]$, then there exists a representative $P_{t}^{rct}$ of $P_{t}\in L^{1}([0,1]\times X,\mathcal{S}_{1}(H))$ such that \begin{itemize} \item[1)]$P_{t}=\tilde{P}_{t}+P=:P_{t}^{rct}\in L^{1}([0,1]\times X,\mathcal{S}_{1}(H))$, \item[2)]$\mu_{t}=(P_{t}^{rct},W_{t})$, \item[3)]$\mu_{t}^{rct}(x):=(\theta_{P}^{2}(x)P_{t}^{rct}(x),\theta_{P}(x)W_{t}(x))\in\mathcal{A}(\theta_{P}^{2}(x)P(x),\theta_{P}^{2}(x)Q(x))$ for a.e.~$x\in X$. \end{itemize} \end{lem} \begin{proof} Let $\tilde{P}_{t}$ be as in Lemma \ref{LEM.Rcft_Msrbl}. For all $F\in\mathfrak{A}$, the same lemma implies \begin{align*} \tau((P_{t}-P)F)&=\int_{X}\textrm{tr}((P_{t}-P)(x)F(x))d\nu\\ &=\int_{0}^{t}\int_{X}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}F(x)\rangle_{\mathcal{H}}d\nu\hspace{0.025cm}ds\\ &=\int_{X}\int_{0}^{t}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}F(x)\rangle_{\mathcal{H}}ds\hspace{0.02cm}d\nu\\ &=\int_{X}\textrm{tr}(\tilde{P}_{t}(x)F(x))d\nu. \end{align*} \noindent We were able to use Fubini-Tonelli in the third equality since the integrated function was shown to be measurable and $\mu_{t}$ has finite energy. By definition, $F=f\odot T$ for some $f\in C_{c}(X)$ and some $T\in\FinRk(H)$.
We know $\textrm{tr}(\tilde{P}_{t}(x)T)\in L^{1}(X,d\nu)$ and $\int_{X}f(x)\textrm{tr}((P_{t}-P)(x)T)d\nu=\int_{X}f(x)\textrm{tr}(\tilde{P}_{t}(x)T)d\nu$. If we fix $t\in [0,1]$, this shows $\textrm{tr}((P_{t}-P)(x)T)=\textrm{tr}(\tilde{P}_{t}(x)T)$ for a.e.~$x\in X$. By density and countability of $\FinRk(H)$, we thus have $(P_{t}-P)(x)=\tilde{P}_{t}(x)$ for a.e.~$x\in X$ once we fix a $t\in [0,1]$.\par Using this, we know $P_{t}=\tilde{P}_{t}+P$ as measurable maps on $X$ modulo nullsets for each $t\in [0,1]$. By $4)$ in Lemma \ref{LEM.Rcft_Msrbl} and $P_{t}\in\mathcal{D}$, $\tilde{P}_{t}\in L^{1}([0,1]\times X,\mathcal{S}_{1}(H))$. Applying Lemma \ref{LEM.Rctf_NS} with $E=\mathbb{C}$, we have $\tilde{P}_{t}(x)=(P_{t}-P)(x)$ for each $t\in [0,1]$ for a.e.~$x\in X$. This and $3)$ in Lemma \ref{LEM.Rcft_Msrbl} show $\textrm{tr}(\tilde{P}_{t}(x))=0$ for each $t\in [0,1]$ for a.e.~$x\in X$. We set $P_{t}^{rct}(x):=\tilde{P}_{t}(x)+P(x)$, which is positive for all $t\in [0,1]$ for a.e.~$x\in X$ by $3)$ in Lemma \ref{LEM.Rcft_Msrbl} and $P_{t}^{rct}=P_{t}\in L^{1}([0,1]\times X,\mathcal{S}_{1}(H))$. Our remaining claims follow by construction of $\tilde{P}_{t}$. \end{proof} \begin{cor}\label{COR.Rctf_DblInt} If $\mu_{t}\in\mathcal{A}(P,Q)$, then $E(\mu_{t})=\int_{X}E(\mu_{t}^{rct}(x))d\nu_{P}$. \end{cor} \begin{proof} \begin{align*} E(\mu_{t})=\frac{1}{2}\int_{0}^{1}\int_{X}||W_{t}(x)||^{2}_{\mathcal{H}}d\nu\hspace{0.025cm}dt=\int_{X}\frac{1}{2}\int_{0}^{1}||\theta_{P}(x)W_{t}(x)||_{\mathcal{H}}^{2}dt\hspace{0.025cm}d\nu_{P}=\int_{X}E(\mu_{t}^{rct}(x))d\nu_{P}. \end{align*} \end{proof} We next focus on conditions allowing us to integrate a measurable selection of minimisers. Corollary \ref{COR.Rctf_DblInt}, i.e.~the mean energy representation, ensures the resulting path to be a minimiser itself. \begin{lem}\label{LEM.Pos_Appr} Let $F:X\longrightarrow\mathcal{S}_{2}(H)_{+}$ be a strongly measurable function with $\textrm{tr}(F^{2}(x))=1$ for a.e.~$x\in X$.
Then there exist simple functions $(S_{i})_{i\in\mathbb{N}}$ converging to $F^{2}$ pointwise a.e.~in the $||.||_{\mathcal{S}_{1}(H)}$-topology such that for all $i\in\mathbb{N}$ and a.e.~$x\in X$, $S_{i}(x)\in\mathcal{S}_{1}(H)_{+}$ and $\textrm{tr}(S_{i}(x))=1$. \end{lem} \begin{proof} As $F$ is strongly measurable, there exist simple functions $H_{i}$ converging to $F$ pointwise a.e.~in the $||.||_{\mathcal{S}_{2}(H)}$-topology. Setting $(H_{i})_{+}(x):=(H_{i}(x))_{+}$, we again obtain a simple function. By Lemma \ref{LEM.L2_Pst_Prj}, we know that $(H_{i})_{+}(x)$ is the metric projection of $H_{i}(x)$ onto the positive cone in $\mathcal{S}_{2}(H)$. Since $F(x)\geq 0$ a.e., this shows $(H_{i})_{+}$ to converge pointwise a.e.~to $F$ in the $||.||_{\mathcal{S}_{2}(H)}$-topology. Thus assume $H_{i}(x)\geq 0$ for a.e.~$x\in X$ and let $p\in\mathcal{S}_{1}(H)$ be a density matrix. We set $G_{i}(x):=H_{i}(x)+1_{X\setminus H_{i}^{-1}(0)}(x)p$ to obtain a sequence of simple functions $G_{i}$ that is non-zero for all $x\in X$. As $H_{i}$ converges pointwise a.e.~to $F$ and $F(x)\neq 0$ a.e., $1_{X\setminus H_{i}^{-1}(0)}(x)$ converges to zero for a.e.~$x\in X$.\par We normalise $G_{i}^{2}$ to obtain the required approximation. Consider the sequence of positive simple functions given by $S_{i}(x):=\textrm{tr}(G_{i}^{2}(x))^{-1}G_{i}^{2}(x)$ for each $x\in X$. This is well-defined because each $G_{i}$ has full support by construction. Pointwise convergence of $G_{i}(x)$ to $F(x)$ in $\mathcal{S}_{2}(H)$ for a.e.~$x\in X$ yields pointwise convergence of $G_{i}^{2}(x)$ to $F^{2}(x)$ in $\mathcal{S}_{1}(H)$. To see this, apply H\"older. From this, we have a.e.~convergence of $\sqrt{\textrm{tr}(G_{i}^{2}(x))}=||G_{i}(x)||_{\mathcal{S}_{2}(H)}$ to $||F(x)||_{\mathcal{S}_{2}(H)}=1$ and thus obtain a sequence $S_{i}$ as required. \end{proof} We establish an appropriate setting for applying the measurable selection theorem.
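Before setting up the selection argument, we record an elementary illustration (ours, not needed later) of the fibre-wise normalisation by $\theta_{P}$ used throughout.

\begin{bsp} Let $P(x):=f(x)p$ for a probability density $f$ on $(X,\nu)$ and a density matrix $p\in\mathcal{S}_{1}(H)$, as in the proof of Corollary \ref{COR.Mass_Prsv_Unbd}. Then $\theta_{P}(x)=f(x)^{-\frac{1}{2}}$ whenever $f(x)>0$, hence $\theta_{P}^{2}(x)P(x)=p$ is a density matrix in every such fibre, and $d\nu_{P}=\textrm{tr}(P(x))d\nu=f\hspace{0.025cm}d\nu$ is a probability measure. \end{bsp}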
Recall our construction of the distance $d$ on $\mathcal{S}_{cl}(\mathcal{K}(H))$ and $D$ on $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ in Subsection 3.3. For $0<C<\infty$, let $\textrm{Lip}_{C}:=\{f:[0,1]\longrightarrow \mathcal{S}_{cl}(\mathcal{K}(H))\ |\ f\ d\textrm{-Lipschitz}\ \textrm{with}\ ||f||_{\textrm{Lip}}\leq C\}$ and equip it with the restriction of $D$. Then $\textrm{Lip}_{C}\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ isometrically by construction. Arzel\'a-Ascoli immediately shows $(\textrm{Lip}_{C},D)$ to be a compact metric space. In particular, $\textrm{Lip}_{C}$ is closed, complete and separable. If $p,q$ are two density matrices with finite distance, then $\mathcal{M}(p,q)\subset\bigcup_{n=1}^{\infty}\textrm{Lip}_{n}$.\par Let $\partial$ be a vertical gradient such that $\partial_{x}$ has continuous dependence of minimisers on start- and endpoints for a.e.~$x\in X$, and $P,Q\in\mathcal{D}$ with $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$. Apply Lemma \ref{LEM.Pos_Appr} for $F=\theta_{P}\sqrt{P}$ to find a sequence $P_{i}$ of simple functions converging to $\theta_{P}^{2}P$, and do the same to obtain a sequence $Q_{i}$ converging to $\theta_{P}^{2}Q$. For $x\in X$ and $\mu_{t}\in\mathcal{A}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$, set \begin{align*} N_{\mu_{t}}:=\{\ (\mu_{t}^{i_{k}})_{k\in\mathbb{N}}\ |\ \mu_{t}^{i_{k}}\in\mathcal{M}(P_{i_{k}}(x),Q_{i_{k}}(x))\ \textrm{s.t.}\ \lim_{k\in\mathbb{N}} D(\mu_{t}^{i_{k}},\mu_{t})=0\} \end{align*} \noindent and define a multifunction from $X$ to $(C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))),D)$ by \begin{align*} \psi_{P,Q}(x):=\{\mu_{t}\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))\ |\ N_{\mu_{t}}\neq\emptyset\}.
\end{align*} \noindent Continuous dependence of minimisers on start- and endpoints ensures $\psi_{P,Q}(x)\neq\emptyset$ a.e., while closedness of $\psi_{P,Q}(x)$ in $(C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))),D)$ follows by closedness of $\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$ and construction of $\psi_{P,Q}(x)$. We are in the setting of Theorem 6.9.3 in \cite{BK.Bog_MsrThry}. We therefore obtain a measurable selection of minimisers if for all open $U\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$, the sets \begin{align*} \hat{\psi}_{P,Q}(U):=\{x\in X\ |\ \psi_{P,Q}(x)\cap U\neq\emptyset\} \end{align*} \noindent are measurable. \begin{rem} Each $\psi_{P,Q}$ depends not only on $P$ and $Q$, but also on $P_{i}$ and $Q_{i}$. These dependencies do not matter as we seek \textit{some} measurable selection for fixed $P$ and $Q$. \end{rem} \begin{lem}\label{LEM.Msrbl_Slct} Let $\partial$ be a vertical gradient such that $\partial_{x}$ has continuous dependence of minimisers on start- and endpoints for a.e.~$x\in X$. If $P,Q\in\mathcal{D}$ with $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$, then $\hat{\psi}_{P,Q}(U)$ is measurable for each open $U\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$. \end{lem} \begin{proof} Without loss of generality, we replace ``almost everywhere'' with ``everywhere'' in this lemma's assumptions. Let $U\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ be open. Then \begin{align*} \psi_{P,Q}(x)\cap U=\bigcup_{n=1}^{\infty}\Big(\psi_{P,Q}(x)\cap U\cap\textrm{Lip}_{n}\Big) \end{align*} \noindent since each admissible path lies in some $\textrm{Lip}_{C}$. However, $U\cap\textrm{Lip}_{n}$ is open in the relative topology of $\textrm{Lip}_{n}$ because $\textrm{Lip}_{n}\subset C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ is closed.
We thus have \begin{align*} \hat{\psi}_{P,Q}(U)=\bigcup_{n=1}^{\infty}\{x\in X\ |\ \psi_{P,Q}(x)\cap U\cap\textrm{Lip}_{n}\neq\emptyset\} \end{align*} \noindent and are left to check measurability of the sets on the right-hand side.\par Reducing notational overhead, we consider an open set $U\subset\textrm{Lip}_{C}$ for some $0<C<\infty$. Since $\textrm{Lip}_{C}$ is separable, there exists a countable set of open balls covering $U$. Our statement follows if for all $f\in\textrm{Lip}_{C}$ and all $\varepsilon>0$, the set \begin{align*} \hat{\psi}_{P,Q,C}(B_{\varepsilon}(f)):=\{x\in X\ |\ \psi_{P,Q}(x)\cap B_{\varepsilon}(f)\neq\emptyset\} \end{align*} \noindent is measurable. We claim that \begin{align*} \hat{\psi}_{P,Q,C}(B_{\varepsilon}(f))=\bigcup_{j=1}^{\infty}\bigcup_{k=1}^{\infty}\bigcap_{i=k}^{\infty}\{x\in X\ |\ \mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon-j^{-1}}(f)\neq\emptyset\}. \end{align*} Of course, $B_{\varepsilon-j^{-1}}(f)=\emptyset$ if $\varepsilon\leq j^{-1}$ holds. Let $x\in\hat{\psi}_{P,Q,C}(B_{\varepsilon}(f))$ and choose a $\mu_{t}\in\psi_{P,Q}(x)\cap B_{\varepsilon}(f)$. Pick a $j\in\mathbb{N}$ such that $\mu_{t}\in B_{\varepsilon-j^{-1}}(f)$. By definition of $\psi_{P,Q}(x)$ and the triangle inequality, there exist a sequence $(\mu_{t}^{i})_{i}$ of minimisers as in $N_{\mu_{t}}$, some $j_{0}\geq j$ and some $i_{0}\in\mathbb{N}$ such that $\mu_{t}^{i}\in B_{\varepsilon-j_{0}^{-1}}(f)$ for all $i\geq i_{0}$. Hence \begin{align*} x\in\bigcap_{i=i_{0}}^{\infty}\{x\in X\ |\ \mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon-j_{0}^{-1}}(f)\neq\emptyset\}. \end{align*} \noindent This shows one direction. For the converse, choose an arbitrary $x\in \bigcap_{i=k}^{\infty}\{x\in X\ |\ \mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon-j^{-1}}(f)\neq\emptyset\}$. By hypothesis, we have a sequence of minimisers $\mu_{t}^{i}\in\mathcal{M}(P_{i}(x),Q_{i}(x))\cap\textrm{Lip}_{C}$ such that $D(f,\mu_{t}^{i})<\varepsilon-j^{-1}$ for each $i\geq k$.
As $\textrm{Lip}_{C}$ is compact, we extract a $D$-converging subsequence $\mu_{t}^{i_{k}}$, all of whose elements lie in $B_{\varepsilon-j^{-1}}(f)$. By $2)$ in Definition \ref{DFN.Cnt_Dpd}, we see the limit to be a $\mu_{t}\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$. The limit lies in the closed ball of radius $\varepsilon-j^{-1}$ around $f$, and $j^{-1}>0$, hence $\mu_{t}\in\psi_{P,Q}(x)\cap B_{\varepsilon}(f)$ and therefore $x\in\hat{\psi}_{P,Q,C}(B_{\varepsilon}(f))$.\par To conclude, we show each $\{x\in X\ |\ \mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon}(f)\neq\emptyset\}$ to be measurable. Write $P_{i}=\sum_{j=1}^{n_{i}}1_{A_{ij}}p_{ij}$, $Q_{i}=\sum_{j=1}^{m_{i}}1_{B_{ij}}q_{ij}$, and choose a common finite refinement $C_{ij}$ of the partitions $A_{ij}$ and $B_{ij}$ of $X$ such that each $C_{ij}$ is measurable. Then write \begin{align*} P_{i}=\sum_{j=1}^{k_{i}}1_{C_{ij}}\tilde{p}_{ij},\ Q_{i}=\sum_{j=1}^{k_{i}}1_{C_{ij}}\tilde{q}_{ij} \end{align*} \noindent where the $\tilde{p}_{ij}$ or $\tilde{q}_{ij}$ may repeat as $j$ varies; this is irrelevant for the proof. From the representation above, we see that $\mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon}(f)\neq\emptyset$ holds for some $x\in C_{ij}$ if and only if it holds for all $x\in C_{ij}$. Thus $\{x\in X\ |\ \mathcal{M}(P_{i}(x),Q_{i}(x))\cap B_{\varepsilon}(f)\neq\emptyset\}$ is given by a finite union of some $C_{ij}$, hence measurable. \end{proof} \begin{ntn} For all $x\in X$, the $L^{2}$-Wasserstein distance on $\mathcal{D}_{b}\subset\mathcal{S}_{1}(H)_{+}$ arising from $\partial_{x}$ is denoted by $\mathcal{W}_{2,x}$. \end{ntn} \begin{thm}\label{THM.Disint} Let $\partial$ be a vertical gradient such that $\partial_{x}$ has continuous dependence of minimisers on start- and endpoints for a.e.~$x\in X$.
For all $P,Q\in\mathcal{D}$ with finite distance, we have \begin{align*} \mathcal{W}_{2}^{2}(P,Q)=\int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P} \end{align*} \noindent and there exists a minimiser $\mu_{t}$ of $\mathcal{W}_{2}(P,Q)$ such that $\theta_{P}(x)^{2}\mu_{t}(x)\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$ for a.e.~$x\in X$. \end{thm} \begin{proof} By finiteness of $\mathcal{W}_{2}(P,Q)$, we have $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$. Lemma \ref{LEM.Msrbl_Slct} yields a measurable selection of minimisers $\xi$. By construction, $\xi(x)\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$ for a.e.~$x\in X$. We claim $(t,x)\longmapsto\xi(x)(t)$ to be a strongly measurable map from $[0,1]\times X$ to $(\mathcal{S}_{1}(H),||.||_{\mathcal{S}_{1}(H)})$. For all $T_{k}$ as in the beginning of Subsection 3.3, evaluation at $T_{k}$ is a continuous map from $C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H)))$ to $C([0,1])$. Since $(T_{k})_{k\in\mathbb{N}}$ is $w^{*}$-dense in $\mathcal{B}(H)$, pointwise approximation shows evaluation at each $T\in\mathcal{B}(H)$ to be strongly measurable. Moreover, $\xi$ is measurable w.r.t.~$\mathcal{B}(X)$ and $\mathcal{B}(C([0,1],\mathcal{S}_{cl}(\mathcal{K}(H))))$ by construction. Taken together, this implies measurability of $\textrm{tr}(\xi(x)(t)T)$ on $[0,1]\times X$ for each $T\in\mathcal{B}(H)$. As $\mathcal{S}_{1}(H)$ is separable, this proves the claim.\par We have thus proved strong measurability of $P_{t}(x):=\textrm{tr}(P(x))\xi(x)(t)$ as a map from $[0,1]\times X$ to $\mathcal{S}_{1}(H)$. By construction of $\xi$, we have $P_{t}(x)\geq 0$ with $||P_{t}(x)||_{\mathcal{S}_{1}(H)}=\textrm{tr}(P(x))$ for each $t\in [0,1]$ for a.e.~$x\in X$. Hence $P_{t}\in\mathcal{D}$ for each $t\in [0,1]$. Let $t\longmapsto w_{t}(x)$ be the vector field associated to the admissible path $\xi(x)$.
By strong measurability of $\xi$, the maps \begin{align*} \textrm{tr}(P(x))f(x)\frac{d}{dt}\textrm{tr}(\xi(x)(t)T)=\langle\textrm{tr}(P(x))^{\frac{1}{2}}w_{t}(x),M_{P_{t}(x)}^{\frac{1}{2}}\partial_{x}(f\odot T)(x)\rangle_{\mathcal{H}} \end{align*} \noindent are measurable on $[0,1]\times X$ for each $f\odot T\in\mathfrak{A}$. Furthermore, $M_{\xi(x)(t)}$ and $\partial_{x}T$ are measurable on $[0,1]\times X$ for each $T\in\FinRk(H)$. Hence each $||T||_{\xi(x)(t)}$ is measurable on $[0,1]\times X$ as well. This in turn implies \begin{align*} ||w_{t}(x)||_{\mathcal{H}}=\sup_{T\in\FinRk(H)}||T||_{\xi(x)(t)}^{-1}\langle w_{t}(x),M_{\xi(x)(t)}^{\frac{1}{2}}\partial_{x}T\rangle_{\mathcal{H}}. \end{align*} \noindent Yet $\FinRk(H)$ is an extension algebra for each fibre gradient, see Remark \ref{REM.Curveball}. As a pointwise limit of measurable maps on $[0,1]\times X$, $||w_{t}(x)||_{\mathcal{H}}$ is therefore measurable on $[0,1]\times X$.\par In particular, $\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))=\int_{0}^{1}||w_{s}(x)||_{\mathcal{H}}^{2}ds$ is measurable on $X$. We therefore know \begin{align*} \int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P}\leq\mathcal{W}_{2}^{2}(P,Q)<\infty \end{align*} \noindent by Corollary \ref{COR.Rctf_DblInt} and construction of $\xi$. This implies integrability of $\textrm{tr}(P(x))||w_{t}(x)||_{\mathcal{H}}^{2}$ on $[0,1]\times X$. We set $W_{t}(x):=\textrm{tr}(P(x))^{\frac{1}{2}}w_{t}(x)$, and $||W_{t}(x)||_{\mathcal{H}}^{2}$ is integrable by what we just showed.
By the above, integrability of $||W_{t}(x)||_{\mathcal{H}}^{2}$, and the separating function $g(f\odot T)=||\partial (f\odot T)||_{\infty}$, we have \begin{align*} \int_{X}\textrm{tr}((P_{t}-P)(x)(f\odot T)(x))d\nu&=\int_{X}\int_{0}^{t}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}(f\odot T)(x)\rangle_{\mathcal{H}}dsd\nu\\ &=\int_{0}^{t}\int_{X}\langle W_{s}(x),M_{P_{s}(x)}^{\frac{1}{2}}\partial_{x}(f\odot T)(x)\rangle_{\mathcal{H}}d\nu ds \end{align*} \noindent for each $f\odot T\in\mathfrak{A}$. If $R_{t}$ is the projection onto $T_{P_{t}}\mathcal{D}\subset L^{2}(X,\mathcal{H})$, then $\mu_{t}=(P_{t},R_{t}(W_{t}))$ is an admissible path by what we showed just now. On the other hand, we have \begin{align*} \int_{0}^{1}||R_{s}(W_{s})||_{L^{2}(X,\mathcal{H})}^{2}ds\leq \int_{0}^{1}||W_{s}||_{L^{2}(X,\mathcal{H})}^{2}ds=\int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P} \end{align*} \noindent and we see $\mu_{t}$ to be a minimiser as required. The statement follows. \end{proof} \begin{cor}\label{COR.Disint} Let $\partial$ be a vertical gradient and $H$ finite-dimensional. For all $P,Q\in\mathcal{D}$ with $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$, we have \begin{align*} \mathcal{W}_{2}^{2}(P,Q)=\int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P} \end{align*} \noindent and there exists a minimiser $\mu_{t}$ of $\mathcal{W}_{2}(P,Q)$ such that $\theta_{P}(x)^{2}\mu_{t}(x)\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))$ for a.e.~$x\in X$. \end{cor} \begin{proof} Using Lemma \ref{LEM.FinDim_CntDpd_II}, we see each $\partial_{x}$ to have continuous dependence of minimisers on start- and endpoints. By Proposition \ref{PRP.FinDim_WkMetr}, each $\mathcal{W}_{2,x}$ has finite diameter. This implies $\mathcal{W}_{2}(P,Q)<\infty$ if $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$ since $\textrm{tr}(P(x))$ is integrable.
We apply Theorem \ref{THM.Disint} to conclude. \end{proof} To conclude this subsection, we give a toy application of Theorem \ref{THM.Disint}. Let $H$ be finite-dimensional and $\nu$ a probability measure. We view density matrices as modelling a system's states and assume we are given a vertical gradient $\partial$. Minimisers for $\mathcal{W}_{2,x}(p,q)$ describe all possible ways for the system to evolve from $p$ to $q$ under the cost, hence geometry, determined by $\partial_{x}$.\par We are in the following situation: if the system changes states, then it evolves along a minimiser determined by \textit{some} $\partial_{x}$. However, we are unable to say which $\partial_{x}$ is chosen for any particular state change. We hope to find an average evolution. For $p,q\in\mathcal{S}_{1}(H)_{+}$ density matrices, set $P(x):=p$ and $Q(x):=q$. Both $P,Q\in\mathcal{D}$ with $\textrm{tr}(P(x))=\textrm{tr}(Q(x))=1$, hence $d\nu_{P}=d\nu$. By Corollary \ref{COR.Disint}, there exists a minimiser $\mu_{t}\in\mathcal{M}(P,Q)$ and we have \begin{align*} \mathcal{W}_{2}^{2}(P,Q)=\int_{X}\mathcal{W}_{2,x}^{2}(p,q)\hspace{0.025cm}d\nu. \end{align*} \noindent As $\mu_{t}(x)\in\mathcal{M}(p,q)$, we consider $\mu_{t}$ to be an average evolution (see FIG. \ref{FIG.1} and FIG. \ref{FIG.2}).\par Another application of Theorem \ref{THM.Disint} and its proof will be presented in the next subsection in the form of mean entropic curvature bounds. \begin{figure} \begin{center} \begin{tikzpicture} \node at (1.81,0.583) {$p$}; \node at (14.125,-0.445) {$q$}; \draw [black,fill=black] (1.935,0.483) circle (0.03cm); \draw [black,fill=black] (14,-0.305) circle (0.03cm); \draw [dashed] (2,0.5) .. node[above, yshift=-0.05cm] (ref) {\small{$\mu_{t}(x_{1})$}} controls (8,3.25) .. (13.935,-0.278); \draw [dashed] (2,0.5) .. node[above, yshift=-0.05cm] {\small{$\mu_{t}(x_{2})$}} controls (8,2.375) .. (13.935,-0.278); \draw [dashed] (2,0.5) ..
node[above, yshift=-0.05cm] {\small{$\mu_{t}(x_{3})$}} node[below] {$\vdots$} controls (8,1.5) .. (13.935,-0.278); \node [above = 0cm of ref] {$\vdots$}; \end{tikzpicture} \caption{} \label{FIG.1} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture} \node at (1.81,0.583) {$p$}; \node at (14.125,-0.445) {$q$}; \draw [black,fill=black] (1.935,0.483) circle (0.03cm); \draw [black,fill=black] (14,-0.305) circle (0.03cm); \draw [fill=gray!40, draw=none] (2,0.5) .. node[above, yshift=-0.05cm] (ref){} controls (8,3.25) .. (13.935,-0.278); \draw [line width=0.025cm] (2,0.5) .. node[above] {$\mu_{t}$} controls (8,1.875) .. (13.935,-0.278); \draw [fill=white, draw=none] (2,0.5) .. node[below] {$\vdots$} controls (8,1.5) .. (13.935,-0.278); \node [above = 0cm of ref] {$\vdots$}; \end{tikzpicture} \caption{} \label{FIG.2} \end{center} \end{figure} \subsection{Mean entropic curvature bounds} Let $H$ be finite-dimensional and $\partial$ a symmetric gradient for $(M_{n}(\mathbb{C}),\textrm{tr})$. In the classical setting, Sturm introduced entropic curvature bounds for metric measure spaces \cite{SturmGMMSI}. Theorem \ref{THM.Disint} leads us to consider a mean relative entropy, as well as mean entropic curvature bounds. For the latter, we prove a local to global theorem. The next definition uses Proposition \ref{PRP.Entrp_Rpr}, i.e. $\textrm{Ent}(p|\tau)=\tau(p\log p)$ for all density matrices. \begin{dfn}\label{DFN.Crv_Fibre} We say that $(M_{n}(\mathbb{C}),\textrm{tr},\partial)$ has curvature $\geq K\in\mathbb{R}$ if for all $p,q\in\mathcal{D}_{b}$, there exists some $\mu_{t}\in\mathcal{M}(p,q)$ such that \begin{align} \textrm{tr}(\rho_{t}\log\rho_{t})\leq (1-t)\textrm{tr}(p\log p)+t\textrm{tr}(q\log q)-\frac{K}{2}t(1-t)\mathcal{W}_{2}^{2}(p,q) \end{align} \noindent for each $t\in [0,1]$. 
We set $\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial):=\sup\{K\in\mathbb{R}\ |\ (M_{n}(\mathbb{C}),\textrm{tr},\partial)\ \textrm{has}\ \textrm{curvature} \geq K\}$, where $\sup\emptyset=-\infty$ as usual. \end{dfn} \begin{rem} By definition, each $(M_{n}(\mathbb{C}),\textrm{tr},\partial)$ has curvature $\geq\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial)$. \end{rem} \begin{dfn} For all $p,q\in\mathcal{D}_{b}$, set $\mathcal{M}(p,q,K):=\{\mu_{t}\in\mathcal{M}(p,q)\ |\ \mu_{t}\ \textrm{satisfies}\ (1)\ \textrm{for}\ K\}$. \end{dfn} \begin{lem}\label{LEM.FinDim_CntDpd_III} Let $\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial)\geq K$. For all $p,q\in\mathcal{D}_{b}$ and all $(p_{i})_{i\in\mathbb{N}},(q_{i})_{i\in\mathbb{N}}\subset\mathcal{D}_{b}$ with $p_{i}\longrightarrow p$, resp.~$q_{i}\longrightarrow q$ in the $||.||_{\mathcal{S}_{1}(H)}$-topology, the following hold: \begin{itemize} \item[1)] there exist $\mu_{t}\in\mathcal{M}(p,q,K)$ and $\mu_{t}^{i_{k}}\in\mathcal{M}(p_{i_{k}},q_{i_{k}},K)$ with $\lim_{k\in\mathbb{N}}D(\mu_{t}^{i_{k}},\mu_{t})=0$, \item[2)] the limit of each $D$-converging sequence of $\mu_{t}^{i}\in\mathcal{M}(p,q,K)$ lies in $\mathcal{M}(p,q,K)$. \end{itemize} \end{lem} \begin{proof} Convergence w.r.t.~the $||.||_{\mathcal{S}_{1}}$-topology implies $\lim\tau(p_{i}\log p_{i})=\tau(p\log p)$, and Proposition \ref{PRP.FinDim_WkMetr} shows convergence of $\mathcal{W}_{2}(p_{i},q_{i})$ to $\mathcal{W}_{2}(p,q)$. Hence $(1)$ is a closed condition, and we argue analogously to our proof of Lemma \ref{LEM.FinDim_CntDpd_II}. \end{proof} \begin{prp}\label{PRP.Cnt_Dpd_II} If $\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial)\geq K$, then $\mathcal{M}(p,q,K)$ is non-empty and closed w.r.t.~$D$ for each $p,q\in\mathcal{D}_{b}$. \end{prp} \begin{proof} By definition of $\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial)$, $\mathcal{M}(p,q,K)$ is non-empty. Closedness follows from $\mathcal{M}(p,q)$ being closed and $(1)$ being a closed condition.
\end{proof} We define the mean relative entropy of a density, as well as an associated synthetic curvature bound condition. The latter is called the mean entropic curvature bound. Lemma \ref{LEM.FinDim_CntDpd_III} and Proposition \ref{PRP.Cnt_Dpd_II} allow us to argue analogously to Lemma \ref{LEM.Msrbl_Slct}. This yields a theorem similar in spirit to the disintegration theorem. Note that for all $P\in\mathcal{D}$, $x\longmapsto\textrm{tr}(P(x)\log P(x))$ is measurable. \begin{dfn} Let $\partial$ be a vertical gradient for $(C_{0}(X,M_{n}(\mathbb{C})),\nu\otimes\textrm{tr})$. \begin{itemize} \item[1)] For all $P\in\mathcal{D}$, $\textrm{Ent}_{m}(P|\nu\otimes\textrm{tr}):=\int_{X}\textrm{tr}(P(x)\log P(x))d\nu\in\mathbb{R}\cup\{\pm\infty\}$ is the mean relative entropy. \item[2)] We say that $(C_{0}(X,M_{n}(\mathbb{C})),\nu\otimes\textrm{tr},\partial)$ has mean curvature $\geq K\in\mathbb{R}$ if for all $f\in\mathcal{D}(X,\nu)$ and for all $P,Q\in\mathcal{D}_{f}$, there exists a minimiser $\mu_{t}\in\mathcal{A}(P,Q)$ such that \begin{align} \textrm{Ent}_{m}(P_{t}|\nu\otimes\textrm{tr})\leq (1-t)\textrm{Ent}_{m}(P|\nu\otimes\textrm{tr})+t\textrm{Ent}_{m}(Q|\nu\otimes\textrm{tr})-\frac{K}{2}t(1-t)\mathcal{W}_{2}^{2}(P,Q) \end{align} \noindent for each $t\in [0,1]$. We set \begin{align*} \mcurv(\nu\otimes\textrm{tr},\partial):=\sup\{K\in\mathbb{R}\ |\ (C_{0}(X,M_{n}(\mathbb{C})),\nu\otimes\textrm{tr},\partial)\ \textrm{has}\ \textrm{mean}\ \textrm{curvature} \geq K\}. \end{align*} \end{itemize} \end{dfn} \begin{prp} The mean relative entropy is a convex function on $\mathcal{D}$. If $X$ is compact and $P\in\mathcal{D}_{b}$, then $\textrm{Ent}(P|\nu\otimes\textrm{tr})=\textrm{Ent}_{m}(P|\nu\otimes\textrm{tr})$. \end{prp} \begin{proof} The first statement follows immediately from convexity of the noncommutative relative entropy.
For the second one, use Proposition \ref{PRP.Entrp_Rpr} to see the first equality in \begin{align*} \textrm{Ent}(P|\nu\otimes\textrm{tr})=\tau(P\log P)=\int_{X}\textrm{tr}(P(x)\log P(x))d\nu=\textrm{Ent}_{m}(P|\nu\otimes\textrm{tr}). \end{align*} \end{proof} \begin{thm}\label{THM.NC_MCrv} If $\partial$ is a vertical gradient for $(C_{0}(X,M_{n}(\mathbb{C})),\nu\otimes\textrm{tr})$, then \begin{align*} \mcurv(\nu\otimes\textrm{tr},\partial)\geq \essinf_{x\in X}\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial_{x}). \end{align*} \end{thm} \begin{proof} If the right-hand side is $-\infty$, there is nothing to show. We therefore assume \begin{align*} \essinf_{x\in X}\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial_{x})\geq K \end{align*} \noindent for some $K\in\mathbb{R}$. For $f\in\mathcal{D}(X,\nu)$, let $P,Q\in\mathcal{D}_{f}$. By definition, $\textrm{tr}(P(x))=\textrm{tr}(Q(x))$ for a.e.~$x\in X$. By Lemma \ref{LEM.FinDim_CntDpd_III} and Proposition \ref{PRP.Cnt_Dpd_II}, each $\mathcal{M}(p,q,K)$ has the same properties we required of $\mathcal{M}(p,q)$ when proving Lemma \ref{LEM.Msrbl_Slct}. Arguing as in Theorem \ref{THM.Disint}, we obtain a minimiser $\mu_{t}\in\mathcal{A}(P,Q)$. Our choice of $\mathcal{M}(p,q,K)$ instead of $\mathcal{M}(p,q)$ ensures $\theta_{P}(x)^{2}\mu_{t}(x)\in\mathcal{M}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x),K)$.\par For all density matrices $p$ and $C>0$, we have $\textrm{tr}(Cp\log Cp)=\textrm{tr}(Cp\log C)+\textrm{tr}(Cp\log p)$ by functional calculus and unitality of $M_{n}(\mathbb{C})$. Using this, we obtain \begin{align*} \textrm{Ent}_{m}(P_{t}|\nu\otimes\textrm{tr})-\int_{X}\textrm{tr}(P_{t}(x)\log\textrm{tr}(P(x)))d\nu=\int_{X}\textrm{tr}(\theta_{P}(x)^{2}P_{t}(x)\log\theta_{P}(x)^{2}P_{t}(x))d\nu_{P}.
\end{align*} \noindent Since $\theta_{P}(x)^{2}\mu_{t}(x)$ satisfies $(1)$ with $K$ for a.e.~$x\in X$, the right-hand side of the equation just above is less than or equal to \begin{align*} &\ (1-t)\int_{X}\textrm{tr}(\theta_{P}(x)^{2}P(x)\log\theta_{P}(x)^{2}P(x))d\nu_{P}\\ +&\ t\int_{X}\textrm{tr}(\theta_{P}(x)^{2}Q(x)\log\theta_{P}(x)^{2}Q(x))d\nu_{P}\\ -&\ \frac{K}{2}t(1-t)\int_{X}\mathcal{W}_{2,x}^{2}(\theta_{P}(x)^{2}P(x),\theta_{P}(x)^{2}Q(x))d\nu_{P}. \end{align*} \noindent The last summand equals $-\frac{K}{2}t(1-t)\mathcal{W}_{2}^{2}(P,Q)$ by Theorem \ref{THM.Disint}. Knowing this, it suffices to add \begin{align*} \int_{X}\textrm{tr}(P_{t}(x)\log\textrm{tr}(P(x)))d\nu=(1-t)\int_{X}\textrm{tr}(P_{t}(x)\log\textrm{tr}(P(x)))d\nu+t\int_{X}\textrm{tr}(P_{t}(x)\log\textrm{tr}(P(x)))d\nu \end{align*} \noindent to both sides of the estimate we just proved and to use $\textrm{tr}(Cp\log Cp)=\textrm{tr}(Cp\log C)+\textrm{tr}(Cp\log p)$ to show that $\mu_{t}$ satisfies $(2)$ for $\essinf_{x\in X}\curv(M_{n}(\mathbb{C}),\textrm{tr},\partial_{x})$. \end{proof} \section{Disintegrating $L^{2}$-Wasserstein distances} We extend the notion of vertical gradients to $\mathcal{K}(H)$-bundles, introduce the disintegration problem for unital $C^{*}$-algebras and give sufficient conditions for solving it. Finally, we outline plans to achieve disintegration for more general fields of $C^{*}$-algebras.\par As in the last section, let $X$ be a locally compact Hausdorff space, $\mathcal{B}(X)$ its Borel $\sigma$-algebra, $(X,\mathcal{B}(X))$ a separable measure space and $H$ a separable Hilbert space. Furthermore, let $X$ admit a continuous partition of unity for each locally finite open cover. We say that $X$ has sufficiently many continuous partitions of unity. Normal spaces have sufficiently many continuous partitions of unity, see \cite{BK.Quer_MngthTop}. Compact Hausdorff spaces and paracompact topological manifolds are examples of normal spaces.
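As a purely numerical aside (not part of the formal development, and using NumPy/SciPy rather than anything from this paper), the scalar-rescaling identity $\textrm{tr}(Cp\log Cp)=\textrm{tr}(Cp\log C)+\textrm{tr}(Cp\log p)$ from the proof of Theorem \ref{THM.NC_MCrv} can be checked on a random density matrix:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

# Random faithful density matrix p on H = C^4, i.e. p > 0 with tr(p) = 1.
n = 4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
p = a @ a.conj().T + 0.1 * np.eye(n)  # strictly positive definite
p /= np.trace(p).real

C = 2.5  # an arbitrary positive scalar

# tr(Cp log Cp) versus tr(Cp log C) + tr(Cp log p).
lhs = np.trace(C * p @ logm(C * p)).real
rhs = C * np.log(C) + C * np.trace(p @ logm(p)).real  # tr(Cp log C) = C log C, as tr(p) = 1

print(abs(lhs - rhs))  # vanishes up to floating-point error
```

The identity holds because $\log(Cp)=(\log C)1+\log p$ for $p>0$ by functional calculus.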
\subsection{Vertical gradients for $\mathcal{K}(H)$-bundles} Let $E$ be a hermitian vector bundle over $X$ with fibres $H$. Since $E$ is hermitian, all structure maps of $\textrm{End}(E)$ must be unitary. In particular, $||.||_{\mathcal{B}(H)}$ changes appropriately under structure maps and we are able to define bounded sections accordingly. Finally, structure maps send compact operators to compact operators. We obtain the compact endomorphism bundle $\textrm{End}_{\mathcal{K}}(E)$ with fibres given by $\mathcal{K}(H)$, its structure maps being $^{*}$-homomorphisms. $\Gamma_{0}(\textrm{End}_{\mathcal{K}}(E))$ is the $C^{*}$-algebra of continuous sections vanishing at infinity. \begin{ntn} Write $\mathcal{T}(E)$ for the set of trivialising open subsets. If we pick a continuous partition of unity $(\varphi_{i})_{i\in I}$, we demand each $\textrm{supp}\hspace{0.025cm}\varphi_{i}$ to be a subset of some $U_{i}\in\mathcal{T}(E)$. \end{ntn} Let $\tau$ be a trace on $\Gamma_{0}(\textrm{End}_{\mathcal{K}}(E))$. Given a continuous partition of unity $(\varphi_{i})_{i\in I}$, surjectivity of the restriction map implies that each $F\longmapsto \tau(\varphi_{i}F_{|U_{i}})$ defines a trace $\tau_{|U_{i}}$ on $\Gamma_{0}(\textrm{End}_{\mathcal{K}}(E_{|U_{i}}))$. This allows a general notion of product trace. \begin{dfn} We call a trace $\tau$ on $\Gamma_{0}(\textrm{End}_{\mathcal{K}}(E))$ a product trace if there exists a continuous partition of unity $(\varphi_{i})_{i\in I}$ such that each $\tau_{|U_{i}}$ is a product trace. \end{dfn} \begin{prp}\label{PRP.Prd_Tr_All_Part} If $\tau$ is a product trace, then $\tau_{|U_{i}}$ is a product trace for all continuous partitions of unity $(\varphi_{i})_{i\in I}$. \end{prp} \begin{proof} Choose a continuous partition of unity $(\eta_{j})_{j\in J}$ such that each $\tau_{|U_{j}}$ is a product trace.
Write $\varphi_{i}=\sum_{j\in J}\eta_{j}\varphi_{i}$ for an arbitrary continuous partition of unity $(\varphi_{i})_{i\in I}$ and set $\chi_{i,j}:=\eta_{j}\varphi_{i}$. Each $(\chi_{i,j})_{(i,j)\in I\times J}$ is itself a continuous partition of unity and each $\tau_{|U_{j}}$ a product trace. Therefore $\tau_{|U_{(i,j)}}$ must be a product trace as well. From this, the statement follows at once. \end{proof} \begin{cor}\label{COR.Prd_Tr_Bdl} If $H$ is finite-dimensional and there exists a continuous partition of unity $(\varphi_{i})_{i\in I}$ such that each $\tau_{|U_{i}}$ is finite, then $\tau$ is a product trace. \end{cor} \begin{proof} This follows from the above proposition and Corollary \ref{COR.Prd_Tr} by finite-dimensionality. \end{proof} Given a continuous partition of unity $(\varphi_{i})_{i\in I}$ and an $F\in\Gamma_{0}(\textrm{End}_{\mathcal{K}}(E))$, we write \begin{align} \tau(F)=\sum_{i\in I}\tau(\varphi_{i}F)=\sum_{i\in I}\int_{U_{i}}\varphi_{i}(x)\textrm{tr}(F_{|U_{i}}(x))d\nu_{i} \end{align} \noindent where the right-hand side is independent of our choices since the left-hand side already is. Let $\Gamma_{c}(\textrm{End}_{\mathcal{K}}(E))$ denote the space of continuous sections with compact support. \begin{dfn} If $p\in [1,\infty)$, then $\Gamma^{p}(\textrm{End}_{\mathcal{K}}(E),\tau)$ is defined as the Hausdorff-completion of $\Gamma_{c}(\textrm{End}_{\mathcal{K}}(E))$ w.r.t.~the semi-norm \begin{align*} ||F||_{p}:=\left(\sum_{i\in I}\int_{U_{i}}\varphi_{i}(x)\textrm{tr}(|F_{|U_{i}}(x)|^{p})d\nu_{i}\right)^{\frac{1}{p}}. \end{align*} \noindent For $p=\infty$, let $\Gamma^{\infty}(\textrm{End}_{\mathcal{K}}(E),\tau)$ be the space of bounded measurable sections modulo nullsets with norm \begin{align*} ||F||_{\infty}:=\esssup_{x\in X} ||F(x)||_{\mathcal{B}(H)}. \end{align*} \end{dfn} \begin{rem} All $\Gamma^{p}(\textrm{End}_{\mathcal{K}}(E),\tau)$ are Banach spaces.
$\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E),\tau)$ is a Hilbert space with the obvious inner product, and we represent $\Gamma^{\infty}(\textrm{End}_{\mathcal{K}}(E),\tau)$ canonically over $\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E),\tau)$. This representation trivialises to the usual one used in the fourth section. The definition of $||.||_{\infty}$ makes sense since structure maps are unitary. \end{rem} \begin{ntn} From now on, we suppress $\tau$ in the notation of the above $L^{p}$-spaces. \end{ntn} \begin{prp} For all $p\in [1,\infty]$, we have $L^{p}(\Gamma_{0}(E),\tau)=\Gamma^{p}(\End_{\mathcal{K}}(E))$. \end{prp} \begin{proof} Use $(3)$ and Proposition \ref{PRP.Prd_Tr_LP}. \end{proof} Let $\bigoplus_{k=1}^{m}E$ be the $m$-th Whitney sum of $E$. Next, fix a product trace $\tau$ and consider the $\Gamma^{\infty}(\textrm{End}_{\mathcal{K}}(E))$-bimodule $\bigoplus_{k=1}^{m}\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))=\Gamma^{2}(\textrm{End}_{\mathcal{K}}(\bigoplus_{k=1}^{m}E))$. By construction, the latter trivialises to $L^{2}(U,\bigoplus_{k=1}^{m}\mathcal{S}_{2}(H),\tau_{|U})$ for each $U\in\mathcal{T}(E)$. For a continuous partition of unity $(\varphi_{i})_{i\in I}$, we have \begin{align*} FG=\sum_{i\in I}\varphi_{i}F_{|U_{i}}G_{|U_{i}},\ GF=\sum_{i\in I}\varphi_{i}G_{|U_{i}}F_{|U_{i}} \end{align*} \noindent for each $F\in\Gamma^{\infty}(\textrm{End}_{\mathcal{K}}(E))$ and $G\in\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))$. As in the trivial case, we obtain a canonical symmetric bimodule structure. Moreover, we see that \begin{align} M_{P}(G)=\sum_{i\in I}\varphi_{i}M_{P_{|U_{i}}}(G_{|U_{i}}) \end{align} \noindent for each $P\in\mathcal{D}_{b}$ and $G\in\bigoplus_{k=1}^{m}\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))\cap\bigoplus_{k=1}^{m}\Gamma_{\textrm{loc}}^{\infty}(\textrm{End}_{\mathcal{K}}(E))$.
Here, $\Gamma_{\textrm{loc}}^{\infty}(\textrm{End}_{\mathcal{K}}(E))$ denotes the locally bounded sections modulo nullsets.\par Consider an unbounded $C_{c}(X)$-module map $\Phi$. Given a continuous partition of unity $(\varphi_{i})_{i\in I}$, we define linear maps from $\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E_{|U_{i}}))$ to $\bigoplus_{k=1}^{m}\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E_{|U_{i}}))$ by setting $\Phi_{|U_{i}}(F):=\Phi(\varphi_{i}F_{|U_{i}})=\varphi_{i}\Phi (F)$. \begin{rem} In the next definition, we could choose to replace $\bigoplus_{k=1}^{m}\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))$ by a symmetric Hilbert $\Gamma^{\infty}(\textrm{End}_{\mathcal{K}}(E))$-subbimodule which is furthermore a subsheaf. However, composing $\partial$ with the subsheaf inclusion would then yield a vertical gradient in the sense of Definition \ref{DFN.Vrt_Grd_Bdl}. \end{rem} \begin{dfn}\label{DFN.Vrt_Grd_Bdl} An unbounded $C_{c}(X)$-module map $\partial:\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))\longrightarrow\bigoplus_{k=1}^{m}\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))$ is a vertical gradient if there exists a continuous partition of unity $(\varphi_{i})_{i\in I}$ such that each $\partial_{|U_{i}}$ is a vertical gradient in the sense of Definition \ref{DFN.Vrt_Grd}. \end{dfn} \begin{rem} Definition \ref{DFN.Vrt_Grd_Bdl} is consistent with Definition \ref{DFN.Vrt_Grd}. Arguing as in Proposition \ref{PRP.Prd_Tr_All_Part}, an unbounded $C_{c}(X)$-module map $\partial$ as above is a vertical gradient if and only if all $\partial_{|U_{i}}$ are vertical gradients for each continuous partition of unity $(\varphi_{i})_{i\in I}$. \end{rem} Set $\mathfrak{A}:=\{F\in \Gamma_{0}(\textrm{End}_{\mathcal{K}}(E))\cap\Gamma^{2}(\textrm{End}_{\mathcal{K}}(E))\ |\ \partial F\in\bigoplus_{k=1}^{m}\Gamma_{\textrm{loc}}^{\infty}(\textrm{End}_{\mathcal{K}}(E))\}$. Then $\mathfrak{A}$ is an extension algebra by $(4)$ and $D_{cb}(\partial_{|U})=C_{c}(U)\otimes \FinRk(H)$ for each $U\in\mathcal{T}(E)$.
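To see formula $(3)$ at work in the simplest trivial-bundle case, the following informal sketch (all concrete choices here are ours, not taken from the paper) evaluates $\tau(F)=\sum_{i}\int_{U_{i}}\varphi_{i}(x)\,\textrm{tr}(F_{|U_{i}}(x))\,d\nu_{i}$ on $X=[0,1]$ with Lebesgue measure for two different continuous partitions of unity, confirming that the value does not depend on the chosen partition:

```python
import numpy as np

# A continuous M_2(R)-valued section of the trivial bundle over X = [0, 1].
def F(x):
    return np.array([[np.exp(x), x], [x, np.cos(x)]])

xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
tr_F = np.array([np.trace(F(x)) for x in xs])

# Two continuous partitions of unity subordinate to the cover U_1 = U_2 = X.
part_a = [np.cos(np.pi * xs / 2) ** 2, np.sin(np.pi * xs / 2) ** 2]
part_b = [1.0 - xs, xs]

# tau(F) = sum_i int_{U_i} phi_i(x) tr(F(x)) dx, here by a simple Riemann sum.
tau_a = sum(float(np.sum(phi * tr_F) * dx) for phi in part_a)
tau_b = sum(float(np.sum(phi * tr_F) * dx) for phi in part_b)

print(tau_a, tau_b)  # both partitions give the same value of tau(F)
```

The agreement is exact up to rounding, since each partition sums to $1$ pointwise.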
We proceed as in Subsection 4 to obtain a disintegration theorem for general $\mathcal{K}(H)$-bundles. \setcounter{section}{4} \setcounter{thm}{0} \begin{thm}\label{THM.Bdl_Disint} Let $\partial$ be a vertical gradient such that $(\partial_{|U})_{x}$ has continuous dependence of minimisers on start- and endpoints for a.e.~$x\in U$ for each $U\in\mathcal{T}(E)$. For all $P,Q\in\mathcal{D}$ with finite distance and all partitions of unity $(\varphi_{i})_{i\in\mathbb{N}}$, we have \begin{align*} \mathcal{W}_{2}^{2}(P,Q)=\sum_{i}\int_{U_{i}}\varphi_{i}(x)\mathcal{W}_{2,x}^{2}(\theta_{P_{|U_{i}}}(x)^{2}P_{|U_{i}}(x),\theta_{P_{|U_{i}}}(x)^{2}Q_{|U_{i}}(x))d\nu_{P_{|U_{i}}} \end{align*} \noindent and there exists a minimiser $\mu_{t}$ of $\mathcal{W}_{2}(P,Q)$ such that $\theta_{P}(x)^{2}\mu_{t}(x)$ is a fibre-wise minimiser a.e. \end{thm} \setcounter{section}{5} \setcounter{thm}{0} \subsection{Sufficient conditions involving Morita equivalence} We begin by giving sufficient conditions in case $A$ and $\tau$ are already of the required form, i.e. as in the previous subsection. We thus concern ourselves with $\partial$ only. Moreover, we restrict to the finite-dimensional case. \begin{ntn} An unbounded linear map trivialising to an unbounded linear map for each $U\in\mathcal{T}(E)$ is called an unbounded bundle map. \end{ntn} \begin{prp}\label{PRP.Grd_Dcp_I} Let $H$ be finite-dimensional, $\tau$ a product trace and $\partial$ a symmetric gradient for $(\Gamma_{0}(\End_{\mathcal{K}}(E)),\tau)$ mapping to $\bigoplus_{k=1}^{m}\Gamma^{2}(\End_{\mathcal{K}}(E))$. Then $\partial$ is a vertical gradient if for all $U\in\mathcal{T}(E)$, we know that \begin{itemize} \item[1)] $\partial_{|U}$ is bounded, \item[2)] $C_{c}(U)\subset\ker\partial_{|U}$. \end{itemize} \end{prp} \begin{proof} Let $\textrm{dim}_{\mathbb{C}}(H)=n$ and $U\in\mathcal{T}(E)$ such that $\tau_{|U}(1_{U}\otimes 1_{M_{n}(\mathbb{C})})<\infty$.
By $1)$, we then have $L^{\infty}(U)\otimes M_{n}(\mathbb{C})\subset D(\partial)$. Thus for all $g\in C_{0}(U)$ and $F\in C_{0}(U,M_{n}(\mathbb{C}))$, we have $\partial_{|U}(gF)=g\partial_{|U}(F)$ by the Leibniz rule and $2)$. Since $\partial_{|U}$ is bounded, $\partial_{|U}$ commutes with $C_{0}(U)$ and hence with $L^{\infty}(U)$. Applying Theorem IV.7.10 in \cite{TakTOAI} shows that $\partial_{|U}$ decomposes into bounded linear operators $\partial_{x}$ from $M_{n}(\mathbb{C})$ to $\bigoplus_{k=1}^{m}M_{n}(\mathbb{C})$. As $\partial$ is furthermore an unbounded bundle map, it is therefore an unbounded $C_{0}(X)$-module map. By finite-dimensionality, each $\partial_{x}$ is a fibre gradient. Because $\partial_{|U}$ was assumed to be a symmetric gradient, $\partial_{x}$ must be a symmetric gradient for almost every $x\in U$. Boundedness of each $\partial_{|U}$ and finite-dimensionality of $H$ ensure all remaining conditions in Definition \ref{DFN.Vrt_Grd} to be met. \end{proof} Consider $(A,\tau,\partial)$ for unital $A$ Morita equivalent to $C(X)$, $X$ compact Hausdorff. By compactness, $X$ has sufficiently many partitions of unity. Assume $\partial$ to map into $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$. By Morita equivalence, we have an isomorphism $\Phi$ from $A$ to a $C(X,\textrm{End}_{\mathcal{K}}(E))$, as unitality ensures any Hilbert module implementing the equivalence to be finitely generated projective \cite{KhalBasicNCG}. Here, $E$ is a finite-dimensional hermitian vector bundle. Moreover, we have isomorphisms from $L^{p}(A,\tau)$ to $\Gamma^{p}(\textrm{End}_{\mathcal{K}}(E),\Phi_{*}\tau)$ for each $p\in [1,\infty]$. By Corollary \ref{COR.Prd_Tr_Bdl}, finiteness of $\tau$ implies $\Phi_{*}\tau$ to be a product trace. \begin{dfn}\label{DFN.Disint} Let $(A,\tau,\partial)$ be such that $A$ is unital and Morita equivalent to $C(X)$, $X$ compact. Furthermore, assume $\tau$ is finite and that $\partial$ maps to $\bigoplus_{k=1}^{m}L^{2}(A,\tau)$, $m\in\mathbb{N}$.
We say that $\mathcal{W}_{2}$ disintegrates if there exists an isomorphism $\Phi$ such that $\Phi_{*}\partial$ is a vertical gradient. \end{dfn} \begin{cor}\label{COR.Grd_Dcp_I} If we are in the setting of Definition \ref{DFN.Disint}, then $\mathcal{W}_{2}$ disintegrates if $\Phi_{*}\partial$ satisfies the conditions in Proposition \ref{PRP.Grd_Dcp_I}. \end{cor} It is clear that the sufficient conditions presented here are too strong for easy application. For example, we require better conditions for choosing isomorphisms such that $\Phi_{*}\partial$ is at least an unbounded bundle map. Nevertheless, they give a first, tentative way to reduce general problems to the vertical gradient case.\par There are two main avenues we wish to explore. Firstly, we require conditions for having continuous dependence of minimisers on start- and endpoints for symmetric gradients for $(\mathcal{K}(H),\textrm{tr})$ and for the hyperfinite type $\textrm{II}_{1}$ factor $R$ equipped with its canonical trace $\tau_{0}$. Secondly, we consider more general direct integrals than $L^{2}(X,H)$ since we seek to understand gradients after disintegrating $L^{\infty}(A,\tau)$ into its factors. A natural point of departure is given by fields of elementary $C^{*}$-algebras. However, even if a continuous field of elementary $C^{*}$-algebras satisfies Fell's condition, it need not equal the induced field of elementary $C^{*}$-algebras associated to its direct integral of Hilbert spaces, cf.~Theorem 10.7.15 in \cite{DixC*Alg}. In our setting, this is necessary for having $L^{\infty}(X,\mathcal{B}(H))=L^{\infty}(A,\tau)$. Thus not all direct integrals are immediately suitable to our purposes.\par Once we have determined a class of direct integrals and generalised the notion of vertical gradient, we hope to apply results such as those in \cite{MathUnbdDecomp} in order to decompose unbounded gradients between direct integrals.
With the outlined approach, we aim to cover a large number of $C^{*}$-algebras whose $L^{\infty}$-space is isomorphic to that induced by a direct integral whose fibres are given by some $\mathcal{S}_{2}(H)$ or $L^{2}(R,\tau_{0})$.
\section{Introduction} In this paper, we consider interface problems, where the solution is continuous on a domain $\Omega \subset \mathbb{R}^2$, but its normal derivative may jump across an interior interface. Problems of this kind arise for example in fluid-structure interaction, multiphase flows, multicomponent structures and in many other configurations where multiple physical phenomena interact. All these examples have in common that the interface between the two phases is moving and may be difficult to capture due to small scale features. { If the interface is not resolved by the finite element mesh,} the accuracy of the finite element approach might decrease severely, see e.g.~\cite{Babuska1970}. For simple elliptic interface problems with jumping coefficients, it has been shown that optimal convergence can be recovered by a harmonic averaging of the diffusion constants~\cite{TikhonovSamarskii1962}, \cite{ShubinBell1984}. For more complex couplings, e.g.$\,$ fluid-structure interactions, where two entirely different equations interact with each other, the list of possible { discretisation techniques} that yield optimal order can be split roughly into two groups. The first class of approaches consists of so-called fitted finite element methods, where the meshes are constructed in such a way that the interface is sufficiently resolved, see~\cite{Babuska1970, FeistauerSobotikova1990, Zenisek1990, BrambleKing1996, BastingPrignitz2013}. If the interface is moving, curved or has small scale features, the repeated generation of fitted finite element meshes can exceed the feasible effort, however. In non-stationary problems, the projection of previous iterates to the new mesh brings along further difficulties and sources of error. Further developments are based on local modifications of the finite element mesh that only alter mesh elements close to the interface~\cite{Boergers1990, XieItoLiToivanen2008, Fang2013, GawlikLew2014}.
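The harmonic-averaging idea of \cite{TikhonovSamarskii1962} mentioned above can already be seen in one space dimension. For a cell cut by the interface, the effective diffusion coefficient is the volume-fraction-weighted harmonic mean of $\kappa_1$ and $\kappa_2$, which reproduces the exact flux through the cell (the series rule for resistances), whereas the arithmetic mean does not. The following sketch is an informal illustration with values chosen by us, not code from this paper:

```python
# One cell of unit length under a unit potential drop; material 1 occupies
# the volume fraction theta, material 2 the rest. The exact steady flux is
# given by the series rule q = 1 / (theta/kappa_1 + (1 - theta)/kappa_2).
kappa_1, kappa_2, theta = 0.1, 1.0, 0.3

flux_exact = 1.0 / (theta / kappa_1 + (1.0 - theta) / kappa_2)

# Effective coefficients: harmonic vs. arithmetic averaging of kappa.
kappa_harmonic = 1.0 / (theta / kappa_1 + (1.0 - theta) / kappa_2)
kappa_arithmetic = theta * kappa_1 + (1.0 - theta) * kappa_2

# With a unit drop over a unit cell, the discrete flux equals the coefficient.
print(flux_exact, kappa_harmonic, kappa_arithmetic)
# harmonic average matches the exact flux; arithmetic average overestimates it
```

In the unfitted setting, the weight $\theta$ corresponds to the fraction of a cut cell occupied by one of the subdomains.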
An alternative approach is based on unfitted finite elements, where the mesh is fixed and does not resolve the interface. Here, proper accuracy is gained by local modifications or an enrichment of the finite element basis. Prominent examples for these methods are the extended finite element method (XFEM~\cite{MoesDolbowBelytschko1999}), the generalised finite element method~\cite{BabuskaBanarjeeOsborn2004,BaBa12} or the unfitted Nitsche method by \cite{HansboHansbo2002, HansboHansbo2004}. Based on the latter works, so-called cut finite elements have been developed, see for instance~\cite{BastianEngwer}, \cite{Burmanetal2015}, \cite{Lehrenfeld4d}, \cite{FidkowskiDarmofal2007}. All these enrichment methods are well analysed and show the correct order of convergence. One drawback of the enrichment methods is their complicated structure, which requires local modifications in the finite element spaces, leading to variations in the connectivity of the system matrix and in the number of unknowns. In this article, we use a simple approach that is based on a fixed \textit{patch mesh} consisting of quadrilaterals, which stays unchanged independently of the position of the interface. Inside the patches we refine once more, either into eight triangles or into four quadrilaterals, in such a way that the interface is locally resolved. In this sense the resulting finite element approach can be considered a \textit{fitted} finite element approach. {This approach has first been proposed in~\cite{FreiRichter2014}.} In our practical implementation, we do not, however, construct this fitted mesh explicitly. Instead, the local degrees of freedom are included in a parametric way in the finite element space, or, to be more precise, in the local mappings between a reference patch and the physical patches. The drawback of this approach is that the condition number of the resulting system matrices might be unbounded when the interface approaches certain vertices or mesh lines.
This problem can however be solved by constructing a scaled hierarchical basis of the finite element space. Using this basis, the approach can be viewed as a simple enrichment method as well, where the enrichment consists of the standard Lagrangian basis functions on the fine scale. The mathematical details, including a complete analysis of the discretisation error and the condition number of the system matrix, have already been published in~\cite{FreiRichter2014}, \cite{FreiDiss} and~\cite{RichterBuch}. Later on, related approaches on triangular patches have been developed by~\cite{HolmFSIband} and by~\cite{GanglLanger}. Furthermore, the approach has been applied by the authors to simulate fluid-structure interaction problems with large deformations in~\cite{FrRiWi14b,FreiRichterWick2016} and~\cite{FreiRichterSammelband} and by Gangl to simulate problems of topology optimisation~\cite{GanglDiss}. The goal of this article is to explain in detail the implementation of the fitted finite element method and to provide a programming code based on the C++ finite element library deal.II~\cite{BangerthHartmannKanschat2007,dealII85}. In {extension} of \cite{FreiRichter2014}, we give further details concerning the implementation of the finite element approach, and in particular the construction of the hierarchical basis. Moreover, we study the performance of some iterative solvers, i.e.$\,$ a simple and a preconditioned conjugate gradient method (CG/PCG), to solve the arising linear systems, while {`only'} a direct solver was used in \cite{FreiRichter2014}. The organisation of this article is as follows. In Section \ref{sec_model_problem}, {a simple elliptic model problem} is presented. Next, in Section \ref{sec:fe} {we introduce the local modifications of the finite element space in the cells that are cut by the interface.} In Section \ref{sec_discrete_forms} the discrete forms and the approximation properties are briefly recapitulated.
Then, we introduce the hierarchical finite element space in Section~\ref{sec_hierarchical}. Section \ref{sec:num} consists of two numerical tests that illustrate the main features and the performance of our approach. Finally, we present algorithmic details and details on the implementation in Section~\ref{sec.impl}. We conclude in Section~\ref{sec.conclusion}. \section{Motivation: A simple elliptic model problem} \label{sec_model_problem} To get started, let us consider a simple Poisson problem in $\Omega\subset\mathbb{R}^2$ with a discontinuous coefficient $\kappa$ across an interface line $\Gamma\subset\mathbb{R}^2$. Find $u:\Omega\to\mathbb{R}$ such that \begin{equation}\label{problem:1} -\nabla\cdot (\kappa_i\nabla u) = f\text{ on }\Omega_i\;\; (i=1,2),\quad [u]=0 \text{ and } [\kappa\partial_n u] = 0\text{ on }\Gamma, \end{equation} with constants $\kappa_i>0$ and subject to homogeneous Dirichlet conditions on the exterior boundary $\partial\Omega$. Here, we denote the subdomains by $\Omega_i, i=1,2$ and by $[u]$ the jump of $u$ across the interface $\Gamma$. The variational formulation of this interface problem is given by \begin{definition}[Continuous variational formulation] Find $u\in H^1_0(\Omega)$ such that \begin{align} a(u,\phi) :=\sum_{i=1}^2 (\kappa_i\nabla u,\nabla\phi) = (f,\phi)\quad\forall\phi\in H^1_0(\Omega).\label{contBilin} \end{align} \end{definition} Interface problems are elaborately discussed in the literature. If the interface $\Gamma$ cannot be resolved by the mesh, the overall error for a standard finite element approach will be bounded by \[ \|\nabla (u-u_h)\|_\Omega = \mathcal{O}(h^{1/2}), \] independent of the polynomial degree $r$ of the finite element space, see the early works \cite{Babuska1970} or~\cite{MacKinnonCarey1987}. In Figure~\ref{fig:standardfe}, we show the $H^1$- and $L^2$-norm errors for a simple interface problem with a curved interface that is not resolved by the finite element mesh.
Both linear and quadratic finite elements yield only $\mathcal{O}(h^{1/2})$ accuracy in the $H^1$-semi-norm and $\mathcal{O}(h)$ in the $L^2$-norm. This is due to the limited regularity of the solution across the interface. \begin{figure}[t] \begin{minipage}{0.63\textwidth} \hspace{-0.8cm} \includegraphics[width=\textwidth]{ball-standard-color.pdf} \end{minipage} \hfil \begin{minipage}{0.33\textwidth} \begin{picture}(0,0)% \includegraphics{circle1.pdf}% \end{picture}% \setlength{\unitlength}{2072sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(2973,3283)(2218,-3761) \put(2746,-3070){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\Omega_2$}% }}}} \put(5176,-1411){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\color[rgb]{0,0,0}$-\kappa_i\Delta u=f$}% }}}} \put(5176,-2086){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\color[rgb]{0,0,0}$u=u^d$ on $\partial\Omega$}% }}}} \put(2251,-3661){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\color[rgb]{0,0,0}$\kappa_1=0.1,\; \kappa_2=1$}% }}}} \put(3196,-1750){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\Omega_1$}% }}}} \end{picture}% \end{minipage} \caption{$L^2$- and $H^1$-error for a standard finite element method using $Q_1$ and $Q_2$ polynomials for the discretisation of the interface problem~\eqref{problem:1}. Configuration of the test problem in the right sketch. 
Further details are given in Section~\ref{sec:num}.\label{fig:standardfe}} \end{figure} \section{Locally modified finite elements} \label{sec:fe} \begin{figure}[t] \centering \resizebox*{0.8\textwidth}{!}{ \begin{picture}(0,0)% \includegraphics{merge.pdf}% \end{picture}% \setlength{\unitlength}{1657sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(11742,5197)(1747,-5179) \put(8191,-2634){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_1$}% }}}} \put(9271,-2634){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_2$}% }}}} \put(10441,-2634){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_3$}% }}}} \put(8191,-1451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_4$}% }}}} \put(9271,-1451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_5$}% }}}} \put(10441,-1451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_6$}% }}}} \put(8191,-358){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_7$}% }}}} \put(9271,-358){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_8$}% }}}} \put(10441,-358){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$\hat x_9$}% }}}} \put(3646,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_1$}% }}}} \put(5131,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_2$}% }}}} \put(1936,-3886){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega$}% }}}} \put(6751,-601){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$P$}% }}}} \put(6256,-4021){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Gamma$}% }}}} \put(7071,-2025){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$x_2$}% }}}} 
\put(6751,-1051){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$x_1$}% }}}} \end{picture}% } \caption{\textit{Left:} Triangulation $\mathcal{T}_{2h}$ of a domain $\Omega$ that is split into $\Omega_1$ and $\Omega_2$ with interface $\Gamma$. The quadrilateral cells in $\mathcal{T}_{2h}$ are illustrated by the bold lines. Patch $P$ is cut by $\Gamma$ at $x_1$ and $x_2$. \textit{Right:} Subdivision of reference patches $\hat P_0,\hat P_1,\hat P_2,\hat P_3$ (top left to bottom right) into eight triangles each. } \label{fig:mesh} \end{figure} In order to define the modified finite elements, let us assume that $\mathcal{T}_{2h}$ is a form and shape-regular triangulation of the domain $\Omega\subset\mathbb{R}^2$ into open quadrilaterals. The discrete domain $\Omega_h$ does not necessarily resolve the partitioning $\Omega=\Omega_1\cup\Gamma\cup\Omega_2$ and the interface $\Gamma$ can cut the elements $P\in\mathcal{T}_{2h}$. We assume that the interface $\Gamma$ cuts patches in the following way: \begin{enumerate} \item Each (open) patch $P\in\mathcal{T}_{2h}$ is either not cut $P\cap\Gamma=\emptyset$ or cut in exactly two points on its boundary: $P\cap\Gamma\neq \emptyset$ and $\partial P\cap\Gamma=\{x^P_1,x^P_2\}$. \item If a patch is cut, the two cut-points $x^P_1$ and $x^P_2$ may not be inner points of the same edge. \end{enumerate} In principle, these assumptions only rule out two possibilities: a patch may not be cut multiple times and the interface may not enter and leave the patch at the same edge. Both situations can be avoided by refinement of the underlying mesh. If the interface is matched by an edge, the patch is not considered to be cut. \subsection{Construction of the finite element space} We define four reference patches $\hat{P}_0,...,\hat{P}_3$ on the unit square $(0,1)^2$. These patches are split into 4 quadrilaterals or 8 triangles as illustrated in Figure~\ref{fig:mesh}. 
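The two assumptions on the interface cuts formulated above can be checked mechanically for a given patch. The following Python sketch (a hypothetical helper for illustration, not part of the deal.II code accompanying this article) encodes a boundary cut point as a pair of edge index and relative position on that edge, and rejects exactly the two excluded situations:

```python
# Hypothetical validity check for the cut assumptions on a single patch.
# A cut point on the patch boundary is encoded as (edge_index, t) with
# edge_index in {0,1,2,3} and t in [0,1] the relative position on the edge;
# t in {0,1} means the cut point coincides with a vertex.

def valid_cut(cuts):
    """Return True if the list of boundary cut points satisfies the
    assumptions: either no cut at all, or exactly two cut points that
    are not both interior points of the same edge."""
    if len(cuts) == 0:
        return True            # patch not cut
    if len(cuts) != 2:
        return False           # multiple cuts -> refine the mesh
    (e1, t1), (e2, t2) = cuts
    interior1 = 0.0 < t1 < 1.0
    interior2 = 0.0 < t2 < 1.0
    if e1 == e2 and interior1 and interior2:
        return False           # interface enters and leaves the same edge
    return True

# configuration A: two opposite edges cut in their interior
assert valid_cut([(0, 0.3), (2, 0.7)])
# configuration C: one interior edge point and one vertex
assert valid_cut([(0, 0.5), (1, 1.0)])
# invalid: both cut points interior to the same edge
assert not valid_cut([(0, 0.2), (0, 0.8)])
# invalid: patch cut more than twice
assert not valid_cut([(0, 0.2), (1, 0.5), (2, 0.9)])
```

Patches violating either assumption are flagged, so that the underlying mesh can be refined until the assumptions hold.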
Moreover, we define 9 nodes $\hat x_1,\dots,\hat x_9$ in the vertices, the edge midpoints and the midpoint of the patch, which will serve as degrees of freedom of the finite element space. Note that the same position of the degrees of freedom can be found in a standard quadratic $Q_2$ discretisation, the structure of which served as a starting point for our implementation. Now we define local reference spaces $\hat Q_P$ (here $P$ indicates the patch, not a polynomial degree) as piecewise polynomial spaces of degree 1. On the reference patch $\hat P_0$ consisting of quadrilaterals $\hat K_1,\dots, \hat K_4$, we choose the standard space of piecewise bilinear functions \[ \hat Q_P = \hat Q := \left\{ \phi\in C(\bar P),\; \phi\Big|_{\hat K_i}\in \operatorname{span}\{1,x,y,xy\},\; \hat K_1,\dots,\hat K_4\in \hat P\right\}. \] This local space will be used when a physical patch $P$ is not cut by the interface. If a patch $P\in\mathcal{T}_{2h}$ is cut by the interface, we use one of the reference patches $\hat P_1, \dots, \hat P_3$ with triangles $\hat T_1,\dots,\hat T_8$ and define \[ \hat Q_P = \hat Q_\text{mod} := \left\{ \phi\in C(\bar P),\; \phi\Big|_{\hat T_i}\in \operatorname{span}\{1,x,y\},\; \hat T_1,\dots,\hat T_8\in \hat P\right\}. \] We define a mapping $\hat T_P \in (\hat Q_P)^2$, $\hat T_P: \hat{P}_i \to P$, that is piecewise linear in sub-triangles and piecewise bi-linear in sub-quadrilaterals of $\hat{P}_i$. This allows us to map the degrees of freedom $\hat x_1,\dots, \hat x_9$ to nodes $x_1^P,\dots,x_9^P$ in such a way that the interface is resolved by a linear approximation in the physical patch. Denoting by $\{\hat\phi^1,\dots,\hat\phi^9\}$ the standard Lagrange basis of $\hat Q$ or $\hat Q_\text{mod}$ with $\hat\phi^i(\hat{x}_j)=\delta_{ij}$, the transformation $\hat{T}_P$ is given by \begin{equation} \hat{T}_P(\hat{x})=\sum_{i=1}^9 x_i^P\hat\phi^i(\hat{x}). 
\label{TP} \end{equation} Finally, we define the finite element trial space $V_h\subset H^1_0(\Omega)$ as an iso-parametric space on the triangulation $\mathcal{T}_{2h}$: \[ V_h = \left\{\phi\in C(\bar\Omega) \cap H^1_0(\Omega),\; \phi\circ \hat{T}_P^{-1}\Big|_P\in \hat Q_P\text{ for all patches }P\in\mathcal{T}_{2h}\right\}. \] Note that, whatever splitting of the patch is applied, the local number of degrees of freedom is always 9. Therefore, the global number of unknowns and the {sparsity pattern} of the system matrix stay identical, independent of the interface position. It is important to note that the functions in $\hat Q$ and $\hat Q_\text{mod}$ are all piecewise linear on the edges $\partial P$, such that mixing different element types does not affect the continuity of the global finite element space. \begin{figure}[t] \centering \scalebox{1.1}{ \begin{picture}(0,0)% \includegraphics{types.pdf}% \end{picture}% \setlength{\unitlength}{1973sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(11909,3544)(736,-3545) \put(2626,-136){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$e_3$}% }}}} \put(4351,-3061){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$r$}% }}}} \put(1651,-1411){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$x_m$}% }}}} \put(751,-3061){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$x_1$}% }}}} \put(3676,-211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$x_3$}% }}}} \put(826,-211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$x_4$}% }}}} \put(2626,-3361){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$e_1$}% }}}} \put(3601,-3061){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$x_2$}% }}}} \put(1501,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$e_4$}% }}}} 
\put(1876,-2986){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$s$}% }}}} \put(2851,-1411){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$e_2$}% }}}} \put(6676,-2536){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$s$}% }}}} \put(2626,-586){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$r$}% }}}} \put(9001,-286){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$s$}% }}}} \put(2101,-3481){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$\textbf{A}$}% }}}} \put(5101,-3481){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$\textbf{B}$}% }}}} \put(8101,-3481){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$\textbf{C}$}% }}}} \put(11101,-3481){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$\textbf{D}$}% }}}} \end{picture}% } \caption{Different types of cut patches. The subdivision can be anisotropic with $r,s\in (0,1)$ arbitrary.}\label{fig:types} \end{figure} Next, we present the subdivision of interface patches $P$ into eight triangles. \begin{definition} We distinguish four different types of interface cuts, see Figure~\ref{fig:types}: \begin{description} \item[Configuration A] The patch is cut in the interior of two opposite edges. \item[Configuration B] The patch is cut in the interior of two adjacent edges. \item[Configuration C] The patch is cut in the interior of one edge and in one node. \item[Configuration D] The patch is cut in two opposite nodes. \end{description} \end{definition} Configurations A and B are based on the reference patches $\hat P_2$ and $\hat P_3$, configurations C and D use the reference patch $\hat P_1$, see Figure~\ref{fig:mesh}. By $e_i\in\mathbb{R}^2$, $i=1,2,3,4$ we denote the vertices in the interior of edges, by $m_P\in\mathbb{R}^2$ the grid point in the interior of the patch. 
The parameters $r,s\in (0,1)$ describe the relative position of the intersection points with the interface on the outer edges. If an edge is intersected by the interface, we move the corresponding point $e_i$ on this edge to the point of intersection. The position of $m_P$ depends on the specific configuration. For configurations A, B and D, we choose $m_P$ as the intersection of the line connecting $e_2$ and $e_4$ with the line connecting $e_1$ and $e_3$. In configuration C, we use the intersection of the line connecting $e_2$ and $e_4$ with the line connecting $x_1$ and $e_3$. As the cut of the elements can be arbitrary with $r,s\to 0$ or $r,s\to 1$, the aspect ratios of the triangles can become very large. With the described choices for the midpoints $m_P$, we can guarantee that the maximum angles of all triangles remain well bounded away from $180^\circ$~\cite{FreiRichter2014}: \begin{lemma}[Maximum angle condition]\label{lemma:maxangle} All interior angles of the triangles shown in Figure~\ref{fig:types} are bounded by $144^\circ$ independent of $r,s\in (0,1)$. \end{lemma} The respective reference patches $\hat P_0,...,\hat P_3$ (see Figure~\ref{fig:mesh}) are chosen based on the following criteria: First, it is mandatory that a maximum angle condition can be guaranteed. Second, it is beneficial for practical purposes to keep the maximum angle as small as possible on the one hand and, on the other hand, to conserve the symmetry of the discretisation in the case of a symmetric problem. From these considerations, we choose type $\hat{P}_2$ if $r+s > 1$ and $\hat{P}_3$ if $r+s < 1$ for configuration A in our implementation. As an example, consider the left patch in Figure~\ref{fig:types}, where $\hat{P}_3$ has been chosen, as $r+s<1$. 
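For configuration A, the construction of $m_P$ can be written out explicitly. The following Python sketch (a minimal illustration on the unit reference patch; the coordinates are our own choice, not taken from the implementation) intersects the line through $e_2$ and $e_4$ with the line through $e_1$ and $e_3$, confirming that for cut points $(r,0)$ and $(s,1)$ the midpoint is $m_P=((r+s)/2,\,1/2)$, which lies in the interior of the patch for all $r,s\in(0,1)$:

```python
def line_intersection(p1, p2, q1, q2):
    """Intersection of the infinite lines p1-p2 and q1-q2 (Cramer's rule)."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    tx = x1 * y2 - y1 * x2
    ty = x3 * y4 - y3 * x4
    return ((tx * (x3 - x4) - (x1 - x2) * ty) / den,
            (tx * (y3 - y4) - (y1 - y2) * ty) / den)

def midpoint_config_A(r, s):
    """m_P for configuration A on the unit patch: the bottom edge is cut at
    e1=(r,0), the top edge at e3=(s,1); e2 and e4 stay at the edge midpoints."""
    e1, e2, e3, e4 = (r, 0.0), (1.0, 0.5), (s, 1.0), (0.0, 0.5)
    return line_intersection(e2, e4, e1, e3)

mx, my = midpoint_config_A(0.9, 0.2)
assert abs(mx - 0.55) < 1e-12 and abs(my - 0.5) < 1e-12
# m_P stays in the interior of the patch even for strongly anisotropic cuts
mx, my = midpoint_config_A(0.999, 0.001)
assert 0.0 < mx < 1.0 and abs(my - 0.5) < 1e-12
```

In the anisotropic limits $r,s\to 0$ or $r,s\to 1$ the midpoint approaches the patch boundary, which is why the choice of the reference patch matters for the maximum angle condition.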
{Note that the symmetry criterion would not be fulfilled if we always chose either $\hat{P}_2$ or $\hat{P}_3$, independently of $r$ and $s$.} In configuration B, we choose $\hat{P}_3$ when the cut separates the lower left or the upper right vertex from the rest of the patch, and $\hat{P}_2$ when only the lower right or the upper left vertex lies on one side of the interface. \section{Discrete variational formulation and approximation properties} \label{sec_discrete_forms} In the previous sections, we tacitly assumed that the interface can be resolved in a geometrically exact way. In the case of a curved interface, a linear approximation by mesh lines is constructed. With the help of the discrete approximation of the interface, we introduce a second splitting of the domain $\Omega$ into the discrete subdomains \begin{align*} \Omega = \Omega_h^1 \cup \Omega_h^2, \end{align*} such that all cells of the sub-triangulation are either completely included in $\Omega_h^1$ or in $\Omega_h^2$, see Figure~\ref{fig:discmesh}. 
\begin{figure}[bt] \centering \scalebox{1.2}{ \begin{picture}(0,0)% \includegraphics{discmesh3.pdf}% \end{picture}% \setlength{\unitlength}{1657sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(12573,3753)(1747,-4165) \put(13321,-4021){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Gamma_h$}% }}}} \put(12196,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_h^2$}% }}}} \put(10711,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_h^1$}% }}}} \put(3646,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_1$}% }}}} \put(5131,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Omega_2$}% }}}} \put(6256,-4021){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$\Gamma$}% }}}} \end{picture}% } \caption{\label{fig:discmesh} {\textit{Left:} Patch elements and interface $\Gamma$ that goes through two patches. \textit{Right:} Sub-triangulation} and splitting of the mesh into subdomains $\Omega_h^1$ and $\Omega_h^2$. The interface $\Gamma_h$ is a linear approximation of the interface $\Gamma$ shown on the left-hand side.} \end{figure} Using these definitions, we define a discrete bilinear form $a_h(\cdot,\cdot)$. For the elliptic model problem, this form is given by \begin{align} a_h(u_h,\phi_h) := (\kappa_h \nabla u_h,\nabla \phi_h)_{\Omega_h},\label{discBilin} \end{align} where \begin{align*} \kappa_h = \begin{cases} \kappa_1 \quad \text{in } \Omega_h^1,\\ \kappa_2 \quad \text{in } \Omega_h^2.\\ \end{cases} \end{align*} Note that $\kappa_h$ differs from $\kappa$ in a small layer between the continuous interface $\Gamma$ and the discrete interface $\Gamma_h$. \begin{definition}[Discrete variational formulation] The discrete problem is to find $u_h \in V_h$ such that \begin{align*} a_h(u_h,\phi_h) = (f,\phi_h)_{\Omega_h} \quad \forall \phi_h \in V_h. 
\end{align*} \end{definition} The maximum angle condition of Lemma~\ref{lemma:maxangle} is sufficient to ensure that the Lagrangian interpolation operators $I_h:H^2(T)\cap C(\bar T)\to V_h$ are of optimal order for smooth functions $v\in H^2(T)\cap C(\bar T)$ on an element $T$, i.e.$\,$ \begin{equation}\label{interpolation} \|\nabla^k (v-I_h v)\|_T \le c h_{T,\max}^{2-k} \|\nabla^2 v\|_T,\quad k=0,1, \end{equation} where $c>0$ is a constant and $h_{T,\max}$ is the maximum diameter of a triangle $T\in P$ (see e.g.~\cite{Apel1999}). If the interface $\Gamma$ is curved, the solution $u$ to~\eqref{problem:1} is however non-smooth across the interface. Here, we have to argue using smooth extensions of $u|_{\Omega_i}, i=1,2$ to the other sub-domain and the smallness of the region \begin{align*} S_h=(\Omega_1\cap \Omega_h^2) \cup (\Omega_2\cap \Omega_h^1) \end{align*} around the interface. The following result has been shown for the elliptic interface problem~\eqref{problem:1}: \begin{theorem}[A priori estimate]\label{thm:apriori} Let $\Omega\subset\mathbb{R}^2$ be a domain with convex polygonal boundary, split into $\Omega=\Omega_1\cup\Gamma\cup\Omega_2$, where $\Gamma$ is a smooth interface with $C^2$-parametrisation. We assume that $\Gamma$ divides $\Omega$ in such a way that the solution $u\in H^1_0(\Omega)$ satisfies the stability estimate \[ u\in H^1_0(\Omega)\cap H^2(\Omega_1\cup\Omega_2),\quad \|u\|_{H^2(\Omega_1\cup\Omega_2)}\le c_s \|f\|. \] For the corresponding modified finite element solution $u_h\in V_h$, it holds that \[ \|\nabla (u-u_h)\|_\Omega\le C h_P \|f\|,\quad \|u-u_h\|_\Omega\le C h_P^2 \|f\|. \] \end{theorem} \begin{proof} For the proof, we refer to \cite{RichterBuch} or \cite{FreiDiss}. \end{proof} { \section{Hierarchical basis functions} \label{sec_hierarchical} The drawback of {the previously described} simple approach is that the condition number of the system matrix is unbounded for certain anisotropies ($r,s\to 0$). 
This is an unresolved issue in many of the presently used enriched finite element methods for interface problems. We refer to \cite{LehrenfeldReusken} or \cite{BaBa12} for two of the few positive results in the case of extended finite elements of low-order. In our case, this can be circumvented by using a scaled hierarchical finite element basis, that will yield system matrices $A_h$ that satisfy the optimal bound $\operatorname{cond}_2(A_h)=\mathcal{O}(h_P^{-2})$ for elliptic problems, with a constant that does not depend on the position of the interface $\Gamma$ relative to the mesh elements. A detailed proof of this result has been given in~\cite{FreiRichter2014}. \begin{figure}[t] \centering \resizebox*{0.85\textwidth}{!}{ \begin{picture}(0,0)% \includegraphics{hierarchical2.pdf}% \end{picture}% \setlength{\unitlength}{1657sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(10866,3283)(2668,-5336) \put(10801,-5236){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$v_b\in V_b$}% }}}} \put(6976,-5236){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$v_{2h}\in V_{2h}$}% }}}} \put(3376,-5236){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\color[rgb]{0,0,0}$v_h\in V_h$}% }}}} \end{picture}% } \caption{Example for a hierarchical splitting of a function $v_h\in V_h$ into coarse mesh part $v_{2h}\in V_{2h}$ and fine mesh fluctuation $v_b\in V_b$.} \label{fig:hierarchical} \end{figure} \begin{figure}[t] \centering \resizebox*{0.7\textwidth}{!}{ \begin{picture}(0,0)% \includegraphics{hierarchical_basis.pdf}% \end{picture}% \setlength{\unitlength}{1657sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(9339,4974)(6199,-4123) \end{picture}% }% \caption{Local basis functions of the hierarchical finite element space. 
Top: Two of the four basis functions $\phi_i^{2h} \in V_{2h}$. Bottom: Two of the five basis functions $\phi_i^b \in V_b$.} \label{fig:HierarchicalBasis} \end{figure} We split the finite element space $V_h$ in a hierarchical manner \[ V_h = V_{2h} + V_b,\quad N:=\operatorname{dim}(V_h)= \operatorname{dim}(V_{2h}) + \operatorname{dim}(V_b)=:N_{2h}+N_b. \] The space $V_{2h}$ is the standard space of piecewise bilinear or linear functions on the patches $P\in\mathcal{T}_{2h}$ equipped with the usual nodal Lagrange basis $V_{2h} = \operatorname{span}\{\phi_{2h}^1,\dots,\phi_{2h}^{N_{2h}}\}$. Patches cut by the interface are split into two large triangles. The space $V_b=V_h\setminus V_{2h}$ collects all functions that are needed to enrich $V_{2h}$ to $V_h$. These functions are defined piecewise on the sub-elements in the remaining 5 degrees of freedom, see Figure~\ref{fig:hierarchical} for an example of the splitting and Figure~\ref{fig:HierarchicalBasis} for an illustration of the local basis functions. These basis functions are denoted by $V_b=\operatorname{span}\{\phi_b^1,\dots,\phi_b^{N_b}\}$. The finite element space $V_{2h}$ on the other hand is fully isotropic and standard analysis holds. Functions in $V_{2h}$ do not resolve the interface, while the basis functions $\phi_b^i\in V_b$ will depend on the interface location if $\Gamma\subset \operatorname{supp}\,\phi_b^i$. In order to define the hierarchical ansatz space, we have to modify some of the sub-triangles in the cases A, B and C, see Figure~\ref{basisfunctions}. In contrast to Section~\ref{sec:fe}, the midpoint can be moved along one of the diagonal lines only, such that the space $V_{2h}$ can be defined as a space of piecewise linear functions on two large triangles. Note that in order to guarantee a maximum angle condition in the cases A.1 and C.1 in Figure~\ref{basisfunctions}, we must also move the outer node $x_2$ belonging to the space $V_b$, due to the additional constraint on the position of $m_P$. 
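The splitting $V_h=V_{2h}+V_b$ introduced above can be illustrated by a one-dimensional analogue, cf.\ Figure~\ref{fig:hierarchical}. The following Python sketch (a simplification for illustration only; the construction in this article is two-dimensional) decomposes a piecewise linear nodal vector on a fine mesh into its coarse-mesh interpolant and a fine-scale fluctuation that vanishes at the coarse nodes:

```python
# 1D analogue of the splitting V_h = V_{2h} + V_b: fine nodes at spacing h,
# coarse nodes at spacing 2h.  A fine-mesh nodal vector v_h is split into the
# coarse nodal interpolant v_2h (linear between coarse nodes) and the
# fluctuation v_b = v_h - v_2h, which vanishes at all coarse nodes.

def hierarchical_split(v_h):
    n = len(v_h)                      # odd: coarse nodes at even indices
    assert n % 2 == 1
    v_2h = list(v_h)
    for i in range(1, n - 1, 2):      # overwrite fine-only nodes by interpolation
        v_2h[i] = 0.5 * (v_h[i - 1] + v_h[i + 1])
    v_b = [a - b for a, b in zip(v_h, v_2h)]
    return v_2h, v_b

v_h = [0.0, 0.7, 0.4, 1.1, 0.0]       # nodal values on 5 fine nodes
v_2h, v_b = hierarchical_split(v_h)
# the splitting is exact: v_h = v_2h + v_b ...
assert all(abs(a - (b + c)) < 1e-14 for a, b, c in zip(v_h, v_2h, v_b))
# ... and the fluctuation vanishes at the coarse nodes (even indices)
assert v_b[0] == v_b[2] == v_b[4] == 0.0
```

The fluctuation part plays the role of $V_b$: it carries the fine-scale degrees of freedom, while the coarse part is a standard, fully isotropic finite element function.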
\begin{figure}[t] \centering \resizebox*{0.85\textwidth}{!}{ \begin{picture}(0,0)% \includegraphics{patch3.pdf}% \end{picture}% \setlength{\unitlength}{1450sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(13347,8847)(5836,-7996) \put(9496,-4651){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(10351,-3976){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$s$}% }}}} \put(15346,-4651){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(19036,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$s$}% }}}} \put(14401,-3976){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$s$}% }}}} \put(11296,-4651){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(5851,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}\textbf{B}}% }}}} \put(5851,-61){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}\textbf{A}}% }}}} \put(16516,-4786){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}3}% }}}} \put(12601,-4831){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}2}% }}}} \put(8416,-4876){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}1}% }}}} \put(5851,-6496){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}\textbf{C}}% }}}} \put(15886,-6001){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(10486,-7351){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(8641,-7981){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}1}% }}}} \put(13096,-7981){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}2}% }}}} \put(9631,-7756){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$x_2$}% }}}} \put(6751,-826){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$s$}% }}}} 
\put(11251,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$s$}% }}}} \put(15886,479){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(10486,-871){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\color[rgb]{0,0,0}$r$}% }}}} \put(8641,-1501){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}1}% }}}} \put(13096,-1501){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}2}% }}}} \put(9406,-1276){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\color[rgb]{0,0,0}$x_2$}% }}}} \end{picture}% } \caption{Configuration of the hierarchical basis functions $V_b$ for the different patch types. In each sketch, we consider the case $r\to 0$ or $s\to 0$ or both.} \label{basisfunctions} \end{figure} } \subsection*{Scaling of the basis functions} Moreover, in order to ensure the optimal bound for the condition number, we have to normalise the Lagrangian basis functions on the fine scale $\phi_b^i, i=1,...,N_b,$ by setting \begin{align*} \tilde{\phi}_b^i := \frac{\phi_b^i}{\|\nabla \phi_b^i\|}, \end{align*} such that it holds that \begin{equation}\label{scaling} C^{-1}\le \|\nabla\tilde{\phi}_b^i\| \le C,\quad i=1,\dots,N_b. \end{equation} In a practical implementation, one can use the basis $\phi_i, i=1,\ldots,N$ to assemble the system matrix $A_h$ and apply a simple row- and column-wise scaling with the diagonal elements \[ a_{ij} = (\nabla\phi_j,\nabla\phi_i),\quad \tilde a_{ij}:=\frac{a_{ij}}{\sqrt{a_{ii} a_{jj}}}. \] Alternatively, a simple preconditioning of the linear system can be applied by multiplying with the diagonal of the system matrix from left and right \[ \mathbf{A}\mathbf{x} = \mathbf{b} \quad\Leftrightarrow\quad \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \widetilde{\mathbf{x}} = \mathbf{D}^{-\frac{1}{2}} \mathbf{b},\quad \widetilde{\mathbf{x}} = \mathbf{D}^\frac{1}{2}\mathbf{x}, \] where $\mathbf{D} = \operatorname{diag}(a_{ii})$. 
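The effect of this diagonal scaling on the conditioning can be reproduced in a minimal setting. The following Python sketch (a one-dimensional caricature with one node at a small distance $r$ from the boundary, not the two-dimensional interface discretisation itself) assembles the $2\times 2$ stiffness matrix of linear elements on the nodes $0,r,1/2,1$ and compares the spectral condition number before and after the scaling $\tilde a_{ij}=a_{ij}/\sqrt{a_{ii}a_{jj}}$:

```python
import math

def cond_2x2(a11, a12, a22):
    """Spectral condition number of a symmetric positive definite 2x2 matrix."""
    tr, det = a11 + a22, a11 * a22 - a12 * a12
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
    return lam_max / lam_min

def stiffness(r):
    """P1 stiffness matrix of -u'' = f on (0,1) with nodes 0, r, 1/2, 1 and
    homogeneous Dirichlet conditions (unknowns at r and 1/2)."""
    h1, h2, h3 = r, 0.5 - r, 0.5
    a11 = 1.0 / h1 + 1.0 / h2
    a12 = -1.0 / h2
    a22 = 1.0 / h2 + 1.0 / h3
    return a11, a12, a22

for r in (1e-2, 1e-4, 1e-6):
    a11, a12, a22 = stiffness(r)
    # diagonal scaling: unit diagonal, off-diagonal a12 / sqrt(a11 * a22)
    s12 = a12 / math.sqrt(a11 * a22)
    assert cond_2x2(1.0, s12, 1.0) < 2.0        # bounded independently of r
assert cond_2x2(*stiffness(1e-6)) > 1e4         # unscaled: blows up as r -> 0
```

While the unscaled condition number grows like $r^{-1}$, the scaled matrix approaches the identity as $r\to 0$.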
\section{Numerical examples} \label{sec:num} We now present two numerical examples that include all different types of interface cuts (configurations A to D) and arbitrary anisotropies. \subsection{Example 1: Performance under mesh refinement} \label{sec_ex_1} This first example has already been considered to discuss the interface approximation in Section~\ref{sec:fe}, see Figure~\ref{fig:standardfe} for a sketch of the configuration. The square $\Omega=(-1,1)^2$ is split into a ball $\Omega_1=B_R(x_m)$ with radius $R=0.5$, midpoint $x_m=(0,0)$ and $\Omega_2=\Omega\setminus\bar\Omega_1$. As diffusion parameters we choose $\kappa_1=0.1$ and $\kappa_2=1$. We use the analytical solution \[ u(x)=\begin{cases} -2 \kappa_1 \|x-x_m\|^4, \quad &x \in \Omega_2,\\ -\kappa_2 \|x-x_m\|^2 +\frac{1}{4}\kappa_2 - \frac{1}{8}\kappa_1, \quad &x \in \Omega_1, \end{cases} \] to define the right-hand side $f_i:=-\kappa_i\Delta u$ in $\Omega_i$ and the Dirichlet boundary data. A sketch of the solution is given on the right side of Figure~\ref{fig:ex1_a}. \begin{figure}[t] \centering {\includegraphics[width=7cm]{visit_Jan_31_2018_0001.png}} {\includegraphics[width=7cm]{visit_Jan_31_2018_0003.png}} \caption{{Example 1: The cut-mesh on level $4$ (left) and a 3D surface plot of the solution (right).}} \label{fig:ex1_a} \end{figure} On the coarsest mesh with 16 patch elements, we have four patches of type D. After some steps of global refinement this simple example includes the configurations A to C with different anisotropies. In Figure~\ref{fig:ball}, we plot the $H^1$- and $L^2$-norm errors obtained on several levels of global mesh refinement. According to Theorem~\ref{thm:apriori}, we observe linear convergence in the $H^1$-norm and quadratic convergence in the $L^2$-norm. For comparison, Figure~\ref{fig:standardfe} shows the corresponding results using standard non-fitted finite elements. 
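As a consistency check of the interface conditions in~\eqref{problem:1}, the following Python sketch (a hypothetical verification script) evaluates value and radial flux of the two solution branches on the circle $\rho=R$; note the cross-coupling of the coefficients, i.e.\ the quartic branch carries $\kappa_1$ and the quadratic branch $\kappa_2$, which is precisely what makes the flux $\kappa\partial_n u$ continuous:

```python
# Check of the interface conditions for Example 1 on the circle rho = R = 0.5.
# The quartic branch (used in Omega_2) carries kappa_1 and the quadratic
# branch (used in Omega_1) carries kappa_2, so that both the value and the
# flux kappa * du/drho are continuous across the interface.
R, kappa1, kappa2 = 0.5, 0.1, 1.0

def u_outer(rho):           # branch used in Omega_2
    return -2.0 * kappa1 * rho**4

def u_inner(rho):           # branch used in Omega_1
    return -kappa2 * rho**2 + 0.25 * kappa2 - 0.125 * kappa1

# [u] = 0: both branches take the value -kappa1/8 at rho = R
assert abs(u_outer(R) - u_inner(R)) < 1e-14

# [kappa du/drho] = 0: both fluxes equal -kappa1*kappa2 at rho = R
flux_outer = kappa2 * (-8.0 * kappa1 * R**3)
flux_inner = kappa1 * (-2.0 * kappa2 * R)
assert abs(flux_outer - flux_inner) < 1e-14
```

Both interface conditions are satisfied exactly at $\rho=R$.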
\begin{figure}[t] \centering \begin{minipage}{0.65\textwidth} \includegraphics[width=\textwidth]{ball-mod.pdf} \end{minipage} \caption{Example 1: $H^1$- and $L^2$-errors under mesh refinement.}\label{fig:ball} \end{figure} { As numerically computed condition numbers for this and the following example have already been shown in \cite{FreiRichter2014}, we provide here computational evidence that the arising linear systems can be solved with iterative methods such as the conjugate gradient (CG) method instead. We incorporate the scaling of the basis functions by means of a diagonal preconditioner, as discussed in Section~\ref{sec_hierarchical}. In order to analyse the effect of the scaling, we compare the performance of the diagonally preconditioned CG method (dPCG) with a standard CG scheme without preconditioning. Moreover, we also show the performance of a CG scheme with SSOR relaxation as preconditioner (SSOR-PCG, without a scaling of the basis functions). For the latter we choose the relaxation parameter $\omega = 1.2$, see e.g., \cite{Meister2011}. The (absolute) tolerance for the global residual is chosen as $10^{-12}$. The iteration numbers for the non-hierarchical finite element basis introduced in Section~\ref{sec:fe} (nh) and the hierarchical (h) variant described in Section~\ref{sec_hierarchical} in combination with the three CG methods are shown in Table~\ref{tab_CG_iter} on different mesh levels, where each finer mesh is constructed from the coarser one by global mesh refinement. Theoretically, the number of iterations needed to reach a certain tolerance in the CG method should scale with the square root of the condition number, $\mathcal{O}(\sqrt{\kappa})$ (see e.g., \cite{Br07,BuchRichterWick}), i.e.$\,$for the scaled hierarchical approach with a condition number of order $\kappa = \mathcal{O}(h_P^{-2})$, we can expect that the number of iterations grows asymptotically with $\mathcal{O}(h_P^{-1})$. 
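This growth of the iteration counts can be mimicked with a textbook CG implementation on a one-dimensional Laplacian (a minimal sketch, unrelated to the deal.II solvers used for the results below): halving the mesh width quadruples the condition number and roughly doubles the number of iterations.

```python
import math

def cg_iterations(n, tol=1e-10):
    """Unpreconditioned CG for the n x n 1D Laplacian (tridiag [-1, 2, -1]);
    returns the number of iterations to reduce the residual below tol."""
    b = [1.0] * n
    x = [0.0] * n
    def matvec(v):
        return [2.0 * v[i]
                - (v[i - 1] if i > 0 else 0.0)
                - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]
    r = b[:]                       # residual of the zero initial guess
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for it in range(1, 10 * n):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            return it
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return 10 * n

it32, it64 = cg_iterations(32), cg_iterations(64)
# iteration counts grow roughly like sqrt(cond) = O(1/h)
assert it32 < it64 <= 2 * it32 + 6
```

With the absolute tolerance used here, the iteration count roughly doubles from $n=32$ to $n=64$, mirroring the $\mathcal{O}(h_P^{-1})$ growth expected from the condition number.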
This behaviour can be observed quite clearly for the preconditioned CG methods in Table~\ref{tab_CG_iter}. The SSOR preconditioning seems to work even better than the diagonal preconditioning. In this example, the expected convergence of the linear solver can be obtained without using the hierarchical basis functions. The use of the hierarchical basis leads however to an advantage in terms of the absolute numbers of iterations. For the standard CG method without preconditioning, we observe that the number of iterations grows faster than $\mathcal{O}(h_P^{-1})$ for both the hierarchical and the non-hierarchical approach. This has to be expected, as the condition number might be unbounded for certain anisotropies. The observation that the iteration numbers for the scaled non-hierarchical approach seem bounded by $\mathcal{O}(h_P^{-1})$ in this example might be due to the fact that not all kinds of anisotropies are present and that the anisotropies that are present do not necessarily get worse on the finer grids. To study the performance of our approach considering all kinds of anisotropies (see Figure~\ref{basisfunctions}), we will next move the circular interface gradually by small fractions of patch cells in the vertical direction. 
} \begin{table} \begin{center} \begin{tabular}{cc|ccc|ccc} \toprule Level & \#Patches & CG(nh) & dPCG(nh) &SSOR-PCG(nh) & CG(h) & dPCG(h) &SSOR-PCG(h) \\ \hline $0$ & 16 & 10 &10 &15 & 10 &10 &15\\ $1$ & 64 & 43 &29 &32 &64 &39 &25\\ $2$ & 256 & 114 &60 &56 &126 &61 &32 \\ $3$ & 1024 & 253 &124 &97 &197 &95 &47\\ $4$ & 4096 & 561 &238 &175 &351 &167 &81 \\ $5$ & 16384 &1436 &484 &335 &881 &322 &150\\ $6$ & 65536 &3518 &967 &634 &2053 &622 &293\\ \bottomrule \end{tabular} \end{center} \caption{Example 1: Iteration numbers of the linear solvers on different mesh levels for hierarchical (h) and non-hierarchical (nh) versions and the standard CG method compared to a diagonally preconditioned (dPCG) and a SSOR-preconditioned CG (SSOR-PCG) approach.} \label{tab_CG_iter} \end{table} \subsection{Example 2: Performance for different anisotropies} \label{sec_ex_2} To include all kinds of anisotropies, we fix the refinement level to the fourth level of the previous example (4096 patch cells) and move the circular interface gradually in the vertical direction. Precisely, we move the midpoint to the position \begin{align*} x_m= (0, \frac{k}{N} h_P) \end{align*} for $k=0,...,N-1$, where $N=1000$. Note that for $k=N$, the interface would have been moved by exactly one patch cell, i.e.$\,$exactly the same cuts as for $k=0$ would appear. The problem and parameters are exactly the same as in the previous example (note that the exact solution and the data defined above depend on $x_m$). The meshes for $k=0$ and $k=990$ are shown in Figure \ref{fig:ex2_b}. Moreover, in order to illustrate the anisotropic sub-cells, a zoom-in of the cut-meshes for $k=0,10,50$ and $990$ is displayed at larger scale in Figure~\ref{fig:ex2_c}. 
For $k=0$, we find very anisotropic cells in the two patches of type C in the centre; for $k=10$ in four patches of type B; for $k=50$ in two patches of type B in the middle and two patches of type A on the left and right; for $k=990$ very anisotropic cells of type A are present. \begin{figure}[t] \centering {\includegraphics[width=7cm]{visit_Jan_31_2018_0004.png}} {\includegraphics[width=7cm]{visit_Jan_31_2018_0005.png}} \caption{Example 2: The cut-mesh at $k=0$ and $k=990$.} \label{fig:ex2_b} \end{figure} \begin{figure}[t] \centering {\includegraphics[width=6.5cm]{visit_Jan_31_2018_0010.png}} {\includegraphics[width=6.5cm]{visit_Jan_31_2018_0011.png}} {\includegraphics[width=6.5cm]{visit_Jan_31_2018_0012.png}} {\includegraphics[width=6.5cm]{visit_Jan_31_2018_0013.png}} \caption{Example 2: Zoom-in at $k=0$ (top left), $k=10$ (top right), $k=50$ (bottom left) and $k=990$ (bottom right).} \label{fig:ex2_c} \end{figure} {In Table~\ref{tab.aniso}, we show some properties of the triangulation $\mathcal{T}_h$ consisting of the sub-cells for the four different configurations shown in Figure~\ref{fig:ex2_c}. The most anisotropic cells can be found for $k=10$ and $k=990$, where both the largest aspect ratio \begin{align*} \max\limits_{K\in\mathcal{T}_h} \frac{|e_{K,\text{max}}|}{|e_{K,\text{min}}|} \end{align*} of an element and the ratio between the largest and the smallest element size are of order $10^5$. Note that due to the symmetry of the problem and the discretisation, the values for $k=10$ and $k=990$ are identical. 
The element with the largest aspect ratio can be found on the very left of the circle (and due to symmetry also on the very right, see Figure~\ref{fig:ex2_b} on the right), where the patch line connecting the vertices $x_1=(-0.5, 0.03125)$ and $x_2=(-0.46875, 0.03125)$ is cut by the interface at $x_s\approx(-0.4999999,0.03125)$.} \begin{table} \begin{center} \begin{tabular}{c|ccc|ccc} \toprule k &$|K_{\max}|$ &$|K_{\min}|$ & $\frac{|K_{\max}|}{|K_{\min}|}$ & $|e_{\text{max}}|$ & $|e_{\text{min}}|$ & $\max\limits_{K\in\mathcal{T}_h} \frac{|e_{K,\text{max}}|}{|e_{K,\text{min}}|}$ \\ \hline 0 &$2.44\cdot 10^{-4}$ &$3.82\cdot 10^{-\,6\,\,}$ &$6.39\cdot 10^1$ &$3.45\cdot 10^{-2}$ &$4.89\cdot 10^{-4}$ &$3.20\cdot 10^1$ \\ 10 &$2.50\cdot 10^{-4}$ &$7.63\cdot 10^{-10}$ &$3.28\cdot10^5$ &$3.45\cdot 10^{-2}$ &$9.77\cdot 10^{-8}$ &$1.60\cdot 10^5$\\ 50 &$2.52\cdot 10^{-4}$ &$1.91\cdot 10^{-\,8\,\,}$ &$1.32\cdot10^4$ & $3.45\cdot10^{-2}$ &$2.44\cdot10^{-6}$&$6.40\cdot10^3$ \\ 990 &$2.50\cdot 10^{-4}$ &$7.63\cdot 10^{-10}$ &$3.28\cdot10^5$ &$3.45\cdot 10^{-2}$ &$9.77\cdot 10^{-8}$ &$1.60\cdot 10^5$\\ \bottomrule \end{tabular} \end{center} \caption{\label{tab.aniso} Properties of the triangulations $\mathcal{T}_h$ consisting of the sub-cells for the four different configurations shown in Figure~\ref{fig:ex2_c}. In columns 2 to 4, we show the area of the largest and the smallest element $|K_{\max}|$ and $|K_{\min}|$ and their ratio; in columns 5 and 6 the largest and smallest edge lengths $|e_{\text{max}}|$ and $|e_{\text{min}}|$. Finally, column 7 shows the largest aspect ratio over all elements. } \end{table} In order to study the dependence of the iteration numbers on the position of the interface, we plot the number of linear iterations for the three different CG methods and the non-hierarchical and hierarchical basis in Figure \ref{fig:ex2_a} over the increment $k$. 
For both the non-hierarchical and the hierarchical approach, we observe that the iteration numbers decrease by at least a factor of 2 for the diagonal preconditioning and by at least a factor of 4 for the SSOR preconditioning compared to the standard CG method. For the non-hierarchical approach, the iteration numbers depend considerably on the position of the interface, even after preconditioning. With diagonal preconditioning, the iteration count varies between 239 and 585; with SSOR preconditioning, between 129 and 260 iterations are needed. These numbers get worse when the resolution $N$ is increased. In this example, it becomes clear that the non-hierarchical approach suffers from a conditioning issue, even when preconditioning techniques are used. For the hierarchical approach, the iteration numbers seem to be bounded independently of the position of the interface for both preconditioning variants. The diagonally preconditioned CG method needs between 163 and 188 linear iterations, the SSOR-preconditioned CG method between 62 and 81 iterations. Again, the SSOR-preconditioned CG method is superior to the simple diagonal preconditioning, although our analysis of the condition number is based on the scaling of the hierarchical basis \eqref{scaling}, which is only ensured for the diagonal preconditioning. \begin{figure}[t] \centering \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{iter_nonhier.pdf} \end{minipage} \hspace{-0.5cm} \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{iter_hierarchical.pdf} \end{minipage} \caption{Example 2: Number of linear iterations needed for the different CG methods to decrease the residual below a tolerance of $10^{-12}$, plotted over the increment $k$, where for $k=1000$ the circular interface has been moved by exactly one patch cell. \textit{Left}: Non-hierarchical finite element basis. \textit{Right}: Hierarchical basis. 
} \label{fig:ex2_a} \end{figure} \section{Implementation} \label{sec.impl} Our implementation is based on deal.II, version 8.5.0. A short guide on the installation and compilation is given in the file \texttt{README.txt}. We start this section by giving an overview of the basic structure of the source code in Section~\ref{sec.structure}. Then, we describe the implementation of the level set function in Section~\ref{sec.levelset}. In Section~\ref{sec.locmodfe}, we give an overview of the additional steps needed compared to a standard finite element code and how they are implemented in the class \texttt{LocModFE}. Finally, we show in Section~\ref{sec.UsingLocModFE} how these are incorporated in a standard finite element program. \subsection{Structure of the code} \label{sec.structure} The source code can be split into three parts, which can be found in the files \texttt{locmodfe.h} and \texttt{.cc}, \texttt{step-modfe.cc} and \texttt{problem.h}. The following lines are copied from the preamble of the file \texttt{README.txt}:\\ \begin{lstlisting} * The source code includes the following files and classes: * * 1) locmodfe.cc/h: Contain all functions that are specific to the locally * modified FE method * a) class LocModFEValues : Extends the FEValues class in deal.II, where * the local basis functions on the reference patches * are evaluated * b) class LocModFE : Key class of the locally modified finite element * method * * 2) step-modfe.cc: * a) class ParameterReader: Read in parameters from a separate parameter * file * b) class InterfaceProblem : local user file similar to many deal.II * tutorial steps, which controls the general workflow of * the code, for example the solution algorithm, assembly * of system matrix and right-hand side and output * c) int main() * * 3) problem.h: Problem-specific definition of geometry, boundary conditions * and analytical solution * a) class LevelSet : Implicit definition of interface and sub-domains * b) 
class DirichletBoundaryConditions : Definition of the Dirichlet data * c) class ManufacturedSolution : Analytical solution for error estimation \end{lstlisting} \subsubsection*{\texttt{locmodfe.h} and \texttt{locmodfe.cc}} The files \texttt{locmodfe.h} and \texttt{locmodfe.cc} contain all functions that are specific to the locally modified finite element discretisation. The class \texttt{LocModFEValues} extends the \texttt{FEValues} class in \texttt{deal.II}. In this class, the values of the basis functions and their gradients (in deal.II ``shape functions'') as well as the derivatives of the map $\hat{T}_P$ are evaluated at quadrature points on the reference patch, depending on the reference patch type ($\hat{P}_0,...,\hat{P}_3$) and the boolean parameter \texttt{\_hierarchical}, which specifies whether a hierarchical basis is to be used. In the class \texttt{LocModFE}, we check whether patches are cut and in which sub-domains they lie (function \texttt{set\_material\_ids}), and define the type of the cut (configurations A,...,D), the reference patch type ($\hat{P}_0,...,\hat{P}_3$) and the local mappings $\hat{T}_P$ (function \texttt{init\_FEM}). Moreover, we initialise the respective quadrature formulas depending on the reference patches (function \texttt{compute\_quadrature}; more details on the quadrature will be given below), and provide functions to compute norm errors (function \texttt{integrate\_difference\_norms}), to set Dirichlet boundary values in cut patches (function \texttt{interpolate\_boundary\_values}) and to visualise the solution (\texttt{plot\_vtk}). \subsubsection*{\texttt{step-modfe.cc}} In the file \texttt{step-modfe.cc}, we find the \texttt{main()} function and the classes \texttt{ParameterReader} and \texttt{InterfaceProblem}. The class \texttt{ParameterReader} is used to read in parameters from a parameter file, as in many \texttt{deal.II} tutorial steps. 
The class \texttt{InterfaceProblem} is similar to the local user classes found in many of the tutorial steps. It contains, for example, the loops of the Newton iteration as well as functions to assemble the right-hand side and the system matrix. They differ from other \texttt{deal.II} steps only where specific functions from the \texttt{LocModFE} class need to be used. The main modifications that are required for the locally modified finite element method will be explained in detail in the next section. \subsubsection*{\texttt{problem.h}} Finally, the file \texttt{problem.h} contains three classes, in which the geometry, the Dirichlet boundary data and the analytical solution for the specific example to be solved are specified. \subsection{The level set function} \label{sec.levelset} In order to assign an element type to a patch, let us assume that the interface is represented as the zero contour of a level set function $\chi(x)$. In our examples, the function $\chi(x)=\|x-x_m\|^2 - 0.25, x_m=(0,y_{\rm offset})$ is specified by the following expressions in the class \texttt{LevelSet} in the file \texttt{problem.h}: \begin{lstlisting} template <int dim> class LevelSet { ... public: // Compute value of the LevelSet function in a point p double dist(const Point<dim> p) const { return p(0)*p(0) + (p(1)-_yoffset)*(p(1)-_yoffset) -0.5*0.5; } // Derivatives for Newton's method to find cut position double dist_x(const Point<dim> p) const { return 2.0*p(0); } double dist_y(const Point<dim> p) const { return 2.0*(p(1)-_yoffset); } //Determine domain affiliation of a point p int domain(const Point<dim> p) const { double di = dist(p); if (di>=0) return 1; else return -1; } ... }; \end{lstlisting} The function \texttt{double dist(...)} can be used to obtain the value of $\chi$ at a point $p$. 
Moreover, we provide the derivatives \texttt{double dist\_x(...)} and \texttt{double dist\_y(...)}, which will be needed by a Newton method to find the position, at which the interface cuts an exterior edge (see Point 3 below). By means of the function \texttt{int domain(...)}, we obtain the index of the sub-domain, in which $p$ lies. \begin{figure}[t] \centering \begin{picture}(0,0)% \includegraphics{impl2.pdf}% \end{picture}% \setlength{\unitlength}{2486sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2{% \fontsize{#1}{#2pt}% \selectfont}% \fi\endgroup% \begin{picture}(9709,5789)(1197,-5741) \put(6076,-1636){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_2$}% }}}} \put(7426,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\chi<0$}% }}}} \put(8326,-5011){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\chi>0$}% }}}} \put(9946,-4516){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$s$}% }}}} \put(8416,-3796){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\psi=0$}% }}}} \put(10891,-4426){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$=0$}% }}}} \put(1531,-1636){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_0$}% }}}} \put(3736,-1276){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_1$}% }}}} \put(8281,-1276){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_3$}% }}}} \put(9226,-2626){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$v_2$}% }}}} \put(9226,-5326){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$v_1$}% }}}} \put(10126,-4021){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\chi(v_1+s(v_2-v_1))$}% }}}} \put(6571,-5686){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$-$}% }}}} \put(9811,-5686){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$+$}% }}}} 
\put(9811,-2716){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$-$}% }}}} \put(6571,-2671){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$-$}% }}}} \put(1576,-5281){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_0$}% }}}} \put(1576,-3931){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_0$}% }}}} \put(4231,-5281){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_0$}% }}}} \put(3556,-5371){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_1$}% }}}} \put(5176,-3661){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_2$}% }}}} \put(2836,-3796){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\color[rgb]{0,0,0}$\hat{P}_3$}% }}}} \end{picture}% \caption{Implementation of the parametric patch-based approach. \textit{Top row:} Four different reference patches. \textit{Lower left:} Sample mesh with patches corresponding to all four variants. \textit{Lower right:} Identification of the cut points by means of the level set function $\chi$.} \label{fig:impl} \end{figure} \subsection{Implementation of the class \texttt{LocModFE}} \label{sec.locmodfe} Before we describe the additional steps needed for the locally modified finite element approach in detail, let us note that a patch is affected by the interface if $\chi$ shows different signs in two of the four outer vertices. In the same way, we identify the edges cut by the interface. Let $v_1$ and $v_2$ be the two outer nodes of an edge with $\chi(v_1)>0>\chi(v_2)$, see Figure~\ref{fig:impl}. The exact coordinate where the interface line crosses an edge, can be found by a simple Newton method to find the zero $s_0$ of \[ f(s)=\chi\big(v_1+s( v_2- v_1)\big)=0. \] The following steps are executed in each patch $P\in \mathcal{T}_{2h}$ before the system matrix and right-hand side are assembled. 
Note that all these operations are local operations on the patch level: \begin{enumerate} \item We equip the four exterior vertices $v_i, i=0,...,3$ of the patches with a colour (-1 or 1), based on the value of the function \texttt{domain}($v_i$) in the LevelSet class. \item We equip each patch with a colour (-1, 0 or 1): -1 and 1 if the colour of the four vertices in 1.$\,$is -1 or 1 for all of them, respectively; 0 for interface patches with vertices in both sub-domains. \item If $P$ is an interface patch, find the two edges $e_1$ and $e_2$ affected by the interface by checking the colour of the end vertices $v_1$ and $v_2$ as in 2.$\,$and compute the exact cut position on both edges by using Newton's method to find the zeros $s_0$ of \[ f(s) = \chi\big(v_1+s( v_2- v_1)\big). \] \item Specify the type of the cut (configuration A,...,D) and define the reference patch type $\hat{P}_0,...,\hat{P}_3$. \item Define the local mapping $\hat{T}_P: \hat{P}_i \to P$ by means of the position of the 9 vertices in the physical patch $P$: The degrees of freedom of the two edges $e_1$ and $e_2$ affected by the interface are moved to the point $v_1+s_0( v_2- v_1)$. The position of the midpoint depends on the configuration $A,...,D$ (see Sections~\ref{sec:fe} and~\ref{sec_hierarchical}). \item Choose one of the four quadrature formulas, depending on the reference patch $\hat{P}_0,...,\hat{P}_3$. \end{enumerate} \subsubsection*{Steps 1 and 2 (implemented in \texttt{set\_material\_ids})} We now provide some code snippets to illustrate how these steps are implemented in the class \texttt{LocModFE}. Steps 1 and 2 are implemented in the function \texttt{void set\_material\_ids}: \begin{lstlisting} template <int dim> void LocModFE<dim>::set_material_ids (const DoFHandler<dim> &dof_handler, const Triangulation<dim> &triangulation) { ... 
unsigned int subdom1_counter; for (unsigned int cell_counter = 0; cell!=endc; ++cell, cell_counter++) { subdom1_counter = 0; for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v) { //First determine the sub-domain of the four outer vertices double chi_local = chi.domain(cell->vertex(v)); node_colors[cell->vertex_index(v)] = chi_local; if (chi_local > 0) subdom1_counter ++; } //Based on the colors of the vertices, specify a color for the patches //(0 stands for an interface patch) if (subdom1_counter == 4) cell_colors[cell_counter] = 1; else if (subdom1_counter == 0) cell_colors[cell_counter] = -1; else cell_colors[cell_counter] = 0; } } \end{lstlisting} First, we set in line 16 the \texttt{node\_color} for each of the four outer vertices of the patch, based on the value of the level set function \texttt{chi} (step 1). Moreover, we count the number of outer vertices of the patch lying in sub-domain 1 (line 18) by means of the counter \texttt{subdom1\_counter}. If the result is 0 or 4, the patch lies completely in one sub-domain and the \texttt{node\_color} of the four vertices (-1 or 1) is set as \texttt{cell\_color} for the patch; otherwise, we set the \texttt{cell\_color} to 0, which corresponds to an interface patch (line 28). \subsubsection*{Steps 3 to 5 (implemented in \texttt{init\_FEM})} Steps 3 to 5 are implemented in the function \begin{lstlisting} void init_FEM(const typename DoFHandler<dim>::active_cell_iterator &cell, unsigned int cell_counter, FullMatrix<double> &M, const unsigned int dofs_per_cell, unsigned int &femtype_int, std::vector<double> &LocalDiscChi, std::vector<int>& NodesAtInterface); \end{lstlisting} As the implementation of this function is quite lengthy, let us only discuss its outputs: The resulting reference patch type (step 4) is written to the variable \texttt{femtype\_int}. As shown in \eqref{TP}, the map $\hat{T}_P$ can be parametrised by the coordinates of the nine vertices $x_i^P, i=1,...,9$ in the physical patch $P$. 
These are stored in the $2\times 9$ matrix \texttt{M}. Moreover, we would like to mention the vector \texttt{LocalDiscChi}, which contains the nine values of the level set function $\chi(x_i^P)$ at the vertices. These parametrise a discrete level set function $\chi_h$, which will be used in the computations, see the following paragraph. \subsubsection*{Step 6 (implemented in \texttt{compute\_quadrature})} For the choice of the quadrature formula depending on the reference patch type (step 6), we use the function \begin{lstlisting} Quadrature<dim> compute_quadrature (int femtype); \end{lstlisting} The four different quadrature formulas that can be chosen are defined in the function \begin{lstlisting} void initialize_quadrature(); \end{lstlisting} which has to be called once at the beginning of the program (for example within the function \texttt{run}, see Section~\ref{sec.UsingLocModFE}). The integration points are chosen as the four Gauss points of the Gaussian integration formula of order one in each of the sub-quadrilaterals and as the three Gauss points of the corresponding Gaussian integration formula in each of the sub-triangles. This results in a total of 16 integration points in regular patches and 24 integration points in interface patches. \subsection{Using the functions of the class \texttt{LocModFE} in a standard finite element program} \label{sec.UsingLocModFE} In order to access the functions of the class \texttt{LocModFE}, we have added the object \begin{lstlisting} LocModFE<dim> lmfe; \end{lstlisting} as a member to the user class \texttt{InterfaceProblem}. \subsubsection*{The \texttt{run} method} As in {almost all} \texttt{deal.II} tutorial steps, the workflow of the code is controlled by the function \texttt{void run()} of the user class \texttt{InterfaceProblem}. 
We show this function here for test case 2, skipping some lines with \texttt{'...'} that contain only output to the console (\texttt{std::cout <<}): \begin{lstlisting} template <int dim> void InterfaceProblem<dim>::run () { set_runtime_parameters(); setup_system(); lmfe.initialize_quadrature(); //Memorize initial solution Vector<double> initial_solution = solution; std::cout << std::endl; if (test_case == 1) { ... } else if (test_case == 2) { for (unsigned int i=0; i < N_testcase2; ++i) { // Move y-position of circle at each step _yoffset = (double)i / (double)N_testcase2 * min_cell_vertex_distance; lmfe.LevelSetFunction()->set_y_offset (_yoffset); // Reset material_ids based on the new interface location lmfe.set_material_ids (dof_handler, triangulation); std::cout << ... // Solve system with Newton solver newton_iteration (); // Compute functional values (error norms) compute_functional_values(false); ... // Write solutions as *.vtk file lmfe.plot_vtk (dof_handler,fe,solution,i); } } // end test_case 2 } \end{lstlisting} \subsubsection*{\texttt{void initialize\_quadrature()} and level set function} The first function of the class \texttt{LocModFE} that is used is \texttt{void initialize\_quadrature()} in line 6, which initialises the four quadrature formulas for the four reference patch types $\hat{P}_0,...,\hat{P}_3$, which can be accessed by means of \texttt{lmfe.compute\_quadrature(int femtype)} later on. In lines 21 and 22, the vertical position of the circular interface is updated by means of the y-coordinate (\texttt{\_yoffset}) of the midpoint $x_m$ of the circle and then passed to the level set function of the class \texttt{LocModFE}. Remember that in this test case the interface is moved gradually upwards. \subsubsection*{\texttt{void set\_material\_ids(...)} and \texttt{newton\_iteration()}} Next, the function \texttt{void set\_material\_ids(...)} is called in line 25, which sets the colours for vertices and patches as explained above. 
All the computations are then done within the function \texttt{newton\_iteration()} in line 30. The source code of this function itself contains no content that is specific to the locally modified finite element method. In fact, the Newton solver is largely taken from \cite{Wi11_fsi_with_deal}. The only modified functions that are called within \texttt{newton\_iteration()} are the assembly of the system matrix and right-hand side, which will be discussed below, and the function \texttt{set\_initial\_bc}, which has to be modified in interface patches by calling\\ \texttt{lmfe.interpolate\_boundary\_values(...)}. After the Newton iteration, functional values are computed in the function \texttt{compute\_functional\_values(...)}, which uses the modified function \texttt{lmfe.integrate\_difference\_norms(...)}. Finally, the results are written to a vtk file by \texttt{lmfe.plot\_vtk} in line 37, together with a mesh consisting of the sub-cells of the patches. \subsubsection*{\texttt{assemble\_system\_matrix()}} Within the function \texttt{newton\_iteration}, the functions \texttt{assemble\_system\_matrix()} and \texttt{assemble\_system\_rhs()} are called. We show the former here as an example; the modifications in the assembly of the right-hand side are analogous: \begin{lstlisting} template <int dim> void InterfaceProblem<dim>::assemble_system_matrix () { ... 
LocModFEValues<dim>* fe_values; //We initialize one LocModFEValue object for patch type 0 and one for patch //types 1 to 3, due to the different number of integration points Quadrature<dim> quadrature_formula0 = lmfe.compute_quadrature(0); LocModFEValues<dim> fe_values0 (fe, quadrature_formula0, _hierarchical, update_values | update_quadrature_points | update_JxW_values | update_gradients); Quadrature<dim> quadrature_formula1 = lmfe.compute_quadrature(1); LocModFEValues<dim> fe_values1 (fe, quadrature_formula1, _hierarchical, update_values | update_quadrature_points | update_JxW_values | update_gradients); \end{lstlisting} After some variable definitions that we have skipped here in line 4, we initialise a pointer\\ \texttt{LocModFEValues<dim>* fe\_values(...)}. Depending on the patch type, this pointer will be set for each patch in the following loop to one of the objects \texttt{LocModFEValues<dim> fe\_values0(...)} (patch type $\hat{P}_0$) or \texttt{LocModFEValues<dim> fe\_values1} (patch type $\hat{P}_1,...,\hat{P}_3$) defined in the lines 11 and 16. We initialise these two objects before the loop over all patches for efficiency reasons. Two different objects are needed as the local number of quadrature points is different for patch type $\hat{P}_0$ compared to the interface patch types. Next, we start the loop over all patches, in which the local contribution to the global system matrix is computed. Before we can compute the local basis functions and their gradients, we have to call the function \texttt{init\_FEM(...)}, that sets the patch type (\texttt{femtype}), the local mapping $\hat{T}_P$ (\texttt{M}) and the discrete level set function $\chi_h$ (\texttt{LocalDiscChi}), see line 25. Then, the quadrature formula that corresponds to the patch type is set in line 27 and one of the two objects of type \texttt{LocModFEValues}, that were initialised above, is chosen. 
The quadrature formula, as well as the patch type and the local mapping $\hat{T}_P$ are then passed to this object in line 34. Now, we are ready to compute the local basis functions, their gradients and the derivatives of the mapping $\hat{T}_P$, that are needed to compute the entries of the system matrix. This is done by \texttt{fe\_values->reinit(J)} in line 38:\\ \begin{lstlisting}[firstnumber=19] for (unsigned int cell_counter = 0; cell!=endc; ++cell,++cell_counter) { local_matrix=0; //Set patch type (femtype), map T_P (M), local level set function (LocalDiscChi) //and list of nodes at the interface lmfe.init_FEM (cell,cell_counter,M,dofs_per_cell,femtype, LocalDiscChi, NodesAtInterface); Quadrature<dim> quadrature_formula = lmfe.compute_quadrature(femtype); const unsigned int n_q_points = quadrature_formula.size(); //Choose one of the initialized objects for LocModFEValues if (femtype==0) fe_values = &fe_values0; else fe_values = &fe_values1; fe_values->SetFemtypeAndQuadrature(quadrature_formula, femtype, M); std::vector<double> J(n_q_points); //Now the shape functions on the reference patch are initialized fe_values->reinit(J); \end{lstlisting} Next, we have a loop over the quadrature points and over the local degrees of freedom as usual in a finite element program. In order to compute the diffusion coefficient $\kappa$ (\texttt{viscosity}), we use the discrete level set function $\chi_h$. The value of $\chi_h$ in the quadrature point $q$ is extracted from the vector \texttt{LocalDiscChi} by the function \texttt{lmfe.ComputeLocalDiscChi(...)} in line 50. Note that it is important to use this discrete level set function for the assembly of matrix and right-hand side, as otherwise $\kappa$ would jump within a sub-element and the program would not be robust with respect to high-contrast coefficients. 
The remaining lines are {standard and very similar to many other \texttt{deal.II} tutorial steps (e.g., deal.II-step-22)}:\\ \begin{lstlisting}[firstnumber=39] for (unsigned int q=0; q<n_q_points; ++q) { for (unsigned int k=0; k<dofs_per_cell; ++k) { phi_i_u[k] = fe_values->shape_value (k, q); phi_i_grads_u[k] = fe_values->shape_grad (k, q); } //Get the domain affiliation to set the viscosity. //This is based on the discrete level set function, such that //all quadrature points in a sub-cell lie in the same sub-domain lmfe.ComputeLocalDiscChi(ChiValue, q, *fe_values, dofs_per_cell, LocalDiscChi); if (ChiValue < 0.) viscosity = visc_1; //Subdomain Omega_1 (inside the circle) else viscosity = visc_2; //Subdomain Omega_2 (outside the circle) //Compute matrix entries as in other deal.II program. for (unsigned int i=0; i<dofs_per_cell; ++i) { for (unsigned int j=0; j<dofs_per_cell; ++j) { local_matrix(j,i) += viscosity * phi_i_grads_u[i] * phi_i_grads_u[j] * J[q] * quadrature_formula.weight(q); } } } //Write into global matrix ... } ... } \end{lstlisting} Finally, we remark that besides the described function calls, no further modifications are necessary in comparison to any other standard FEM code or deal.II tutorial program. \section{Conclusion and outlook} \label{sec.conclusion} In this paper, we have explained in detail the implementation of the locally modified finite element method first proposed in \cite{FreiRichter2014}. The underlying framework is based on the open-source finite element library deal.II, \cite{dealII85}. Moreover, we have illustrated the performance of the method by means of two numerical tests. We have shown that iterative methods such as the CG method can be used to solve the arising linear systems of equations, and we have analysed the performance of the linear iterative solvers with respect to mesh refinement and different anisotropies. 
The method can be applied to simulate the Stokes or Navier-Stokes equations with equal-order elements and pressure stabilisations. The only difficulty lies in the treatment of the anisotropic cells within the stabilisation terms. A solution for the Continuous Interior Penalty (CIP) stabilisation has been proposed in \cite{FreiDiss}, \cite{FreiPressureStab}. In order to obtain higher-order accuracy, the interface has to be resolved with higher order. This can be achieved by using maps $\hat{T}_P$ of higher polynomial degree. We would like to remark, however, that this might lead to additional difficulties concerning the degeneration of the sub-elements within the patches. A promising alternative is the use of so-called ``boundary value correction'' techniques at the interface, see \cite{Burmanetal2018}. Moreover, the locally modified FEM has a natural extension to three space dimensions. The mathematical, numerical and algorithmic requirements are the subject of ongoing work. Another desirable feature is the parallelisation of the approach. Here, we do not anticipate major difficulties, since the programming structure is similar to step-42 of the deal.II tutorial programs. As all the additions compared to a standard deal.II code are local on the patch level, this should in principle be possible without further difficulties. \section{Acknowledgements} The first author was supported by the DFG Research Scholarship FR3935/1-1. The third author gratefully acknowledges the travel support from University College London (i.e., Eric Burman) for finalising this work. \bibliographystyle{plainnat} \input{paper_TOMS.bbl} \end{document}
\section{Introduction} \noindent Since Henry Darcy's remarkable modeling of the linear relation between water flux and the hydraulic gradient, i.e., Darcian flow, in 1856, many researchers have found that Darcy's law is not adequate to describe water and gas flow in low-permeability media such as clay and shale. Consequently, in past decades, extensive efforts have been devoted to modeling the nonlinear relation between water flux and hydraulic gradient, called non-Darcian flow \citep{Liu2014,Liu2016}. Focusing on low-permeability media such as clay, \cite{Miller1963} postulated a threshold gradient for water flow in clays to distinguish linear from nonlinear flow. They found that the water flow rate is linearly related to the hydraulic gradient at gradients above the threshold, whereas no flow occurs below it. \cite{Deng2007} suggested a new equation for nonlinear flow in saturated clays that describes the characteristics of the flow curve from low to high hydraulic gradients. Generally speaking, non-Darcian flow can be described by nonlinear functions of water flux and hydraulic gradient, such as exponential and power functions. \cite{Hansbo1960,Hansbo2001} proposed a power relationship between water flux and hydraulic gradient for non-Darcian flow in clay media. By analyzing data sets for water flow in clay soils, \cite{Swartzendruber1962} proposed an exponential function to modify Darcy's law, resulting in a nonlinear relation between water flux and gradient. In order to capture non-Darcian flow behavior, \cite{Liu2012} developed a new relationship between water flux and hydraulic gradient by generalizing the existing relationships; it is shown to be consistent with experimental observations under both saturated and unsaturated conditions. Validating Darcy's law and developing non-Darcian models thus seems to be an endless challenge, which motivates a new approach. 
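Schematically, the flux--gradient laws mentioned above can be contrasted as follows (the parameter names are ours, not the exact forms of the cited works):
\begin{align*}
q &= K i && \text{(Darcian flow)},\\
q &= \begin{cases} 0, & i \le i_0,\\ K(i - i_0), & i > i_0, \end{cases} && \text{(threshold-gradient flow)},\\
q &= K i^{m}, \quad m \neq 1, && \text{(power-law non-Darcian flow)},
\end{align*}
where $q$ denotes the water flux, $i$ the hydraulic gradient, $K$ a conductivity parameter, $i_0$ a threshold gradient and $m$ a fitting exponent.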
According to Darcy's law, the water flux is directly proportional to the hydraulic gradient, i.e., to the first-order (integer-order) derivative of the water head with respect to the flow distance. In other words, Darcian flow can be described by an integer derivative of the water head. Non-Darcian flow in porous media, as a nonlinear phenomenon, requires a new mathematical approach; in this case, non-Darcian flow could be characterized by a fractional derivative. The fractional calculus, referring to integrals and derivatives of arbitrary real or complex order, is a mathematical discipline more than 300 years old. Its original conception is believed to have stemmed from a question raised in 1695 by Marquis de L'H\^{o}pital (1661-1704) to Gottfried Wilhelm Leibniz (1646-1716), a founder of calculus. In the past few decades, the fractional calculus has gained remarkable popularity and importance because of its demonstrated applications in numerous seemingly diverse and widespread fields of science and engineering \citep{Herrmann2011, Ortigueira2011}, such as the time-dependent behavior of rocks \citep{Zhou2011,Zhou2013} and composites \citep{Zhou2017}, fluid mechanics \citep{Kulish2002}, and solid mechanics \citep{Carpinteri2002,Carpinteri2004,Rossikhin2010}. Moreover, several researchers have applied fractional-derivative modeling to non-Darcian flow. \cite{He1998} proposed a new model for seepage flow in porous media that modifies Darcy's law with fractional derivatives. \cite{Tian2006} studied the flow characteristics of fluids through a fractal reservoir with the fractional-order derivative. By regarding the water flux as a function of a fractional derivative of the piezometric head, \cite{Cloot2006} generalized the classical Darcy's law to derive a new equation of groundwater flow.
\cite{Chen2013} developed a new variable-order fractional diffusion equation to describe the diffusion of chloride ions in reinforced concrete structures. \cite{Babak2014} presented a unified fractional differential approach to modeling flows of slightly compressible fluids through naturally fractured media. Recently, \cite{Wang2015} applied a Caputo fractional constitutive equation to describe the transient electro-osmotic flow of a generalized Maxwell fluid in a cylindrical capillary. As described by \cite{Cloot2006}, the basic assumption underlying the fractional derivative modeling approach to transport in porous media is that the fluid flow at a given point is governed not only by the properties of the piezometric field at that position but also by the global spatial distribution of that field in the soil matrix. As a consequence, time or space fractional derivatives are extensively used in models of solute transport in porous media to account for the memory effect or nonlocal properties induced by the interactions of fluid particles with the pores of the medium. Nevertheless, time or space fractional derivative models usually need to be nondimensionalized for convenience. Therefore, a different perspective on this problem, based on the fractional calculus, is presented herein. This paper represents an attempt to describe non-Darcian flow mathematically. The Swartzendruber equation, as a non-Darcian flow model, is generalized using a fractional derivative to describe the relation between water flux and hydraulic gradient, resulting in a new model called the fractional derivative flow model. The analytic solution of the fractional derivative flow model is presented, and all of its parameters are determined on the basis of experimental data of water flow in low-permeability media.
The results estimated by the proposed fractional derivative flow model agree with the experimental data better than those estimated by the Swartzendruber model, indicating that this fractional derivative modeling perspective is suitable for non-Darcian flow in porous media. \section{Fractional derivative approach to non-Darcian flow} \subsection{Definition of the Caputo derivative} Several definitions of fractional derivatives are popular in mathematics, such as the Gr\"{u}nwald-Letnikov, Riemann-Liouville, and Caputo derivatives \citep{Podlubny1999}. Among them, the Caputo derivative is widely used in physics and mechanics because of its advantages in solving fractional differential equations with initial conditions. For a given function $f(x)$, the Caputo derivative is defined by \begin{equation}\label{Eq.(1)} \frac{{{d^\gamma }f(x)}}{{d{x^\gamma }}} = \frac{1}{{\Gamma (n - \gamma )}}\int_0^x {\frac{{{f^{(n)}}(t)}}{{{{(x - t)}^{\gamma - n + 1}}}}dt}, \end{equation} where $\gamma>0$, $n$ is the least integer greater than $\gamma$, and $\Gamma(\cdot)$ is the Gamma function, i.e., $\Gamma (\gamma ) = \int_0^\infty {{t^{\gamma - 1}}{e^{ - t}}dt}$. In particular, for $\gamma=0$, $\frac{{{d^\gamma }}}{{d{x^\gamma }}}$ denotes the identity operator. \subsection{Darcian flow} Considering one-dimensional steady-state flow of a fluid along a straight line, say the $x$-direction, the flux is related to the hydraulic gradient by the well-known Darcy's law, \begin{equation}\label{Eq.(2)} q = \frac{K}{\mu }\frac{{dp}}{{dx}}, \end{equation} where $q$ is the bulk velocity of the fluid $(\mathrm{m}/\mathrm{s})$, or fluid flux, $K$ is the permeability $(\mathrm{m}^2)$, $\mu$ is the dynamic viscosity $(\mathrm{N}\cdot\mathrm{s}/\mathrm{m}^2)$, and $p$ is the fluid pressure $(\mathrm{N}/\mathrm{m}^2)$.
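The Caputo definition in Eq.(\ref{Eq.(1)}) can be checked numerically for the common case $0<\gamma<1$ (so $n=1$). The following minimal Python sketch, assuming SciPy is available and with illustrative function names, evaluates the integral in Eq.(\ref{Eq.(1)}) by adaptive quadrature and compares it with the known closed form $\frac{d^\gamma x}{dx^\gamma} = \frac{x^{1-\gamma}}{\Gamma(2-\gamma)}$:

```python
import math
from scipy.integrate import quad

def caputo_derivative(f_prime, x, gamma):
    """Caputo derivative of order 0 < gamma < 1, following Eq.(1) with n = 1.

    f_prime is the first derivative of f; the endpoint singularity
    (x - t)^(-gamma) is integrable and handled by adaptive quadrature."""
    integrand = lambda t: f_prime(t) * (x - t)**(-gamma)
    integral, _ = quad(integrand, 0.0, x)
    return integral / math.gamma(1.0 - gamma)

# Check against the closed form for f(x) = x (so f'(t) = 1):
# d^gamma x / dx^gamma = x^(1-gamma) / Gamma(2-gamma)
gamma, x = 0.6, 2.0
numeric = caputo_derivative(lambda t: 1.0, x, gamma)
exact = x**(1.0 - gamma) / math.gamma(2.0 - gamma)
```

The same routine also confirms the identity-operator limit discussed above: as $\gamma \to 0$ the computed derivative approaches $f(x)-f(0)$ plus $f(0)$ correction terms in the Caputo sense.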
Eq.(\ref{Eq.(2)}) is usually written as \begin{equation}\label{Eq.(3)} q = ki, \end{equation} where $k = \frac{K}{\mu }\rho$ is the hydraulic conductivity $(\mathrm{m}/\mathrm{s})$, $\rho$ is the unit weight of the fluid $(\mathrm{N}/\mathrm{m}^3)$, and $i$ is the hydraulic gradient. \subsection{Non-Darcian flow} Non-Darcian flow can generally be described by nonlinear functions of water flux and hydraulic gradient such as power and exponential functions. (1) Power function. Darcy's law leads to the linear relation between $q$ and $i$ shown in Eq.(\ref{Eq.(3)}). Differentiating both sides of Eq.(\ref{Eq.(3)}) with respect to $i$ yields the differential equation \begin{equation}\label{Eq.(4)} dq = k\,di \quad \mathrm{or} \quad \frac{{dq}}{{di}} = k. \end{equation} This shows that Darcian flow can be described by an integer-order derivative of the flux with respect to the hydraulic gradient $i$, which is a dimensionless variable. A similar model is the Newtonian dashpot, which describes a linear relationship between viscous stress and strain rate; the Newtonian dashpot was generalized to the Abel dashpot by invoking the fractional derivative \citep{Scott-Blair1944,Kiryakova1999,Zhou2011,Zhou2013}. In an analogous way, we suppose that non-Darcian flow can be described by a fractional derivative of the flux, which remains dimensionless, i.e., \begin{equation}\label{Eq.(5)} \frac{{{d^\gamma }q}}{{d{i^\gamma }}} = k,\quad \gamma>0, \end{equation} where $\frac{{{d^\gamma }}}{{d{i^\gamma }}}$ is the Caputo fractional derivative operator. Applying the Laplace transform $(LT)$ to Eq.(\ref{Eq.(5)}) gives \begin{equation}\label{Eq.(6)} LT\left[\frac{{d^\gamma }q}{d{i^\gamma }}\right] = {s^\gamma } \tilde q(s) - \sum\limits_{j = 0}^{n - 1} {{s^{\gamma - j - 1}}\frac{{{d^j}q(0)}}{{d{i^j}}}} = \frac{k}{s}. \end{equation} Setting $q(0) = 0$, we have \begin{equation}\label{Eq.(7)} \tilde q(s) = \frac{k}{{{s^{\gamma + 1}}}}.
\end{equation} Applying the inverse Laplace transform to Eq.(\ref{Eq.(7)}), i.e., \begin{equation}\label{Eq.(8)} L{T^{ - 1}}[\tilde q(s)] = L{T^{ - 1}}\left[\frac{k}{{{s^{\gamma + 1}}}}\right] = \frac{k}{{\Gamma (1 + \gamma )}}{i^\gamma }, \end{equation} we have \begin{equation}\label{Eq.(9)} q = k\frac{{{i^\gamma }}}{{\Gamma (1 + \gamma )}}. \end{equation} In this case, we obtain a power function of water flux $q$ and hydraulic gradient $i$ in Eq.(\ref{Eq.(9)}), similar in form to the nonlinear equation proposed by \cite{Hansbo1960, Hansbo2001}. (2) Exponential function: fractional Swartzendruber equation. \cite{Swartzendruber1962} proposed an exponential relation between water flux and hydraulic gradient to modify Darcy's law, i.e., \begin{equation}\label{Eq.(10)} \frac{{dq}}{{di}} = k(1 - {e^{ - \frac{i}{I}}}). \end{equation} Integrating both sides of Eq.(\ref{Eq.(10)}) with $q(0)=0$, we have \begin{equation}\label{Eq.(11)} q = k[i - I(1 - {e^{ - \frac{i}{I}}})], \end{equation} where $I$ is the threshold gradient, which corresponds to the intercept on the hydraulic-gradient axis of the extrapolated linear portion of the plot of water flux versus hydraulic gradient. Replacing the integer derivative with a fractional derivative in Eq.(\ref{Eq.(10)}), we obtain the fractional derivative Swartzendruber equation, i.e., \begin{equation}\label{Eq.(12)} \frac{{{d^\gamma }q}}{{d{i^\gamma }}} = k(1 - {e^{ - \frac{i}{I}}}),\quad 0\leq\gamma\leq1. \end{equation} Applying the Laplace transform $(LT)$ to Eq.(\ref{Eq.(12)}) leads to \begin{equation}\label{Eq.(13)} {s^\gamma } \tilde q(s) = k\left( \frac{1}{s} - \frac{1}{s + 1/I} \right), \end{equation} so that \begin{equation}\label{Eq.(14)} \tilde q(s) = \frac{{k{s^{ - \gamma - 1}}}}{{1 + Is}}.
\end{equation} Applying the inverse Laplace transform to Eq.(\ref{Eq.(14)}), i.e., \begin{equation}\label{Eq.(15)} L{T^{ - 1}}[\tilde q(s)] = L{T^{ - 1}}\left[\frac{{k{s^{ - \gamma - 1}}}}{{1 + Is}}\right] = \frac{k}{I}{i^{\gamma + 1}}{E_{1,\gamma + 2}}( - \frac{i}{I}), \end{equation} we have \begin{equation}\label{Eq.(16)} q = \frac{k}{I}{i^{\gamma + 1}}{E_{1,\gamma + 2}}( - \frac{i}{I}), \end{equation} where ${E_{1,\gamma + 2}}(\cdot)$ is the two-parameter Mittag-Leffler function, i.e., ${E_{1,\gamma + 2}}( - \frac{i}{I}) = \sum\limits_{k = 0}^\infty {\frac{{{{( - \frac{i}{I})}^k}}}{{\Gamma (k + \gamma + 2)}}}$ \citep{Mainardi2010}. In the case of $\gamma=0$, using ${E_{1,2}}( - \frac{i}{I}) = \frac{{{e^{ - i/I}} - 1}}{{ - i/I}}$, Eq.(\ref{Eq.(16)}) can be rewritten as \begin{equation}\label{Eq.(17)} q = \frac{k}{I}i{E_{1,2}}( - \frac{i}{I}) = k(1 - {e^{ - \frac{i}{I}}}), \end{equation} whose right-hand side is that of Eq.(\ref{Eq.(10)}), consistent with the Caputo derivative reducing to the identity operator at $\gamma=0$. In addition, the case of $\gamma=1$ gives \begin{equation}\label{Eq.(18)} {E_{1,3}}( - \frac{i}{I}) = \frac{{{e^{ - i/I}} - 1 + i/I}}{{{{(i/I)}^2}}}. \end{equation} Substituting Eq.(\ref{Eq.(18)}) into Eq.(\ref{Eq.(16)}) yields $q = k(i - I(1 - {e^{ - \frac{i}{I}}}))$, indicating that the Swartzendruber equation in Eq.(\ref{Eq.(11)}) is a special case of the fractional derivative flow model with fractional derivative order $\gamma=1$. \section{Parameter determination for the fractional derivative non-Darcian model} \subsection{Parameter determination} The efficacy of the fractional derivative model depends on its ability to fit experimental data adequately. Using experimental data of water flux versus hydraulic gradient, the parameters $k$, $I$, and $\gamma$ in Eq.(\ref{Eq.(16)}) can be determined by the Levenberg-Marquardt method, a nonlinear least-squares fitting (LSF) method (see \citealt{Zhou2011} for details).
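The special cases above can be verified numerically by summing the Mittag-Leffler series directly. The following minimal Python sketch (function names and truncation level are illustrative assumptions) implements Eq.(\ref{Eq.(16)}) and Eq.(\ref{Eq.(11)}); at $\gamma=1$ the two coincide to machine precision, and at $\gamma=0$ the model returns $k(1-e^{-i/I})$, matching Eqs.(\ref{Eq.(17)})-(\ref{Eq.(18)}):

```python
import math

def mittag_leffler_1b(z, beta, terms=120):
    """Two-parameter Mittag-Leffler function E_{1,beta}(z) by truncated series."""
    return sum(z**n / math.gamma(n + beta) for n in range(terms))

def q_fractional(i, k, I, gamma):
    """Fractional derivative flow model, Eq.(16)."""
    return (k / I) * i**(gamma + 1) * mittag_leffler_1b(-i / I, gamma + 2)

def q_swartzendruber(i, k, I):
    """Swartzendruber equation, Eq.(11)."""
    return k * (i - I * (1.0 - math.exp(-i / I)))
```

Direct series summation is adequate here because the argument $-i/I$ stays moderate for realistic gradients; for large arguments an asymptotic expansion of the Mittag-Leffler function would be preferable.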
In what follows, we use the fractional derivative flow model to fit the experimental data of \cite{Wang2016} by LSF analysis. \cite{Wang2016} carried out an experimental study of the non-Darcian behavior of water flow in soil-rock mixtures (SRM) with various rock block percentages. Their work presented water flux data as a power function of hydraulic gradient, and the relationship between threshold hydraulic gradient and rock block percentage was also considered. The exact values of the threshold hydraulic gradient $I$ for the SRM specimens are listed in \autoref{Table 1}; consequently, only the two parameters $k$ and $\gamma$ remain to be determined. The results of the least-squares fit of the parameters in Eq.(\ref{Eq.(16)}) to the experimental data \citep{Wang2016} are listed in \autoref{Table 1}. \begin{table}[!tb] \centering \caption{\small Determination of parameters for fractional derivative flow model based on SRM specimens}\label{Table 1} \scalebox{0.85}{ \begin{tabular}[]{cccccccccc} \hline SRM &\multicolumn{4}{c}{Swartzendruber equation} & \multicolumn{5}{c}{Fractional derivative flow model}\\ \cline{2-10} specimens & $k\times10^{-5}(\mathrm{m}/\mathrm{s})$ & $I$ &$\mathrm{R}^2$ &MSE & $ k\times10^{-5}(\mathrm{m}/\mathrm{s})$ & $I $ &$\gamma$& $\mathrm{R}^2$ & MSE\\ \hline SRM20-1 & 0.2039 & 141.00 & 0.9817 &0.2242 & 0.2039 & 141.00 & 1 &0.9817 &0.2242\\ SRM30-1 & 0.1378 & 130.20 & 0.9898 &0.0531 & 0.1378 & 130.20 & 1 &0.9898 &0.0531\\ SRM40-1 & 0.07383& 123.60 & 0.9869 &0.1358 & 0.1311 & 123.60 &0.8501 &0.9908 &0.0095\\ SRM50-1 & 0.1297 & 102.50 & 0.9326 &0.2322 & 0.1297 & 102.50 & 1 &0.9326 &0.2322\\ SRM60-1 & 0.1175 & 85.33 & 0.9786 &0.0363 & 0.1954 & 85.33 &0.8567 &0.9819 &0.0307\\ SRM70-1 & 0.1402 & 73.59 & 0.9710 &0.0291 & 0.3223 & 73.59 &0.7454 &0.9807 &0.0193\\ \hline \end{tabular}} \end{table} The data as well as the fitting curves given by the fractional derivative flow model in Eq.(\ref{Eq.(16)}) are shown in
\autoref{Fig1}. Fitting the same experimental data with the Swartzendruber equation in Eq.(\ref{Eq.(11)}) yields another set of parameters, also given in \autoref{Table 1}. The least-squares results in \autoref{Table 1} indicate that the fractional derivative flow model in Eq.(\ref{Eq.(16)}) agrees with the experimental data better than the Swartzendruber equation in Eq.(\ref{Eq.(11)}), with higher correlation coefficients ($\mathrm{R}^2$) and lower mean squared errors (MSE). In addition, \autoref{Table 1} shows that increasing the rock block percentage in the SRM specimens decreases the hydraulic conductivity $k$ to a minimum at a rock block percentage of 40\%, followed by an increase once the rock block percentage exceeds 40\%. Similar behavior was also reported by \cite{Wang2016}. \begin{figure}[!htbp] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM20-1} \subcaption*{\small(a)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM30-1} \subcaption*{\small(b)} \end{subfigure} \\ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM40-1} \subcaption*{\small(c)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM50-1} \subcaption*{\small(d)} \end{subfigure} \\ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM60-1} \subcaption*{\small(e)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM70-1} \subcaption*{\small(f)} \end{subfigure} \caption{\small Fitting curves theoretically given by fractional derivative flow model on the basis of experimental data for SRM specimens with different rock block percentages: 20\% (a), 30\% (b), 40\% (c), 50\% (d), 60\% (e) and 70\% (f) (see \citealt{Wang2016}).} \label{Fig1} \end{figure} Furthermore, using
an additional data set \citep{Deng2007}, we evaluated the validity of the proposed fractional derivative flow model by LSF analysis. \cite{Deng2007} presented a nonlinear model of flow in saturated clays. \begin{table}[!bht] \centering \caption{\small Determination of parameters for fractional derivative flow model based on saturated clays}\label{Table 2} \scalebox{0.85}{ \begin{tabular}[]{cccccccccc} \hline Saturated &\multicolumn{4}{c}{Swartzendruber equation} & \multicolumn{5}{c}{Fractional derivative flow model}\\ \cline{2-10} clays& $k\times10^{-9}(\mathrm{m}/\mathrm{s})$ & $I$ & $\mathrm{R}^2$ &MSE & $ k\times10^{-9}(\mathrm{m}/\mathrm{s})$ & $I $ &$\gamma$& $\mathrm{R}^2$ & MSE\\ \hline NO.64-3 & 7.973 & 0.7901 & 0.9973 &0.2050 & 11.52 & 1.565 & 0.8094 &0.9975 &0.1501\\ NO.64-4 & 4.856 & 2.754 & 0.9998 &0.0116 & 7.153 & 4.408 & 0.8814 &0.9998 &0.0100\\ \hline \end{tabular}} \end{table} Comparisons of the experimental data with the fitting curves given by both the Swartzendruber equation in Eq.(\ref{Eq.(11)}) and the fractional derivative flow model in Eq.(\ref{Eq.(16)}) are illustrated in \autoref{Fig2}. For comparison, the parameters in Eq.(\ref{Eq.(11)}) and Eq.(\ref{Eq.(16)}) are determined and listed in \autoref{Table 2}. The results demonstrate that the proposed fractional derivative flow model agrees with the experimental data better than the Swartzendruber equation in Eq.(\ref{Eq.(11)}).
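The Levenberg-Marquardt fitting step can be sketched with SciPy's \texttt{curve\_fit}, which wraps MINPACK's LM implementation. The data below are synthetic and the parameter values hypothetical; with measured $(i,q)$ pairs in place of \texttt{i\_data} and \texttt{q\_data}, and $I$ fixed at its measured value as for the SRM specimens, the same call determines $k$ and $\gamma$:

```python
import numpy as np
from math import gamma as Gamma
from scipy.optimize import curve_fit

def ml_1b(z, beta, terms=80):
    """Two-parameter Mittag-Leffler E_{1,beta}(z) by truncated series."""
    return sum(z**n / Gamma(n + beta) for n in range(terms))

def q_model(i, k, gam, I=80.0):
    """Eq.(16), with the threshold gradient I held fixed at its measured value."""
    i = np.atleast_1d(np.asarray(i, dtype=float))
    return np.array([(k / I) * x**(gam + 1) * ml_1b(-x / I, gam + 2) for x in i])

# Hypothetical synthetic data standing in for measured (i, q) pairs.
rng = np.random.default_rng(0)
i_data = np.linspace(20.0, 400.0, 25)
q_data = q_model(i_data, 1.3e-6, 0.85) * (1.0 + 0.01 * rng.standard_normal(25))

# Levenberg-Marquardt fit of k and gamma only (I is fixed inside q_model).
(k_fit, g_fit), _ = curve_fit(q_model, i_data, q_data, p0=[1e-6, 0.9], method='lm')
```

Goodness-of-fit statistics such as $\mathrm{R}^2$ and MSE can then be computed from the residuals of the fitted curve, as reported in the tables.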
\begin{figure}[!ht] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{NO64-3} \subcaption*{\small(NO.64-3)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{NO64-4} \subcaption*{\small(NO.64-4)} \end{subfigure} \caption{\small Fitting curves theoretically given by fractional derivative flow model on the basis of experimental data for saturated clays NO.64-3, NO.64-4 (see \citealt{Deng2007}).}\label{Fig2} \end{figure} Moreover, since the Swartzendruber equation in Eq.(\ref{Eq.(11)}) is a special case of the fractional derivative flow model with fractional derivative order $\gamma=1$, the fitting results confirm that the presented fractional derivative flow model is more flexible and accurate. \subsection{Sensitivity analysis} \noindent(1) Fractional derivative order Eq.(\ref{Eq.(16)}) shows that the relationship between water flux $q$ and hydraulic gradient $i$ depends on the parameters $k$, $I$, and $\gamma$. To better understand the effects of these parameters, sensitivity analyses have been carried out. The effect of the fractional order $\gamma$ on the variation of water flux with hydraulic gradient is shown in \autoref{Fig3}, in which $\gamma$ takes three different values while $k=2\times10^{-6}\,\mathrm{m}/\mathrm{s}$ and $I=80$ are held fixed. It is shown that, in general, the higher the fractional derivative order, the larger the fluid flux. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{Fig3} \caption{\small Sensitivity of the water flux to the fractional derivative order}\label{Fig3} \end{figure} \noindent(2) Threshold hydraulic gradient In Eq.(\ref{Eq.(16)}), let the threshold gradient $I$ vary while the other parameters are held constant at $k=1.5\times10^{-6}\,\mathrm{m}/\mathrm{s}$ and $\gamma=0.85$.
A series of curves is obtained, as shown in \autoref{Fig4}, indicating that the higher the threshold gradient, the smaller the water flux. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{Fig4} \caption{\small Sensitivity of the water flux to the threshold hydraulic gradient}\label{Fig4} \end{figure} \section{Conclusions} The objective of the present work is to develop fractional-order equations describing the non-Darcian relation between water flux and hydraulic gradient. Based upon the widely adopted theory of fractional calculus, we generalized existing relationships such as the Hansbo and Swartzendruber equations. The analytic solution of the fractional derivative flow model is obtained and the relevant parameters are determined. Sets of experimental data are used to verify the validity of the proposed fractional derivative flow model. The comparative analysis demonstrates that the fractional derivative flow model, which includes the Swartzendruber equation as the special case $\gamma=1$, characterizes non-Darcian flow more flexibly and accurately. Furthermore, a sensitivity study shows that the fractional derivative order is an essential parameter affecting the shape of the $q-i$ curve. However, the physical interpretation of the fractional derivative is not yet clear, and further research is required to determine the relationship between the fractional derivative order and other mechanical parameters. \section*{Acknowledgement} The present work is supported by the National Natural Science Foundation of China (51674266), the State Key Research Development Program of China (2016YFC0600704), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130023110017). The financial support is gratefully acknowledged. Special thanks are due to H.H. Liu, Aramco Services Company, for his valuable suggestions and help in improving the article.
\section*{References} \section{Introduction} \noindent Since Henry Darcy's remarkable modeling of linear relation between water flux and the hydraulic gradient, e.g., Darcian flow, in 1856, many researchers found the Darcy's law is not good enough for description of water and gas flow in low-permeability media like clay and shale. Consequently, in past decades, extensive efforts have been devoted to modeling approaches of nonlinear relation between water flux and hydraulic gradient, called non-Darcian flow \citep{Liu2014,Liu2016}. Aiming at low-permeability media like clay, \cite{Miller1963} supposed a threshold gradient for water flow in clays to distinguish linear and nonlinear flow. They found that water flow rate is linearly related to hydraulic gradient at gradients above the threshold gradient, but no flow occurs below threshold gradient. \cite{Deng2007} suggested a new equation of nonlinear flow in saturated clays that can describe characteristics of flow curve of the nonlinear flow from low to high hydraulic gradients. Generally speaking, non-Darcian flow can be described by nonlinear functions of water flux and hydraulic gradient such as exponential and power functions. \cite{Hansbo1960,Hansbo2001} proposed a power relationship between water flux and hydraulic gradient for non-Darcian flow in clay media. By analyzing data sets for water flow in clay soils, \cite{Swartzendruber1962} proposed an exponential function to validate Darcy's law, resulting in a nonlinear relation of water flux versus gradient. In order to capture the non-Darcian flow behavior, \cite{Liu2012} developed a new relationship between water flux and hydraulic gradient by generalizing the currently existing relationships. The new relationship is shown to be consistent with experimental observations for both saturated and unsaturated conditions. To validate Darcy's law and develop non-Darcian models seem to be an endless challenge. It therefore leads to a new channel. 
According to Darcy's law, water flux is directly proportional to the hydraulic gradient, i.e., the first order (as an integral number) derivative of water head with respect to the flow distance. In other words, Darcian flow can be described by an integer derivative of water head. Non-Darcian flow in porous media as a nonlinear phenomenon requires a new mathematical approach. In this case, non-Darcian flow could be characterized by a fractional derivative. The fractional calculus, referred to as calculus of integrals and derivatives of any arbitrary real or complex order, is a 300 years old mathematical discipline. Its original conception is believed to have stemmed from a question raised in the year of 1695 by Marquis de L'H\^{o}pital (1661-1704) to Gottfried Wilhelm Leibnitz (1646-1716), the founder of Calculus. In the past few decades, the fractional calculus has gained remarkable popularity and importance because of its demonstration applications in numerous seemingly diverse and widespread fields of science and engineering \citep{Herrmann2011, Ortigueira2011} such as applications of fractional calculus to time-dependent behavior of rocks \citep{Zhou2011,Zhou2013} and composites \citep{Zhou2017}, fluid mechanics \citep{Kulish2002}, and solid mechanics \citep{Carpinteri2002,Carpinteri2004,Rossikhin2010}. Moreover, some researchers devoted themselves to a nonlinear modeling approach of fractional derivative to non-Darcian flow. \cite{He1998} proposed a new model for seepage flow in porous media to modify the Darcy's law with fractional derivatives. \cite{Tian2006} researched the flow characteristics of fluids through a fractal reservoir with the fractional order derivative. By regarding the water flow as a function of a fractional derivative of the piezometric head, \cite{Cloot2006} generalized the classical Darcy's law to derive a new equation of groundwater flow. 
\cite{Chen2013} developed a new variable-order fractional diffusion equation to describe the diffusion process of chloride ions in the reinforced concrete structure. \cite{Babak2014} presented a unified fractional differential approach to modeling flows of slightly compressible fluids through naturally fractured media. Recently, \cite{Wang2015} applied Caputo fractional constitutive equation to describe the transient electro-osmotic flow of a generalized Maxwell fluid in a cylindrical capillary. As described by \cite{Cloot2006}, the underlying basic assumption of the fractional derivative modeling approach to transport in porous media is that the fluid flow at a given point of the porous media is governed not only by the properties of the piezometric field at the specific position but also depends on the global spatial distribution of that field in soil matrix. As a consequence, time or space fractional derivatives are extensively used in models of solute transport in porous media in order to take into account the memory effect or nonlocal properties induced by the interactions of fluid particles with pores of the porous media. Nevertheless, time or space fractional derivative models usually need to make dimensionless for convenience. Therefore, a different perspective to address this problem will be shown herein to interleave with the fractional calculus. This paper represents an attempt to describe non-Darcian flow mathematically. The Swartzendruber equation as a non-Darcian flow model is generalized to describe the relation between water flux and hydraulic gradient using fractional derivative, resulting in a new model called the fractional derivative flow model. The analytic solution of fractional derivative flow model is presented and all parameters of the fractional derivative flow model are determined on the basis of the experimental data of water flow in low-permeability media. 
The results estimated by the fractional derivative flow model proposed in the paper are in better agreement with the experimental data than the results estimated by the Swartzendruber model. It indicates that our perspective of fractional derivative modeling approach is acceptable for non-Darcian flow in porous media. \section{Fractional derivative approach to non-Darcian flow} \subsection{Definition of the Caputo derivative} The some definitions of fractional derivatives are popular in mathematics like Grunwald-Letnikov, Riemann-Liouville, and Caputo derivative \citep{Podlubny1999}. Among them, Caputo derivative is widely used in physics and mechanics because of its advantages in solving fractional differential equations with initial conditions. For a given function $f(x)$ Caputo derivative is defined by \begin{equation}\label{Eq.(1)} \frac{{{d^\gamma }f(x)}}{{d{x^\gamma }}} = \frac{1}{{\Gamma (n - \gamma )}}\int_0^x {\frac{{{f^{(n)}}(t)}}{{{{(x - t)}^{\gamma - n + 1}}}}dt}, \end{equation} where $\gamma>0$, $n$ is the least integer greater than $\gamma$, and $\Gamma(\cdot)$ is the Gamma function, i.e., $\Gamma (\gamma ) = \int_0^\infty {{t^{\gamma - 1}}{e^{ - t}}dt}$. In particular, for $\gamma=0$, $\frac{{{d^\gamma }}}{{d{x^\gamma }}}$ denotes the identity operator. \subsection{ Darcian flow} Considering one-dimensional steady-state flow, suppose a fluid flows along a straight line, say, $x$-direction, the flux is related to hydraulic gradient by a well-known Darcy's law given by \begin{equation}\label{Eq.(2)} q = \frac{K}{\mu }\frac{{dp}}{{dx}}, \end{equation} where $q$ is the bulk velocity of fluid $(\mathrm{m}/\mathrm{s})$, or fluid flux, $K$ is permeability $(\mathrm{m}^2)$, $\mu$ is dynamic viscosity $(\mathrm{N}\cdot\mathrm{s}/\mathrm{m}^2)$, $p$ is fluid pressure $(\mathrm{N}/\mathrm{m}^2)$. 
Eq.(\ref{Eq.(2)}) can be usually described by \begin{equation}\label{Eq.(3)} q = ki, \end{equation} where $k = \frac{K}{\mu }\rho$ is hydraulic conductivity $(\mathrm{m}/\mathrm{s})$, $\rho$ is density of fluid $(\mathrm{N}/\mathrm{m}^3)$, and $i$ is hydraulic gradient. \subsection{Non-Darcian flow} Non-Darcian flow, generally, can be described by nonlinear functions of water flux and hydraulic gradient such as power and exponential functions. (1) Power function Darcy's law leads to a linear relation between $q $ and $i$ as shown in Eq.(\ref{Eq.(3)}). By producing first order derivative to both sides of Eq.(\ref{Eq.(3)}), we then have a differential equation like \begin{equation}\label{Eq.(4)} dq = kdi \quad or \quad \frac{{dq}}{{di}} = k. \end{equation} It is shown that Darcian flow can be described by an integer derivative of flux with respect to the hydraulic gradient $i$, as a dimensionless variable. A similar model is the Newtonian dashpot for description of a linear relationship between the viscous stress and the rate of strain. The Newtonian dashpot was developed to the Abel dashpot by invoking the fractional derivative \citep{Scott-Blair1944,Kiryakova1999,Zhou2011,Zhou2013}. In an analogous way, we suppose non-Darcian flow can be described by a fractional derivative of flux, which leads to a dimensionless form, i.e., \begin{equation}\label{Eq.(5)} \frac{{{d^\gamma }q}}{{d{i^\gamma }}} = k,\quad \gamma>0, \end{equation} where $\frac{{{d^\gamma }}}{{d{i^\gamma }}}$ is the Caputo fractional derivative operator. Applying the bilateral Laplace transform $(LT)$ to Eq.(\ref{Eq.(5)}) gives \begin{equation}\label{Eq.(6)} LT\left[\frac{{d^\gamma }q}{d{i^\gamma }}\right] = {s^\gamma } \tilde q(s) - \sum\limits_{j = 0}^{n - 1} {{s^{\gamma - j - 1}}\frac{{{d^j}q(0)}}{{d{i^j}}}} = \frac{k}{s}. \end{equation} Let $q(0) = 0$, we have: \begin{equation}\label{Eq.(7)} \tilde q(s){\rm{ = }}\frac{k}{{{s^{\gamma + 1}}}}. 
\end{equation} Applying the inverse Laplace transform to Eq.(\ref{Eq.(7)}), i.e., \begin{equation}\label{Eq.(8)} L{T^{ - 1}}[\tilde q(s)]{\rm{ = }}L{T^{ - 1}}\left[\frac{k}{{{s^{\gamma + 1}}}}\right] = \frac{k}{{\Gamma (1 + \gamma )}}{i^\gamma }, \end{equation}we have: \begin{equation}\label{Eq.(9)} q = k\frac{{{i^\gamma }}}{{\Gamma (1 + \gamma )}}. \end{equation} In this case, we get a power function of water flux $q$ and hydraulic gradient $i$ in Eq.(\ref{Eq.(9)}), showing a similar form of nonlinear equation supposed by \cite{Hansbo1960, Hansbo2001}. (2) Exponential function: Fractional Swartzendruber equation \cite{Swartzendruber1962} proposed an exponential relation between water flux and hydraulic gradient to modify Darcy's law, i.e., \begin{equation}\label{Eq.(10)} \frac{{dq}}{{di}} = k(1 - {e^{ - \frac{i}{I}}}). \end{equation} Integrating both sides of Eq.(\ref{Eq.(10)}) and considering $q(0)=0$, we have: \begin{equation}\label{Eq.(11)} q = k[i - I(1 - {e^{ - \frac{i}{I}}})] \end{equation} where $I$ is the threshold gradient and actually refers to the intersection of the linear part in plot of the hydraulic gradient and the water flux. Replacing integer derivative with fractional derivative in Eq.(\ref{Eq.(10)}), we have the fractional derivative Swartzendruber equation, i.e., \begin{equation}\label{Eq.(12)} \frac{{{d^\gamma }q}}{{d{i^\gamma }}} = k(1 - {e^{ - \frac{i}{I}}}),\quad 0\leq\gamma\leq1. \end{equation} Application of the Laplace transform $(LT)$ to Eq.(\ref{Eq.(12)}) leads to: \begin{equation}\label{Eq.(13)} {s^\gamma } \tilde q(s) = k\left( {\frac{1}{s} - \frac{1}{{s + {1 \mathord{\left/ {\vphantom {1 I}} \right. \kern-\nulldelimiterspace} I}}}} \right), \end{equation}then we have: \begin{equation}\label{Eq.(14)} \tilde q(s) = \frac{{k{s^{ - \gamma - 1}}}}{{1 + Is}}. 
\end{equation} Applying the inverse Laplace transform to Eq.(\ref{Eq.(14)}), i.e., \begin{equation}\label{Eq.(15)} L{T^{ - 1}}[\tilde q(s)] = L{T^{ - 1}}\left[\frac{{k{s^{ - \gamma - 1}}}}{{1 + Is}}\right] = \frac{k}{I}{i^{\gamma + 1}}{E_{1,\gamma + 2}}\left( - \frac{i}{I}\right), \end{equation} we have: \begin{equation}\label{Eq.(16)} q = \frac{k}{I}{i^{\gamma + 1}}{E_{1,\gamma + 2}}\left( - \frac{i}{I}\right), \end{equation} where ${E_{1,\gamma + 2}}(\cdot)$ denotes the two-parameter Mittag-Leffler function, ${E_{1,\gamma + 2}}( - \frac{i}{I}) = \sum\limits_{k = 0}^\infty {\frac{{{{( - \frac{i}{I})}^k}}}{{\Gamma (k + \gamma + 2)}}}$ \citep{Mainardi2010}. In the case of $\gamma=0$, using ${E_{1,2}}( - \frac{i}{I}) = \frac{{{e^{ - i/I}} - 1}}{{ - i/I}}$, Eq.(\ref{Eq.(16)}) can be rewritten as \begin{equation}\label{Eq.(17)} q = \frac{k}{I}i{E_{1,2}}\left( - \frac{i}{I}\right) = k(1 - {e^{ - \frac{i}{I}}}), \end{equation} whose right-hand side coincides with that of Eq.(\ref{Eq.(10)}); this is expected, since the fractional derivative of order $\gamma=0$ reduces to the identity. In addition, the case of $\gamma=1$ gives \begin{equation}\label{Eq.(18)} {E_{1,3}}\left( - \frac{i}{I}\right) = \frac{{{e^{ - i/I}} - 1 + i/I}}{{{{(i/I)}^2}}}. \end{equation} Substituting Eq.(\ref{Eq.(18)}) into Eq.(\ref{Eq.(16)}) yields $q = k(i - I(1 - {e^{ - \frac{i}{I}}}))$, indicating that the Swartzendruber equation in Eq.(\ref{Eq.(11)}) is a special case of the fractional derivative flow model with fractional derivative order $\gamma=1$. \section{Parameter determination for fractional derivative non-Darcian model} \subsection{Parameter determination} The efficacy of the fractional derivative model depends on its ability to adequately fit experimental data. Given measurements of water flux versus hydraulic gradient, the parameters $k, I, \gamma$ in Eq.(\ref{Eq.(16)}) can be determined by the Levenberg-Marquardt method, a nonlinear least-squares fitting (LSF) method (see \citealt{Zhou2011} for details).
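The special cases above can be checked numerically. The following Python sketch (ours, not part of the original analysis) implements Eq.(\ref{Eq.(16)}) through a truncated Mittag-Leffler series, together with the Swartzendruber form of Eq.(\ref{Eq.(11)}) and the right-hand side of Eq.(\ref{Eq.(10)}):

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=100):
    """Truncated series E_{alpha,beta}(z) = sum_j z^j / Gamma(alpha*j + beta).

    The truncation is adequate for moderate |z|; math.gamma overflows for
    very large arguments, so n_terms is kept below ~170 for alpha = 1.
    """
    return sum(z**j / math.gamma(alpha * j + beta) for j in range(n_terms))

def q_fractional(i, k, I, gamma):
    """Eq.(16): q = (k/I) * i^(gamma+1) * E_{1,gamma+2}(-i/I)."""
    return (k / I) * i ** (gamma + 1) * mittag_leffler(1.0, gamma + 2.0, -i / I)

def q_swartzendruber(i, k, I):
    """Eq.(11): q = k * [i - I * (1 - exp(-i/I))]."""
    return k * (i - I * (1.0 - math.exp(-i / I)))

def q_exponential_rhs(i, k, I):
    """Right-hand side of Eq.(10): k * (1 - exp(-i/I))."""
    return k * (1.0 - math.exp(-i / I))
```

For moderate $i/I$ the series converges quickly; the sketch reproduces the Swartzendruber flux at $\gamma=1$ and the exponential right-hand side at $\gamma=0$, in line with the derivation above.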
We now use the fractional derivative flow model to fit the experimental data of \cite{Wang2016} by LSF analysis. \cite{Wang2016} conducted an experimental study of the non-Darcian behavior of water flow in soil-rock mixtures (SRM) with various rock block percentages, reporting water flux as a power function of hydraulic gradient; the relationship between threshold hydraulic gradient and rock block percentage was also considered. The exact values of the threshold hydraulic gradient $I$ for the SRM specimens are listed in \autoref{Table 1}, so only the two parameters $k$ and $\gamma$ remain to be determined. The results of the least-squares fit of the parameters in Eq.(\ref{Eq.(16)}) to the experimental data \citep{Wang2016} are listed in \autoref{Table 1}. \begin{table}[!tb] \centering \caption{\small Determination of parameters for fractional derivative flow model based on SRM specimens}\label{Table 1} \scalebox{0.85}{ \begin{tabular}[]{cccccccccc} \hline SRM &\multicolumn{4}{c}{Swartzendruber equation} & \multicolumn{5}{c}{Fractional derivative flow model}\\ \cline{2-10} specimens & $k\times10^{-5}(\mathrm{m}/\mathrm{s})$ & $I$ &$\mathrm{R}^2$ &MSE & $ k\times10^{-5}(\mathrm{m}/\mathrm{s})$ & $I $ &$\gamma$& $\mathrm{R}^2$ & MSE\\ \hline SRM20-1 & 0.2039 & 141.00 & 0.9817 &0.2242 & 0.2039 & 141.00 & 1 &0.9817 &0.2242\\ SRM30-1 & 0.1378 & 130.20 & 0.9898 &0.0531 & 0.1378 & 130.20 & 1 &0.9898 &0.0531\\ SRM40-1 & 0.07383& 123.60 & 0.9869 &0.1358 & 0.1311 & 123.60 &0.8501 &0.9908 &0.0095\\ SRM50-1 & 0.1297 & 102.50 & 0.9326 &0.2322 & 0.1297 & 102.50 & 1 &0.9326 &0.2322\\ SRM60-1 & 0.1175 & 85.33 & 0.9786 &0.0363 & 0.1954 & 85.33 &0.8567 &0.9819 &0.0307\\ SRM70-1 & 0.1402 & 73.59 & 0.9710 &0.0291 & 0.3223 & 73.59 &0.7454 &0.9807 &0.0193\\ \hline \end{tabular}} \end{table} The data as well as the fitting curves given by the fractional derivative flow model in Eq.(\ref{Eq.(16)}) are shown in
\autoref{Fig1}. Fitting the same experimental data with the Swartzendruber equation in Eq.(\ref{Eq.(11)}) yields another set of parameters, also listed in \autoref{Table 1}. The least-squares analysis results in \autoref{Table 1} indicate that the fractional derivative flow model in Eq.(\ref{Eq.(16)}) is in better agreement with the experimental data than the Swartzendruber equation in Eq.(\ref{Eq.(11)}), with higher correlation coefficients ($\mathrm{R}^2$) and lower mean squared errors (MSE). In addition, \autoref{Table 1} shows that increasing the rock block percentage in the SRM specimens first decreases the hydraulic conductivity $k$, which reaches a minimum at a rock block percentage of 40\%, and then increases it once the rock block percentage exceeds 40\%. Similar behavior is reported in \cite{Wang2016}. \begin{figure}[!htbp] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM20-1} \subcaption*{\small(a)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM30-1} \subcaption*{\small(b)} \end{subfigure} \\ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM40-1} \subcaption*{\small(c)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM50-1} \subcaption*{\small(d)} \end{subfigure} \\ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM60-1} \subcaption*{\small(e)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{SRM70-1} \subcaption*{\small(f)} \end{subfigure} \caption{\small Fitting curves theoretically given by fractional derivative flow model on the basis of experimental data for SRM specimens with different rock block percentages: 20\% (a), 30\% (b), 40\% (c), 50\% (d), 60\% (e) and 70\% (f) (see \citealt{Wang2016}).} \label{Fig1} \end{figure} Furthermore, using
an additional data set \citep{Deng2007}, we further evaluated the validity of the fractional derivative flow model by LSF analysis. \cite{Deng2007} presented a nonlinear model of flow in saturated clays. \begin{table}[!bht] \centering \caption{\small Determination of parameters for fractional derivative flow model based on saturated clays}\label{Table 2} \scalebox{0.85}{ \begin{tabular}[]{cccccccccc} \hline Saturated &\multicolumn{4}{c}{Swartzendruber equation} & \multicolumn{5}{c}{Fractional derivative flow model}\\ \cline{2-10} clays& $k\times10^{-9}(\mathrm{m}/\mathrm{s})$ & $I$ & $\mathrm{R}^2$ &MSE & $ k\times10^{-9}(\mathrm{m}/\mathrm{s})$ & $I $ &$\gamma$& $\mathrm{R}^2$ & MSE\\ \hline NO.64-3 & 7.973 & 0.7901 & 0.9973 &0.2050 & 11.52 & 1.565 & 0.8094 &0.9975 &0.1501\\ NO.64-4 & 4.856 & 2.754 & 0.9998 &0.0116 & 7.153 & 4.408 & 0.8814 &0.9998 &0.0100\\ \hline \end{tabular}} \end{table} Comparisons of the experimental data with the fitting curves given by both the Swartzendruber equation in Eq.(\ref{Eq.(11)}) and the fractional derivative flow model in Eq.(\ref{Eq.(16)}) are illustrated in \autoref{Fig2}; the corresponding parameters of Eq.(\ref{Eq.(11)}) and Eq.(\ref{Eq.(16)}) are listed in \autoref{Table 2}. The results demonstrate that the proposed fractional derivative flow model is in better agreement with the experimental data than the Swartzendruber equation in Eq.(\ref{Eq.(11)}).
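To illustrate how the model parameters can be recovered, the following sketch fits $k$ and $\gamma$ with $I$ fixed (as for the SRM specimens above) by a simple grid search on noiseless synthetic data. The paper itself uses the Levenberg-Marquardt method; the numerical values here are illustrative only, not the SRM or clay measurements.

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=100):
    # Truncated series E_{alpha,beta}(z); adequate for moderate |z|
    return sum(z**j / math.gamma(alpha * j + beta) for j in range(n_terms))

def basis(i, I, gamma):
    # f(i) such that q = k * f(i):  f = (1/I) i^(gamma+1) E_{1,gamma+2}(-i/I)
    return (1.0 / I) * i ** (gamma + 1) * mittag_leffler(1.0, gamma + 2.0, -i / I)

def fit_k_gamma(i_data, q_data, I, gammas):
    # For each candidate gamma the model is linear in k, so the
    # least-squares k has the closed form sum(q*f) / sum(f^2).
    best = None
    for g in gammas:
        f = [basis(i, I, g) for i in i_data]
        k = sum(qj * fj for qj, fj in zip(q_data, f)) / sum(fj * fj for fj in f)
        sse = sum((qj - k * fj) ** 2 for qj, fj in zip(q_data, f))
        if best is None or sse < best[0]:
            best = (sse, k, g)
    return best[1], best[2]

# Synthetic data generated from the model itself (hypothetical values)
I_true, k_true, g_true = 123.6, 1.3e-6, 0.85
i_data = [20, 40, 60, 80, 100, 120, 140, 160]
q_data = [k_true * basis(i, I_true, g_true) for i in i_data]
k_hat, g_hat = fit_k_gamma(i_data, q_data, I_true,
                           [0.5 + 0.01 * m for m in range(51)])
```

On noiseless data the grid search recovers the generating $(k,\gamma)$; with real measurements a gradient-based least-squares routine such as Levenberg-Marquardt is preferable.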
\begin{figure}[!ht] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{NO64-3} \subcaption*{\small(NO.64-3)} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{NO64-4} \subcaption*{\small(NO.64-4)} \end{subfigure} \caption{\small Fitting curves theoretically given by fractional derivative flow model on the basis of experimental data for saturated clays NO.64-3, NO.64-4 (see \citealt{Deng2007}).}\label{Fig2} \end{figure} Moreover, since the Swartzendruber equation in Eq.(\ref{Eq.(11)}) is a special case of the fractional derivative flow model with fractional derivative order $\gamma=1$, our fitting results verify that the presented fractional derivative flow model is more flexible and accurate. \subsection{Sensitivity analysis} \noindent(1) Fractional derivative order

Eq.(\ref{Eq.(16)}) shows that the relationship between water flux $q$ and hydraulic gradient $i$ depends on the parameters $k, I, \gamma$. To better understand the effects of these parameters, sensitivity analyses have been carried out. The effect of the fractional order $\gamma$ on the variation of water flux with hydraulic gradient is shown in \autoref{Fig3}, where $\gamma$ takes three different values while $k=2\times10^{-6}\,\mathrm{m}/\mathrm{s}$ and $I=80$ are held fixed. In general, the higher the fractional derivative order, the larger the water flux. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{Fig3} \caption{\small Sensitivity of the water flux to the fractional derivative order}\label{Fig3} \end{figure} \noindent(2) Threshold hydraulic gradient

In Eq.(\ref{Eq.(16)}), we vary the threshold gradient $I$ while keeping the other parameters constant at $k=1.5\times10^{-6}\,\mathrm{m}/\mathrm{s}$ and $\gamma=0.85$.
A series of curves is obtained, as shown in \autoref{Fig4}, indicating that the higher the threshold gradient, the smaller the water flux. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{Fig4} \caption{\small Sensitivity of the water flux to threshold hydraulic gradient}\label{Fig4} \end{figure} \section{Conclusions} The objective of the present work is to develop fractional-order equations describing the non-Darcian relation between water flux and hydraulic gradient. Based on fractional calculus theory, we generalized existing relationships such as the Hansbo and Swartzendruber equations. The analytic solution of the fractional derivative flow model is derived and the relevant parameters are determined. Sets of experimental data are utilized to verify the validity of the proposed fractional derivative flow model. The comparative analysis demonstrates that the fractional derivative flow model, which includes the Swartzendruber equation as the special case $\gamma=1$, is a more flexible and accurate means of characterizing non-Darcian flow. Furthermore, a sensitivity study shows that the fractional derivative order is an essential parameter affecting the shape of the $q$--$i$ curve. However, the physical interpretation of the fractional derivative order is not yet clear, and further research is required to determine its relationship with other mechanical parameters. \section*{Acknowledgement} The present work is supported by the National Natural Science Foundation of China (51674266), the State Key Research Development Program of China (2016YFC0600704), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130023110017). The financial supports are gratefully acknowledged. Special thanks are due to H.H. Liu, Aramco Services Company, for his valuable suggestions and help in improving the article.
\section*{References}
\section{Background}\label{sec:background} For the sake of readability, we briefly state the definitions and theorems used to obtain an interval estimate, denoted {\em confidence intervals} in the following. Table \ref{tab:notion} provides a summary of the notation used throughout the paper. \begin{definition}[Confidence interval] Let $X$ be a random sample from a probability distribution with statistical parameter~$\theta$, which is a quantity to be estimated. The confidence interval $[\theta_0,\theta_1)$ is obtained by \begin{equation} P(\theta_0\leqslant\theta<\theta_1)=1-\alpha,\ 0<\alpha<1 \label{eq:ci} \end{equation} where $(1-\alpha)$ is the confidence coefficient (or \emph{degree of confidence}). The confidence interval contains the statistical parameter~$\theta$ with probability $1-\alpha$. \end{definition} \begin{definition}[Central Limit Theorem (CLT)] Let $X$ be a random sample of size $n$ ($X=\{X_1,X_2,\ldots,X_n\}$) taken from a population with expected value $\mathrm{E}(X_i)=\mu$ and variance $\mathrm{Var}(X_i)=\sigma^2<\infty$, $\ i=1,2,\ldots,n$. Then the sample mean $\hat{X}$ asymptotically follows a normal distribution with expected value $\mu$ and variance $\sigma^2/n$ as $n\to\infty$: \begin{equation} \hat{X}\mathop{\sim}_{n\to\infty}\mathcal{N}\left(\mu,\frac{\sigma^2}{n}\right) \label{eq:clt} \end{equation} \end{definition} \begin{definition}[Confidence interval for the sample mean] The confidence interval for the sample mean $\hat{X}$, with $E[X] = \theta$ and standard error of the sample mean $S/\sqrt{n}$ according to the CLT, can be obtained by \begin{equation} \hat{X} \pm z_{\alpha/2} \cdot \frac{S}{\sqrt{n}} \label{eq:ciMean} \end{equation} where $z_{\alpha/2}$ is the upper $\alpha/2$ critical value (i.e., the $(1-\alpha/2)$-quantile) of the standard normal distribution $N(0,1)$, and $S^2$ estimates the unknown variance $\sigma^2$. \end{definition} This assumes that the sample size $n$ is large and that the sampling distribution is symmetric, which is not always the case.
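Eq.~\eqref{eq:ciMean} amounts to a few lines of code. The following Python sketch (illustrative only, not the implementation used in the paper) computes the interval for a given normal quantile:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """CI for the mean via Xbar +/- z * S / sqrt(n); z = 1.96 for 95%."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)        # sample standard deviation S
    half = z * s / math.sqrt(n)
    return xbar - half, xbar + half
```

Note that nothing in this construction respects the bounds of a rating scale, which is exactly the problem discussed below.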
In the following, we detail how to establish a confidence interval in the case of a sampling distribution whose density function is symmetric or non-symmetric around the mean. Note that the variance of a sample mean, $\Var[\hat{X}] = \Var[\frac{1}{n} \sum_{i=1}^n X_i] = \Var[X]/n$, where $\Var[X]$ is the variance of the sample $X$. This implies that when $n\to \infty$ then $\Var[\hat{X}]\to 0$ while $\Var[X] \to \sigma^2$. \input{notion} \section{Recommendations and Conclusions}\label{sec:conclusions} Subjective QoE studies often involve a relatively small number of test participants. Moreover, the rating scales used are commonly discrete and bounded at both ends, with study results reported in the form of MOS values and CIs derived for various test conditions to quantify the significance of MOS values. Given the importance of using efficient CI estimators in the context of deriving QoE models, we evaluate several MOS CI estimators, and develop our own estimator based on binomial proportions. The numerical results indicate that the proposed idea based on binomial estimators is robust and conservative in practice. Wilson, Clopper-Pearson, and Jeffreys lead to comparable results, with excellent coverage and outlier properties. However, very good coverage comes at the cost of larger CI widths. The Wald interval performs poorly, unless $n$ is quite large, which is not commonly the case in QoE studies. Standard confidence intervals based on the normal and Student's t-distribution, as well as simultaneous CIs for multinomial distributions, suffer from the CIs exceeding the bounds of the rating scale. Bootstrapping has similar issues, i.e., some test conditions are not captured properly, but the outlier ratio is always zero due to sampling. In summary, for QoE tests characterized by a small sample size and the use of discrete bounded rating scales, the proposed binomial estimators (Clopper-Pearson, Wilson, Jeffreys) are conservative, but exact and recommended. For decreasing the CI widths, bootstrapping or standard CIs may be used in case of low variance (when the SOS parameter $a<0.1$) at the cost of decreased coverage -- but the most effective way is to increase the number of subjects.
If the SOS parameter is larger than for a binomial distribution ($a>\frac{1}{k-1}$), the results and test design should be checked, as there may be hidden influence factors in the study. An implementation of the CI estimators and of the recommended estimators based on the SOS parameter is available on GitHub, \url{https://github.com/hossfeld}. \section{Confidence Interval Estimators for MOS}\label{sec:estimators} \subsection{Problem Formulation} We assume a discrete rating scale with $k$ rating items, leading to a multinomial distribution, which is a generalization of the binomial distribution. For a certain test condition, $n$ users rate the quality on a discrete $k$-point rating scale, e.g., $k=5$ for the commonly used 5-point ACR scale. Each scale item is selected with probability $p_i$ for $i=1,\dots,k$; $\sum_{i=1}^k p_i = 1$. The $n$ users rate quality as one of the $k$ categories. Samples $(n_1,\dots,n_k)$ indicate the number of ratings obtained per category, with $\sum_{i=1}^k n_i = n$ (i.e., each user has provided one rating). With each category having a fixed probability $p_i$, the multinomial distribution gives the probability of any particular combination of numbers $n_i$ of ratings for the various categories (under the condition $n_k = n - \sum_{i=1}^{k-1} n_i$) \begin{equation} P(N_1=n_1,\dots,N_k=n_k) = \frac{n!}{n_1! \cdots n_k!} p_1^{n_1}\cdots p_k^{n_k} \label{eq:multinominal} \end{equation} In QoE tests, we are interested in the rating of an arbitrary user. The marginal distribution (when $n=1$) is a categorical distribution with probabilities $p_i$. Interpreting the rating scale as linear, the true but unknown expected rating, i.e., the MOS, is \begin{equation} \E[Y] = \sum_{i=1}^k i\, p_i \label{eq:unknownMOS} \end{equation} and is estimated by the sample mean $\hat{Y}$. We denote by $Y$ the random variable describing the rating of a user. We observe a sample $Y_1, \dots, Y_n$ with $Y_i \in \{1,\dots,k\}$. As previously stated, in subjective QoE tests, the number $n$ of users is typically not very high.
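The sampling model above can be sketched as follows (illustrative Python with hypothetical category probabilities; not part of the study):

```python
import random

def simulate_ratings(p, n, seed=0):
    """Draw n ratings on a k-point scale with category probabilities p,
    returning the per-category counts (n_1, ..., n_k)."""
    rng = random.Random(seed)
    ratings = rng.choices(range(1, len(p) + 1), weights=p, k=n)
    return [ratings.count(r) for r in range(1, len(p) + 1)]

def mos(counts):
    """Sample-mean MOS: sum_i i * n_i / n on the k-point scale."""
    n = sum(counts)
    return sum(r * c for r, c in enumerate(counts, start=1)) / n
```

For example, counts of $(0,0,10,10,0)$ on a 5-point scale give a MOS of $3.5$.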
From the samples $(n_1,\dots,n_k)$, the MOS and its CI can be estimated. However, given the use of a bounded rating scale and the small sample size, the assumptions behind the CLT need not hold: the sampling distribution might be asymmetric around the sample mean, and the resulting CIs can violate the bounds of the rating scale, i.e., $\theta_0 < 1$ and/or $\theta_1 > k$. \subsection{Regular Normal and Student's t-Distribution} The most common way of constructing a CI from a set of samples, $X=\{ X_1, \cdots ,X_n\}$, is to apply the CLT. When the variance of $X$ is not known, the quantile $t_{\alpha/2, n-1}$ must be taken from a Student's t-distribution with confidence level $1-\alpha$ and $n-1$ degrees of freedom, unless the number of samples is sufficiently large ($n>30$ according to ITU-T Recommendation P.1401), in which case the quantiles of the Student's t-distribution and of the standard normal distribution are approximately the same. The CI for both the Student's t-distribution and the normal distribution is estimated by means of~\eqref{eq:ciMean}; the only difference lies in the quantiles. Note that simply truncating the interval at the scale bounds, i.e., $\theta^*_0 = \max(1, \theta_0)$ and/or $\theta^*_1 = \min(k, \theta_1)$, is not correct. \subsection{Simultaneous CIs for Multinomial Distribution} A complementary approach is to consider the multinomial proportions $p_i$ of user ratings for scale item $i$ and to derive exact confidence coefficients of simultaneous CIs for those multinomial proportions. A method for computing the CIs for functions of the multinomial proportions is proposed in \cite{jin2013computing}, which can be directly applied to the computation of the MOS, see Eq.(\ref{eq:unknownMOS}). There are $n_i$ user ratings for category $i$, and $\chi_{1-\alpha/k}$ is the quantile of the $\chi^2$-distribution with one degree of freedom, accounting for the $k$ simultaneous CIs. The MOS is $\hat{Y}=\sum_{i=1}^k i \frac{n_i}{n}$.
\begin{equation} \sum_{i=1}^k i \frac{n_i}{n} \pm \sqrt{\frac{\chi_{1-\alpha/k}}{n} \left(\sum_{i=1}^k i^2 \frac{n_i}{n}\right)- \left( \sum_{i=1}^k i \frac{n_i}{n} \right)^2 } \label{eq:simCI} \end{equation} \subsection{Using Binomial Proportions for Discrete Rating Scales} The shifted binomial distribution can be used as an upper-bound distribution for user rating distributions when users rate on a $k$-point rating scale ($1,\dots,k$). The binomial distribution leads to high standard deviations in QoE tests \cite{hossfeld2011sos} and follows exactly the SOS hypothesis with parameter $a=1/{k_0}$, where ${k_0}=k-1$. Let us consider $n$ users and assume the user ratings follow a shifted binomial distribution, $Y_i \sim \bino{{k_0},p}+1$. Then the sum of the user ratings also follows a shifted binomial distribution, \begin{equation} Y=\sum_{i=1}^n Y_i \sim \bino{n \cdot {k_0},p}+n, \end{equation} and hence $\hat{Y} = \frac{1}{n} \sum_{i=1}^n Y_i \sim \frac{1}{n}\bino{n \cdot {k_0},p}+1$. Due to differences among users, it may be that $p_i \neq p_j$ for users $i$ and $j$, in which case $Y=\sum_{i=1}^n Y_i$ does not follow a binomial distribution. The binomial sum variance inequality can be used to derive an upper bound. We define $Z\sim \bino{n\cdot {k_0},\bar{p}}+n$ with $\bar{p}=\frac{1}{n} \sum_{i=1}^n p_i$. As a result of the binomial sum variance inequality, \begin{equation} \Var[Y] < \Var[Z], \end{equation} i.e., the variance of $Z$ is an upper bound for QoE tests. Hence, we may use $\hat{Z}$ instead of $\hat{Y}$ to derive conservative CIs for the MOS based on the CI $[\hat{p}_0,\hat{p}_1]$ for the unknown $p$: \begin{equation} [\hat{Z}_0,\hat{Z}_1] = [\hat{p}_0,\hat{p}_1] \cdot (k-1) + 1 \label{eq:mos:bino} \end{equation} CI estimation for binomial distributions has drawn attention in the literature and several suggestions have been provided.
A few works compare CI estimators for binomial proportions \cite{pires2008interval,vollset1993confidence,brown2001interval,newcombe2012confidence}. For example, \cite{brown2001interval} suggests using the Wilson interval and the Jeffreys prior interval for small $n$. The normal-theory approximation of a confidence interval for a proportion is known as the Wald interval, which, however, is not recommended \cite{agresti1998approximate}. For readability, we write $z=z_{\alpha/2}$ for the $\alpha/2$-quantile of the standard normal distribution. \subsubsection{Wald interval employing normal approximation} From the MOS $\hat{Y}$ we obtain $\hat{p}=\frac{\hat{Y}-1}{k-1}$. The standard deviation is $S=\sqrt{\hat{p}(1-\hat{p})}$. The CI for the MOS is \begin{equation} (\hat{p} \pm z \frac{S}{\sqrt{n}})\cdot (k-1) + 1 \quad \Leftrightarrow \quad \hat{Y}\pm z \frac{S}{\sqrt{n}} (k-1) \label{eq:wald} \end{equation} \subsubsection{Wilson score interval with continuity correction} For the Wilson interval, a continuity correction is used which aligns the minimum coverage probability, rather than the average probability, with the nominal value. \begin{align} d &= 1+z \sqrt{z^2-\frac{1}{n{k_0}} + 4n{k_0}\hat{p}(1-\hat{p})+(4\hat{p}-2)} \\ \hat{Y}_0 &= \max\left(1,\; {k_0} \frac{2n{k_0}\hat{p} + z^2 - d }{2(n{k_0}+z^2)} +1 \right)\\ \hat{Y}_1 &= \min\left(k,\; {k_0} \frac{2n{k_0}\hat{p} + z^2 + d }{2(n{k_0}+z^2)} +1 \right) \end{align} \subsubsection{Clopper-Pearson} The Clopper-Pearson interval is the central exact interval \cite{clopper1934use}; we use the implementation based on the beta distribution with parameters $c$ and $d$ \cite{agresti1998approximate}. The parameter $c$ quantifies the number of `successes' of the corresponding binomial proportion, i.e., $c=\sum_{i=1}^n{(y_i-1)}$ for user ratings $y_i$, and $d=n(k-1)-c+1$. The $q$-quantile of the beta distribution is denoted by $\beta_q(c,d)$.
\begin{align} \hat{Y}_0 &= \max\left(1, \beta_{\alpha/2}(c,d) \cdot (k-1)+1 \right)\\ \hat{Y}_1 &= \min\left(k, \beta_{1-\alpha/2}(c,d) \cdot (k-1)+1 \right) \end{align} \subsubsection{Jeffreys Interval} A Bayesian approach for binomial proportions is the Jeffreys interval, an exact Bayesian credibility interval which guarantees a mean coverage probability of $\gamma$ under the specified prior distribution; \cite{brown2001interval} chose the Jeffreys prior \cite{jeffreys1998theory}. Although it follows a different paradigm, it also has good frequentist properties and is similar to Clopper-Pearson. The calculation uses the number of successes $c$ as defined above and the quantiles of the beta distribution. \begin{align} \hat{Y}_0 &= \begin{cases} \beta_{\alpha/2}(c+\frac{1}{2},d- \frac{1}{2}) \cdot (k-1)+1 & c>0 \\ 1 & c=0 \end{cases} \\ \hat{Y}_1 &= \begin{cases} \beta_{1-\alpha/2}(c+\frac{1}{2},d- \frac{1}{2}) \cdot (k-1)+1 & c<n(k-1) \\ k & c=n(k-1) \end{cases} \end{align} \subsection{Bootstrap Confidence Intervals} The non-parametric bootstrap method as introduced by Efron~\cite{efron1992bootstrap} uses solely the empirical distribution of the observed sample. Resampling from the empirical distribution yields a MOS estimate $\hat{Y}_r$ for each bootstrap replication $r$. The resulting distribution of mean values allows the CIs to be obtained directly based on Eq.~\eqref{eq:ci}. We use Matlab's implementation of the `bias corrected and accelerated percentile' method to cope with the skewness of the observed distribution, cf.~\cite{efron1992bootstrap}.
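The mapping between the proportion scale and the MOS scale can be sketched for two of the estimators above. The following Python sketch (ours) implements the Wald mapping of Eq.~\eqref{eq:wald} and, as a simplification, the standard Wilson score interval without the continuity correction used in the text:

```python
import math

def wald_ci_mos(mos, n, k, z=1.96):
    # Eq. (wald): map the MOS to a proportion p on [0,1], apply the
    # normal approximation, and map back to the [1, k] rating scale.
    # Note: this interval is NOT clipped and may exceed the scale bounds.
    p = (mos - 1.0) / (k - 1.0)
    s = math.sqrt(p * (1.0 - p))
    half = z * s / math.sqrt(n) * (k - 1.0)
    return mos - half, mos + half

def wilson_ci_mos(mos, n, k, z=1.96):
    # Standard Wilson score interval on N = n*(k-1) pseudo-trials
    # (no continuity correction), mapped back and clipped to [1, k].
    p = (mos - 1.0) / (k - 1.0)
    N = n * (k - 1.0)
    denom = 1.0 + z * z / N
    center = (p + z * z / (2.0 * N)) / denom
    half = z * math.sqrt(p * (1.0 - p) / N + z * z / (4.0 * N * N)) / denom
    lo = max(1.0, (center - half) * (k - 1.0) + 1.0)
    hi = min(float(k), (center + half) * (k - 1.0) + 1.0)
    return lo, hi
```

By construction the Wilson variant always respects the scale bounds, while the Wald interval can exceed them, illustrating the outlier behavior discussed in the evaluation.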
\section{Confidence Interval Estimators for MOS}\label{sec:estimators} \subsection{Problem Formulation} \martin{A ``feeling'' note: I'd tend to speak of ``quality'' rather than ``QoE'' throughout here} \lea{OK, I changed it to quality.} We assume we have a discrete rating scale with $k$ rating items, leading to a multinominal distribution, which is a generalization of the binomial distribution. For a certain test condition, $n$ users rate the quality on a discrete $k$-point rating scale, e.g., $k=5$ for the commonly used 5-point ACR scale. Each scale item is selected with probability $p_i$ for $i=1,\dots,k$; $\sum_{i=1}^k p_i = 1$. The $n$ users rate quality as one of the $k$ categories. Samples $(n_1,\dots,n_k)$ indicate the number of ratings obtained per category, with $\sum_{i=1}^k n_i = n$ (i.e., each user has provided one rating). With each category having a fixed probability $p_i$, the multinomial distribution gives the probability of any particular combination of numbers $n_i$ of successes for the various categories. \begin{equation} P(N_1=n_1,\dots,N_k=n_k) = \frac{n!}{n_1! \cdots n_k!} p_1^{n_1}\cdots p_k^{n_k} \label{eq:multinominal} \end{equation} In the QoE tests, we are interested in the rating of an arbitrary user. When $n=1$, it is the categorical distribution with $p_i$. The true, but unknown mean value, i.e. the MOS, is then \begin{equation} \E[Y] = \frac{1}{k} \sum_{i=1}^k i p_i \label{eq:unknownMOS} \end{equation} by interpreting the category rating scale as a linear scale. We denote $Y$ as a random variable of the rating of the users. We observe a sample $y_1, \dots, y_n$ with $y_i \in \{1,\dots,k\}$. As previously stated, in subjective QoE tests, the number $n$ of users is typically not very high. From the samples $(n_1,\dots,n_k)$, the MOS is estimated, with CIs being the corresponding interval estimators for the mean value. 
However, given the use of a bounded rating scale and small sample size, existing estimators do not work properly (e.g., normality assumptions no longer hold), and they violate the bounds of the confidence interval.\martin{Shouldn't this read ``violate the bounds of the rating scale''?} \subsection{Regular Normal and Student-T Approximation} \hoss{Description of the computation of the confidence intervals can be a mix of programming code and proper formulas. } \martin{I think just having the math would be more readable and succint. We could put the matlab code in pastebin (anonymized) and link to it. We can later move it to a github gist when we do the camera ready} standard formula $y_i$ is the rating of user $i$ \begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}] ci.norm = mean(yi)+norminv([ci.alpha/2, 1-ci.alpha/2],0,1).*std(yi)/sqrt(n); ci.studentT = mean(yi)+tinv([ci.alpha/2 1-ci.alpha/2],n-1).*std(yi)/sqrt(n); \end{lstlisting} \hoss{Can you simply limit the confidence interval to not violate the bounds? No, this would not be correct!} \subsection{Using Binomial Proportions for Discrete Rating Scales} \begin{itemize} \item The binomial distribution can be used as an upper bound distribution for user rating distributions. The binomial distribution leads to high standard deviations in QoE tests \cite{hossfeld2011sos}. The binomial distribution follows exactly the SOS hypothesis with parameter $a=1/m$ and $m+1$ rating scale items, for example $m=u-l=4$ in case of a 5-point ACR scale with bounds $l=1$ and $u=5$. \item Let us consider $n$ users. Assume the user ratings follow a binomial distribution, $Y_i \sim \bino{m,p}$ with $l=0$ and $u=m$. Then, the sum of the user ratings follows also a binomial distribution. \begin{equation} Y=\sum_{i=1}^n Y_i \sim \bino{\sum_{i=1}^n m,p}=\bino{n\cdot m,p} \end{equation} \item Due to differences among users, it may be $p_i \neq p_j$ for user $i$ and $j$. 
In this case, $Y=\sum_{i=1}^n Y_i$ does not follow a binomial distribution. Let us define $Z\sim \bino{n\cdot m,\bar{p}}$ with $\bar{p}=\frac{1}{n} \sum_{i=1}^n p_i$. Due to the binomial sum variance inequality,
\begin{equation}
\Var[Y] < \Var[Z]
\end{equation}
\item Hence, the variance of $Z$ is an upper bound for QoE tests. Accordingly, we may use $Z$ to derive conservative confidence intervals.
\end{itemize}
CI estimation for binomial distributions has drawn considerable attention in the literature, and there are several works comparing CI estimators for binomial proportions \cite{pires2008interval,vollset1993confidence,brown2001interval,newcombe2012confidence}. For example, \cite{brown2001interval} suggest the Wilson interval and the Jeffreys prior interval for small $n$; the standard Wald confidence interval, however, is not recommended \cite{agresti1998approximate}. In the following listings, \verb|res(i,j)| indicates the CI half-width for test condition $i$ and CI estimator $j$; \verb|low(i,j)| and \verb|up(i,j)| are the lower and upper interval ends, respectively.
\begin{itemize}
\item Wald interval employing the normal approximation: the normal-theory approximation of a CI for a proportion is known as the Wald interval.
\begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}]
z=norminv(1-ci.alpha/2,0,1);
% map ratings 1..5 to a proportion p in [0,1]
mu=mean(yi)-1; m=4; p=mu/m;
s = sqrt(p*(1-p)/n);
p1 = p-z*s; p2 = p+z*s;
% map back to the rating scale
low(i,4)=p1*m+1; up(i,4)=p2*m+1;
res(i,4)=(up(i,4)-low(i,4))/2;
\end{lstlisting}
\item Wilson score interval with continuity correction:
\begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}]
z2=z^2; % z squared
low(i,5)=max(1, (2*p*n*m + z2 - (1+z*sqrt(z2-1./(n*m) + 4*n*m*p*(1-p)+(4*p-2)) ) )/(2*(n*m+z2)) *m+1 );
up(i,5)=min(m+1, (2*p*n*m + z2 + (1+z*sqrt(z2-1./(n*m) + 4*n*m*p*(1-p)+(4*p-2)) ) )/(2*(n*m+z2)) *m+1 );
res(i,5)=(up(i,5)-low(i,5))/2;
\end{lstlisting}
\item Clopper-Pearson (using
beta distribution) \cite{clopper1934use} is the central exact interval.
\begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}]
% successes and trials of the equivalent binomial experiment
numSuc=sum(yi-1); trials=n*m;
low(i,6) = max(1, betainv(ci.alpha/2,numSuc, trials-numSuc+1) *m + 1);
up(i,6) = min(m+1, betainv(1-ci.alpha/2,numSuc+1, trials-numSuc) *m + 1);
res(i,6)=(up(i,6)-low(i,6))/2;
\end{lstlisting}
\end{itemize}
Approximate results for binomial proportions are sometimes more useful than exact results because of the inherently conservative calculation of exact methods \cite{agresti1998approximate}.
\subsection{Simultaneous Confidence Intervals for Multinomial Distribution}
Simultaneous confidence intervals for multinomial proportions and their functions can be computed with exact confidence coefficients following \cite{jin2013computing}.
\begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}]
di=1:5; xi = hist(yi,di); % rating histogram
a = chi2inv(1-ci.alpha/length(di),1); % Bonferroni-corrected chi-square quantile
scim = sum((1:5).*xi/n); % sample MOS
low(i,7)=scim-sqrt(a/n.*(sum((1:5).^2.*xi/n)- (sum((1:5).*xi/n)).^2 ));
up(i,7)=scim+sqrt(a/n.*(sum((1:5).^2.*xi/n)- (sum((1:5).*xi/n)).^2 ));
res(i,7)=(up(i,7)-low(i,7))/2;
\end{lstlisting}
\subsection{Jeffreys Interval: Bayesian Approach for Binomial Proportions}
Another exact method is a Bayesian credibility interval, which guarantees a mean coverage probability of $\gamma$ under the specified prior distribution. The authors of \cite{brown2001interval} have chosen the Jeffreys prior \cite{jeffreys1998theory}.
\begin{lstlisting}[backgroundcolor=\color{MatlabCellColour}]
% posterior Beta(numSuc+1/2, trials-numSuc+1/2) under the Jeffreys prior
if numSuc==0, low(i,8)=1;
else low(i,8)=betainv(ci.alpha/2,numSuc+1/2, trials-numSuc+1/2)*m+1; end
if numSuc==trials, up(i,8)=5;
else up(i,8)=betainv(1-ci.alpha/2,numSuc+1/2, trials-numSuc+1/2)*m+1; end
res(i,8)=(up(i,8)-low(i,8))/2;
\end{lstlisting}
\subsection{Bootstrap Confidence Intervals}
The non-parametric bootstrap method, introduced by Efron \cite{efron1992bootstrap}, uses solely the empirical distribution of the observed sample.
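To illustrate the bootstrap idea, the following sketch (a simplification in Python: it uses the plain percentile variant, not the bias-corrected and accelerated variant used in the evaluation, and the example ratings are assumed) resamples the observed ratings with replacement and takes percentiles of the resampled means:

```python
import random
from statistics import mean

def bootstrap_ci(ratings, alpha=0.05, reps=2000, seed=1):
    """Percentile bootstrap CI for the MOS (simplified sketch;
    the paper's evaluation uses Matlab's BCa implementation)."""
    rng = random.Random(seed)
    n = len(ratings)
    # means of `reps` resamples drawn with replacement from the sample
    boot_means = sorted(mean(rng.choices(ratings, k=n)) for _ in range(reps))
    lo = boot_means[int((alpha / 2) * reps)]
    hi = boot_means[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

ratings = [4, 5, 4, 3, 5, 4, 4, 3, 5, 4]  # assumed example ratings, n = 10
lo, hi = bootstrap_ci(ratings)
print(lo, hi)
```

Note that every resampled mean lies between the minimum and maximum observed rating, so the bootstrap CI respects the bounds of the rating scale by construction.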
We use the Matlab implementation with the `Bias corrected and accelerated percentile' method.
\section{Introduction}
Quality of Experience (QoE) research commonly relies on the collection of subjective ratings from a chosen panel of users to quantify various QoE dimensions (also referred to as QoE features \cite{QoEWP}), e.g., related to perceived audio/visual quality, perceived usability, or overall perceived quality. While various rating scales have been used in both the user experience (UX) and QoE research fields, the results of subjective studies reported by the QoE community have to a large extent relied on the use of a standardized 5-point Absolute Category Rating (ACR) scale to calculate Mean Opinion Score (MOS) values. While it has been argued that researchers should go beyond the MOS in their studies~\cite{hossfeld2016qoe} in order to consider different applications and user diversity, MOS estimates remain a staple of the QoE literature. In this context, the statistical analysis of subjective study results, subsequently used to derive QoE estimation models~\cite{ITU_P1401}, relies on the estimation of confidence intervals (CIs) to quantify the significance of MOS values per test condition. Challenges arise in dealing with uncertainties resulting from problems such as ordering effects and subject biases~\cite{ITU_P1401,janowski2014_QoMEX}.
Such statistical uncertainties are expressed in terms of CIs. Given the nature of conducting QoE studies, two main issues arise. Firstly, rating scales used in quantitative QoE evaluation are bounded at both ends; the individual rating scores $Y$ of a subject are therefore bounded as well. However, for the calculation of CIs, the normal distribution (due to the central limit theorem) or Student's t-distribution is used, both of which are unbounded. Secondly, due to the inherent complexity of running subjective studies, resulting in a compromise between a large number of test conditions and participant fatigue, the number $n$ of test subjects taking part in a study is generally small, in particular when running tests in a lab environment. We note that while methods such as crowdsourcing may be utilized to obtain a much larger population sample, in many cases the specifics of the study call for a controlled lab environment. As an example, and bearing in mind that the number of required participants clearly depends on the test design, number of test conditions, and target population, the ITU-T recommends a minimum of 24 subjects (controlled environment) or 35 subjects (public environment) for subjective assessment of audiovisual quality \cite{ITU_P913}. ITU-T Recommendation P.1401 further states that if fewer than 30 samples are used, the normal distribution starts to become distorted and the calculation of CIs based on normality assumptions is no longer valid; in such cases, P.1401 advocates the use of Student's t-distribution when calculating CIs. Given the aforementioned issues, we highlight that commonly used CI estimators do not work properly for small sample sizes, as the normal distribution assumption may not be valid, and that they violate the bounds of the rating scale. In this paper, we review statistical approaches from the literature for their application in the QoE domain for MOS interval estimation (instead of having only a point estimator, which is the MOS).
Due to space restrictions, we consider only discrete rating scales, and test the CI estimators in terms of efficiency (CI width), coverage (how many CIs overlap the true mean value), and outlier ratio. The remainder of this paper is organized as follows. Section~\ref{sec:background} provides the background on CIs, such as the central limit theorem, used to derive CI estimators. Section~\ref{sec:estimators} considers common estimators for the MOS and introduces some estimators based on binomial distributions that are suitable for MOS CI estimation. It also discusses methods not commonly used in the QoE community, such as simultaneous CIs and Bayesian approaches for multinomial distributions, as well as bootstrapping CIs. Section~\ref{sec:results} defines various scenarios for evaluating the performance of the estimators in terms of coverage, outlier ratio, and CI width. Section~\ref{sec:conclusions} concludes this work and gives some recommendations on CI estimators for MOS values in practice.
\section*{Acknowledgements}
\bibliographystyle{IEEEtran}
\section{Numerical Results}\label{sec:results}
For evaluating the estimators' performance, we consider different scenarios in which the user ratings for a test condition are sampled from a known distribution. The commonly used 5-point ACR scale is considered. We investigate two different scenarios: \begin{inparaenum}[(1)] \item a binomial distribution as an upper bound in terms of variance for QoE tests, and \item low variance, where users only rate $2,3,4$ and avoid the rating scale edges. \end{inparaenum} The performance is then evaluated with several metrics: the coverage of the CIs, the width of the CIs, and the outlier ratio.
\subsection{Scenarios for Performance Evaluation}
We consider a $k$-point rating scale. For a certain test condition $x$, the user ratings $Y_x$ follow a certain discrete distribution, with $p_i=P(Y_x=i)$ for $i \in \{1,\dots,k\}$.
User ratings $Y_{u,x,i}$ are sampled for test condition $x$ for the users $u \in \{1,\dots,n\}$ from the distribution $F_{Y_x}$. The simulations are repeated $r$ times to obtain statistically significant results in the evaluation; the index $i \in \{1,\dots,r\}$ represents the $i$-th simulation run. We use $r=200$ repetitions. For the evaluation, we consider $m=101$ test conditions with known mean value, i.e., the \emph{expected value} $\E[Y_x] = \mu_x$ for $x \in\{1,\dots,m\}$. It is $\mu_x = \frac{x-1}{m-1}(H-L)+L$, with $H \leq k$ and $L \geq 1$ indicating the maximum and minimum possible user rating $Y_x$, respectively.
\subsubsection{Binomially Distributed User Ratings}
This scenario represents a high variance of user ratings, which is also observed in real QoE tests. The user rating diversity of any QoE experiment can be quantified in terms of the SOS parameter $a$ defined in \cite{hossfeld2011sos}. For example, \cite{hossfeld2016qoe} measured $a=0.27$ for the results of a web QoE study. This was among the highest SOS parameters observed across QoE studies and applications such as video streaming, VoIP, and image QoE; the results of gaming QoE studies have shown a similarly high SOS parameter. The binomial distribution leads to an SOS parameter of $a=0.25$ and is therefore appropriate as a realistic scenario for high variances.
\begin{equation}
Y_x \sim \bino{k-1,p}+1
\end{equation}
with MOS $\E[Y_x]=p \cdot (k-1)+1$ and $\Var[Y_x]=(k-1)p(1-p)$. Hence, $p=\frac{\mu_x-1}{k-1}$.
\subsubsection{Low Variance}
Next, we consider a scenario with low variances. In that case, users do not use the edges of the rating scale and only rate $2,\dots,k-1$. This can be realized with a shifted binomial distribution.
\begin{equation}
Y_x \sim \bino{k-3,p}+2
\end{equation}
with $\E[Y_x]=p (k-3)+2$ and $\Var[Y_x]=(k-3)p(1-p)$. Then, $p=\frac{\mu_x-2}{k-3}$. The SOS parameter is numerically derived \cite{hossfeld2016qoe} and found to be $a=0.084$.
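The two rating-generation scenarios above can be sketched as follows (in Python rather than the Matlab used for the simulations; parameter values are the illustrative $k=5$, $n=20$ case):

```python
import random

K = 5  # 5-point ACR scale

def sample_binomial(p, rng):
    """High-variance scenario: Y = Bino(k-1, p) + 1, ratings in 1..k."""
    return sum(rng.random() < p for _ in range(K - 1)) + 1

def sample_low_variance(p, rng):
    """Low-variance scenario: Y = Bino(k-3, p) + 2, ratings in 2..k-1."""
    return sum(rng.random() < p for _ in range(K - 3)) + 2

rng = random.Random(0)
n = 20                      # subjects per test condition
p = 0.5                     # target MOS: p*(K-1)+1 = 3.0
ratings = [sample_binomial(p, rng) for _ in range(n)]
print(sum(ratings) / n)     # sample MOS, scattered around 3.0
```

Repeating this sampling for each of the $m$ test conditions and $r$ runs yields the simulated studies on which the metrics below are evaluated.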
\subsection{Metrics for Evaluating the Performance of the Estimators}
According to the distribution defined in a given scenario, we generate $n$ samples (i.e., user ratings) for $m$ test conditions and repeat the simulation $r$ times. The sample $y_{u,x,i}$ indicates the rating of user $u$ for test condition $x$ in run $i$. For each test condition $x$ and each run $i$, the MOS is derived by averaging over the $n$ sampled subjects' ratings.
\begin{equation}
\hat{Y}_{x,i} = \frac{1}{n}\sum_{u=1}^n y_{u,x,i}
\label{eq:sampleMOS}
\end{equation}
The CI estimator knows neither the underlying distribution $Y_x$ nor the expected values $\mu_x$. We investigate the performance of the CI estimators with the following metrics.
\subsubsection{Coverage}
For a certain confidence interval $[\theta_L;\theta_U]$ derived from the samples of all $n$ users for test condition $x$ in run $i$, we can check whether the expected value $\mu_x$ is contained in the interval.
\begin{equation}
C_{x,i} = \begin{cases} 1 & \text{if } \theta_L \leq \mu_x \leq \theta_U \\ 0 & \text{otherwise} \end{cases}
\end{equation}
The coverage of the CI estimator for test condition $x$ is then the average over all $r$ simulation runs, i.e., the probability that the CI contains the expected value. The marginal distribution of $C_{x,i}$ for a fixed test condition $x$ gives the \emph{test condition perspective}; analogous definitions are used for the CI width and the outlier ratio.
\begin{equation}
\hat{C}_x = \frac{1}{r} \sum_{i=1}^r C_{x,i}
\label{eq:coverageCx}
\end{equation}
The marginal distribution of $C_{x,i}$ for a single QoE study $i$ gives the \emph{QoE study perspective}.
\begin{equation}
\hat{C}_i = \frac{1}{m} \sum_{x=1}^m C_{x,i}
\label{eq:coverageCi}
\end{equation}
Please note that the overall average $\hat{C}$ over all studies and test conditions is obtained either by averaging over $\hat{C}_x$ or over $\hat{C}_i$.
\begin{equation}
\hat{C} = \frac{1}{m} \sum_{x=1}^m \hat{C}_x = \frac{1}{r} \sum_{i=1}^r \hat{C}_i
\label{eq:averageCoverage}
\end{equation}
\subsubsection{Outlier Ratio}
For test condition $x$ and study $i$, we estimate the probability that the confidence interval $[\theta_L;\theta_U]$ is outside the bounds of the rating scale $[1;k]$.
\begin{equation}
O_{x,i} = \begin{cases} 1 & \text{if } \theta_L<1 \text{ or } \theta_U>k \\ 0 & \text{otherwise} \end{cases}
\end{equation}
Then, we define the outlier ratio from the test condition perspective and the QoE study perspective, respectively.
\begin{equation}
\hat{O}_x = \frac{1}{r} \sum_{i=1}^r O_{x,i} \, , \quad \hat{O}_i = \frac{1}{m} \sum_{x=1}^m O_{x,i} \, .
\label{eq:outlier}
\end{equation}
\subsubsection{CI Width}
Finally, the widths $\hat{W}_x$ and $\hat{W}_i$ of the confidence intervals are considered from the test condition perspective and the QoE study perspective, respectively. Thereby, the CI widths $W_{x,i}$ are averaged over all runs and over all test conditions, respectively.
\begin{equation}
\hat{W}_x = \frac{1}{r} \sum_{i=1}^r W_{x,i} \, , \quad \hat{W}_i = \frac{1}{m} \sum_{x=1}^m W_{x,i} \, .
\label{eq:ciwidth}
\end{equation}
Please note that the average over $\hat{W}_x$ and the average over $\hat{W}_i$ are identical.
\begin{equation}
\hat{W} = \frac{1}{m} \sum_{x=1}^m \hat{W}_x = \frac{1}{r} \sum_{i=1}^r \hat{W}_i
\label{eq:averageWidth}
\end{equation}
\subsection{Scenario with Binomially Distributed Ratings}
\begin{figure}%
\centering%
\includegraphics[width=0.95\columnwidth]{figs/binoPerspectiveSingleInterval}%
\caption{\textbf{Binomial distribution.}\xspace The \emph{test condition perspective} considers the performance measures $\hat{M}_x$. We observe that for some estimators (norm., stud., sim.CI, Wald) there are several test conditions with bad properties (low coverage, high outlier ratio). The corresponding numbers are provided in Table~\ref{tab:binoStudyBothTables}.
Except for the Wald estimator, the binomial proportion estimators (C-P, Wils., Jeff.) work much better. Bootstrapping also leads to good results, but suffers from coverage outliers. }%
\label{fig:binoPerspectiveSingleInterval}%
\end{figure}
\begin{figure}%
\centering%
\includegraphics[width=0.95\columnwidth]{figs/binoPerspectiveQoEStudy}%
\caption{\textbf{Binomial distribution.}\xspace The \emph{QoE study perspective} focuses on the performance measure $\hat{M}_i$. Hence, the performance (coverage, outlier ratio, CI width) is averaged over all test conditions within a single run. The boxplot then summarizes those average results $\hat{M}_i$ over all $r$ runs. Concrete numbers are provided in Table~\ref{tab:binoStudyBothTables}. }%
\label{fig:binoPerspectiveQoEStudy}%
\end{figure}
\newcommand{\hi}[1]{\textcolor{red}{#1}}
\definecolor{bronze}{rgb}{0.8, 0.5, 0.2}
\newcommand{\yi}[1]{\textcolor{bronze}{#1}}
\begin{table}\centering
\caption{ The performance metrics are averaged and differentiated for coverage from the \emph{test condition perspective} ($\hat{C}_x$) and the \emph{QoE study perspective} ($\hat{C}_i$) for the two scenarios. Minimum coverage is denoted by $\hat{C}_{x|i}^m$ and coverage outliers in the boxplot by $\hat{C}_{x|i}^o$. }
\label{tab:binoStudyBothTables}
\begin{tabular}{cccccccc} \toprule
\emph{Binomial} & $\hat{C}$ & $\hat{C}_x^o$ & $\hat{C}_x^m$ & $\hat{C}_i^o$ & $\hat{C}_i^m$ & $\hat{O}$ & $\hat{W}$ \\ \midrule
norm. & 0.92 & 0.08 & \hi{0.55} & 0.01 & 0.83 & \hi{0.08} & 0.68 \\
stud. & 0.93 & 0.09 & \hi{0.55} & 0.01 & 0.85 & \hi{0.09} & 0.72 \\
sim.CI & 0.96 & 0.08 & \hi{0.55} & 0.00 & 0.92 & \hi{0.13} & 0.87 \\
Wald & 0.98 & 0.14 & \hi{0.55} & 0.04 & 0.94 & \hi{0.30} & \hi{1.36} \\
C-P & 0.97 & 0.01 & 0.93 & 0.03 & 0.91 & 0.00 & 0.72 \\
Wils. & 0.97 & 0.00 & 0.93 & 0.04 & 0.90 & 0.00 & 0.73 \\
Jeff. & 0.95 & 0.00 & 0.92 & 0.04 & 0.89 & 0.00 & 0.68 \\
boot. & 0.93 & 0.05 & \hi{0.52} & 0.00 & 0.87 & 0.00 & 0.67 \\ \toprule
\emph{Low.
var.} & $\hat{C}$ & $\hat{C}_x^o$ & $\hat{C}_x^m$ & $\hat{C}_i^o$ & $\hat{C}_i^m$ & $\hat{O}$ & $\hat{W}$ \\ \midrule
norm. & 0.90 & 0.10 & \hi{0.28} & 0.00 & 0.82 & 0.00 & 0.48 \\
stud. & 0.91 & 0.09 & \hi{0.28} & 0.00 & 0.83 & 0.00 & 0.51 \\
sim.CI & 0.93 & 0.10 & \hi{0.28} & 0.01 & 0.87 & 0.00 & 0.61 \\
Wald & 1.00 & 0.00 & 1.00 & 0.00 & 1.00 & 0.00 & \hi{1.67} \\
C-P & 1.00 & 0.23 & 0.98 & 0.14 & 0.99 & 0.00 & \yi{0.87} \\
Wils. & 1.00 & 0.24 & 0.98 & 0.16 & 0.99 & 0.00 & \yi{0.87} \\
Jeff. & 1.00 & 0.05 & 0.98 & 0.23 & 0.97 & 0.00 & \yi{0.82} \\
boot. & 0.91 & 0.11 & \hi{0.28} & 0.01 & 0.83 & 0.00 & 0.47 \\ \bottomrule
\end{tabular}
\end{table}
Figures~\ref{fig:binoPerspectiveSingleInterval} and~\ref{fig:binoPerspectiveQoEStudy} show the results for the binomial distribution scenario from the TC and QoE study perspective, respectively. The boxplots show the median within the box; the bottom and top of the box are the first and third quartiles. The whiskers extend to the most extreme data points within 1.5 times the interquartile range (IQR) of the upper and lower quartiles, respectively; data outside 1.5 IQR are marked as outliers with a dot. An overview of the performance measures is provided in Table~\ref{tab:binoStudyBothTables}. The numerical results for the binomial case show that Clopper-Pearson, Jeffreys, and bootstrapping perform well from both the test condition and the QoE study perspective: they have good coverage, do not suffer from outliers, and have small CI widths. The proposed idea based on binomial proportions fails if the distribution has a higher variance than a binomial distribution. Then, the coverage is poor; the confidence intervals are too small, as only binomial variances are assumed, while the actual variances are higher. This is, however, very rare in actual QoE studies.
If the variances are higher, this is often an indicator of hidden influence factors in the test setup or some other issues \cite{hossfeld2011sos}.
\subsection{Low Variance Scenario}
We now consider only the QoE study perspective, which is provided in Figure~\ref{fig:lowvarPerspectiveQoEStudy}. In case of low variances, the three identified estimators (Wilson, Clopper-Pearson, Jeffreys) still perform very well, and coverage is 100\%. However, in that case, the CI width is larger than for the normal or Student-t estimators. The reason for this is that the proposed estimators assume a binomial distribution (i.e., a much larger variance) and necessarily overestimate the CIs. For all estimators, the outlier ratio is zero. Still, the normal and Student-t estimators have some problems covering certain TCs at the scale edges (see $\hat{C}^m_x$ or $\hat{C}^o_x$).
\begin{figure}%
\centering%
\includegraphics[width=0.95\columnwidth]{figs/lowvarPerspectiveQoEStudy}%
\caption{\textbf{Low variance scenario.}\xspace For all estimators, the outlier ratio is zero. The Wald interval's average CI width is about 1.67. The proposed binomial-based CI estimators lead to wider CIs than the normal estimators, as the assumed binomial distribution has a higher variance. Thus, the estimators are conservative for low variances. }%
\label{fig:lowvarPerspectiveQoEStudy}%
\end{figure}
Figure~\ref{fig:lowvarciWidthCoverageScatter2} considers the average CI width and coverage when varying the number $n$ of subjects in the study. The most efficient way to decrease the CI width is to increase $n$. It is worth noting that the binomial proportion estimators show almost constant coverage, in contrast to bootstrapping.
\begin{figure}%
\centering%
\includegraphics[width=0.95\columnwidth]{figs/lowvarciWidthCoverageScatter2}%
\caption{\textbf{Low variance scenario.}\xspace The average coverage $\hat{C}$ and CI width $\hat{W}$ are considered depending on the number of subjects of the study.
The outlier ratio is zero for the three considered estimators. }% \label{fig:lowvarciWidthCoverageScatter2}% \end{figure}
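As an illustrative aside (not the evaluation code used in this study), the two closed-form intervals compared above can be computed directly from their standard formulas. The sketch below also shows the degenerate Wald behavior for an extreme rating ratio ($k=0$), which explains its poor coverage at the edges:

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald interval: normal approximation around the raw proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(k, n, z=1.96):
    """Wilson score interval: shrinks the center towards 1/2 and stays
    usable at the boundary proportions p = 0 and p = 1."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# With k = 0 acceptable ratings out of n = 30 subjects, the Wald interval
# collapses to a single point, while Wilson still yields a usable CI.
print(wald_ci(0, 30))    # (0.0, 0.0)
print(wilson_ci(0, 30))
```

The Clopper-Pearson and Jeffreys intervals additionally require beta-distribution quantiles, which is why only the two closed-form intervals are sketched here.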
\section{Introduction} Visual tracking aims to estimate the trajectory of an object through a video based on only one bounding box annotation at the beginning of the sequence. Tracking is important for applications in surveillance~\cite{emami2012role}, video understanding~\cite{renoust2016visual} and robotics~\cite{liu2012hand}. One of the main challenges of tracking is the limited data problem: the tracker should be able to track an object based on only a single annotated bounding box. The success of a tracker therefore depends heavily on the quality of the discriminative features used by the tracker. Recently, specialized tracking subproblems have emerged. Among these is the field of tracking in thermal infrared (TIR) images, whose importance is further increasing due to improvements in the resolution and quality of thermal infrared sensors~\cite{7406435,felsberg2016thermal,Kristan_2017_ICCV}. The advantage of thermal images is that they are not influenced by illumination variations and shadows, and objects can be distinguished from the background, which is normally colder. In addition, thermal infrared tracking can be used in total darkness, where visual cameras receive no signal. Considering these advantages, thermal infrared tracking has a wide range of applications in car and pedestrian surveillance systems as well as various defense systems~\cite{gade2014thermal}. In recent years, Discriminative Correlation Filter (DCF) based methods~\cite{bolme2010visual, henriques2015high, danelljan2017eco} have been shown to provide excellent tracking performance on existing benchmarks~\cite{wu2015object,VOT_TPAMI,mueller2016benchmark}. The DCF based trackers learn a correlation filter from example patches to discriminate between the target and background appearance. 
Further, the DCF based framework efficiently utilizes all spatial shifts of the training samples by exploiting the properties of circular correlation to train and apply a discriminative classifier in a sliding window fashion. Lately, the DCF based framework has been significantly advanced by employing high-dimensional visual features~\cite{henriques2015high,danelljan2014adaptive,danelljan2016adaptive}, powerful learning methods~\cite{song2017crest,danelljan2017eco}, reducing boundary effects~\cite{danelljan2015learning}, and accurate scale estimation~\cite{danelljan2014accurate}. Due to their superior performance in RGB tracking, some of these methods have also been applied with success to TIR~\cite{danelljan2017eco,danelljan2015learning}. Recently, deep learning has revolutionized the field of computer vision, significantly advancing the state-of-the-art in many applications~\cite{krizhevsky2012imagenet}. Generally, deep networks are trained on large amounts of labeled training data. Despite its astounding success, the impact of deep learning on generic visual tracking (RGB data) has been limited. One of the key issues when employing deep features for tracking is the unavailability of large-scale labeled tracking data for training. Further, the tracking model must be learned using a single labeled frame. Therefore, most existing deep learning based DCF trackers~\cite{ma2015hierarchical,danelljan2016beyond,danelljan2017eco} employ deep features pre-trained on the ImageNet dataset~\cite{russakovsky2015imagenet} for the image classification task. Other approaches~\cite{valmadre2017end,song2017crest} have investigated the integration of DCF in a deep network by adopting an end-to-end philosophy, but this did not result in major improvements over features from pre-trained networks. Even more than for RGB tracking, introducing deep learning to TIR tracking is hampered by the absence of large datasets. 
The datasets which are available for thermal infrared videos are relatively small. Moreover, there is no ImageNet counterpart of infrared still images on which a large network could be pre-trained. Therefore, the usage of handcrafted features remains dominant for TIR tracking. For instance, the top three trackers in VOT-TIR2017~\cite{Kristan_2017_ICCV,VOT_TPAMI} still exploit handcrafted features. The winner~\cite{yu2017dense} of the VOT-TIR2017 challenge employs HOG~\cite{dalal2005histograms} and motion features. Further, the other top-performing methods~\cite{zhu2016beyond,7406435} are based on handcrafted features. The success of these methods shows that handcrafted features are still the best choice for TIR tracking. Deep learning has also resulted in fast progress in generative models which are able to generate samples from complex image distributions~\cite{goodfellow2014generative}. These models have been further extended to image-to-image translation models~\cite{isola2017image}, which make it possible to learn mappings between image domains. A further extension of this work learns mappings between unpaired domains~\cite{zhu2017unpaired}, based on the observation that transferring an image to another domain and then back to the first domain should recover the original input image. One of the more interesting applications of these generative networks is that they can be used to construct synthetic datasets for small data domains, such as TIR. In this work, we show that labeled data from RGB can be translated to TIR data, and the labels can be transferred. We tackle the key limited-data problem for TIR tracking by utilizing recent developments in image-to-image translation methods~\cite{isola2017image,zhu2017unpaired}. The idea is to automatically transfer RGB tracking videos to the TIR domain. We can then automatically transfer the labels from these RGB videos to the synthetic TIR videos. 
The resulting data can then be used to extract discriminative deep features for the TIR domain. The advantage is that we can generate the TIR counterpart of the available RGB tracking datasets, which are much larger than the current TIR tracking datasets. {The main contributions of the paper are: \begin{itemize} \item We address the scarcity of labeled data for TIR tracking. To this end, we propose a framework which transfers RGB data to synthetic TIR data. The labels available for the RGB data are also transferred to the TIR data, resulting in a large synthetic TIR data set for tracking. \item We are the first to perform end-to-end training for TIR tracking, showing that this can significantly improve results (see Table~\ref{table:models}). We also show that a tracker trained on only synthetic data can outperform trackers trained on available labeled TIR data (see Fig.~\ref{fig:qualResECO}). \item We perform extensive evaluations on the latest TIR tracking challenge~\cite{Kristan_2017_ICCV}, verifying the effectiveness of our different models trained on synthetic TIR datasets. We show that, when combined with motion features, our method obtains state-of-the-art results on the TIR tracking challenge. \end{itemize}} The rest of the paper is organized as follows. In section~\ref{sec:related} we briefly discuss related work. In section~\ref{sec:corrfilter} we introduce the standard correlation filter and the current end-to-end deep correlation filter. In section~\ref{sec:gan} we describe the prevalent generative adversarial networks and present our generated synthetic tracking videos. In section~\ref{sec:exps} we present our experiments on a standard thermal infrared tracking dataset. In section~\ref{sec:conclude} we conclude our work and discuss future research. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{images/fig1.pdf} \caption{Qualitative comparison of our approach trained on generated data only (red) with the baseline ECO~\cite{danelljan2017eco} (green) on the \textit{quadrocopter2}, \textit{garden} and \textit{car2} videos. The ground truth bounding box is provided in yellow. Owing to the synthetic TIR data, our model is able to follow the object successfully in case of out-of-plane rotation, partial occlusion and scale changes.} \label{fig:qualResECO} \end{figure*} \section{Related Work}\label{sec:related} \subsection{DCF tracking} In recent years, discriminative correlation filter (DCF) based tracking methods have shown excellent performance in terms of accuracy and robustness on benchmark tracking datasets~\cite{VOT_TPAMI,wu2015object}. The DCF based trackers aim at learning a correlation filter in an online fashion from example image patches to discriminate between the target and background appearance. The seminal work of~\cite{bolme2010visual} was restricted to a single feature channel (grayscale image). Later, the DCF framework was extended to use multi-dimensional handcrafted features by~\cite{galoogahi2013multi,henriques2015high,danelljan2014adaptive}, such as HOG~\cite{dalal2005histograms} and Color Names~\cite{van2009learning}. Some of the recent advances in DCF frameworks can be attributed to reducing boundary effects~\cite{danelljan2015learning}, robust scale estimation~\cite{danelljan2014accurate}, integrating context~\cite{cf_ca_tracking}, and adding a long-term memory component~\cite{ma2015long}. Even after more than five years of rapid development, discriminative correlation filter based tracking remains the mainstream approach in single-object tracking. Recent modifications of DCF include the following: Mueller et al.~\cite{cf_ca_tracking} sample four context patches around the target and incorporate these to regularize the regression function, which has the same effect as hard negative mining. 
Lukezic et al.~\cite{lukezic2017discriminative} enlarge the search region and improve tracking of non-rectangular objects by using spatial maps to restrict the correlation filter. They also give the learned filter adaptive channel-wise weights, which improves the quality of the filter. Kiani et al.~\cite{kiani2017learning} use a mask to crop the object in the spatial domain and derive a new closed-form solution of the correlation filter in the Fourier domain by embedding the mask matrix into the formulation. This yields significantly more shifted examples unaffected by boundary effects. Compared to handcrafted features (e.g. HOG~\cite{dalal2005histograms}, intensity and Color Names~\cite{van2009learning}), deep CNN features significantly improve the robustness of the tracker against geometric variations, resulting in a significant improvement in performance~\cite{danelljan2015convolutional}. This is mainly caused by the high discriminative power of deep features, since the CNNs are trained on the large ILSVRC2012 dataset~\cite{russakovsky2015imagenet}. Later, Ma et al.~\cite{ma2015hierarchical} propose to encode the target appearance on several convolutional layers, where each layer has a corresponding correlation filter. This hierarchical architecture locates targets by maximizing the response of each layer with different weights, finding an optimal position in a coarse-to-fine manner. Directly using different layers may not take full advantage of the CNN features because of the discrete distribution of features. To exploit the continuity between different layers of networks, Danelljan et al.~\cite{danelljan2016beyond} propose to learn a convolution operator in the continuous spatial domain, called CCOT. As CCOT is very slow, Danelljan et al.~\cite{danelljan2017eco} propose to factorize the convolution operator to reduce the dimensions of the feature maps. 
Then they use a GMM to generate samples, which significantly accelerates the tracker, enabling it to run in real-time while maintaining the same or higher accuracy. \subsection{TIR tracking} Currently, the leading TIR trackers still employ handcrafted features in their models. Yu et al.~\cite{yu2017online} propose structural learning on dense samples around the object. Their tracker uses edge and HOG features and transforms them into the Fourier domain to obtain a real-time tracker. Later they extend this work, called DSLT~\cite{yu2017dense}, by integrating HOG~\cite{dalal2005histograms} and motion features. With this tracker they won the VOT-TIR2017 challenge~\cite{Kristan_2017_ICCV}. Another TIR tracker, called EBT~\cite{zhu2016beyond}, uses edge features to devise an objectness measure specific to each instance. This enables the generation of high quality object proposals and the use of richer features. Concretely, for each proposal they extract a 2640-dimensional histogram feature as well as a 5-level pyramid computed from the intensity channel. They achieve the runner-up position in the VOT-TIR2017 challenge. SRDCFir~\cite{7406435} extends the SRDCF~\cite{danelljan2015learning} tracker for TIR data by adding motion features. SRDCF is a DCF-based tracker that introduced a spatial regularization function to penalize DCF coefficients that reside outside the target region, which mitigates the damaging boundary effects present in the traditional DCF. Another branch of TIR tracking combines the input TIR data with the visual modality, concretely with the image intensity given as a grayscale image. For example, Li et al.~\cite{li2016learning} propose an adaptive fusion scheme to incorporate information from grayscale images and TIR videos during tracking. Similarly, the approach in~\cite{li2017grayscale} samples a set of patches around the object and extracts a joint sparse representation in both grayscale and TIR modalities. 
The usage of generating other modalities was pioneered by Hoffman et al.~\cite{hoffman2016learning}. They used generation of depth data to improve classification on the abundant-data modality (RGB), whereas we use data generation as a source of labeled data for the scarce-data modality (TIR). Xu et al.~\cite{xu2017learning} use a network which generates TIR images to pre-train the weights. These weights are then applied in a network which is used on RGB data with the aim of improving tracking of pedestrians. In contrast to them, we use the generation of TIR data for data augmentation; we create large synthetic labeled data sets of TIR data to be able to train end-to-end features for TIR data. \subsection{Adversarial image-to-image translation} Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} have achieved promising results in several tasks such as image generation~\cite{denton2015deep}, image editing~\cite{perarnau2016invertible}, and representation learning~\cite{salimans2016improved}. The conditional variants of GANs~\cite{mirza2014conditional} make it possible to condition the image generation on a selected input variable, for example, an input image. In this case, the task becomes image-to-image translation, and this is the variant we use here. The general method of Isola et al.~\cite{isola2017image}, pix2pix, was the first GAN-based image-to-image translation work that was not designed for a specific task (e.g. colorization~\cite{zhang2016colorful}). The architecture is based on an encoder-decoder with skip connections~\cite{ronneberger2015u} and it is trained using a combination of two losses: a conditional adversarial loss~\cite{goodfellow2014generative} and a loss that keeps the generated image close to the corresponding target image. This method achieves excellent results, but requires matching pairs of training images, which limits the applicability of the model as such data might not be easily accessible. 
In order to overcome this limitation, Zhu et al.~\cite{zhu2017unpaired} extended this model to the case in which paired data is not available. Their method, called CycleGAN, relies on the assumption that mapping an image from the input domain to the target and then back to the input (i.e. the cycle) should result in the identity function. Based on this, they add a cycle consistency loss that enforces the correct reconstruction of the input image resulting from the composition of the two mappings. They demonstrate the effectiveness of their method on multiple tasks such as edges to real images or photo enhancement. In this paper, we use image-to-image translation to generate a synthetic large-scale TIR tracking dataset from a labeled RGB dataset, with the goal of learning better deep features for tracking. \section{Method overview} We aim to train end-to-end deep features for tracking in TIR data. However, to train effective deep features for TIR data, we need a large dataset of labeled TIR videos. Unfortunately, the amount of labeled TIR data is very scarce. To the best of our knowledge, only the BU-TIV dataset~\cite{wu2014thermal} contains a considerable amount of labeled TIR videos, but most of them depict only one object class (pedestrian). Therefore, most state-of-the-art TIR tracking methods are still based on hand-crafted features~\cite{yu2017dense,zhu2016beyond,7406435}. On the other hand, there are vast amounts of RGB videos labeled for tracking~\cite{wu2015object,VOT_TPAMI}. One solution could therefore be to apply the pre-trained features which are optimal for tracking in RGB data to TIR data. However, this is unlikely to be optimal because TIR and RGB data differ significantly. {To illustrate the difference in nature between RGB and TIR data, we measure the average activation of the 96 filters of the first layer of a pre-trained AlexNet on the KAIST dataset. 
This dataset contains both RGB and TIR images of the same scenes (a similar study for depth images has been performed in~\cite{song2017depth}). The pre-trained network is trained to recognize objects in RGB images (i.e. on ImageNet). In Fig.~\ref{fig:filter} we show the results. The graph shows the average activation of the filters in descending order. When applied to data which is similar to that on which the network was trained, the average activations tend to follow a uniform distribution. This can be seen for RGB images, where most filters yield the same average activation and only a few filters deviate from this pattern. When we perform the same experiment on TIR data, the pattern changes. We can now observe clear differences between filters with a higher average activation and filters with a lower average activation. This shows that these filters are probably not optimal for TIR tracking. When we look at the exact filters which have low and high activation, we see that low-frequency patterns (blobs and edges) are prevalent for the TIR data, whereas high-frequency filters are rarely activated by TIR data. This is not surprising since most textures, responsible for most of the high-frequency content of images, do not appear in TIR data. In conclusion, given the different nature of the image statistics of RGB data and TIR data, it is probable that a network trained on TIR data would outperform a network trained on RGB data. } {In this paper, we aim to address the problem of data scarcity of labeled videos for tracking in TIR data. We do this by exploiting the vast amount of labeled RGB videos which are available, in combination with recent advances in image-to-image translation techniques. We use these image-to-image translation models to transfer large labeled RGB datasets to a synthetic TIR dataset together with the tracking annotations. As a result we obtain a large labeled synthetic TIR dataset. 
We use this synthetic TIR dataset to train end-to-end deep networks to obtain optimal TIR features for tracking. Then we plug the optimal TIR feature model into a state-of-the-art tracker; here we use ECO~\cite{danelljan2017eco}. An overview of our method is provided in Fig.~\ref{fig:pipeline}. In the following section we detail the various parts of our algorithm.} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/aar.pdf} \caption{{Average activation of filters from the first layer of pre-trained AlexNet~\cite{krizhevsky2012imagenet} on the test set of KAIST~\cite{hwang2013multispectral} for RGB and TIR images.}} \label{fig:filter} \end{figure} \section{Deep Learning Features for Correlation Filter Tracking} \label{sec:corrfilter} In this section, we introduce the standard correlation filter and the current end-to-end deep correlation filter. Then we describe the Efficient Convolution Operators (ECO)~\cite{danelljan2017eco} method, which we will use as the correlation filter for our experiments. \subsection{Correlation Filter Tracking} The conventional discriminative correlation filter (DCF) formulation~{\cite{henriques2015high}} learns a linear correlation filter $f$ that discriminates the target appearance from the background. The target location is predicted by applying the correlation filter to a sample feature map. The desired filter $f$ can be obtained by minimizing a least squares objective: \begin{align} E(f) = \left\|\sum^D_{d=1} f^d*x^d-y \right\|^2+\lambda \sum^D_{d=1} \|f^d\|^2. \label{eq:cf} \end{align} Here $*$ denotes circular correlation. $x^d$ denotes feature maps of training samples $x$, where the layer $d\in\{1,\dots,D\}$. $f^d$ denotes channel $d$ of filter $f$. $y$ is the regression target and $\lambda$ is a regularization weight to control over-fitting. A closed-form solution is obtained in the Fourier domain, \begin{eqnarray} \hat{f}^d = \frac{\hat{x}^d \hat{y}^*}{\sum^D_{d=1}\hat{x}^d (\hat{x}^d)^* + \lambda}. 
\label{eq:filter} \end{eqnarray} Here $\hat{y}^*$ denotes the complex conjugate of the discrete Fourier transform $\mathcal{F}(y)$. Recently, researchers have proposed several methods for end-to-end training of features for tracking: CFNet~\cite{valmadre2017end}, DCFNet~\cite{wang2017dcfnet}, and CFCF~\cite{gundogdu2017good}. All use a two-branch Siamese network, of which one branch is used to compute the optimal correlation filter, which is applied in the other branch to obtain the response map (see Fig.~\ref{fig:pipeline}). Both branches share the weights of their convolutional layers. Training is performed with paired images from the same video. The gradients are backpropagated through the discriminative correlation filter layer (DCFL), which has a closed-form solution~\cite{valmadre2017end}. Surprisingly, trackers based on end-to-end training only slightly outperformed off-the-shelf features. It should be noted that all end-to-end trackers train on RGB datasets, mainly on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC15)~\cite{russakovsky2015imagenet}, and no results for end-to-end tracking on other modalities like TIR are available. In this paper, we use the end-to-end CFNet training procedure proposed by Bertinetto et al.~\cite{valmadre2017end}. This method obtains stable and fast network training due to its Fourier domain implementation of the discriminative correlation filter layer. In contrast to them, we apply it to TIR tracking. Since currently available datasets for TIR tracking are rather small, we propose in the next section our approach to generating synthetic TIR tracking data from labeled RGB tracking data. \subsection{Efficient Convolution Operators} Previously, we have explained how to train end-to-end features for tracking. These features can be used in different discriminative correlation filter methods. 
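To make the closed-form DCF solution above concrete, the following sketch implements the single-channel case ($D=1$) with NumPy. The image size, regularization value and toy Gaussian label are illustrative assumptions for the example, not settings from any cited tracker:

```python
import numpy as np

def gaussian_label(shape, center, sigma=2.0):
    # Desired response y: a Gaussian peak at the target location.
    r = np.arange(shape[0])[:, None]
    c = np.arange(shape[1])[None, :]
    return np.exp(-((r - center[0]) ** 2 + (c - center[1]) ** 2) / (2 * sigma ** 2))

def train_filter(x, y, lam=1e-2):
    # Single-channel closed-form solution:
    # f_hat = x_hat * conj(y_hat) / (x_hat * conj(x_hat) + lam).
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (X * np.conj(Y)) / (X * np.conj(X) + lam)

def detect(f_hat, z):
    # Circular correlation of the learned filter with a new sample z;
    # the target location is the argmax of the real-valued response map.
    return np.real(np.fft.ifft2(np.conj(f_hat) * np.fft.fft2(z)))

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))       # toy single-channel "feature map"
y = gaussian_label((32, 32), (10, 12))  # regression target peaked at (10, 12)
resp = detect(train_filter(x, y), x)
peak = tuple(int(v) for v in np.unravel_index(resp.argmax(), resp.shape))
print(peak)  # (10, 12)
```

Because the correlation is circular, detecting on a circularly shifted copy of the training sample moves the response peak by exactly the same shift, which is the property the DCF framework exploits to densely evaluate all translations.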
In our work we use the Efficient Convolution Operator (ECO)~\cite{danelljan2017eco} method, shown to obtain state-of-the-art results while being computationally efficient. However, its original implementation is based on features extracted from a pre-trained CNN model trained on the ImageNet 2012 classification dataset~\cite{russakovsky2015imagenet}. Even though these features are extracted from a model which is trained for image classification, ECO obtains excellent results for tracking. In our experiments we combine ECO with the end-to-end trained features for TIR tracking. The ECO tracker aims at combining shallow and deep features by learning a multi-channel continuous convolution filter in a joint optimization scheme across all feature channels. Furthermore, it learns a projection matrix to reduce the dimensionality of high-dimensional features. Here we briefly describe the training and inference procedures applied in the ECO tracker. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/pipeline.pdf} \caption{ {Overview of our approach. (a) Image-to-image translation component (proposed in~\cite{isola2017image}) for generating a large labeled synthetic TIR tracking dataset. The blue dashed line represents the baseline RGB training model and the green dashed line represents our proposed synthetic-data training model. After the translation of RGB data to TIR data, we acquire enough suitable data for end-to-end training of networks for TIR tracking. (b) Two-branch architecture for training the network to obtain adaptive features for TIR tracking (proposed in~\cite{valmadre2017end}). The optimal correlation filter is computed in the discriminative correlation filter layer (DCFL) for the image processed in the upper branch. This filter is then applied to the image in the bottom branch. 
}} \label{fig:pipeline} \end{figure*} ECO learns the target model parameters based on a set of training samples {$\{x_j\}_1^M$} and corresponding labels {$\{y_j\}_1^M$}. {The label function $y_j$ consists of the desired target scores at all spatial locations in the corresponding training sample $x_j$. It is defined as a periodically repeated Gaussian function centered at the sample location~\cite{danelljan2016beyond}.} Each training sample contains multiple feature layers $x_j^d\in\mathbb{R}^{N_d\times N_d}$, where $N_d$ is the spatial resolution of layer $d\in\{1,\dots,D\}$. These feature layers correspond to both shallow and deep features of varying resolutions. The tracker predicts the target location using the target score operator, defined as \begin{align} \label{eco_score} S_{f,P}\{x\} = \sum_{d=1}^Df^d*PJ_d\{x^d\} \,. \end{align} Here, $x$ is the input sample and $f$ is the learned filter that predicts the detection score function $S_{f,P}\{x\}$ of the target. The sample $x$ is first interpolated to the continuous domain using the operator $J_d${, by applying a cubic spline kernel in the Fourier domain (see~\cite{danelljan2016beyond} for details)}. The projection matrix $P$ is then applied to reduce the dimensionality of the feature space. The detection score operator is learnt via minimization of a least squares objective, \begin{align} \label{eco_loss} E(f) = &\sum_{j=1}^M\alpha_j\|S_{f,P}\{x_j\}-y_j\|^2 + \sum_{d=1}^D\|wf^d\|^2 + \lambda\|P\|_F^2 \,. \end{align} Here, the projection and filter are regularized by {a constant $\lambda$}. The spatial regularization weight function $w$ is employed to mitigate the effects of periodic repetition \cite{danelljan2015learning}. Each sample $x_j$ is weighted by $\alpha_j$, based on a learning rate parameter $\gamma$. The label functions $\{y_j\}_1^M$ are set to Gaussian functions centered at the target location. 
Using Parseval's formula, an equivalent loss is obtained as \begin{align} \label{eco_loss_fourier} E(f) =& \sum_{j=1}^M\alpha_j\|\widehat{S_{f,P}\{x_j\}}-\hat{y}_j\|^2 \!+ \!\sum_{d=1}^D\|\hat{w}*\hat{f}^d\|^2 + \lambda \|P\|_F^2. \end{align} Here $\hat{\cdot}$ denotes the Fourier coefficients. {We learn the projection matrix $P$ jointly with the filter $f$ in the first frame by applying Gauss-Newton and adopting the Conjugate Gradient method~\cite{nocedal2006numerical} for each iteration. In subsequent frames, the resulting normal equations are efficiently solved using the method of Conjugate Gradients, assuming a fixed $P$.} For more details, we refer to \cite{danelljan2017eco,danelljan2016beyond}. \section{Generating TIR images}\label{sec:gan} {In this section we discuss image-to-image translation methods and compare them for the task of transferring RGB to synthetic TIR data.} \subsection{Image-to-image translation methods} We use two different image-to-image translation methods to transform labeled RGB videos into labeled TIR videos. First, we use pix2pix~\cite{isola2017image}, which requires paired training data. Therefore, we need matching frames in both RGB and TIR, which we can obtain from multispectral video datasets such as KAIST~\cite{hwang2013multispectral}. Second, we use CycleGAN~\cite{zhu2017unpaired}, an extension of pix2pix that can be trained from unpaired data. As a consequence, any videos in the RGB and TIR modalities can be used to train CycleGAN. {Despite the higher availability of unpaired training data, we expect the weaker supervision of CycleGAN to result in synthesized TIR images of lower quality. In this section, we present both translation methods and experimentally confirm this intuition. In later sections, we generate TIR data using only pix2pix given its empirically superior performance.} Both methods are based on Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} conditioned on input images. 
GANs consist of two networks, generator $G$ and discriminator $D$, that compete against each other. The generator tries to generate samples that resemble the original data distribution, whereas the discriminator tries to detect whether samples are real or have been generated by $G$. When the GAN architecture is conditioned on an input image, the task becomes image-to-image translation. In our case, the input image is a color frame from an RGB video and the target is the matching frame in the TIR modality. \subsubsection{Paired - pix2pix} pix2pix~\cite{isola2017image} is an effective, task-agnostic method that can be applied to translate between many domain pairs, including maps to satellite pictures, edge maps to real pictures, or grayscale images to color images. The generator is based on an encoder-decoder architecture with skip connections (U-Net~\cite{ronneberger2015u}). The discriminator is a convolutional PatchGAN~\cite{li2016precomputed}, which classifies each local image patch independently, making it especially suited for modifying textures or styles. Let $x$ be an image from the input domain $X$ and $y$ an image from the target domain $Y$. In pix2pix, both the generator and discriminator are conditioned on the input image $x$. The conditional GAN objective function is defined as \begin{eqnarray} \mathcal{L}_{cGAN}\left(G,D\right) &=& \mathbb{E}_{x,y} [\log D(x,y)] \nonumber \\ &+& \mathbb{E}_{x,z}[ \log\left(1-D\left(x,G\left(x,z\right)\right)\right)], \label{eq:cGAN} \end{eqnarray} where $z$ is a random noise vector used as input for the generator. Additionally, pix2pix also includes an L1 loss to increase the sharpness of the output images \begin{eqnarray} \mathcal{L}_{L1}\left(G\right) = \mathbb{E}_{x,y,z} [\left \| y-G(x,z) \right \|_1]. \label{eq:L1_con} \end{eqnarray} The final objective function is the weighted sum of these two losses. 
Following the original adversarial training~\cite{goodfellow2014generative}, $G$ tries to minimize this final objective while $D$ tries to maximize it: \begin{eqnarray} G^* = \mathop {\arg \min\limits_G } \mathop {\max }\limits_D \mathcal{L}_{cGAN}\left(G,D\right)+\lambda\mathcal{L}_{L1}\left(G\right). \label{eq:con_obj} \end{eqnarray} We translate an RGB video to TIR by applying pix2pix independently to each video frame. The original model of~\cite{isola2017image} achieves mild stochasticity in its outputs by keeping the dropout layers at test time, which are normally used only during training. In our case, this is not only unnecessary but also damaging, as it makes the output video less stable. For this reason, we only use dropout layers during training. \subsubsection{Unpaired - CycleGAN} Paired data might be hard to come by for particular tasks, including RGB to TIR conversion, as the number of paired videos in both modalities is rather limited. Zhu et al.~\cite{zhu2017unpaired} present CycleGAN, a method for learning to translate between image domains when paired examples are not available. The main idea consists of adding a cycle consistency loss, based on the assumption that mapping an image $x\in X$ to domain $Y$ and back to $X$ should leave it unaltered. { For this reason, besides the classic generator $G:X\rightarrow Y$, CycleGAN also learns a generator performing the inverse mapping $F:Y\rightarrow X$. The method is then trained with a weighted combination of an unconditional adversarial loss~\cite{goodfellow2014generative} and the cycle consistency loss in both directions: \begin{eqnarray} \mathcal{L}_{cyc}\left(G,F\right) &=& \mathbb{E}_{x} [\left \| F(G(x))-x \right \|_1] \nonumber \\ &+&\mathbb{E}_{y} [\left \| G(F(y))-y \right \|_1]. \label{eq:Lcyc2} \end{eqnarray} For more details, please see~\cite{zhu2017unpaired}. 
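The cycle consistency loss can be sketched with simple invertible functions standing in for the generator networks; the linear mappings $G$ and $F$ below are purely illustrative stand-ins, not actual CycleGAN generators:

```python
import numpy as np

def l1(a, b):
    # Expected L1 distance, estimated as a mean over a batch.
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, batch_x, batch_y):
    # || F(G(x)) - x ||_1 + || G(F(y)) - y ||_1, averaged over the batch.
    return l1(F(G(batch_x)), batch_x) + l1(G(F(batch_y)), batch_y)

# Toy check: the loss vanishes exactly when F inverts G.
G = lambda x: 2.0 * x + 1.0    # stand-in "generator" X -> Y
F = lambda y: (y - 1.0) / 2.0  # stand-in inverse mapping Y -> X
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(-1.0, 1.0, 5)
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

During training this term is weighted against the adversarial losses; a mapping pair that ignores its input incurs a large cycle loss, which is what pushes CycleGAN towards content-preserving translations.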
As in the pix2pix model, we apply CycleGAN independently per frame, and we remove the dropout layers at test time to generate a more stable video output. } \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/GANresults.pdf} \caption{Results for the two image translation methods considered: pix2pix and CycleGAN. The video frames are taken from the test set of KAIST~\cite{hwang2013multispectral}, and have not been seen during training.} \label{fig:GANresults} \end{figure*} \subsection{Datasets} \setlength{\tabcolsep}{4pt} \begin{table} { \begin{center} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Number of images} \\ \cline{3-4} & & RGB & TIR \\ \hline \multirow{6}{*}{Paired} & KAIST~\cite{hwang2013multispectral} & 50,184 & 50,184 \\ & CVC-14~\cite{gonzalez2016pedestrian} & 8,473 & 8,473 \\ & OSU Color Thermal~\cite{davis2005two} & 8,545 & 8,545 \\ & VAP Trimodal~\cite{palmero2016multi} & 5,924 & 5,924 \\ & Bilodeau~\cite{bilodeau2014thermal} & 7,821 & 7,821 \\ & LITIV2012~\cite{torabi2012iterative} & 6,325 & 6,325\\ \cline{2-4} & Total & 87,088 & 87,088 \\ \hline \multirow{6}{*}{Unpaired} & VOT2016~\cite{kristan2016VOT} & 21,455 & - \\ & VOT2017~\cite{Kristan_2017_ICCV} & 4,049 & - \\ & OTB~\cite{wu2015object} & 58,610 & -\\ & ASL~\cite{portmann2014people} & - & 6,490 \\ & Long-term~\cite{gade2013long} & - & 47,423 \\ & InfAR~\cite{gao2016infar} & - & 46,121 \\ \cline{2-4} & Total & 84,114 & 100,034 \\ \hline \end{tabular} \caption{{Datasets used for training the image-to-image translation models. We test all models using a subset of three videos from the official test set of KAIST~\cite{hwang2013multispectral}.}} \label{table:datasetsGAN} \end{center} } \end{table} { We consider multiple datasets for training our image translation methods, spanning the two presented supervision levels: paired and unpaired.
Table~\ref{table:datasetsGAN} details the number of images of all the considered datasets. Among the paired datasets, the biggest and most relevant is} the KAIST Multispectral Pedestrian Dataset~\cite{hwang2013multispectral}, which contains a significant amount of aligned images in the RGB and TIR modalities, captured from a moving vehicle in different urban environments and under different lighting conditions. We follow the official data split~\cite{hwang2013multispectral} as in~\cite{isola2017image} and use all the frames from the training videos for training. We evaluate both image translation methods using three randomly left-out videos from the test set, {amounting to 5,728 images. Train and test sets have no videos in common. } { Other paired datasets include images of people captured under different conditions: pedestrians during day or night (CVC-14~\cite{gonzalez2016pedestrian}), static cameras at a busy intersection (OSU Color-Thermal Database~\cite{davis2005two}) or in different positions and zooms (LITIV2012 dataset~\cite{torabi2012iterative}), interactions in indoor scenes with controlled lighting settings (VAP Trimodal People Segmentation Dataset~\cite{palmero2016multi}), or moving in different planes (Bilodeau et al.~\cite{bilodeau2014thermal}). This amounts to a total of 87K image pairs. } { We use all paired datasets to train both pix2pix and CycleGAN. Additionally, we collect an RGB-TIR unpaired dataset as extra training data for CycleGAN. As RGB data we include} all the sequences from VOT2016~\cite{kristan2016VOT}, VOT2017~\cite{Kristan_2017_ICCV}, and OTB~\cite{wu2015object}. As TIR data we include the TIR images from ASL~\cite{portmann2014people}, Long-term~\cite{gade2013long}, and InfAR~\cite{gao2016infar}. This amounts to a total of about 230K images, almost $5\times$ more images than the paired training dataset.
\subsection{Implementation details} We train all networks in pix2pix and CycleGAN from scratch, initializing the weights from a Gaussian distribution with zero mean and standard deviation of 0.02. We use the same network architectures as in the original papers~\cite{isola2017image,zhu2017unpaired}. As in~\cite{isola2017image}, we apply random jittering by slightly enlarging the input image and then randomly cropping back to the original size. We train pix2pix for 10 epochs, with batch size 4 and learning rate 0.0002. CycleGAN is trained for 3 epochs, batch size 2 and learning rate 0.0002. Note that both models are trained for an equivalent number of iterations given the size of their training sets. \subsection{TIR image translation quality} In order to test the two image translation methods considered, we select a random subset of the test set of KAIST~\cite{hwang2013multispectral}, amounting to about 10\% of the entire dataset. We translate the RGB videos into TIR using pix2pix or CycleGAN, and then compute the Euclidean distance between the translations and the TIR ground-truth images. Finally, we average the distance over all frames. pix2pix obtains an average distance of 35.3, whereas CycleGAN obtains 69.5. This demonstrates the superiority of pix2pix for this task, showing how a paired training signal is more valuable than the unpaired counterpart, despite the larger training dataset of the latter. Fig.~\ref{fig:GANresults} shows a qualitative comparison of both approaches. We can observe how the translated images using pix2pix are clearly superior to those translated by CycleGAN. Moreover, they look remarkably similar to the ground-truth TIR images, confirming the validity of the proposed data augmentation approach. Therefore, we select pix2pix as our method to generate TIR tracking data from RGB videos. {In addition, we compare the statistics of the image gradients of real TIR data and synthetic TIR data generated by pix2pix.
The histogram of the gradient magnitude for both datasets on the test set of KAIST is provided in Fig.~\ref{fig:gradients}. We have also added the gradient magnitude of the grayscale images from which the synthetic dataset is generated. The results show that the gradient magnitude of the synthetic data closely follows that of the real data. Only small variations can be seen for low magnitude gradients. The similarity of the image statistics of real and synthetic data suggests that trackers trained on the synthetic data could be successful on real TIR data.} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/histgrad.pdf} \caption{{Histogram of the gradient magnitude for real and synthetic TIR data computed on the test set of KAIST~\cite{hwang2013multispectral}. For comparison we have also added the gradient magnitude histogram for grayscale images from which the synthetic dataset has been generated.}} \label{fig:gradients} \end{figure} \section{Experimental Results}\label{sec:exps} \subsection{Datasets} We train several versions of our tracker using both real and generated TIR tracking data, {summarized in Table~\ref{table:datasetsTracker}. } As real TIR data we use BU-TIV~\cite{wu2014thermal}, ASL~\cite{portmann2014people}, and OTCBVS~\cite{davis2005two}. The predominant class in these datasets is human/pedestrian, although BU-TIV~\cite{wu2014thermal} includes some vehicles and ASL~\cite{portmann2014people} also contains animals like cat and horse. We select all those sequences that include annotated bounding boxes around the objects, {leading to a total of 375K bounding boxes from 34K images.} On the other hand, we generate synthetic TIR tracking data using the RGB videos from VOT2016~\cite{kristan2016VOT}, VOT2017~\cite{Kristan_2017_ICCV}, and OTB~\cite{wu2015object}, which are standard tracking benchmarks used by the community. 
In total, we obtain 168 TIR videos with tracking annotations by translating the original RGB frames using pix2pix and transferring the corresponding bounding box annotations. {The total number of bounding boxes is 4.5$\times$ greater than in the real TIR images.} Furthermore, the generated TIR videos contain a wider variety of object classes than the real TIR videos. This increases the generality of the learned deep features. In both cases, we leave out around $10\%$ of the videos during training as a validation set. \label{sec:datasets} \setlength{\tabcolsep}{4pt} \begin{table} { \begin{center} \begin{tabular}{c|c|c|c|c} \hline Type & Dataset & Videos & Images & Bounding-boxes\\ \hline \multirow{4}{*}{Real} & BU-TIV~\cite{wu2014thermal} & 5 & 23,393 & 347,291 \\ & ASL~\cite{portmann2014people} & 13 & 6,490 & 7,773 \\ & OTCBVS~\cite{davis2005two} & 4 & 4,861 & 19,944 \\ \cline{2-5} & Total & 22 & 34,744 & 375,008\\ \hline \multirow{4}{*}{Generated} & VOT2016~\cite{kristan2016VOT} & 60 & 21,455 & 21,455 \\ & VOT2017~\cite{Kristan_2017_ICCV} & 10 & 4,049 & 4,049 \\ & OTB~\cite{wu2015object} & 98 & 58,610 & 58,610 \\ \cline{2-5} & Total & 168 & 84,114 & 84,114\\ \hline \end{tabular} \caption{{Datasets used for training the tracker, using real TIR data or generated TIR data from RGB images.}} \label{table:datasetsTracker} \end{center} } \end{table} We evaluate our TIR tracker on the VOT-TIR2017 dataset~\cite{Kristan_2017_ICCV}, which is identical to the VOT-TIR2016 dataset~\cite{felsberg2016thermal}, as the 2016 edition of this benchmark was far from being saturated. It contains 25 TIR videos of varying image resolution, with an average sequence length of 740 frames, adding up to a total of 13,863 frames. Each sequence has been manually annotated with exactly one bounding box per frame around a particular object instance. There is a wide variety of object classes, including pedestrian, animals such as rhino or bird, and vehicles like quadrocopter or car.
Moreover, the dataset includes extra annotations in the form of attributes, either at frame level (e.g. camera motion, occlusion) or at the sequence level (e.g. blur, background clutter). {This test dataset has no videos in common with the RGB modality of VOT2016-17 used for training.} \subsection{Evaluation measures and protocol} We follow the measures and evaluation protocol proposed by the VOT-TIR2017 benchmark~\cite{felsberg2016thermal}. The two primary measures are accuracy (A) and robustness (R), which have been shown to be highly interpretable and only weakly correlated~\cite{vcehovin2016visual}. Accuracy is computed as the overlap (intersection over union) between the predicted track region and the ground-truth bounding box, averaged over frames. The VOT protocol establishes that when the evaluated tracker fails, i.e. when the overlap is below a given threshold, it is re-initialized in the correct location five frames after the failure. In order to reduce the positive bias introduced by this protocol, the accuracy measure ignores the first ten frames after the re-initialization when computing the average overlap. Robustness measures the number of times the tracker fails for each sequence and then takes the average over all sequences. These two measures are conflated into a third, the Expected Average Overlap (EAO), which is the main measure used to rank the trackers. The EAO estimates the expected average overlap of a tracker for a particular sequence of a fixed, short length. We refer the reader to~\cite{VOT_TPAMI} for more details. Besides the standard VOT metrics, we also report results following the One-Pass Evaluation (OPE) protocol originally proposed in~\cite{wu2015object}. The most standard evaluation metric used with this protocol is success rate. For each frame in the test video, we compute the overlap between the predicted track and the ground-truth bounding box. 
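The overlap and success-rate computations underlying both protocols can be sketched as follows (a minimal illustration with axis-aligned boxes in $(x, y, w, h)$ format; the helper names are ours, not from the VOT toolkit):

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes given as (x, y, w, h).
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = ix * iy
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

def success_rates(pred_boxes, gt_boxes, thresholds):
    # Fraction of frames whose overlap exceeds each threshold (OPE success plot);
    # the area under this curve over thresholds gives the AUC summary measure.
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return [float(np.mean(overlaps > t)) for t in thresholds]
```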
A predicted track is considered successful if its overlap with the ground-truth is above a particular threshold. We obtain a success plot by evaluating the success rate at different overlap thresholds. Conventionally, the Area Under the Curve (AUC) of the success plot is reported as a summary measure. Note how this protocol does not reset the tracker in case of failure. We use the VOT toolkit~\cite{Kristan_2017_ICCV} to compute the measures and plot the results. \subsection{Implementation details} We train CFNet following~\cite{valmadre2017end}. {We perform tests with three different networks as base model: AlexNet~\cite{krizhevsky2012imagenet}, VGG-M~\cite{chatfield2014return}, and ResNet-50~\cite{he2016deep}. } As in~\cite{valmadre2017end}, we reduce the total stride of the networks from 16 to 4 by changing the stride of the first and second pooling layers from 2 to 1 in AlexNet, and that of the second convolutional and pooling layers in VGG-M. This allows us to obtain bigger feature maps, which benefits the correlation filters. For fairness, we apply this modification to all trained models. As training input data for the network, we randomly pick object regions from pairs of images from the same video. Specifically, we crop a region centered on the object of approximately twice the object's size, and resize it to $125\times 125$ pixels. We use Stochastic Gradient Descent (SGD) with momentum of 0.9 and weight decay of 0.0005 to fine-tune the network, which is pre-trained for image classification on ILSVRC12~\cite{russakovsky2015imagenet}. The learning rate is decreased logarithmically at each epoch from $10^{-4}$ to $10^{-5}$. The model is trained for 50 epochs with mini-batches of size 128. For the baseline tracker ECO~\cite{danelljan2017eco} {(Fig.~\ref{fig:pipeline}, blue dashed lines)}, we use the recommended settings (`OTB\_DEEP\_settings') detailed in the code provided by the authors~\cite{ecocode}.
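The logarithmic learning-rate decay used above for fine-tuning CFNet can be sketched as (an illustrative helper, assuming a geometric interpolation between the two endpoints):

```python
import numpy as np

def log_lr_schedule(lr_start=1e-4, lr_end=1e-5, epochs=50):
    # Per-epoch learning rates decreased logarithmically (i.e. geometrically),
    # from lr_start at the first epoch to lr_end at the last one.
    return np.logspace(np.log10(lr_start), np.log10(lr_end), epochs)
```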
ECO is an RGB tracker, so we have adapted the following parameters given the different nature of TIR data. {Following~\cite{nam2016learning,park2018meta}} we use the feature map of the third convolutional layer as the input of the correlation filter {(the convolutional block in the case of ResNet-50). We validate this choice in the following section.} We reduce the learning rate used to update the correlation filter from the 0.009 used for RGB data to 0.003. A smaller learning rate is more suitable for TIR data, as TIR images contain less detailed information than RGB, for example lacking texture, and thus the object appearance remains more stable during tracking. In order to optimally leverage the learned CNN features, we do not add the dimensionality reduction step at the output of each layer as in~\cite{danelljan2017eco}. ECO uses this step to increase the tracker's efficiency, which is not a priority in our work. Upon acceptance we will make the different trained models available for the community. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \begin{tabular}{c|ccc|ccc} \hline \multirow{2}{*}{Tracker} & \multicolumn{3}{c|}{without motion features} & \multicolumn{3}{c}{with motion features} \\ & EAO & A & R & EAO &A &R\\ \hline handcrafted & 0.235 &0.60 &2.74&0.361 &0.62 &1.12 \\ pretrained & {0.307} & {0.62} &{2.00} &{0.381} &{\textbf{0.69}} &{1.06} \\ real & {0.316} &{0.62} &{2.01} &{0.409} &{0.67} &{1.24} \\ generated & {0.321} &{\textbf{0.63}} &{2.00} &{0.419} &{0.65} &{0.83} \\ \hline generated $\rightarrow$ real & {0.331} &{0.61} &{1.76} & {0.429} & {0.63} &{0.82} \\ generated + real &{\textbf{0.347}} &{\textbf{0.63}} &{\textbf{1.68}} &{\textbf{0.436}} &{0.65} &{\textbf{0.80}} \\ \hline \end{tabular} \caption{Comparison of different tracker variants with and without adding motion features. Results are on the VOT-TIR2017 benchmark~\cite{Kristan_2017_ICCV} with ResNet-50~\cite{he2016deep} as base network. Boldface indicates the best results.
In both cases, the best results are achieved when combining both real and generated TIR data. } \label{table:models} \end{center} \end{table} { \subsection{Network layers} \label{sec:layers} Our tracker uses deep features from a particular network layer. Previous works~\cite{nam2016learning,park2018meta} selected mid-level features from the third convolutional layer as optimal for tracking in RGB videos. Here, we validate this choice for TIR data by analyzing the performance of the selected tracker across all layers for the three networks considered. We perform these experiments using only pre-trained features, i.e., we do not fine-tune the networks for tracking. Fig.~\ref{fig:layers} presents the tracking performance measured by EAO on VOT-TIR2017~\cite{Kristan_2017_ICCV} as a function of the network layer. Trackers that use features extracted from the third layer offer the best results, and thus we select these features for the remainder of the paper. } \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/layers.pdf} \caption{{The EAO on VOT-TIR2017~\cite{Kristan_2017_ICCV} when using deep features extracted from different layers.}} \label{fig:layers} \end{figure} { \subsection{Network architectures} In this section, we experiment with all three base networks and different types of training data. All models use ECO~\cite{danelljan2017eco} as base tracker, in some cases with the adaptations detailed in section~\ref{sec:corrfilter}. We consider two baselines, `pretrained' and `real'. The first baseline uses features from the corresponding CNN pre-trained for the image classification task. On the other hand, `real' is also fine-tuned using real TIR tracking datasets (sec.~\ref{sec:datasets}). Our tracker (`generated + real') combines both real TIR data and TIR data synthesized from RGB with pix2pix for the fine-tuning process. Fig.~\ref{fig:topos} presents these results.
For all base networks, fine-tuning helps in learning effective features for tracking.} This shows that the generated data is complementary to the available real data, making the generated data beneficial even when a good amount of real data is available. { Moreover, the gain granted by fine-tuning the network is significantly higher when augmenting the training dataset with our generated TIR data. The performance boost is especially remarkable for higher capacity models such as ResNet-50, since networks with more parameters require more data to train. For all following experiments, we use ResNet-50 as base network for the trackers. } \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/topologies.pdf} \caption{{The EAO on VOT-TIR2017~\cite{Kristan_2017_ICCV} when using deep features extracted from different networks. Our synthetic data can benefit general networks for fine-tuning.}} \label{fig:topos} \end{figure} \subsection{Results on real and generated data} {We now present more detailed results for different configurations of our ECO tracker with ResNet-50.} We include another baseline (`handcrafted') that employs handcrafted features, as is prevalent in TIR tracking~\cite{yu2017dense,zhu2016beyond,7406435}, and thus has not been trained using data. The variant called `generated' is fine-tuned using only our generated TIR data with pix2pix. { Finally, we consider another way of combining real and generated data to train the model, `generated $\rightarrow$ real', which uses a two-step fine-tuning mechanism, first on generated data and then on real data. This is opposed to `generated + real', which fine-tunes using both real and generated data simultaneously without distinction. } Table~\ref{table:models} presents the results for all these models using the metrics EAO, A, and R on the VOT-TIR2017 dataset~\cite{Kristan_2017_ICCV}.
First, we can observe how the use of deep features is fundamental for the success of this tracker, given the low accuracy of the handcrafted model. Simply using pre-trained features already provides a significant improvement in terms of EAO. Fine-tuning this model on real data brings further benefits. Interestingly, fine-tuning only on the generated data results in better performance than fine-tuning on the real data, with EAO going from 0.316 on real data to 0.321 on generated data (Table~\ref{table:models}). This supports our intuition that having great amounts of diverse data is very relevant when learning specialized deep features for TIR tracking. Finally, simultaneously using both real and generated data to fine-tune the network results in our best model. Moreover, training without distinguishing between the two types of data leads to better results, as opposed to a more complex two-stage fine-tuning process. We present results using the OPE evaluation metric in Fig.~\ref{fig:otb}. Also under this metric, handcrafted features show a clearly inferior performance compared to deep features. Simple pre-trained deep features obtain higher success rates, especially for mid-range overlap thresholds. Fine-tuning on real data gives the tracker a small boost, and when fine-tuning using our generated data, the performance is further improved. Finally, the best performance is achieved when fine-tuning using both types of data simultaneously. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/ope_res.pdf} \caption{The success plot of one-pass evaluation (OPE) on the VOT-TIR2017 benchmark~\cite{Kristan_2017_ICCV}. We show the AUC score of each tracker in the legend. The best results are obtained when using both real and generated data.} \label{fig:otb} \end{figure} {Finally, we analyze the performance of our generated + real tracker for different amounts of generated TIR data.
Fig.~\ref{fig:percentage} shows EAO as a function of the percentage of synthetic TIR data in the total training set. Interestingly, increasing the amount of synthetic data monotonically improves the tracker performance. The rightmost point, which corresponds to using all our generated data (90\% of the training set), does not seem to be saturated, and thus additional generated data could bring an even further performance boost. } \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/dataper.pdf} \caption{{Performance of our tracker (generated+real) on VOT-TIR2017~\cite{Kristan_2017_ICCV} for different percentages of synthetic data. The leftmost point indicates using only real data.}} \label{fig:percentage} \end{figure} \subsection{Adding motion features} As detailed in~\cite{7406435}, the use of handcrafted motion features can substantially improve tracking performance for TIR data. Following the implementation of the SRDCFir tracker~\cite{7406435}, we compute motion features by thresholding the absolute pixel-wise difference between the current and the previous frame. We then use this motion mask as an extra feature channel. Table~\ref{table:models} presents the results of our trained models when motion features are used alongside deep features. We can see how motion features provide significant performance improvements to all models. Furthermore, the conclusions drawn in the previous experiment still hold. The models trained with generated data outperform both the pre-trained model and the model trained with real data only. Finally, the model trained with a combination of generated and real data achieves an impressive performance, surpassing other methods. A qualitative comparison of the baseline ECO (pretrained) and ours (generated) is shown in Fig.~\ref{fig:qualResECO}. In challenging cases (e.g.
second row), the improved features learned through our generated TIR data lead to a tracking model that is accurate while remaining robust to occlusion, scale change, and out-of-plane rotation. \subsection{State-of-the-art Comparison} Here, we compare our best model with the three top TIR trackers in the VOT-TIR2017 challenge~\cite{Kristan_2017_ICCV}, i.e. DSLT~\cite{yu2017dense}, EBT~\cite{zhu2016beyond}, and SRDCFir~\cite{7406435}. We also include in our comparison the best CNN-based tracker in VOT-TIR2016, TCNN~\cite{nam2016modeling}. Additionally, we compare with the recently introduced CF-based (CSRDCF)~\cite{lukezic2017discriminative} and spatial CF-based (CREST)~\cite{song2017crest} trackers. These trackers have shown excellent performance on the VOT~\cite{VOT_TPAMI} and OTB~\cite{wu2015object} RGB datasets. Table~\ref{table:sota} shows the comparison of our best model (generated+real), including the motion mask, with the state-of-the-art methods in the literature on the VOT-TIR2017 benchmark~\cite{Kristan_2017_ICCV}. Among the existing methods, SRDCFir and EBT achieve EAO scores of $0.364$ and $0.368$ respectively. An EAO score of $0.287$ is achieved by the TCNN tracker. The recently introduced CREST and CSRDCF trackers achieve EAO scores of $0.215$ and $0.248$ respectively. The previous state-of-the-art on this dataset is the DSLT tracker with an EAO score of $0.401$. Our tracker significantly outperforms DSLT by setting a new state-of-the-art with an EAO score of $0.436$. Our approach also achieves superior performance in terms of accuracy and obtains the best results in terms of robustness. We further analyze the robustness of our tracker and find our approach to {have promising improvements with respect to robustness} in all videos except \textit{trees2}, compared to EBT.
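As a side note, the motion mask introduced in the previous subsection (thresholded absolute frame difference, used as an extra feature channel by our best model in this comparison) can be sketched as follows; the threshold value here is illustrative, not the SRDCFir setting:

```python
import numpy as np

def motion_mask(frame_t, frame_prev, thresh=0.05):
    # Binary motion feature channel: threshold the absolute pixel-wise
    # difference between the current and the previous frame.
    diff = np.abs(frame_t.astype(np.float64) - frame_prev.astype(np.float64))
    return (diff > thresh).astype(np.float64)
```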
\setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \begin{tabular}{c|ccc} \hline Tracker & EAO & A & R \\ \hline CREST & 0.215 &0.56 &4.13 \\ CSRDCF & 0.248 &0.57 &3.49\\ \hline TCNN & 0.287 &0.62 &2.79 \\ SRDCFir & 0.364 &0.63 &1.10 \\ EBT & 0.368 &0.44 &0.82 \\ DSLT & {0.401} &0.60 &0.91 \\ \hline Ours & {\textbf{0.436}} &{\textbf{0.65}} &{\textbf{0.80}}\\\hline \end{tabular} \caption{Comparison with state-of-the-art trackers on VOT-TIR2017~\cite{Kristan_2017_ICCV}. Boldface indicates the best results. The results are reported in terms of expected average overlap (EAO), robustness (failure rate) and accuracy. Our proposed tracker significantly outperforms the state-of-the-art by achieving an EAO score of {$0.436$}. } \label{table:sota} \end{center} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{images/quality8.png} \caption{Qualitative comparison of our approach trained on generated and real data with state-of-the-art trackers, \textcolor{cyan}{CREST}, \textcolor{magenta}{TCNN}, \textcolor{blue}{EBT} and \textcolor{green}{DSLT}, on some challenging sequences, \textit{excavator}, \textit{jacket}, \textit{mixed\_distractors}, {\textit{garden}}, {\textit{quadrocopter2}}, {\textit{boat2}}, {\textit{bird}} and \textit{trees2} in VOT-TIR2017~\cite{Kristan_2017_ICCV}. The yellow dashed bounding box denotes the \textcolor{yellow}{Groundtruth} and the red solid bounding box is \textcolor{red}{Ours}. {The last two rows show failure cases of our tracker}. } \label{fig:qualResSota} \end{figure*} Fig.~\ref{fig:qualResSota} shows a qualitative comparison of our tracker with state-of-the-art methods. Our tracker follows the target object more accurately and is robust to challenging conditions such as scale change and occlusion. Among existing methods, DSLT also provides improved tracking performance but struggles with accurate target localization.
The proposed TIR-specialized deep features learned through abundant generated TIR data enable precise target localization, leading to superior tracking results. {The last two rows of the figure show two example cases in which our tracker fails. In the first case, the object is rather tiny and lies on a cluttered background region, which increases the probability of confusing the tracked object with the background. In the second case, there is a considerable scale change combined with heavy occlusion, leading to a poor estimation of the object extent and the corresponding tracking failure. } \subsection{TIR Data Attributes Analysis} In order to provide a more detailed analysis of the results, we present in Fig.~\ref{fig:attrs} the per-attribute performance comparison of our tracker and several state-of-the-art methods. The attributes are: camera motion, dynamics change, motion change, occlusion, size change, and others. Each attribute plot indicates the expected overlap for every tracker as a function of the sequence length, computed on a particular subset of videos annotated with the corresponding data attribute. For most attributes, including the challenging scenarios of heavy camera motion, motion change, and occlusion, our proposed tracker outperforms state-of-the-art trackers. This consistent improvement on challenging attributes is likely due to specialized discriminative features, learned specifically for TIR tracking. In the case of dynamics change, both TCNN and DSLT provide superior tracking performance. The TCNN tracker~\cite{nam2016modeling} can accurately match object proposals due to a tree structure encompassing multiple CNNs. The DSLT tracker~\cite{yu2017dense} also uses dense proposals and a structural learning classifier. In the case of size change, the EBT tracker~\cite{zhu2016beyond} and DSLT provide superior results. In this attribute, our approach provides the third best results by outperforming trackers such as SRDCFir and TCNN.
Overall, our approach achieves the best results on 4 out of 6 attributes. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/attrs.pdf} \caption{Attribute-based comparison of our tracker with state-of-the-art methods on the VOT-TIR2017 dataset. We show the expected overlap measure for six attributes: camera motion, dynamics change, motion change, occlusion, size change, and others. Our tracker provides consistent improvements in the case of camera motion, motion change, occlusion and others, compared to existing methods. } \label{fig:attrs} \end{figure*} \section{Conclusion}\label{sec:conclude} In this paper, we have proposed a method to generate synthetic TIR data from RGB data. We use recent progress on image-to-image translation models for this purpose. The main advantage of this is that we can generate a large dataset of labeled TIR sequences. This dataset is far larger than the datasets of real labeled sequences currently available for TIR tracking. These larger datasets allow us to perform end-to-end training of TIR features. To the best of our knowledge, we are the first to train end-to-end features for TIR tracking. We show that our features trained on the synthetic data outperform other features for TIR tracking, including features computed by fine-tuning a network on real TIR sequences. In addition, we show that a combination of both real and generated data leads to a further improvement. Once we combine our features with the motion features, we obtain state-of-the-art results on VOT-TIR2017. \section*{Acknowledgements} This work was supported by TIN2016-79717-R, and the CHISTERA project M2CR (PCIN-2015-251) of the Spanish Ministry and the ACCIO agency and CERCA Programme / Generalitat de Catalunya. We also acknowledge the generous GPU support from NVIDIA. \bibliographystyle{IEEEtran}
\section{Spontaneous Magnetization and Spin Correlations in a Central Row} The previous calculation shows that the case with separation of strings $N=1$ gives as much information on the behavior of the system as those with larger separations between the strings. For $N=1$, the row correlation of spins at the center of a strip of width $m=2j$\footnote{To have a row at the center, $m$ needs to be even. Then the model is reflection invariant about this row and translation invariant in the horizontal direction, so that (\ref{Toeplitz}) follows.} is given as the Toeplitz determinant,\footnote{For some early works on the magnetization in layered Ising models see \cite{WZ,KR}.} \begin{eqnarray} \langle \sigma_{0,1}\sigma_{0,r+1}\rangle=\left|\begin{array}{ccccc} a_0&a_{-1}&a_{-2}&\cdots&a_{1-r}\\ a_1&a_0&a_{-1}&\cdots&a_{2-r}\\ a_2&a_1&a_0&\cdots&a_{3-r}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ a_{r-1}&a_{r-2}&a_{r-3}&\cdots&a_{0} \end{array}\right|, \label{Toeplitz} \end{eqnarray} where \begin{eqnarray} a_n=\frac 1{2\pi}\int_{-\pi}^{\pi}{\rm d} \theta\, {\rm e}^{-i n\theta}\,\Phi(\theta), \quad \Phi(\theta)=\sqrt\frac{\overline {A(\theta)}\,\overline {B(\theta)}}{A(\theta)B(\theta)}, \label{Phi}\end{eqnarray} in which ${\bar f}$ denotes the complex conjugate of $f$, and \begin{eqnarray} A(\theta)=\rho_a\prod_{\ell=1}^{j+1}(1-{\hat\gamma}_\ell{\rm e}^{-i\theta} ) \prod_{\ell=j+2}^{2j+1}(1-{\hat\gamma}^{-1}_\ell{\rm e}^{i\theta} ),\cr B(\theta)=\rho_b\prod_{\ell=1}^{j}(1-{\gamma}_\ell{\rm e}^{-i\theta} ) \prod_{\ell=j+1}^{2j+1}(1-{\gamma}^{-1}_\ell{\rm e}^{i\theta} ). \label{AB}\end{eqnarray} To have a central row, we need to have $m$ even, that is $m=2j$. Unlike the row correlation of the Onsager lattice, where the generating function has only two roots $\gamma_1$ and $\hat\gamma_1$, which can be explicitly calculated, the $2j+1$ roots in (\ref{AB}) of these Laurent polynomials in $e^{i\theta}$ can only be calculated numerically. 
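As an aside, the numerical route from a generating function to the correlation (\ref{Toeplitz}) can be sketched as follows. This is an illustrative NumPy sketch only: the Fourier coefficients $a_n$ are obtained by uniform quadrature and assembled into the $r\times r$ Toeplitz determinant, with a placeholder generating function rather than the physical $\Phi(\theta)$ of (\ref{Phi}).

```python
import numpy as np

def fourier_coeff(phi, n, num_points=4096):
    # a_n = (1/2pi) * integral of e^{-in theta} Phi(theta) over [-pi, pi),
    # approximated by uniform quadrature on an equally spaced grid.
    theta = np.linspace(-np.pi, np.pi, num_points, endpoint=False)
    return np.mean(np.exp(-1j * n * theta) * phi(theta))

def toeplitz_det(phi, r):
    # r x r Toeplitz determinant with entries a_{p-q}, as in the
    # row-correlation formula; phi is the generating function Phi(theta).
    a = {n: fourier_coeff(phi, n) for n in range(-(r - 1), r)}
    T = np.array([[a[p - q] for q in range(r)] for p in range(r)])
    return np.linalg.det(T)
```

For the trivial generating function $\Phi(\theta)\equiv 1$ the matrix reduces to the identity and the determinant is 1, which is a convenient sanity check for the quadrature.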
From these calculations, we find that all the roots are real, $A(\theta)$ has $j+1$ roots smaller than 1, and $j$ roots greater than 1 for all temperatures, while $B(\theta)$ has $j+1$ roots smaller than 1 and $j$ roots greater than 1 for $T>T_{\mathrm c}(1,m,n)$, but one of the roots, say $\gamma_{j+1}$, becomes 1 at the critical temperature, and greater than 1 for $T<T_{\mathrm c}(1,m,n)$. In (\ref{AB}), we let $j+2\le\ell\le 2j+1$ be the subscript to denote the $j$ roots which are always greater than 1. Even though these formulae look formidable, it is possible to calculate the spontaneous magnetization using Szeg\H o's theorem. We find that the spontaneous magnetization ${ M}$ at the center of the strip is of the form \begin{eqnarray} {M}=(1-\gamma^{-2}_{j+1})^{1/8}{\cal G}_m \end{eqnarray} where ${\cal G}_m$ is a complicated expression involving the $2(m+1)$ roots of $A(\theta)$ and $B(\theta)$, which will be given in a later paper. In Fig.~6(a), we plot this spontaneous magnetization at the central row of the strip for fixed string length $n=7$, but for strips of different widths $m=4,8,12$. Even though, as $m$ increases, we need to calculate more roots, which requires more digits for accuracy, we find that the magnetization starts from 1 and drops to zero sharply with the 2-d exponent $\beta=1/8$ as $T$ approaches its respective critical temperature $T_{\mathrm c}(1,m,n)$. In Fig.~6(b), the magnetization is plotted for fixed $m$, but for different $n$, demonstrating the same behavior. \begin{figure*}[htb] \centering \includegraphics[width=0.48\hsize]{mag-n=7.pdf}\hspace{10pt} \includegraphics[width=0.48\hsize]{mag-m=12.pdf} \caption{(Color online) (a) The spontaneous magnetization $M$ in the center row of a strip is plotted as a function of temperature, for string length $n=7$ and strip width $m=4,8,12$.
(b) For fixed $m=12$, we plotted the spontaneous magnetization as a function of temperature $T$ for $n=5,10,15$.} \label{fig:6} \end{figure*} We can also calculate the asymptotic behavior of the spin-spin correlation in the center row of a strip for large separation $r$ between the spins and for $T$ near $T_{\mathrm c}(1,m,n)$.\footnote{The separation must be large compared to the correlation length in the scaling region near $T_{\rm c}$.}\ Since $\gamma_{j+1}<1$ for $T>T_{\mathrm c}(1,m,n)$, and $\gamma_{j+1}>1$ for $T<T_{\mathrm c}(1,m,n)$, we find \begin{eqnarray} \langle \sigma_{0,1}\sigma_{0,r+1}\rangle=\frac{\gamma^r_{j+1}{\cal H}_m^+}{ \sqrt r}+\cdots,\hspace{30pt}\quad T>T_{\mathrm c}(1,m,n), \label{C-above}\\ \langle \sigma_{0,1}\sigma_{0,r+1}\rangle={M}^2\bigg[1+\frac{\gamma_{j+1}^{-2r}{\cal H}_m^-}{r^2}+\cdots\bigg],\quad T<T_{\mathrm c}(1,m,n), \label{C-Below}\end{eqnarray} which is identical to the expressions (2.43) on page 243 and (3.23) on page 260 of the book by McCoy and Wu \cite{MWbk}, except that the functions ${\cal H}_m^{\pm}$ are now complicated functions of the $2(m+1)$ roots of $A(\theta)$ and $B(\theta)$. This demonstrates the same exponents $1/2$ and 2 as those of the regular 2-d Ising model. As we have $\gamma_{j+1}<1$ for $T$ greater than the true critical temperature $T_{\mathrm c}(1,m,n)$, we find from (\ref{C-above}) that the inverse correlation length is $\xi^{-1}=\ln \gamma^{-1}_{j+1}$, so that the correlation decays as ${\rm e}^{-r/\xi}$. On the other hand, below the true critical temperature, we have $\gamma_{j+1}>1$. Then from (\ref{C-Below}) the true correlation length is $\xi/2$ with $\xi^{-1}=\ln \gamma_{j+1}$, so that the correlation now decays as ${\rm e}^{-2r/\xi}$ \cite{Kadanoff,Wu}. It is easily seen from (\ref{criticalT1}) that systems with the same ratio $(n-1)/(m+1)$ have the same critical temperature.
Thus by comparing these systems, we may gain insight into the dependence of the correlation length on $m$. In particular, for $n=m+2$ (e.g.\ $m=4$ and $n=6$, or $m=12$ and $n=14$), the ratio is 1 and the critical temperature is $T_{\mathrm c}(1,m,m+2)=T_{\mathrm c}(1,12,14)=1.641017930$. For the deviation from criticality we shall use the customary $t=1-T/T_{\mathrm c}$, or $T=T_{\mathrm c}(1-t)$, so that $t>0$ when $T<T_{\mathrm c}$. In Fig.~7 we plot the inverse correlation lengths $\ln\gamma^{\mp1}_{j+1}$ as functions of $|t|=|1-T/1.641017930|$, for $m=4$ ($n=6$), $m=8$ ($n=10$) and $m=12$ ($n=14$). As $m$ increases, we find that the correlation lengths become larger, which is represented by the lower curves in Fig.~7a.\footnote{At some larger values of $t$ the curves for $T<T_{\mathrm c}$ cross those for $T>T_{\mathrm c}$, since $t=\pm1$ represent $T=0$ (where $\xi=0$) and $T=2T_{\mathrm c}$ (where $\xi$ is still finite).} \begin{figure}[htb] \centering \vspace*{0pt} \includegraphics[width=0.47\hsize]{cor-t.pdf} \includegraphics[width=0.47\hsize]{cor-m=12.pdf} \caption{(Color online) (a) The inverse correlation lengths $1/\xi$ are plotted for $m=4$ ($n=6$), $m=8$ ($n=10$) and $m=12$ ($n=14$). The red points are $1/\xi=\ln \gamma^{-1}_{j+1}$ for $T>1.641017930$, and the blue points are $1/\xi=\ln \gamma_{j+1}$ for $T<1.641017930$. (b) Enlarged figure for $m=12$ ($n=14$), which shows that, as $m$ increases, the regime where $1/\xi\propto |t|$ shrinks.} \label{fig:7} \end{figure} To understand the corrections to scaling near the critical temperature $T_{\mathrm c}(1,m,m+2)=1.641017930$, we expand the inverse correlation lengths for $n=m+2$ in Taylor series, \begin{eqnarray}\fl \frac1{\xi}=\ln \gamma^{\mp1}_{j+1}=\begin{cases} 0.5590194|t|\pm0.1761049|t|^2+1.7413970|t|^3,&\hbox{$m=4$},\\ 0.0877259|t|\pm0.2328992|t|^2+1.8826931|t|^3,&\hbox{$m=8$},\\ 0.0110719|t|\pm0.0563571|t|^2+0.6323108|t|^3, &\hbox{$m=12$}, \end{cases} \end{eqnarray} with sign choices corresponding to $T\gtrless T_{\mathrm c}(1,m,m+2)$.
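Such expansion coefficients can be extracted from numerically computed values of the inverse correlation length by a least-squares fit. As an illustrative check (using the quoted $m=4$, $T>T_{\mathrm c}$ coefficients to generate synthetic input, not the actual root data):

```python
import numpy as np

# Synthetic check, assuming the quoted m = 4, T > T_c expansion:
# 1/xi = c1|t| + c2|t|^2 + c3|t|^3.
c1, c2, c3 = 0.5590194, 0.1761049, 1.7413970
t = np.linspace(1e-4, 0.05, 200)
inv_xi = c1 * t + c2 * t**2 + c3 * t**3

# Fit a cubic constrained through the origin: least squares on [t, t^2, t^3].
A = np.vstack([t, t**2, t**3]).T
coef, *_ = np.linalg.lstsq(A, inv_xi, rcond=None)
print(np.round(coef, 7))
```

Since the synthetic data are exactly cubic, the fit recovers the three coefficients to machine precision; with real data the fit window in $|t|$ must be chosen inside the scaling regime.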
This shows that as $m$ increases with $n=m+2$, the correlation length of the spins on the central row increases. The second and third terms in these expansions are corrections to scaling, whose coefficients become larger than that of the leading term. To illustrate this more clearly, we have enlarged the plot for $m=12$ and $n=14$ in Fig.~7b. We shall next include some mathematical details to show the dependence of the generating function on $m$ and $n$, in order to demonstrate the possibility of calculating the scaling function. \subsection{Limiting cases} In calculating the correlation function, we chose to make the vertical and horizontal couplings different in order to distinguish the vertical and horizontal correlation lengths. We denote the horizontal coupling by $J'$, and $z'=\tanh (J'/k_{\mathrm B}T)$. We also need the variable \begin{equation} z^*=(1-z)/(1+z)={\rm e}^{-2J/k_{\mathrm B}T}, \label{dual}\end{equation} related to the dual variable of the Kramers--Wannier duality transform. The functions in (\ref{Phi}) are given by \begin{eqnarray}\fl A(\theta)\!=\!(\alpha^j+\alpha^{-j})[z'(z^n-1){\rm e}^{-i\theta}+(z^n+1)]+\Omega^{-{\scriptstyle \frac{1}{2}}}(\alpha^j-\alpha^{-j})[(z^n-1){\rm e}^{-i\theta}+z'(z^n+1)],\cr \fl B(\theta)\!=\!(\alpha^j+\alpha^{-j})[(z^n-1){\rm e}^{i\theta}+z'(z^n+1)]+\Omega^{-{\scriptstyle \frac{1}{2}}}(\alpha^j-\alpha^{-j})[z'(z^n-1){\rm e}^{i\theta}+(z^n+1)],\cr \label{ABp}\end{eqnarray} where\footnote{This $\alpha$ is related to the integrand of the free energy of the perfect Ising model, and it is the $\alpha_i $ of \cite{HAY}, where it is expressed in terms of $t\propto (T/T_{\mathrm c}-1)$. However, for the correlation function it has to be written in a different form.} \begin{eqnarray} \alpha^{\pm 1}=G\pm\sqrt{G^2-1},\cr G= [(1+{z'}^2)(1+{z^*}^2)-4z'z^*\cos\theta]\Big/[(1-{z'}^2)(1-{z^*}^2)].
\label{G} \end{eqnarray} It can be easily verified that \begin{eqnarray} {G^2-1}&=&\frac{4(1-z'z^*{\rm e}^{i\theta})(1-z'z^*{\rm e}^{-i\theta})(z'-z^*{\rm e}^{i\theta}) (z'-z^*{\rm e}^{-i\theta})}{(1-{z'}^2)^2(1-{z^*}^2)^2}\label{Gsquared}\\ &=&\frac{4(1-z'z^*{\rm e}^{-i\theta})^2(z'-z^*{\rm e}^{i\theta})^2\Omega}{(1-{z'}^2)^2(1-{z^*}^2)^2}, \label{GOmega}\end{eqnarray} so that \begin{equation} \Omega=\frac{(1-z'z^*{\rm e}^{i\theta})(z'-z^*{\rm e}^{-i\theta})}{(1-z'z^*{\rm e}^{-i\theta})(z'-z^*{\rm e}^{i\theta})},\qquad \Omega^{-1}=\overline \Omega. \label{Omega}\end{equation} In (\ref{G}) and (\ref{Omega}), we used the same form as used by Baxter in his most recent paper \cite{BaxIS}. In the limit $m=2j\to\infty$, we may drop $\alpha^{-j}$ in (\ref{ABp}), and find \begin{equation} \overline{A(\theta)}=\Omega^{{\scriptstyle \frac{1}{2}}}B(\theta),\quad \overline{B(\theta)}=\Omega^{{\scriptstyle \frac{1}{2}}}A(\theta), \end{equation} so that the generating function in (\ref{Phi}) becomes \begin{equation} \Phi(\theta)=\Omega^{{\scriptstyle \frac{1}{2}}},\label{PhiMW}\end{equation} which is identical to the generating function given in (1.3) and (1.4) on page 249 in McCoy and Wu's book \cite{MWbk} for the row correlation, as it should be. In this limit $m\to\infty$, the infinitely wide strip has its horizontal and vertical correlation lengths given by\footnote{We ignore here the anomaly that below $T_{\mathrm c}$ the correlation length should have an extra factor $\frac12$ \cite{MWbk}.} \begin{equation} \fl\frac1{\xi^{\pm}_{\mathrm h}}=\pm\ln \Bigg[{z'}\,\frac{1+z}{1-z}\Bigg]=\pm\ln \Bigg[\frac{z'}{z^*}\Bigg], \quad \frac1{\xi^{\pm}_{\mathrm v}}=\pm\ln \Bigg[{ z}\,\frac{1+z'}{1-z'}\Bigg]=\pm\ln \Bigg[\frac{z}{{z'}^*}\Bigg],\label{corrlengths}\end{equation} with $+$ for $T>T_{\mathrm c}$ and $-$ for $T<T_{\mathrm c}$.
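The factorization of $G^2-1$ and the unimodularity of $\Omega$ are easily checked numerically; a short sketch with sample values of $z'$ and $z^*$ (chosen arbitrarily for illustration):

```python
import numpy as np

zp, zs = 0.3, 0.7                      # sample values of z' and z*
theta = np.linspace(-np.pi, np.pi, 50, endpoint=False)
e = np.exp(1j * theta)                 # e^{i theta}; zs / e is zs * e^{-i theta}

D = (1 - zp**2) * (1 - zs**2)
G = ((1 + zp**2) * (1 + zs**2) - 4 * zp * zs * np.cos(theta)) / D

# Omega as defined from the four linear factors; it is a pure phase.
Omega = ((1 - zp * zs * e) * (zp - zs / e)) / ((1 - zp * zs / e) * (zp - zs * e))

# Factorized form of G^2 - 1 (note the squared denominator).
rhs = 4 * (1 - zp * zs * e) * (1 - zp * zs / e) \
        * (zp - zs * e) * (zp - zs / e) / D**2
assert np.allclose(G**2 - 1, rhs.real) and np.allclose(rhs.imag, 0)
assert np.allclose(np.abs(Omega), 1.0)
```

The right-hand side is manifestly real, being a product of two squared moduli, which is why the assertion on its imaginary part holds.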
In the limit $n\to\infty$, we have $z^n\to0$, and (\ref{ABp}) becomes \begin{eqnarray} A(\theta)=(\alpha^j+\alpha^{-j})(1-z'{\rm e}^{-i\theta})+\Omega^{-{\scriptstyle \frac{1}{2}}}(\alpha^j-\alpha^{-j})(z'-{\rm e}^{-i\theta}),\cr B(\theta)=(\alpha^j+\alpha^{-j})(z'-{\rm e}^{i\theta})+\Omega^{-{\scriptstyle \frac{1}{2}}}(\alpha^j-\alpha^{-j})(1-z'{\rm e}^{i\theta}).\label{ABz}\end{eqnarray} Therefore, \begin{equation} B(\theta)\to-{\rm e}^{i\theta}A(\theta),\qquad\overline{B(\theta)}\to-{\rm e}^{-i\theta}\overline{A(\theta)}, \end{equation} so that \begin{equation} \Phi(\theta)=\sqrt{{\rm e}^{-2i\theta}\frac{\overline{A(\theta)}^2}{A(\theta)^2}}=-{\rm e}^{-i\theta}\frac{\overline{A(\theta)}}{A(\theta)}. \label{Phiz0}\end{equation} The choice of sign is to make $-{\rm e}^{-i\pi}=1$. Because the square root disappears, its correlation function behaves very differently from (\ref{C-above}), decaying exponentially as in the one-dimensional Ising model. The full spin-spin correlation of the single finite-width strip case, resulting from this limit $n\to\infty$, is not known. Obviously, it differs from row to row. In fact, it is known that, except for the center-row case above, it may be given in terms of block-Toeplitz determinants \cite{HAYMcCoy2}. However, taking a second limit $m=2j\to\infty$, we can drop $\alpha^{-j}$ as before, and find from (\ref{ABz}) that \begin{equation} A(\theta)=\alpha^j[(1-z'{\rm e}^{-i\theta})+\Omega^{-{\scriptstyle \frac{1}{2}}}(z'-{\rm e}^{-i\theta})],\quad \overline{A(\theta)}=-\Omega^{{\scriptstyle \frac{1}{2}}}{\rm e}^{i\theta}A(\theta). \end{equation} Consequently, the generating function in (\ref{Phiz0}) becomes (\ref{PhiMW}), reproducing the 2-d behavior again as it should. 
More generally, there is a crossover for finite $m$: if $\alpha$ is expressed in terms of $t\propto T/T_{\mathrm c}-1$ as in \cite{HAY}, then $\alpha^{-j}$ is exponentially small when $m|t|$ is large, and the system behaves two-dimensionally; otherwise it behaves one-dimensionally. This shows that it is possible to study the behavior of the correlation function as a function of the scaling variable $|t|m$. However, to calculate the correlation function, we need to make a Wiener--Hopf splitting of the generating function, which may be very difficult when $\alpha$ is expressed in the scaling form. \section{Open Problems} \subsection{Correlation Function of a Single Strip of Finite Width} The correlation for the central row of a single strip of width $m$ is a Toeplitz determinant whose generating function is given in (\ref{Phiz0}), but the correlations within other rows are different, and may be expressed as block-Toeplitz determinants. How to calculate these block-Toeplitz determinants is a very challenging problem. Furthermore, even for the central row, the generating functions are ratios of two polynomials of degree $m+1$. As $m$ increases, one needs to calculate more and more roots. We have shown that in the limit $m\to\infty$, the generating function becomes the well-known square-root function (\ref{PhiMW}) for the correlation function of Onsager's 2-d Ising model. Therefore, it would be very difficult, but most interesting, to study the scaling behavior of these correlations in the limit $m\to\infty$ and $T\to T_{\mathrm c}$, where $T_{\mathrm c}$ is Onsager's critical temperature given by (\ref{OnsagerTc}). One still expects that for $m|T/T_{\mathrm c}-1|\ll1$, the correlation function behaves like that of a one-dimensional system, but as that of a two-dimensional system in the opposite limit. Expressing the correlation in the scaling regime was already highly nontrivial for the original Ising model \cite{WMTB} with $m=\infty$.
\subsection{Scaling Functions for Strips Connected by Strings} When we consider an infinite system of horizontal strips of width $m$ connected by sequences of strings of finite length $n$ as in Fig.~1, the behavior changes a great deal. For $n\le 4$, we found that the specific heat diverges logarithmically at $T_{\mathrm c}(1,m,n)$ for all values of $m$. However, as $n$ increases, rounded peaks in the specific heat appear above this temperature, signifying the one-dimensional behavior of the strips. The spontaneous magnetization is nonzero for $T<T_{\mathrm c}(1,m,n)$. These results show that the scaling functions for the specific heat and correlations are much more complicated---namely, in addition to the dependence on the scaling variable $m/\xi_h=m|T/T_{\mathrm c}-1|$, a dependence on another scaling variable related to the length $n$ of the strings must be added. For the vertical one-dimensional strings, the critical temperature is at $T=0$, and it is well known that their inverse correlation length is $1/\xi^+_{\mathrm s}=\ln z$. The critical temperature equation in (\ref{criticalT1}) for our rectangular Ising model with holes, with different horizontal and vertical couplings, generalizes to \begin{equation} z^{n+m}\,\Bigg[\frac{1+z'}{1-z'}\Bigg]^{m+1}=1. \end{equation} Taking the logarithm and using (\ref{corrlengths}), this can be rewritten as \begin{equation} \frac{n-1}{\xi^+_{\mathrm s}}-\frac{m+1}{\xi_{\mathrm v}^{-}}=0, \label{xiTc}\end{equation} suggesting that the possible additional scaling variable is $(n-1)/\xi_s^+$. This seems to agree with the observation given in \cite{AMSV}. Another problem concerns the distribution of the roots of $A(\theta)$ and $B(\theta)$. The statement that these two functions have $j+1$ roots smaller than 1 and $j$ roots greater than 1 for $T>T_{\mathrm c}(1,m,n)$ is based on numerical evidence.
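The rewriting of the critical-temperature condition in terms of correlation lengths can be verified numerically: solve the condition for $z'$ at fixed $z$ by bisection, then evaluate both terms of (\ref{xiTc}). A sketch with arbitrary sample values of $z$, $n$ and $m$:

```python
import numpy as np

def solve_zprime(z, n, m):
    """Solve z^(n+m) * ((1+z')/(1-z'))^(m+1) = 1 for z' in (0,1) by bisection.

    The log of the left-hand side is monotone increasing in z', negative as
    z' -> 0 and divergent as z' -> 1, so a root always exists for 0 < z < 1.
    """
    f = lambda w: (n + m) * np.log(z) + (m + 1) * np.log((1 + w) / (1 - w))
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z, n, m = 0.4, 14, 12                  # sample coupling and geometry
zp = solve_zprime(z, n, m)

inv_xi_s = np.log(z)                               # strings: 1/xi_s^+ = ln z
inv_xi_v = -np.log(z * (1 + zp) / (1 - zp))        # strip:   1/xi_v^-
# (n-1)/xi_s^+ - (m+1)/xi_v^- vanishes at the critical point
assert abs((n - 1) * inv_xi_s - (m + 1) * inv_xi_v) < 1e-6
```

Expanding the assertion reproduces the algebra in the text: the combination equals $(n+m)\ln z + (m+1)\ln[(1+z')/(1-z')]$, which is exactly the logarithm of the critical-temperature condition.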
There also remain many other challenging and difficult problems, such as the spin-spin correlation in the other rows and the magnetic susceptibility of the system. \section*{References}
\section{Introduction} Sentiment analysis is a form of text mining that aims to infer the emotional state of a person by extracting emotional expressions from text~\cite{pang2008opinion}. Sentiment analysis models have achieved relatively high accuracy despite the ambiguity of both words and emotion itself. One of the commonly used datasets is the Stanford Sentiment Treebank (SST), which consists of reviews labeled with 5 classes (very negative, negative, neutral, positive, and very positive)~\cite{socher2013recursive}. The current state-of-the-art model on SST, Tree-LSTM with refined word embeddings~\cite{yu2017refining}, achieves 54.0\% accuracy. On the other hand, as far as we know, no research has examined whether sentiment analysis models can predict the emotional state of a person. Starting from the idea that sentiment analysis models should be able to predict not only positive or negative valence but also emotional state, we build a sentiment analysis model and investigate whether it can predict emotional state. \section{Emotion Effects on Memory} Emotion affects the contents of memories both when we store and when we retrieve them. The ``affect as information'' framework argues that in a positive state we tend toward interpretive, relational processing, whereas in a negative state we engage in detailed, stimulus-bound, referential processing~\cite{clore2001affect}. \citeauthor{kensinger2007effects} found that emotion changes the specificity with which an event is remembered. Emotion can also consolidate memory: emotional memories are kept fairly intact and forgotten slowly because they are repeatedly rehearsed, and negative emotion can improve the accuracy of a memory~\cite{kensinger2007negative}. Emotion further drives affect regulation~\cite{fonagy2018affect,raes2003autobiographical}. If an event lasts for a long period, people evaluate it based on their emotion at the peak of arousal and at the end of the event.
Mood-congruent information, which shares the valence of one's present mood, is more readily perceived, memorized, retrieved, and used in decision-making than mood-incongruent information. Given this background, we assume that people retrieve memories of the same event differently depending on their emotional state. \section{Methods} \subsection{Data Collection} We collected two different types of data. The first is experimental data, which include psychological measurements of the participants' emotional states as well as book reports that the participants retrieved and wrote down in those states. The other is movie review data used to train the sentiment analysis model. The book report data are too few to train the sentiment analysis model, so we borrow the concept of transfer learning~\cite{pan2010survey}, which conveys knowledge learned on a related task. \subsubsection{Experimental Data} All 64 participants were university students in South Korea enrolled in the ``Introduction to Cognitive Science'' class at Konkuk University during the 2014 fall semester. The sample was evenly distributed: 39 (61\%) male and 25 (39\%) female, and 31 (48\%) students majored in Liberal Arts while the others (52\%) majored in Science. The mean and standard deviation of age were 22.5 and 2.42 years, respectively. However, only 55 participants completed all the experiments. We measured depression (Center for Epidemiological Studies Depression scale, CES-D)~\cite{radloff1977ces} and present positive and negative affect (Positive and Negative Affect Schedule, PANAS)~\cite{watson1988development}, because depression and emotional valence are well-known factors that affect memory and retrieval styles~\cite{thomas2007depressed}. The CES-D measures the severity of depression during a specific period.
The PANAS measures positive affectivity (interested, excited, strong, enthusiastic, proud, alert, inspired, determined, attentive, and active) and negative affectivity (distressed, upset, guilty, scared, hostile, irritable, ashamed, nervous, jittery, and afraid). Participants first completed the psychological measurements. The results are presented in Table~\ref{tab:1}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Measurement & N & \% & Mean & SD & Range \\ \hline \hline PANAS POS & 55 & 100 & 27.764 & 6.802 & 22-50 \\ PANAS NEG & 55 & 100 & 34.127 & 7.149 & 15-46 \\ \hline PANAS & 55 & 100 & 61.891 & 8.381 & 47-92 \\ \hline \hline Depressed & 34 & 61.8 & 33.853 & 8.610 & 21-50 \\ Non-depressed & 21 & 38.2 & 13.714 & 4.051 & 5-20\\ \hline CES-D & 55 & 100 & 26.164 & 12.202 & 5-50 \\ \hline \end{tabular} \caption{The psychological state of participants.} \label{tab:1} \end{table} Unusually, depressed participants outnumber non-depressed ones. This may be because the measurements were taken after a difficult midterm exam, and because several fatal accidents occurred in South Korea in 2014. After the measurements, we gave the participants a book, \emph{Chronicle of a Death Foretold}~\cite{marquez2014chronicle}. To make them read carefully, we announced that they would take a quiz. Afterwards, we asked the participants to write book reports recounting the story of the book in as much detail as they could remember. As a result, we collected 63 book reports in Korean, averaging 133.24 words with a standard deviation of 52.27. The minimum and maximum numbers of words in the writings are 20 and 278, respectively. \subsubsection{Movie Review Data} We crawled movie review data from Naver, the most popular portal site in Korea. The data consist of titles, scores, and comments on movies. The distribution of scores in the movie review data is presented in Table~\ref{tab:2}.
\begin{table}[ht] \centering \begin{tabular}{|c|c|c||c|c|c|} \hline Score & N & \% & Score & N & \% \\ \hline \hline 1 & 61,307 & 16.76 & 6 & 26 & 7.17 \\ 2 & 8,700 & 2.38 & 7 & 44,736 & 12.22 \\ 3 & 8,674 & 2.37 & 8 & 89,310 & 24.41 \\ 4 & 9,223 & 2.52 & 9 & 97,169 & 26.56 \\ 5 & 50,463 & 5.59 & \sout{10} & \sout{327,544} & \\ \hline \end{tabular} \caption{The distribution of scores of collected movie review data.} \label{tab:2} \end{table} However, we found that the distribution of scores is heavily biased toward 10 points. The reason might be fake reviews posted for advertising, or users leaving perfunctory reviews to earn points. Such fake data could bias our model, so we decided to discard the 10-point reviews, assuming that 8- and 9-point reviews include positive expressions as much as 10-point reviews do. \subsection{Sentiment Analysis Model Implementation} We implement a sentiment analysis model based on a well-known deep neural network, TextCNN~\cite{kim2014convolutional}. The main reasons we choose this architecture are that (i) the state-of-the-art model is hard to use as is, because our data are different and in a different language, and (ii) TextCNN is simple but performs almost as well as the state of the art. We split our data into training, validation, and test sets, adopting early stopping to prevent our model from overfitting. We train the model on the movie review data with the expectation that it will learn to capture emotional expressions in text. After training is completed, we score the book report data, i.e., we evaluate the emotional expressions in the participants' writing. Following the concept of transfer learning, a model trained for a specific task (scoring movie reviews) can be utilized for a similar target task (scoring book reports) by extracting and evaluating expressions in text.
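As an illustration of the architecture, a toy numpy forward pass of a TextCNN-style classifier is sketched below (the vocabulary size, embedding dimension, and filter widths are placeholders, not the hyperparameters used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions chosen for illustration only.
vocab, emb, n_filters, n_classes, seq_len = 100, 16, 8, 9, 20
E = rng.normal(size=(vocab, emb))                  # embedding table
filters = {w: rng.normal(size=(n_filters, w, emb)) for w in (3, 4, 5)}
W_out = rng.normal(size=(3 * n_filters, n_classes))

def textcnn_forward(token_ids):
    """TextCNN-style forward pass (after Kim, 2014): embed -> 1D convolutions
    of several widths -> ReLU and max-over-time pooling -> linear layer ->
    softmax over the 9 score classes."""
    x = E[token_ids]                                   # (seq_len, emb)
    pooled = []
    for w, F in filters.items():
        # valid 1D convolution over time for each filter width w
        windows = np.stack([x[i:i + w] for i in range(len(x) - w + 1)])
        feat = np.einsum('twe,fwe->tf', windows, F)    # (time, n_filters)
        pooled.append(np.maximum(feat, 0).max(axis=0)) # ReLU + max pooling
    h = np.concatenate(pooled)                         # (3 * n_filters,)
    logits = h @ W_out
    p = np.exp(logits - logits.max())
    return p / p.sum()

probs = textcnn_forward(rng.integers(0, vocab, size=seq_len))
```

A trained model would of course learn `E`, `filters`, and `W_out` by gradient descent with cross-entropy loss; the sketch only shows the data flow.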
This transfer-learning technique helps us overcome our data problem: 55 book reports are not enough to train a model, and experimental data are hard to collect at a large scale. \section{Result} We first train the TextCNN model on the collected movie review data and examine its performance. On 9-level polarity, TextCNN achieves 41.32\% accuracy. We can thus confirm that TextCNN performs well on our data, given that our data have 9-level polarity whereas the state-of-the-art model achieves 54.0\% on 5-level polarity. After checking the performance, we score the book report data: 3 reports receive 1 point, 4 receive 2 points, 2 receive 3 points, 3 receive 4 points, 1 receives 5 points, and 29 receive 8 points. Finally, we investigate the relationship between the scores and the psychological measurements. This approach is based on studies showing that the content of retrieved memory depends on psychological state~\cite{bower1981mood,blaney1986affect}. The correlations between evaluation scores and psychological measurements are presented in Table~\ref{tab:3}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|} \hline $\rho$ & \small{PANAS POS} & \small{PANAS NEG} & \small{PANAS} & \small{CES-D} \\ \hline \hline \small{Score} & 0.030 & 0.096 & 0.108 & 0.255 \\ \hline \end{tabular} \caption{Correlations between measurement and model score.} \label{tab:3} \end{table} Our model's scores correlate weakly, and only with the CES-D, in the positive direction. However, this would mean that a person who uses positive words is depressed, which contradicts general knowledge about emotion. Therefore, we conclude that none of the correlations is meaningful. \section{Conclusion} In this study, starting from the idea that sentiment analysis models should be able to predict not only positive or negative valence but also emotional state, we analyze the relationship between scores from a sentiment analysis model and psychological state.
We show that the sentiment analysis model performs well at predicting scores, but the scores have no correlation with participants' self-reported psychological states. The contributions of this work can be listed as follows: (i) we investigate the relationship between a sentiment analysis model and psychological measurements, claiming that sentiment analysis models should be able to explain not only whether the data carry negative or positive meaning but also whether the person is in a negative or positive state; (ii) we suggest a framework as an objective method to evaluate sentiment analysis models. \bibliographystyle{named}
\section*{Results} \subsection*{Standard grid cell attractors are not modular} \begin{figure*} \includegraphics[width=\linewidth]{fig1} \caption{\label{fig:model}The entorhinal grid system as coupled 2D continuous attractor networks (\textbf{Methods}). (\textbf{a}) Each network $z$ corresponds to a region along the dorso-ventral MEC axis and contains a 2D sheet of neurons with positions $(x,y)$. (\textbf{b}) Neurons receive excitatory drive $a(x,y)$ that is greatest at the network center and decays toward the edges. (\textbf{c}) Neurons inhibit neighbors within the same network with a weight $w(x, y; z)$ that peaks at a distance of $l(z)$ neurons, which increases as a function of $z$. Each neuron has its inhibitory outputs shifted slightly in one of four preferred network directions and receives slightly more drive when the animal moves along its preferred spatial direction. (\textbf{d}) Each neuron at position $(x,y)$ in network $z$ excites neurons located within a spread $d$ of $(x,y)$ in network $z-1$.} \end{figure*} \begin{figure*} \includegraphics[width=\linewidth]{fig2} \caption{\label{fig:modules}Coupling can induce modularity with fixed scale ratios and orientation differences. (\textbf{a}) A representative simulation without coupling. Top row: network activities at the end of the simulation. Second row: activity overlays between adjacent networks depicted in the top row. In each panel, the network with smaller (larger) $z$ is depicted in magenta (green), so white indicates activity in both networks. Third row: spatial rate map of a single neuron for each $z$ superimposed on the animal's trajectory. Bottom row: spatial autocorrelations of the rate maps depicted in the third row. White scale bars, 50 neurons. Black scale bars, \SI{50}{\cm}. (\textbf{b}) Same as \textbf{a} but for a representative simulation with coupling. (\textbf{c}--\textbf{e}) Data from 10 replicate uncoupled and coupled simulations. (\textbf{c}) Left: network grid scales $\lambda(z)$. 
For each network, there are 10 closely spaced red circles and 10 closely spaced blue squares corresponding to replicate simulations. Inset: $\lambda(z)$ divided by the inhibition distance $l(z)$. Middle: histogram for $\lambda$ collected across all networks. Right: network grid orientations $\theta$ relative to the network in the same simulation with largest scale. (\textbf{d}) Left: spatial grid scales $\Lambda(z)$. For each $z$, there are up to 30 red circles and 30 blue squares corresponding to 3 neurons recorded during each simulation. Inset: $\Lambda(z)$ divided by the inhibition distance $l(z)$. Middle: histogram for $\Lambda$ collected across all networks. In the coupled model, grid cells are clustered into three modules. Right: spatial grid orientations $\Theta$ relative to the grid cell in the same simulation with largest scale. (\textbf{e}) Spatial scale ratios and orientation differences between adjacent modules for the coupled model. Standard parameter values provided in \textbf{Table~\ref{tab:params}}.} \end{figure*} We assemble a series of networks along the longitudinal MEC axis, numbering them $z = 1, 2, \ldots, 12$ from dorsal to ventral (\textbf{Fig.~\ref{fig:model}a}). Each network contains the standard 2D continuous attractor architecture of the Burak-Fiete model~\cite{Burak:2009fx}. Namely, neurons are arranged in a 2D sheet with positions $(x,y)$, receive broad excitatory drive (Ref.~\onlinecite{Bonnevie:2013eu} and \textbf{Fig.~\ref{fig:model}b}), and inhibit one another at a characteristic separation on the neural sheet (\textbf{Fig.~\ref{fig:model}c}; see \textbf{Methods} for a complete description). This inhibition distance $l$ is constant within each network but increases from one network to the next along the longitudinal axis of the MEC. 
With these features alone, the population activity in each network self-organizes into a triangular grid whose lattice points correspond to peaks in neural activity (first row of \textbf{Fig.~\ref{fig:modules}a}). Importantly, the scale of each network's grid, which we call $\lambda(z)$, is proportional to that network's inhibition distance $l(z)$ (``uncoupled'' simulations in \textbf{Fig.~\ref{fig:modules}c}). Also, network grid orientations $\theta$ show no consistent pattern across scales and among replicate simulations with different random initial firing rates. Following the standard attractor model~\cite{Burak:2009fx}, the inhibitory connections in each network are slightly modulated by the animal's velocity such that the population activity pattern of each network translates proportionally to animal motion at all times (\textbf{Methods}). This modulation allows each network to encode the animal's displacement through a process known as path-integration, and projects the network grid pattern onto spatial rate maps of single neurons. That is, a recording of a single neuron over the course of an animal trajectory would show high activity in spatial locations that form a triangular grid with scale $\Lambda$ (third row of \textbf{Fig.~\ref{fig:modules}a}). Moreover, $\Lambda(z)$ for a neuron from network $z$ is proportional to that network's population grid scale $\lambda(z)$, and thus also proportional to its inhibition distance $l(z)$ (uncoupled simulations in \textbf{Fig.~\ref{fig:modules}d}). To be clear, we call $\Lambda$ the ``spatial scale''; it corresponds to a single neuron's activity over the course of a simulation and has units of physical distance in space. By contrast, $\lambda$, the ``network scale'' described above, corresponds to the population activity at a single time and has units of separation on the neural sheet. 
Similarly, $\Theta(z)$ describes the orientation of the spatial grid of a single neuron in the network $z$; we call $\Theta$ the ``spatial orientation''. Like the network orientations $\theta$ discussed above, spatial orientations of grids show no clustering (uncoupled simulations in \textbf{Fig.~\ref{fig:modules}d}). With an inhibition distance $l(z)$ that increases gradually from one network to the next (\textbf{Fig.~\ref{fig:model}c}), proportional changes in network and spatial scales $\lambda(z)$ and $\Lambda(z)$ lead to a smooth distribution of grid scales (uncoupled simulations in \textbf{Fig.~\ref{fig:modules}c},~\textbf{d}). To reproduce the experimentally observed jumps in grid scale between modules, the inhibition length would also have to undergo discrete, sharp jumps between certain adjacent networks. A further mechanism would be needed to enforce the preferred orientation differences that are observed between modules. In summary, a grid system created by disjoint attractor networks will not self-organize into modules. \subsection*{Coupled attractor networks produce modules} Module self-organization can be achieved with one addition to the established features listed above: we introduce excitatory connections from each neuron to those in the preceding network with approximately corresponding neural sheet positions (\textbf{Fig.~\ref{fig:model}d}; see \textbf{Methods} for a complete description). That is, a neuron in network $z$ (more ventral) with position $(x,y)$ will excite neurons in network $z-1$ (more dorsal) with positions that are within a distance $d$ of position $(x,y)$. In other words, the distance $d$ is the ``spread'' of excitatory connections, and we choose a constant value across all networks comparable to the inhibition distance $l(z)$. Similar results are obtained with dorsal-to-ventral or bidirectional excitatory coupling (below) or with a spread $d(z)$ that increases with the inhibition distance $l(z)$ (\textbf{Supp.\@ Fig.~1}). 
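For intuition, the dynamics can be caricatured in one dimension: rectified rate units with surround inhibition peaking at a distance $l$ on the neural sheet, plus weak point-to-point excitation from the adjacent network. This is a sketch only; the model described in the text is 2D, with velocity-modulated, direction-shifted weights, and the step size `dt` and coupling strength `u` below are arbitrary illustration values:

```python
import numpy as np

def ring_dist(i, j, size):
    """Distance on a 1D ring of `size` neurons (periodic boundary)."""
    d = np.abs(i - j)
    return np.minimum(d, size - d)

def step(rates, l, drive, coupling_in, dt=0.5, u=0.1):
    """One Euler step of a 1D toy attractor network: each unit receives
    uniform excitatory drive, inhibition from neighbours at distance ~l
    within its own network, and excitation `coupling_in` from the adjacent
    (more ventral) network, followed by rectification."""
    n = len(rates)
    idx = np.arange(n)
    d = ring_dist(idx[:, None], idx[None, :], n)
    w = -np.exp(-(d - l) ** 2 / 2.0)        # inhibition peaking at distance l
    inputs = drive + w @ rates + u * coupling_in
    return rates + dt * (np.maximum(inputs, 0) - rates)

rng = np.random.default_rng(1)
r_ventral = rng.random(64)                   # fixed pattern of network z
r_dorsal = rng.random(64)                    # network z-1, driven by z
for _ in range(200):
    r_dorsal = step(r_dorsal, l=6.0, drive=1.0, coupling_in=r_ventral)
```

With periodic inhibition of this kind the activity self-organizes into regularly spaced bumps whose spacing scales with $l$, which is the 1D analogue of the triangular network grids discussed above.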
The self-organization of triangular grids in the neural sheet and the faithful path-integration that projects these grids onto single neuron spatial rate maps persist after introduction of inter-network coupling (\textbf{Fig.~\ref{fig:modules}b}). Network and spatial scales $\lambda(z)$ and $\Lambda(z)$ still increase from network $z = 1$ (dorsal) to network $z = 12$ (ventral). Yet, \textbf{Fig.~\ref{fig:modules}c},~\textbf{d} shows that for the coupled model, these scales exhibit plateaus that are interrupted by large jumps, disrupting their proportionality to inhibition distance $l(z)$, which is kept identical to that of the uncoupled system (\textbf{Fig.~\ref{fig:model}c}). Collecting scales across all networks illustrates that they cluster around certain values in the coupled system while they are smoothly distributed in the uncoupled system. We identify these clusters with modules M1, M2, and M3 of increasing scale. Note that multiple networks at various depths $z$ can belong to the same module. Moreover, coupling causes grid cells that cluster around a certain scale to also cluster around a certain orientation (\textbf{Fig.~\ref{fig:modules}c},~\textbf{d}), as seen in experiment~\cite{Stensola:2012gn}. The uncoupled system does not demonstrate co-modularity of orientation with scale, i.e., two networks with similar grid scales need not have similar orientations unless this is imposed by an external constraint. In summary, excitatory coupling between grid attractor networks dynamically induces discreteness in grid scales that is co-modular with grid orientation, as observed experimentally~\cite{Stensola:2012gn}, and as needed for even coverage of space by the grid map~\cite{Sanzeni:2016fg}. \subsection*{Modular geometry is determined by lattice geometry} \begin{figure} \includegraphics[width=\linewidth]{fig3} \caption{\label{fig:same}Modules produced by commensurate lattices are robust to parameter perturbations. 
Data from 10 replicate simulations in each subfigure. (\textbf{a}) Left: we use a less concave inhibition distance profile $l(z)$ (dark filled circles) compared to \textbf{Fig.~\ref{fig:model}c} (light empty circles). Middle: spatial grid scales exhibit modules when collected in a histogram across networks. Right: modules have the same scale ratios and orientation differences as in \textbf{Fig.~\ref{fig:modules}e}. (\textbf{b}) Same as \textbf{a}, but with a more concave $l(z)$. (\textbf{c}) Simulations with bidirectional point-to-point coupling instead of the unidirectional spread coupling in \textbf{Fig.~\ref{fig:model}d}. Top: schematic of the neuron at position $(x,y)$ in network $z$ exciting only the neuron at $(x,y)$ in networks $z-1$ and $z+1$. Bottom left/right: same as middle/right in \textbf{a}. In \textbf{a}, inhibition distance exponent $l_\textrm{exp} = 0$. In \textbf{b}, $l_\textrm{exp} = -2$. In \textbf{c}, coupling spread $d = 1$ and coupling strength $u_\textrm{mag} = 0.4$ in both directions. Other parameter values are in \textbf{Table~\ref{tab:params}}.} \end{figure} Not only does excitatory coupling produce modules, it can do so with consistent scale ratios and orientation differences. For the coupled system depicted in \textbf{Fig.~\ref{fig:modules}}, scale ratios and orientation differences between pairs of adjacent modules consistently take values $\num{1.74(2)}$ and $\SI{29.5(4)}{\degree}$, respectively (mean $\pm$ s.d.; \textbf{Fig.~\ref{fig:modules}e}). If we perturb the inhibition distance profile $l(z)$ by making it less or more concave, these scale ratios and orientation differences are unchanged (\textbf{Fig.~\ref{fig:same}a},~\textbf{b}). Concavity only affects the number of grid cells in each module, which can be tuned to match experimental observations. The same scale ratios and orientation differences also persist after changes to the directionality and spread of excitatory connections. 
For example, we replace the ventral-to-dorsal connections with bidirectional coupling and decrease the coupling spread $d$ such that a neuron in network $z$ excites only a single neuron in both networks $z-1$ and $z+1$; scale ratios and orientation differences remain at $1.7$ and $\SI{30}{\degree}$, respectively (\textbf{Fig.~\ref{fig:same}c}). Representative network activities and single neuron rate maps for these simulations are provided in \textbf{Supp.\@ Fig.~2}. Data for simulations with only dorsal-to-ventral connections are provided in \textbf{Supp.\@ Fig.~3}; they also exhibit the same scale ratios and orientation differences. We can intuitively understand this precise modularity through the competition between lateral inhibition within networks and longitudinal excitation across networks. In the uncoupled system, grid scales decrease proportionally as the inhibition distance $l(z)$ decreases from $z = 12$ to $z = 1$. However, coupling causes areas of high activity in network $z$ to preferentially excite corresponding areas in network $z-1$, which encourages adjacent networks to share the same grid pattern. Thus, coupling adds rigidity to the system and provides an opposing ``force'' against the changing inhibition distance that attempts to drive changes in grid scale. This rigidity produces the plateaus in network and spatial scales $\lambda(z)$ and $\Lambda(z)$ that delineate modules across multiple networks. At interfaces between modules, coupling can no longer fully oppose the changing inhibition distance, and the grid pattern changes. However, the rigidity fixes a geometric relationship between the grid patterns of the two networks spanning the interface. In the coupled system of \textbf{Fig.~\ref{fig:modules}}, module interfaces occur between networks $z = 4$ and $5$ and between $z = 9$ and $10$. The network population activity overlays of \textbf{Fig.~\ref{fig:modules}b} reveal overlap of many activity peaks at these interfaces. 
However, the more dorsal network (with smaller $z$) at each interface contains additional small peaks between the shared peaks. In this way, adjacent networks still share many corresponding areas of high activity, as favored by coupling, but the grid scale changes, as favored by a changing inhibition distance. Pairs of grids whose lattice points demonstrate regular registry are called \emph{commensurate} lattices~\cite{chaikinlubensky} and have precise scale ratios and orientation differences, here respectively $\sqrt{3} \approx 1.7$ and $\SI{30}{\degree}$, which match the results in \textbf{Figs.~\ref{fig:modules}e} and \textbf{\ref{fig:same}}. In summary, excitatory coupling can compete against a changing inhibition distance to produce a rigid grid system whose ``fractures'' exhibit stereotyped commensurate lattice relationships. These robust geometric relationships lead to discrete modules with fixed scale ratios and orientation differences. In our model, commensurate lattice relationships naturally lead to field-to-field firing rate variability in single neuron spatial rate maps (for example, $z = 8$ in the third row of \textbf{Fig.~\ref{fig:modules}b}), another experimentally observed feature of the grid system~\cite{Ismakov:2017jj,Dunn:2017jk}. At interfaces between two commensurate lattices, only a subset of population activity peaks in the grid of smaller scale overlap with, and thus receive excitation from, those in the grid of larger scale. The network with smaller grid scale will contain activity peaks of different magnitudes; this heterogeneity is then projected onto the spatial rate maps of its neurons. \subsection*{Excitation-inhibition balance sets lattice geometry} \begin{figure*} \includegraphics[width=\linewidth]{fig4} \caption{\label{fig:phase}Diverse lattice relationships emerge over wide ranges in simulation parameters. 
In models with only two networks $z = 1$ and $2$, we vary the coupling strength $u_\textrm{mag}$ and the ratio of inhibition distances $l(2)/l(1)$ for two different coupling spreads $d$. (\textbf{a},~\textbf{b}) Approximate phase diagrams based on 10 replicate simulations for each set of parameters, with the mean of $l(1)$ and $l(2)$ fixed to be 9. The most frequently occurring scale ratio and orientation difference are indicated for each region; coexistence between multiple lattice relationships may exist at drawn boundaries. (\textbf{a}) Phase diagram for small coupling spread $d = 6$. Solid lines separate four regions with different commensurate lattice relationships labeled by scale ratio and orientation difference, and dotted lines mark one region of discommensurate lattice relationships. (\textbf{b}) Phase diagram for large coupling spread $d = 12$. There are five different commensurate regions, a discommensurate region, as well as a region containing incommensurate lattices (gray). (\textbf{c}) Network activity overlays for representative observed (left) and idealized (right) commensurate relationships. Numbers at the top right of each image indicate network scale ratios $\lambda(2)/\lambda(1)$ and orientation differences $\theta(2) - \theta(1)$. Networks $z = 1$ and $2$ in magenta and green, respectively, so white indicates activity in both networks. (\textbf{d}) Expanded region of \textbf{b} displaying discommensurate lattice statistics. For each set of parameters, a representative overlay for the most prevalent discommensurate lattice relationship is shown. The number in the lower right indicates the proportion of replicate simulations with scale ratio within 0.01 and orientation difference within \SI{3}{\degree} of the values shown at top right. In one overlay, discommensurations are outlined by white lines. 
(\textbf{e}) The discommensurate relationships described in \textbf{d} demonstrate positive correlation between scale ratio and the logarithm of orientation difference (Pearson's $\rho = 0.91$). Parameter values provided in \textbf{Supp.\@ Info.}} \end{figure*} Adjusting the balance between excitatory coupling and a changing inhibition distance produces other commensurate lattice relationships, each of which enforces a certain scale ratio and orientation difference. To explore this competition systematically, we use a smaller coupled model with just two networks, $z = 1$ and $2$, and vary three parameters: the coupling spread $d$, the coupling strength $u_\textrm{mag}$, and the ratio of inhibition distances between the two networks $l(2)/l(1)$ (\textbf{Supp.\@ Info.}). For each set of parameters, we measure network scale ratios and orientation differences produced by multiple replicate simulations (\textbf{Supp.\@ Fig.~4}). We find that as the excitation-inhibition balance is varied by changing $u_\textrm{mag}$ and $l(2)/l(1)$, a number of discretely different relationships appear, which can be summarized in ``phase diagrams'' (\textbf{Fig.~\ref{fig:phase}a},~\textbf{b}). In many regions of the phase diagrams, these lattice relationships are commensurate, each with a characteristic scale ratio and orientation difference (\textbf{Fig.~\ref{fig:phase}c}). When parameters are chosen near a boundary between two regions, replicate simulations may adopt either lattice relationship or occasionally be trapped in other metastable relationships due to variations in random initial conditions (\textbf{Supp.\@ Fig.~4}). At larger $u_\textrm{mag}$ in both phase diagrams, there are fewer regions as $l(2)/l(1)$ varies because a higher excitatory coupling strength provides more rigidity against gradients in inhibition distance (\textbf{Fig.~\ref{fig:phase}a},~\textbf{b}). 
However, a larger coupling spread $d$ would cause network $z = 2$ to excite a broader set of neurons in network $z = 1$, softening the rigidity imposed by coupling and producing a wider variety of lattices in \textbf{Fig.~\ref{fig:phase}b} than in \textbf{Fig.~\ref{fig:phase}a}. Also in \textbf{Fig.~\ref{fig:phase}b}, when excitation is weak and approaching the uncoupled limit, there is a noticeable region dominated by \emph{incommensurate} lattices, in which the two grids lack consistent registry or relative orientation, and grid scale is largely determined by inhibition distance (\textbf{Supp.\@ Fig.~4}). \textbf{Figure~\ref{fig:phase}b} also contains a larger region of \emph{discommensurate} lattices (although strictly speaking, in condensed matter physics, they would be termed commensurate lattices with discommensurations~\cite{chaikinlubensky}). Discommensurate networks have closely overlapping activities in certain areas that are separated by a mesh of regions lacking overlap called discommensurations (\textbf{Fig.~\ref{fig:phase}d}). They exhibit ranges of scale ratios 1.1--1.4 and orientation differences \SI{0}{\degree}--\SI{10}{\degree} that ultimately arise from a single source: the density of discommensurations, whose properties can also be explained through excitation-inhibition competition. Stronger coupling drives more activity overlap, which favors sparser discommensurations and lowers the scale ratio and orientation difference. However, a larger inhibition distance ratio drives the two networks to differ more in grid scale, which favors denser discommensurations. To better accommodate the discommensurations, grids rotate slightly, as observed previously in a crystal system~\cite{Wilson:1990cj}. \textbf{Figure~\ref{fig:phase}e} confirms that scale ratios and orientation differences vary together as the discommensuration density changes.

Thus, by changing the balance between excitation and inhibition, a two-network model yields geometric lattice relationships with various scale ratios and corresponding orientation differences. All of the commensurate relationships (\textbf{Fig.~\ref{fig:phase}c}) and almost the entire range of discommensurate relationships (\textbf{Fig.~\ref{fig:phase}d}) have scale ratios that fall in the range of experimental measurements, which is roughly 1.2--2.0~\cite{Stensola:2012gn,Barry:2007gv,Krupic:2015gj}. \subsection*{Discommensurate lattices produce distinct modular geometries but with more variation} \begin{figure*} \includegraphics[width=\linewidth]{fig5} \caption{\label{fig:dis}Discommensurate lattice relationships can produce realistic modules. (\textbf{a}) We use a shallower inhibition distance profile $l(z)$ (dark filled circles) compared to \textbf{Fig.~\ref{fig:model}c} (light empty circles). (\textbf{b}) Representative activity overlays between adjacent networks $z$ in magenta and green, so white indicates activity in both networks. Scale bar, 50 neurons. (\textbf{c--e}) Data from 10 replicate simulations. (\textbf{c}) Left: spatial grid scales $\Lambda(z)$. For each network, there are up to 30 red circles corresponding to 3 neurons recorded during each simulation. Middle: histogram for $\Lambda$ collected across all networks. Right: spatial orientations $\Theta$ relative to the grid cell in the same simulation with largest scale. (\textbf{d}) Clustering of spatial scales and orientations for 3 representative simulations. Due to 6-fold lattice symmetry, orientation is a periodic variable modulo $\SI{60}{\degree}$. Different colors indicate separate modules. (\textbf{e}) Spatial scale ratios and orientation differences between adjacent modules. (\textbf{f}) Representative activity overlays demonstrating defects with low activity overlap. Maximum inhibition distance $l_\textrm{max} = 10$, coupling spread $d = 12$. 
We use larger network size $n \times n = 230 \times 230$ to allow for discommensurate relationships whose periodicities span longer distances on the neural sheets. Other parameter values are in \textbf{Table~\ref{tab:params}}.} \end{figure*} As mentioned above, discommensurate lattices have a range of allowed geometries (\textbf{Fig.~\ref{fig:phase}d},~\textbf{e}), but they still produce modules in a full 12-network grid system with a preferred scale ratio and orientation difference. However, these values do not cluster as strongly as they do for a commensurate relationship, which is geometrically precise. The phase diagrams of \textbf{Fig.~\ref{fig:phase}} provide guidance for modifying a 12-network system that exhibits a $[\sqrt{3}, \SI{30}{\degree}]$ relationship to produce discommensurate relationships instead. We make the inhibition distance profile $l(z)$ shallower (\textbf{Fig.~\ref{fig:dis}a}) and increase the coupling spread $d$ by 50\%. Network activity overlays of these new simulations reveal grids obeying discommensurate relationships (\textbf{Fig.~\ref{fig:dis}b}), which are projected onto single neuron spatial rate maps through faithful path-integration (\textbf{Supp.\@ Fig.~5}). Across replicate simulations with identical parameter values but different random initial firing rates, the discommensurate system demonstrates greater variation in scale and orientation (\textbf{Fig.~\ref{fig:dis}c}) than the commensurate systems of \textbf{Figs.~\ref{fig:modules}} and \textbf{\ref{fig:same}}. Nevertheless, analysis of each replicate simulation reveals clustering with well-defined modules (\textbf{Fig.~\ref{fig:dis}d} and \textbf{Supp.\@ Fig.~5}). These modules have scale ratio $\num{1.39(10)}$ and orientation difference $\SI{6.7(35)}{\degree}$ (mean $\pm$ s.d.; \textbf{Fig.~\ref{fig:dis}e}). The preferred scale ratio agrees well with the mean value observed experimentally in~\cite{Stensola:2012gn}. 
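In clustering analyses like these, the 6-fold symmetry of the triangular lattice makes grid orientation a periodic variable modulo \SI{60}{\degree}, so pairwise orientation differences must be computed on a circle. A minimal helper (an illustration, not the paper's analysis code):

```python
def orientation_difference(theta1, theta2, period=60.0):
    """Smallest angular difference between two grid orientations,
    treating orientation as periodic modulo 60 degrees (6-fold symmetry)."""
    d = abs(theta1 - theta2) % period
    return min(d, period - d)

# Orientations of 58 and 3 degrees are only 5 degrees apart on the circle.
print(orientation_difference(58.0, 3.0))  # → 5.0
```

Any clustering of spatial orientations $\Theta$, such as that shown in \textbf{Fig.~\ref{fig:dis}d}, must use a circular distance of this kind rather than a plain difference.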
Conceptually, we can interpret the greater spread of scales and orientations in terms of coupling rigidity. Excitatory coupling, especially when the spread is larger, provides enough rigidity in the discommensurate system to cluster scale ratios and orientation differences but not enough to prevent variations in these values. The degree of variability observed in \textbf{Fig.~\ref{fig:dis}c},~\textbf{d} appears consistent with experimental measurements, which also demonstrate spread~\cite{Stensola:2012gn,Barry:2007gv}. A few module pairs in \textbf{Fig.~\ref{fig:dis}e} exhibit a large orientation difference ${>}\SI{10}{\degree}$. This is not expected from a discommensurate relationship, and indeed, inspecting the network activities reveals adjacent networks trapped in a relationship with low activity overlap and large orientation difference (\textbf{Fig.~\ref{fig:dis}f}). In the context of a grid system that otherwise obeys commensurate or discommensurate geometries containing more overlap, we call this less common relationship a ``defect.'' We distinguish between these relationships and the incommensurate lattices discussed above, which also have low activity overlap. Defects arise when the excitatory coupling is strong, and incommensurate lattices arise when this coupling is weak. Also, defects have smaller scale ratios ${<}1.1$ and larger orientation differences ${>}\SI{10}{\degree}$, whereas incommensurate lattices have larger scale ratios ${>}1.3$ and any orientation difference (\textbf{Fig.~\ref{fig:phase}b} and \textbf{Supp.\@ Fig.~4}). Thus, networks governed by discommensurate relationships also cluster into modules with a preferred scale ratio and orientation difference within the experimental range~\cite{Stensola:2012gn,Krupic:2015gj}. Due to lower coupling rigidity compared to commensurate grid systems, they exhibit increased variability and occasional defects across replicate simulations. 
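The exact $[\sqrt{3}, \SI{30}{\degree}]$ commensurate relationship invoked above can be verified with a few lines of linear algebra: scaling a triangular lattice by $\sqrt{3}$ and rotating it by \SI{30}{\degree} maps every one of its points onto a point of the original lattice. The basis vectors below are the standard unit-scale triangular-lattice choice, used only for illustration.

```python
import numpy as np

# Columns are the triangular-lattice basis vectors a1 = (1, 0)
# and a2 = (1/2, sqrt(3)/2), at unit scale.
A = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])

# Candidate partner grid: scale by sqrt(3) and rotate by 30 degrees.
s, phi = np.sqrt(3), np.deg2rad(30)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
B = s * R @ A  # columns are the partner basis vectors b1, b2

# If A^{-1} B is an integer matrix, every point of the larger grid
# lies on the smaller grid, i.e., the lattices are commensurate.
coeffs = np.linalg.solve(A, B)
print(np.round(coeffs).astype(int))  # [[ 1 -1]
                                     #  [ 1  2]]
```

The determinant of this integer matrix is 3, so the larger grid is an index-3 sublattice of the smaller one, which fixes the scale ratio at exactly $\sqrt{3}$ and the orientation difference at \SI{30}{\degree}.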
\subsection*{A diversity of lattice geometries maintains constant-on-average scale ratios} \begin{figure} \includegraphics[width=\linewidth]{fig6} \caption{\label{fig:range}Simulations spanning different parameters contain diversity in lattice relationships, but average scale ratios are still constant between module pairs. Data from 5 replicate simulations for each set of parameters. (\textbf{a}) Clustering of spatial scales and orientations for one representative simulation (left) and lattice relationship distribution across all pairs of adjacent modules (right) for each set of parameters. (\textbf{b}) Spatial scale ratios and orientation differences between adjacent modules with respective histograms to the right and above. Scale ratios and orientation differences exhibit positive rank correlation (Spearman's $\rho = 0.44$, $p = 0.001$). (\textbf{c}) Spatial scale ratios. Means indicated by lines. Medians compared through the Mann-Whitney $U$ test with reported $p$-value. (\textbf{d}) Spatial scale differences normalized by the scale of the first module (M1) in each simulation. Same interpretation of lines and $p$-value as in \textbf{c}. The $u_\textrm{mag} = 2.6$ and $l_\textrm{max} = 10$ data are taken from simulations in \textbf{Fig.~\ref{fig:dis}}. Some simulations produced only two modules M1 and M2; one simulation produced four modules, and M4 was excluded from further analysis (\textbf{Supp.\@ Fig.~6}). Coupling spread $d = 12$ and network size $n \times n = 230 \times 230$. Other parameter values are in \textbf{Table~\ref{tab:params}}.} \end{figure} So far, each set of 12-network simulations contained replicates with identical parameter values and exhibited a single dominant lattice relationship. We now present results with different parameter values to imitate biological network variability across animals. 
This procedure leads to modules with different commensurate and discommensurate relationships (\textbf{Fig.~\ref{fig:range}a} and \textbf{Supp.\@ Fig.~6}). There is no longer a single preferred scale ratio or orientation difference (\textbf{Fig.~\ref{fig:range}b}), but patterns emerge due to the predominance of discommensurate and commensurate relationships. Recall from \textbf{Fig.~\ref{fig:dis}e} that discommensurate module pairs exhibit scale ratios ${\approx}1.4$ and orientation differences ${\approx}\SI{7}{\degree}$. Combined with $[\sqrt{3} \approx 1.7, \SI{30}{\degree}]$ module pairs, we find a bimodal distribution of orientation differences around $\SI{7}{\degree}$ and $\SI{30}{\degree}$, consistent with experimental data~\cite{Krupic:2015gj}, and a positive correlation between scale ratio and orientation difference. Modules with low scale ratio but high orientation difference decrease this correlation; they arise from defects (\textbf{Fig.~\ref{fig:dis}f}). Scale ratios across the network variations span a range of values, but their averages are constant across module pairs. That is, the median scale ratio does not change between the pair of modules with smaller scales and the larger pair (\textbf{Fig.~\ref{fig:range}c}). Similarly, mean values are respectively \num{1.52(5)} and \num{1.53(5)} (mean $\pm$ s.e.m.) for module pairs M2 \& M1 and M3 \& M2. Combining data from both module pairs gives scale ratio \num{1.52(3)} (mean $\pm$ s.e.m.), which agrees well with the mean value of 1.56 from Ref.~\onlinecite{Krupic:2015gj}. Reference~\onlinecite{Stensola:2012gn} reports a slightly smaller mean value of \num{1.42(17)} (mean $\pm$ s.d.; re-analyzed by Ref.~\onlinecite{Wei:2015hl}), but its broad distribution of scale ratios overlaps considerably with ours. Moreover, we find that the normalized scale \emph{difference} does change its median across module pairs (\textbf{Fig.~\ref{fig:range}d}).
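The contrast between constant-on-average scale ratios and changing scale differences is simply the contrast between a geometric and an arithmetic progression; a toy check with hypothetical module scales (values chosen only for illustration):

```python
import numpy as np

# Hypothetical module scales following a constant ratio (geometric series);
# the base scale of 30 cm and ratio of 1.5 are illustrative values only.
ratio = 1.5
scales = 30.0 * ratio ** np.arange(3)   # e.g. scales of M1, M2, M3 in cm

pair_ratios = scales[1:] / scales[:-1]  # constant across module pairs
pair_diffs = np.diff(scales)            # grow along the hierarchy
print(pair_ratios)  # [1.5 1.5]
print(pair_diffs)   # [15.  22.5]
```

A grid system with constant scale ratios therefore cannot have constant scale differences, which is exactly the pattern seen in \textbf{Fig.~\ref{fig:range}c},~\textbf{d}.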
This result that scale ratios are constant on average but scale differences are not matches experiment~\cite{Stensola:2012gn}. Thus, although our model can produce modules with fixed scale ratios, allowing for a range of network parameters also produces modules with a range of scale ratios. Nevertheless, the scale ratio averaged over these parameters is still constant across module pairs, a key feature of the grid system that holds even if scales are not governed by a universal ratio~\cite{Stensola:2012gn}. \subsection*{Testing for coupling with a mock lesion experiment} \begin{figure*} \includegraphics[width=\linewidth]{fig7} \caption{\label{fig:lesion}Lesioning a network changes grid scales and orientations of more dorsal networks. (\textbf{a}) Lesion protocol. (\textbf{b}) A representative simulation before the lesion. Top row: network activities at the end of the pre-lesion simulation. Second row: activity overlays between adjacent networks depicted in the top row. In each panel, the network with smaller (larger) $z$ is depicted in magenta (green), so white indicates activity in both networks. Third row: spatial rate map of a single neuron for each $z$ superimposed on the animal's trajectory. White scale bars, 50 neurons. Black scale bars, \SI{50}{\cm}. (\textbf{c}) Same as \textbf{b} but after the lesion. Spatial rate maps are recorded from the same neurons as in \textbf{b}. (\textbf{d},~\textbf{e}) Data from 10 replicate simulations. (\textbf{d}) Left: spatial grid scales $\Lambda(z)$ before and after the lesion. Middle: histogram for $\Lambda$ collected across all networks. Right: spatial orientations $\Theta$ relative to the grid cell in the same simulation with largest scale. (\textbf{e}) Spatial scale ratios and orientation differences between adjacent modules. Standard parameter values provided in \textbf{Table~\ref{tab:params}}.} \end{figure*} Excitatory coupling locks networks into scales and orientations imposed by more ventral networks. 
Disrupting the coupling frees networks from this rigidity, which can change scales and orientations far from the disruption. We demonstrate this effect by inactivating one network $z = 7$ midway through the simulation (\textbf{Fig.~\ref{fig:lesion}a}). This corresponds experimentally to disrupting all excitatory connections at one location along the dorsoventral MEC axis. After the lesion, grid cells ventral to the lesion location ($z \geq 8$) are unaffected, but those dorsal to the lesion location ($z \leq 6$) change scale and orientation and form a single module (\textbf{Fig.~\ref{fig:lesion}b}--\textbf{d}). Network $z = 6$ is no longer constrained by larger grids of more ventral networks, so its scale decreases. The coupling that remains from $z = 6$ to $1$ then rigidly propagates the new grid down to network $z = 1$. This post-lesion module M1 has larger scale and \SI{30}{\degree} orientation difference compared to the pre-lesion M1; these changes also appear as corresponding changes in the scale ratio and orientation difference between modules M2 and M1 (\textbf{Fig.~\ref{fig:lesion}e}). Immediate changes in grid scale and/or orientation observed at one location along the longitudinal MEC axis due to a lesion at another location would strongly support the presence of the excitatory coupling predicted by our model. Moreover, the anatomical distribution of the changes would indicate the directionality of coupling; those in grid cells dorsal to the lesion would indicate ventral-to-dorsal coupling and those ventral to the lesion would indicate dorsal-to-ventral coupling. \section*{Discussion} We propose that the hierarchy of grid modules in the MEC is self-organized by competition in attractor networks between excitation along the longitudinal MEC axis and lateral inhibition. 
We showed that such an architecture, with an inhibition length scale that increases smoothly along the MEC axis, reproduces a central experimental finding: grid cells form modules with scales clustered around discrete values~\cite{Stensola:2012gn,Barry:2007gv,Krupic:2015gj}. The distribution of scales across modules in our model quantitatively matches experiments. Different groups have reported mean scale ratios of 1.64 (6 module pairs), 1.42 (24 module pairs), and 1.56 (11 module pairs)~\cite{Barry:2007gv,Stensola:2012gn,Krupic:2015gj}. These data could be interpreted as an indication that the grid system has a preferred scale ratio roughly in the range of 1.4--1.7. As we showed, our model naturally produces a hierarchy of modules with scale ratios in this range; its network parameters lead to both commensurate and discommensurate grids (\textbf{Fig.~\ref{fig:phase}}). On the other hand, the data on scale ratios between individual pairs of modules actually span a range of values in the different experiments: 1.6--1.9, 1.1--1.8, and 1.2--2.0~\cite{Barry:2007gv,Stensola:2012gn,Krupic:2015gj}. This suggests that the underlying mechanism that produces grid modules must be capable of producing different scale ratios as its parameters vary. This is indeed the case for our model, in which variation of network parameters produces a realistic range of scale ratios (\textbf{Fig.~\ref{fig:range}}). Despite variability across individual scale ratios, experiments strikingly reveal that the average scale ratio is the same from the smallest pair of modules to the largest pair, whereas the average scale \emph{difference} changes across the hierarchy~\cite{Stensola:2012gn}. Our model robustly reproduces this observation (\textbf{Fig.~\ref{fig:range}c},~\textbf{d}) because its fundamental mechanism of geometric coordination between grids enforces constant-on-average scale ratios even with variation in parameters among individual networks.
Our model requires that grid orientation be co-modular with scale, as observed in experiment~\cite{Stensola:2012gn}. Studies characterizing the statistics of orientation differences between modules are limited, but values seem to span the entire range \SI{0}{\degree}--\SI{30}{\degree}, with some preference for values at the low and high ends of this range~\cite{Krupic:2015gj}. Our model can capture the entire range of orientation differences with discommensurate relationships favoring small differences and commensurate relationships favoring large differences (\textbf{Fig.~\ref{fig:phase}}). Overall, our model predicts a positive correlation between scale ratio and orientation difference (\textbf{Figs.~\ref{fig:phase}e} and \textbf{\ref{fig:range}b}), which can be tested experimentally. Existing datasets~\cite{Stensola:2012gn,Krupic:2015gj} have a confound---animals are tested in square and rectangular enclosures which have distinguishable orientations marked by the corners. Grid orientations can anchor to such features~\cite{Stensola:2015cj}, either through the integration of visual and external cues~\cite{raudies2015differences,savelli2017framing}, or through interaction with boundaries~\cite{Bush:2014iq,krupic2016framing,giocomo2016environmental,Evans:2016cf,Hardcastle:2017ce,Keinath:2018el,Ocko:2018ed}. Experiments in circular or other non-rectangular environments may help disambiguate the effects of such anchoring. Our model also predicts that orientation differences between modules will be preserved between environments with different geometries since the differences are internally generated by the dynamics of the network. This effect has been observed~\cite{Krupic:2015gj}. Our model produces consistent differences in firing rate from one grid field to another for some grid cells. 
This variability arises at module interfaces from the selective excitation of some network activity peaks in the smaller-scale grid by the overlapping activity peaks of the larger-scale grid. Such an explanation for firing rate variability is suggested by Ref.~\onlinecite{Ismakov:2017jj} and would be further supported by observing spatial periodicity in the variability corresponding to the scale of the larger grids. An alternative model, in which field-to-field firing rate variability arises from place cell inputs~\cite{Dunn:2017jk}, would not lead to such periodicity. Our model requires excitatory coupling between grid cells at different locations along the longitudinal MEC axis, either through direct excitation or disinhibition~\cite{Fuchs:2016et}. As a result, it predicts that destruction of grid cells, or inactivation of excitatory coupling~\cite{Zutshi:2018ku}, at a given location along the axis will change grid scales and/or orientations at other locations (\textbf{Fig.~\ref{fig:lesion}}). The presence of noise correlations across modules, as previously investigated but not fully characterized~\cite{mathis2013multiscale,Tocker:2015ff}, would suggest connections between modules. Such correlations, and perhaps even lattice relationships, could be observed via calcium imaging of the MEC~\cite{Heys:2014cv,Gu:2018dm}. A direct test for coupling would involve patch clamp experiments akin to those used to identify local inhibition and excitation and interhemispheric excitation between principal cells in superficial MEC layers~\cite{Couey:2013fi,Fuchs:2016et,Winterer:2017fl}. Since spatial grid scales are both proportional to inhibition length scale $l$ and inversely proportional to velocity gain $\alpha$ (Ref.~\onlinecite{Burak:2009fx} and \textbf{Methods}), we also simulated excitatorily coupled networks with a depth-dependent velocity gain $\alpha(z)$ and a fixed inhibition distance $l$ (\textbf{Supp.\@ Info.}). 
In contrast to simulations in one dimension~\cite{widloskifiete}, while we observed module self-organization, the system gave inconsistent results among replicate simulations and lacked fixed scale ratios (\textbf{Supp.\@ Figs.~7} and \textbf{8} and \textbf{Supp.\@ Video}). Moreover, recent calcium imaging experiments suggest that activity on the MEC is arranged in a deformed triangular lattice~\cite{Gu:2018dm}, as predicted by the continuous attractor model~\cite{Burak:2009fx}, and that regions with activity separated by larger anatomic distances contain grid cells of larger spatial scale. These observations support a changing inhibition length scale over a changing velocity gain as a mechanism for producing different grid scales, under the assumption that anatomic and network distances correspond to each other. Our results differ from previous work on mechanisms for forming grid modules. Grossberg and Pilly hypothesize that grid cells arise from stripe cells in parasubiculum, and that discreteness in the spatial period of stripe cells leads to modularity of grid cells~\cite{Grossberg:2012ih}. However, stripe cells have only been observed once~\cite{Krupic:2012id,Navratilova:2016iq}, and the origin of discrete periods with constant-on-average ratios in stripe cells would then need to be addressed. Urdapilleta, Si, and Treves propose a model in which discrete modules self-organize from smooth parameter gradients, with grid formation driven by firing rate adaptation in single cells~\cite{Urdapilleta:2017kn}. They also utilize excitatory coupling among grid cells along the longitudinal MEC axis. However, this model does not have a mechanism to dynamically enforce the average constancy of grid scale ratios, which is a feature of the grid system~\cite{Stensola:2012gn}. The model also does not demonstrate modules with orientation differences near \SI{30}{\degree}~\cite{Krupic:2015gj}. Our model naturally reproduces these features of the grid system. 
Furthermore, over the past few years, multiple reports have provided independent experimental support for the importance of recurrent connections among grid cells~\cite{Couey:2013fi,Dunn:2015he,Fuchs:2016et,Zutshi:2018ku} and for the continuous attractor model in particular~\cite{Yoon:2013hv,Heys:2014cv,Gu:2018dm}. Our work establishes that continuous attractor networks can produce a discrete hierarchy of modules with a constant-on-average scale ratio. The competition generated between excitatory and inhibitory connections bears a strong resemblance to the Frenkel-Kontorova model of condensed matter physics, in which a periodic potential of one scale acts on particles that prefer to form a lattice of a different, competing scale~\cite{Kontorova:1938en}. This model has a rich literature with many deep theoretical results, including the calculation of complicated phase diagrams involving ``devil's staircases''~\cite{Bak:1982it,chaikinlubensky} which mirror those of our model (\textbf{Fig.~\ref{fig:phase}}). Under certain conditions, our model produces networks with quasicrystalline approximant grids that are driven by networks with standard triangular grids at other scales (\textbf{Supp.\@ Fig.~9}). Quasicrystalline order lacks periodicity, but contains more nuanced positional order~\cite{Levine:1986kb}. This phenomenon wherein quasicrystalline structure is driven by crystalline order in a coupled system was recently observed for the first time in thin-film materials that contain Frenkel-Kontorova-like interactions~\cite{Forster:2013de,Forster:2016bd,Passens:2017kl}. Commensurate and discommensurate lattice relationships are a robust and versatile mechanism for self-organizing a grid system whose scale ratios are constant or constant on average across a hierarchy of modules. We demonstrated this mechanism in a basic extension of the continuous attractor model with excitatory connections between networks. 
This model is amenable to extensions that capture other features of the grid system, such as spiking dynamics, learning of synaptic weights~\cite{Widloski:2014dl}, the union of our separate networks into a single network spanning the entire MEC, and the addition of border cell inputs or recurrent coupling between modules to correct path-integration errors or react to environmental deformations~\cite{Hardcastle:2015il,Keinath:2018el,Ocko:2018ed,pollock,mosheiffburak}. \begin{acknowledgments} We are grateful to Xue-Xin Wei, Tom Lubensky, Ila Fiete, and John Widloski for their thoughtful ideas and suggestions, and to the Honda Research Institute and the NSF (grant PHY-1734030) for research support. L.K. is also supported by the Miller Institute for Basic Research in Science. Work on this project at the Aspen Center for Physics was supported by NSF grant PHY-1607611. \end{acknowledgments} \section*{Methods} \begin{table} \caption{\label{tab:params}Main model parameters and their values unless otherwise noted.} \begin{ruledtabular}\begin{tabular}{ccc} Parameter & Variable & Value \\ \hline Number of networks & $h$ & 12 \\ Number of neurons per network & $n \times n$ & $160 \times 160$ \\ Neurons recorded per network & & $3$ \\ Animal speed & $|\ve V|$ & 0--\SI{1}{\m\per\s} \\ Diameter of enclosure & & \SI{180}{\cm} \\ Simulation time & & \SI{500}{\s} \\ Simulation timestep & $\Delta t$ & \SI{1}{\ms} \\ Neural relaxation time & $\tau$ & \SI{10}{\ms} \\ Hippocampal input strength & $a_\textrm{mag}$ & 1 \\ Hippocampal input falloff & $a_\textrm{fall}$ & 4 \\ Inhibition distance minimum & $l_\textrm{min}$ & 4 \\ Inhibition distance maximum & $l_\textrm{max}$ & 15 \\ Inhibition distance exponent & $l_\textrm{exp}$ & $-1$ \\ Inhibition strength & $w_\textrm{mag}$ & 2.4 \\ Subpopulation shift & $\xi$ & 1 \\ Coupling spread & $d$ & 8 \\ Coupling strength & $u_\textrm{mag}$ & 2.6 \\ Velocity gain & $\alpha$ & \SI{0.3}{\s\per\m} \end{tabular}\end{ruledtabular} \end{table} 
\subsection*{Model setup and dynamics} We implemented the Burak-Fiete model as follows~\cite{Burak:2009fx}. Networks $z = 1, \ldots, h$ each contain a 2D sheet of neurons with indices $\ve r = (x, y)$, where $x = 1, \ldots, n$ and $y = 1, \ldots, n$. Neurons receive broad excitatory input $a(\ve r)$ from the hippocampus, and, to prevent edge effects, those toward the center of the networks receive more excitation than those toward the edges. Each neuron also inhibits others that lie around a length scale of $l(z)$ neurons away in the same network $z$. Moreover, every neuron belongs to one of four subpopulations that evenly tile the neural sheet. Each subpopulation is associated with both a preferred direction $\veh e$ along one of the network axes $\pm \veh x$ or $\pm \veh y$ and a corresponding preferred direction $\veh E$ along an axis $\pm \veh X$ or $\pm \veh Y$ in its spatial environment. A neuron at position $\ve r$ in network $z$ has its inhibitory outputs $w(\ve r; z)$ shifted slightly by $\xi$ neurons in the $\veh e(\ve r)$ direction and its hippocampal excitation modulated by a small amount proportional to $\veh E(\ve r) \cdot \ve V$, where $\ve V$ is the spatial velocity of the animal. Note that lowercase letters refer to attractor networks at each depth $z$ in which distances have units of neurons, and uppercase letters refer to the animal's spatial environment in which distances have physical units, such as centimeters. In addition to these established features~\cite{Burak:2009fx}, we introduce excitatory connections $u(\ve r)$ from every neuron $\ve r$ in network $z$ to neurons located within a spread $d$ of the same $\ve r$ but in the preceding network with depth $z-1$. $u(\ve r)$ is constant for all networks except for the last one $z = h$, which has $u(\ve r) = 0$. 
These components lead to the following dynamical equation for the dimensionless neural firing rates $s(\ve r, z, t)$: \begin{eqnarray} &&\tau \frac{s(\ve r,z,t+\Delta t)-s(\ve r,z,t)}{\Delta t} + s(\ve r,z,t) \nonumber\\ && \quad{}= \bigg\{\sum_{\ve r'} w(\ve r-\ve r'+\xi \veh e(\ve r');z) s(\ve r',z,t) \nonumber\\ && \quad\qquad{}+ \sum_{\ve r'} u(\ve r-\ve r') s(\ve r',z+1,t) \nonumber\\ && \quad\qquad{}+ a(\ve r) \left[1 + \alpha \veh E(\ve r) \cdot \ve V(t)\right]\bigg\}_+. \label{eqn:s} \end{eqnarray} Inputs to each neuron are rectified by $\{c\}_+ = 0$ for $c < 0$, $c$ for $c \geq 0$. $\Delta t$ is the simulation time increment, $\tau$ is the neural relaxation time, and $\alpha$ is the velocity gain that describes how much the animal's velocity $\ve V$ modulates the hippocampal inputs $a(\ve r)$. Note that $s$ can be treated as a dimensionless variable because \textbf{Eq.~\ref{eqn:s}} is invariant to scaling of $s$ and $a$ by the same factor. We use velocities $\ve V(t)$ corresponding to a real rat trajectory~\cite{Hafting:2005dp, Burak:2009fx}. Details are provided in \textbf{Supp.\@ Info.} \subsection*{Inhibitory and excitatory connections} The hippocampal input is \begin{equation} a(\ve r) = \begin{cases} a_\textrm{mag} \mathrm{e}^{-a_\textrm{fall} r_\textrm{scaled}^2} & r_\textrm{scaled} < 1 \\ 0 & r_\textrm{scaled} \geq 1, \end{cases} \label{eqn:a} \end{equation} where $r_\textrm{scaled} = \sqrt{\left(x-\frac{n+1}{2}\right)^2 + \left(y-\frac{n+1}{2}\right)^2}/\frac{n}{2}$ is a scaled radial distance for the neuron at $\ve r = (x, y)$, $a_\textrm{mag}$ is the magnitude of the input, and $a_\textrm{fall}$ is a falloff parameter. 
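As a concrete illustration, the update in \textbf{Eq.~\ref{eqn:s}} amounts to one rectified Euler step per network. The following numpy sketch is illustrative rather than the simulation code used here: the recurrent inhibitory convolution (with its subpopulation shifts) and the inter-network coupling term are assumed to be precomputed by the caller.

```python
import numpy as np

def step(s, w_conv, u_input, a, alpha, E_dot_V, tau=0.010, dt=0.001):
    """One Euler step of the rectified rate dynamics (sketch of Eq. for s).

    s        : (n, n) firing rates of one network at time t
    w_conv   : (n, n) recurrent inhibitory input, i.e. the shifted convolution
               of w with s, precomputed by the caller
    u_input  : (n, n) excitatory input from network z + 1
    a        : (n, n) hippocampal excitation envelope
    E_dot_V  : (n, n) per-neuron preferred direction dotted with velocity V
    """
    drive = w_conv + u_input + a * (1.0 + alpha * E_dot_V)
    drive = np.maximum(drive, 0.0)          # rectification {.}_+
    return s + (dt / tau) * (drive - s)     # tau ds/dt = -s + {inputs}_+
```

With all inputs zero, the rate decays geometrically by a factor $1 - \Delta t/\tau$ per step, and a state equal to its rectified drive is a fixed point of the update.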
The inhibition distance for network $z$ is \begin{equation} l(z) = \left[l_\textrm{min}^{l_\textrm{exp}} + \left(l_\textrm{max}^{l_\textrm{exp}} - l_\textrm{min}^{l_\textrm{exp}}\right) \frac{z-1}{h-1}\right]^{1/l_\textrm{exp}}, \label{eqn:l} \end{equation} which ranges from $l_\textrm{min} = l(1)$ to $l_\textrm{max} = l(h)$ with concavity tuned by $l_\textrm{exp}$. More negative values of $l_\textrm{exp}$ lead to greater concavity; for $l_\textrm{exp} = 0$, we use the limiting expression $l(z) = l_\textrm{min}^{(h-z)/(h-1)} l_\textrm{max}^{(z-1)/(h-1)}$. The recurrent inhibition profile for network $z$ is \begin{equation} w(\ve r; z) = \begin{cases} -\dfrac{w_\textrm{mag}}{l(z)^2} \dfrac{1-\cos[\pi r/l(z)]}{2} & r < 2 l(z) \\ 0 & r \geq 2 l(z), \end{cases} \label{eqn:w} \end{equation} where $w_\textrm{mag}$ is the magnitude of inhibition. We scale this magnitude by $l(z)^{-2}$ to make the integrated inhibition constant across $z$. The excitatory coupling is \begin{equation} u(\ve r) = \begin{cases} \dfrac{u_\textrm{mag}}{d^2} \dfrac{1+\cos[\pi r/d]}{2} & r < d \\ 0 & r \geq d, \end{cases} \label{eqn:u} \end{equation} where $u_\textrm{mag}$ and $d$ are the magnitude and spread of coupling, respectively. In analogy to $w_\textrm{mag}$, we scale $u_\textrm{mag}$ by $d^{-2}$. \subsection*{Overview of data analysis techniques} To determine spatial grid scales, orientations, and gridness, we consider an annular region of the spatial autocorrelation map that contains the 6 peaks closest to the origin. Grid scale is the radius with highest value, averaging over angles. Grid orientation and gridness are determined by first averaging over radial distance and analyzing the sixth component of the Fourier series with respect to angle~\cite{Weber:2019gb}. The power of this component divided by the total Fourier power measures ``gridness'' and its complex phase measures the orientation. Grid cells are subject to a gridness cutoff of 0.6. 
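The depth-dependent kernels defined in \textbf{Eqs.~\ref{eqn:l}} and \textbf{\ref{eqn:w}} can be sketched directly; parameter defaults below follow Table~\ref{tab:params}, and the code is an illustration rather than the simulation source.

```python
import numpy as np

def inhibition_distance(z, h=12, l_min=4.0, l_max=15.0, l_exp=-1.0):
    """Depth-dependent inhibition distance l(z)."""
    frac = (z - 1) / (h - 1)
    if l_exp == 0.0:  # limiting (geometric interpolation) expression
        return l_min ** (1 - frac) * l_max ** frac
    return (l_min**l_exp + (l_max**l_exp - l_min**l_exp) * frac) ** (1.0 / l_exp)

def w_profile(r, z, w_mag=2.4, **kwargs):
    """Recurrent inhibition w(r; z): a raised-cosine well of range 2*l(z),
    scaled by l(z)**-2 so the integrated inhibition is constant across z."""
    l = inhibition_distance(z, **kwargs)
    return np.where(r < 2 * l,
                    -(w_mag / l**2) * (1 - np.cos(np.pi * r / l)) / 2,
                    0.0)
```

By construction, $l(1) = l_\textrm{min}$ and $l(h) = l_\textrm{max}$ for any value of $l_\textrm{exp}$, and the inhibition vanishes at $r = 0$ and beyond $r = 2\,l(z)$.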
For each replicate simulation, we cluster its grid cells with respect to scale and orientation using a $k$-means procedure with $k$ determined by kernel smoothed densities~\cite{Stensola:2012gn}. See \textbf{Supp.\@ Info.} for full details. \input{Manuscript.bbl} \end{document}
\section{Introduction} Pancreatic ductal adenocarcinoma (PDAC), which accounts for 90\% of malignant pancreatic cancers, has the highest mortality rate. Despite significant advances in medicine over the past decades, it has a poor five-year survival rate of less than 5\%, most probably because of the advanced and incurable stage of the disease at the time of diagnosis \cite{Ryan2014}. Intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN) are two precursor cystic lesions of PDAC, and the early diagnosis of these precursor lesions can significantly increase patient survival \cite{Canto2013}. However, benign cysts such as serous cystic neoplasm (SCN) and solid pseudopapillary tumor (SPT), which rarely or never give rise to malignant cancer, present very similar imaging properties to PDAC precursors. Up to 16\% of screening subjects have been reported to have pancreatic cysts \cite{Reichert2011}. Unnecessary pancreatic surgery on benign lesions may dramatically reduce the patients' quality of life. Therefore, early differential diagnosis plays a key role in this dilemma. Despite the urgent need, there is still no clinically available method to effectively differentiate pancreatic cysts to date \cite{vincent2011pancreatic}. A recent study \cite{sahani2011prospective} reported an accuracy of 67--70\% for the discrimination of 130 pancreatic cysts on CT scans performed by two physicians with 10+ years of experience in abdominal imaging. A computer-aided diagnosis (CAD) system was proposed recently for pancreatic cyst classification \cite{dmitriev2017classification}. This method requires the cysts to be accurately annotated before general demographic information and texture features extracted by a convolutional neural network are aggregated via Bayesian combination. However, pancreatic cystic lesions have a large variation in size (as small as a few mm) and geometry. 
The heterogeneity of the lesions makes precise identification and segmentation of the lesions extremely difficult and incurs additional risks for the successful application of this method. Furthermore, focusing only on the region of the cysts leads to two technical limitations: 1) useful contextual information, including the shape of an abnormal pancreas caused by inflammation, is missing after masking the cysts; 2) the texture information inside some types of small cysts, such as the SPT shown in Fig.~\ref{fig:samples_cysts}, is poor. In this paper, we present the first CAD approach for early differential diagnosis of pancreatic cysts \textbf{without the requirement of detection and segmentation of lesions} beforehand. This is achieved by using densely-connected convolutional networks (Dense-Net) \cite{huang2017densely} on the whole pancreas in CT imaging, which not only learns high-level features from the whole pancreas and builds mappings from pathological types to imaging appearance, but can also generate better saliency maps to visualize important regions by taking advantage of its dense connections between convolutional layers. Four subtypes of cystic lesions, i.e.~IPMN, MCN, SCN and SPT, are classified. To further explore contextual information from outside the lesion, we generate saliency maps that highlight the important pixels and indicate the spatial support relevant to the decision. This can assist radiologists in understanding the computer-aided decision, for example, to what degree the pancreas shape makes a difference in evaluating the disease state. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95 \linewidth,height=0.25\linewidth]{cysts_samples_.pdf} \end{center} \caption{Examples of pancreatic cyst appearance in CT images. 
The blue contour indicates the region of the pancreas and the red contour indicates the area of the cysts.} \label{fig:samples_cysts} \end{figure*} \section{Data Preparation} A cohort of 206 patients (81 males, 125 females, mean age 53.4 $\pm$ 15.1 years) referred to surgery because of suspected malignant pancreatic cysts was included in this study. These patients had no reported pancreatic disease until the diagnosis on abdominal contrast-enhanced CT scans (slice thickness 3 mm). Pathology on the surgical specimens confirmed 64 cases of IPMN, 35 cases of MCN, 66 cases of SCN, and 41 cases of SPT, which means all the benign cysts in the cohort had been misdiagnosed by reading the CT images. Before the development of our computer-aided methods, a rough contour of the pancreas, as shown in Fig.~\ref{fig:samples_cysts}, was defined by a junior physician. \section{Method} \subsection{Densely-Connected Convolutional Networks} \label{method} \begin{figure*}[t] \begin{center} \includegraphics[width=0.95 \linewidth,height=0.25\linewidth]{architecture_dense.pdf} \end{center} \caption{Overview of our proposed Dense-Net architecture. It contains three convolutional blocks with 10 densely connected convolutional layers in each block. For the convolutional layers in one block, the feature maps of all preceding layers are used as inputs to each succeeding layer. A 1$\times$1 convolution followed by 2$\times$2 average pooling is employed as a transition layer between two contiguous dense blocks.} \label{fig:DenseNet} \end{figure*} In this work, we explicitly learn mapping models from imaging appearance to histopathological types via densely-connected convolutional neural networks (Dense-Net). Note that this type of CNN architecture was first proposed by \cite{huang2017densely} as a deep learning based method for general image classification. We find, however, that it is also a suitable model for high-level feature extraction of the pancreas as well as for visualization of dominant locations. 
The Dense-Net addresses two important issues: 1) it deals with the vanishing or exploding gradient problem \cite{bengio1994learning} by strengthening feature propagation; 2) it reduces computational complexity by promoting feature reuse. Both strengthen its power for image feature representation and for generating gradient-based saliency maps. In Dense-Net, each layer is connected to every other layer in the network by skip connections. Thus, the feature maps of all preceding layers are used as inputs to each succeeding layer. The $l^{th}$ layer concatenates the feature maps of all preceding layers, x$_{0}$, x$_{1}$, ..., x$_{l-1}$, as its input: \begin{equation} \emph{x$_{l}$} = \emph{H$_{l}$([x$_{0}$, x$_{1}$, ..., x$_{l-1}$])} \end{equation} where [x$_{0}$, x$_{1}$, ..., x$_{l-1}$] represents the concatenation of the feature maps generated in layers \emph{0}, \emph{1}, ..., \emph{l-1}. Here \emph{H$_{l}$}, following the improved residual networks \cite{he2016deep}, is a compound function of Batch Normalization \cite{ioffe2015batch}, followed by a ReLU \cite{nair2010rectified} and a convolution. If each function \emph{H$_{l}$} produces \emph{k} feature maps, it follows that the \emph{m$^{th}$} layer has \emph{k$_{0}$+k$\times$(m-1)} input feature maps, where \emph{k$_{0}$} is the number of channels in the input layer. The hyperparameter \emph{k} is referred to as the \emph{growth rate} of the network. In our classification task, the image size of the bounding box for the pancreas is 144$\times$144 and the number of classes is only 4, so we modify the original network designed for ImageNet in order to reduce computational complexity. Furthermore, by reducing the number of convolutional blocks, we obtain better gradients for generating the saliency maps, with less risk of gradient vanishing. 
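The dense connectivity pattern can be illustrated with a toy numpy sketch. Here a random linear 1$\times$1 map with a ReLU stands in for the full composite function $H_l$ (Batch Normalization and 3$\times$3 convolutions omitted), so this illustrates only the concatenation scheme, not the Keras model trained in the paper.

```python
import numpy as np

def dense_block(x0, num_layers=10, k=9, H=None):
    """Toy dense connectivity: each layer receives the channel-wise
    concatenation of the input and all preceding layers' outputs.

    x0 : (height, width, k0) input feature maps
    H  : composite function producing k new maps; a random linear
         1x1 map followed by ReLU is used as a stand-in by default
    """
    rng = np.random.default_rng(0)
    features = [x0]
    for _ in range(num_layers):
        cat = np.concatenate(features, axis=-1)      # [x0, x1, ..., x_{l-1}]
        if H is None:
            W = rng.standard_normal((cat.shape[-1], k))
            new = np.maximum(cat @ W, 0.0)           # 1x1-conv stand-in + ReLU
        else:
            new = H(cat)
        features.append(new)
    # the block output concatenates the input with all L layers' outputs,
    # i.e. k0 + k*L feature maps
    return np.concatenate(features, axis=-1)
```

With the settings of this paper ($L = 10$, $k = 9$, $k_0 = 2k = 18$), the block output carries $18 + 9 \times 10 = 108$ feature maps.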
\subsubsection{Implementation Details} We apply Dense-Net as a high-level feature extractor and classifier, aiming to better learn the shape and texture information of the pancreas. The proposed Dense-Net, as shown in Fig.~\ref{fig:DenseNet}, contains 3 convolutional blocks, 2 pooling layers between the blocks, and 1 global average pooling layer. For convolutional layers with kernel size 3$\times$3, each side of the input is zero-padded by one pixel in order to keep the feature-map size fixed. We use a 1$\times$1 convolution followed by 2$\times$2 average pooling as the transition layer between two contiguous dense blocks. At the end of the last dense block, a global average pooling is performed and a softmax classifier is employed. We set the number of layers to \emph{L = 10} for each block, the growth rate to \emph{k = 9}, and the number of feature maps of the first layer to \emph{k$_{0}$ = 2k}. A bottleneck layer (a 1$\times$1 convolutional layer) is used before each 3$\times$3 convolution to reduce the number of input feature maps and thus improve computational efficiency. For comparison with a traditional CNN, we designed an architecture similar to the model in \cite{dmitriev2017classification}, which is specifically tailored to pancreatic cyst classification. \subsubsection{Training and Testing} The data for training and testing the proposed Dense-Net were generated as follows. Each \emph{2D} axial slice X$_{ij}^{Slice}$ of the original $3D$ bounding box \{X$_{ij}^{Slice}$\} with a segmented pancreas x$_{i}$ was cropped to a 144 $\times$ 144 pixel square. Due to the generally near-spherical shape of the pancreas head, slices close to the top or bottom of the volume do not contain enough pixels of the pancreas to make an accurate diagnosis. Therefore, slices with an overlap ratio, defined as the percentage of pancreas pixels in a slice, of less than 10\% were excluded. 
Overfitting of the network was further reduced by applying data augmentation: 1) random rotations within a range of [$-$25\degree, $+$25\degree]; 2) random zoom within a range of [0.9, 1.2]; 3) random vertical flips. The network was implemented using the Keras \cite{chollet2015keras} library and trained on mini-batches of size 40 to minimize the class-balanced cross-entropy loss function using Stochastic Gradient Descent with a learning rate of 0.0005 for 100 epochs. In the testing phase, each slice with an overlap ratio of more than 10\% was analyzed by the Dense-Net separately, and the final probabilities were obtained by averaging the class probabilities over the slices: \begin{equation} \widetilde{P}_{DenseNet}(y_{m} = y|\{X_{ij}^{Slice}\}) = \frac{1}{N_{m}}\sum_{j=1}^{N_{m}}P_{DenseNet}(y_{m} = y|X_{mj}^{Slice}) \end{equation} where P$_{DenseNet}$(y$_{m}$ = y$|$X$_{mj}^{Slice}$) is the vector of class probabilities, and N$_{m}$ is the number of 2D axial slices used for the classification of pancreas sample x$_{m}$. \subsection{Saliency Maps for Computer-Aided Analysis} \label{section_saliency_maps} Saliency maps that visualize the dominant locations are employed as a computer-aided analysis tool in our study. We generated the saliency maps of the testing images using the Guided Back-propagation method of \cite{springenberg2014striving}. It visualizes the part of an input image that most activates a given neuron via a simple backward pass of the activation of a single neuron after a forward pass through the network. To this end, it computes the gradient of the activation with respect to the image. These gradients are then used to highlight the input regions that cause the greatest change to the output. In each convolutional block of Dense-Net, every convolutional layer is connected to the others, which addresses the vanishing gradient problem; thus we obtain better gradients for generating the saliency maps. 
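The slice-level aggregation described above is a simple mean over per-slice class probabilities, followed by an arg-max. A minimal sketch (a hypothetical helper, not part of any released code):

```python
import numpy as np

def aggregate_slice_probabilities(slice_probs):
    """Average per-slice class probabilities into one per-sample prediction.

    slice_probs : (N_m, 4) array; row j holds P(y | X_mj) over the 4 classes
                  (IPMN, MCN, SCN, SPT)
    Returns the averaged class-probability vector and the arg-max class index.
    """
    p = np.asarray(slice_probs, dtype=float).mean(axis=0)
    return p, int(np.argmax(p))
```

Averaging before the arg-max lets confident slices outvote ambiguous ones, which is why slices with too little pancreas content are excluded beforehand.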
\section{Results and Discussion} We evaluated the performance of the proposed method using a stratified 10-fold cross-validation strategy, maintaining a similar class distribution in the training and testing datasets given the imbalance in the dataset. For each fold, we trained a Dense-Net on the training set and evaluated it on the test set. Classification performance is reported in terms of the normalized averaged confusion matrix and the overall classification accuracy. \subsubsection{Results of Dense-Net} \begin{table*} \caption{Confusion matrices of the Dense-Net (left) and the traditional CNN (right). The figures in red indicate better accuracy achieved by \emph{Dense-Net} than by the traditional CNN, while the ones in bold indicate the accuracies of each class.}\label{table:Results} \begin{tabular}{c} \begin{minipage}{0.51\linewidth} \begin{tabular}{ c | c | c | c | c } \hline \textbf{Types}&~~IPMN~~&~~MCN~~&~~SCN~~&~~SPT\\ \hline IPMN& \color{red}\textbf{81.25}\% & 0.03\% & 17.19$\%$ & 3.13$\%$\\ {MCN}& 11.43\% & \textbf{65.71}\% & 8.57$\%$ & 14.29$\%$\\ {SCN}& 18.18\% & {1.51}\% & \color{red}\textbf{75.76}$\%$ & 4.54$\%$\\ {SPT}& 12.20\% & {2.43}\% & 24.39$\%$ & \color{red}\textbf{60.98}$\%$\\ \hline \end{tabular} \end{minipage} \bigskip \begin{minipage}{.51\linewidth} \begin{tabular}{ c | c | c | c | c } \hline \textbf{Types}&~~IPMN~~&~~MCN~~&~~SCN~~&~~SPT\\ \hline IPMN& \textbf{67.19}\% & 4.69\% & 23.44$\%$ & 3.13$\%$\\ {MCN}& 17.14\% & \textbf{65.71}\% & 11.43$\%$ & 5.71$\%$\\ {SCN}& 21.21\% & {3.03}\% & \textbf{66.67}$\%$ & 9.09$\%$\\ {SPT}& 12.20\% & {2.43}\% & 39.02$\%$ & \textbf{48.78}$\%$\\ \hline \end{tabular} \end{minipage} \end{tabular} \end{table*} The proposed method achieved an overall accuracy of \emph{72.8\%}, which is significantly higher than the baseline diagnostic accuracy of \emph{48.1\%} from manual reading. 
From Table \ref{table:Results}, we observe that SCNs were easily misclassified as IPMNs, which is consistent with the diagnostic experience of physicians on this cohort. By further looking into the misclassified cases, we found that the misclassified SCNs have a very similar appearance to IPMNs, while some pancreases with SCNs have a very different shape from those of other histological types. In addition, we evaluated the benefit of using data augmentation and found an overall improvement of 10.7\% averaged over all cases and classes. To assess the performance of the proposed Dense-Net, the results were compared with a conventional CNN (see Table \ref{table:Results}), which is widely employed in computer-aided diagnosis. The same training strategy was employed for the two models. We found that the Dense-Net outperforms the traditional CNN on three classes. Interestingly, for the MCN type, although Dense-Net and the traditional CNN achieved the same accuracy, they have different distributions over the misclassified types. This could be explained by the fact that Dense-Net differs from the traditional CNN in its architectural realization, especially regarding the feature connection scheme, which might be the direct cause of the varied feature representations of the organ. \subsubsection{Results of Saliency Maps} We visualized the testing images using the gradient-based method introduced in Section \ref{section_saliency_maps} to show the critical regions that contributed to the classification results. Some results are shown in Fig. \ref{fig:saliency_maps}. In the 3$^{rd}$ row, we observe that the pancreatic cysts are highlighted in all four cases, demonstrating that Dense-Net extracted rich information from the region of the cysts. As a comparison, the maps generated by the traditional CNN are presented in the 2$^{nd}$ row; they did not highlight pixels related to the abnormal region. 
There are two main reasons: 1) the Dense-Net is more powerful than the traditional CNN in feature extraction and focuses on both the abnormal region and shape information; 2) the connections between every two convolutional layers guarantee that Dense-Net obtains better gradients for visualization than the traditional CNN. For most cases, the boundaries of the pancreas were highlighted; this indicates that the shape information of the pancreas can also contribute to the decision, in particular around a large lesion that has modified the organ. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95 \linewidth,height=0.70\linewidth]{saliency_maps.pdf} \end{center} \caption{Samples of saliency maps. From top to bottom: original images and saliency maps generated by the traditional CNN and Dense-Net, respectively. From left to right: one of the axial slices of IPMN, MCN, SCN and SPT, respectively. Compared to the traditional CNN, Dense-Net obtains better visualization because of its unique connections between convolutional layers.} \label{fig:saliency_maps} \end{figure*} \section{Summary and Conclusion} In this work, we proposed a computer-aided diagnosis and visualization system to identify and classify pancreatic cysts. The proposed algorithm is based on densely-connected convolutional networks in order to utilize fine imaging information from CT scans and to highlight the dominant locations/pixels, which not only justifies the effectiveness of our model but could also serve as a computer-aided diagnosis tool in clinical practice. Although the reported statistics are not attractive at first glance, they represent a 51.4\% relative improvement in accuracy over the baseline manual diagnosis. In fact, the relatively low accuracies of many reports on the differential diagnosis of pancreatic cysts were obtained on similarly challenging data, which contained a large number of misdiagnosed cases. However, differential diagnosis of pancreatic cysts is extremely difficult. 
Even fine needle biopsy suffers from the heterogeneity of the lesions and from inaccurate sampling due to pancreas motion. Only pathology on surgical specimens is the gold standard at the moment, which restricts the recruitment of patient cohorts. Therefore, even a small improvement in accuracy is a contribution to the state of the art. The significantly improved accuracy and easy application strongly support the clinical potential of our method. A limitation of our method is the need for a rough segmentation of the pancreas before applying the computer-aided approach. Nevertheless, compared to the detection and segmentation of cystic lesions, the approximate segmentation of this organ imposes far less demanding performance requirements. Furthermore, various works focus on the segmentation of the whole pancreas \cite{roth2015deep,cai2017pancreas}, and their results are good enough to serve as an initial step for our method. Further improvement of the accuracy of our method can be achieved by including demographic information for classification~\cite{dmitriev2017classification}, which will be integrated in future work. For example, SPT typically afflicts young women; the inclusion of gender and age may largely improve the accuracy for SPT. \bibliographystyle{splncs03}
\section{Introduction}\label{sec:Intro} Agitated granular materials tend to exhibit intricate phenomena such as pattern formation \cite{Aranson2006}, collapse \cite{vanderMeer2002} or segregation \cite{Sanders2004, Rosato1987}. In quasi two-dimensional systems it is not uncommon to observe two coexisting phases such as condensed clusters of particles surrounded by a gas-like phase \cite{Olafsen1998, Roeller2011, Prevost2004, Risso2018}. For freely cooling systems of inelastic particles studied \emph{in silico,} it has been reported that particles tend to form clusters inside which the rate of energy dissipation exceeds that in the rest of the system, in a process known as clustering instability \cite{Goldhirsch1993}. Even though granular materials are systems far from equilibrium, several authors have proposed the introduction of effective interactions among particles to describe the observed phase separation and segregation \cite{Ciamarra2006, Bordallo-Favela2009}. Effective potentials have been calculated for experimentally observed quasi two-dimensional systems of granular spheres under mechanical agitation \cite{Bordallo-Favela2009} or under the effects of external, oscillating magnetic fields \cite{Tapia-Ignacio2016, Donado2017}, by measuring the radial distribution function and inverting it by means of the Percus--Yevick \mbox{(PY)} integral equation \cite{Percus1958}. Following the same approach, Vel\'{a}zquez-P\'{e}rez and co-workers have studied the effect of the interparticle coefficient of restitution on the shape of the effective potential, reporting an increment of the effective particle attraction with decreasing values of the coefficient of restitution \cite{Velazquez-Perez2016}. In their paper, they present a complicated shape of the attractive effective potential as a function of the particle separation distance. 
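For orientation, the crudest inversion of the radial distribution function is the potential of mean force, $\beta u_\textrm{eff}(r) = -\ln g(r)$, which is exact only in the dilute limit; the \mbox{PY} and \mbox{IO--ZI} inversions discussed below correct this estimate at finite density. A minimal sketch of the dilute-limit inversion:

```python
import numpy as np

def potential_of_mean_force(g_r, kT=1.0):
    """Dilute-limit effective potential u(r) = -kT * ln g(r).

    g_r : sampled radial distribution function values; where g(r) = 0
          the potential is taken as +infinity (hard exclusion).
    """
    g_r = np.asarray(g_r, dtype=float)
    u = np.full_like(g_r, np.inf)
    mask = g_r > 0
    u[mask] = -kT * np.log(g_r[mask])
    return u
```

At finite packing fraction this estimate folds many-body correlations into the pair potential, which is precisely the shortcoming that the finite-density closures address.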
In the present work we show that a simple inversion of the \mbox{PY} integral equation is insufficient for obtaining the correct form of the effective potential in most granular systems. Instead, we propose the use of the novel Iterative Ornstein--Zernike Inversion \mbox{(IO--ZI)} method \cite{Heinen2018} which is shown here to yield more reliable and simpler forms of the effective potential. This paper is organized as follows: In Sec.~\ref{sec:Granul_Sim}, we describe our Granular Dynamics simulations. The \mbox{IO--ZI} method for calculating the effective pair potential of a two-dimensional reference fluid is explained in Sec.~\ref{sec:IO--ZI}, including a subsection~\ref{subsec:IO--ZI_validation} in which the method is validated by test cases. Our results for the effective pair potential are reported in Sec.~\ref{sec:Results}, which is followed by the conclusions. \section{Granular Dynamics Simulations}\label{sec:Granul_Sim} \begin{figure} \centering \includegraphics[width=.96\columnwidth]{Figure01.eps} \vspace{0em} \caption{\label{fig:Snapshot} (Color online) A snapshot of our Granular \mbox{Dynamics} simulation for $\phi = 0.4$, $\epsilon = 0.7$. The Cartesian box dimensions are $L \times L \times 3\sigma$ with $L / \sigma = \sqrt{\pi N / 4 \phi} ~ \approx 31.7$.} \end{figure} Figure~\ref{fig:Snapshot} features a representative snapshot from one of our Granular Dynamics simulations. All simulations are for monodisperse systems of $N = 512$ spherical particles with diameter $\sigma$, confined between two horizontal plates at $z = \delta z(x, y)$ and $z = 3\sigma + \delta z(x, y)$. Including the gentle sinusoidal surface roughness $\delta z(x,y) = 10^{-3} \times \sigma \times \left[ \sin (\psi x) + \sin (\psi y) \right]$ with $\psi \sigma = 210$ on the plates helps to avoid a suppression of the $x$- and $y$-components of the spheres' velocities due to friction between the particles and the plates \cite{Perera-Burgos2010}. 
Our choice of the parameter $\psi$ corresponds to a surface roughness wavelength that is much shorter than $\sigma$, resulting in quasi-random lateral velocity kicks. Periodic boundary conditions are applied in the Cartesian $x$- and $y$-directions, and the particles have three translational and three rotational degrees of freedom. \mbox{Newton's} equation of motion is integrated in time by means of a Verlet algorithm with a velocity-prediction step \cite{Perez2008}. Forces that act orthogonal to the particle surfaces are modeled by a spring-dashpot model \cite{Shafer1996}, whereas tangential interactions are modeled as Coulomb friction for the sake of simplicity in calculations. The orthogonal forces are characterized by the restitution coefficients $\epsilon$ and $\epsilon_w$ in case of particle-particle and particle-wall collisions, respectively. In all our simulations, the particle-wall restitution coefficient $\epsilon_w = 0.9$ is assumed. For the particle-particle normal restitution coefficient we have used the three values $\epsilon = 0.5, 0.7$ and $0.9$. The tangential forces in particle pairs and between particles and walls are both characterized by the tangential friction coefficient $\mu = 0.4$ in all our simulations. In an initialization step, the particles are placed at random vertices of a horizontal, two-dimensional triangular lattice with a lattice constant of $1.001\sigma$, at the center plane $z = 3\sigma/2$ between the confining plates. All spheres are assigned random velocity vectors $\boldsymbol{v}_0$ with magnitudes in the range $0 < \left| \boldsymbol{v}_0 \right| < 8\times 10^{-5} ~\sigma/ \delta t$, and random angular velocity vectors $\boldsymbol{\omega}_0$ with magnitudes in the range $0 < \left| \boldsymbol{\omega}_0 \right| < \sqrt{3} \times 10^{-10} ~\text{rad} / \delta t$, where $\delta t$ is the time step of the numerical integration scheme. 
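The contact model just described can be sketched in a few lines. The following is a minimal illustration assuming a linear spring-dashpot normal force and a sliding-Coulomb tangential force; the stiffness `k` and damping `gamma` are placeholder values (our simulations are parameterized through the resulting restitution coefficients instead), and `contact_forces` is a hypothetical helper, not code from our Granular Dynamics implementation.

```python
import numpy as np

def contact_forces(overlap, v_n, v_t, k=1.0e5, gamma=50.0, mu=0.4):
    """Linear spring-dashpot normal force with Coulomb-capped tangential friction.

    overlap : penetration depth of the two spheres (positive when in contact)
    v_n     : relative normal velocity (positive when approaching)
    v_t     : relative tangential sliding speed at the contact point
    k, gamma: illustrative spring stiffness and dashpot damping, NOT the
              values used in the simulations reported here
    mu      : tangential friction coefficient (0.4 in all our simulations)
    """
    if overlap <= 0.0:
        return 0.0, 0.0                      # no contact, no force
    f_n = k * overlap + gamma * v_n          # repulsive spring + dissipative dashpot
    f_n = max(f_n, 0.0)                      # a contact force cannot be attractive
    f_t = -np.sign(v_t) * mu * f_n           # sliding Coulomb friction, capped by mu*f_n
    return f_n, f_t
```

In spring-dashpot models of this kind, the damping is typically chosen such that a head-on collision reproduces the prescribed normal restitution coefficient.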
The confining plates are then moved sinusoidally in the $z$-direction with an amplitude $A = 0.012678~\sigma$ and a frequency $\nu = 1.4 \times 10^{-4} / \delta t$. The particles are affected by a gravitational acceleration $g$ in the negative $z$-direction. Setting the value of $g = 981~\text{cm} / \text{s}^{2}$, $\sigma = 0.5$~cm and $\delta t = 2\times 10^{-6}$~s, it is possible to express all simulation parameters in cgs units, so that $\nu = 70$~Hz and $A = 0.006339$~cm. Such parameters are realistic for experimental systems \cite{Bordallo-Favela2009}. The reduced, dimensionless peak acceleration of the plates is $\Gamma = A(2\pi\nu)^2/g = 1.25$, and we define a quasi-two-dimensional particle packing fraction as $\phi = (\pi N \sigma^2) / (4 L^2)$, where $L$ is the simulation box length in the $x$- and $y$-directions. We have performed simulations for packing fractions $\phi = 0.2, 0.4$ and $0.5$. After a short initial transient, the simulations enter a steady state that appears stationary if short-time averages are considered. In this steady state, the particles rebound vertically and acquire horizontal velocity components due to the surface undulations of the confining plates and also via particle-particle collisions. A snapshot of all particle positions was stored after every 16,667-th time step, corresponding to an interval of $1/30$~s between subsequent recordings. A total number of 2,000 snapshots was recorded for each simulation, with an exception being the system at $\phi = 0.2$, $\epsilon = 0.5$ (lower right panel in Fig.~\ref{fig:Main_Figure} and Fig.~\ref{fig:rsquare}) for which we have recorded 10,000 snapshots. 
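The dimensionless numbers quoted above follow directly from the cgs parameters; the short sketch below merely re-derives $\Gamma$, $\nu$ and the box length $L$ and is not part of the simulation code.

```python
import math

# Simulation parameters in cgs units, as quoted in the main text.
g = 981.0                # gravitational acceleration [cm/s^2]
sigma = 0.5              # particle diameter [cm]
dt = 2.0e-6              # integration time step [s]
A = 0.012678 * sigma     # plate oscillation amplitude [cm] -> 0.006339 cm
nu = 1.4e-4 / dt         # plate oscillation frequency [Hz] -> 70 Hz

# Reduced, dimensionless peak acceleration of the plates:
Gamma = A * (2.0 * math.pi * nu) ** 2 / g            # ~ 1.25

# Box length from the quasi-2D packing fraction phi = pi*N*sigma^2 / (4 L^2):
N, phi = 512, 0.4
L = sigma * math.sqrt(math.pi * N / (4.0 * phi))     # L / sigma ~ 31.7
```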
From the snapshots we have calculated the projected two-dimensional radial distribution function \begin{equation}\label{eq:gr_extraction} g_{T}(r) = \dfrac{1}{N} \left\langle \sum\limits_{\substack{i,j = 1\\i \neq j}}^{N} \delta \left( \boldsymbol{r}^{\parallel} - \boldsymbol{r}^{\parallel}_{i} + \boldsymbol{r}^{\parallel}_{j} \right) \right\rangle \end{equation} in terms of the Dirac $\delta$ distribution, and the projected two-dimensional static (steady state) structure factor \begin{equation}\label{eq:Sq_extraction} S_{T}(q) = \dfrac{1}{N} \left\langle {\left[ \sum\limits_{i = 1}^{N} \cos ( \boldsymbol{q}^{\parallel} \cdot \boldsymbol{r}^{\parallel}_{i} ) \right]}^2 \hspace{-.4em} + {\left[ \sum\limits_{i = 1}^{N} \sin ( \boldsymbol{q}^{\parallel} \cdot \boldsymbol{r}^{\parallel}_{i} ) \right]}^2 \right\rangle \end{equation} where $\left\langle \ldots \right\rangle$ stands for the average over all snapshots, $\boldsymbol{r}^{\parallel}_{i} = \left( \mathbb{1} - \hat{\boldsymbol{e}}_{z} \hat{\boldsymbol{e}}_{z} \right) \cdot \boldsymbol{r}_{i}$ is the projection of the position vector $\boldsymbol{r}_{i}$ of particle $i$ into the $(x, y)$-plane, and $\boldsymbol{q}^{\parallel} = \left( \mathbb{1} - \hat{\boldsymbol{e}}_{z} \hat{\boldsymbol{e}}_{z} \right) \cdot \boldsymbol{q}$ is the corresponding projection of the wave vector $\boldsymbol{q}$. The arguments $r = \left| \boldsymbol{r}^{\parallel} \right|$ and $q = \left| \boldsymbol{q}^{\parallel} \right|$ of the correlation functions are the norms of the projected distance and wave vectors. We have checked that all simulated systems are homogeneous and isotropic on average. The lower index '$T$' on both functions $g_{T}(r)$ and $S_{T}(q)$ stands for 'Target', as we have used these functions as the target functions for the Iterative Ornstein--Zernike Inversion method, described in Sec.~\ref{sec:IO--ZI}. 
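Eq.~\eqref{eq:Sq_extraction} translates directly into code. The sketch below evaluates the projected structure factor for a single snapshot; the snapshot average (the angular brackets) and the binning over wave-vector directions are left to the caller, and `structure_factor` is a hypothetical helper name.

```python
import numpy as np

def structure_factor(positions, q_vecs):
    """Projected 2D static structure factor of one snapshot, following
    the cosine/sine sums of Eq. (Sq_extraction).

    positions : (N, 2) array of in-plane particle coordinates r_i^parallel
    q_vecs    : (M, 2) array of in-plane wave vectors q^parallel
    Returns the M values [sum cos(q.r_i)]^2 + [sum sin(q.r_i)]^2, divided by N.
    """
    N = positions.shape[0]
    phase = q_vecs @ positions.T                     # (M, N) array of q . r_i
    return (np.cos(phase).sum(axis=1) ** 2 +
            np.sin(phase).sum(axis=1) ** 2) / N
```

For $\boldsymbol{q}^{\parallel} = 0$ the cosine sum equals $N$ and the sine sum vanishes, so a single snapshot yields the value $N$, which is a convenient sanity check of any implementation.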
\section{Iterative Ornstein--Zernike Inversion}\label{sec:IO--ZI} Iterative Ornstein--Zernike Inversion \mbox{(IO--ZI)} is a recently introduced inverse Monte Carlo method that allows one to determine the reduced, dimensionless pair potential $\beta u(r)$ of particles in thermodynamic equilibrium from their radial distribution function $g(r)$ and the static structure factor $S(q)$. Here, $\beta = 1 / (k_B T)$ is the inverse thermal energy in terms of the Boltzmann constant $k_B$ and the absolute temperature $T$. The interested reader is referred to Ref.~\cite{Heinen2018} for a comprehensive description of the IO--ZI method and its validation for three-dimensional fluid systems. For brevity's sake, we explain here only the essential working principle of \mbox{IO--ZI}, and we mention the differences between the algorithm in Ref.~\cite{Heinen2018} and the version for two-dimensional systems that we have used for the present work: The \mbox{IO--ZI} method shares its underlying principle with the well-established, but less accurate Iterative Boltzmann Inversion \mbox{(IBI)} method \cite{Reith2003}. In an initial step, a first estimate $\beta u_1(r)$ of the true potential $\beta u(r)$ is calculated via approximate, numerical inversion of the target correlation functions $g_{T}(r)$ and $S_{T}(q)$ at known particle number density $n$. The reduced potential $\beta u_1(r)$ is then used in a strictly two-dimensional $(N,V,T)$ Metropolis Monte Carlo \mbox{(MC)} simulation from which the correlation functions $g_1(r)$ and $S_1(q)$ are extracted. The differences between $g_{T}(r)$ and $g_1(r)$ and between $S_{T}(q)$ and $S_1(q)$ are the inputs for an iteration update rule by which the function $\beta u_1(r)$ is transformed into the next estimate $\beta u_2(r)$. The latter serves as the reduced pair potential in a second \mbox{MC} simulation, resulting in $g_2(r)$ and $S_2(q)$.
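The alternation of \mbox{MC} simulations and potential corrections can be sketched generically. In the sketch below, the simple \mbox{IBI} update rule $\beta u_{i+1}(r) = \beta u_i(r) + \ln\left[g_i(r)/g_T(r)\right]$ \cite{Reith2003} stands in for the closure-based \mbox{IO--ZI} updates, and `simulate` is a placeholder for the MC step that maps a tabulated potential to the $g(r)$ it produces.

```python
import numpy as np

def ibi_update(beta_u, g_i, g_T, eps=1e-12):
    """One IBI correction step: beta*u_{i+1}(r) = beta*u_i(r) + ln[g_i(r)/g_T(r)].
    The correction vanishes exactly at the fixed point g_i = g_T."""
    return beta_u + np.log((g_i + eps) / (g_T + eps))

def invert(g_T, beta_u0, simulate, n_iter=50, tol=1e-3):
    """Outer loop shared by IBI and the IO-ZI methods: alternate simulations
    and potential corrections until g_i reproduces the target g_T."""
    beta_u = beta_u0
    for _ in range(n_iter):
        g_i = simulate(beta_u)
        if np.max(np.abs(g_i - g_T)) < tol:
            break
        beta_u = ibi_update(beta_u, g_i, g_T)
    return beta_u
```

In the dilute limit, where $g(r) = e^{-\beta u(r)}$, this loop converges in a single step; the \mbox{IO--ZI} update rules replace the real-space-only logarithmic correction by closure-based corrections that also use the information in $S(q)$.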
This sequence of potential adjustments and \mbox{MC} simulations is continued until $g_n(r)$ and $S_n(q)$ are indistinguishable from $g_{T}(r)$ and $S_{T}(q)$, within the level of the stochastic noise floor. At this point, $\beta u_n(r)$ constitutes the output of the \mbox{IO--ZI} (or the \mbox{IBI}) method. Both the initial seed $\beta u_1(r)$ and the iteration update rule in \mbox{IO--ZI} rely on an approximation of the unknown bridge function \cite{Hansen_McDonald1986} in the Ornstein--Zernike integral equation formalism. Different bridge function approximations, also known as closure relations, constitute different flavors of \mbox{IO--ZI} such as Iterative Hypernetted Chain Inversion \mbox{(IHNCI)} which is based on the \mbox{HNC} closure \cite{Morita1958} or Iterative Percus-Yevick Inversion \mbox{(IPYI)}, based on the \mbox{PY} closure \cite{Percus1958}. The \mbox{IHNCI} algorithm has been published in Ref.~\cite{Heinen2018}, and the \mbox{IPYI} algorithm is obtained if Eqs.~(8) and (9) from Ref.~\cite{Heinen2018} are replaced by the equations \begin{equation} \beta u_1(x) = \ln\left[g_{T}(x) - c_{T}(x)\right] - \ln\left[g_{T}(x)\right] \nonumber \end{equation} and \begin{equation} \beta \mu_i(x) = \beta u_i(x) + \ln\left[ \dfrac{g_{T}(x) - c_{T}(x)}{g_{i}(x) - c_{i}(x)} \right] + \ln\left[ \dfrac{g_{i}(x)}{g_{T}(x)} \right], \nonumber \end{equation} respectively. Here, $x = r n^{1/d}$ is the dimensionless particle center-to-center distance in terms of the mean geometric particle distance $n^{-1/d}$. The symbols $\mu_i(x)$ and $c_{T}(x)$ denote the output of a single Picard iteration of the \mbox{IPYI} algorithm and the target direct correlation function, respectively. The meaning of both these quantities is discussed in great detail in Ref.~\cite{Heinen2018} and will not be repeated here for the sake of brevity. 
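The two equations above act pointwise on the tabulated correlation functions. A minimal sketch, with hypothetical helper names, that also illustrates the fixed-point property of the update, namely that $\beta\mu_i = \beta u_i$ once $g_i = g_T$ and $c_i = c_T$:

```python
import numpy as np

def ipyi_seed(g_T, c_T):
    """Initial IPYI estimate: beta*u_1(x) = ln[g_T(x) - c_T(x)] - ln[g_T(x)]."""
    return np.log(g_T - c_T) - np.log(g_T)

def ipyi_update(beta_u_i, g_T, c_T, g_i, c_i):
    """Single IPYI Picard step for beta*mu_i(x), as given in the main text."""
    return (beta_u_i
            + np.log((g_T - c_T) / (g_i - c_i))
            + np.log(g_i / g_T))
```

The seed is consistent with the \mbox{PY} closure $c(r) = g(r)\left[1 - e^{\beta u(r)}\right]$: inserting the closure gives $g - c = g\, e^{\beta u}$, so the seed recovers $\beta u$ exactly whenever the closure holds.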
The \mbox{IHNCI} and \mbox{IPYI} methods surpass the \mbox{IBI} method in terms of accuracy of the converged solution for the particle pair potential because the initial seed and the iteration update rule in \mbox{IBI} are both based on the comparatively inaccurate approximation of the true pair potential by the potential of mean force \cite{Hansen_McDonald1986}. Moreover, the \mbox{IO--ZI} methods make use of the information contained in the Fourier-space functions $S_{T}(q)$ and $S_{i}(q)$ as well as the real-space functions $g_{T}(r)$ and $g_{i}(r)$, whereas the \mbox{IBI} method relies on the real-space information from the radial distribution functions only. The initial seed $\beta u_1(r)$ in \mbox{IHNCI} and \mbox{IPYI} is obtained via inversion of the \mbox{HNC} and \mbox{PY} integral equations, respectively. We will therefore use the notation HNC Inversion \mbox{(HNCI)} and PY Inversion \mbox{(PYI)} for the numerical schemes that are obtained when only the initialization steps of \mbox{IHNCI} or \mbox{IPYI} are executed, and the subsequent MC simulations and iterative potential corrections are omitted. The so-obtained \mbox{PYI} method has already been used \cite{Bordallo-Favela2009, Velazquez-Perez2016} to calculate effective potentials of granular beads in vibrated quasi-two-dimensional cells, but we will demonstrate in Sec.~\ref{sec:Results} that the results from \mbox{PYI} and \mbox{HNCI} are not reliable as they contain a large systematic error. Effective potentials of granular beads that have so far been published must therefore be challenged and re-checked in every particular case.
As an additional technical comment, we note that the necessary inverse Fourier (or Hankel) transform $\mathcal{F}^{-1}$ of the isotropic direct correlation function $\tilde{c}(q)$ from wavenumber space into the real-space function $c(r)$ should preferentially be carried out via the equation \begin{equation} c(x) = g(x) - 1 - \mathcal{F}^{- 1} \left\lbrace \dfrac{{\left[S(y) - 1\right]}^2}{S(y)} \right\rbrace (x) \nonumber \end{equation} \cite{Heinen2018, Heinen2011}, in which $y = q n^{-1/d}$ is a dimensionless wavenumber, and where the Fourier integrand ${\left[S(y) - 1\right]}^2 / S(y)$ decays considerably quicker as a function of $y$ than the integrand $\tilde{c}(y)$ in $c(x) = \mathcal{F}^{-1} \left\lbrace \tilde{c}(y) \right\rbrace (x)$. A fast decay of the Fourier integrand is a desirable feature as the correlation functions are typically only known in very limited ranges of the variables $x$ and $y$. The Fourier transform is most accurately and conveniently carried out in arbitrary dimension by virtue of Hamilton's FFTLog algorithm \cite{Hamilton2000, Hamilton_website}, which is based on Talman's original publication \cite{Talman1978}. All \mbox{IHNCI} and \mbox{IPYI} runs reported here were carried out with the generalized accelerated fixed-point iteration method originally proposed by Ng \cite{Ng1974, Heinen2014, Heinen2018}. The \mbox{MC} simulations were performed on a graphics processing unit with ensemble averaging over $256$ statistically independent systems, each containing $256$ particles. While this may appear to be a dangerously small particle number, our results confirm that it is large enough to avoid significant finite size effects on $\beta u(r), g(r)$ and $S(q)$. 
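The faster decay of the integrand ${\left[S(y) - 1\right]}^2 / S(y)$ noted above follows directly from the Ornstein--Zernike relation. A short numerical check, in reduced units with the number density set to one (so that $\tilde h = S - 1$ and $\tilde c = (S-1)/S$):

```python
import numpy as np

# Model structure factor that tends to 1 at large wavenumber y:
S = 1.0 + 0.5 * np.exp(-np.linspace(0.0, 10.0, 200))

h = S - 1.0          # Fourier-space total correlation function (n = 1)
c = (S - 1.0) / S    # Fourier-space direct correlation function (n = 1)

# Identity behind the preferred transform: h - c = (S - 1)^2 / S,
# so c(x) = g(x) - 1 - F^{-1}{(S - 1)^2 / S} in these units.
assert np.allclose(h - c, (S - 1.0) ** 2 / S)

# Where S - 1 is small (large y), the integrand (S - 1)^2 / S is smaller
# than c = (S - 1) / S by the factor S - 1, hence its much faster decay:
assert np.allclose(((S - 1.0) ** 2 / S) / c, S - 1.0)
```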
In the validation and results sections \ref{subsec:IO--ZI_validation} and \ref{sec:Results} we will observe that the functions $g(r)$ and $S(q)$ from our 'forward direction' MC simulations with twice as many particles ($N = 512$) are perfectly reproduced in the inverse MC runs with $N = 256$. The physical reason is that the particle interactions are short ranged. Each one of the \mbox{IHNCI} and \mbox{IPYI} runs reported in subsection~\ref{subsec:IO--ZI_validation} and in Sec.~\ref{sec:Results} took $\sim 2$ hours to complete. A \mbox{HNCI} or \mbox{PYI} run requires less than a second of runtime. \subsection{Validation of \mbox{IO--ZI} for two-dimensional systems}\label{subsec:IO--ZI_validation} \begin{figure} \centering \includegraphics[width=.85\columnwidth]{Figure02.eps} \vspace{0em} \caption{\label{fig:2D_HD_Tests} (Color online) The \mbox{HNCI} (open pink circles), \mbox{IHNCI} (filled pink circles), \mbox{PYI} (open blue diamonds) and \mbox{IPYI} (filled blue diamonds) methods are tested in their capabilities to reproduce the pair potential of hard disks in two dimensions (red horizontal lines), at packing fractions $\phi = 0.3, 0.4, 0.5$ and $0.6$ (bottom panel to top panel). } \end{figure} Comprehensive validation tests of the \mbox{IO--ZI} method in its \mbox{IHNCI} flavor have been reported in Ref.~\cite{Heinen2018} for systems with various types of particle pair potentials, but in three spatial dimensions only. Before applying \mbox{IHNCI} and \mbox{IPYI} in Sec.~\ref{sec:Results}, we validate both methods for the case of two-dimensional systems in the present subsection. Figure~\ref{fig:2D_HD_Tests} features the results from the \mbox{HNCI}, \mbox{IHNCI}, \mbox{PYI} and \mbox{IPYI} methods for four test cases in which the target functions $g_{T}(r)$ and $S_{T}(q)$ are those of non-overlapping hard disks in two dimensions, at packing fractions $\phi = 0.3, 0.4, 0.5$ and $0.6$.
The functions $g_{T}(r)$ and $S_{T}(q)$ were calculated via Eqs.~\eqref{eq:gr_extraction} and \eqref{eq:Sq_extraction} in MC simulations of $512$ disks with diameter $\sigma$, in two-dimensional square simulation boxes with periodic boundary conditions in both Cartesian directions. The interaction potential $u(r > \sigma) = 0$ is represented by the horizontal red lines in Fig.~\ref{fig:2D_HD_Tests}. Any deviation from these lines quantifies an inaccuracy of the \mbox{HNCI}, \mbox{IHNCI}, \mbox{PYI} or \mbox{IPYI} method. Note that \mbox{IHNCI} and \mbox{IPYI} are considerably more accurate than \mbox{HNCI} and \mbox{PYI} in all studied cases, with the exception of the densest system at $\phi = 0.6$, where \mbox{IHNCI} fails dramatically. For all other systems at packing fractions $\phi = 0.5$ or less, the error of the converged reduced potentials $\beta u(r)$ from \mbox{IHNCI} and \mbox{IPYI} stays below, or well below $0.1$ for practically all particle distances $r$. As one should expect, the \mbox{IPYI} method is more accurate than the \mbox{IHNCI} method (and, likewise, \mbox{PYI} is more accurate than \mbox{HNCI}) in the hard disk test cases. This is due to the well-known fact that the \mbox{PY} closure is more accurate for hard disks than the \mbox{HNC} closure \cite{Hansen_McDonald1986}. \begin{figure} \centering \vspace{1em} \includegraphics[width=.85\columnwidth]{Figure03.eps} \vspace{0em} \caption{\label{fig:2D_Freestyle_Tests} (Color online) Same as Fig.~\ref{fig:2D_HD_Tests}, but for a generic freehand-curve test potential (red curves), and at the packing fractions $\phi = 0.4, 0.5$ and $0.6$ (bottom panel to top panel). } \end{figure} For different interaction potentials, it is in general not known \textit{a priori} which one of the two closures -- HNC or PY -- is more accurate. 
We have therefore conducted a set of three additional validation tests of \mbox{IHNCI} and \mbox{IPYI} with two-dimensional systems at packing fractions $\phi = 0.4, 0.5$ and $0.6$, where the potential to be reproduced was taken from a digitized freehand curve that features strong repulsion at distances $r < \sigma$, an attractive region of maximum depth $-0.5 k_B T$ in the region $\sigma < r < 1.25 \sigma$, and a quickly decaying, slightly repulsive part at $r > 1.25 \sigma$. The results of these tests are shown in Fig.~\ref{fig:2D_Freestyle_Tests}, where the red solid curves represent the test potential. The target functions $g_{T}(r)$ and $S_{T}(q)$ for \mbox{HNCI}, \mbox{IHNCI}, \mbox{PYI} and \mbox{IPYI} were extracted from MC simulations of $512$ particles in two-dimensional square simulation boxes with periodic boundary conditions in both Cartesian directions, and with interactions described by the test potential. As a result, we note that \mbox{IHNCI} and \mbox{IPYI} are considerably more accurate in reproducing the test potential than \mbox{HNCI} and \mbox{PYI}, especially at the two higher packing fractions $\phi = 0.5$ and $\phi = 0.6$. \begin{figure*} \centering \includegraphics[width=0.92\textwidth]{Figure04.eps} \vspace{0em} \caption{\label{fig:2D_Freestyle_gr_Sq} (Color online) Radial distribution functions $g(r)$ and static structure factors $S(q)$ of the systems at packing fraction $\phi = 0.5$, and with reduced potentials plotted in the central panel of Fig.~\ref{fig:2D_Freestyle_Tests}.
Red solid curves represent the target correlation functions $g_{T}(r)$ and $S_{T}(q)$.} \end{figure*} The level of accuracy at which the target correlation functions $g_{T}(r)$ and $S_{T}(q)$ are reproduced by the \mbox{HNCI}, \mbox{IHNCI}, \mbox{PYI} and \mbox{IPYI} methods is demonstrated in Fig.~\ref{fig:2D_Freestyle_gr_Sq}, which features our results for the systems with reduced potentials plotted in the central panel of Fig.~\ref{fig:2D_Freestyle_Tests}: All four inversion methods result in correlation functions $g(r)$ and $S(q)$ that are nearly identical to $g_{T}(r)$ and $S_{T}(q)$, to a level at which the functions are almost indistinguishable within the stochastic noise floor of the simulation results. Nevertheless, close observation of the correlation functions (as in panels a, b, e and f of Fig.~\ref{fig:2D_Freestyle_gr_Sq}) reveals that \mbox{IHNCI} is ever so slightly more accurate in reproducing $g_{T}(r)$, $S_{T}(q)$ than \mbox{HNCI} is, and the same can be said about \mbox{IPYI} and its relation to \mbox{PYI}. The minuscule differences between the correlation functions from \mbox{IHNCI} and \mbox{HNCI}, or between \mbox{IPYI} and \mbox{PYI}, are crucial, as they translate into stark differences between the reduced potentials. This is a manifestation of the low practical usefulness of Henderson's theorem \cite{Henderson1974} as discussed in Refs.~\cite{Potestio2014, Heinen2018}: In equilibrium fluids with pairwise additive particle interactions a bijective functional mapping $\beta u(r) \leftrightarrow \left[ g(r), S(q) \right]$ is guaranteed to exist, but the mapping is highly nonlinear in general. Large differences in $\beta u(r)$ may correspond to tiny differences in $g(r)$ and $S(q)$, which severely complicates the calculation of $\beta u(r)$ from the correlation functions if these are only known within a statistical error margin. This explains the severe failure of simple methods such as \mbox{HNCI} or \mbox{PYI}.
More sophisticated methods such as \mbox{IHNCI}, \mbox{IPYI}, or alternative approaches such as pressure-corrected \mbox{IBI} \cite{Reith2003, Potestio2014} or multistate \mbox{IBI} \cite{Moore2014} are required instead. A few important characteristics of \mbox{IHNCI} and \mbox{IPYI} can be observed in both Figs.~\ref{fig:2D_HD_Tests} and \ref{fig:2D_Freestyle_Tests}: Both methods are very accurate at small packing fractions and they gradually lose accuracy when the packing fraction is increased. The packing fraction at which any one of the two methods starts to fail gravely can be estimated by comparison with the respective other method. In other words, for cases where \mbox{IHNCI} and \mbox{IPYI} predict similar results, we have strong empirical evidence for the accuracy of both methods. In the converse cases, where the results of \mbox{IHNCI} and \mbox{IPYI} differ markedly, neither of the two methods can be trusted. We make use of the reassuring comparison between \mbox{IHNCI} and \mbox{IPYI} throughout the results section~\ref{sec:Results}, where the effective potentials for granular particles are calculated by both methods in all cases. \section{Results}\label{sec:Results} \begin{figure*} \centering \includegraphics[width=.9\textwidth]{Figure05.eps} \vspace{0em} \caption{\label{fig:Main_Figure} (Color online) Effective potentials for inelastic granular beads at packing fractions $\phi = 0.2$ (bottom row of panels), $\phi = 0.4$ (central row of panels) and $\phi = 0.5$ (top row of panels), and for restitution coefficients $\epsilon = 0.9$ (left column of panels), $\epsilon = 0.7$ (central column of panels) and $\epsilon = 0.5$ (right column of panels). Our results from the HNCI (open pink circles), IHNCI (filled pink circles), PYI (open blue diamonds) and IPYI (filled blue diamonds) methods are shown. One-parametric functions of the form $\alpha / r^2$ (black curves) have been fitted to the IHNCI results in the range $1.25 < r / \sigma < 4$.
The horizontal axis range is $1 < r/\sigma < 4$ in every panel, and the vertical axis range varies by factors of $2$ from the bottom row to the center row, and from the center row to the top row of panels. } \end{figure*} Figure~\ref{fig:Main_Figure} features the main results of the present paper. The \mbox{HNCI}, \mbox{IHNCI}, \mbox{PYI} and \mbox{IPYI} results for the reduced potentials $\beta u(r)$ in nine two-dimensional equilibrium systems are plotted. The input (or target) functions $g_{T}(r)$ and $S_{T}(q)$ for the four inversion methods are those that were obtained from our Granular Dynamics simulations as described in Sec.~\ref{sec:Granul_Sim}, for the restitution coefficients $\epsilon = 0.9, 0.7$ and $0.5$ and the packing fractions $\phi = 0.2, 0.4$ and $0.5$. We have also conducted Granular Dynamics simulations at $\phi = 0.6$ but we refrain from showing the results for the effective potentials here, as each one of the four inversion methods clearly fails at $\phi = 0.6$. Our observations in Fig.~\ref{fig:Main_Figure} are the following: The \mbox{HNCI} and \mbox{PYI} results are in strong disagreement with each other and with the \mbox{IHNCI} and \mbox{IPYI} results, for all but the most dilute systems at $\phi = 0.2$ (panels g, h and i of Fig.~\ref{fig:Main_Figure}). Both \mbox{HNCI} and \mbox{PYI} are thus unreliable and should never be used in the determination of effective interaction potentials. Our \mbox{PYI} results for $\beta u(r)$ at $\phi = 0.4$ (panels d, e and f of Fig.~\ref{fig:Main_Figure}) resemble those in Fig.~4 of Ref.~\cite{Velazquez-Perez2016} as far as the shape of the functions is concerned, but the reduced potentials in Ref.~\cite{Velazquez-Perez2016} are more strongly attractive, with a minimum value around $-1$ to $-1.5$, which is approximately three times deeper than the minima of our results for $\beta u(r)$ at $\phi = 0.4$.
We presume that the reason for this quantitative disagreement might be a difference between the particle-wall restitution coefficients $\epsilon_{w}$ of our Granular Dynamics simulations and those that were used in Ref.~\cite{Velazquez-Perez2016}. If the value of $\epsilon_{w}$ was chosen smaller than our value of $\epsilon_{w} = 0.9$, then the effective, kinetic temperature of the Granular beads in Ref.~\cite{Velazquez-Perez2016} can be expected to be lower than in our case, which would be in line with a larger value of $\beta$. Unfortunately we are not in the position to test our presumption as the value of $\epsilon_{w}$ has not been reported in Ref.~\cite{Velazquez-Perez2016}. Our \mbox{IHNCI} and \mbox{IPYI} results are in close agreement with each other, in all nine cases shown in Fig.~\ref{fig:Main_Figure}, which serves as a reassurance for the fidelity of both methods. A non-trivial finding is that both \mbox{IHNCI} and \mbox{IPYI} are converging in all nine cases, and that the target correlation functions $g_{T}(r)$ and $S_{T}(q)$ of the out-of-equilibrium granular systems are reproduced by the two methods (as we have checked in every case). This implies that there is indeed an equilibrium system with correlation functions identical to those of the granular system in the entire parameter range $0.2 \leq \phi \leq 0.5$ and $0.5 \leq \epsilon \leq 0.9$. The effective potentials from \mbox{IHNCI} and \mbox{IPYI} are attractive and follow a simple, monotonically increasing and concave shape in all cases, with the exception of the system at $\phi = 0.2$ and $\epsilon = 0.9$ in panel g of Fig.~\ref{fig:Main_Figure}, where a gentle upturn of the reduced potentials is observed at very close particle proximity. We cannot be sure about the statistical significance of that upturn and refrain from over-interpreting it as a physical effect as it may just as well be a numerical artifact. 
That the effective interactions are attractive is physically quite intuitive: In a steady state with vanishing average particle currents, the normal velocity restitution causes an increase in particle number density around any tagged particle, just as attractive interactions do in the effective equilibrium system with the same particle correlation functions. \begin{figure} \centering \includegraphics[width=.88\columnwidth]{Figure06.eps} \vspace{0em} \caption{\label{fig:gr_Sq_granular} (Color online) Radial distribution function (upper panel) and structure factor (lower panel) for $\phi = 0.4$ and $\epsilon = 0.7$. Solid red curves are our Granular Dynamics \mbox{(GD)} results, extracted from the simulations via Eqs.~\eqref{eq:gr_extraction} and \eqref{eq:Sq_extraction}. The corresponding effective potentials are plotted in the central panel of Fig.~\ref{fig:Main_Figure}. } \end{figure} The correlation functions for the system at $\phi = 0.4$, $\epsilon = 0.7$ (central panel `e' in Fig.~\ref{fig:Main_Figure}, also featured in Fig.~\ref{fig:Snapshot}) are shown in Fig.~\ref{fig:gr_Sq_granular}, where an upturn of $S(q)$ at small values of $q$ supports our finding of attractive effective interactions, and $g(r < \sigma) \ll 1$ signals that the granular system is indeed nearly perfectly two-dimensional. \begin{figure} \centering \vspace{2em} \includegraphics[width=.82\columnwidth]{Figure07.eps} \vspace{0em} \caption{\label{fig:Tgranular_scaling} (Color online) Black circles filled in gray: Converged \mbox{IHNCI} potentials from Fig.~\ref{fig:Main_Figure}. Pink circles: The same data, rescaled with respect to the effective granular temperature and the effective inverse thermal energy $\beta^*$, as described in the main text. Every panel is for one packing fraction $\phi$. The results for different restitution coefficients $\epsilon$, corresponding to different values of $\beta^*$, are overlaid in the panels.
The vertical spread among the data narrows upon effective temperature rescaling.} \end{figure} Figure~\ref{fig:Tgranular_scaling} repeats all the converged \mbox{IHNCI} potentials from Fig.~\ref{fig:Main_Figure} as black circles filled in gray. Every panel of Fig.~\ref{fig:Tgranular_scaling} is for one of the three packing fractions $\phi = 0.2, 0.4$ and $0.5$, as indicated in the panels a -- c. The panels contain the results for three different restitution coefficients $\epsilon = 0.5, 0.7$ and $0.9$ in an overlaid manner, such that the spread among the symbols of equal type indicates the difference between the results for equal packing fraction and for varying coefficient of restitution. While the data for different $\epsilon$ appear to follow the same functional form (within statistical scatter), the observed vertical spread among the black/gray circles indicates that different values of $\epsilon$ correspond to different values of the effective inverse thermal energy $\beta$. Keeping in mind that all our data are for equal intensities of vertical shaking, this apparent spread in effective granular temperature is in line with the intuitive picture in which different restitution coefficients $\epsilon$ correspond to different amounts of kinetic energy dissipation in the steady state. As we have checked, the distributions of the Cartesian velocity components parallel to the confining plates are nearly Maxwellian, with slight deviations from the Maxwellian form for very slow and very fast velocities. This is in line with the observations that have been reported in several instances in the literature \cite{Olafsen1999, VanZon2004}.
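Extracting an effective granular temperature from the nearly Maxwellian velocity distributions amounts to a Gaussian variance fit. A minimal sketch with synthetic velocity samples; the distribution widths below are arbitrary illustration values, not results from our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_variance(velocities):
    """Variance of the Maxwellian (Gaussian) fit to a velocity histogram.
    For a Gaussian, the maximum-likelihood estimate of the fit variance
    equals the sample variance, so no explicit histogram fit is needed
    in this sketch."""
    return np.var(velocities)

# Synthetic in-plane velocity components standing in for two simulations at
# the same packing fraction but different restitution coefficients; the
# chosen widths are illustrative only.
v_ref = rng.normal(0.0, 1.0, 200_000)    # reference case, epsilon = 0.5
v_eps = rng.normal(0.0, 1.3, 200_000)    # some other restitution coefficient

# Ratio of fitted variances, used as the rescaling factor for beta:
beta_star_factor = fitted_variance(v_eps) / fitted_variance(v_ref)
```

The ratio of fitted variances is the factor applied to $\beta(\phi,\epsilon)$, with $\epsilon = 0.5$ serving as the reference state.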
An effective granular temperature was determined for each of the cases displayed in Figs.~\ref{fig:Main_Figure},~\ref{fig:Tgranular_scaling}, by fitting the Cartesian velocity histograms from our Granular Dynamics simulations to Maxwellian (Gaussian) functions, using the variance $\sigma(\phi, \epsilon)$ of the distribution (not to be confused with the particle diameter) for each given pair of values $(\phi, \epsilon)$ as a fit parameter. Assuming that the so-determined velocity variance is proportional to an effective granular temperature, we have rescaled the data with prefactors \mbox{$\beta^*(\phi, \epsilon) = \beta(\phi, \epsilon) \times \sigma(\phi, \epsilon) / \sigma(\phi, \epsilon = 0.5)$}. That is, we have scaled all data for equal $\phi$ and different $\epsilon$ to the effective granular temperature that corresponds to $\epsilon = 0.5$. The results can be observed in Fig.~\ref{fig:Tgranular_scaling} as the pink symbols, the spread among which is considerably less than the spread among the black/gray symbols, especially for the two higher packing fractions $\phi = 0.4$ and $0.5$ (panels b and a of Fig.~\ref{fig:Tgranular_scaling}, respectively). This confirms that $\sigma(\phi, \epsilon)$ is a good measure for an effective granular temperature. It also supports the conceptual idea of fitting the out-of-equilibrium, steady state particle pair correlations with those of equilibrium systems, as it is done in the \mbox{IO--ZI} methods. \begin{figure} \centering \vspace{2em} \includegraphics[width=.75\columnwidth]{Figure08.eps} \vspace{0em} \caption{\label{fig:rsquare} (Color online) Circles: Absolute value of the reduced effective IHNCI potential for $\phi = 0.2$, $\epsilon = 0.5$ (as in panel i of Fig.~\ref{fig:Main_Figure}) on a double logarithmic and a linear-logarithmic scale (inset). Red dashed curve: Two-parametric fit in which both the prefactor and the exponent were allowed to vary. Black solid curve: One-parametric fit with fixed exponent of $-2$, where only the prefactor was adjusted.
} \end{figure} The observed simple shapes of $\beta u(r)$ in Fig.~\ref{fig:Main_Figure} encourage an attempt to determine the functional form of the potential, at least for the most dilute case $\phi = 0.2$. To this end, in Fig.~\ref{fig:rsquare} we plot the absolute value of $\beta u(r)$ from \mbox{IHNCI}, for $\phi = 0.2$, $\epsilon = 0.5$ (as in panel i of Fig.~\ref{fig:Main_Figure}) on a double logarithmic scale and on a linear-logarithmic scale (inset of Fig.~\ref{fig:rsquare}). An exponential form of the potential is ruled out by the linear-logarithmic plot, where the \mbox{IHNCI} results exhibit a significant non-zero curvature. The double logarithmic plot reveals that the \mbox{IHNCI} result is compatible with the power law $\beta u(r) = \alpha ~ r^{-2}$, with a single adjustable parameter $\alpha$. If the exponent in the power law is allowed to vary in a non-linear regression-like fit, then an optimal exponent of $-1.97$ is obtained, providing strong support for the hypothesized exponent of $-2$. In the absence of an analytical theory for the shape of the potential, we do not want to over-interpret the results in Fig.~\ref{fig:rsquare} by stating that the effective potential is truly of the form $\beta u(r) = \alpha ~ r^{-2}$. We merely report that our data are compatible with such a power law, and that the theoretical justification or falsification of the power law is a rewarding task for future studies. \section{Conclusions}\label{sec:Conclusions} Our successful application of \mbox{IO--ZI} in its two flavors \mbox{IHNCI} and \mbox{IPYI} demonstrates that the particle correlation functions in quasi-two-dimensional vibrated granular systems can be mapped onto those of an equivalent, truly equilibrium system in a wide range of granular packing fractions and restitution coefficients. The resulting effective interaction potentials exhibit a simple shape that is in line with intuitive physical arguments.
At low packing fraction, there is strong empirical evidence for the one-parametric power-law form $\beta u(r) = \alpha ~ r^{-2}$ of the effective potential. Additional analytical-theoretical work is required to support or falsify the validity of the suggested power-law form of $\beta u(r)$. The simple \mbox{HNCI} and \mbox{PYI} methods should not be used in the determination of (effective) particle interaction potentials, as the results of these methods suffer from severe systematic errors unless the particle packing fraction is very small. Our work includes the first reported validation of \mbox{IHNCI} and \mbox{IPYI} for two-dimensional systems. Both methods are awaiting further applications in two- and three-dimensional granular, molecular and Brownian systems. \section*{Acknowledgements} We acknowledge financial support from CONACyT (Grant No. 237425/2014) and PRODEP (Grant No. 511-6/17-11852).
\section{Introduction} In the early 1980's, Alan McIntosh introduced the $H^{\infty}$-functional calculus as a refined version of the Dunford holomorphic functional calculus for unbounded sectorial operators (see the original paper \cite{McI} in the Hilbert space setting, its extensions to Banach spaces \cite{cdmy,KalWei}, and the monographs \cite{Haase, HNVW2}). This calculus is meant to be an operator-theoretic abstraction of the calculus of Fourier multipliers, which it recovers when applied to constant coefficient differential operators such as the Laplacian on $L^{2}({\mathbb R}^{d})$. One of its roles is to provide a framework for perturbation theory: deriving properties of the functional calculus of differential operators with varying coefficients from its constant coefficient counterpart. The quintessential example of such an application is given in \cite{AKMc}, where perturbation is first understood in the operator-theoretic sense, then in the harmonic analytic sense of (an extension of) Calder\'on--Zygmund theory. The combination of both perspectives leads to a striking boundedness result for the $H^{\infty}$-functional calculus of Dirac operators that includes the solution of the celebrated Kato square root problem (originally obtained in \cite{AHLMcT}).\\ In the present paper, we introduce an operator-theoretic framework which aims to generalise pseudo-differential calculus in the same way that the McIntosh $H^{\infty}$-functional calculus generalises Fourier multiplier calculus. Our starting point is the Weyl calculus of standard position and momentum operators $Q_{j}f(x)=x_{j}f(x)$ and $P_{j}f(x) = i\partial_{j}f(x)$, $j=1,\dots,d$, acting on their natural domains in $L^{2}({\mathbb R}^{d})$.
For Schwartz functions $a \in \mathscr{S}({\mathbb R}^{2d})$ one can define a bounded operator $a(Q,P)$ acting on $L^2({\mathbb R}^d)$ by \begin{equation} \label{intro:def} a(Q,P)f = \frac{1}{(2\pi)^{d}} \int _{{\mathbb R}^{2d}} \widehat{a}(u,v) e^{i(uQ+vP)}f\,{\rm d} u\,{\rm d} v, \quad f\in L^2({\mathbb R}^d). \end{equation} Here, $ e^{i(uQ+vP)}$ is understood as the Schr\"odinger representation \begin{equation}\label{intro:Schr} e^{i(uQ+vP)}f(x) := e^{\frac12iuv}e^{iuQ}e^{ivP}f(x) = e^{\frac{1}{2}iuv + iux} f(x+v) \end{equation} which unitarily represents the canonical commutation relations for the position and momentum operators on $L^2({\mathbb R}^d)$; the first identity is suggested by the Baker--Campbell--Hausdorff formula, noting that all higher commutators of $P$ and $Q$ vanish. As shown in \cite[Proposition 1, page 554]{Stein}, \eqref{intro:def} encodes the standard pseudo-differential calculus, in the sense that for every $a\in \mathscr{S}({\mathbb R}^{2d})$ there exists a unique $b \in \mathscr{S}({\mathbb R}^{2d})$ such that \begin{equation} \label{intro:pseudodef} a(Q,P)f(x) = \frac1{(2\pi)^{d/2}}\int _{{\mathbb R}^{d}} b(x,\xi)\widehat{f}(\xi)e^{i x\xi}\,{\rm d}\xi, \end{equation} the map $a \mapsto b$ being continuous with respect to various relevant topologies. The advantage of \eqref{intro:def} over \eqref{intro:pseudodef} is that the former makes sense for generators of bounded groups on arbitrary Banach spaces, whereas a representation such as \eqref{intro:pseudodef} is restricted to function spaces on which an appropriate Fourier transform can be defined.
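The Weyl relations encoded by the Schr\"odinger representation can be checked directly on a grid. Writing $W(u,v)f(x) = e^{\frac12 iuv + iux}f(x+v)$, a short computation from the displayed formula gives the composition law $W(u,v)W(u',v') = e^{\frac{i}{2}(u'v-uv')}\,W(u+u',v+v')$. The following sketch (our own illustration; grid size and test function are arbitrary choices) verifies this in $d=1$ on a periodic grid, where translation by a grid multiple is an exact index roll and integer $u$ keeps $x\mapsto e^{iux}$ periodic:

```python
import numpy as np

# Grid check of W(u, v) W(u', v') = exp(i(u'v - uv')/2) W(u + u', v + v')
# for the Schroedinger representation W(u, v)f(x) = exp(iuv/2 + iux) f(x + v).

n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = np.exp(np.sin(x)) + 1j * np.cos(2 * x)  # arbitrary smooth periodic test function

def W(u, k, g):
    """Apply W(u, v) with v = k*dx: phase factor times exact periodic shift."""
    return np.exp(0.5j * u * k * dx + 1j * u * x) * np.roll(g, -k)

u1, k1 = 3, 17
u2, k2 = -2, 5
lhs = W(u1, k1, W(u2, k2, f))
phase = np.exp(0.5j * (u2 * k1 - u1 * k2) * dx)
rhs = phase * W(u1 + u2, k1 + k2, f)
err = np.max(np.abs(lhs - rhs))
print(err)
```

The identity is purely algebraic (phases multiply), so the discrepancy is at the level of floating-point rounding.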
We thus take \eqref{intro:def} as our starting point for the development of a general theory.\\ We work with general Weyl pairs (see Section \ref{sec:Weyl} for a precise definition), i.e., two $d$-tuples $A=(A_{1}, \dots , A_{d})$ and $B=(B_{1}, \dots , B_{d})$ acting on a Banach space $X$ such that $iA_{1}, \dots , iA_{d}$ and $iB_{1}, \dots , iB_{d}$ generate bounded $C_0$-groups satisfying the canonical (integrated) commutation relations \begin{equation} \begin{aligned} e^{isA_j}e^{itA_k} &= e^{itA_k}e^{isA_j},\quad e^{isB_j}e^{itB_k} = e^{itB_k}e^{isB_j}\\ e^{isA_j}e^{itB_k} & = e^{-ist \delta_{jk}} e^{itB_k}e^{isA_j}. \end{aligned} \end{equation} In this context, \eqref{intro:Schr} is replaced by $$ e^{i(uA+vB)} := e^{\frac12iuv}e^{iuA}e^{ivB} := e^{\frac12iuv}\prod_{j=1}^d e^{iu_jA_j}\prod_{k=1}^d e^{iv_kB_k}.$$ The analogue of \eqref{intro:def}, \begin{equation} \label{intro:def2} a(A,B)f = \frac{1}{(2\pi)^{d}} \int _{{\mathbb R}^{2d}} \widehat{a}(u,v) e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v, \quad f\in X, \end{equation} defines an algebra homomorphism from $\mathscr{S}({\mathbb R}^{2d})$ endowed with the (non-comm\-ut\-ative) Moyal product \begin{equation*} \begin{aligned} \ & (a\,\#\,b)(x,\xi) \\ & \ \ = \frac{1}{\pi^{2d}} \int_{{\mathbb R}^{2d}}\int_{{\mathbb R}^{2d}} a(x+u, \xi+u') b(x+v, \xi+v') e^{-2i(vu'-uv')}\,{\rm d} u\,{\rm d} u'\,{\rm d} v\,{\rm d} v' \end{aligned} \end{equation*} into the space of bounded linear operators $\mathscr{L}(X)$. The Moyal product is used in pseudo-differential operator theory to deal with composition of symbols. In Section \ref{sec:Weyl}, we show that if this algebra homomorphism is continuous from $\mathscr{S}({\mathbb R}^{2d})$ endowed with the topology of the standard pseudo-differential class of symbols $S^{0}$ to $\mathscr{L}(X)$, then the calculus can be meaningfully extended from $\mathscr{S}({\mathbb R}^{2d})$ to $S^{0}$.
This is an analogue of the fundamental convergence lemma in the theory of $H^{\infty}$-functional calculus (see, e.g. \cite[Proposition 10.2.11]{HNVW2}), and is proved using asymptotic expansions of the Moyal product, typical of pseudo-differential calculus. Having such a convergence lemma shows that a pseudo-differential calculus for $(A,B)$ can be defined as soon as appropriate bounds on the operators defined in \eqref{intro:def2} are obtained.\\ One of the applications of pseudo-differential calculus is to study Schr\"odinger operators such as the harmonic oscillator defined by $\frac12\Delta f(x)- \frac12|x|^2f(x)$ on $L^{2}({\mathbb R}^{d})$. In our abstract situation, we show that it is possible to express, in Section \ref{sec:A2B2}, the semigroup generated by \begin{equation}\label{eq:defL} -L:= \frac12d - \frac12\sum_{j=1}^d (A_j^{2}+B_j^{2}) \end{equation} in terms of the Weyl calculus as \begin{equation}\label{eq:exptL}e^{-tL} = a_t(A,B), \end{equation} where $a_t\in \mathscr{S}({\mathbb R}^{2d})$ is the function \begin{align*} a_{t}(x,\xi) := \Big(1+\frac{1-e^{-t}}{1+e^{-t}}\Big)^{d}\exp\Bigl(-\frac{1-e^{-t}}{1+e^{-t}}(|x|^{2}+|\xi|^{2})\Bigr). \end{align*} For the pair of position and momentum operators associated with the Ornstein--Uhlenbeck operator (see Example \ref{ex:OU}), \eqref{eq:exptL} is a well known formula for the Orn\-stein--Uhlenbeck semigroup which goes back, at least, to \cite{unter}; see also \cite{NP}, where this formula was rediscovered by a reduction to Mehler's formula. Here we show, with a different proof, that it generally holds for the operators $L$ associated with Weyl pairs through \eqref{eq:defL}. As such, \eqref{eq:exptL} can be thought of as an abstract analogue of Mehler's formula for Weyl pairs. 
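As a consistency check of \eqref{eq:exptL} for the standard pair in $d=1$, one can compare the spectral trace of $e^{-tL}$ with the phase-space integral of $a_t$, using the classical trace identity $\operatorname{tr} a(Q,P) = (2\pi)^{-d}\iint a\,{\rm d}x\,{\rm d}\xi$ of the Weyl calculus (a fact we import from the classical theory, not a statement of this paper). Since the spectrum of $L = \frac12(Q^2+P^2)-\frac12$ is $\{0,1,2,\dots\}$, the trace of $e^{-tL}$ equals $1/(1-e^{-t})$:

```python
import numpy as np

# Numerical check: (2*pi)^{-1} * integral of a_t over R^2 should equal
# tr exp(-tL) = 1/(1 - exp(-t)) for the standard pair in d = 1.

t = 0.7
tau = (1 - np.exp(-t)) / (1 + np.exp(-t))  # equals tanh(t/2)

s = np.linspace(-12.0, 12.0, 2001)
h = s[1] - s[0]
X, Xi = np.meshgrid(s, s)
a_t = (1 + tau) * np.exp(-tau * (X**2 + Xi**2))

# Riemann sum is spectrally accurate here since a_t is a smooth Gaussian
# that vanishes (to machine precision) at the truncation boundary.
trace_weyl = a_t.sum() * h**2 / (2 * np.pi)
trace_spec = 1.0 / (1 - np.exp(-t))
print(trace_weyl, trace_spec)
```

Analytically, $(2\pi)^{-1}(1+\tau)\pi/\tau = (1+\tau)/(2\tau) = 1/(1-e^{-t})$ with $\tau=\tanh(t/2)$, so the two numbers agree.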
To obtain useful bounds for various functions of $L$ we use, in Section \ref{sec:transf}, the idea of transference to derive bounds for $a(A,B)$ acting on $X$ from corresponding bounds on the twisted convolution with $\widehat a$, viewed as an operator acting on $L^{p}({\mathbb R}^{2d};X)$. This idea can be traced back to Coifman and Weiss \cite{CW} and the form used here is inspired by the work of Hieber and Pr\"uss \cite{HiePru}, Haase \cite{H}, and Haase and Rozendaal \cite{HR}. They have shown that bounds on the Phillips functional calculus defined, for a generator $iG$ of a bounded $C_0$-group acting on a Banach space $X$, by $$ a(G)f = \frac{1}{\sqrt{2\pi}} \int _{{\mathbb R}} \widehat{a}(u)e^{iuG} f\,{\rm d} u, $$ can be obtained from bounds on convolution operators acting on $L^{2}({\mathbb R};X)$. The latter can then be proven using, for instance, Bourgain's UMD-valued Fourier multiplier theorem \cite{Bour86}, or its analogue for operator-valued kernels proven by Weis in \cite{Wei}. For twisted convolutions, however, no UMD-valued theory is yet available. Developing such a theory is bound to be difficult, given that the (scalar-valued) $L^p$-theory of twisted convolutions, as developed by Mauceri in \cite{mauceri80}, is already subtle (see also \cite{MPR}). For applications to spectral multiplier theorems for $L$, fortunately, we only need to handle highly specific twisted convolutions that can effectively be ``untwisted''. This is shown in Section \ref{sec:untwist}, where we prove $R$-sectoriality for the operator $L$ defined by \eqref{eq:defL} in UMD lattices $X$. In Section \ref{sec:Hinfty}, we use this result to deduce the boundedness of the $H^\infty$-calculus of $L$ on UMD lattices $X$ from the boundedness of the Weyl calculus of $(A,B)$. We also show that the angle of this calculus is best possible (namely $0$).
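The Phillips calculus displayed above can be sanity-checked numerically in the simplest case $G=Q$, multiplication by $x$ on $L^2({\mathbb R})$: then $e^{iuG}$ is multiplication by $e^{iux}$, and $a(G)$ must reduce to pointwise multiplication by $a$, by Fourier inversion. The symbol and discretisation below are our own illustrative choices:

```python
import numpy as np

# Phillips calculus check for G = Q (multiplication by x) on L^2(R).
# With the unitary Fourier transform hat{a}(u) = (2*pi)^{-1/2} int a(y) e^{-iuy} dy
# and a(y) = exp(-y^2), one has hat{a}(u) = 2^{-1/2} exp(-u^2/4), and
# a(G)f(x) = [(2*pi)^{-1/2} int hat{a}(u) e^{iux} du] * f(x) = a(x) f(x).
# We check the scalar factor multiplying f(x) at a few sample points.

x = np.linspace(-3.0, 3.0, 7)
u = np.linspace(-30.0, 30.0, 6001)
du = u[1] - u[0]
a_hat = np.exp(-u**2 / 4) / np.sqrt(2)

factor = (a_hat[None, :] * np.exp(1j * u[None, :] * x[:, None])).sum(axis=1) * du
factor /= np.sqrt(2 * np.pi)
err = np.max(np.abs(factor - np.exp(-x**2)))
print(err)
```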
Going even further, we apply the recent Kriegler--Weis approach to spectral multipliers developed in \cite{Krieg,KriegW} to show that this $H^{\infty}$-calculus can in fact be extended to a H\"ormander class of sufficiently smooth but not necessarily analytic functions. This is possible because the estimates obtained in Section \ref{sec:untwist} are precise enough for us to check the assumption of \cite{KriegW}. \\ The present paper provides a foundation for a generalised pseudo-differential operator theory in at least three directions: Witten pseudo-differential calculus, global pseudo-differential calculus on Lie groups, and rough pseudo-differential calculus. In the Witten pseudo-differential calculus, one is interested in pairs $(A,B)$ acting on $L^{p}({\mathbb R}^d,e^{-\phi(x)}\,{\rm d} x)$, such that, informally, the ``Witten Laplacian'' $L$ is of the form $h(A,B)$ for an appropriate ``Hamiltonian'' $h$ which is chosen so that the measure $e^{-\phi(x)}\,{\rm d} x$ is an invariant measure for $L$. We started such a theory in \cite{NP} in the most classical case where the choice $\phi(x)=\frac12|x|^{2}$ brings us back to the Gaussian setting and $L$ reduces to the Ornstein--Uhlenbeck operator. In work in progress, some of the results proven in the present paper are applied to extend the functional calculus theory of the Ornstein--Uhlenbeck operator in \cite{GMMST}. From the Lie group point of view, the present paper can be seen as an approach to (sub)pseudo-differential calculus on $L^p(H)$, where $H$ is the Heisenberg group. The prefix ``sub" here indicates that we consider a pseudo-differential calculus that extends the Fourier multiplier calculus given by the functional calculus of the sub-Laplacian (removing this prefix by extending the present paper to add $\partial_{t}$ to the joint functional calculus of the Weyl pair $(X,Y)$, in the spirit of \cite{stric}, would be interesting). 
In the setting where $X$ is an $L^p$-space, a Lie group representation approach to some of the results in Section \ref{sec:A2B2} has already been pursued in \cite{DG,DGE} for more general higher-order commutator relations; see also \cite{ElstRob94, ElstRob98}. Building on earlier work in \cite{ElstRob94}, in the setting of $L^p$-spaces the boundedness of the $H^\infty$-calculus of $\varepsilon+L$ for $\varepsilon>0$ has been proved in \cite{Smul} by more direct transference arguments. The present operator-theoretic perspective could help construct global pseudo-differential calculi on nilpotent Lie groups. Such a theory is currently being developed by Ruzhansky, Fischer, and their collaborators (see, in particular, \cite{FR}). The theory of pseudo-differential operators on the Heisenberg group is developed in \cite{BFKG}. Last but not least, we aim to perturb the Weyl calculus, both from an operator-theoretic and a harmonic analytic perspective, to eventually treat pairs of the form \begin{align*} Q_{B,j}f(x) &= \frac{1}{2}\Bigl(\partial_{j}f(x)+x_{j}f(x) - \sum _{k=1} ^{d}\beta_{kj}(x)(\partial_{k}f(x)-x_{k}f(x))\Bigr), \\ P_{B,j}f(x) &= \frac{1}{2i}\Bigl(\partial_{j}f(x)+x_{j}f(x) + \sum _{k=1} ^{d}\beta_{kj}(x)(\partial_{k}f(x)-x_{k}f(x))\Bigr), \end{align*} where both the matrix $B=(\beta_{kj})_{k,j=1}^d$ and its inverse have bounded measurable coefficients. Notice that we recover the standard pair with $B=I$. These are analogues of the perturbations of Dirac operators considered in \cite{AKMc}. Since the latter can be interpreted as a rough Fourier multiplier theory, a corresponding theory for $(Q_{B},P_{B})$ could be interpreted as a rough pseudo-differential calculus.\\ \noindent {\em Acknowledgment} We thank Tom ter Elst, Markus Haase, Sean Harris, and Javier Parcet for interesting discussions. We dedicate this paper to the memory of Alan McIntosh (1942-2016). 
His philosophy of using operator theory as a means to extend harmonic analysis towards rougher settings very much underpins the present research. Alan McIntosh was a close friend of Joe Moyal, whose phase space perspective on quantum mechanics gives the non-commutative structure on an appropriate algebra of functions that we use here to extend McIntosh's (commutative) functional calculus. We thus like to think of the present paper as establishing a posthumous connection between the works of these two friends.\\ {\em Notation and conventions.} All vector spaces are complex unless the contrary is stated. To be in line with standard notation in pseudo-differential calculus, we reserve the notation $(x,\xi)$ for the general point in ${\mathbb R}^{2d} = {\mathbb R}^d\times{\mathbb R}^d$. Because most applications are concerned with function spaces anyway, general elements in a Banach space $X$ will be denoted by $f,g,\dots$. For $\xi\in {\mathbb R}^d$ we write $\langle \xi\rangle = (1+|\xi|^2)^{1/2}$. Standard multi-index notation is used. We let ${\mathbb N} = \{0,1,2,\dots\}$. When $A=(A_1,\dots,A_d)$ and $B = (B_1,\dots,B_d)$ are $d$-tuples of linear operators with domains ${\mathsf{D}}(A_j)$ and ${\mathsf{D}}(B_j)$ respectively, we set $\Dom(A) = \bigcap_{j=1}^d \Dom(A_j)$ and $\Dom(B)=\bigcap_{j=1}^d \Dom(B_j).$ For $u,v\in {\mathbb R}^d$ we write $uv:= \sum_{j=1}^d u_jv_j$ and define the operators $uA$ and $vB$, with domains ${\mathsf{D}}(A)$ and ${\mathsf{D}}(B)$ respectively, by $$uA= \sum_{j=1}^d u_jA_j, \quad vB= \sum_{j=1}^d v_jB_j. $$ We write $a\lesssim_{p_1,p_2,\dots} b$ to express that there exists a constant $C$, depending on the data $p_1,p_2,\dots$, but not on any other relevant data, such that $a\le Cb$. If the constant is independent of all relevant data we write $a\lesssim b$. \section{Preliminaries} We assume familiarity with the basic theory of pseudo-differential operators and semigroup theory.
Good sources for our purposes are \cite{abels, Stein} and \cite{EngNag}. Here we collect some terminology and results concerning UMD spaces, $R$-boundedness, and the $H^\infty$-calculus of sectorial operators. Our main references are \cite{HNVW1, HNVW2}; other sources for these notions are, respectively, \cite{Pisier}, \cite{DHP, KunWei}, \cite{Haase, Haase-ISEM, KunWei}. \subsection{UMD spaces} A Banach space $X$ is said to have the {\em UMD$_p$ property}, where $1<p<\infty$, if there exists a finite constant $C\ge 0$ such that whenever $(m_n)_{n=1}^N$ is a finite $X$-valued martingale difference sequence (defined on a measure space which may vary from case to case and whose length $N$ may vary as well) and $(\epsilon_n)_{n=1}^N$ is a sequence of scalars of modulus one, we have $$ {\mathbb E} \Big\Vert \sum_{n=1}^N \epsilon_n m_n \Big\Vert^p \le C^p {\mathbb E} \Big\Vert \sum_{n=1}^N m_n \Big\Vert^p.$$ It can be shown that if $X$ has the UMD$_p$ property for some $1<p<\infty$, then it has this property for all $1<p<\infty$. Accordingly it makes sense to call a Banach space a {\em UMD space} if it has the UMD$_p$ property for some (equivalently, for all) $1<p<\infty$. In some treatments only scalars $\epsilon_n = \pm 1$ are used. This leads to an equivalent definition, the only difference being that the numerical value of the constant may change (see \cite[Proposition 4.2.10]{HNVW1}). The importance of the class of UMD spaces derives from a celebrated theorem due to Burkholder and Bourgain \cite{Bour83, Burk83} which characterises it as precisely the class of Banach spaces $X$ for which the Hilbert transform extends to a bounded operator on $L^p({\mathbb R};X)$ for some (equivalently, for all) $1<p<\infty$. This, in turn, allows one to prove the boundedness in $L^p({\mathbb R}^d;X)$ of very general classes of singular integral operators. For some of the sharpest results presently available see \cite{Hyt-vvTb}.
In particular every Calder\'on--Zygmund operator with a kernel satisfying the so-called ``standard estimates'' is bounded on $L^p({\mathbb R}^d;X)$ for all UMD spaces $X$ and exponents $1 < p < \infty$. Examples of UMD spaces include Hilbert spaces, the $L^p$-spaces with $1<p<\infty$, and the Schatten classes $\mathscr{C}_p$ with $1<p<\infty$. The class of UMD spaces is stable under passing to equivalent norms and taking closed subspaces, quotients, and $\ell^p$-direct sums. If $X$ is UMD and $1<p<\infty$, then also $L^p(M,\mu;X)$ is UMD, for any measure space $(M,\mu)$. As a consequence, all ``classical'' function spaces used in Analysis such as Sobolev spaces, Besov spaces, and Triebel--Lizorkin spaces are UMD as long as the exponents in their definitions are within the reflexive range. UMD spaces are reflexive, and therefore spaces such as $c_0$, $\ell^1$, $\ell^\infty$, $C(K)$, $L^1(M,\mu)$, $L^\infty(M,\mu)$ are not UMD (with the exception of the trivial cases when the latter three are finite-dimensional). \subsection{$R$-boundedness} A {\em Rademacher sequence} is a sequence of independent random variables $(\varepsilon_n)_{n=1}^\infty$, defined on some probability space, the values of which are uniformly distributed in the set of scalars of modulus one. Thus if the scalar field is real, Rademacher variables take values $\pm 1$ with equal probability $\frac12$, and if the scalar field is complex their values are uniformly distributed in the unit circle in the complex plane. Let $X$ and $Y$ be Banach spaces and let $\mathscr{L}(X,Y)$ denote the space of all bounded linear operators from $X$ into $Y$.
A subset $\mathscr{T}$ of $\mathscr{L}(X,Y)$ is said to be {\em $R_p$-bounded}, where $0<p<\infty$, if there exists a finite constant $C\ge 0$ such that for all finite sequences $T_1,\dots,T_N\in \mathscr{T}$ and $x_1,\dots,x_N\in X$ (where $N$ may vary) one has $$ {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n T_n x_n\Big\Vert^p \le C^p {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n x_n\Big\Vert^p.$$ The least admissible constant $C$ is called the {\em $R_p$-bound} of $\mathscr{T}$ and is denoted by $\mathscr{R}_p(\mathscr{T})$. By the Kahane--Khintchine inequality (see \cite[Theorem 6.2.4]{HNVW2}), if $\mathscr{T}$ is $R_p$-bounded for some $0<p<\infty$, then it is $R_p$-bounded for all $0<p<\infty$, and for all $0<p<\infty$ we have $$ \mathscr{R}_p(\mathscr{T})\eqsim_{p} \mathscr{R}(\mathscr{T}),$$ where by default we write $\mathscr{R}(\mathscr{T}):= \mathscr{R}_2(\mathscr{T})$. Accordingly it makes sense to call $\mathscr{T}$ {\em $R$-bounded} if it is $R_p$-bounded for some (equivalently, for all) $0<p<\infty$. In some treatments real-valued Rademacher variables (random variables taking the values $\pm 1$ with equal probability) are used. This leads to an equivalent definition, the only difference being that the numerical value of the $R$-bounds may change (see \cite[Proposition 6.1.9]{HNVW2}). Upon replacing the role of Rademacher variables by Gaussian variables, one arrives at the notion of {\em $\gamma$-boundedness}. Every $R$-bounded set of operators is $\gamma$-bounded (by a simple randomisation argument, see \cite[Theorem 8.1.3]{HNVW2}), and every $\gamma$-bounded set is uniformly bounded (take $N=1$). If $X$ has finite cotype, every $\gamma$-bounded family in $\mathscr{L}(X,Y)$ is $R$-bounded, and if $X$ has cotype $2$ and $Y$ has type $2$ (in particular, if $X$ and $Y$ are isomorphic to Hilbert spaces), then every uniformly bounded family in $\mathscr{L}(X,Y)$ is $R$-bounded (see \cite[Theorem 8.1.3]{HNVW2}).
The Kahane contraction principle (see \cite[Theorem 6.1.13]{HNVW2}) implies that bounded subsets of the scalar field, viewed as bounded operators on a Banach space $X$ through scalar multiplication, are $R$-bounded. $R$-bounded sets enjoy many permanence properties; in particular they are closed under taking convex hulls and weak operator closure (see \cite[Sections 8.1.e, 8.4.a, 8.5.a]{HNVW2}). The notion of $R$-boundedness originates from Harmonic Analysis, where it captures the essence of so-called ``square function estimates''. As such it goes back to the works \cite{BG, Bour86}; its first systematic study is \cite{CPSW}. Rather than explaining this aspect in full detail (for this we refer to \cite[Chapter 8]{HNVW2}) we mention (see \cite[Proposition 6.3.3]{HNVW2}) that if $X = L^q(M,\mu)$ with $1\le q<\infty$, then for all $0<p<\infty$ one has the equivalence of norms $$ \Big({\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n f_n\Big\Vert_{L^q(M,\mu)}^p\Big)^{1/p} \eqsim_{p,q} \Big\Vert \Big( \sum_{n=1}^N |f_n|^2\Big)^{1/2}\Big\Vert_{L^q(M,\mu)} $$ with implied constants that depend only on $p$ and $q$. Thus, in the context of $L^q$-spaces, $R$-boundedness reduces to a square function estimate. \subsection{$H^\infty$-calculus}\label{subsec:Hinfty} Let $X$ be a Banach space and let $0<\sigma<\pi$. A closed operator $L: {\mathsf{D}}(L)\subseteq X\to X$ (with ${\mathsf{D}}(L)$ the {\em domain} of $L$) is said to be {\em $\sigma$-sectorial} if its spectrum is contained in the closure of the sector $$\Sigma_\sigma = \{z\in {\mathbb C}:\ z\not=0, \ |\arg z| < \sigma\}$$ (arguments are taken in $(-\pi,\pi]$) and the resolvent bound \begin{align}\label{eq:sectorial} \Vert R(z,L)\Vert \le \frac{M}{|z|} \end{align} holds on the complement of $\overline{\Sigma_\sigma}$, for some finite constant $M\ge 0$. Here, $R(z,L) := (z-L)^{-1}$ is the resolvent operator. An operator is said to be {\em sectorial} if it is $\sigma$-sectorial for some $0<\sigma<\pi$.
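The square function equivalence for $L^q$-spaces quoted above is easy to probe numerically. The sketch below (an illustration with arbitrary parameters of our own choosing, not tied to any result of the paper) compares a Monte Carlo estimate of the randomised norm with the square function norm for $X=\ell^q_m$, i.e., $L^q$ over counting measure on $m$ points:

```python
import numpy as np

# Monte Carlo probe of the equivalence
#   (E || sum_n eps_n f_n ||_q^p)^{1/p}  ~  || (sum_n |f_n|^2)^{1/2} ||_q
# for X = L^q over counting measure on m points, with p = 2 and q = 4.

rng = np.random.default_rng(1)
p, q = 2.0, 4.0
m, N = 50, 20
f = rng.normal(size=(N, m))             # N "functions" f_n in l^q_m

trials = 4000
eps = rng.choice([-1.0, 1.0], size=(trials, N))
sums = eps @ f                          # each row: sum_n eps_n f_n
lq = (np.abs(sums) ** q).sum(axis=1) ** (1 / q)
lhs = np.mean(lq ** p) ** (1 / p)       # randomised norm

rhs = ((f ** 2).sum(axis=0) ** (q / 2)).sum() ** (1 / q)  # square function norm
ratio = lhs / rhs
print(ratio)
```

The ratio stays within modest two-sided bounds, as the equivalence (with constants depending only on $p$ and $q$) predicts.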
The number $$\omega(L):= \inf\bigl\{\sigma\in (0,\pi): \ \hbox{$L$ is $\sigma$-sectorial}\bigr\}$$ is called the {\em angle of sectoriality of $L$}. For $0<\theta<\pi$ let $H^1(\Sigma_\theta)$ be the Banach space of all holomorphic functions $\phi:\Sigma_\theta\to {\mathbb C}$ satisfying $$ \Vert \phi\Vert_{H^1(\Sigma_\theta)}:= \sup_{0<\nu<\theta} \frac1{2\pi} \int_{\partial \Sigma_\nu} |\phi(z)|\,\frac{|{\rm d}z|}{|z|} <\infty.$$ If $L$ is $\sigma$-sectorial, then for any $\phi\in H^1(\Sigma_\theta)$ with $\sigma<\theta<\pi$ we may define \begin{equation}\label{eq:Dunford} \phi(L) := \frac1{2\pi i} \int_{\partial \Sigma_\nu} \phi(z) R(z,L)\, {\rm d}z, \end{equation} taking $\sigma<\nu<\theta$ with the understanding that $\partial \Sigma_\nu$ is downwards oriented. This integral converges absolutely and defines a bounded operator of norm at most $M\Vert \phi\Vert_{H^1(\Sigma_\theta)}$, where $M$ is the constant of \eqref{eq:sectorial}. It is a consequence of Cauchy's theorem and \cite[Proposition H.2.5]{HNVW2} that the definition of $\phi(L)$ is independent of the choice of the angle $\nu$. If we were to replace the role of $H^1(\Sigma_\theta)$ by the space $H^\infty(\Sigma_\theta)$ of all bounded holomorphic functions on $\Sigma_\theta$, we would run into the difficulty that the corresponding Dunford integral in \eqref{eq:Dunford} becomes singular at both the origin and at infinity. To handle this situation, a sectorial operator $L$ is said to have a {\em bounded $H^\infty(\Sigma_\sigma)$-calculus}, where $\omega(L)<\sigma<\pi$, if there exists a finite constant $K\ge 0$ such that $$ \Vert \phi(L)\Vert \le K \Vert \phi\Vert_{H^\infty(\Sigma_\sigma)}$$ for all $\phi\in H^1(\Sigma_\sigma)\cap H^\infty(\Sigma_\sigma)$. A sectorial operator $L$ is said to have a {\em bounded $H^\infty$-calculus} if it has a bounded $H^\infty(\Sigma_\sigma)$-calculus for some $\omega(L)<\sigma<\pi$.
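The Dunford integral \eqref{eq:Dunford} can be illustrated on a toy example: a diagonal matrix with positive entries is sectorial of angle $0$, and $\phi(z)=z/(1+z)^2$ belongs to $H^1(\Sigma_\theta)\cap H^\infty(\Sigma_\theta)$ for every $\theta<\pi$. The numerical contour integration below (our own sketch; the angle, grid, and eigenvalues are arbitrary choices) reproduces $\phi$ evaluated at the eigenvalues:

```python
import numpy as np

# Dunford integral phi(L) = (2*pi*i)^{-1} * int_{bd Sigma_nu} phi(z) R(z, L) dz
# for L = diag(1, 2, 3.5) and phi(z) = z/(1+z)^2, with nu = pi/4 and the
# boundary oriented downwards: upper ray traversed towards 0, lower ray away.

lam = np.array([1.0, 2.0, 3.5])
nu = np.pi / 4

def phi(z):
    return z / (1 + z) ** 2

s = np.linspace(-14.0, 14.0, 6001)      # parametrise z = exp(s) * exp(+/- i*nu)
ds = s[1] - s[0]
r = np.exp(s)

acc = np.zeros_like(lam, dtype=complex)
for sign, orient in ((+1, -1.0), (-1, +1.0)):
    z = r * np.exp(sign * 1j * nu)
    dz = z * ds                          # dz/ds = z along each ray
    integrand = phi(z)[:, None] / (z[:, None] - lam[None, :])
    acc += orient * (integrand * dz[:, None]).sum(axis=0)

phi_L = acc / (2j * np.pi)               # diagonal of phi(L)
err = np.max(np.abs(phi_L - phi(lam)))
print(err)
```

Since the only singularity of $\phi$ sits at $z=-1$, outside the contour, the integral picks up exactly the resolvent poles at the eigenvalues.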
The number $$\omega_{H^\infty}(L):= \inf\bigl\{\sigma\in (\omega(L),\pi): \ \hbox{$L$ has a bounded $H^\infty(\Sigma_\sigma)$-calculus}\bigr\} $$ is called the {\em angle of the $H^\infty$-calculus} of $L$. If $L$ is densely defined, has dense range, and has a bounded $H^\infty(\Sigma_\sigma)$-calculus, the {\em McIntosh convergence lemma} \cite{McI} (see also \cite[Theorem 10.2.13]{HNVW2}) allows one to uniquely define, for every $\phi\in H^\infty(\Sigma_\sigma)$, a bounded operator $\phi(L)$ by $$\phi(L)f:= \lim_{n\to\infty} \phi_n(L)f, \quad f\in X,$$ where $(\phi_n)_{n\ge 1}$ is any sequence in $H^1(\Sigma_\sigma)\cap H^\infty(\Sigma_\sigma)$ that is uniformly bounded and converges to $\phi$ pointwise on $\Sigma_\sigma$. The prime example of a sectorial operator with a bounded $H^\infty$-calculus (of angle $0$) is the negative Laplacian $L = -\Delta$ on $L^p({\mathbb R}^d;X)$ for any UMD space $X$ and $1<p<\infty$. More generally, under minor regularity assumptions on the coefficients, uniformly elliptic operators on sufficiently regular domains $D$ in ${\mathbb R}^d$ have bounded $H^\infty$-calculi on $L^p(D;X)$ under various boundary conditions. Examples with precise formulations are reviewed in \cite{DHP, HNVW2, KunWei}. There is an interesting interplay between $R$-boundedness and $H^\infty$-calculi. Let us say that a closed operator $L$ is {\em $\sigma$-$R$-sectorial} if $\sigma(L)$ is contained in $\overline{\Sigma_\sigma}$ and the set $$\bigl\{z R(z,L): \ z\in \complement\overline{\Sigma_\sigma}\bigr\}$$ is $R$-bounded. Since $R$-boundedness implies boundedness, every $\sigma$-$R$-sectorial operator is $\sigma$-sectorial. The operator $L$ is said to be {\em $R$-sectorial} if it is $\sigma$-$R$-sectorial for some $0<\sigma<\pi$. The infimum $$ \omega_R(L) := \inf\bigl\{\sigma\in (\omega(L),\pi): \ \hbox{$L$ is $\sigma$-$R$-sectorial}\bigr\}$$ is called the {\em angle of $R$-sectoriality} of $L$.
It was shown by Kalton and Weis \cite{KalWei} (see also \cite[Corollary 10.4.10]{HNVW2}) that if $L$ is a sectorial operator with a bounded $H^\infty$-calculus on a UMD Banach space $X$ (actually a slightly weaker assumption will do for this purpose, but this is not relevant to us here), then $L$ is $R$-sectorial and we have \begin{equation}\label{eq:KW-Hinfty} \omega_{R}(L) = \omega_{H^\infty}(L). \end{equation} In this context it is interesting to observe that for $R$-sectorial operators $L$ it may happen that $\omega_R(L)> \omega(L)$; see \cite{KLW}. \section{Weyl pairs}\label{sec:Weyl} Let $A=(A_1,\dots,A_d)$ and $B = (B_1,\dots,B_d)$ be two $d$-tuples of closed and densely defined operators acting in a complex Banach space $X$. We assume that each of the operators $iA_j$ and $iB_j$ generates a uniformly bounded $C_0$-group on $X$. We denote these groups by $(e^{itA_j})_{t\in {\mathbb R}}$ and $(e^{itB_j})_{t\in {\mathbb R}}$, respectively. \begin{definition} Under the above assumptions, the pair $(A,B)$ will be called a {\em Weyl pair} of dimension $d$ if the (integrated) {\em canonical commutation relations} hold for all $s,t\in {\mathbb R}$ and $1\le j,k\le d$: \begin{equation}\label{eq:CCR} \begin{aligned} e^{isA_j}e^{itA_k} &= e^{itA_k}e^{isA_j}\\ e^{isB_j}e^{itB_k} &= e^{itB_k}e^{isB_j}\\ e^{isA_j}e^{itB_k} & = e^{-ist \delta_{jk}} e^{itB_k}e^{isA_j} \end{aligned} \end{equation} where $\delta_{jk}$ is the usual Kronecker symbol. \end{definition} Being a Weyl pair is an isomorphic notion, in that it is insensitive to changing to an equivalent norm. More generally, if $(A,B)$ is a Weyl pair on $X$ and $T:X\to Y$ is an isomorphism of Banach spaces, then $(TAT^{-1}, TBT^{-1})$ is a Weyl pair on $Y$. This is of course trivial, but it is of some interest in connection with the next example, for on Hilbert spaces it easily provides examples of non-selfadjoint Weyl pairs. 
\begin{example}[Standard position/momentum pair]\label{ex:HO} On $L^p({\mathbb R}^d)$, $1\le p<\infty$, the position and momentum operators $Q_j$ and $P_j$, $1\le j\le d$, are defined by \begin{align*} Q_jf(x) = x_jf(x), \quad P_jf(x) = \frac1i \partial_j f(x), \quad x\in {\mathbb R}^d. \end{align*} With their natural domains, it is easily checked that they define a Weyl pair $(Q,P)$. Indeed, $iQ_j$ generates the multiplication group on $L^p({\mathbb R}^d)$ given by $$e^{itQ_j}g(x) = e^{itx}g(x), \quad x\in {\mathbb R}^d, \ t\in {\mathbb R},$$ and $iP_j$ generates the translation group on $L^{p}({\mathbb R}^d)$ given by $$e^{itP_j}g(x) = g(x+te_j), \quad x\in{\mathbb R}^d, \ t\in {\mathbb R},$$ with $e_j$ the $j$-th unit vector of ${\mathbb R}^d$. The commutation relations are easily checked. The position/momentum pair is sometimes referred to as the {\em standard pair} and provides the main example of a Weyl pair. A well-known uniqueness result of Stone and von Neumann (see, e.g., \cite[Chapter 14]{Hall} or \cite[Section 4.3]{Put}) asserts that every Weyl pair of dimension $d$ of self-adjoint operators in a Hilbert space is unitarily equivalent to a direct sum of copies of standard pairs on $L^2({\mathbb R}^d)$. \end{example} \begin{example}[Gaussian position/momentum pair]\label{ex:OU} Let us denote by $\gamma$ the standard Gaussian measure on ${\mathbb R}^d$. 
On $L^p({\mathbb R}^d,\gamma)$, $1\le p<\infty$, we consider the position and momentum pair $(Q^\gamma,P^\gamma)$ given by $Q^\gamma = (Q^\gamma_1,\dots,Q^\gamma_d)$ and $P^\gamma = (P^\gamma_1,\dots,P^\gamma_d)$ defined by $$ Q^\gamma_j :=\frac1{\sqrt2}(a_j + a^\dagger_j), \quad P^\gamma_j :=\frac1{i\sqrt{2}}(a_j - a_j^\dagger),$$ where the annihilation and creation operators $a_j$ and $a_j^\dagger$ are defined by $$ a_j = \partial_j, \quad a_j^\dagger = -\partial_j+x_j.$$ Thus, for $f\in C_{\rm c}^1({\mathbb R}^d)$, $$Q^\gamma_jf(x) = \frac1{\sqrt 2} x_j f(x), \quad P^\gamma_jf(x) = \frac1{i\sqrt 2}(2\partial_j - x_j) f(x).$$ It is readily verified that the pair $(Q^\gamma,P^\gamma)$ satisfies the canonical commutation relations. As we will explain in a moment, for $p=2$ this pair is unitarily equivalent to the standard pair. It is clear that the operators $iQ^\gamma_{j}$ generate $C_0$-contraction groups of multiplication operators on $L^p({\mathbb R}^d,\gamma)$ for all $1\le p<\infty$. On the other hand, the operators $iP^\gamma_{j}$ generate bounded $C_0$-groups on $L^p({\mathbb R}^d,\gamma)$ if and only if $p=2$. Thus $(Q^\gamma,P^\gamma)$ is a Weyl pair on $L^p({\mathbb R}^d,\gamma)$ if and only if $p=2$. This can be deduced from Theorem \ref{thm:A2B2} below as follows. By a result of \cite{NP}, in $L^2({\mathbb R}^d,\gamma)$ the operator $\frac12((Q^\gamma)^2+(P^\gamma)^2)-\frac12d$ considered in Theorem \ref{thm:A2B2} is the Ornstein--Uhlenbeck operator. If $(Q^\gamma,P^\gamma)$ were to be a Weyl pair in $L^p({\mathbb R}^d,\gamma)$ for some $p\in (1,\infty)\setminus\{2\}$, the theorem would imply that the Ornstein--Uhlenbeck semigroup extends holomorphically to the right half-plane $\{\Re z>0\}$, and this is well known to be false. In fact the optimal angle $\theta_p$ of holomorphy for the Ornstein--Uhlenbeck semigroup on $L^p({\mathbb R}^d,\gamma)$ is known to be $\cos\theta_p = \frac{|p-2|}{2\sqrt{p-1}}$ (see \cite{GMMST}).
The failure of $iP^\gamma_j$ to generate a bounded $C_0$-group on $L^p({\mathbb R}^d,\gamma)$ for $p\not=2$ can also be easily checked by hand. Let $m({\rm d} x) = {\rm d} x/(2\pi)^{d/2}$ denote the normalised Lebesgue measure on ${\mathbb R}^d$. On $L^2({\mathbb R}^d,\gamma)$ the group generated by $iP^\gamma_j$ is given by $e^{itP^\gamma_j} = U^{-1}T_j(t)U$, where $T_j(t)$ is the translation group on $L^2({\mathbb R}^d,m)$ in the $j$-th direction and $U : L^2({\mathbb R}^d,\gamma) \to L^2({\mathbb R}^d,m)$ is the unitary mapping given by $U = \delta\circ E$ with $$ Ef(x) = e^{-\frac14|x|^2}f(x), \quad \delta f(x) := 2^{d/4} f\bigl({\sqrt 2}x\bigr).$$ An easy computation shows that, in $L^2({\mathbb R}^d,\gamma)$, the operators $e^{itP^\gamma_j}$ are given by \begin{align*} e^{itP^\gamma_j}f(x) = e^{-\frac1{\sqrt 2}x_jt - \frac12 t^2}f(x+t\sqrt 2\,e_j). \end{align*} Then, after a change of variables, $$ \Vert e^{itP^\gamma_j}f\Vert_p^p = \int_{{\mathbb R}^d} e^{(\frac12-\frac{p}{4})(2{\sqrt 2}x_jt -2t^2)}|f(x)|^p\,{\rm d} \gamma(x).$$ For $p\not=2$ and $t\not=0$ the exponential weight in this identity is unbounded in $x_j$ (for $t>0$, as $x_j\to\infty$ when $p\in [1,2)$ and as $x_j\to-\infty$ when $p\in (2,\infty)$), and it follows that for all $p\in[1,\infty)$ with $p\not=2$ the operators $e^{itP^\gamma_j}$, $t\not=0$, fail to extend to bounded operators on $L^p({\mathbb R}^d,\gamma)$. \end{example} \begin{example}[Modified Gaussian position/momentum pair] It is of some interest to note that the pair $(Q^\gamma,P^\gamma)$ of the previous example does form a Weyl pair on $L^p({\mathbb R}^d,\gamma_{2/p})$ for all $p\in [1,2]$, where $\gamma_\tau({\rm d} x) = (2\pi\tau)^{-d/2}e^{-|x|^2/2\tau}\,{\rm d} x$. This is simply because with this scaling of the measure the mapping $U$ considered above defines an isomorphism from $L^p({\mathbb R}^d,\gamma_{2/p})$ onto $L^p({\mathbb R}^d,m)$.
Then each $iP^\gamma_j$ generates a bounded $C_0$-group on $L^p({\mathbb R}^d,\gamma_{2/p})$ which, under $U$, is conjugate to the translation group in the $j$-th direction on $L^p({\mathbb R}^d,m)$. \end{example} \begin{example}[Duality] If $(A,B)$ is a Weyl pair in $X$, then the pair of adjoint operators $(B^*,A^*)$ is a Weyl pair in $X^{*}$ provided the operators $A_j^*$ and $B_j^*$ are densely defined (by a classical result in semigroup theory (see \cite[Proposition I.5.14]{EngNag}) this is always the case if $X$ is reflexive). \end{example} \begin{example}[Additive commuting perturbations] If $(A,B)$ is a Weyl pair and $C$ is a bounded operator commuting with the resolvent of $A$, then $(A,B+C)$ is a Weyl pair whenever the group generated by $i(B+C)$ is bounded. Indeed, the assumption implies that $C$ commutes with the operators $e^{itA}$, and the commutation relations \eqref{eq:CCR} follow from this by going through the standard proof of the variation of constants formula for perturbed (semi)groups using Picard iteration. The simplest example is obtained by taking $C = \omega I$ with $\omega\in{\mathbb R}$. This amounts to frequency modulating the group generated by $iB$. More generally one could take $C$ to be any densely defined closed operator such that $iC$ generates a bounded group commuting with the group generated by $iA$. Similarly, if $(A,B)$ is a Weyl pair and $C$ is a bounded operator commuting with the resolvent of $B$, then $(A+C,B)$ is a Weyl pair whenever the group generated by $i(A+C)$ is bounded. \end{example} \begin{example}[Skew transforms] If $(A,B)$ is a Weyl pair, then for every $\lambda\in {\mathbb R}$ the pair $(A,\lambda A+B)$ is a Weyl pair. Some care has to be taken with the interpretation of $\lambda A+B$; we interpret it as the generator of the $C_0$-group given by $$ e^{it(\lambda A+B)}:= e^{\frac12 i\lambda t^2}e^{i\lambda t A}e^{it B} $$ (this idea will be further developed in a moment).
Similarly, if $(A,B)$ is a Weyl pair, then for every $\lambda \in {\mathbb R}$ the pair $(A+\lambda B,B)$ is a Weyl pair. \end{example} \begin{example} Let $((Q_1,Q_2), (P_1,P_2))$ be the standard pair of dimension $2d$ on $L^2({\mathbb R}^{2d}),$ i.e., \begin{align*} Q_{1,j}f(x,\xi) = x_jf(x,\xi), \quad & Q_{2,j}f(x,\xi) = \xi_jf(x,\xi), \\ P_{1,j}f(x,\xi) = \frac1i \frac{\partial f}{\partial x_j}(x,\xi),\quad &P_{2,j}f(x,\xi) = \frac1i \frac{\partial f}{\partial \xi_j}(x,\xi), \end{align*} for $1\le j\le d$. Reasoning as in the preceding examples, we see that $(-\frac12 Q_2-P_1,\frac12Q_1-P_2)$ is a Weyl pair of dimension $d$ on $L^2({\mathbb R}^{2d})$. As we show in Lemma \ref{lem:twiststand}, the Weyl calculus of this pair encodes twisted convolutions. Many variations on twisted convolutions can be considered through the Weyl calculus of twisted standard pairs obtained from different twists than the one above. \end{example} \begin{example}[Quantum variables] In \cite{qes}, Gonz\'alez-P\'erez, Junge, and Parcet introduce a (non-commutative) Fourier transform, as well as position and momentum operators, associated with certain von Neumann algebras called quantum euclidean spaces (or Moyal deformations, or CCR algebras). Their construction allows them to define non-commutative analogues of the key notions of Calder\'on-Zygmund theory, including off-diagonal kernel estimates and H\"ormander symbol classes, and then to prove analogues of the main theorems in singular integral operator theory. We cannot describe their construction in detail here, but note that their quantum variables $(x_{\Theta,j})_{j=1,...,2d}$ are Weyl pairs (for the appropriate choice of $\Theta$) acting on some non-commutative $L^p$-spaces (see \cite[Proposition 1.9]{qes}). \end{example} We now collect some easy properties of Weyl pairs which will be useful later on.
For $d=1$ they are due to Kato \cite{Kato} (see also \cite[Section 4.9]{Put}) and the proofs given there extend without difficulty to the present case. The main observation is that, upon taking Laplace transforms, the third commutation relation in \eqref{eq:CCR} implies the identities \begin{equation}\label{eq:CCR-res} \begin{aligned} R(\lambda,iA_j)e^{itB_j} & = e^{itB_j}R(\lambda+it,iA_j), \\ R(\lambda,iB_j) e^{itA_j} &= e^{itA_j}R(\lambda-it,iB_j), \end{aligned} \end{equation} for all $t\in {\mathbb R}$, $\Re\lambda\not=0$, and $1\le j\le d$. It follows that $e^{itB_j}$ leaves ${\mathsf{D}}(A_j)$ invariant, $e^{itA_j}$ leaves ${\mathsf{D}}(B_j)$ invariant, and \begin{equation}\label{eq:CCR-op1} \begin{aligned} A_je^{itB_j}f & = e^{itB_j}(A_j-t)f , \quad f\in {\mathsf{D}}(A_j),\\ B_j e^{itA_j}f & =e^{itA_j}(B_j+t)f, \quad f\in {\mathsf{D}}(B_j). \end{aligned} \end{equation} The same argument applies to the remaining combinations of $A_j$ and $B_k$, but no shifts over $\pm t$ occur when the operators commute. Thus we obtain: \begin{lemma}\label{lem:Kato} Let $(A,B)$ be a Weyl pair. The operators $e^{itA_j}$ and $e^{itB_j}$ leave both ${\mathsf{D}}(A):= \bigcap_{k=1}^d {\mathsf{D}}(A_k)$ and ${\mathsf{D}}(B):= \bigcap_{k=1}^d {\mathsf{D}}(B_k)$ invariant. For $j\not=k$ we have \begin{equation}\label{eq:CCR-op2} \begin{aligned} A_je^{itB_k}f = e^{itB_k}A_jf, \quad f\in {\mathsf{D}}(A_j),\\ B_k e^{itA_j}f =e^{itA_j}B_kf, \quad f\in {\mathsf{D}}(B_k), \end{aligned} \end{equation} while for $j=k$ the identities \eqref{eq:CCR-op1} hold. \end{lemma} Differentiating \eqref{eq:CCR-res} at $t=0$ gives \begin{equation}\label{eq:kato} \begin{aligned} R(\lambda,iB_j)R(\mu,iA_j) & = R(\mu,iA_j) R(\lambda,iB_j)[I -iR(\lambda,iB_j)R(\mu,iA_j)], \\ R(\lambda,iA_j)R(\mu,iB_j) &= R(\mu,iB_j) R(\lambda,iA_j)[I +i R(\lambda,iA_j)R(\mu,iB_j)]. 
\end{aligned} \end{equation} If $$g = \prod_{j,j'=1}^d R(\lambda_j,iA_j)R(\lambda_{j'},iA_{j'}) \prod_{k,k'=1}^d R(\mu_k,iB_k) R(\mu_{k'},iB_{k'})f$$ with $f\in X$, then \eqref{eq:kato} may be used to rewrite $g$, for any pair $1\le j,k\le d$, as \begin{align*} g & = R(\lambda,iA_j)R(\mu,iB_k) C_{jk}f \\ & = R(\mu,iB_k) R(\lambda,iA_j)D_{jk}f \\ & = R(\mu,iB_j) R(\mu',iB_k)E_{jk}f \\ & = R(\lambda,iA_j) R(\lambda',iA_k)F_{jk}f \end{align*} for suitable bounded operators $C_{jk}$, $D_{jk}$, $E_{jk}$, $F_{jk}$. From this we see that $g$ belongs to $\bigcap_{1\le j,k\le d}({\mathsf{D}}(A_jA_k)\cap{\mathsf{D}}(A_jB_k)\cap{\mathsf{D}}(B_kA_j)\cap {\mathsf{D}}(B_jB_k))$. Since $$\lim_{\lambda,\mu\to\infty} \prod_{j,j',k,k'=1}^d \lambda_j \lambda_{j'}R(\lambda_j,iA_j)R(\lambda_{j'},iA_{j'})\mu_k\mu_{k'} R(\mu_k,iB_k)R(\mu_{k'},iB_{k'})f = f$$ for all $f\in X$, the limit being taken in any order for $\lambda_1,\dots,\lambda_d$, $\lambda_1',\dots,\lambda_d'$, $\mu_1,\dots,\mu_d$, $\mu_1',\dots,\mu_d'\to \infty$, this subspace is dense in $X$. The identity \eqref{eq:kato} also gives $A_jB_j g - B_jA_j g = ig$ for $g$ of the above form. The same argument gives commutation for the remaining combinations of $A_j$ and $B_k$. Thus we obtain: \begin{lemma}\label{lem:D} Let $(A,B)$ be a Weyl pair of dimension $d$. The subspace $$ \bigcap_{1\le j,k\le d}({\mathsf{D}}(A_jA_k)\cap {\mathsf{D}}(A_jB_k)\cap {\mathsf{D}}(B_kA_j)\cap{\mathsf{D}}(B_jB_k))$$ is dense, and on this subspace we have $A_jA_k = A_kA_j$, $B_jB_k = B_kB_j$, and $A_jB_k - B_kA_j = \delta_{jk} iI.$ \end{lemma} Let $(A,B)$ be a Weyl pair of dimension $d$.
Consider, for $t\in {\mathbb R}$ and $u,v\in {\mathbb R}^d$, the bounded operators $$T_{u,v}(t):= e^{\frac12 it^2uv}e^{itu A}e^{itv B} = e^{-\frac12 it^2uv} e^{itv B}e^{itu A}.$$ \begin{proposition} \label{prop:core} The family $(T_{u,v}(t))_{t\in{\mathbb R}}$ is a bounded $C_0$-group on $X$, $\Dom(A)\cap \Dom(B)$ is a core for its generator $G_{u,v}$, and, on this core, the generator is given by $$ G_{u,v}f = iu A f+iv Bf, \quad f\in\Dom(A)\cap \Dom(B).$$ \end{proposition} \begin{proof} The identity $T_{u,v}(0)=I$ is trivial. The group property $T_{u,v}(t_0)\circ T_{u,v}(t_1) = T_{u,v}(t_0+t_1)$ follows straightforwardly from the commutation relations \eqref{eq:CCR}. Strong continuity is also clear. It follows from the general properties of Weyl pairs mentioned earlier that each operator $T_{u,v}(t)$ maps the subspace $\Dom(A)\cap \Dom(B)$ into itself. Moreover, $\Dom(A)\cap\Dom(B)$ is dense in $X$. By Lemma \ref{lem:diff} below, every $f\in \Dom(A)\cap\Dom(B)$ belongs to $\Dom(G_{u,v})$ and differentiation gives $$ G_{u,v}f = \frac{{\rm d}}{{\rm d}t}\Big|_{t= 0}T_{u,v}(t)f = iu Af +iv Bf, \quad f\in {\mathsf{D}}(A)\cap{\mathsf{D}}(B).$$ A general result in semigroup theory (see, e.g., \cite[Proposition II.1.7]{EngNag}) now implies that $\Dom(A)\cap \Dom(B)$ is a core for $G_{u,v}$. \end{proof} The proof of Proposition \ref{prop:core} is completed by the following observation, which we leave as an easy exercise to the reader. \begin{lemma}\label{lem:diff} Let $(S(t))_{t\in{\mathbb R}}$ and $(T(t))_{t\in{\mathbb R}}$ be strongly continuous families of operators, and let $f\in X$ be fixed.
If \begin{enumerate} \item $t\mapsto S(t)f$ is differentiable at $t=0$, with derivative $S'(0)f:= \frac{\rm d}{{\rm d}t}\big|_{t=0}S(t)f$,\vskip2pt \item $t\mapsto T(t)S(0)f$ is differentiable at $t=0$, with derivative $T'(0)S(0)f:=\frac{\rm d}{{\rm d}t}\big|_{t=0}T(t)S(0)f$,\vskip2pt \end{enumerate} then $t\mapsto T(t)S(t)f$ is differentiable at $t=0$, with derivative $$\frac{{\rm d}}{{\rm d}t}\Big|_{t=0} T(t)S(t)f = T'(0)S(0)f + T(0)S'(0)f.$$ \end{lemma} \section{The Weyl calculus} Let $(A,B)$ be a Weyl pair of dimension $d$ on a Banach space $X$. For $(u,v)\in {\mathbb R}^{2d}$ we consider the bounded operators \begin{align}\label{eq:abstr-Schr}e^{i(uA+v B)}:= e^{\frac12 iuv}e^{iu A}e^{iv B}. \end{align} This notation is justified by Proposition \ref{prop:core}. \begin{example} For the standard pair $(Q,P)$ on $L^2({\mathbb R}^d)$, \eqref{eq:abstr-Schr} reduces to the {\em Schr\"od\-inger representation}: the operators $e^{i(uQ+v P)}$ are unitary on $L^2({\mathbb R}^d)$ and given by $$ e^{i(uQ+v P)}f(x) = e^{\frac12 iuv + iu x}f(x+v).$$ \end{example} \begin{definition}[Weyl calculus] Let $(A,B)$ be a Weyl pair of dimension $d$. For functions $a \in \mathscr{S}({\mathbb R}^{2d})$ we define \begin{align*} a(A,B)f := \frac1{ (2\pi)^d }\int_{{\mathbb R}^{2d}} \widehat a(u,v) e^{i(uA+vB)}f \,{\rm d} u\,{\rm d} v, \quad f\in X, \end{align*} where \begin{align*} \widehat a(u,v) = \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} a(x,\xi) e^{-i(xu+\xi v)}\,{\rm d} x\,{\rm d} \xi \end{align*} is the Fourier--Plancherel transform of $a$. The mapping $a\mapsto a(A,B)$ from $\mathscr{S}({\mathbb R}^{2d})$ to $\mathscr{L}(X)$ is called the {\em Weyl calculus} of $(A,B)$.
\end{definition} An easy computation based on the identity \begin{align}\label{eq:sigma} e^{i(uA+v B)}\circ e^{i(u'A+v'B)} = e^{\frac12i(u'v-uv')} e^{i((u+u')A+(v+v')B)}, \end{align} which follows from the commutation relations \eqref{eq:CCR}, gives the following analogue of the multiplicativity property of the functional calculus of a single operator: for all $a,b\in \mathscr{S}({\mathbb R}^{2d})$ we have $$ a(A,B)\circ b(A,B) = (a\,\# \,b)(A,B),$$ where $a\# b$ is the {\em Moyal product} of $a$ and $b$, given by (see \cite[Section XII.3.3]{Stein}) \begin{align*} \ & (a\,\#\,b)(x,\xi) \\ &\qquad = \frac{1}{\pi^{2d}} \int_{{\mathbb R}^{2d}}\int_{{\mathbb R}^{2d}} a(x+u, \xi+u') b(x+v, \xi+v') e^{-2i(vu'-uv')}\,{\rm d} u\,{\rm d} u'\,{\rm d} v\,{\rm d} v'. \end{align*} \begin{definition}\label{def:typeAB} Let $N,m \in {\mathbb N}$. A Weyl pair $(A,B)$ is said to admit a {\em bounded Weyl calculus of type $(-N,m)$} if, for all $a \in \mathscr{S}({\mathbb R}^{2d})$, we have $$ \|a(A,B)\|\lesssim \underset{|\alpha|,|\beta|\le m}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{N+|\alpha|}| \partial_{\xi} ^{\alpha}\partial_{x}^{\beta} a(x,\xi)|, $$ with a constant independent of $a$. The pair $(A,B)$ is said to admit a {\em bounded Weyl calculus of type $-N$} if it admits a bounded Weyl calculus of type $(-N,m)$ for some $m\in{\mathbb N}$. \end{definition} In Subsection \ref{sec:stand} we will prove that if $X$ is a UMD space and $1<p<\infty$, then the standard pair $(Q,P)$ has a bounded Weyl calculus of type $0$ on $L^p({\mathbb R}^d;X)$. The convergence lemma for the Dunford calculus for sectorial operators (see, e.g., \cite[Theorem 10.2.2]{HNVW2}) has the following analogue for the Weyl calculus: \begin{lemma}[Convergence lemma]\label{lem:McI} Let $(a_{n})_{n \in {\mathbb N}}$ be a sequence of Schwartz functions defined on ${\mathbb R}^{2d}$ and let $N\in {\mathbb N}$.
There exist $m=m(d,N) \in {\mathbb N}$ and $M=M(d,N) \in {\mathbb N}$, both depending only on $d$ and $N$, such that the following holds. If $(A,B)$ is a Weyl pair with a bounded Weyl calculus of type $(-N-1,m)$, and if \begin{enumerate} \item[\rm(i)] for all multi-indices $\gamma \in {\mathbb N}^{2d}$ with $|\gamma|\le M$ we have $\lim_{n\to\infty}\partial^{\gamma}a_{n}= 0$ uniformly on compact sets, \item[\rm(ii)] $ \displaystyle\sup_{n \in {\mathbb N}} \|a_{n}(A,B)\|<\infty, $ \end{enumerate} then $\lim_{n\to\infty} a_{n}(A,B)f= 0$ for all $f\in X$. \end{lemma} Admittedly the formulation of this lemma is a bit awkward; the point here is that we need $(A,B)$ to be of type $(-N-1,m)$ for all $m\ge m_0$, where $m_0$ may depend on $N$ and $d$. The proof of the lemma is based on an asymptotic expansion representation for Moyal products of Schwartz functions. \begin{lemma}\label{lem:appmoy} There exists a sequence $(c_{\alpha})_{\alpha \in {\mathbb N}^{2d}}$ of complex numbers such that, for all $a,b \in \mathscr{S}({\mathbb R}^{2d})$ and all $M\in {\mathbb N}$, there exists a function $r_{a,b;M+1} \in \mathscr{S}({\mathbb R}^{2d})$ such that $$ a(A,B)b(A,B) = \sum_{\substack{\alpha\in {\mathbb N}^{2d} \\ |\alpha|_\infty\le M}} c_{\alpha}\partial^{\alpha}(ab)(A,B) +r_{a,b;M+1}(A,B) $$ whenever $(A,B)$ is a Weyl pair. Moreover, there exists an $m \in {\mathbb N}$, depending only on $d$ and $M$, such that if $(A,B)$ has a bounded Weyl calculus of type $(-M-1,m)$, then \begin{align*} \ & \|r_{a,b;M+1}(A,B)\| \\ & \qquad\lesssim \underset{\substack{\alpha', \beta', \alpha'', \beta''\in {\mathbb N}^{d} \\ |\alpha'|,|\beta'|, |\alpha''|, |\beta''| \leq m}}{\max}\ \underset{(x,\xi)\in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle ^{\min(|\alpha'|,|\alpha''|)} |\partial^{\alpha'} _{\xi} \partial^{\beta'} _{x} a(x,\xi)\partial^{\alpha''} _{\xi} \partial^{\beta''} _{x}b(x,\xi)|. \end{align*} \end{lemma} \begin{proof} Let $a,b \in \mathscr{S}({\mathbb R}^{2d})$.
Recall that $a(A,B)b(A,B) = (a \# b)(A,B)$, where $a\#b$ is the Moyal product of $a$ and $b$. By \cite[Theorem 3.16]{abels}, for any $M\geq 0$ there exists a function $r_{a,b;M+1}\in \mathscr{S}({\mathbb R}^{2d})$ such that \begin{equation}\label{eq:expansion} a \# b(x,\xi)\, = \sum _{\substack{\alpha\in {\mathbb N}^{d} \\ |\alpha| \le M}} \frac1{\alpha!}\frac1{i^{|\alpha|}}\partial_\xi^\alpha a(x,\xi)\partial_x^\alpha b(x,\xi) + r_{a,b;M+1}(x,\xi). \end{equation} This gives the formula in the first part of the lemma (with many coefficients $c_\alpha$ equal to $0$). Suppose next that $(A,B)$ has a bounded Weyl calculus of type $(-M-1,m)$ for some $M\in {\mathbb N}$, where $m\in {\mathbb N}$ is arbitrary for the moment but will be fixed later. Then, by assumption, the remainder $r_{a,b;M+1}(A,B)$ in the expansion \eqref{eq:expansion} for this particular value of $M$ satisfies the estimate $$ \|r_{a,b;M+1}(A,B)\|\lesssim \underset{|\gamma|,|\delta|\le m}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{M+1+|\gamma|}| \partial_{\xi} ^{\gamma}\partial_{x}^{\delta} r_{a,b;M+1}(x,\xi)| $$ with a constant only depending on $M$, $m$ and the pair $(A,B)$.
By \cite[Theorem 3.15]{abels}, $r_{a,b;M+1}(x,\xi)$ is given by a finite linear combination, extending over all multi-indices satisfying $|\alpha|=M+1$, of terms of the form $$ R_{\alpha,a,b}(x,\xi) := \int_{{\mathbb R}^{2d}}e^{-ix'\xi'}(\xi')^\alpha \int _{0} ^{1} \partial^{\alpha}_{\xi} p(x,\xi+\theta \xi',x+x',\xi)(1-\theta)^{M}\,{\rm d}\theta\,{\rm d} x'\,{\rm d} \xi' $$ for $p(x,\xi,x',\xi')=a(x,\xi)b(x',\xi').$ As in the proof of \cite[Theorem 3.15]{abels} (see, in particular, (3.20) on page 54 and (3.10) on page 47), there exists $m(d,M) \in {\mathbb N}$, depending only on $d$ and $M$, such that for all multi-indices $\gamma,\delta$ satisfying $|\gamma|, |\delta|\le m(d,M)$ we have $$ |\partial^{\gamma} _{\xi} \partial^{\delta} _{x} R_{\alpha,a,b}(x,\xi)| \lesssim \langle \xi\rangle ^{-(|\alpha|+|\gamma|)} = \langle \xi\rangle ^{-(M+1+|\gamma|)},$$ with constant depending linearly on $$ \underset{|\alpha'|,|\beta'|, |\alpha''|, |\beta''| \leq m(d,M)}{\max}\ \underset{(x,\xi)\in {\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle ^{\min(|\alpha'|,|\alpha''|)} |\partial^{\alpha'} _{\xi} \partial^{\beta'} _{x} a(x,\xi)\partial^{\alpha''} _{\xi} \partial^{\beta''} _{x}b(x,\xi)|. $$ If we fix the integer $m$ to be this $m(d,M)$, the second part of the lemma follows by collecting estimates. \end{proof} The proof of the convergence lemma requires one further auxiliary result. Given a function $\eta:{\mathbb R}^{2d}\to {\mathbb C}$ and a real number $\delta>0$ we set $\eta_{\delta}(x,\xi):= \eta(\delta x,\delta\xi)$.
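For later use we record how the scaling $\eta\mapsto\eta_\delta$ acts on the Fourier side. With the normalisation of the Fourier transform used in the definition of the Weyl calculus, a direct substitution gives
\begin{align*}
\widehat{\eta_{\delta}}(u,v)
= \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \eta(\delta x,\delta \xi)\, e^{-i(xu+\xi v)}\,{\rm d} x\,{\rm d} \xi
= \delta^{-2d}\,\widehat{\eta}\Bigl(\frac{u}{\delta},\frac{v}{\delta}\Bigr),
\end{align*}
which, with $\delta = \frac1k$, is the identity used in the proof of the next lemma.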
\begin{lemma}\label{lem:appid} For all $\eta \in C^{\infty}_{\rm c}({\mathbb R}^{2d})$ with $\eta(0,0)=1$, and $f\in X$, we have $$\lim_{k\to\infty}\eta_{\frac{1}{k}}(A,B)f = f.$$ \end{lemma} \begin{proof} For all $f \in X$ we have \begin{align*} \eta_{\frac{1}{k}}(A,B)f & = \frac1{(2\pi)^d}\int _{{\mathbb R}^{2d}} \widehat{{\eta}_{\frac{1}{k}}}(u,v) e^{i(uA+v B)}f \,{\rm d} u \,{\rm d} v \\ & = \frac1{(2\pi)^d}\int _{{\mathbb R}^{2d}} k^{2d}\widehat{\eta}(ku,kv) e^{i(uA+v B)}f \,{\rm d} u \,{\rm d} v \\ & = \frac1{(2\pi)^d}\int _{{\mathbb R}^{2d}} \widehat{\eta}(u,v) e^{i(\frac{u}{k}A+\frac{v}{k}B)}f \,{\rm d} u\,{\rm d} v \underset{k \to \infty}{\longrightarrow} \eta(0,0)f = f. \end{align*} \end{proof} \begin{proof}[Proof of Lemma \ref{lem:McI}] Fix $N\in {\mathbb N}$, let $m= m(d,N)$ be as in Lemma \ref{lem:appmoy} (where we take $M= N$), and suppose $(A,B)$ has a bounded Weyl calculus of type $(-N-1,m)$. Let $(a_n)_{n\ge 1}$ be a sequence of Schwartz functions satisfying the assumptions (i) and (ii) in the statement of the lemma. Let $\eta \in C^{\infty}_{\rm c}({\mathbb R}^{2d})$ be supported in $B(0,2)$ and identically $1$ on $B(0,1)$. Fixing $f \in X$ and $\varepsilon >0$, by Lemma \ref{lem:appid} and the uniform boundedness of the operators $a_{n}(A,B)$ we may choose a large enough integer $k$ so that \begin{align}\label{eq:limsup} \underset{n \to \infty}{\lim\sup}\|a_{n}(A,B)f\| \le \underset{n \to \infty}{\lim\sup}\|a_{n}(A,B)\eta_{\frac{1}{k}}(A,B)f\| +\varepsilon. \end{align} Fix $n\ge 1$ for the moment.
By Lemma \ref{lem:appmoy}, \begin{equation}\label{eq:aeta} \begin{aligned} \|& a_{n}(A,B) \eta_{\frac{1}{k}}(A,B)\| \\ & \quad \lesssim \Big\Vert\sum_{|\alpha|_{\infty}\leq {N}}c_\alpha \partial^{\alpha}(a_{n}\eta_{\frac{1}{k}})(A,B) \Big\| + \|r_{a_{n},\eta_{\frac{1}{k}};N+1}(A,B)\| \\ & \quad \lesssim \max_{\substack{\alpha\in {\mathbb N}^{2d} \\|\alpha|_{\infty}\leq {N}}} \|\partial^{\alpha}(a_{n}\eta_{\frac{1}{k}})(A,B)\| + \max_{\substack{\alpha',\beta'\in {\mathbb N}^{d} \\|\alpha'|,|\beta'|\leq m}} \underset{(x,\xi) \in B(0,2k)}{\sup} \langle \xi\rangle ^{|\alpha'|} |\partial^{\alpha'} _{\xi} \partial^{\beta'} _{x} a_{n}(x,\xi)|, \end{aligned} \end{equation} with constants independent of $n$. For later reference (we don't need this here) we observe that the constants are also uniform in $k$, as is evident from the proof of Lemma \ref{lem:appmoy}. The first term on the right-hand side of \eqref{eq:aeta} can be estimated as follows: \begin{align*} \underset{|\alpha|_{\infty}\leq {N}}{\max} \|\partial^{\alpha}(a_{n}\eta_{\frac{1}{k}})(A,B)\| & \lesssim \underset{|\alpha|_{\infty}\leq {N}}{\max} \|\widehat{\partial^{\alpha}(a_{n}\eta_{\frac{1}{k}})}\|_{1} \\ & \lesssim \underset{|\alpha|_{\infty}\leq {N}}{\max} \|(u,v) \mapsto \langle (u,v) \rangle^{2d+1}\widehat{\partial^{\alpha}(a_{n}\eta_{\frac{1}{k}})}\|_{\infty} \\ & \lesssim \underset{|\beta|_{\infty}\leq {N}+2d+1}{\max} \|\partial^{\beta}(a_{n}\eta_{\frac{1}{k}})\|_{1} \\ & \lesssim \underset{|\beta|_{\infty}\leq {N}+2d+1}{\max} \|\partial^{\beta}a_{n}\|_{L^{\infty}(B(0,2k))}, \end{align*} with constants independent of $n$.
This results in the estimate \begin{align*} \ & \|a_{n}(A,B) \eta_{\frac{1}{k}}(A,B)\| \\ & \ \lesssim \underset{|\beta|_{\infty}\leq {N}+2d+1}{\max} \|\partial^{\beta}a_{n}\|_{L^{\infty}(B(0,2k))} + \underset{|\alpha'|,|\beta'|\leq m}{\max}\ \underset{(x,\xi) \in B(0,2k)}{\sup} \langle \xi\rangle ^{|\alpha'|} |\partial^{\alpha'} _{\xi} \partial^{\beta'} _{x} a_{n}(x,\xi)| \end{align*} with constants independent of $n$. Set $M:= \max(dN+d+2d^{2},2m)$; the extra factor $d$ in the first term in the maximum comes from $|\alpha|\le d|\alpha|_\infty$. If all partial derivatives of the $a_{n}$ up to order $M$ tend to $0$ uniformly on $B(0,2k)$, it follows that $ \underset{n \to \infty}{\lim} \|a_{n}(A,B)f\| = 0$. \end{proof} \begin{definition}\label{def:S-N} A function $a\in C^\infty({\mathbb R}^{2d})$ is said to belong to the standard symbol class $S^{-N}$, with $N\in{\mathbb Z}$, if $$ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\langle \xi\rangle^{N+|\alpha|} | \partial_{\xi} ^{\alpha}\partial_{x}^{\beta} a(x,\xi)| <\infty $$ for all multi-indices $\alpha,\beta \in {\mathbb N}^{d}$. \end{definition} The Schwartz class is included in $S^{0}$, and if $N\ge M$ then $S^{-N}\subseteq S^{-M}$. The class $S^{-N}$ for $N>0$ plays a key role in estimating error terms that arise from the difference between the pointwise product of functions and their Moyal product. In particular, we use the fact that, for any $N>0$ and $r \in S^{-N}$, the pseudo-differential operator $T_{r}$ with symbol $r$ (defined as in Lemma \ref{lem:weyltopseudo} below) may be written as \begin{align}\label{eq:neg1} T_{r}f(x) = \int _{{\mathbb R}^{d}} K_{r}(x,x-y)f(y)\,{\rm d} y, \end{align} with \begin{equation}\label{eq:neg2} \begin{aligned} \ & \underset{x \in {\mathbb R}^{d}}{\sup} \int _{{\mathbb R}^{d}} |K_{r}(x,x-y)|\,{\rm d} y + \underset{y \in {\mathbb R}^{d}}{\sup} \int _{{\mathbb R}^{d}} |K_{r}(x,x-y)|\,{\rm d} x \\ & \qquad\qquad\qquad \lesssim \underset{|\alpha|,|\beta|\leq 2d+1}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} r(x,\xi)|.
\end{aligned} \end{equation} This is proven by combining \cite[Proposition 1 page 554]{Stein} and \cite[Theorem 5.12]{abels} (see also \cite[Theorem 5.15, Corollary 5.16]{abels}). We are now ready to state and prove the main result of this section. It asserts that the calculus of a Weyl pair with bounded calculus of type $(-N,m)$ extends continuously to symbols in the class $S^{-N}$: \begin{theorem}\label{thm:Weyl-S0} Let $N \in {\mathbb N}$. If $(A,B)$ has a bounded Weyl calculus of type $(-N,m)$, where $m=m(d,N)$ is as in Lemma \ref{lem:appmoy}, then the Weyl calculus $a\mapsto a(A,B)$ extends continuously to functions $a\in S^{-N}$. More precisely, if $a\in S^{-N}$ is given and $(a_{n})_{n \in {\mathbb N}}$ is a sequence in $\mathscr{S}({\mathbb R}^{2d})$ such that for all multi-indices $\gamma \in {\mathbb N}^{2d}$ we have $$\partial^{\gamma}a_{n} \to \partial^\gamma a $$ uniformly on compact sets as $n\to\infty$, then the limit $$a(A,B):= \lim_{n\to\infty} a_{n}(A,B)$$ exists in the strong operator topology of $\mathscr{L}(X)$ and is independent of the approximating sequence. Furthermore, for all $a \in S^{-N}$ we have $$ \|a(A,B)\| \lesssim \underset{|\alpha|,|\beta|\le m+Nd}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\langle \xi\rangle^{N+|\alpha|}|\partial_{\xi} ^{\alpha}\partial_{x}^{\beta} a(x,\xi)|.$$ \end{theorem} \begin{proof} The existence and uniqueness of the strong operator limits follow from what we have already proved. As pointed out in \cite[Section 1.4, page 232]{Stein}, it is possible to approximate functions $a\in S^{-N}$ by Schwartz functions in the way stated, by taking $a_{n}(x,\xi) = a(x,\xi)\eta(\frac{x}{n},\frac{\xi}{n}) = a(x,\xi)\eta_{\frac1n}(x,\xi) $ for some $\eta \in C^{\infty}_{\rm c}({\mathbb R}^{2d})$ such that $\eta(0,0)=1$. It remains to prove the bound for the norm of $a(A,B)$. For this we return to \eqref{eq:limsup} and \eqref{eq:aeta}, both of which also hold if we replace $a_n$ by $a$.
For a given $\varepsilon>0$, and a large enough $k$, this gives \begin{align*} \|a(A,B)\| & \le\| (a\eta_{\frac{1}{k}})(A,B)\| + 2\varepsilon \\ & \lesssim \max_{\substack{\alpha\in {\mathbb N}^{2d} \\|\alpha|_{\infty}\leq {N}}} \|\partial^{\alpha}(a\eta_{\frac{1}{k}})(A,B)\| \\ & \qquad + \max_{\substack{\alpha',\beta'\in {\mathbb N}^{d} \\|\alpha'|,|\beta'|\leq m}} \underset{(x,\xi) \in B(0,2k)}{\sup} \langle \xi\rangle ^{|\alpha'|} |\partial^{\alpha'} _{\xi} \partial^{\beta'} _{x} a(x,\xi)| +2\varepsilon \end{align*} with estimates uniform in $\varepsilon>0$ and $k\ge 1$ (note that the sup norms of the derivatives of $\eta_{\frac{1}{k}}$ are uniform in $k\ge 1$). Each expression in the first term on the right-hand side can be estimated using the type $(-N,m)$ of the Weyl calculus of $(A,B)$: \begin{align*} \|\partial^{\alpha}(a\eta_{\frac{1}{k}})(A,B)\| & \lesssim \underset{\substack{\gamma,\delta\in {\mathbb N}^{d} \\|\gamma|,|\delta|\le m}}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{N+|\gamma|}| \partial_{\xi} ^{\gamma}\partial_{x}^{\delta}\partial^\alpha (a\eta_{\frac{1}{k}})(x,\xi)| \\ & \lesssim \underset{\substack{\alpha',\beta'\in {\mathbb N}^{d} \\ |\alpha'|,|\beta'|\le m+Nd}}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{N+|\alpha'|}| \partial_{\xi} ^{\alpha'}\partial_{x}^{\beta'} a(x,\xi)|, \end{align*} again with estimates uniform in $\varepsilon>0$ and $k\ge 1$. Since $\varepsilon>0$ was arbitrary, this results in the desired estimate. \end{proof} \subsection{Bounded Weyl calculus of type $0$ for Banach space-valued standard pairs}\label{sec:stand} Let $X$ be a UMD space. On $L^p({\mathbb R}^d;X)$, $1<p<\infty$, we consider the vector-valued standard pair $(Q \otimes I_{X},P \otimes I_{X})$ defined by $Q \otimes I_{X} = (Q_j\otimes I_{X})_{j=1} ^{d}$ and $P \otimes I_{X} = (P_j \otimes I_{X})_{j=1} ^{d}$, where $Q_j$ and $P_j$ are the position and momentum operators as in Example \ref{ex:HO}.
Note that $(Q \otimes I_{X},P \otimes I_{X})$ is a Weyl pair: as in the scalar case, $iQ_j \otimes I_{X}$ and $iP_j \otimes I_{X}$ generate multiplication and translation groups on $L^p({\mathbb R}^d;X)$ given by the same formulas as in the scalar-valued case (Example \ref{ex:HO}). The commutation relations for the vector-valued extensions also follow from their scalar-valued counterparts. \medskip\noindent {\em Notation}. \ In order to simplify notation we will suppress the tensors with $I_X$ when no confusion is likely to arise. \medskip As an illustration of Definition \ref{def:typeAB} we now prove: \begin{theorem}\label{thm:type0} If $X$ is a UMD Banach space, the standard pair $(Q,P)$ has a bounded Weyl calculus of type $0$ on $L^p({\mathbb R}^d;X)$ for all $1<p<\infty$. \end{theorem} To prove this theorem we will use \cite[Theorem 6]{pz}. To do so, we need to view $a(Q,P)$ as a pseudo-differential operator acting on $L^{2}({\mathbb R}^{d};X)$. This is possible thanks to the following lemma. \begin{lemma} \label{lem:weyltopseudo} For every $a \in \mathscr{S}({\mathbb R}^{2d})$ there exists a unique $b \in \mathscr{S}({\mathbb R}^{2d})$ such that $a(Q,P) = T_{b}$, where $T_{b}$ is the pseudo-differential operator on $L^2({\mathbb R}^d)$ defined by $$ T_{b}f(x) =\frac1{(2\pi)^{d/2}} \int _{{\mathbb R}^{d}} b(x,\xi)\widehat{f}(\xi)e^{i\xi x}\,{\rm d}\xi. $$ This function is given by \begin{align}\label{eq:b-asympt} b(x,\xi) = \sum _{|\alpha|\le 1} \frac1{\alpha!}\frac1{i^{|\alpha|}}\partial_\xi^{\alpha}\partial_y^{\alpha}p_a(x,\xi,y,\xi')\Big|_{y=x, \xi'=\xi} +r_{a}(x,\xi), \end{align} where $r_{a}\in \mathscr{S}({\mathbb R}^{2d})$ and $p_a(x,\xi,y,\xi') = a(\frac{x+y}{2},\xi)$.
Moreover, for all $m \in {\mathbb N}$, there exists $\tilde{m} \geq m$, depending only on $m$ and $d$, such that $$\underset{|\alpha|,|\beta|\leq m}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} r_a(x,\xi)|\lesssim \underset{|\alpha|,|\beta|\leq \tilde{m}}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup}\ \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} a(x,\xi)|.$$ \end{lemma} \begin{proof} The first assertion follows from \cite[Proposition 1, page 554]{Stein} (see also \cite[formula (58), page 258]{Stein}). As in the proof of Lemma \ref{lem:appmoy}, the estimate follows from \cite[Theorem 3.15]{abels}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:type0}] We must show that there exists an integer $m\in {\mathbb N}$ such that for all $a \in \mathscr{S}({\mathbb R}^{2d})$ we have $$ \|a(Q,P)\|_{\mathscr{L}(L^p({\mathbb R}^d;X))}\lesssim \underset{|\alpha|,|\beta|\le m}{\max}\ \underset{(x,\xi) \in{\mathbb R}^{2d}}{\sup}\langle \xi\rangle^{|\alpha|}| \partial_{\xi} ^{\alpha}\partial_{x}^{\beta} a(x,\xi)|. $$ Let $a \in \mathscr{S}({\mathbb R}^{2d})$. We first apply Lemma \ref{lem:weyltopseudo} to write \begin{equation}\label{eq:aPQX} \begin{aligned} a(Q,P) = T_{b} = \sum _{|\alpha|\le 1} \frac1{\alpha!}\frac1{i^{|\alpha|}}T_{\partial_{\xi} ^{\alpha} \partial_{x} ^{\alpha} p_a}+ T_{r_{a}}, \end{aligned} \end{equation} where $\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} p_a(x,\xi)$ is short-hand for the expression $\partial_\xi^{\alpha}\partial_y^{\alpha}p_a(x,\xi,y,\xi')|_{y=x, \xi'=\xi}$ occurring in \eqref{eq:b-asympt}. We now estimate the $L^p({\mathbb R}^d;X)$-norms of the terms on the right-hand side of \eqref{eq:aPQX} separately, starting with $T_{r_{a}}$.
As pointed out in \eqref{eq:neg1} and \eqref{eq:neg2} we have \begin{align*} T_{r_{a}}f(x) & =\int _{{\mathbb R}^{d}} K_{r_a}(x,y)f(y)\,{\rm d} y \end{align*} with \begin{align*} \ & \underset{x \in {\mathbb R}^{d}}{\sup} \int _{{\mathbb R}^{d}} |K_{r_a}(x,y)|\,{\rm d} y + \underset{y \in {\mathbb R}^{d}}{\sup} \int _{{\mathbb R}^{d}} |K_{r_a}(x,y)|\,{\rm d} x \\ & \qquad\qquad \lesssim \underset{|\alpha|,|\beta|\leq 2d+1}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} r_{a}(x,\xi)|. \end{align*} Therefore, by Schur's lemma (in the formulation of \cite[Lemma 4.1 with $p=q$, $r=1$, $\phi=\psi\equiv 1$]{NP}, noting that the proof extends without change to the vector-valued case), $T_{r_{a}}$ extends to a bounded operator on $L^p({\mathbb R}^d;X)$ of norm at most \begin{equation}\label{eq:est-Tr} \begin{aligned} \Vert T_{r_{a}}\Vert_{\mathscr{L}(L^p({\mathbb R}^d;X))} &\lesssim \underset{|\alpha|,|\beta| \leq 2d+1}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} r_{a}(x,\xi)| \\ & \lesssim \underset{|\alpha|,|\beta| \leq \tilde{m}}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} {a}(x,\xi)|, \end{aligned} \end{equation} for some $\tilde{m} \geq 2d+1$, the second inequality being a consequence of Lemma \ref{lem:weyltopseudo}. Next we estimate the $L^p({\mathbb R}^d;X)$-norms of the operators $T_{\partial_{\xi} ^{\alpha} \partial_{x} ^{\alpha} p_a}$. Let $\alpha,\beta \in {\mathbb N}^{d}$ be such that $|\alpha|,|\beta| \leq 1$.
In order to apply \cite[Theorem 6]{pz}, we remark that $p_{a,\alpha,\beta}:=\partial^{\alpha} _{\xi} \partial ^{\beta} _{x}p_a$ has the following (trivial) properties: \begin{enumerate} \item[\rm(a)] for all $|\gamma|\le 2d+5$ and $x\in {\mathbb R}^d$ we have \begin{align*} \langle \xi\rangle^{|\gamma|} |\partial_{\xi} ^{\gamma} p_{a,\alpha,\beta}(x,\xi)| & = \langle \xi\rangle^{|\gamma|} |\partial_{\xi} ^{\alpha+\gamma} p_a(x,\xi)| \\ & \le \underset{|\alpha'|,|\beta'| \leq 2d+6}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha'|}|\partial_{\xi} ^{\alpha'} \partial_{x} ^{\beta'} a(x,\xi)| \end{align*} \item[\rm(b)] for all $|\gamma|,|\delta|\le 2d+5$ we have \begin{align*} |\partial_{\xi} ^{\gamma} \partial_{x} ^{\delta} p_{a,\alpha,\beta}(x,\xi)| & = |\partial_{\xi} ^{\alpha+\gamma} \partial_{x} ^{\beta+\delta} a(x,\xi)| \\ & \le \underset{|\alpha'|,|\beta'| \leq 2d+6}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha'|}|\partial_{\xi} ^{\alpha'} \partial_{x} ^{\beta'} a(x,\xi)|. \end{align*} \end{enumerate} This means that each $p_{a,\alpha,\beta}$ belongs to the class $S^{0}_{1,0}(2d+5,X)$ as defined in \cite[Definition 3]{pz} (note that the $R$-boundedness condition in this definition reduces to a uniform boundedness condition in view of the fact that we are considering scalar-valued symbols). Therefore, by \cite[Theorem 6]{pz} (and its proof, which shows that the estimates depend linearly on the expressions on the right-hand sides in (a) and (b)), the operators $T_{\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} p_a}$ are bounded on $L^{p}({\mathbb R}^d;X)$, and \begin{equation}\label{eq:est-Tpartial} \begin{aligned} \|T_{\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} p_a}\|_{\mathscr{L}(L^{p}({\mathbb R}^d;X))} & \lesssim \underset{|\alpha|,|\beta| \leq 2d+6}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} a(x,\xi)|.
\end{aligned} \end{equation} Putting together the estimates \eqref{eq:est-Tr} and \eqref{eq:est-Tpartial} we obtain $$ \|a(Q,P)\|_{\mathscr{L}(L^{p}({\mathbb R}^d;X))} \lesssim \underset{|\alpha|,|\beta| \leq \max(\tilde{m},2d+6)}{\max}\ \underset{(x,\xi) \in {\mathbb R}^{2d}}{\sup} \langle \xi\rangle^{|\alpha|}|\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta} a(x,\xi)|, $$ which concludes the proof. \end{proof} \section{The operator $A^2+B^2$}\label{sec:A2B2} In this section we show how the Weyl calculus of the pair $(A,B)$ relates to the functional calculus of the operator $A^2+B^2$. For Weyl pairs $(A,B)$ of dimension $d$ we define $$A^2 := \sum_{j=1}^d A_j^2, \quad B^2 := \sum_{j=1}^d B_j^2$$ with domains $\Dom(A^2) := \bigcap_{j=1}^d {\mathsf{D}}(A_j^2)$ and $\Dom(B^2) := \bigcap_{j=1}^d {\mathsf{D}}(B_j^2)$. The operator $A^2+B^2$ is understood as being defined on $\Dom(A^2)\cap \Dom(B^2)$. Earlier we have already defined $\Dom(A) := \bigcap_{j=1}^d {\mathsf{D}}(A_j)$ and $\Dom(B) := \bigcap_{j=1}^d {\mathsf{D}}(B_j)$. The following proposition is an immediate consequence of Lemmas \ref{lem:Kato} and \ref{lem:D}. \begin{proposition}\label{prop:dd} If $(A,B)$ is a Weyl pair of dimension $d$ on $X$, then $\Dom(A^2)\cap \Dom(B^2)$ is dense in $X$ and invariant under the groups $(e^{itA_j})_{t\in{\mathbb R}}$ and $(e^{itB_j})_{t\in{\mathbb R}}$, $1\le j\le d$. \end{proposition} The next theorem shows, among other things, that for any Weyl pair $(A,B)$ the operator $-(A^2+B^2)$ is closable and its closure generates an analytic $C_0$-semigroup of angle $\frac12\pi$. Up to a scaling, this semigroup can be thought of as an abstract version of the Ornstein--Uhlenbeck semigroup. For the standard pair, such a theorem is well-known to mathematical physicists, going back at least to \cite{unter}. It was rediscovered for the Ornstein--Uhlenbeck semigroup in \cite[Theorem 3.1]{NP}. Here we prove that it holds for all Weyl pairs.
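Before stating the theorem we record an elementary observation of our own, for orientation: the coefficient $\lambda_t := \frac{1-e^{-t}}{1+e^{-t}}$ appearing below equals $\tanh(t/2)$, so the semigroup law proved in the theorem ultimately rests on the hyperbolic addition formula.

```latex
% Our remark: \lambda_t = \tanh(t/2), hence the addition formula
\begin{align*}
\lambda_{t_1+t_2}
  = \tanh\Bigl(\tfrac{t_1+t_2}{2}\Bigr)
  = \frac{\tanh(t_1/2)+\tanh(t_2/2)}{1+\tanh(t_1/2)\tanh(t_2/2)}
  = \frac{\lambda_{t_1}+\lambda_{t_2}}{1+\lambda_{t_1}\lambda_{t_2}},
\end{align*}
% and, correspondingly, the identity governing the prefactors:
\begin{align*}
(1+\lambda_{t_1})(1+\lambda_{t_2})
  = (1+\lambda_{t_1}\lambda_{t_2})(1+\lambda_{t_1+t_2}).
\end{align*}
```

In other words, the substitution $\lambda = \tanh(t/2)$ converts the additive semigroup parameter $t$ into the Gaussian width parameter $\lambda$.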
\begin{theorem}\label{thm:A2B2} Let $(A,B)$ be a Weyl pair. The operators $$ P(t) := \Bigl(1+\frac{1-e^{-t}}{1+e^{-t}}\Bigr)^d\exp\Bigl(-\frac{1-e^{-t}}{1+e^{-t}}(A^2+B^2)\Bigr) \quad (t \ge 0) $$ define a uniformly bounded $C_0$-semigroup on $X$. The dense set $\Dom(A^2)\cap \Dom(B^2)$ is a core for its generator $-L$, and, on this core, we have the identity $$L =\frac12(A^2+B^2)- \frac12d.$$ The semigroup $(P(t))_{t\ge 0}$ extends to an analytic semigroup of angle $\frac12\pi$ that is uniformly bounded and strongly continuous on every subsector of smaller angle. \end{theorem} In the above formula for $P(t)$, for $t>0$ the right-hand side is interpreted in terms of the Weyl calculus for the pair $(A,B)$, i.e., $P(t) = a_{t}(A,B)$, where \begin{align}\label{def:a}a_{t}(x,\xi):= (1+\lambda_t)^d e^{-\lambda_t(|x|^2+|\xi|^2)} \end{align} with $\lambda_t = \frac{1-e^{-t}}{1+e^{-t}}$. For $t=0$ we interpret the formula as stating that $P(0)=I$. \begin{proof} The semigroup property $P(t_1)P(t_2) = P(t_1+t_2)$ follows from the identity \begin{equation}\label{eq:asas} \begin{aligned} a_{{t_1}} \#\, a_{{t_2}} (x, \xi) & = \frac{1}{\pi^{2d}} (1+\lambda_{t_1})^d(1+\lambda_{t_2})^d \\ & \qquad \times \int_{{\mathbb R}^{2d}}\int_{{\mathbb R}^{2d}} e^{-\lambda_{t_1}(|x+u|^2+|\xi+u'|^2)} e^{-\lambda_{t_2}(|x+v|^2+ |\xi+v'|^2)} e^{2i(u'v-uv')}\,{\rm d} u\,{\rm d} u'\,{\rm d} v\,{\rm d} v' \\ & = (1+\lambda_{t_1+t_2})^d e^{-\lambda_{t_1+t_2}(|x|^2+ |\xi|^2)} \\ & = a_{{t_1+t_2}}(x, \xi) \end{aligned} \end{equation} which is obtained by elementary computation. Next we prove the strong continuity $\lim_{t\downarrow 0}P(t)f=f$ for all $f\in X$. Fix $t>0$ for the moment.
We have \begin{equation}\label{eq:Pt1} \begin{aligned} P(t)f & = a_{t}(A,B)f \\ & = \frac1{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} \widehat a_t(u,v) e^{i(uA+v B)}f\,{\rm d} u\,{\rm d} v \\ & = \frac1{(2\pi)^d}(1+\lambda_t)^d \frac1{(2\lambda_t)^d}\int_{{\mathbb R}^{2d}} \exp\Bigl( -\frac{1}{4\lambda_t}(|u|^2+|v|^2) \Bigr)e^{i(uA+v B)}f\,{\rm d} u\,{\rm d} v \\ & = \frac1{(2\pi)^d}\frac{1}{(1-e^{-t})^d} \int_{{\mathbb R}^{2d}} \exp\Bigl( -\frac{1+e^{-t}}{4(1-e^{-t})}(|u|^2+|v|^2) \Bigr)e^{i(uA+v B)}f\,{\rm d} u\,{\rm d} v\end{aligned} \end{equation} so that \begin{align}\label{eq:Pt2} \Vert P(t)\Vert \le \frac{M_AM_B}{(2\pi)^{d}} \frac{1}{(1-e^{-t})^d}\int_{{\mathbb R}^{2d}}e^{ -\frac{1}{4(1-e^{-t})}(|u|^2+|v|^2)}\,{\rm d} u\,{\rm d} v \lesssim M_AM_B. \end{align} This proves the uniform boundedness of $P(t)$ for $t>0$. Strong continuity follows from the fact that $\widehat a_t \to \delta_0$ weakly (in the sense that the pairings with every $g\in C_{\rm b}({\mathbb R}^{2d};X)$ converge). Let us denote the generator of the $C_0$-semigroup $(P(t))_{t\ge 0}$ by $-L$. We claim that $Lf = \frac12(A^2 f+B^2f)-\frac12df$ for all $f\in \Dom(A^2)\cap \Dom(B^2)$. Our argument will be somewhat formal. The reader will have no difficulty in making it rigorous by proceeding as follows: write $$ e^{i(uA+vB)}f = \psi(u,v) e^{i(uA+vB)}f + (1-\psi(u,v)) e^{i(uA+vB)}f$$ for some compactly supported smooth function $\psi$ which equals $1$ in a neighbourhood of $(0,0)$. Treating the resulting integrals separately, the ones involving $\psi$ will give the desired convergence while the ones involving $1-\psi$ will vanish as we pass to the limit.
Proceeding to the details, we write $P(t) = (1+\lambda)^d R(\lambda)$, where $\lambda = \lambda_t = \frac{1-e^{-t}}{1+e^{-t}}.$ Then, \begin{align*} \frac{{\rm d} }{{\rm d} t}P(t)f = \frac{{\rm d} }{{\rm d} \lambda}[(1+\lambda)^d R(\lambda)f] \frac{{\rm d} \lambda}{{\rm d} t} = \frac12(1-\lambda^2)\frac{{\rm d} }{{\rm d} \lambda}[(1+\lambda)^d R(\lambda)f] . \end{align*} In the limit $t\downarrow 0$ we also have $\lambda\downarrow 0$ and $\frac12(1-\lambda^2) \to \frac12$. Hence the claim will be proved if we show that $\frac{{\rm d} }{{\rm d} \lambda}[(1+\lambda)^d R(\lambda)f]\to df- (A^2+B^2) f$ for $f\in \Dom(A^2)\cap \Dom(B^2)$. We have \begin{align*} \ & \lim_{\lambda\downarrow 0} \frac{{\rm d} }{{\rm d} \lambda}[(1+\lambda)^d R(\lambda)f] \\ & \ \ = \lim_{\lambda\downarrow 0}\frac{{\rm d} }{{\rm d} \lambda}\Bigl[(1+\lambda)^d \frac1{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} \widehat{e^{-\lambda(|u|^2+|v|^2)}} e^{i(uA+v B)}f\,{\rm d} u\,{\rm d} v \Bigr] \\ & \ \ = \lim_{\lambda\downarrow 0} d(1+\lambda)^{d-1} \frac1{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} \widehat{ e^{-\lambda(|u|^2+|v|^2)}} e^{i(uA+v B)}f\,{\rm d} u\,{\rm d} v \\ & \ \ \quad + \lim_{\lambda\downarrow 0} \Bigl[(1+\lambda)^d \frac1{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} \frac{{\rm d} }{{\rm d} \lambda}\widehat{ e^{-\lambda(|u|^2+|v|^2)}} e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v \Bigr] \\ & \ \ = (I)+(II). \end{align*} Now, for any $f\in X$, \begin{align*} (I) & = \lim_{\lambda\downarrow 0} \frac{d}{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} \widehat{ e^{-\lambda(|u|^2+|v|^2)}} e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v \\ & = d \int_{{\mathbb R}^{2d}} \delta_{(0,0)} e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v = df.
\intertext{ Similarly, for $f\in \Dom(A^2)\cap \Dom(B^2)$, } (II) & = \lim_{\lambda\downarrow 0} \frac1{(2\pi)^{d}}\int_{{\mathbb R}^{2d}} -\widehat{(|u|^2+|v|^2) e^{-\lambda(|u|^2+|v|^2)}} e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v \\ & = \int_{{\mathbb R}^{2d}} \Delta \delta_{(0,0)} e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v = -(A^2 + B^2) f. \end{align*} Here, $\Delta \delta_{(0,0)}$ denotes the Laplacian of the Dirac delta function in the sense of distributions. We will prove next that $\Dom(A^2)\cap \Dom(B^2)$ is a core for $L$. We have already seen that $\Dom(A^2)\cap \Dom(B^2)$ is contained in ${\mathsf{D}}(L)$. The definition of the operators $P(t)$ together with the commutation relation defining Weyl pairs implies that $\Dom(A^2)\cap \Dom(B^2)$ is invariant under $P(t)$. Since $\Dom(A^2)\cap \Dom(B^2)$ is also dense in $X$, a standard result in semigroup theory implies that $\Dom(A^2)\cap \Dom(B^2)$ is a core for $L$. To complete the proof it remains to show the final assertion. By a standard analytic extension argument, the right-hand side of \eqref{eq:Pt1} defines an analytic extension of $P(t)$ to the open right half-plane which again satisfies the semigroup property. Estimating as in \eqref{eq:Pt2} we see that this extension is uniformly bounded on every sector of angle strictly less than $\frac12\pi$. A standard semigroup argument (see, e.g., \cite[Exercise 9.8]{Haase-ISEM}) gives the strong continuity of the extension on each of these sectors. \end{proof} \begin{example}\label{ex:HA} For the standard pair of momentum and position we recover the standard fact that the harmonic oscillator defined by $-Lf(u) = \frac12\Delta f(u) - \frac12|u|^2f(u)$ generates a holomorphic semigroup of angle $\frac12\pi$, strongly continuous on each smaller sector, on each of the spaces $L^p({\mathbb R}^d)$ with $1\le p<\infty$. \end{example} For later use we make the following simple observation.
\begin{corollary}\label{cor:spectral} For all $t>0$ we have $$ \Vert t L P(t)\Vert \le 2^{d+2}d M_A M_B(1+t) e^{-t} .$$ \end{corollary} \begin{proof} Using the same notation as before, write $\lambda_t' = \frac{2e^{-t}}{(1+e^{-t})^2}$ for the derivative of $t\mapsto \lambda_t = \frac{1-e^{-t}}{1+e^{-t}}.$ In view of $LP(t)f = -\frac{\rm d}{{\rm d}t}P(t)f$, differentiation of the right-hand side of \eqref{eq:Pt1} (and noting that $\frac{1}{1-e^{-t}} = \frac12( 1+\lambda_t^{-1}) $) gives \begin{equation*} \begin{aligned} & (4\pi)^d LP(t)f \\ & \quad = -\frac{\rm d}{{\rm d}t} \Bigl((1+\lambda_t^{-1})^d\int_{{\mathbb R}^{2d}} \exp(-(|u|^2+|v|^2)/4\lambda_t) e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v\Bigr) \\ & \quad = d(1+\lambda_t^{-1})^{d-1}\frac{\lambda_t'}{\lambda_t^2} \int_{{\mathbb R}^{2d}} \exp(-(|u|^2+|v|^2)/4\lambda_t) e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v \\ & \quad\quad -(1+\lambda_t^{-1})^d \frac{\lambda_t'}{4\lambda_t^{2}} \int_{{\mathbb R}^{2d}} (|u|^2+|v|^2)\exp(-(|u|^2+|v|^2)/4\lambda_t) e^{i(uA+vB)}f\,{\rm d} u\,{\rm d} v \end{aligned} \end{equation*} and therefore \begin{equation*} \begin{aligned} &\Vert LP(t)f\Vert \\ & \quad \le d (1+\lambda_t)^{d-1}\frac{\lambda_t'}{\lambda_t} \frac{M_A M_B \Vert f\Vert}{(4\pi \lambda_t)^d}\int_{{\mathbb R}^{2d}} \exp(-(|u|^2+|v|^2)/4\lambda_t) \,{\rm d} u\,{\rm d} v \\ & \quad \quad + (1+\lambda_t)^d \frac{\lambda_t'}{4\lambda_t^{2}} \frac{M_A M_B \Vert f\Vert}{(4\pi \lambda_t)^d} \int_{{\mathbb R}^{2d}} (|u|^2+|v|^2)\exp(-(|u|^2+|v|^2)/4\lambda_t)\,{\rm d} u\,{\rm d} v. 
\end{aligned} \end{equation*} In view of the identities $$ \frac1{(4\pi \lambda_t)^d}\int_{{\mathbb R}^{2d}} \exp(-(|u|^2+|v|^2)/4\lambda_t) \,{\rm d} u\,{\rm d} v = 1$$ and \begin{align*} & \frac1{(4\pi \lambda_t)^d}\int_{{\mathbb R}^{2d}} (|u|^2+|v|^2)\exp(-(|u|^2+|v|^2)/4\lambda_t) \,{\rm d} u\,{\rm d} v \\ & \quad = \frac1{(4\pi \lambda_t)^d}\sum_{j=1}^{2d}\int_{{\mathbb R}^{2d}} w_j^2\exp(-|w|^2/4\lambda_t) \,{\rm d} w \\ & \quad = \frac1{(4\pi \lambda_t)^d}\sum_{j=1}^{2d}\Bigl(\int_{{\mathbb R}} w_j^2\exp(-w_j^2/4\lambda_t) \,{\rm d} w_{j}\Bigr)\prod_{\substack{1\le k\le 2d \\ k\not=j}}\int_{{\mathbb R}}\exp(-w_k^2/4\lambda_t) \,{\rm d} w_k \\ & \quad = \frac1{(4\pi \lambda_t)^{1/2}}\sum_{j=1}^{2d}\int_{{\mathbb R}} w_j^2\exp(-w_j^2/4\lambda_t) \,{\rm d} w_{j} \\ & \quad = 4d\lambda_t \end{align*} we obtain \begin{align*} \Vert tLP(t)f\Vert & \le t\Bigl(d 2^{d-1}\frac{\lambda_t'}{\lambda_t }+ 2^d\frac{\lambda_t'}{4\lambda_t^{2}}\cdot 4d\lambda_t\Bigr)M_A M_B \Vert f \Vert \\ & \le 2^{ d+1}dt\frac{\lambda_t'}{\lambda_t }M_A M_B \Vert f \Vert \\ & = 2^{ d+1}dt \cdot \frac{2e^{-t}}{(1+e^{-t})^2} \cdot \frac{1+e^{-t}}{1-e^{-t}}M_A M_B \Vert f\Vert \\ & = 2^{ d+2}d\frac{t}{1-e^{-2t}}e^{-t} M_A M_B \Vert f \Vert \\ & \le 2^{ d+2}d(1+t) e^{-t} M_A M_B \Vert f\Vert. \end{align*} \end{proof} \subsection{Ground states} Let $(A,B)$ be a Weyl pair on the Banach space $X$. Upon passing to the limit $t_1,t_2\to\infty$ in \eqref{eq:asas} one sees that if the limit $$a_\infty(A,B) := \lim_{t\to\infty} a_t(A,B)$$ exists in the weak operator topology of $\mathscr{L}(X)$, then it is a projection. That this limit indeed exists under the assumption that $X$ be reflexive is a consequence of the following lemma. \begin{lemma} Let $(S(t))_{t\ge 0}$ be a $C_0$-semigroup on a reflexive Banach space $X$ and let $(T_t)_{t\ge 0}$ be a uniformly bounded family of operators on $X$ such that $ S(s)\circ T_t = T_t \circ S(s) = T_{t+s}$ for all $s,t\ge 0$.
Then there exists a bounded operator $\pi$ on $X$ such that $\lim_{t\to\infty} T_t x = \pi (x)$ weakly for all $x\in X$. \end{lemma} \begin{proof} Fix $x\in X$. Since $X$ is reflexive, any sequence $t_n\to\infty$ has a subsequence $t_{n_k}\to\infty$ such that $\lim_{k\to\infty} T_{t_{n_k}}x $ exists weakly. Let $\pi(x)$ be this weak limit. We will show that $\pi(x)$ does not depend on the choice of the sequence $t_n\to\infty$, nor on the choice of the weakly convergent subsequence $t_{n_k}\to\infty$. To this end it suffices to show that if both $r_k\to\infty$ and $s_k\to\infty$ are such that the weak limits $y:= \lim_{k\to\infty} T_{r_k}x$ and $y':= \lim_{k\to\infty} T_{s_k}x$ exist, then $y = y'$. By passing to a further subsequence we may assume that $r_k \le s_k$ for all $k$. Then $T_{s_k}x = S(s_k-r_k)T_{r_k}x$ and therefore, for all $x^*\in X^*$, \begin{align*} |\langle T_{s_k}x-T_{r_k}x,x^*\rangle| & = |\langle S(s_k-r_k)T_{r_k}x-T_{r_k}x,x^*\rangle| \\ & \le \Vert T_{r_k}x\Vert \Vert S^*(s_k-r_k)x^*-x^*\Vert_{X^{*}} \\ & \le M\|x\|\Vert S^*(s_k-r_k)x^*-x^*\Vert_{X^{*}}, \end{align*} where $M = \sup_{t\ge 0} \Vert T_t\Vert$. Since $X$ is reflexive, the adjoint semigroup $(S^*(t))_{t\ge 0}$ is strongly continuous (see \cite[Proposition I.5.14]{EngNag}). It follows that $$ |\langle y-y',x^*\rangle| = \lim_{k\to\infty} |\langle T_{s_k}x-T_{r_k}x,x^*\rangle| \le M \|x\|\lim_{k\to\infty} \Vert S^*(s_k-r_k)x^*-x^*\Vert_{X^{*}} = 0.$$ This being true for all $x^*\in X^*$, we conclude that $y=y'$. The operator $\pi$ thus defined is linear and bounded, with norm $\Vert \pi\Vert \le M$. That $\pi (x)= \lim_{t\to\infty}T_t x$ weakly now follows from a standard subsequence argument. \end{proof} \begin{proposition} \label{prop:nullrange} If $(A,B)$ is a Weyl pair on a reflexive Banach space $X$, then the weak operator limit $\pi := \lim_{t\to\infty} a_t(A,B)$ exists in $\mathscr{L}(X)$.
Furthermore, ${\mathsf{N}}(L)={\mathsf{R}}(\pi) $ and $\overline{{\mathsf{R}}(L)}= {\mathsf{N}}(\pi).$ \end{proposition} \begin{proof} For all $t_1,t_2\ge 0$, \eqref{eq:asas} implies $$e^{-t_1L}\circ a_{{t_2}}(A,B) = a_{{t_2}}(A,B)\circ e^{-t_1L} = a_{t_1+t_2} (A,B).$$ Hence the first assertion follows from the lemma. The second assertion is proved by a routine semigroup argument. If $f\in {\mathsf{N}}(L)$, then $a_t(A,B)f = e^{-tL}f =f$ implies $\pi(f) = f$, and conversely if $\pi(f) = f$, then for all $t\ge 0$ and $\phi\in X^*$ we have \begin{align*}\langle e^{-tL}f,\phi\rangle & = \langle f, e^{-tL^*}\phi\rangle= \lim_{s\to\infty}\langle a_s(A,B)f, e^{-tL^*}\phi\rangle \\ & = \lim_{s\to\infty} \langle a_{s+t}(A,B)f,\phi\rangle = \langle \pi(f),\phi\rangle = \langle f,\phi\rangle \end{align*} and therefore $e^{-tL}f=f$. This implies $f\in {\mathsf{D}}(L)$ and $Lf = 0$. This proves ${\mathsf{N}}(L)={\mathsf{R}}(\pi)$. The proof that $\overline{{\mathsf{R}}(L)}= {\mathsf{N}}(\pi)$ is equally simple. \end{proof} In particular we see that $X$ admits the direct sum decomposition ${\mathsf{N}}(L)\oplus \overline{{\mathsf{R}}(L)}$. Such a decomposition holds for every sectorial operator on a reflexive Banach space (see \cite[Proposition 10.1.9]{HNVW2}); the point of the proposition is to identify the associated projection as being given by $\pi$. \section{Transference}\label{sec:transf} Let $X$ be a Banach space. For functions $a\in \mathscr{S}({\mathbb R}^{2d})$, the {\em twisted convolution} with a function $g\in C_{\rm c}({\mathbb R}^{2d};X)$ is defined by \begin{align}\label{eq:twisted} C_{a}g(x,\xi) :=&\,\frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} e^{\frac12i(x\eta-y\xi)} a(y,\eta )g(x-y,\xi-\eta )\,{\rm d} y\,{\rm d} \eta . \end{align} By the pointwise inequality $|C_a g| \le|a|*|g|$ and Young's inequality, $C_{a}$ extends to a bounded operator on $L^p({\mathbb R}^{2d};X)$ for all $1\le p\le \infty$. We begin with a Coifman--Weiss type transference result. 
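Before turning to the transference result, we spell out the Young-inequality bound just stated (a routine verification of our own, using the normalization in \eqref{eq:twisted}): since the phase factor has modulus one, the twisted convolution is dominated pointwise by an ordinary convolution.

```latex
% Pointwise domination: |e^{\frac12 i(x\eta - y\xi)}| = 1, hence
\begin{align*}
|C_{a}g(x,\xi)|
  \le \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} |a(y,\eta)|\,|g(x-y,\xi-\eta)|\,{\rm d} y\,{\rm d}\eta
  = \frac1{(2\pi)^d}\bigl(|a|*|g|\bigr)(x,\xi),
\end{align*}
% and therefore, by Young's inequality (convolution against an L^1 kernel,
% which holds verbatim for X-valued functions),
\begin{align*}
\Vert C_{a}g\Vert_{L^p({\mathbb R}^{2d};X)}
  \le \frac1{(2\pi)^d}\,\Vert a\Vert_{L^1({\mathbb R}^{2d})}\,
      \Vert g\Vert_{L^p({\mathbb R}^{2d};X)},
  \qquad 1\le p\le\infty.
\end{align*}
```

This quantitative form of the boundedness of $C_a$ is all that is needed below.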
\begin{proposition}[Transference]\label{prop:transf} Let $(A,B)$ be a Weyl pair of dimension $d$ on a Banach space $X$ and set $M_A := \sup_{t\in {\mathbb R}} \Vert e^{itA}\Vert$ and $M_B := \sup_{t\in {\mathbb R}} \Vert e^{itB}\Vert$. Let $1\le p<\infty$. \begin{enumerate} \item For all $a\in \mathscr{S}({\mathbb R}^{2d})$ we have \begin{align*} \Vert a(A,B)\Vert \le M_A^2 M_B^2 \Vert C_{\widehat a} \Vert_{\mathscr{L}(L^p({\mathbb R}^{2d};X))}. \end{align*} \item Let $\{a_j: \,j\in J\}$ be a family of functions in $ \mathscr{S}({\mathbb R}^{2d})$. If the family of twisted convolutions $\{C_{\widehat a_j},\, j\in J\}$ is $R$-bounded in $\mathscr{L}(L^p({\mathbb R}^{2d};X))$, then $\{a_j(A,B):\, j\in J\}$ is $R$-bounded in $\mathscr{L}(X)$, and in that case \begin{align*} \mathscr{R}_p(a_j(A,B):\,j\in J) \le M_A^2 M_B^2 \mathscr{R}_p(C_{\widehat a_j}:\, j\in J). \end{align*} \item Let $\{a_j: \,j\in J\}$ be a family of functions in $ \mathscr{S}({\mathbb R}^{2d})$. If the family of twisted convolutions $\{C_{\widehat a_j},\, j\in J\}$ satisfies $$ {\mathbb E} \Big\|\sum_{j \in J} \varepsilon_{j} C_{\widehat a_j}g\Big\| \lesssim \|g\| \quad \forall g\in L^{p}({\mathbb R}^{2d};X), $$ then $${\mathbb E} \Big\|\sum_{j \in J} \varepsilon_{j} a_{j}(A,B)f\Big\| \lesssim \|f\| \quad \forall f \in X. $$ \end{enumerate} \end{proposition} \begin{proof} For $r>0$ we will use the short-hand notation $[-r,r]^{2} = \{(x,\xi)\in {\mathbb R}^{2d}:\, |x|\le r, \, |\xi|\le r\}$. The elementary estimate $ \|a(A,B)\|\leq M_AM_B \|\widehat a\|_{1} $ shows that, for any given $\varepsilon>0$, we may choose $N>0$ so large that the operator \[ a(A,B)_{(N)} f: = \frac1{(2\pi)^d}\int_{\complement [-N,N]^2} \widehat a(u,v)e^{i(uA+v B)} f\,{\rm d} u\,{\rm d} v \] satisfies \[ \|a(A,B)_{(N)}\|\leq \frac{M_AM_B}{(2\pi)^d} \int_{\complement [-N,N]^2} |\widehat a(u,v)| \,{\rm d} u\,{\rm d} v < \varepsilon . 
\] We will therefore concentrate on estimating the norm of the operator \[ a(A,B)^{(N)} f: = \frac1{(2\pi)^d}\int_{[-N,N]^2} \widehat a(u,v)e^{i(uA+v B)} f\,{\rm d} u\,{\rm d} v. \] Accordingly set $\widehat a^{(N)} := 1_{[-N,N]^{2}}\widehat a$. Choose $M$ so large that $\frac{M+N}{M}\leq1+\varepsilon$. Let us write $U(u,v) = e^{i(uA+v B)}$ for brevity. By \eqref{eq:sigma} we have $ U(u,v)\circ U(-u,-v) = I$ and therefore \begin{equation} \|f\| \leq M_AM_B\|U(-u,-v)f\|, \quad f\in X. \end{equation} Averaging over $[-M,M]^2$, for all $1\le p<\infty$ and $f\in X$ we obtain \begin{align*} \! & \| a(A,B)^{(N)} f\|^p\\ &\leq \frac{M_A^pM_B^p}{(2M)^{2d}}\int_{[-M,M]^2} \|U(-u,-v)a(A,B)^{(N)}f\|^p\,{\rm d} u\,{\rm d} v \\ &= \frac{M_A^pM_B^p}{(2M)^{2d}}\int_{[-M,M]^2} \Big\| \frac1{(2\pi)^d} \int_{{\mathbb R}^{2d}} \widehat a^{(N)}(y,\eta )U(-u,-v)U(y,\eta )f\,{\rm d} y\,{\rm d} \eta \,\Big\|^p\,{\rm d} u \,{\rm d} v \\ &= \frac{M_A^pM_B^p}{(2M)^{2d}}\int_{[-M,M]^2}\! \Big\| \frac1{(2\pi)^d}\!\int_{{\mathbb R}^{2d}} e^{\frac12i(u\eta-yv)}\widehat a^{(N)}(y,\eta )U(y-u,\eta -v)f\,{\rm d} y\,{\rm d} \eta \,\Big\|^p\!\!\,{\rm d} u\,{\rm d} v \\ \intertext{using \eqref{eq:sigma}. 
Also ${{\bf 1}}_{[-M-N,M+N]^2 }(y-u,\eta-v)=1$ if $(u,v)\in[-M,M]^2$ and $(y,\eta )\in[-N,N]^2$, so that with $\chi_{M+N}:={{\bf 1}}_{[-M-N,M+N]^2}$ the last expression can be rewritten as} &= \frac{M_A^pM_B^p}{(2M)^{2d}}\int_{[-M,M]^2}\Bigl\Vert \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} e^{\frac12i(u\eta-yv)}\widehat a^{(N)}(y,\eta ) \\ & \qquad\qquad\qquad \times \bigl[\chi_{M+N}(y-u,\eta-v)U(y-u,\eta -v)f\bigr]\,{\rm d} y\,{\rm d} \eta \, \Bigr\Vert^p\,{\rm d} u\,{\rm d} v \\ &=\frac{M_A^pM_B^p}{(2M)^{2d}}\int_{[-M,M]^2} \| C_{\widehat a^{(N)}} [\chi_{M+N}(\cdot,\cdot) U(\cdot,\cdot)f ](u,v)\|^p\,{\rm d} u\,{\rm d} v \\ &\leq \frac{M_A^pM_B^p}{(2M)^{2d}} \| C_{\widehat a^{(N)}} [\chi_{M+N} Uf]\|_{L^p({\mathbb R}^{2d};X)}^p \\ &\stackrel{\rm(i)}{\leq }\frac{M_A^pM_B^p}{(2M)^{2d}} \|C_{\widehat a^{(N)}} \|_{\mathscr{L}(L^p({\mathbb R}^{2d};X))}^p \int_{[-M-N,M+N]^2} \|U(u,v)f\|^p\,{\rm d} u\,{\rm d} v \\ &\leq \frac{M_A^pM_B^p}{(2M)^{2d}}\|C_{\widehat a^{(N)}} \|_{\mathscr{L}(L^p({\mathbb R}^{2d};X))}^p \, (2(M+N))^{2d}M_A^pM_B^p\|f\|^p \\ & \leq (1+\varepsilon)^{2d} M_A^{2p}M_B^{2p} \|C_{\widehat a^{(N)}} \|_{\mathscr{L}(L^p({\mathbb R}^{2d};X))}^p\,\|f\|^p. \end{align*} It follows that \begin{align*} \Vert a(A,B)f\Vert & \le \Vert a(A,B)_{(N)}f\Vert +\Vert a(A,B)^{(N)}f\Vert \\ & \le \varepsilon \Vert f\Vert + (1+\varepsilon)^{2d/p}M_A^{2}M_B^{2} \|C_{\widehat a^{(N)}} \|_{\mathscr{L}(L^p({\mathbb R}^{2d};X))} \Vert f\Vert. \end{align*} Letting $N\to \infty$ in this estimate, and then letting $\varepsilon\downarrow 0$, the desired estimate in (1) is obtained. \smallskip Part (2) is proved in exactly the same way. We replace $a$ by $\sum_{n=1}^N \varepsilon_n a_{j_n}$ (where $j_1,\dots,j_N\in J$ and $(\varepsilon_n)_{n=1}^N$ is a Rademacher sequence) and instead of using one fixed $f$ we use a sequence $(f_n)_{n=1}^N$ to build Rademacher sums; instead of estimating with operator norms in (i), we estimate with $R$-bounds. The same reasoning applies to part (3). 
\end{proof} The next lemma expresses the twisted convolution $C_{\widehat a}$ in terms of the standard pair on $L^2({\mathbb R}^{2d})$: \begin{lemma} \label{lem:twiststand} Let $((Q_1,Q_2), (P_1,P_2))$ be the standard pair of dimension $2d$ on $L^2({\mathbb R}^{2d}),$ i.e., \begin{align*} Q_{1,j}f(x,\xi) = x_jf(x,\xi), \quad & Q_{2,j}f(x,\xi) = \xi_jf(x,\xi), \\ P_{1,j}f(x,\xi) = \frac1i \frac{\partial f}{\partial x_j}(x,\xi),\quad &P_{2,j}f(x,\xi) = \frac1i \frac{\partial f}{\partial \xi_j}(x,\xi), \end{align*} for $1\le j\le d$. The pair $(-\frac12 Q_2-P_1,\frac12Q_1-P_2)$ is a Weyl pair of dimension $d$ on $L^2({\mathbb R}^{2d})$, and for all $a\in \mathscr{S}({\mathbb R}^{2d})$ we have $$ C_{\widehat a} = a\Bigl(-\frac12 Q_2-P_1,\frac12Q_1-P_2\Bigr).$$ \end{lemma} \begin{proof} The proof of the first assertion is immediate. For all $a\in \mathscr{S}({\mathbb R}^{2d})$ and $g\in L^2({\mathbb R}^{2d})$, \begin{align*} \ & a\Bigl(-\frac12 Q_2-P_1,\frac12Q_1-P_2\Bigr)g(x,\xi) \\ & \qquad =\frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \widehat a(u,v)e^{i(u(-\frac12Q_2-P_1)+v(\frac12 Q_1-P_2))}g(x,\xi) \,{\rm d} u\,{\rm d} v \\ & \qquad =\frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \widehat a(u,v)e^{\frac12iuv}e^{iu(-\frac12Q_2-P_1)}e^{iv(\frac12Q_1-P_2)}g(x,\xi) \,{\rm d} u\,{\rm d} v \\ & \qquad = \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \widehat a(u,v)e^{\frac12iuv}e^{-\frac1{2}iuQ_2} e^{-iuP_1}e^{\frac1{2}iv Q_1} e^{-iv P_2}g(x,\xi) \,{\rm d} u\,{\rm d} v \\ & \qquad = \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \widehat a(u,v) e^{\frac1{2}iv Q_1}e^{-\frac1{2}iuQ_2} e^{-iuP_1} e^{-iv P_2}g(x,\xi) \,{\rm d} u\,{\rm d} v \\ & \qquad = \frac1{(2\pi)^d}\int_{{\mathbb R}^{2d}} \widehat a(u,v)e^{\frac12i(v x-\xi u)}g(x-u,\xi-v) \,{\rm d} u\,{\rm d} v \\ & \qquad = C_{\widehat a}g(x,\xi). 
\end{align*} \end{proof} In the setting of the lemma, by the Stone--von Neumann theorem (see \cite[Theorem 14.8]{Hall}), there exist a countable index set $L$ and an orthogonal direct sum decomposition $$ L^2({\mathbb R}^{2d}) = \bigoplus_{\ell\in L} H_{\ell},$$ as well as unitary operators $U_\ell: H_\ell \to L^2({\mathbb R}^d)$, such that for all $\ell\in L$ the following assertions hold: \begin{enumerate} \item $H_{\ell}$ is invariant under each of the groups $e^{it(-\frac12 Q_{2,j}-P_{1,j})}$ and $e^{it(\frac12Q_{1,j}-P_{2,j})}$; \item $U_\ell$ establishes a unitary equivalence of these groups on $H_\ell$ with the groups $e^{itQ_j}$ and $e^{itP_j}$ on $L^2({\mathbb R}^d)$, where $(Q,P)$ is the standard pair on $L^2({\mathbb R}^d)$. \end{enumerate} As a direct consequence we obtain that, for all $\ell\in L$ and $a\in \mathscr{S}({\mathbb R}^{2d})$: \begin{enumerate} \item[(1)$'$] $H_{\ell}$ is invariant under $a(-\frac12 Q_{2}-P_{1},\frac12Q_{1}-P_{2})$; \item[(2)$'$] $U_\ell$ establishes a unitary equivalence of the restriction of $a(-\frac12 Q_{2}-P_{1},\frac12Q_{1}-P_{2})$ to $H_\ell$ and the operator $a(Q,P)$ on $L^2({\mathbb R}^d)$. \end{enumerate} As a result we obtain \begin{equation}\label{eq:scalar} \begin{aligned} \Vert C_{\widehat a}\Vert_{\mathscr{L}(L^2({\mathbb R}^{2d}))} & = \Big\Vert a(-\frac12 Q_{2}-P_{1},\frac12Q_{1}-P_{2})\Big\Vert_{\mathscr{L}(L^2({\mathbb R}^{2d}))} \\ & = \sup_{\ell\in L} \Big\Vert a(-\frac12 Q_{2}-P_{1},\frac12Q_{1}-P_{2})\Big|_{H_\ell}\Bigr\Vert_{\mathscr{L}(H_\ell)} = \Vert a(Q,P)\Vert_{\mathscr{L}(L^2({\mathbb R}^{d}))}. \end{aligned} \end{equation} \begin{remark} In the next section we address the problem of estimating the norm of $C_{\widehat a}$ in the vector-valued setting. Here we wish to point out the general fact that the identity \eqref{eq:scalar} has a simple, albeit not very useful (cf. 
the concluding remark at the end of the section), vector-valued extension in terms of spaces of $\gamma$-radonifying operators. These are defined as follows (comprehensive treatments are given in \cite{HNVW2, Nee}). Let $H$ be a Hilbert space with inner product $(\cdot|\cdot)$ and $X$ be a Banach space. Every finite rank operator $T:H\to X$ can be represented as $$ Th = \sum_{n=1}^N (h|h_n)x_n$$ for some orthonormal sequence $(h_n)_{n=1}^N$ in $H$, and some sequence $(x_n)_{n=1}^N$ in $X$. For such operators $T$ we define $$ \Vert T\Vert_{\gamma(H,X)}^2 := {\mathbb E} \Big\Vert \sum_{n=1}^N \gamma_n x_n\Big\Vert^2, $$ where $(\gamma_n)_{n=1}^N$ is a sequence of independent standard normal random variables (taken real-valued if the scalar field is ${\mathbb R}$ and complex-valued if the scalar field is ${\mathbb C}$; once again, one could insist on using real-valued standard normal variables at the expense of different constants). It is easy to see that this gives a well-defined norm on the space of finite rank operators from $H$ to $X$. Its completion is denoted by $\gamma(H,X)$. If $X$ is a Hilbert space, then $\gamma(H,X)$ is isometric in a natural way to the space of Hilbert--Schmidt operators from $H$ to $X$, and if $X = L^p(M,\mu)$ with $1\le p<\infty$, then one has a natural isomorphism of Banach spaces $$\gamma(H, L^p(M,\mu)) \simeq L^p(M,\mu;H).$$ It is not hard to see (see \cite[Theorem 9.6.1]{HNVW2}) that if $S: H\to H$ is a bounded operator, then the mapping $ h \otimes x \mapsto Sh \otimes x$ uniquely extends to a bounded operator $\widetilde S\in \mathscr{L}(\gamma(H^*,X))$ of the same norm. Here, $H^*$ is the Banach space dual of $H$.
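To make the $L^p$ isomorphism above concrete, we record a standard illustration (see the square-function characterizations in \cite{HNVW2}; it is not needed in the sequel): for an integral operator with scalar kernel $k$, the $\gamma$-norm is a square function of the kernel.

```latex
% Standard illustration of \gamma(H, L^p(M,\mu)) \simeq L^p(M,\mu;H)
% with H = L^2(S): the integral operator
\begin{align*}
Tf = \int_{S} k(\cdot,s) f(s)\,{\rm d} s, \qquad f \in L^2(S),
\end{align*}
% belongs to \gamma(L^2(S), L^p(M,\mu)) precisely when the right-hand side
% below is finite, and then
\begin{align*}
\Vert T\Vert_{\gamma(L^2(S),L^p(M,\mu))}
  \eqsim_{p} \Bigl\Vert \Bigl(\int_{S} |k(\cdot,s)|^2\,{\rm d} s\Bigr)^{1/2}
  \Bigr\Vert_{L^p(M,\mu)}.
\end{align*}
```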
Applying this construction to the twisted convolutions $C_a$ and the operators $a(Q,P)$ with $a\in \mathscr{S}({\mathbb R}^{2d})$, viewed as bounded operators on the Hilbert spaces $L^2({\mathbb R}^{2d})$ and $L^2({\mathbb R}^d)$ respectively, and identifying the duals of these spaces with the spaces themselves via the duality $\langle f,g\rangle = \int fg$ (no conjugation here), we obtain well-defined extensions of these operators to bounded operators on $\gamma(L^2({\mathbb R}^{2d}),X)$ and $\gamma(L^2({\mathbb R}^{d}),X)$ of the same norms. Thus \eqref{eq:scalar} self-improves to $$ \Vert C_{\widehat a}\Vert_{\mathscr{L}(\gamma(L^2({\mathbb R}^{2d}),X))} = \Vert a(Q,P)\Vert_{\mathscr{L}(\gamma(L^2({\mathbb R}^{d}),X))}.$$ This identity suggests that we could try to bound the Weyl calculus in terms of the $\gamma(L^2({\mathbb R}^{2d}),X)$-norm of the twisted convolution. This is possible under a $\gamma$-boundedness assumption: \end{remark} \begin{proposition}[$\gamma$-Transference]\label{prop:g-transf} Let $(A,B)$ be a Weyl pair of dimension $d$ on a Banach space $X$. If the set $$ \{e^{i(uA+vB)}:\, (u,v)\in {\mathbb R}^{2d}\}$$ is $\gamma$-bounded, with $\gamma$-bound $\Gamma$, then for all $a\in \mathscr{S}({\mathbb R}^{2d})$ we have \begin{align*} \Vert a(A,B)\Vert \le \Gamma^2\Vert C_{\widehat a} \Vert_{\mathscr{L}(\gamma(L^2({\mathbb R}^{2d}),X))}. \end{align*} \end{proposition} Similar versions of parts (2) and (3) of Proposition \ref{prop:transf} hold. The proof is a routine adaptation of the proof of Proposition \ref{prop:transf}.
As a corollary we obtain that, if the set $\{e^{i(uA+vB)}: \, (u,v)\in{\mathbb R}^{2d}\}$ is $\gamma$-bounded, with $\gamma$-bound $\Gamma$, then for all $a\in \mathscr{S}({\mathbb R}^{2d})$ we have $$ \Vert a(A,B)\Vert \le \Gamma^2\Vert a(Q,P)\Vert_{\mathscr{L}(\gamma(L^2({\mathbb R}^{d}),X))}.$$ Admittedly, this result is unlikely to be useful: for the standard pair, the $\gamma$-bounded\-ness assumption is satisfied only for $p=2$ (by \cite[Proposition 8.1.16]{HNVW2}). \section{$R$-Sectoriality of $L$}\label{sec:untwist} To apply the transference theory from Section \ref{sec:transf}, ideally one needs to bound twisted convolution operators $C_{\widehat a}$ acting on the Bochner spaces $L^{p}({\mathbb R}^{2d};X)$ in terms of the norm of $a(Q,P)$ for the standard pair, i.e., one needs a vector-valued extension of \eqref{eq:scalar}. We do not know how to do this in general. The $L^p$-theory in the scalar-valued case, considered by Mauceri in \cite{mauceri80}, is already quite subtle and depends on Hilbert space-specific techniques to treat the $p=2$ case. Extending his theory to UMD-valued functions would be interesting in itself (for the new techniques that need to be developed) and would lead to general estimates for the Weyl calculus of Weyl pairs. Here, we just focus on those twisted convolutions needed to study the semigroup generated by $-L=\frac12d-\frac12(A^{2}+B^{2})$. The symbols $a$ involved are such that $C_{\widehat a}$ can be effectively ``untwisted". The main aim of this section is to prove the following result: \begin{theorem}[$R$-Sectoriality]\label{thm:L-R-sect} Let $(A,B)$ be a Weyl pair on a UMD Banach lattice $X$. Then for all $\theta\in (0,\pi)$ the operator $L = \frac12(A^{2}+B^{2}) - \frac12d$ is $R$-sectorial of angle $\theta$. Moreover, for all $\theta\in(0,\frac12\pi)$ the set $ \{(\frac{\pi}{2}-\theta)^{2d} \exp(-zL) \;;\; |\arg(z)|<\theta\} $ is $R$-bounded, with $R$-bound independent of $\theta$.
\end{theorem} The only place in the proof where we use the lattice structure of $X$ is in the following lemma, which reduces the task of proving $R$-boundedness of twisted convolutions to proving $R$-boundedness of (standard) convolutions. \begin{lemma}\label{lem:untwist-Rbd} Let $X$ be a Banach lattice with finite cotype and let $1\le p<\infty$. Suppose $(a_j)_{j\in J}$ and $(b_j)_{j\in J}$ are families of functions in $\mathscr{S}({\mathbb R}^{2d})$ satisfying $$ |{a}_{j}(y,\eta)| \leq |{b}_{j}(y,\eta)| \quad \forall y,\eta \in {\mathbb R}^{d},\ j\in J. $$ If the family of (standard) convolution operators $(\mathscr{C}_{|b_j|})_{j\in J}$ on $L^p({\mathbb R}^{2d};X)$ is $R$-bounded, with $R_p$-bound $\mathscr{R}_p$, then also the family $(C_{{a_j}})_{j\in J}$ on $L^p({\mathbb R}^{2d};X)$ is $R$-bounded, with $R_p$-bound $\lesssim_{p,X} \mathscr{R}_p$. \end{lemma} \begin{proof} Since $L^{p}({\mathbb R}^{2d};X)$ has finite cotype (see \cite[Proposition 7.1.4]{HNVW2}), we may use the Khintchine--Maurey Theorem (see \cite[Theorem 7.2.13]{HNVW2}) to pass from Radem\-acher sums to square functions.
If $j_1,\dots,j_N\in J$ and $g_1,\dots,g_N\in L^p({\mathbb R}^{2d};X)$ are given and $(\varepsilon_n)_{n=1}^N$ is a Rademacher sequence, we thus obtain \begin{align*} \ & {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n C_{{a_{j_n}}} g_{n} \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \\ & \qquad \eqsim \Big\Vert \Big(\sum_{n=1}^N |C_{{a_{j_n}}} g_{n}|^{2} \Big)^{1/2} \Big\Vert_{L^p({\mathbb R}^{2d};X)}^p\\ & \qquad = \int_{{\mathbb R}^{2d}} \Big\Vert \Big(\sum_{n=1}^N \Big|\int_{{\mathbb R}^{2d}} e^{\frac12i(x\eta - y\xi)} {a_{j_n}}(y,\eta) g_{n}(x-y,\xi-\eta)\,{\rm d} y\,{\rm d}\eta\Big|^{2}\Big)^{1/2} \Big\Vert^p\,{\rm d} x\,{\rm d} \xi \\ & \qquad \leq \int_{{\mathbb R}^{2d}} \Big\Vert \Big(\sum_{n=1}^N \Big(\int_{{\mathbb R}^{2d}} |{b_{j_n}}(y,\eta)| |g_{n}(x-y,\xi-\eta)|\,{\rm d} y\,{\rm d}\eta\Big)^{2}\Big)^{1/2} \Big\Vert^p\,{\rm d} x\,{\rm d} \xi \\ & \qquad = \Big\Vert \Big(\sum_{n=1}^N (\mathscr{C}_{|b_{j_n}|}|g_{n}|)^{2}\Big)^{1/2} \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \\ & \qquad \eqsim {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n \mathscr{C}_{|b_{j_n}|} |g_{n}| \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \\ & \qquad \le \mathscr{R}_p^p {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n |g_{n}| \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \\ & \qquad \eqsim \mathscr{R}_p^p \Big\Vert \Bigl(\sum_{n=1}^N |g_{n}|^2\Bigr)^{1/2} \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \\ & \qquad \eqsim \mathscr{R}_p^p {\mathbb E} \Big\Vert \sum_{n=1}^N \varepsilon_n g_{n} \Big\Vert _{L^p({\mathbb R}^{2d};X)}^p \end{align*} with constants depending only on $p$ and $X$. \end{proof} We now consider the kernels relevant to our applications, namely the Fourier transforms of $$a_{z}(x,\xi) = (1+\lambda_{z})^{d} e^{-\lambda_{z}(|x|^{2}+|\xi|^{2})}$$ with $\lambda_{z} = \frac{1-e^{-z}}{1+e^{-z}}$ for $z \in {\mathbb C}$ such that $\Re z>0$. We need an elementary lemma.
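Before stating it, a numerical plausibility check may be useful. The following script (a sketch of ours, standard library only; the constants $0.05$ and $10$ are generous placeholders rather than sharp bounds) samples $\lambda_z$ on rays inside sectors and tests the two bounds asserted in the lemma below:

```python
import cmath
import math

def lam(z: complex) -> complex:
    """lambda_z = (1 - e^{-z}) / (1 + e^{-z}), defined for Re z > 0."""
    q = cmath.exp(-z)
    return (1 - q) / (1 + q)

def check_sector(theta: float, radii) -> None:
    margin = 0.5 * math.pi - theta
    for phi in (-theta, 0.0, 0.5 * theta, theta):   # rays with |arg z| <= theta
        for r in radii:
            lz = lam(cmath.rect(r, phi))
            assert lz.real > 0.0                    # lambda_z stays in the right half-plane
            assert abs(lz) * margin <= 10.0         # |lambda_z| <~ (pi/2 - theta)^{-1}
            assert math.cos(cmath.phase(lz)) >= 0.05 * margin  # cos(arg lambda_z) >~ pi/2 - theta

radii = [0.01 * 1.3 ** k for k in range(40)]        # r from 0.01 up to roughly 280
for theta in (0.1, 0.5, 1.0, 1.4, 1.55):
    check_sector(theta, radii)
print("sector bounds hold at all sampled points")
```

The check is of course no substitute for the proof; it merely illustrates that $\lambda_z$ degenerates only as $\theta\uparrow\frac12\pi$.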
\begin{lemma}\label{lem:theta} For all $0<\theta<\frac12\pi$ and non-zero $z\in {\mathbb C}$ satisfying $|\arg z|\le \theta$ we have $$ (\tfrac12\pi - \theta) \lesssim \cos(\arg{\lambda_{z}}), \quad |\lambda_{z}| \lesssim (\tfrac12\pi - \theta)^{-1}, $$ with constants independent of $\theta$ and $z$. \end{lemma} \begin{proof} It suffices to prove the inequalities for non-zero $z\in {\mathbb C}$ satisfying $|\arg z| = \theta$. Writing $z = r(\cos\theta+i\sin\theta)$ and computing the real and imaginary parts of $\frac{1-e^{-z}}{1+e^{-z}}$ in terms of $r$ and $\theta$, one readily finds that if $|\arg z| \le \theta$, then $$ \tan (\arg(\frac{1-e^{-z}}{1+e^{-z}})) < \frac1{\cos\theta} $$ and consequently $$ \frac1{\cos(\arg(\frac{1-e^{-z}}{1+e^{-z}})) }< (1+\frac1{\cos^2\theta})^{1/2} \le 1+ \frac1{\cos\theta}\lesssim \frac1{\tfrac12\pi - \theta}.$$ Similar elementary estimates show that if $|\arg z| \le \theta$, then $$ |\lambda_z| = \Big|\frac{1-e^{-z}}{1+e^{-z}}\Big| \lesssim \frac1{\cos\theta} \lesssim \frac1{(\tfrac12\pi - \theta)}.$$ \end{proof} This lemma is used to prove the following $R$-boundedness result. \begin{proposition} \label{prop:convR} Let $X$ be a UMD Banach lattice, and let $1<p<\infty$. There exists a constant $M\ge 0$, depending only on $p$ and $X$, such that for all $\theta \in (0,\frac12\pi )$ the family $\{C_{\widehat a_z } \;;\; z\not=0,\, |\arg z| < \theta\}$ is $R$-bounded in $\mathscr{L}(L^p({\mathbb R}^{2d};X))$, with constant $$\mathscr{R}(\{C_{\widehat a_z } \;;\; z\not=0, \,|\arg z| < \theta\}) \le \frac{M}{(\frac{1}{2}\pi-\theta)^{2d}} .$$ \end{proposition} \begin{proof} The space $X$, being UMD, has finite cotype (see \cite[Proposition 7.3.15]{HNVW2}). Fix $\theta \in (0,\frac12\pi )$ and let $z\in {\mathbb C}$ be a non-zero element such that $|\arg z|< \theta$.
Writing $t = 1/\Re(1/\lambda_{z}) = |\lambda_{z}|/\cos(\arg{\lambda_{z}})$, for all $y,\eta \in {\mathbb R}^{d}$ we have \begin{align*} |\widehat a_z (y,\eta)| &\eqsim |1+\lambda_{z}|^{d} |\lambda_{z}|^{-d} e^{-\cos(\arg{\lambda_{z}})(|y|^{2}+|\eta|^{2})/4|\lambda_{z}|} \\ & \lesssim |1+\lambda_{z}|^{d} (\cos(\arg{\lambda_{z}}))^{-d} t^{-d} e^{-(|y|^{2}+|\eta|^{2})/4t} \end{align*} with constants depending only on $d$. Hence by Lemma \ref{lem:theta}, \begin{align*} (\tfrac12\pi -\theta)^{2d}|\widehat a_z (y,\eta)| \lesssim t^{-d} e^{-\frac{(|y|^{2}+|\eta|^{2})}{4t}}=:b_{t}(y,\eta), \end{align*} with constants depending only on $d$. Hence, by Lemma \ref{lem:untwist-Rbd}, $$ \mathscr{R}(\{(\tfrac12\pi -\theta)^{2d} C_{\widehat a_z } \;;\; z\not=0,\, |\arg z| < \theta\}) \lesssim \mathscr{R}(\{\mathscr{C}_{b_{t}} \;;\; t>0\}), $$ with a constant depending only on $p$ and $X$. Noting that $\mathscr{C}_{b_{t}}$ is a constant multiple of $\exp(t\Delta \otimes I_{X})$, the $R$-boundedness of the family $\{\mathscr{C}_{b_{t}} \;;\; t>0\}$ follows from the fact that $-\Delta \otimes I_{X}$ has a bounded $H^{\infty}$-calculus on $L^p({\mathbb R}^{2d};X)$ \cite[Theorem 10.2.25]{HNVW2}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:L-R-sect}] Let $(P(t))_{t\ge 0}$ be the analytic $C_0$-semigroup generated by $-L = \frac12d-\frac12(A^2+B^2)$. Fix $\theta \in (0, \frac12{\pi})$. By \cite[Proposition 10.3.3]{HNVW2} it suffices to show that the set $$ V_\theta :=\bigl\{P(z):\,z\not=0,\,|\arg z| < \theta\bigr\} $$ is $R$-bounded with $$\mathscr{R}(V_\theta)\lesssim_{p,X} \frac{1}{(\frac12{\pi}-\theta)^{2d}}.$$ By Theorem \ref{thm:A2B2} and Proposition \ref{prop:transf}, for this it suffices to show that the set $$ V'_\theta :=\bigl\{C_{\widehat a_z }:\,z\not=0,\,|\arg z| < \theta\bigr\} $$ is $R$-bounded with $$\mathscr{R}(V'_\theta)\lesssim_{p,X} \frac{1}{(\frac12{\pi}-\theta)^{2d}}.$$ This has been done in Proposition \ref{prop:convR}.
\end{proof} \section{Functional calculus of $L$}\label{sec:Hinfty} In this section we prove that boundedness of the Weyl calculus of a Weyl pair $(A,B)$ implies a spectral multiplier theorem for the operator $L= \frac12(A^2+B^2)-\frac12d$, acting on a UMD lattice $X$. This is done by applying the theory developed in \cite{KalWei} to obtain a holomorphic functional calculus of angle zero from square function estimates and appropriate $R$-sectoriality bounds. The precise form of the latter then allows us to apply the theory developed in \cite{KriegW} to extend this holomorphic functional calculus to a full H\"ormander type spectral multiplier theorem. \begin{theorem}[Bounded $H^\infty$-calculus]\label{thm:HinftyL} If $(A,B)$ is a Weyl pair of dimension $d$ on a UMD lattice $X$ with a bounded Weyl calculus of type $0$, then $L:=\frac12(A^2+B^2)-\frac12d$ has a bounded $H^\infty(\Sigma_\theta)$-calculus for all $\theta\in (0,\pi)$. \end{theorem} For the proof of the theorem we need two lemmas. The first provides an expression for derivatives of the exponentials in the Weyl calculus representation formula of the operators $e^{-tL}$ of Theorem \ref{thm:A2B2}. 
\begin{lemma}\label{lem:pol} For all multi-indices $\alpha,\beta\in{\mathbb N}^d$ there is a polynomial $p_{\alpha,\beta}$ in the variables $(x,\xi)\in {\mathbb R}^{2d}$, of degree $\beta$ in $x$ and degree $\alpha$ in $\xi$, such that for all $\lambda>0$ we have $$\partial_{\xi} ^{\alpha} \partial_{x} ^{\beta}e^{- \lambda(| x|^2+|\xi|^2)}= \sqrt{\lambda}^{|\alpha|+|\beta|} p_{\alpha,\beta}(\sqrt{\lambda} x,\sqrt{\lambda}\xi)e^{-\lambda(|x|^2+|\xi|^2)}.$$ \end{lemma} \begin{proof} If $p$ is a polynomial in $2d$ variables $x=(x_1,\dots,x_d)$ and $\xi = (\xi_1,\dots,\xi_d)$, of degree $\gamma=(\gamma_1,\dots,\gamma_d)$ in $x$ and $\delta=(\delta_1,\dots,\delta_d)$ in $\xi$, then for any $\lambda>0$, \begin{align*} & \partial_{x_j} [p( \sqrt{\lambda} x, \sqrt{\lambda} \xi)e^{- \lambda(| x|^2+|\xi|^2)}] \\ & \qquad \qquad = [ \sqrt{\lambda} (\partial_{x_j}p)( \sqrt{\lambda} x, \sqrt{\lambda} \xi) -2 \lambda x_j p(\sqrt{\lambda} x, \sqrt{\lambda} \xi)]e^{-\lambda (|x|^2+|\xi|^2)} \\ & \qquad \qquad = \sqrt{\lambda} q(\sqrt{\lambda} x, \sqrt{\lambda} \xi)e^{-\lambda (|x|^2+|\xi|^2)}, \end{align*} where $q$ is a polynomial of degree $(\gamma_1,\dots,\gamma_{j-1},\gamma_j+1,\gamma_{j+1},\dots,\gamma_d)$ in $x$ and of degree $\delta$ in $\xi$. A similar identity holds for the partial derivatives with respect to $\xi_j$, which add one to the degree in the variable $\xi_j$. The lemma now follows by induction on $\alpha$ and $\beta$. \end{proof} As an application of the preceding lemma, the next lemma provides a uniform bound on the derivatives of certain signed sums of exponentials which will be used later to prove that certain related sums belong to the symbol class $S^{0}$ uniformly. As before, let $\lambda_{t} = \frac{1-e^{-t}}{1+e^{-t}}$ for $t>0$.
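The identity of Lemma \ref{lem:pol} can be restated as $\partial_\xi^{\alpha}\partial_x^{\beta}e^{-\lambda(|x|^2+|\xi|^2)}\big|_{(x,\xi)} = \lambda^{(|\alpha|+|\beta|)/2}\,\partial_\xi^{\alpha}\partial_x^{\beta}e^{-(|x|^2+|\xi|^2)}\big|_{(\sqrt\lambda x,\sqrt\lambda \xi)}$, which can be spot-checked numerically; the sketch below (ours, not from the text; finite differences in $d=1$ with $\alpha=2$, $\beta=1$, and ad hoc tolerances) is purely illustrative:

```python
import math

def f(lam: float, x: float, xi: float) -> float:
    return math.exp(-lam * (x * x + xi * xi))

def d_xi2_dx(lam: float, x: float, xi: float, h: float = 1e-3) -> float:
    """Central-difference approximation of (d/dxi)^2 (d/dx) f at (x, xi)."""
    def g(xx: float) -> float:
        return (f(lam, xx, xi + h) - 2.0 * f(lam, xx, xi) + f(lam, xx, xi - h)) / h ** 2
    return (g(x + h) - g(x - h)) / (2.0 * h)

# Lemma lem:pol with alpha = 2, beta = 1 (so |alpha| + |beta| = 3) predicts
#   D_lam(x, xi) = lam^{3/2} * D_1(sqrt(lam) x, sqrt(lam) xi).
for lam in (0.5, 2.0, 5.0):
    s = math.sqrt(lam)
    for (x, xi) in ((0.3, 0.4), (-0.7, 0.2), (1.1, -0.5)):
        lhs = d_xi2_dx(lam, x, xi)
        rhs = lam ** 1.5 * d_xi2_dx(1.0, s * x, s * xi)
        assert abs(lhs - rhs) < 1e-3 * (1.0 + abs(rhs))
print("scaling identity verified at sampled points")
```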
\begin{lemma}\label{lem:exp-symb} For all multi-indices $\alpha,\beta\in{\mathbb N}^d$ such that $|\alpha|+|\beta|\not=0$ the functions $$ \kappa_{k,\epsilon,s}(x,\xi) :=\sum_{j=1}^k \epsilon_j \exp(-\lambda_{2^{-j} s}(|x|^2+|\xi|^2))$$ satisfy $$\sup_{(x,\xi)\in {\mathbb R}^{2d}}\, \langle\xi\rangle^{|\alpha|}|\partial_\xi^{\alpha}\partial_x^\beta \kappa_{k,\epsilon,s}(x,\xi)| < \infty $$ uniformly with respect to $k\ge 1$, $\epsilon = (\epsilon_j)_{j=1}^k\in \{\pm 1\}^k$, and $s\in [1,2]$. \end{lemma} \begin{proof} Let us set $\mu_{j,s} = \lambda_{2^{-j}s}$ for brevity. Given any two multi-indices $\alpha,\beta \in {\mathbb N}^{d}$ such that $|\alpha|+|\beta|\not=0$ we may estimate, using Lemma \ref{lem:pol}, \begin{align*} \ & \langle\xi\rangle^{|\alpha|}|\partial_\xi^{\alpha}\partial_x^\beta \kappa_{k,\epsilon,s}(x,\xi)| \\ & \qquad \le \langle\xi\rangle^{|\alpha|} \sum_{j=1}^k \sqrt{\mu_{j,s}}^{|\alpha|+|\beta|} |p_{\alpha,\beta}(\sqrt{\mu_{j,s}}x,\sqrt{\mu_{j,s}}\xi)| \exp(-\mu_{j,s}(|x|^2+|\xi|^{2})). \intertext{For $|\xi|\le 1$ we estimate the right-hand side by} & \qquad \lesssim_\alpha \sum_{j=1}^k \sqrt{\mu_{j,s}}^{|\alpha|+|\beta|} |p_{\alpha,\beta}(\sqrt{\mu_{j,s}}x,\sqrt{\mu_{j,s}}\xi)| \exp(-\mu_{j,s}(|x|^2+|\xi|^{2})) \\ & \qquad \lesssim_{\alpha,\beta} \sum_{j=1} ^{k}\sqrt{\mu_{j,s}}^{|\alpha|+|\beta|} \\ & \qquad \lesssim_{\alpha,\beta} \sum_{j=1} ^{k} 2^{-\frac12j(|\alpha|+|\beta|)} \\ & \qquad \lesssim_{\alpha,\beta} 1 \intertext{where we used that $\sup_{(x',\xi')\in {\mathbb R}^{2d}}|p_{\alpha,\beta}(x',\xi')|\exp(-(|x'|^2+|\xi'|^{2}))<\infty$ and $\mu_{j,s} \lesssim 2^{-j}$; while for $|\xi|>1$ we may estimate it by } & \qquad \lesssim_{\alpha} \sum_{j=1}^k (\sqrt{\mu_{j,s}}|\xi|)^{|\alpha|+|\beta|} |p_{\alpha,\beta}(\sqrt{\mu_{j,s}}x,\sqrt{\mu_{j,s}}\xi)|\exp(-\mu_{j,s}(|x|^2+|\xi|^{2})) \\ & \qquad \lesssim_{\alpha,\beta} 1, \end{align*} where the last step follows by an application of \cite[Proposition H.2.3]{HNVW2}.
In all these estimates, the constants are uniform in $k$, $\epsilon$, and $s$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:HinftyL}] By Theorem \ref{thm:L-R-sect} $L$ is $R$-sectorial of angle $\theta$ for any $\theta\in (0,\pi)$. Hence by \cite[Theorem 10.4.9]{HNVW2} it suffices to show that $$ \|f\|^2 \sim\underset{s \in [1,2]}{\sup}\sup_{N\in {\mathbb N}}{\mathbb E} \Big\|\sum _{|j|\le N} \varepsilon_{j}(\exp(-2^{j+1}sL) - \exp(-2^{j}sL))f\Big\|^2 \quad \forall f \in {\mathsf{D}}(L)\cap {\mathsf{R}}(L), $$ where $(\varepsilon_j)_{j\in{\mathbb Z}}$ is a Rademacher sequence, noting that the function $z\mapsto \exp(-2z) - \exp(-z)$ belongs to $H^1(\Sigma_\theta)\cap H^\infty(\Sigma_\theta)$ for each $\theta\in (0,\frac12\pi)$ using the notation of \cite[Chapter 10]{HNVW2}. It actually suffices to prove the one-sided inequality \begin{align}\label{eq:onesided} \underset{s \in [1,2]}{\sup}\sup_{N\in{\mathbb N}}{\mathbb E} \Big\|\sum _{|j|\le N} \varepsilon_{j}(\exp(-2^{j+1}sL) - \exp(-2^{j}sL))f\Big\|^2 \lesssim \|f\|^2 \quad \forall f \in X, \end{align} since the reverse inequality (for $f\in \overline{{\mathsf{R}}(L)} = \overline{{\mathsf{D}}(L)\cap {\mathsf{R}}(L)}$) will then follow by duality as in the proof of \cite[Theorem 10.4.4(3)]{HNVW2}, noting that the pair of adjoint operators $(B^*,A^*)$ is a Weyl pair in the dual lattice $X^{*}$ (which is UMD by \cite[Proposition 4.2.17]{HNVW1} and \cite[Proposition 7.5.15]{HNVW2}). Referring to the direct sum decomposition $X = {\mathsf{N}}(L) \oplus\overline{{\mathsf{R}}(L)}$ provided by Proposition \ref{prop:nullrange}, we will prove \eqref{eq:onesided} separately for $f\in {\mathsf{N}}(L)$ and $f\in\overline{{\mathsf{R}}(L)}$. For $f \in {\mathsf{N}}(L)$, \eqref{eq:onesided} is immediate from the fact that $\exp(-tL)f=f$. For $f\in \overline{{\mathsf{R}}(L)}$ we consider indices $j\in {\mathbb N}$ and $j\in {\mathbb Z}\setminus{\mathbb N}$ separately. 
For $j\in {\mathbb N}$ and $f\in {\mathsf{R}}(L)$, say $f = Lg$, we use that $\exp(-tL)f = L\exp(-tL)g$ decays to $0$ exponentially as $t\to\infty$ by Corollary \ref{cor:spectral}. In combination with the triangle inequality and the contraction principle, this gives \begin{align*} \ & \underset{s \in [1,2]}{\sup}\sup_{N\in{\mathbb N}}\Bigl({\mathbb E} \Big\| \sum_{j=0}^N \varepsilon_{j}(\exp(-2^{j+1}sL) - \exp(-2^{j}sL))f\Big\|^2\Bigr)^{1/2} \\ & \qquad \le 2\underset{s \in [1,2]}{\sup}\sup_{N\in{\mathbb N}}\Bigl({\mathbb E} \Big\| \sum_{j=0}^N \varepsilon_{j}\exp(-2^{j}sL)f\Big\|^2\Bigr)^{1/2} \\ & \qquad \le 2 \underset{s \in [1,2]}{\sup}\sup_{N\in{\mathbb N}}\sum_{j=0}^{N} \|\exp(-2^{j}sL)f\| \\ & \qquad \lesssim\|f\|. \end{align*} By continuity, this estimate extends to $f\in \overline{{\mathsf{R}}(L)}$. Let $$\widetilde a_{t}(x,\xi) := a_{2t}(x,\xi)-a_{t}(x,\xi),$$ where as always $a_t(x,\xi) = (1+\lambda_{t})^{d}e^{-\lambda_{t}(|x|^{2}+|\xi|^{2})} $ with $\lambda_t:= \frac{1-e^{-t}}{1+e^{-t}}$. In view of the cases already dealt with, the proof will be complete once we have shown that, for all $f\in \overline{{\mathsf{R}}(L)}$, \begin{equation} \label{eq:sfe} \sup_{s\in [1,2]}\sup_{N\ge 1}\mathbb{E}\Big\|\sum _{j=1} ^{N} \varepsilon_{j} \widetilde a_{2^{-j}s}(A,B)f\Big\|^2\lesssim \Vert f\Vert^2 \end{equation} with a constant independent of $f$. Set \begin{align*}\widetilde b_{t} :& = (1+\lambda_{2t})^{-d}a_{2t} - (1+\lambda_{t})^{-d}a_{t} \end{align*} so that \begin{align*} \widetilde a_t = \widetilde b_{t} - ((1+\lambda_{2t})^{-d}-1)a_{2t} + ((1+\lambda_{t})^{-d}-1)a_t. \end{align*} We now take $t = 2^{-j}s$ and estimate each of the resulting three sums separately. Fix an integer $N \ge 1$.
We first estimate \begin{align*} \sup_{s\in [1,2]}&\sup_{N\ge 1}\mathbb{E}\Big\|\sum _{j=1} ^{N} \varepsilon_{j} ((1+\lambda_{2^{-j+1}s})^{-d}-1)a_{2^{-j+1}s}(A,B)f\Big\|^2 \\ &\lesssim \sup_{s\in [1,2]} \Bigl(\sum _{j=1} ^{\infty} |(1+\lambda_{2^{-j+1}s})^{-d}-1| \|\exp(-2^{-j+1}sL)f \|\Bigr)^2 \\ & \lesssim_d \Bigl(\sum _{j=0} ^{\infty} 2^{-j} \Vert f\Vert\Bigr)^2 \lesssim \Vert f\Vert^2 , \end{align*} where we used the bound $|(1+\lambda_{2^{-j+1}s})^{-d}-1|\lesssim_d 2^{-j}$ together with the uniform boundedness of the operators $\exp(-tL) = P(t)$. Similarly, $$ \sup_{s\in [1,2]} \sup_{N\ge 1}\mathbb{E}\Big\|\sum_{j=1}^{N} \varepsilon_{j} ((1+\lambda_{2^{-j}s})^{-d}-1)a_{2^{-j}s}(A,B)f\Big\|^2 \lesssim \Vert f\Vert^2.$$ To prove \eqref{eq:sfe}, it therefore remains to show that \begin{equation*} \sup_{s\in [1,2]}\sup_{N\ge 1}\mathbb{E}\Big\|\sum_{j=1}^{N} \varepsilon_{j} \widetilde b_{2^{-j}s}(A,B)f\Big\|^2\lesssim \Vert f\Vert^2 . \end{equation*} To this end we claim that the functions $$\widetilde\kappa_{N,\epsilon,s}:=\sum_{j=1}^N \epsilon_{j} \widetilde b_{2^{-j}s}$$ belong to the symbol class $S^0$, uniformly in $N$, $\epsilon\in \{\pm 1\}^N$, and $s\in [1,2]$. Since by assumption $(A,B)$ has a bounded Weyl calculus of type $0$, this claim, once it has been proved, will prove the theorem. We have \begin{align*} |\widetilde\kappa_{N,\epsilon,s}(x,\xi)| & \le \sum _{j=1} ^{N} |\widetilde b_{2^{-j}s}(x,\xi)| \\ & = \sum_{j=1} ^N \Bigl|\exp(-\lambda_{2^{-j+1}s}(|x|^{2}+|\xi|^{2}))-\exp(-\lambda_{2^{-j}s}(|x|^{2}+|\xi|^{2}))\Bigr| \\ & = \sum_{j=1} ^N \bigl(\exp(-\lambda_{2^{-j}s}(|x|^{2}+|\xi|^{2}))-\exp(-\lambda_{2^{-j+1}s}(|x|^{2}+|\xi|^{2}))\bigr) \\ & \leq 1 \end{align*} using that $t\mapsto\lambda_t$ is increasing and a telescoping argument in the last step. In combination with Lemma \ref{lem:exp-symb} (which remains true if we replace the summation $\sum_{j=1}^N$ by $\sum_{j=0} ^{N-1}$), this proves the claim.
\end{proof} \begin{remark}\label{rem:no-shift} For the standard pair $(Q,P)$ on $L^p({\mathbb R}^d;X)$ with $1<p<\infty$ and $X$ any UMD space, the operator $\frac12(Q^2+P^2)-\frac12d$ is $R$-sectorial and has a bounded $H^\infty$-calculus for any angle $\theta\in (0,\pi)$. Following the lines of \cite{MaaNee}, this follows from the results of \cite{BCCFR} (see also \cite{AFT}). \end{remark} Using the theory developed in \cite{KriegW}, we can extend the functional calculus of $L$ from $H^{\infty}(\Sigma_{\theta})$ to an appropriate H\"ormander class. We thus obtain a calculus in one of their $\mathscr{H}_{2} ^{\beta}$ classes. As pointed out in \cite[Remark 3.3]{KriegW}, these classes are slightly larger (but more complicated to define) than the standard H\"ormander classes of functions $f \in C^{m}[0,\infty)$ satisfying $$ \underset{R>0}{\sup} \ R^{2k} \int _{\frac12 R} ^{2R} |f^{(k)}(t)|^{2} \,\frac{{\rm d}t}{R}<\infty, $$ for all $k=0, \dots , m$. Note that the latter class contains all smooth functions with compact support in $(0,\infty)$. \begin{theorem}\label{thm:Horm} If $(A,B)$ is a Weyl pair on a UMD lattice $X$ with a bounded Weyl calculus of type $0$, then $L=\frac12(A^{2}+B^{2})-\frac12d$ has an $R$-bounded $\mathscr{H}_{2} ^{2d+\frac{1}{2}}$-H\"ormander calculus. \end{theorem} \begin{proof} By Theorems \ref{thm:L-R-sect}, \ref{thm:HinftyL}, the assumptions of \cite[Theorem 7.1]{KriegW} are satisfied (note that UMD lattices have the required property $(\alpha)$ by \cite[Theorem 7.5.20; see the Notes to this section for the terminology]{HNVW2}). \end{proof} \section{Open problems} As explained in the introduction, this paper is mostly meant as a foundation for the development of pseudo-differential calculi in ``rough'' settings. We nonetheless think that the general theory of Weyl pairs presented here is also worth developing further in its own right. 
This would include solving the following problems: \begin{enumerate} \item[\rm(1)] To extend Mauceri's results on twisted convolutions \cite{mauceri80} to Bochner spaces $L^{p}({\mathbb R}^{2d};X)$, where $X$ is UMD and $p\in (1,\infty)$. \end{enumerate} An affirmative answer would automatically solve the next problem. \begin{enumerate} \item[\rm(2)] To extend Theorem \ref{thm:type0} to general Weyl pairs $(A,B)$. \end{enumerate} As observed in Remark \ref{rem:no-shift}, for standard pairs on $L^p({\mathbb R}^d;X)$ with $1<p<\infty$, the conclusions of Theorems \ref{thm:L-R-sect}, \ref{thm:HinftyL}, and \ref{thm:Horm} hold for arbitrary UMD Banach spaces $X$. In the three theorems for general Weyl pairs, the lattice structure of $X$ was only used through the proof of Lemma \ref{lem:untwist-Rbd}. Thus one may pose the following problem: \begin{enumerate} \item[\rm(3)] To decide whether Theorems \ref{thm:L-R-sect}, \ref{thm:HinftyL}, and \ref{thm:Horm} hold for arbitrary UMD spaces $X$. \end{enumerate} An affirmative answer to the first problem could possibly also solve the third problem, since it might pave the way for an alternative proof via transference. For these three problems, studying the particular case where $X$ is a non-commutative $L^p$-space would be particularly interesting, yet potentially much simpler than the general case (thanks to the availability of both domination and extrapolation techniques). \begin{enumerate} \item[\rm(4)] To prove an analogue of the Stone--von Neumann uniqueness theorem for Weyl pairs. \end{enumerate}
\section{Appendices} \section*{Appendix A} \label{subsec::threeD} \nocite{Costabel_2000aa} All of the estimates presented above carry over to the three-dimensional setting as well, going back to \cite{Veiga_2014aa,Buffa_2011aa}. We will briefly go over the construction and state the result corresponding to Theorem \ref{lem::multiconv}. For $p>0$ we define the spline complex on $[0,1]^3$ via \begin{align} \begin{aligned} \S^0_{\pmb p,\pmb \Xi}([0,1]^3)\coloneqq {}&{} S_{p_1,p_2,p_3}(\Xi_1,\Xi_2,\Xi_3),\\ \pmb \S^1_{\pmb p,\pmb \Xi}([0,1]^3)\coloneqq {}&{} S_{p_1-1,p_2,p_3}(\Xi_1',\Xi_2,\Xi_3) \times \\ &\qquad \times S_{p_1,p_2-1,p_3}(\Xi_1,\Xi_2',\Xi_3) \times \\&\qquad\qquad \times S_{p_1,p_2,p_3-1}(\Xi_1,\Xi_2,\Xi_3'), \\ \pmb \S^2_{\pmb {p},\pmb \Xi}([0,1]^3)\coloneqq {}&{} S_{p_1,p_2-1,p_3-1}(\Xi_1,\Xi_2',\Xi_3') \times \\ &\qquad \times S_{p_1-1,p_2,p_3-1}(\Xi_1',\Xi_2,\Xi_3') \times \\&\qquad\qquad \times S_{p_1-1,p_2-1,p_3}(\Xi_1',\Xi_2',\Xi_3), \\ \S^3_{\pmb p,\pmb \Xi}([0,1]^3)\coloneqq {}&{} S_{p_1-1,p_2-1,p_3-1}(\Xi_1',\Xi_2',\Xi_3'). \end{aligned} \end{align} Let $f_0,$ $\pmb f_1,$ $\pmb f_2,$ $f_3$ be sufficiently smooth. We can use the transformations \begin{align} \begin{aligned} \iota_0(\pmb F)(f_0)&\coloneqq f_0\circ\pmb F,& \iota_1(\pmb F)(\pmb f_1)&\coloneqq (d\pmb F)^\top (\pmb f_1\circ \pmb F),\\ \iota_2(\pmb F)(\pmb f_2)&\coloneqq \det(d\pmb F) (d\pmb F)^{-1} (\pmb f_2\circ \pmb F),& \iota_3(\pmb F)(f_3)&\coloneqq \det(d\pmb F) (f_3\circ \pmb F), \end{aligned} \end{align} to define the corresponding spaces in the single patch physical domain as in \eqref{def::singlepatchphysical}, cf.~\cite{Hiptmair_2002aa}.
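To make the transformations concrete, the following sketch (ours; the helper names are hypothetical, and we restrict to an affine patch map $\pmb F(\hat{\pmb x})=J\hat{\pmb x}+\pmb b$, so that $d\pmb F\equiv J$ is constant) implements the four pullbacks in plain Python:

```python
from typing import Callable, List

Vec = List[float]
Mat = List[List[float]]  # 3x3, row-major

def addv(u: Vec, v: Vec) -> Vec:
    return [u[i] + v[i] for i in range(3)]

def matvec(A: Mat, v: Vec) -> Vec:
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(A: Mat) -> Mat:
    return [[A[j][i] for j in range(3)] for i in range(3)]

def det3(A: Mat) -> float:
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def inv3(A: Mat) -> Mat:
    # adjugate / determinant, via the cyclic cofactor formula
    d = det3(A)
    return [[(A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
            - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]) / d
            for i in range(3)] for j in range(3)]

# Pullbacks for the affine patch map F(xh) = J xh + b, so dF = J everywhere.
def iota0(J: Mat, b: Vec, f0: Callable[[Vec], float]) -> Callable[[Vec], float]:
    return lambda xh: f0(addv(matvec(J, xh), b))              # f0 o F

def iota1(J: Mat, b: Vec, f1: Callable[[Vec], Vec]) -> Callable[[Vec], Vec]:
    Jt = transpose(J)
    return lambda xh: matvec(Jt, f1(addv(matvec(J, xh), b)))  # (dF)^T (f1 o F)

def iota2(J: Mat, b: Vec, f2: Callable[[Vec], Vec]) -> Callable[[Vec], Vec]:
    d, Ji = det3(J), inv3(J)
    # det(dF) (dF)^{-1} (f2 o F)
    return lambda xh: [d * c for c in matvec(Ji, f2(addv(matvec(J, xh), b)))]

def iota3(J: Mat, b: Vec, f3: Callable[[Vec], float]) -> Callable[[Vec], float]:
    d = det3(J)
    return lambda xh: d * f3(addv(matvec(J, xh), b))          # det(dF) (f3 o F)

# Example: J = diag(2, 3, 4), b = 0, constant field (1, 1, 1).
J = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]]
b = [0.0, 0.0, 0.0]
g2 = iota2(J, b, lambda x: [1.0, 1.0, 1.0])
print(g2([0.2, 0.1, 0.0]))  # det(J) J^{-1} (1,1,1) = (12, 8, 6), up to rounding
```

For this diagonal example the $\iota_2$-pullback returns $\det(J)\,J^{-1}(1,1,1)^\top=(12,8,6)^\top$ independently of the evaluation point, which matches the Piola-type transformation rule above.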
Now, the projections $\tilde \Pi_{\pmb p,\pmb \Xi,\Omega}^0$, $\tilde {\pmb\Pi}_{\pmb p,\pmb \Xi,\Omega}^{1}$, $\tilde {\pmb\Pi}_{\pmb p,\pmb \Xi,\Omega}^{2}$, and $\tilde \Pi_{\pmb p,\pmb \Xi,\Omega}^3$ w.r.t.~the reference domain for $\pmb \Xi = [\Xi_1,\Xi_2,\Xi_3]$, defined in complete analogy to \eqref{def::commtilde}, commute with the differential operators {$\pmb\grad,\pmb\curl$ and $\div$.} By properties of the pullbacks, cf.~\cite[{Sec.~5.1}]{Veiga_2014aa}, this holds for the physical domain as well. The three-dimensional global B-spline projections are then defined as \begin{align*} \tilde\Pi^0_\Omega\coloneqq \hspace{-.2cm}{}&{}\bigoplus_{0\leq j< N}\hspace{-.1cm} \left((\iota_{0}(\pmb F_j))^{-1}\circ\tilde\Pi_{\pmb p,\pmb \Xi,\Omega}^0\circ\iota_{0}(\pmb F_j)\right),& \tilde{\pmb\Pi}^1_\Omega\coloneqq \hspace{-.2cm}{}&{}\bigoplus_{0\leq j< N}\hspace{-.1cm} \left((\iota_{1}(\pmb F_j))^{-1}\circ\tilde{\pmb\Pi}_{\pmb p,\pmb \Xi,\Omega}^1\circ\iota_{1}(\pmb F_j)\right),\\ \tilde{\pmb\Pi}^2_\Omega\coloneqq \hspace{-.2cm}{}&{}\bigoplus_{0\leq j< N}\hspace{-.1cm} \left((\iota_{2}(\pmb F_j))^{-1}\circ\tilde{\pmb\Pi}_{\pmb p,\pmb \Xi,\Omega}^2\circ\iota_{2}(\pmb F_j)\right),& \tilde\Pi^3_\Omega\coloneqq \hspace{-.2cm}{}&{}\bigoplus_{0\leq j< N}\hspace{-.1cm} \left((\iota_{3}(\pmb F_j))^{-1}\circ\tilde\Pi_{\pmb p,\pmb \Xi,\Omega}^3\circ\iota_{3}(\pmb F_j)\right). \end{align*} {In complete analogy to} the proof of Theorem \ref{lem::multiconv}, one can achieve the following result for the three-dimensional multipatch spline complex. \begin{corollary}\label{volumetricmulticonv} Let the volumetric analogue of Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} be satisfied. Assume the functions $f_1,$ $\pmb f_2,$ $\pmb f_3,$ $f_4$ to be sufficiently smooth, i.e., such that the norms and interpolation operators below are well defined.
Then one finds, for integers $s$ as below, \begin{align*} \norm{f_1 - \tilde\Pi^0_\Omega f_1}_{H^r(\Omega)} &\lesssim h^{s-r} \norm{f_1}_{{H}^s_{\mathrm{pw}}(\Omega)},&& 3\leq s\leq p+1,\\ \norm{\pmb f_2 - \tilde{\pmb\Pi}^1_\Omega \pmb f_2}_{\pmb H(\curl,\Omega)} &\lesssim h^s \norm{\pmb f_2}_{{\pmb H}^s_{\mathrm{pw}}(\curl,\Omega)},&& 2 < s\leq p,\\ \norm{\pmb f_3 - \tilde{\pmb\Pi}^2_\Omega \pmb f_3}_{\pmb H(\div,\Omega)} &\lesssim h^s \norm{\pmb f_3}_{{\pmb H}^s_{\mathrm{pw}}(\div,\Omega)},&& 1 < s\leq p,\\ \norm{f_4 - \tilde\Pi^3_\Omega f_4}_{L^2(\Omega)} &\lesssim h^s \norm{f_4}_{{H}^s_{\mathrm{pw}}(\Omega)},&& 0 \leq s\leq p, \end{align*} for $r = 0,1$. \end{corollary} \section{Introduction} Since its introduction by Hughes \emph{et al.} in \cite{Hughes_2005aa}, the technique of \emph{isogeometric analysis} has sparked interest in various communities, see e.g.~\cite{Bontinck_2017ag,Cottrell_2009aa}. Modern design tools often represent the geometries via NURBS mappings \cite{Piegl_1997aa}, which, in the framework of isogeometric analysis, are utilised as mappings from reference elements onto an exact representation of the geometry. This enables the user to perform simulations without the introduction of geometric errors. As discrete function spaces, the spaces underlying the parametrisation of the geometry are used, so that forces obtained as results of numerical simulations can be applied to the geometry in the form of deformations. This, in theory, unites the design and simulation processes, since the geometry formats for simulation and design coincide, thus eliminating the need for frequent remeshing and preprocessing of the computational domain. However, in many applications, the geometries are merely given via a boundary representation, i.e., as two-dimensional surfaces in a three-dimensional ambient space.
Thus, for many numerical applications that aim to utilise the high orders of convergence and spectral properties of isogeometric analysis, a volumetric parametrisation of the computational domain has to be constructed by hand. For some problems, this issue can be overcome by the use of \emph{boundary element methods}. Indeed, many applications of isogeometric boundary element methods have been introduced in recent years \cite{Beer_2017aa,Doelz_2017aa,Doelz_2016aa,Marussig_2015aa,Simpson_2012aa,Simpson_2017aa}. These go beyond the scope of academic examples and show that isogeometric boundary element methods are ready for industrial application. This can be attributed to the application of so-called \emph{fast methods} \cite{Doelz_2016aa,Kurz_2007aa,Hackbusch_2002aa}, which counteract the cost of the dense matrices arising from boundary element formulations. The analysis of classical boundary element methods is well understood, see \cite{McLean_2000aa,Sauter_2010aa} for the scalar cases, and \cite{Buffa_2001aa,Buffa_2001ac,Buffa_2002aa,Buffa_2003aa} for the case of electromagnetic problems, and properties of different choices of discretisation are detailed in \cite{Weggler_2011aa,Zaglmayr_2006aa}, going back to the works of {\cite{Bossavit_1988aa,Bossavit_1998aa,Ciarlet_2002aa,Monk_1993aa,Monk_2003aa,Nedelec_1980aa}} and many more. Moreover, the utilisation of parametric mappings in the context of boundary element methods is not new. For different choices of basis functions, much of the theory has already been investigated, cf.~\cite{Harbrecht._2001aa,Harbrecht_2013aa}. However, this kind of analysis has not yet been done for B-splines as ansatz functions and for a full discretisation of the de Rham diagram, as needed for problems requiring divergence-conforming discretisations. With isogeometric boundary element methods in mind, one cannot simply rely on the established analysis of variational isogeometric methods \cite{Veiga_2014aa}.
Although first multipatch estimates have been investigated in \cite{Buffa_2015aa}, the \emph{spline complex} \cite{Buffa_2010aa}, i.e.~a conforming B-spline discretisation of the de Rham complex, has not been analysed for the multipatch setting. Moreover, error analysis in the trace space, i.e., the spaces on the boundary of a domain on which boundary element methods operate, cannot be trivially deduced from an error analysis of finite element methods, since the norms induced on the boundary are nonlocal norms, defined through dualities \cite{McLean_2000aa}. In this paper, we want to establish approximation estimates of optimal order for the trace spaces $H^{1/2}(\Gamma)$, $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$ and $H^{-1/2}(\Gamma),$ where $\Gamma=\partial\Omega$. These spaces and some required definitions will be introduced in Section \ref{sec::definitions}. We will use spline-techniques as in \cite{Buffa_2010aa}, going back to \cite{Schumaker_2007aa}, to first define a multipatch spline complex (Section \ref{subsec::complex}). Then, in Section \ref{subsec::approx}, we investigate its approximation properties w.r.t.~standard norms on multipatch boundaries. In Section \ref{sec::tracespaces}, we will follow the lines of established boundary element literature, e.g.~\cite{Buffa_2003ab,Buffa_2003aa,Sauter_2010aa,Steinbach_2008aa}, and show that isogeometric approximation on trace spaces shares the approximation properties of classical alternatives \cite{Weggler_2011aa,Zaglmayr_2006aa}. Finally, in Section \ref{sec::conclusion}, we will collect the results. \section{Trace Spaces for Boundary Element Methods} \label{sec::definitions} We will introduce the necessary definitions and discuss notation. For an in-depth introduction, we refer to the books by Adams \cite{Adams_1978aa} and McLean \cite{McLean_2000aa}. Let $\Omega\subseteq {\mathbb R}^3$ be some Lipschitz domain and let $Df$ denote the weak derivative of some function $f$.
As in \cite{Buffa_2003aa} or \cite{Nezza_2012aa}, we will follow convention and set $H^0(\Omega)= L^2(\Omega)$. For any integer $m\geq 1$, we define $H^m(\Omega)=\lbrace f\in L^2(\Omega)\colon D f \in H^{m-1}(\Omega)\rbrace$ equipped with the norm recursively defined by \begin{align*} \norm{f}_{H^0(\Omega)} & \coloneqq\norm{f}_{L^2(\Omega)}, & \norm{f}_{H^m(\Omega)}^2 & \coloneqq\norm{f}_{H^{m-1}(\Omega)}^2 + \sum_{\abs{\pmb\alpha} = m}\norm{D^{\pmb\alpha} f}_{L^2(\Omega)}^2, \end{align*} where ${\pmb\alpha}$ is a multiindex with $\abs{ {\pmb\alpha}} = \sum_{1\leq i\leq 3} \alpha_i =m$ and $D^{\pmb\alpha} f = {\partial^{\alpha_1}_{x_1}}{\partial^{\alpha_2}_{x_2}}{\partial^{\alpha_{3}}_{x_{3}}} f.$ For the special case $H^1(\Omega)$ we find $\norm{f}_{H^1(\Omega)}^2 = \norm{f}_{L^2(\Omega)}^2 + \norm{\pmb\grad f}_{\pmb L^2(\Omega)}^2$. By $\abs{\cdot}_{H^m(\Omega)}$ we will denote the $m$-th semi-norm, i.e., the term with $\norm{\cdot}_{H^m(\Omega)}^2 = \norm{\cdot}_{H^{m-1}(\Omega)}^2 + \abs{\cdot}_{H^m(\Omega)}^2$. Now let $s = m+\epsilon$, where $m\in {\mathbb N}$ and $\epsilon \in (0,1).$ We define the fractional Sobolev space $H^s(\Omega)$ as the functions of $L^2(\Omega)$ for which the norm \begin{align*} \norm{f}_{H^s(\Omega)}^2 \coloneqq \norm{f}_{H^m(\Omega)}^2 + {\sum_{\abs{\pmb\alpha} = m}\int_\Omega\int_\Omega \frac{\abs{D^{\pmb\alpha} f(\pmb x)-D^{\pmb\alpha} f(\pmb y)}^2}{\abs{\pmb x-\pmb y}^{2\epsilon+3}}\:\operatorname{d} \pmb x\:\operatorname{d} \pmb y} \end{align*} is finite. We equip $H^s(\Omega)$ with the corresponding norm. Vectorial Sobolev spaces can be defined largely analogously and will be denoted by bold letters, for example $\pmb H^s(\Omega)$. For any first-order differential operator $\operatorname d$, we set $$H^s(\operatorname{d},\Omega)\coloneqq \lbrace f \in H^s(\Omega) \colon \operatorname d f \in H^s(\Omega)\rbrace,$$ equipped with the corresponding graph norm.
Of specific interest are spaces of types \begin{align*} \pmb H^s(\div,\Omega)&\coloneqq \lbrace\pmb f\in\pmb H^s(\Omega)\colon \div(\pmb f)\in H^s(\Omega) \rbrace,\\ \pmb H^s(\pmb\curl,\Omega)&\coloneqq \lbrace\pmb f\in\pmb H^s(\Omega)\colon \pmb\curl(\pmb f)\in \pmb H^s(\Omega) \rbrace, \end{align*} and spaces of similar structure w.r.t.~the surface differential operators $\pmb\grad_\Gamma,$ $\div_\Gamma,$ ${\bb\curl}_\Gamma$ and $\curl_\Gamma$, cf.~\cite{Buffa_2003aa,Peterson_1995aa}. \subsection{Trace Space Setting} We are interested in function spaces on compact boundaries of Lipschitz domains $\Gamma = \partial \Omega$. As commonly done, we can now define the corresponding spaces on manifolds $\Gamma$ via charts and partitions of unity, cf.~\cite{McLean_2000aa}. \begin{definition}[Trace Operators, \cite{Buffa_2003aa,Sauter_2010aa}] Let $u\colon\Omega\to \mathbb C$ and $\pmb u \colon \Omega \to \mathbb C^3$. Following the notation of \cite{Buffa_2003aa}, we define the \emph{trace operators} for smooth $u$ and $\pmb u$ as \begin{align*} \gamma_0 (u)(\pmb x_0) & \coloneqq \lim_{\pmb x\to \pmb x_0} u(\pmb x), & \pmb \gamma_0 (\pmb u)(\pmb x_0) & \coloneqq \lim_{\pmb x\to \pmb x_0}\pmb u(\pmb x) - \pmb{ n}_{\pmb x_0} (\pmb u(\pmb x)\cdot \pmb{n}_{\pmb x_0}), &\\ \pmb \gamma_{ t} (\pmb u)(\pmb x_0) & \coloneqq \lim_{\pmb x\to \pmb x_0}\pmb u(\pmb x) \times \pmb{n}_{\pmb x_0}, & \gamma_{\pmb n} (\pmb u)(\pmb x_0) & \coloneqq \lim_{\pmb x\to \pmb x_0}\pmb u(\pmb x) \cdot \pmb{n}_{\pmb x_0}, \end{align*} for $\pmb x_0\in\Gamma$ and $\pmb x\in \Omega$, where $\pmb{n}_{\pmb x_0}$ denotes the {outward} normal vector of $\Omega$ at $\pmb x_0\in \Gamma$. \end{definition} By density arguments, one extends these operators to a weak setting, see \cite{McLean_2000aa}. One can visualise the trace operators acting on vector fields as in Figure \ref{Fig::traces}. 
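As a simple sanity check, consider (a flat illustrative example added here, not taken from the cited references) a point $\pmb x_0$ at which the boundary is locally flat with $\pmb n_{\pmb x_0} = (0,0,1)^\top$. Writing $\pmb u = (u_1,u_2,u_3)^\top$ for the limiting value of the field at $\pmb x_0$, the definitions yield
\begin{align*}
\pmb\gamma_0(\pmb u) & = (u_1,u_2,0)^\top, & \pmb\gamma_t(\pmb u) & = (u_2,-u_1,0)^\top, & \gamma_{\pmb n}(\pmb u) & = u_3,
\end{align*}
so $\pmb\gamma_t(\pmb u) = \pmb\gamma_0(\pmb u)\times\pmb n_{\pmb x_0}$, i.e., the rotated tangential trace is the tangential trace rotated by $90$ degrees within the tangent plane, cf.~Figure \ref{Fig::traces}.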
\begin{figure}\centering \begin{subfigure}{.3\textwidth } \input{pics/original.tex} \caption{Original vector} \end{subfigure}\hspace{2cm} \begin{subfigure}{.3\textwidth } \input{pics/trace_tangential.tex} \caption{Image of $\pmb\gamma_0$} \end{subfigure}\\[.4cm] \begin{subfigure}{.3\textwidth } \input{pics/trace_rotatedtangential} \caption{Image of $\pmb \gamma_t$} \end{subfigure}\hspace{2cm} \begin{subfigure}{.3\textwidth } \input{pics/trace_normal.tex} \caption{Image of $\gamma_{\pmb n}$} \end{subfigure} \caption{Visualisation of the trace operators.}\label{Fig::traces} \end{figure} Assuming compactness of $\Gamma$, we define for all $s>0$ the space $H^{-s}(\Gamma)$ as the dual space of $H^s(\Gamma).$ We define the trace space $\pmb H_\times^{s}(\Gamma) \coloneqq \pmb \gamma_t (\pmb H^{s+ 1/2}(\Omega))$, for $0<s<1$. The space $\pmb H^{-s}_\times(\Gamma)$ denotes the corresponding dual space w.r.t.~the duality pairing $\langle\cdot \times \pmb n,\cdot\rangle_{L^2(\Gamma)}$. Note that $\pmb H_\times^s(\Gamma)$ might not coincide with $\pmb H^s(\Gamma)$ understood in a componentwise sense, since this identity holds only for smooth geometries, i.e., $C^\infty$-manifolds, see~\cite{Buffa_2002aa}. Defining $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)\coloneqq\pmb \gamma_t(\pmb H^0(\pmb \curl,\Omega))$ together with its rotated counterpart $\pmb H_\times^{-1/2}(\curl_\Gamma,\Gamma)\coloneqq \pmb\gamma_0(\pmb H^0(\pmb\curl,\Omega))$, we recall the following mapping properties of the trace operators, as presented in \cite[Thm.~3.37]{McLean_2000aa} and \cite[Thm.~1, Thm.~3]{Buffa_2003aa}. \begin{theorem}[Mapping Properties of the Trace Operators] \label{thm::htimestrace} For the trace operators, the following properties hold. \begin{enumerate} \item The trace operator $\gamma_0\colon H^s(\Omega)\to H^{s-1/2}(\Gamma)$ is linear, continuous and surjective, with a continuous right inverse for $0<s< 3/2$.
\item The operator $\pmb \gamma_0\colon \pmb H^0(\pmb \curl,\Omega)\to \pmb{H}_\times^{-1/2}(\curl_\Gamma,\Gamma)$ is linear, continuous, surjective, and possesses a continuous right inverse. \item The operator $\pmb \gamma_t\colon \pmb H^0(\pmb \curl,\Omega)\to \pmb{H}_\times^{-1/2}(\div_\Gamma,\Gamma)$ is linear, continuous, surjective, and possesses a continuous right inverse. \item The operator $\gamma_{\pmb n}\colon \pmb H^0(\div,\Omega)\to H^{-1/2}(\Gamma)$ is linear, continuous and surjective. \end{enumerate} Moreover, for $0\leq s < 1,$ there exists a continuous extension of the tangential trace mapping $\pmb \gamma_t\colon \pmb H^s(\pmb \curl,\Omega)\to \pmb H_\times^{s-1/2}(\div_\Gamma,\Gamma)$. \end{theorem} In the following, we consider a de Rham complex as in Figure \ref{fig::classicaldeRham}, \begin{figure} $$\begin{tikzcd}[row sep = 2.2em,column sep = 1.4cm] H^1(\Omega)\ar{dd}[description]{\gamma_0}\ar{r}[description]{\pmb \grad}& \pmb H^0(\pmb \curl,\Omega)\ar{r}[description]{\pmb \curl}\ar{d}[description]{\pmb \gamma_0} \arrow[ddd,bend right=60,"{\pmb{\gamma}_t}" description, near start]& \pmb H^0(\div,\Omega)\ar{dd}[description]{\gamma_{\pmb n}}\\ &\pmb H_\times^{{-1/2}}(\curl_\Gamma,\Gamma)\ar{dr}[description]{\curl_\Gamma} \arrow{dd}[description]{\cdot \times \pmb n} &\\ H^{{1/2}}(\Gamma)\ar{dr}[description]{{{\bb\curl}_\Gamma}} \ar{ur}[description]{\pmb \grad_\Gamma} & & H^{{-1/2}}(\Gamma)\\ &\pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)\ar{ur}[description]{\div_\Gamma}& \end{tikzcd} $$ \caption{Two-dimensional de Rham complex on the boundary, induced by application of the trace operators to the three-dimensional complex in the domain.} \label{fig::classicaldeRham} \end{figure} where the trace operators map the three-dimensional spaces onto the boundary. By definition of the involved trace operators and surface differential operators, the diagram commutes.
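For sufficiently smooth fields, the commutativity encoded in Figure \ref{fig::classicaldeRham} can be spelled out as the identities
\begin{align*}
\pmb\gamma_0(\pmb\grad u) & = \pmb\grad_\Gamma(\gamma_0 u), & \pmb\gamma_t(\pmb\grad u) & = {\bb\curl}_\Gamma(\gamma_0 u),\\
\div_\Gamma(\pmb\gamma_t \pmb u) & = \gamma_{\pmb n}(\pmb\curl \pmb u), & \curl_\Gamma(\pmb\gamma_0 \pmb u) & = \gamma_{\pmb n}(\pmb\curl \pmb u),
\end{align*}
which we state here for orientation only, up to the sign conventions for the rotated surface operators, cf.~\cite{Buffa_2003aa}.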
\begin{remark} Note that the diagram in Figure \ref{fig::classicaldeRham} is an immensely powerful tool, showcasing the relation between the three-dimensional and the two-dimensional de Rham complex, and the relation of the trace spaces utilised in boundary element methods with their counterparts in the finite element context. It can even be used to define the notions introduced previously: Given the trace operators $\gamma_0$, $\pmb \gamma_0$ and $\gamma_{\pmb n}$ as well as the three-dimensional de Rham sequence, we can {define} the trace operator $\pmb \gamma_t$ by rotation around the normal and the trace spaces via the surjectivity assertions of Theorem \ref{thm::htimestrace}. Moreover, one can define the surface differential operators as the operators making the diagram commute. \end{remark} As a first step towards an analysis w.r.t.~spaces of fractional regularity, we review a classical interpolation argument. \begin{lemma}[Interpolation Lemma]\label{interpolationlemma} Let $0\leq s_1\leq s_2$ and $0\leq t_1\leq t_2$ be integers and let $\Gamma$ be a compact manifold, smooth enough for the space ${H^{\max(s_2,t_2)}(\Gamma)}$ to be defined. For $\sigma\in[0,1]$, if $T \colon H^{s_j}(\Gamma)\to H^{t_j}(\Gamma)$ is a bounded linear operator for both $j=1,2$, with \begin{align*} \norm{T u}_{H^{t_j}(\Gamma)}&\leq C_j \norm{u}_{H^{s_j}(\Gamma)},\qquad j\in \lbrace 1,2\rbrace, \intertext{for two constants $C_1$ and $C_2$, then we find} \norm{Tu}_{H^{(1-\sigma)\cdot t_1 + \sigma\cdot t_2}(\Gamma)}&\leq C_1^{1-\sigma}C_2^\sigma\norm{u}_{H^{(1-\sigma)\cdot s_1 + \sigma\cdot s_2}(\Gamma)}.
\end{align*} \end{lemma} \begin{proof} This follows by the combination of \cite[Theorem 4.1.2]{Bergh_1976aa} and \cite[Definition 2.4.1]{Bergh_1976aa}.\qed \end{proof} \subsection{The Spline Complex in the Trace Space Setting}\label{subsec::complex} We briefly review the basic notions of isogeometric methods and refer to \cite{Cottrell_2009aa,Hughes_2005aa} for an introduction to isogeometric analysis and to \cite{Piegl_1997aa,Schumaker_2007aa} for more details on NURBS and spline theory. \nocite{Lee_1996aa} \begin{definition}[B-Spline Basis {\cite[Sec.~2]{Veiga_2014aa}}] Let ${\mathbb K}$ be either ${\mathbb R}$ or $\mathbb C$ and $p,k$ be integers with $0\leq p< k$. We define a \emph{$p$-open knot vector} $\Xi$ as a set of knots $\xi_i$ of the form \begin{align*} \Xi = \big\lbrace\underbrace{\xi_0 = \cdots =\xi_{p}}_{=0}< \xi_{p+1}\leq \cdots\leq \xi_{k-1} < \underbrace{\xi_{k}=\cdots =\xi_{k+p}}_{=1}\big\rbrace\in[0,1]^{k+p+1}. \end{align*} We will assume the multiplicity of interior knots to be at most $p$. We can then define the basis functions $ \lbrace b_i^p \rbrace_{0\leq i< k}$ on $[0,1]$ for $p=0$ as \begin{align*} b_i^0(x) & =\begin{cases} 1, & \text{if }\xi_i\leq x<\xi_{i+1}, \\ 0, & \text{otherwise,} \end{cases} \intertext{ and for $p>0$ via the recursive relationship} b_i^p(x) & = \frac{x-\xi_i}{\xi_{i+p}-\xi_i}b_i^{p-1}(x) +\frac{\xi_{i+p+1}-x}{\xi_{i+p+1}-\xi_{i+1}}b_{i+1}^{p-1}(x), \end{align*} for all $0\leq i<k$, with the convention that fractions with vanishing denominator are set to zero. Given the basis as above, the space $S_p(\Xi)$ is given as $\operatorname{span}(\lbrace b_i^p\rbrace_{0\leq i <k})$. The integer $k$ hereby denotes the dimension of the spline space. \end{definition} \begin{definition}[{\cite[Ass.~2.1]{Veiga_2014aa}}] For a $p$-open knot vector $\Xi,$ let $h_i\coloneqq \xi_{i+1}-\xi_{i}.$ We define the \emph{mesh size} $h$ to be the maximal distance $h\coloneqq \max_{p\leq i < k}h_i$ between neighbouring knots.
We call a knot vector \emph{locally quasi-uniform} when there exists a constant $\theta\geq 1$ such that, for all pairs of neighbouring non-empty elements $[\xi_{i_1},\xi_{i_1+1}]$ and $[\xi_{i_2},\xi_{i_2+1}]$, the ratio $h_{i_1}\cdot h_{i_2}^{-1}$ satisfies $\theta^{-1}\leq h_{i_1}\cdot h_{i_2}^{-1} \leq \theta.$ \end{definition} Let $\ell=2,3$, and let the knot vectors $\Xi_1,\dots,\Xi_{\ell}$ be given. B-spline functions on the domain $[0,1]^{\ell}$ are constructed through simple tensor product relationships for $ p_{i_1,\dots i_\ell} \in {\mathbb K}$ via \begin{align} f(x_1,\dots ,x_{\ell})\coloneqq\sum_{0\leq i_1< k_1}\dots\sum_{0\leq i_\ell< k_\ell} p_{i_1,\dots,i_{\ell}} \cdot b_{i_1}^{p_1}(x_1)\cdots b_{i_{\ell}}^{p_{\ell}}(x_{\ell}),\label{def::tpspline} \end{align} which allows \emph{tensor product B-spline spaces}, denoted by $S_{p_1,\dots,p_\ell}(\Xi_1,\dots,\Xi_\ell)$, to be defined. We will refer to non-empty intervals of the form $[\xi_i,\xi_{i+1}]$, $0\leq i<k$, and in the tensor product sense, non-empty sets of the form $[\xi_{i_1},\xi_{i_1+1}]\times\dots\times[\xi_{i_\ell},\xi_{i_\ell+1}]$ as \emph{elements} w.r.t.~the knot vectors. \begin{definition}[Support Extension, {\cite[Sec.~2.A]{Buffa_2015aa}}] Let $S_p({\Xi})$ be a $k$-dimensional spline space on $[0,1]$, and let $Q$ be an element of the knot vector $\Xi$. We define the \emph{support extension $\tilde Q$} of $Q$ by \begin{align*} \tilde Q \coloneqq \textstyle\bigcup\left\lbrace \operatorname{supp}(b_i^p)\colon 0\leq i<k\text{ and } b_i^p(x)\neq 0\text{ for some }x\in Q\right\rbrace. \end{align*} The same concept is generalised by tensor product construction to spline spaces on $[0,1]^\ell$. \end{definition} \begin{assumption}[Quasi-Uniformity of Knot Vectors]\label{ass::knotvecs} All knot vectors will be assumed to be $p$-open and locally quasi-uniform, such that the usual spline theory is applicable \cite{Veiga_2014aa,Piegl_1997aa,Schumaker_2007aa}.
\end{assumption} Throughout this paper, we will reserve the letter $h$ for the maximal distance between two neighbouring knots and $p$ for the minimal polynomial degree. Moreover, we let $\tilde h$ denote the maximal size of a support extension. For inequalities we will use the notation $$M\lesssim T,$$ if $M \leq C \cdot T$ holds for some constant $C>0$ independent of $h$. If both $M\lesssim T$ and $T\lesssim M$ hold, we will write $M\simeq T$. \begin{definition}[Patch] \label{def:patch} We define a \emph{patch} $\Gamma$ to be the image of $[0,1]^2$ under a diffeomorphism $\pmb F\colon [0,1]^2\to\Gamma\subseteq\mathbb R^3$. Let $\Omega$ be a Lipschitz domain. We define a \emph{multipatch geometry} to be a compact, orientable two-dimensional manifold $\Gamma=\partial \Omega=\bigcup_{0\leq j <N} \Gamma_j$ composed of a family of patches $\lbrace \Gamma_j\rbrace_{0\leq j<N},$ $N\in \mathbb N$, given by a family of diffeomorphisms $$\lbrace \pmb F_j \colon [0,1]^2\hookrightarrow \Gamma_j\rbrace_{0\leq j<N},$$ called \emph{parametrisation}. We require the images of $(0,1)^2$ under the $\pmb F_j$ to be pairwise disjoint and that for any \emph{patch interface} $D$ of the form $D=\partial\Gamma_{j_0}\cap \partial\Gamma_{j_1}\neq \emptyset$, the parametrisations $\pmb F_{j_0}$ and $\pmb F_{j_1}$ coincide on $D$. \end{definition} Note that this definition excludes non-watertight geometries and geometries with T-junctions, since mappings at interfaces must coincide, cf.~Figure \ref{Fig::Multipatch}.
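The Cox--de Boor recursion defining the basis functions $b_i^p$ above translates directly into code. The following minimal Python sketch (an illustrative implementation added here, not part of the cited spline literature; the name \texttt{bspline\_basis} is ours) evaluates $b_i^p(x)$ on a $p$-open knot vector, with the zero-denominator convention for repeated knots made explicit, and checks the partition-of-unity property.

```python
def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the B-spline basis function b_i^p.

    Fractions with vanishing denominator are set to zero; this handles
    the repeated boundary knots of a p-open knot vector.
    """
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    value = 0.0
    left_den = knots[i + p] - knots[i]
    if left_den > 0.0:
        value += (x - knots[i]) / left_den * bspline_basis(i, p - 1, knots, x)
    right_den = knots[i + p + 1] - knots[i + 1]
    if right_den > 0.0:
        value += (knots[i + p + 1] - x) / right_den * bspline_basis(i + 1, p - 1, knots, x)
    return value


# p-open knot vector for degree p = 2 with k = 5 basis functions on [0, 1]
p = 2
knots = [0.0, 0.0, 0.0, 0.4, 0.7, 1.0, 1.0, 1.0]
k = len(knots) - p - 1  # dimension of the spline space S_p(Xi)

# partition of unity: sum_i b_i^p(x) = 1 on [0, 1)
for x in (0.0, 0.25, 0.5, 0.85):
    assert abs(sum(bspline_basis(i, p, knots, x) for i in range(k)) - 1.0) < 1e-12
```

The naive recursion is exponential in $p$ and serves only to illustrate the definition; production code would use the de Boor algorithm or vectorised evaluation as provided by standard spline libraries.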
\begin{figure} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.3\textwidth]{pics/multipatch_ctrl_before.pdf}}; \node at (5,0) {\includegraphics[width=.3\textwidth]{pics/multipatch_ctrl.pdf}}; \node (A) at (2,0) {}; \node (B) at (3,0) {}; \draw [->, line join=round, decorate, decoration={ zigzag, segment length=4, amplitude=.9,post=lineto, post length=2pt }] (A) -- (B); \end{tikzpicture} \caption{Mappings on interfaces must coincide.} \label{Fig::Multipatch} \end{figure} In the spirit of isogeometric analysis, these mappings will usually be given by NURBS mappings, i.e.,~by \begin{align*} \pmb F_j(x,y)\coloneqq \sum_{0\leq j_1<k_1}\sum_{0\leq j_2<k_2}\frac{\pmb c_{j_1,j_2} b_{j_1}^{p_1}(x) b_{j_2}^{p_2}(y) w_{j_1,j_2}}{ \sum_{i_1=0}^{k_1-1}\sum_{i_2=0}^{k_2-1} b_{i_1}^{p_1}(x) b_{i_2}^{p_2}(y) w_{i_1,i_2}}, \end{align*} for control points $\pmb c_{j_1,j_2}\in {\mathbb R}^3$ and weights $w_{i_1,i_2}>0.$ Following the isogeometric paradigm, degrees and knot vectors of the discrete spaces to be mapped from the reference domain are usually chosen in accordance with the parametrisation \cite{Hughes_2005aa}. However, the description of the geometry is, in principle, independent of the analysis that will follow. From now on we reserve the letter $N$ for the number of patches and the letter $j$ to refer to a generic patch. As NURBS with interior knot repetition are not arbitrarily smooth, one would usually resort to the utilisation of bent Sobolev spaces \cite{Veiga_2014aa}. However, to avoid technical details, we introduce the following assumption.
\begin{assumption}[Smoothness of Geometry Mappings]\label{ass::geometry} We assume any multipatch geometry to be given by an invertible, non-singular parametrisation $\lbrace \pmb F_j\rbrace_{0\leq j<N}$ with $\pmb F_j\in C^\infty([0,1]^2).$ \end{assumption} {We remark that Assumption~\ref{ass::geometry} implies that each patch $\Gamma_j$ has Lipschitz boundary.} We {also} stress that, limited by the smoothness of $\lbrace \pmb F_j\rbrace_{0\leq j< N}$, all results are still provable for non-smooth but invertible NURBS parametrisations, although this would require an analysis via bent Sobolev spaces as in \cite{Veiga_2014aa}. Assumption \ref{ass::geometry} is made merely for convenience. Moreover, it is possible to obtain parametric mappings satisfying Assumption \ref{ass::geometry} either through extraction of rational Bézier patches, which can be obtained as subpatches of a given NURBS parametrisation, or, more generally, through an algorithmic approach as in \cite{Harbrecht_2010aa}. \begin{definition}[Spaces of Patchwise Regularity] Let $\Gamma = \bigcup_{0\leq j< N} \Gamma_j$ be a multipatch geometry. We define the norm \begin{align*} \norm{f}_{H_{{\mathrm{pw}}}^s(\Gamma)}^2 \coloneqq \sum_{0\leq j<N} \norm{f|_{\Gamma_j}}_{H^s(\Gamma_j)}^2 \end{align*} for all $f\in L^2(\Gamma)$ for which the right-hand side is well defined, and define the corresponding space equipped with this norm as \begin{align*} H_{{\mathrm{pw}}}^s(\Gamma)\coloneqq \lbrace f\in L^2(\Gamma)\colon \norm{f}_{H_{{\mathrm{pw}}}^s(\Gamma)} < \infty \rbrace. \end{align*} In complete analogy, we extend the definition to vector-valued Sobolev spaces (and spaces with graph norms), as usual, denoted by bold letters $ {\pmb H}_{{\mathrm{pw}}}^s(\Gamma)$.
\end{definition} \begin{definition}[Single Patch Spline Complex \cite{Buffa_2011aa}] \label{def:spline-complex} Let $\pmb p=(p_1,p_2)$ be a pair of positive integers and let $\Xi_1,\Xi_2$ be $p_1$- and $p_2$-open knot vectors on $[0,1]$, respectively. Let $\Xi_1'$ and $\Xi_2'$ denote their truncations, i.e., the knot vectors without their first and last knots. We define the \emph{spline complex} on $[0,1]^2$ as the spaces \begin{align*} \S^0_{\pmb p,\pmb\Xi}([0,1]^2)\coloneqq {}&{} S_{p_1,p_2}(\Xi_1,\Xi_2),\\ \pmb \S^1_{\pmb p,\pmb\Xi}([0,1]^2)\coloneqq {}&{} S_{p_1,p_2-1}(\Xi_1,\Xi_2') \times S_{p_1-1,p_2}(\Xi_1',\Xi_2),\\ \S^2_{\pmb p,\pmb\Xi}([0,1]^2)\coloneqq {}&{} S_{p_1-1,p_2-1}(\Xi_1',\Xi_2'). \end{align*} \end{definition} In the reference domain, the spline complex can be visualised as in Figure \ref{Fig::Complex}. \begin{figure} \input{pics/deRham} \caption{Visualisation of the single patch spline complex for $\pmb p= (2,2)$. {The blue functions are the univariate B-splines related to each coordinate direction, whose tensor-product gives the bases of the spline spaces in Definition~\ref{def:spline-complex}.} {The two-dimensional ${\bb\curl}$ operator maps the smooth space to a vector valued space where the regularity in each vector component is lowered w.r.t.~one spatial component. Analogously, the divergence operator maps to the space of globally lowered regularity.}}\label{Fig::Complex} \end{figure} Assume $\Gamma$ to be a single patch domain given via a geometry mapping $\pmb F$ in accordance with Assumption \ref{ass::geometry}.
To define the spaces in the physical domain, we resort to an application of the pull-backs, which, as a study of \cite{Peterson_2006aa} reveals, are given by \begin{align*} \iota_0(\pmb F)(f_0)(\pmb x)&\coloneqq \big(f_0\circ\pmb F\big)(\pmb x),&&\pmb x\in [0,1]^2,\\ \iota_1(\pmb F)(\pmb f_1)(\pmb x)&\coloneqq\big( \kappa(\pmb x)\cdot (d\pmb F)^{-1} (\pmb f_1\circ \pmb F)\big)(\pmb x),&&\pmb x\in [0,1]^2,\\ \iota_2(\pmb F)(f_2)(\pmb x)&\coloneqq \big(\kappa(\pmb x)\cdot (f_2\circ \pmb F)\big)(\pmb x),&&\pmb x\in [0,1]^2, \end{align*} where the term $\kappa$ for $\pmb x\in[0,1]^2$ is given by the so-called \emph{surface measure} \begin{align} \kappa(\pmb x)\coloneqq \norm{\partial_{x_1}\pmb F( {\pmb x})\times \partial_{x_2}\pmb F( {\pmb x})}. \end{align} Note that if one were to compute the pull-backs $\iota_i(\pmb F)$ for $i=0,1,2$ as above, at first glance one would encounter a dimensionality problem: the Jacobian $d\pmb F$ of $\pmb F$ is of size $3\times 2$ and thus not readily invertible, yet its inverse $(d\pmb F)^{-1}$ appears in the definition of $\iota_1(\pmb F)$. The study of e.g.~\cite{Bossavit_1988aa,Dhaeseleer_1991aa,Kurz_2007aa} makes it clear that, due to Assumption \ref{ass::geometry}, the required inverse mappings for the case of embedding a two-dimensional manifold into three-dimensional ambient space exist. They need to be understood as mappings from $[0,1]^2$ onto the tangent space of $\Gamma$. The mapping is merely a smooth one-to-one mapping from one two-dimensional space onto another, and invertibility must be understood in this sense. However, for implementation this matters little, since both ansatz- and test functions will be defined on $[0,1]^2$.
Therefore one merely needs to compute the corresponding push-forwards, readily available through the equalities \begin{align*} (\iota_0(\pmb F))^{-1}(f_0)&= \big(f_0\circ \pmb F^{-1}\big)(\pmb x),&&\pmb x\in \Gamma,\\ (\iota_1(\pmb F))^{-1}(\pmb f_1)&= \big({\kappa(\pmb x)^{-1}} \cdot (d\pmb F) \pmb f_1\circ \pmb F^{-1}\big)(\pmb x),&&\pmb x\in \Gamma,\\ (\iota_2(\pmb F))^{-1}(f_2)&= \big({\kappa(\pmb x)^{-1}} \cdot f_2\circ \pmb F^{-1}\big)(\pmb x),&&\pmb x\in \Gamma, \end{align*} due to Assumption \ref{ass::geometry}. The inverse of $\pmb F$ need not be computed, since pull-backs and push-forwards cancel out by construction. \begin{remark}\label{rem::stillconforming} A study of \cite[{Chap.~5}]{Peterson_1995aa} makes it clear that these mappings are still conforming for $\Gamma_j\subseteq\mathbb R^3,$ i.e.,~that the diagram $$ \begin{tikzcd}[row sep = 3em,column sep = 1.3cm] H^1\big((0,1)^2\big)\ar{d}[description]{\iota_0(\pmb F_j)^{-1}}\ar{r}[description]{{\bb\curl}}& \pmb H\big(\div,(0,1)^2\big)\ar{r}[description]{\div}\ar{d}[description]{\iota_1(\pmb F_j)^{-1}}& L^2\big((0,1)^2\big)\ar{d}[description]{\iota_2(\pmb F_j)^{-1}}\\ H^{1}(\Gamma_j)\ar{r}[description]{{{\bb\curl}_\Gamma}} & \pmb H(\div_\Gamma,\Gamma_j)\ar{r}[description]{\div_\Gamma} & L^{{2}}(\Gamma_j) \end{tikzcd}$$ commutes. Because of this, we can identify the divergence on the reference domain with the divergence on the physical domain, up to a bounded factor induced by the corresponding pull-back, due to Assumption \ref{ass::geometry}. We will, later on, utilise this explicitly to apply estimates of the kind \begin{align*} \norm{\div_\Gamma f}_{L^2(\Gamma_j)} &\simeq \norm{\div( f\circ \pmb F)}_{L^2([0,1]^2)}, \end{align*} see also \cite{Monk_2003aa,Peterson_2006aa} for a further review of these concepts.
\end{remark} Now we can define corresponding discretisations on the physical domain $\Gamma_j$ by \begin{align} \begin{aligned} \S^0_{\pmb p,\pmb\Xi}(\Gamma_j)\coloneqq {}&{} \left\lbrace f \colon \iota_0(\pmb F_j)(f) \in \S^0_{\pmb p,\pmb\Xi}([0,1]^2)\right\rbrace,\\ \pmb \S^1_{\pmb p,\pmb\Xi}(\Gamma_j)\coloneqq {}&{} \left\lbrace \pmb f \colon \iota_1(\pmb F_j)(\pmb f) \in \pmb \S_{\pmb p,\pmb\Xi}^1([0,1]^2)\right\rbrace,\\ \S^2_{\pmb p,\pmb\Xi}(\Gamma_j)\coloneqq {}&{} \left\lbrace f \colon \iota_2(\pmb F_j)(f) \in \S^2_{\pmb p,\pmb\Xi}([0,1]^2)\right\rbrace. \end{aligned}\label{def::singlepatchphysical} \end{align} Proceeding as in \cite{Veiga_2014aa}, the spline complex for spaces on the boundary is defined as follows. \begin{definition}[Multipatch Spline Complex on Trace Spaces] Let $\Gamma = \bigcup_{0\leq j< N} \Gamma_j$ be a multipatch boundary satisfying Assumption \ref{ass::geometry}. Moreover, let $\pmb \Xi \coloneqq (\pmb \Xi_j)_{0\leq j<N}$ be a family of pairs of knot vectors in accordance with Assumption \ref{ass::knotvecs} and $\pmb p=(\pmb p_j)_{0\leq j<N}$ a family of pairs of integers, corresponding to polynomial degrees. Then we define the \emph{spline complex on the boundary} $\Gamma$ via \begin{align*} \S^0_{\pmb p,\pmb\Xi}(\Gamma)\coloneqq {}&{} \left\lbrace f\in H^{1/2}(\Gamma)\colon f|_{\Gamma_j} \in \S^0_{\pmb p_j,\pmb\Xi_j}(\Gamma_j)\text{ for all }0 \leq j<N\right\rbrace,\\ \pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)\coloneqq {}&{} \left\lbrace \pmb f\in \pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)\colon \pmb f|_{\Gamma_j} \in \pmb \S_{\pmb p_j,\pmb\Xi_j}^1(\Gamma_j)\text{ for all }0\leq j< N\right\rbrace,\\ \S^2_{\pmb p,\pmb\Xi}(\Gamma)\coloneqq {}&{} \left\lbrace f\in H^{-1/2}(\Gamma)\colon f|_{\Gamma_j} \in \S^2_{\pmb p_j,\pmb\Xi_j}(\Gamma_j)\text{ for all }0\leq j< N\right\rbrace. \end{align*} We assume $\pmb p$ and $\pmb \Xi$ to be such that they coincide on every patch interface.
\end{definition} \begin{remark}\label{rem::defviatrace} Note that a different definition of the considered spline spaces could be achieved by application of the trace operators to the volumetric parametrisation, provided their existence, see Theorem \ref{thm::htimestrace}. However, the construction above seems more suitable for the analysis of approximation properties. \end{remark} \section{Approximation Properties of Conforming Spline Spaces} \label{subsec::approx} We will now investigate approximation properties of the spaces defined in the previous section. This will be done through the introduction of quasi-interpolation operators, i.e., projections defined in terms of a dual basis. For one-dimensional spline spaces Schumaker \cite[Sec.~4.2]{Schumaker_2007aa} introduced quasi-interpolants, defined via dual functionals $$\lambda_{i,p}\colon L^2([\xi_i,\xi_{i+p+1}])\to \mathbb K,$$ such that \begin{align} \Pi_{p,\Xi}\colon f\mapsto\sum_{0\leq i < k} \lambda_{i,p}(f) b_{i}^p. \end{align} Note that, by definition of the $\lambda_{i,p}$, they merely require $f$ to be square integrable. Moreover, the operators depend on the specific knot vectors, which we suppress for notational convenience. As shown in \cite{Veiga_2014aa}, a tensor product construction utilising the above projection yields interpolants $\Pi_{\pmb p,\pmb \Xi}^0,$ $\pmb{\Pi}_{\pmb p,\pmb \Xi}^1,$ $\Pi_{\pmb p,\pmb \Xi}^2$ mapping onto the spaces $\S^0_{\pmb p,\pmb \Xi}([0,1]^2),$ $\pmb \S_{\pmb p,\pmb\Xi}^1([0,1]^2)$, and $\S^2_{\pmb p,\pmb \Xi}([0,1]^2),$ as explained in \cite[p.~169ff]{Veiga_2014aa}, where error estimates and $L^2$-stability for B-spline approximations have also been provided. A crucial property of the construction is as follows.
\begin{lemma}[Commuting Interpolation Operators,~{\cite[Prop.~5.8]{Veiga_2014aa}}]\label{lem::theycommute} The diagram $$ \begin{tikzcd} H^{1}\big((0,1)^2\big)\ar{r}{{\bb\curl}}\ar{d}{\Pi_{\pmb p,\pmb \Xi}^0}&\pmb H\big(\div,(0,1)^2\big)\ar{r}{\div}\ar{d}{\pmb\Pi_{\pmb p,\pmb \Xi}^{1}}&L^2\big((0,1)^2\big)\ar{d}{\Pi_{\pmb p,\pmb \Xi}^2}\\ \S_{\pmb p,\pmb \Xi}^0([0,1]^2)\ar{r}{{\bb\curl} }&\pmb\S_{\pmb p,\pmb \Xi}^{1}([0,1]^2)\ar{r}{\div}& \S_{\pmb p,\pmb \Xi}^2([0,1]^2) \end{tikzcd} $$ commutes. \end{lemma} \begin{remark} For the two-dimensional setting \cite{Veiga_2014aa} introduces two spaces $\pmb \S_{\pmb p,\pmb\Xi}^1$ and $\pmb \S_{\pmb p,\pmb\Xi}^{1*}$, which correspond to curl conforming and divergence conforming spaces, respectively. Since we are interested mostly in spaces of the $\div$-type and the spaces differ only by a rotation, we will not notationally distinguish between the two types of spline spaces. However, it should be noted that our spaces of type $\pmb \S_{\pmb p,\pmb\Xi}^1$ correspond to those of type $\pmb \S_{\pmb p,\pmb\Xi}^{1*}$ in the cited literature. \end{remark} By application of the pull-backs used to define the spline spaces, one can immediately generalise the projectors and all results to the case of functions on the physical domains. Corollary 5.12 of \cite{Veiga_2014aa} reveals that for the case of a single patch $\Gamma_j$ the following holds. \begin{corollary}[Single Patch Approximation Estimate,~{\cite[Cor.~5.12]{Veiga_2014aa}}]\label{cor::Approxcor} Let $\Gamma_j$ be a single patch domain and let Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} hold.
Then we find that \begin{align*} \norm{u -\Pi^0_{\pmb p,\pmb \Xi}u}_{H^r(\Gamma_j)} {} & {}\lesssim h^{s-r}\norm{u}_{H^s(\Gamma_j)},\qquad 0\leq r \leq s\leq p+1, \\ \norm{\pmb u -\pmb \Pi_{\pmb p,\pmb \Xi}^1\pmb u}_{\pmb H^r(\Gamma_j)} {} & {}\lesssim h^{s-r}\norm{\pmb u}_{\pmb H^s(\Gamma_j)},\qquad 0\leq r \leq s\leq p, \\ \norm{u -\Pi_{\pmb p,\pmb \Xi}^{2} u}_{H^r(\Gamma_j)} {} & {}\lesssim h^{s-r}\norm{ u}_{H^s(\Gamma_j)},\qquad 0\leq r \leq s\leq p. \end{align*} \end{corollary} Indeed, the construction of $\pmb\Pi^1_{\pmb p,\pmb \Xi}$ makes it possible to estimate $$\norm{\pmb u -\pmb \Pi^1_{\pmb p,\pmb \Xi}\pmb u}_{\pmb H^r(\div_\Gamma,\Gamma_j)}\lesssim h^{s-r}\norm{\pmb u}_{\pmb H^s(\div_\Gamma,\Gamma_j)},\qquad 0\leq r \leq s\leq p,$$ since, by the properties of the pull-backs, the operators also commute w.r.t.~the surface differential operators{: one finds that \begin{align*} \norm{\pmb u -\pmb \Pi^1_{\pmb p,\pmb \Xi}\pmb u}^2_{\pmb H^r(\div_\Gamma,\Gamma_j)} ={}&{} \norm{\pmb u -\pmb \Pi^1_{\pmb p,\pmb \Xi}\pmb u}^2_{\pmb H^r(\Gamma_j)} + \norm{\div_\Gamma(\pmb u) -\div_\Gamma(\pmb\Pi^1_{\pmb p,\pmb \Xi} \pmb u)}^2_{ H^r(\Gamma_j)}\\ ={}&{} \norm{\pmb u -\pmb \Pi^1_{\pmb p,\pmb \Xi}\pmb u}^2_{\pmb H^r(\Gamma_j)} + \norm{\div_\Gamma(\pmb u) -\Pi^2_{\pmb p,\pmb \Xi} \div_\Gamma(\pmb u)}^2_{ H^r(\Gamma_j)}, \end{align*} which allows us to apply the estimates of the previous corollary. } In the remainder of this section, we generalise these notions to the multipatch case. \subsection{Multipatch Quasi-interpolation Operators} We now generalise the above constructions to the multipatch setting.
{For this, we need to construct interpolation operators capable of preserving continuity across patch boundaries.} For one-dimensional spline spaces $S_p(\Xi)$ and $f\in C^\infty([0,1]),$ \cite[{Sec.~2.1.5}]{Veiga_2014aa} defines $$ \tilde \Pi_{p,\Xi}\colon f \mapsto \sum_{0\leq i < k}\tilde \lambda_{i,p}(f)b_{i}^{p},$$ where for $0< i< k-1$ we set $\tilde \lambda_{i,p}(f)=\lambda_{i,p}(f),$ but additionally, require \begin{align*} \tilde \lambda_{0,p}(f)=f(0)\qquad\text{as well as}\qquad \tilde \lambda_{k-1,p}(f)=f(1). \end{align*} This will yield versions of the projection operators which respect boundary conditions. Analogously to the construction in \cite{Buffa_2011aa}, we can now construct quasi-interpolation operators for the multipatch case that commute w.r.t.~derivation. Investigation of the one-dimensional diagram \begin{equation} \begin{tikzcd} H^1(0,1) \ar{d}{ \tilde \Pi_{p,\Xi}} \ar{r}{\partial_x} & \ar{d}{\tilde \Pi_{p,\Xi}^\partial} L^2(0,1)\\ S_p(\Xi) \ar{r}{\partial_x}&S_{p-1}(\Xi') \end{tikzcd}\label{diag::1d} \end{equation} makes clear that a suitable choice of $\tilde \Pi_{p,\Xi}^\partial$ is given by \begin{align} \tilde \Pi_{p,\Xi}^\partial \colon f \mapsto \partial_x \left[\tilde \Pi_{p,\Xi} \int_0^x f(t) \:\operatorname{d} t\right]. \label{def::tildeop} \end{align} {By diagram chase and application of the fundamental theorem of calculus one can see that} \eqref{def::tildeop} renders Diagram \eqref{diag::1d} commutative. \begin{proposition}[Spline Preserving Property] The operator $\tilde\Pi^\partial_{p,\Xi}\colon L^2(0,1)\to S_{p-1}(\Xi')$ preserves B-splines within $S_{p-1}(\Xi')$. \end{proposition} \smartqed \begin{proof} By \cite[{Sec.~2}]{Buffa_2015aa} we know that the assertion holds for $\tilde \Pi_{p,\Xi}$. Fixing a spline $b'\in S_{p-1}(\Xi'),$ we know that there exists a $b\in S_{p}(\Xi)$ with $\partial_x b = b'$, since $\partial_x\colon S_{p}(\Xi) \to S_{p-1}(\Xi')$ is surjective.
Now, since $b\in H^1(0,1),$ the assertion follows by diagram chase.\qed \end{proof} An immediate consequence of this proposition is the fact that the operator ${\tilde\Pi}^\partial_{p,\Xi}$ is a projection. Defining quasi-interpolation operators via \begin{align} \begin{aligned} \tilde \Pi_{\pmb p,\pmb \Xi}^0 & \coloneqq \tilde \Pi_{p_1,\Xi_1}\otimes \tilde \Pi_{p_2,\Xi_2}, \\ \tilde {\pmb\Pi}_{\pmb p,\pmb \Xi}^{1} & \coloneqq (\tilde \Pi_{p_1,\Xi_1}\otimes \tilde \Pi_{p_2,\Xi_2}^\partial)\times (\tilde \Pi_{p_1,\Xi_1}^\partial\otimes \tilde \Pi_{p_2,\Xi_2}), \\ \tilde \Pi_{\pmb p,\pmb \Xi}^2 & \coloneqq \tilde \Pi_{p_1,\Xi_1}^\partial\otimes \tilde \Pi_{p_2,\Xi_2}^\partial, \label{def::commtilde} \end{aligned} \end{align} {where $\otimes$ denotes the tensor-product and $\times$ denotes the Cartesian product,} we can now define global projections on the physical domain via application of the pull-backs. \begin{definition}[Global Interpolation Operators]\label{def::globalinterpolants} Let $\pmb \Xi$ and $\pmb p$ denote $N$-tuples of pairs of knot vectors and polynomial degrees, respectively. Let $\Gamma=\bigcup_{0\leq j<N}\Gamma_j$ be a multipatch boundary induced by a family of diffeomorphisms $\lbrace \pmb F_j\rbrace_{0\leq j< N}$ {as in Definition~\ref{def:patch}}.
{For a family of patchwise linear operators $\lbrace L_j\rbrace_{0\leq j< N}$ we denote by $\lbrace \tilde L_j\rbrace_{0\leq j< N}$ their extensions by zero onto $\Gamma $ and write $$\bigoplus_{0\leq j< N} L_j \coloneqq \sum_{0\leq j< N} \tilde L_j.$$} Now, the global B-spline projections are defined as \begin{align*} \tilde\Pi_\Gamma^0\coloneqq {}&{}\bigoplus_{0\leq j< N} \left((\iota_{0}(\pmb F_j))^{-1}\circ\tilde\Pi_{\pmb p_j,\pmb\Xi_j}^0\circ\iota_{0}(\pmb F_j)\right),\\ \tilde{\pmb\Pi}_\Gamma^1\coloneqq {}&{}\bigoplus_{0\leq j< N} \left((\iota_{1}(\pmb F_j))^{-1}\circ\tilde{\pmb\Pi}_{\pmb p_j,\pmb\Xi_j}^1\circ\iota_{1}(\pmb F_j)\right),\\ \tilde\Pi_\Gamma^2\coloneqq {}&{}\bigoplus_{0\leq j< N} \left((\iota_{2}(\pmb F_j))^{-1}\circ\tilde\Pi_{\pmb p_j,\pmb\Xi_j}^2\circ\iota_{2}(\pmb F_j)\right), \end{align*} i.e.,~by patchwise application of the projections of \eqref{def::commtilde} with their corresponding pull-backs and push-forwards. \end{definition} Note that, since the pull-backs intertwine the differential operators in the reference domain with the surface differential operators, an analogue of Lemma \ref{lem::theycommute} also holds for the global interpolants \cite{Peterson_2006aa}. For the global interpolation operators to be well defined, we require a certain amount of regularity. This can be formalised as follows. \begin{lemma}[Regularity Required for the Commuting Diagram Property]\label{lem::multipatch::theycommute::and::regularity} The interpolation operators $\tilde\Pi_\Gamma^{0},$ $\tilde{\pmb\Pi}_\Gamma^{1}$ and $\tilde\Pi_\Gamma^{2}$ are well defined for functions in $H^{1+\varepsilon}(\Gamma)$, ${\pmb H}^{\varepsilon}(\div_\Gamma,\Gamma)$ and $H^\varepsilon(\Gamma),$ respectively, for any $\varepsilon>0$.
Moreover, the diagram $$ \begin{tikzcd} H^{1+\varepsilon}(\Gamma)\ar{r}{{\bb\curl}_\Gamma}\ar{d}{\tilde\Pi_\Gamma^0}&\pmb H^{\varepsilon}(\div_\Gamma,\Gamma)\ar{r}{\div_\Gamma}\ar{d}{\tilde{\pmb\Pi}_\Gamma^{1}}&H^\varepsilon(\Gamma)\ar{d}{\tilde\Pi_\Gamma^2}\\ \S_{\pmb p,\pmb\Xi}^0(\Gamma)\ar{r}{{\bb\curl}_\Gamma }&\pmb\S_{\pmb p,\pmb\Xi}^{1}(\Gamma)\ar{r}{\div_\Gamma}&\S_{\pmb p,\pmb\Xi}^2(\Gamma) \end{tikzcd} $$ commutes. \end{lemma} \begin{proof} By the Sobolev embedding theorem, see \cite[Sec.~8]{Nezza_2012aa}, we know that any function in $H^{1+\varepsilon}(\Gamma)$ admits a continuous representative. Thus, by definition of $\tilde{\Pi}^0_\Gamma,$ it is well defined for functions in $H^{1+\varepsilon}(\Gamma)$. The definition of the operators $\tilde \Pi_{p,\Xi}^\partial$ via integration makes them well defined for functions in $H^\varepsilon(\Gamma)$, which immediately yields the assertion about $\tilde\Pi^2_\Gamma.$ \nocite{Adams_1978aa,wloka_1987aa} It remains to show that $\pmb H^\varepsilon(\div_\Gamma,\Gamma)$ is within the domain of $\tilde {\pmb\Pi}^1_\Gamma.$ Considering each interface separately, by applying Gauss' theorem in the union of the two adjacent patches, one can see that the normal component of any function in $\pmb H^\varepsilon(\div_\Gamma,\Gamma)$ is continuous across patch boundaries. By the tensor product construction of $\tilde{\pmb\Pi}^1_{\Gamma}$ on each patch w.r.t.~the reference domain, the continuous component can be identified with the domain of the operators of type $\tilde \Pi_{p,\Xi}$. Thus, the interpolation is well defined. With regard to the interior and the tangential component along patch interfaces, by definition of the dual functionals $\tilde \lambda_i$ via integration, and integration within the definition of $\tilde \Pi^\partial_{p,\Xi}$, regularity of $\pmb H^\varepsilon(\Gamma)$ suffices for the operation to be well defined.
The commuting property {follows analogously to \cite[Prop.~5.8]{Veiga_2014aa}}.\qed \end{proof} The constructions of \eqref{def::commtilde} and Definition \ref{def::globalinterpolants} can easily be generalised to three dimensions, see Appendix A. \subsection{Convergence Properties of Multipatch Quasi-Interpolation Operators} We will now provide approximation estimates for the introduced interpolation operators. Note that, by construction, it is clear that the boundary interpolating projections commute with the differential operators. It is, however, not clear whether the construction in \eqref{def::tildeop} and \eqref{def::commtilde} impacts the convergence behaviour w.r.t.~$h$-refinement. To utilise the commuting property to show convergence in the energy spaces, we need an analogue of Corollary \ref{cor::Approxcor} for the multipatch operators. The classical proofs rely heavily on the $L^2$-stability of the projectors. Unfortunately, due to the interpolation at $0$ and $1$, the multipatch variants lose this property. Thus, we need to establish another suitable stability condition. \begin{proposition}[Stability of $\tilde \Pi_{p,\Xi}$]\label{Prop::TildeStability} Let Assumption \ref{ass::knotvecs} hold. Assume $f$ to be continuous in neighbourhoods of $0$ and $1$, and let $I = (\xi_j,\xi_{j+1})$. Let $\tilde I$ denote the support extension of $I$. Then it holds that \begin{align} \norm{\tilde\Pi_{p,\Xi} (f)}_{L^2(I)}\lesssim{} &{} \norm{f}_{L^2(\tilde I)}+h\abs{f}_{H^1(\tilde I)} ,\label{eq::notquiteL2stable}\\ \abs{\tilde \Pi_{p,\Xi} (f)}_{H^1(I)}\lesssim {}&{} \norm{f}_{H^1(\tilde I)}.\label{eq::secondStability} \end{align} Moreover, we find \begin{align*} \norm{{\tilde\Pi}_{p,\Xi}^\partial(f)}_{L^2(I)} \lesssim \norm{f}_{L^2(\tilde I)}. \end{align*} \end{proposition} \begin{proof} The first two inequalities have been shown in \cite[{Prop.~2.3}]{Buffa_2015aa}. For the third assertion, we set $g(x)=\int_0^xf(t)\:\operatorname{d} t$.
The assertion then follows from an application of the Poincaré inequality, as follows. We set $C=-\frac{1}{\abs{\tilde I }}\int_{\tilde I} g \:\operatorname{d} x$, where $\abs{\tilde I}$ denotes the Lebesgue measure of $\tilde I$, and observe that \begin{align} \norm{{\tilde\Pi}^\partial_{p,\Xi}(f)}_{L^2(I)} = {}&{}\norm{\partial_x\tilde\Pi_{p,\Xi} \int_0^xf(t)\:\operatorname{d} t }_{L^2(I)}\notag\\ = {}&{}\norm{\partial_x\tilde \Pi_{p,\Xi}\left(\int_0^x f(t)\:\operatorname{d} t +C\right)}_{L^2(I)}\notag\\ = {}&{}\abs{\tilde\Pi_{p,\Xi}\left(\int_0^x f(t)\:\operatorname{d} t +C\right)}_{H^1(I)}\notag\\ \lesssim {}&{}(\norm{g+C}_{L^2(\tilde I)}^2 + \abs{g + C}_{H^1(\tilde I)}^2)^{1/2},\label{eq::followsfrom} \end{align} where the inequality follows from \eqref{eq::secondStability}. Now, since by definition of $C$ we find that $\frac{1}{\abs{\tilde I }}\int_{\tilde I}g\:\operatorname{d} x=-C$, we can apply the Poincaré inequality, see e.g.~\cite{wloka_1987aa}, which yields $$\norm{g + C}_{L^2(\tilde I )}\lesssim \abs{g}_{H^1(\tilde I)} = \norm{f}_{L^2(\tilde I)},$$ for the first term of \eqref{eq::followsfrom}. For the second term, we find $$\abs{g+C}_{H^1(\tilde I)}^2= \int_{\tilde I}\abs{\partial_x(g(x)+C)}^2\:\operatorname{d} x = \abs{g}_{H^1(\tilde I)}^2= \norm{f}_{L^2(\tilde I)}^2$$ and the assertion follows.\qed \end{proof} Utilising the stability condition, we can now provide an error estimate in one dimension. \begin{proposition}[Approximation Properties of $\tilde \Pi_{p,\Xi}$]\label{prop::missinglink} Let the assumptions of Proposition \ref{Prop::TildeStability} hold. For integers $1\leq s\leq p+1$ one finds \begin{align*} \norm{f-\tilde \Pi_{p,\Xi} f}_{L^2(I)}\lesssim h^{s}\norm{f}_{H^s(\tilde I)},\qquad \text{for all } f\in H^s(0,1), \intertext{and for integers $0\leq s\leq p$ one finds} \norm{f-\tilde \Pi_{p,\Xi}^\partial f}_{L^2(I)}\lesssim h^{s}\norm{f}_{H^s(\tilde I)},\qquad \text{for all } f\in H^s(0,1).
\end{align*} \end{proposition} \begin{proof} We investigate merely the case of $\tilde\Pi_{p, \Xi}$. Due to the stability of $\tilde \Pi^\partial_{p,\Xi}$ as discussed in Proposition \ref{Prop::TildeStability}, we can prove the other case by similar means. For the first inequality, {as in \cite[Prop.~4.2]{Veiga_2014aa},} it is enough to consider classical polynomial estimates together with Proposition \ref{Prop::TildeStability} to achieve \begin{align*} \norm{f-\tilde\Pi_{p, \Xi} f}_{L^2(I)}&\leq \norm{f-q}_{L^2(I)}+\norm{\tilde\Pi_{p,\Xi}(q-f)}_{L^2(I)}\\ &\lesssim \norm{f-q}_{L^2(I)}+ \norm{q-f}_{L^2(\tilde I)}+h\abs{q-f}_{H^1(\tilde I)} \\ &\lesssim h^{s}\norm{f}_{H^s(\tilde I)}, \end{align*} which holds for a sensible choice of $q$, i.e., the $L^2$-orthogonal approximation w.r.t.~the polynomials of degree no higher than $p$.\qed \end{proof} We state the main result of this section. \begin{theorem}[Approximation via Commuting Multipatch Quasi-Interpolants]\label{lem::multiconv} Let Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} be satisfied and let $s$ be integer-valued. Let $f_0 \in H_{{\mathrm{pw}}}^{s}(\Gamma),\text{ } 2\leq s, $ as well as $\pmb f_1 \in {\pmb H}_{{\mathrm{pw}}}^{s}(\Gamma),\text{ } 1\leq s,$ and $f_2 \in H_{{\mathrm{pw}}}^{s}(\Gamma),\text{ } 0\leq s.$ Moreover, let each function be within the domain of the interpolation operator applied below, cf.~Lemma \ref{lem::multipatch::theycommute::and::regularity}. 
We find that \begin{align} \norm{f_0- \tilde \Pi_\Gamma^0f_0}_{L^2(\Gamma)} {}& {}\lesssim h^{s}\norm{f_0}_{H_{{\mathrm{pw}}}^s (\Gamma)},& & 2\leq s\leq p+1, \notag\\ \norm{f_0- \tilde \Pi_\Gamma^0f_0}_{H^1(\Gamma)} {}& {}\lesssim h^{s-1}\norm{f_0}_{H_{{\mathrm{pw}}}^{s} (\Gamma)},& & 2\leq s\leq p+1, \notag\\ \norm{\pmb f_1- \tilde {\pmb\Pi}_\Gamma^1 \pmb f_1}_{\pmb L^2(\Gamma)} {} & {}\lesssim h^{s}\norm{\pmb f_1}_{{\pmb H}_{{\mathrm{pw}}}^{s}(\Gamma)}, & & 1\leq s\leq p, \notag \\ \norm{f_2- \tilde\Pi_\Gamma^2 f_2}_{L^2(\Gamma)} {} & {}\lesssim h^{{s}}\norm{f_2}_{H_{{\mathrm{pw}}}^s (\Gamma)}, & & 0\leq s\leq p. \notag \intertext{ We moreover find that} \norm{\pmb f_1- \tilde {\pmb \Pi}_\Gamma^1 \pmb f_1}_{\pmb H^0(\div_\Gamma,\Gamma)} {} & {}\lesssim h^s\norm{\pmb f_1}_{{\pmb H}_{{\mathrm{pw}}}^s(\div_\Gamma,\Gamma)}, & & 1\leq s\leq p. \label{eq::hdivnorm} \end{align} \end{theorem} \begin{proof} Due to the properties of the pull-backs and the locality of the norms involved, it suffices to provide a patchwise argument in the reference domain. Note that the regularity of the spline approximation is always sufficient for the involved norms to be defined, since it is enforced by the interpolation property of the operators $\tilde \Pi$ at the patch interfaces. \cite[Prop.~4.2]{Buffa_2015aa} directly provides \begin{align} \norm{f-\tilde\Pi_{\pmb p,\pmb \Xi}f}_{H^r\big((0,1)^2\big)}& \lesssim h^{s-r}\norm{f}_{H^{s}\big((0,1)^2\big)}, \end{align} for $r=0,1$, from which the $\tilde \Pi_\Gamma^0$ case follows immediately. We will now provide a proof for the $\tilde\Pi_\Gamma^2$ case by investigating $\tilde\Pi^\partial_{\pmb p,\pmb \Xi}=\tilde\Pi^\partial_{p,\Xi_1}\otimes \tilde\Pi^\partial_{p,\Xi_2},$ which we do largely analogously to the proofs within the cited literature. The third assertion follows from a combination of the arguments in each vector component. Let $f\in H^s\big((0,1)^2\big)$ for some $1\leq s\leq p$.
{Note that this implies $\norm{f}_{H^s_x(0,1)}\in{L_y^2(0,1)}$ and $\norm{f}_{H^s_y(0,1)}\in{L_x^2(0,1)}$, where by the $x$- and $y$-indexed norms we denote the usual norm taken w.r.t. the corresponding tensor product direction.} {Let $I_1\times I_2 = Q \subset (0,1)^2$ be an element.} One can estimate via the triangle inequality that \begin{align} \norm{f-\tilde \Pi^\partial_{\pmb p,\pmb \Xi} f}_{L^2(Q)} &= \norm{f- (\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \tilde {\Pi}^\partial_{p,\Xi_2}) (f)}_{L^2(Q)}\notag\\ & \leq \norm{f-(\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \operatorname{Id}) (f)}_{L^2(Q)} \notag\\ &\qquad+\norm{(\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \operatorname{Id}) (f) - (\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \tilde {\Pi}^\partial_{p,\Xi_2}) (f)}_{L^2(Q)}. \label{eq::estimate} \end{align} By Proposition~\ref{prop::missinglink} we can immediately estimate the first term of \eqref{eq::estimate} via \begin{align} \norm{f-(\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \operatorname{Id}) (f)}^2_{L^2(Q)} ={}&{}\int_{{I_2}} \norm{f-\tilde \Pi^\partial_{p,\Xi_1} f}^2_{L^2({I_1})}\:\operatorname{d} y\notag\\ \lesssim {}&{} h^{2s}\int_{{I_2}}\norm{f}_{H^s({\tilde I_1})}^2\:\operatorname{d} y\notag\\ \lesssim {}&{} h^{2s}\norm{f}_{H^s(\tilde Q)}^2.\label{proof::eqpack1} \end{align} Now, we can estimate the second term of \eqref{eq::estimate} by utilisation of the stability property from Proposition \ref{Prop::TildeStability} {and application of Proposition~\ref{prop::missinglink}}, which yields \begin{align} \norm{(\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \operatorname{Id}) (f) - (\tilde {\Pi}^\partial_{p,\Xi_1}\otimes \tilde {\Pi}^\partial_{p,\Xi_2}) (f)}_{L^2(Q)}^2 \lesssim{}&{} \int_{{I_1}} \norm{f -\tilde\Pi^\partial_{p,\Xi_2}f }_{L^2({\tilde I_2})}^2\:\operatorname{d} {x} \notag\\ \lesssim {}&{} h^{2s}\norm{f}^2_{H^s(\tilde Q)}.\label{proof::eqpack2} \end{align} Now the assertion follows.
Again, we stress that the missing assertion for an interpolator of type $\tilde\Pi\otimes\tilde\Pi^\partial$ follows analogously, even though it is not $L^2$-stable due to the impact of the seminorm term in \eqref{eq::notquiteL2stable}. One merely needs to replace either \eqref{proof::eqpack1} or \eqref{proof::eqpack2} with the corresponding argument from \cite[{Prop.~4.2}]{Buffa_2015aa}. For an investigation of \eqref{eq::hdivnorm}, it suffices to utilise Lemma \ref{lem::multipatch::theycommute::and::regularity} together with the above to see that, for $1\leq s\leq p$, one finds \begin{align*} \norm{\pmb f_1- \tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1}_{\pmb H^0(\div,(0,1)^2)} {} & {}\leq \norm{\pmb f_1- \tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1}_{\pmb L^2((0,1)^2)} + \norm{\div(\pmb f_1- \tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1)}_{L^2((0,1)^2)}\\ &{}={} \norm{\pmb f_1- \tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1}_{\pmb L^2((0,1)^2)} + \norm{\div\pmb f_1- \div(\tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1)}_{L^2((0,1)^2)}\\ &{}={} \norm{\pmb f_1- \tilde {\pmb\Pi}_{\pmb p,\pmb\Xi}^1 \pmb f_1}_{\pmb L^2((0,1)^2)} + \norm{\div\pmb f_1- \tilde{\pmb\Pi}_{\pmb p,\pmb\Xi}^2 \div ( \pmb f_1)}_{L^2((0,1)^2)}\\ & {} \lesssim h^{s}\norm{\pmb f_1}_{{\pmb H}^s(\div,(0,1)^2)}, \end{align*} from which the result follows by properties of the geometry mapping.\qed \end{proof} These results are immediately applicable to two-dimensional finite element methods, with a straightforward generalisation to three dimensions, see Appendix A. \begin{corollary}[Approximation Results for Finite Element Methods]\label{cor::femanalogy} Let $\Omega$ be a two-dimensional domain satisfying Assumption \ref{ass::geometry}. Let $f_0\in H^1(\Omega)$, $\pmb f_1\in \pmb H^0(\div,\Omega)$ and $f_2\in L^2(\Omega)$.
Then it holds that \begin{align*} \inf_{f_h\in \S^0_{\pmb p,\pmb \Xi}(\Omega)}\norm{ f_0 - f_h}_{H^{1}(\Omega)}&\lesssim h^{s-1}\norm{f_0}_{{H}_{{\mathrm{pw}}}^{s}(\Omega)},&1\leq {}&{}s \leq p+1,\\ \inf_{\pmb f_h\in \pmb \S^1_{\pmb p,\pmb \Xi}(\Omega)}\norm{ \pmb f_1 - \pmb f_h}_{\pmb H^0(\div,\Omega)}&\lesssim h^{s}\norm{\pmb f_1}_{ {\pmb H}_{{\mathrm{pw}}}^s(\div,\Omega)},&0\leq {}&{}s \leq p,\\ \inf_{f_h\in \S^2_{\pmb p,\pmb \Xi}(\Omega)}\norm{ f_2 - f_h}_{L^2(\Omega)}&\lesssim h^{s}\norm{f_2}_{{H}_{{\mathrm{pw}}}^{s}(\Omega)},&0\leq {}&{}s \leq p. \end{align*} \end{corollary} \begin{proof} Due to the stability of the respective orthogonal projection $\mathcal P_1\colon H^{1}(\Omega)\to \S^0_{\pmb p,\pmb \Xi}(\Omega)$, ${\mathcal P}_{\div}\colon {\pmb{ H}^0}(\div,\Omega)\to {\pmb \S}^1_{\pmb p,\pmb \Xi}(\Omega)$ and $\mathcal P_0\colon L^2(\Omega) \to \S^2_{\pmb p,\pmb \Xi}(\Omega)$, we immediately have the result for the minimal values of $s$. {Repeating the same steps as in the proof of} Theorem \ref{lem::multiconv}, we find the result for larger values of $s$ and smooth choices of $f_0$, $\pmb f_1$ and $f_2$. The assertion now follows by interpolation arguments {as in Lemma \ref{interpolationlemma}} and {density of smooth functions in (subspaces of) $L^2(\Omega)$}. \qed \end{proof} Again, a generalisation of this result to the sequence $$ \begin{tikzcd} H^{1}(\Omega)\ar{r}{\pmb\grad}&\pmb H^0(\pmb\curl,{\Omega})\ar{r}{\pmb\curl}&\pmb H^0(\div,\Omega)\ar{r}{\div}&L^2(\Omega) \end{tikzcd} $$ on three-dimensional volumetric domains $\Omega$ is straightforward, cf. Appendix A. In particular, this includes the approximation property of $\pmb H^0(\pmb \curl,\Omega)$, which, for two-dimensional domains, coincides with the $\pmb H^0(\div,\Omega)$-case, up to rotation, see \cite[Sec.~5.5]{Veiga_2014aa}.
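The exactness of the discrete sequence rests on the elementary fact, used throughout, that differentiation maps $S^p(\Xi)$ onto $S^{p-1}(\Xi')$, where $\Xi'$ drops one knot at each end of the open knot vector. The following is a minimal numerical sketch of this fact, assuming SciPy's B-spline routines are available; the degree and knot vector are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.interpolate import BSpline

p = 3                                     # spline degree (illustrative)
interior = [0.25, 0.5, 0.75]              # interior knots (illustrative)
# open knot vector: boundary knots repeated p + 1 times
Xi = np.concatenate(([0.0] * (p + 1), interior, [1.0] * (p + 1)))
dim = len(Xi) - p - 1                     # dimension of S^p(Xi)
c = np.arange(1.0, dim + 1)               # an arbitrary spline in S^p(Xi)
s = BSpline(Xi, c, p)

ds = s.derivative()                       # should lie in S^{p-1}(Xi')
assert ds.k == p - 1                      # the degree drops by one
assert np.allclose(ds.t, Xi[1:-1])        # Xi' loses one boundary knot per side

# sanity check against a centred finite difference at an interior point
x, h = 0.4, 1e-6
assert abs(ds(x) - (s(x + h) - s(x - h)) / (2 * h)) < 1e-5
```

The same bookkeeping, applied per tensor-product direction, underlies the discrete complexes in two and three dimensions.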
\begin{remark} {We emphasise that the interpolation operators constructed in this section are merely a theoretical tool, for which there are alternatives, cf.~\cite{Gerritsma2010} or the sources cited therein. We utilise the Schumaker quasi-interpolation operators \cite{Schumaker_2007aa}, since they are often used within the spline community and they suffice to show quasi-optimality w.r.t. $h$. In most implementations, it suffices to implement the orthogonal projection or evaluate a suitable bilinear form via quadrature rules rather than interpolation operators.} \end{remark} \section{Approximation Properties in Trace Spaces}\label{sec::tracespaces} Now, we will consider approximation properties of the spaces $\S^0_{\pmb p,\pmb \Xi}(\Gamma)$, $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)$ and $\S^2_{\pmb p,\pmb \Xi}(\Gamma)$ w.r.t.~the fractional Sobolev spaces $H^{1/2}(\Gamma)$, $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$ and $H^{-1/2}(\Gamma)$. This will be done by investigation of the orthogonal projection. Due to its optimality, we know that it achieves at least the convergence rates w.r.t.~$h$-refinement of Theorem~\ref{lem::multiconv}. Moreover, properties of the orthogonal projection are of interest for applications in the context of partial differential equations, since inf-sup conditions yield quasi-optimal behaviour of the approximate solution w.r.t.~the orthogonal approximation in the involved energy space \cite{Xu_2002aa}. We start by combining interpolation as in Lemma \ref{interpolationlemma} with the optimality of the orthogonal projection in the respective energy space to obtain convergence results for positive fractional spaces. This yields the following corollary. \begin{corollary}[Approximating $H^{1/2}(\Gamma)$ with $\S_{\pmb p,\pmb \Xi}^0(\Gamma)$]\label{H1/2} Let Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} be satisfied.
Let {$f\in H^{s}_{{\mathrm{pw}}}(\Gamma)\cap H^{1/2}(\Gamma)$} {for integers $2\leq s\leq p+1$}, and let $\mathcal P_{1/2} f$ denote its $H^{1/2}(\Gamma)$-orthogonal projection onto $\S_{\pmb p,\pmb \Xi}^0(\Gamma).$ It holds that \begin{align*} \norm{f- \mathcal P_{1/2}f}_{H^{1/2}(\Gamma)} {}& {}\ \lesssim h^{s-1/2}\norm{f}_{H^{s}_{{\mathrm{pw}}}(\Gamma)}. \end{align*} \end{corollary} \begin{proof} By Theorem \ref{lem::multiconv} we know for integers $s$ with $2 \le s \le p+1$ that \begin{align*} \norm{f- \tilde \Pi^0_\Gamma(f)}_{H^{r}(\Gamma)} &{}\lesssim h^{s-r}\norm{f}_{{H}^s_{{\mathrm{pw}}}(\Gamma)}, \end{align*} for both $r \in \{0,1\}$. Now, application of Lemma \ref{interpolationlemma} yields \begin{align*} \norm{f- \tilde \Pi^0_\Gamma(f)}_{H^{1/2}(\Gamma)} &{}\lesssim h^{s-1/2}\norm{f}_{{H}^s_{{\mathrm{pw}}}(\Gamma)}. \end{align*} By optimality of the $H^{1/2}(\Gamma)$-orthogonal projection $\mathcal P_{1/2}$, we {obtain the result.} \qed \end{proof} Interpolation does not yield estimates in norms with negative index. Thus, to show the approximation properties of $\S_{\pmb p,\pmb \Xi}^2(\Gamma)$ in $H^{-1/2}(\Gamma),$ we resort to an application of the Aubin-Nitsche Lemma \cite{Adams_1978aa}. \begin{corollary}[Approximating $H^{-1/2}(\Gamma)$ with $\S_{\pmb p,\pmb \Xi}^2(\Gamma)$]\label{H-1/2} Let Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} be satisfied. Let $f\in H^{-1/2}(\Gamma)\cap H_{{\mathrm{pw}}}^{s}(\Gamma)$ for some $s\geq {0}$. Let $\mathcal P_{-1/2}$ denote the $H^{-1/2}(\Gamma)$-orthogonal projection of $f$ onto $\S_{\pmb p,\pmb \Xi}^2(\Gamma).$ Then it holds that \begin{align} \norm{f- \mathcal P_{-1/2}f}_{H^{-1/2}(\Gamma)} {} & {} \lesssim h^{{s+1/2}}\norm{f}_{H_{{\mathrm{pw}}}^s (\Gamma)}, & & {0}\leq s\leq p. 
\label{eq::h-12} \end{align} \end{corollary} \begin{proof} Assume, for now, that $f\in L^2(\Gamma)\cap H_{{\mathrm{pw}}}^{s}(\Gamma),$ and let $\mathcal P_0$ denote the $L^2$-orthogonal approximation onto $\S_{\pmb p,\pmb \Xi}^2(\Gamma).$ Since $H^{-1/2}(\Gamma)$ is the dual space to $H^{1/2}(\Gamma)$ we can estimate \begin{align}\begin{aligned} \norm{f- \mathcal P_0 f}_{H^{-1/2}(\Gamma)}\coloneqq {}&{}\sup_{0\neq v\in H^{1/2}(\Gamma)} \frac{\abs{\langle f- \mathcal P_0 f , v\rangle_{L^2(\Gamma)}}}{\norm{v}_{H^{1/2}(\Gamma)}}\\ = {}&{}\sup_{0\neq v\in H^{1/2}(\Gamma)} \frac{\abs{\langle f- \mathcal P_0 f , v-\mathcal P_0 v\rangle_{L^2(\Gamma)}}}{\norm{v}_{H^{1/2}(\Gamma)}}\\ \lesssim {}&{} \norm{f-\mathcal P_0 f}_{L^2(\Gamma)}\sup_{0\neq v\in H^{1/2}(\Gamma)}\frac{\norm{v-\mathcal P_0 v}_{L^2(\Gamma)}}{\norm{v}_{H^{1/2}(\Gamma)}} .\end{aligned}\label{classicaldualityhalforderconvergence} \end{align} By Theorem \ref{lem::multiconv}, we now arrive at $\norm{f-\mathcal P_{0}f }_{H^{-1/2}(\Gamma)}\lesssim h^{1/2+s}\norm{f}_{H_{{\mathrm{pw}}}^s(\Gamma)}$ for $0\leq s\leq p.$ Replacing $\mathcal P_0$ by $\mathcal P_{-1/2}$ now yields the assertion, analogously to the proof of Corollary \ref{H1/2}, using interpolation, optimality of $\mathcal P_{-1/2}$ w.r.t.~the $H^{-1/2}(\Gamma)$-error and density of regular functions in $H^{-1/2}(\Gamma)$.\qed \end{proof} \begin{remark} This result does not necessarily rely on Theorem \ref{lem::multiconv}. Since $H^{-1/2}(\Gamma)$ allows for discontinuities, it can be reproduced by application of the patchwise estimates of Corollary \ref{cor::Approxcor}. This has been done in \cite{Doelz_2017aa}.
\end{remark} \begin{remark} {Note that by putting global norms on the right-hand side, analogues of Corollaries \ref{H1/2} and \ref{H-1/2} can be shown for minimal regularities, i.e., $1/2\leq s$ in the case of the $H^{1/2}(\Gamma)$-error and $-1/2\leq s$ in the case of the $H^{-1/2}(\Gamma)$-error, by almost analogous means, cf.~\cite[Thm.~4.2.17]{Sauter_2010aa}. However, these results rely on the smoothness of the geometry for the norm on the right-hand side to be well defined. We aim for our results to be immediately applicable to the multipatch setting of isogeometric analysis, where we want to require smoothness of the geometry only patchwise.} \end{remark} What is still missing for a full understanding of the approximation properties of the spaces $\S^0_{\pmb p,\pmb\Xi}(\Gamma),$ $\pmb\S_{\pmb p,\pmb\Xi}^1(\Gamma),$ and $\S^2_{\pmb p,\pmb\Xi}(\Gamma)$ in the trace space setting w.r.t.~the diagram in Figure \ref{fig::classicaldeRham} is an analysis of the approximation properties of $\pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)$ in the space $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma).$ For this purpose, we want to employ an argument similar to the one of Corollary \ref{H-1/2}. However, as will be discussed in a moment, this cannot be done as easily as before. We choose to follow the lines of \cite{Buffa_2003ab}, from whose argumentation we deviate only to adapt to the B-spline setting. The proof is lengthy and technical; thus we only state the result, with the full proof discussed in Section \ref{longproof}.
\begin{theorem}[Approximating $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$ with $\pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)$]\label{thm::hdiv} Let Assumptions \ref{ass::knotvecs} and \ref{ass::geometry} be satisfied and let $\pmb f\in {\pmb H}_{{{\mathrm{pw}}}}^s(\div_\Gamma,\Gamma)\cap \pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$ for some $s\geq -1/2.$ Let $\mathcal P_\times f$ denote the $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$-orthogonal projection of $\pmb f$ onto $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)$. Then one finds \begin{align*} \norm{\pmb f-\mathcal P_\times f}_{\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)} &\lesssim h^{1/2+s} \norm{\pmb f}_{{\pmb H}_\times^s(\div_\Gamma,\Gamma)} \intertext{ for all $-1/2\leq s\leq 0.$ Moreover, for $0\leq s\leq p$, it holds that} \norm{\pmb f-\mathcal P_\times f}_{\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)}& \lesssim h^{1/2+s} \norm{\pmb f}_{ {\pmb H}_{{\mathrm{pw}}}^s(\div_\Gamma,\Gamma)}. \end{align*} \end{theorem} \begin{remark} Note that Corollary \ref{H-1/2} and Theorem \ref{thm::hdiv} include the classical results from boundary element theory, even though a first glance suggests otherwise. This is due to the fact that $p$ refers not to the degree of $\S_{\pmb p,\pmb \Xi}^2(\Gamma)$ and $\pmb \S_{\pmb p,\pmb \Xi}^1(\Gamma)$ respectively, but rather to the degree at the beginning of the sequence $\S_{\pmb p,\pmb \Xi}^0(\Gamma)\to\pmb \S_{\pmb p,\pmb \Xi}^{1}(\Gamma)\to\S_{\pmb p,\pmb \Xi}^2(\Gamma).$ In terms of basis functions, the space $\S_{\pmb p,\pmb \Xi}^2(\Gamma)$ contains splines of degree $p-1$, thus shifting the notation by 1. \end{remark} \subsection{Proof of Theorem \ref{thm::hdiv}}\label{longproof} \input{appendB} \FloatBarrier \section{Conclusion}\label{sec::conclusion} We have derived multipatch approximation results of the spline complex w.r.t.~the norms required by boundary and finite element methods.
Let the functions $f_0$, $\pmb f_1$, $f_2$ be regular enough for the norms on both the left- and right-hand sides of the following estimates to be well defined, see also Lemma \ref{lem::multipatch::theycommute::and::regularity}. For multipatch boundaries $\Gamma$ in accordance with Assumptions \ref{ass::knotvecs} and \ref{ass::geometry}, we proved \begin{align} \inf_{f_h\in \S^0_{\pmb p,\pmb \Xi}(\Gamma)}\norm{ f_0 - f_h}_{H^{1/2}(\Gamma)}&\lesssim h^{s-1/2}\norm{f_0}_{{H}^{s}_{{\mathrm{pw}}}(\Gamma)}& {2}\leq {}&{}s \leq p+1,\label{2d:1}\\ \inf_{\pmb f_h\in \pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)}\norm{ \pmb f_1 - \pmb f_h}_{\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)}&\lesssim h^{s+1/2}\norm{\pmb f_1}_{{\pmb H}_\times^s(\div_\Gamma,\Gamma)}&-1/2\leq {}&{}s \leq 0,\label{2d:2}\\ \inf_{\pmb f_h\in \pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)}\norm{ \pmb f_1 - \pmb f_h}_{\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)}&\lesssim h^{s+1/2}\norm{\pmb f_1}_{ {\pmb H}_{{\mathrm{pw}}}^s(\div_\Gamma,\Gamma)}&0\leq {}&{}s \leq p,\label{2d:22}\\ \inf_{f_h\in \S^2_{\pmb p,\pmb \Xi}(\Gamma)}\norm{ f_2 - f_h}_{H^{-1/2}(\Gamma)}&\lesssim h^{s+1/2}\norm{f_2}_{{H}^{s}_{{\mathrm{pw}}}(\Gamma)}&{0}\leq {}&{}s \leq p.\label{2d:3} \end{align} Here, \eqref{2d:1} follows from Corollary \ref{H1/2}, \eqref{2d:2} and \eqref{2d:22} follow from Theorem \ref{thm::hdiv}, and \eqref{2d:3} follows from Corollary \ref{H-1/2}. Moreover, we can apply these results for finite element methods as well.
By extension of the tensor product structure in the construction of spline spaces and interpolation operators by one dimension, see Appendix A, we find for multipatch domains $\Omega\subseteq \mathbb R^d$, with $d=2,3$, the estimates \begin{align*} \inf_{f_h\in \S^0_{\pmb p,\pmb \Xi}(\Omega)}\norm{ f_3 - f_h}_{H^{1}(\Omega)}&\lesssim h^{s-1}\norm{f_3}_{{H}^{s}_{{\mathrm{pw}}}(\Omega)}&{d}\leq {}&{}s \leq p+1,\\ \inf_{\pmb f_h\in \pmb \S^1_{\pmb p,\pmb \Xi}(\Omega)}\norm{ \pmb f_4 - \pmb f_h}_{\pmb H^0(\div,\Omega)}&\lesssim h^{s}\norm{\pmb f_4}_{{\pmb H}^s_{{\mathrm{pw}}}(\div,\Omega)}&{1}\leq {}&{}s \leq p,\\ \inf_{f_h\in \S^2_{\pmb p,\pmb \Xi}(\Omega)}\norm{ f_5 - f_h}_{L^2(\Omega)}&\lesssim h^{s}\norm{f_5}_{{H}^{s}_{{\mathrm{pw}}}(\Omega)}&0\leq {}&{}s \leq p, \end{align*} for $f_3$, $\pmb f_4$ and $f_5$ smooth enough for the norms to be defined, as explained in Corollary \ref{cor::femanalogy}. Estimates for three-dimensional spaces, including $\pmb H(\pmb\curl,\Omega),$ follow analogously, cf. Corollary~\ref{volumetricmulticonv} in Appendix A. We can drop the regularity requirements from Theorem \ref{lem::multiconv}, since they are only required by the constructed quasi-interpolants, and not by the orthogonal projection w.r.t.~the corresponding Sobolev spaces, see Section \ref{sec::tracespaces}.
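The $h$-rates above can be observed numerically in the simplest one-dimensional analogue: discrete least-squares ($L^2$-type) approximation of a smooth function by splines of degree $p$ on uniform knots converges like $h^{p+1}$. The following sketch assumes SciPy is available; the test function, degree, and mesh sizes are illustrative choices.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

f = lambda x: np.sin(2 * np.pi * x)            # smooth test function
p = 2                                          # spline degree
errs = []
for n in (8, 16, 32):                          # uniform meshes, h = 1/n
    x = np.linspace(0.0, 1.0, 40 * n)          # dense sample points
    interior = np.linspace(0.0, 1.0, n + 1)[1:-1]
    # open knot vector for degree-p splines on the uniform mesh
    t = np.concatenate(([0.0] * (p + 1), interior, [1.0] * (p + 1)))
    spl = make_lsq_spline(x, f(x), t, k=p)     # discrete L^2 projection
    xx = np.linspace(0.0, 1.0, 5000)
    errs.append(np.max(np.abs(f(xx) - spl(xx))))
# observed orders should approach p + 1 = 3 under mesh halving
rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
print(rates)
```

Halving $h$ should reduce the error by a factor close to $2^{p+1}$, mirroring the $h^{s}$ estimate with $s=p+1$ for smooth data.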
Taking into account the three-dimensional generalisation of the construction in Section \ref{subsec::approx}, see Appendix A, we now have access to a discretisation of the diagram in Figure \ref{fig::classicaldeRham}, given by \begin{equation} \begin{tikzcd}[row sep = 2em,column sep = 1.3cm] \S^0_{\tilde{\pmb p},\pmb {\tilde \Xi}}(\Omega)\ar{d}[description]{\gamma_0}\ar{r}[description]{\pmb\grad}& \pmb \S^1_{\tilde{\pmb p},\pmb {\tilde \Xi}}(\Omega)\ar{r}[description]{\pmb\curl}\ar{d}[description]{\pmb{\gamma}_t}& \pmb \S^2_{\tilde{\pmb p},\pmb {\tilde \Xi}}(\Omega)\ar{d}[description]{\gamma_{\pmb n}}\ar{r}[description]{\div} & \S^3_{\tilde{\pmb p},\pmb {\tilde \Xi}}(\Omega)\\ \S^0_{{\pmb p},\pmb \Xi}(\Gamma)\ar{r}[description]{{{\bb\curl}_\Gamma}} & \pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)\ar{r}[description]{\div_\Gamma} & \S^2_{\pmb p,\pmb \Xi}(\Gamma) \end{tikzcd}\label{Fig::discreteDeRham} \end{equation} for suitable choices of (lists of tuples of) polynomial degrees $\tilde{\pmb p},\pmb p$ and knot vectors $\tilde{\pmb \Xi}, \pmb \Xi$, and under the assumption that $\Omega$ is given as a multipatch domain. {Note that a corresponding discretisation of $\pmb H^{-1/2}(\curl_\Gamma,\Gamma)$ can be obtained in complete analogy to the construction of $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)$.} As a consequence, for \emph{any} problem formulated within the isogeometric framework that enjoys a discrete inf-sup condition or a variant of Céa's Lemma w.r.t.~the norms above, we can expect convergence of optimal order w.r.t.~$h$-refinement \cite{Xu_2002aa}. Note, however, that the orthogonal projection will, in general, not have the commuting diagram property in the sense of Lemma \ref{lem::multipatch::theycommute::and::regularity}. This distinction is critical for existence and uniqueness proofs for problems requiring conforming discretisations.
\section{Introduction}\label{sec:introduction} Political science scholars working with large quantities of textual data are often interested in discovering latent semantic structures in their document collections. Examples include legislative debates, policies, media content, manifestos, and open-ended survey questions. Manual coding of such data is extremely time-consuming and expensive, and does not scale well with the expanding size of the corpora. In practice, document summarization is routinely carried out using variations of probabilistic topic models \citep{BleiNgJordan2003}. However, semantic understanding of the resulting topics still requires human involvement, and thus a set of discretionary decisions that require validation to ensure transparency. A key feature of political corpora is the evolution of semantic structures over time, which is not fully accounted for in existing methods. Politicians and other decision-makers choose to pay attention to functional areas of specialization, such as health or education policies, which reflect their interests, career experience and goals, and demands for representation. The agenda of the legislature and the content of debates shift across these topics over time, according to functional pressures and agenda-setting in the media and from public opinion. This is known as concept drift \citep{gama2014survey}. In natural language processing (NLP), dynamic topic models are often used to capture the evolution of content structure over time \citep{DTM}. Concept drift in political documents stems from the changing content and presentation style of the texts (e.g., compare UN General Debate speeches by Obama and Trump), as well as from the adaptive behavior of politicians \citep{baturo2017understanding}. The vocabulary used to express topics may change over time.
Human coders are intuitively better placed to pick up changes in the meaning of political texts, while machine coding is often faulted for failing to detect semantic change \citep{albaugh_comparing_2014}. We present a method that automatically transfers an existing domain-specific knowledge base to the task of topic labeling. We show that our method works well under concept drift in document summarization. We illustrate the performance of our method by applying dynamic topic modeling to the debates in the UK House of Commons from 1935 to 2014, and labeling the topics using the coding manual of the Comparative Agendas Project (CAP) \citep{bevan2014}. We validate our results using human labeling of the topics by the CAP expert coders. Our method applies more generally and can be easily extended to other areas with an existing domain-specific knowledge base, such as party manifestos, open-ended survey questions, social media analysis, and legal cases. Using our method, researchers in these fields can be more confident that the building blocks of their models are not an artefact of human coding decisions from within the research process itself. \section{Related Work} In the absence of roll-call data that can be used for ideal point estimation, scholars have turned to legislative speech to estimate policy positions, either by focusing on selected debates \citep[e.g.][]{LaverBenoit2002,HerzogBenoit2015} or through the analysis of all speeches during a legislative term \citep{LauderdaleHerzog2016}. A parallel stream of the literature has used topic modeling to estimate the extent to which legislators speak on different topics \citep{quinn_how_2010}. Topic modeling is a class of models that estimate the underlying themes in a collection of documents. Originally proposed by \cite{BleiNgJordan2003} in their seminal paper on latent Dirichlet allocation (LDA), various extensions of LDA have been developed in the computing sciences \citep[e.g.][]{CTM,HDP,Roberts2016}.
There have been several recent applications in political science \citep{Grimmer2009,Roberts2014,mueller2018reading}. One of the LDA extensions is the dynamic topic model (DTM) \citep{DTM}, which relaxes the assumption of LDA that documents are unordered. Instead, DTM assumes that documents are grouped into discrete time intervals (e.g., years) that exhibit different mixtures of topics, which allows topics to change over time -- both in terms of their prevalence in the corpus and in their word compositions. DTM has not seen extensive practical use with large volumes of data, potentially due to the limited scalability of the inference algorithm \citep{gropp2016scalable}. In political science, there have been very few applications \citep[e.g.][]{gurciullo2015complex,greene_exploring_2017}. Collections of political documents spanning long periods of time exhibit a problem known as concept drift \citep{gama2014survey}. Under `not strictly stationary' data generating processes, the underlying concept that is the target variable of our prediction model may be changing over time, thus affecting the predictive decision \citep[for a formal definition, see][]{webb2016characterizing}. In political science, \cite{lowe2011scaling} develop a measurement model to address the changing nature of the left-right ideological dimension. \cite{benoit2016crowd} present a general approach to data generation using crowdsourcing that can quickly react to changes of concepts, such as the appearance of immigration as a new dimension of party competition. A standard approach to dealing with concept drift in political science NLP applications has been to analyze the data from each time interval separately. For example, recent work using annual speech data estimates separate models for each year \citep[e.g.][]{HerzogBenoit2015,baturo2013life}. DTM is a drift-aware adaptive learning algorithm that adapts to the evolution of topics over time. Topic labeling is a key post-processing step of all probabilistic topic models.
As a general rule, labels should be relevant, understandable, provide high coverage within a topic, and be discriminative across topics. Early research focused on generating labels by hand from a set of top \emph{n} words in a topic distribution (so-called \emph{cardinality}) learned by a topic model \citep[e.g.][]{griffiths2004finding}. An alternative approach is to implement a supervised topic modeling approach that limits the topics to a predefined set with their word distributions provided \emph{a priori} \citep{mcauliffe2008supervised,ramage2009labeled}. The former approach is not scalable, carries a high cognitive load in forming the topic concept and its interpretation \citep{aletras2017labeling}, and also suffers from a potential bias of the human labeler \citep{lau2016sensitivity}, while the latter is unable to pick up topics unknown beforehand \citep{wood2017source}. Several automatic labeling approaches that utilize external, contextual information have been proposed in the literature. \citet{mei2007automatic} minimize the semantic distance between the topic model and the candidate label based on the phrases inside documents. \citet{lau2011automatic} utilize various ranking mechanisms of the top-\emph{n} words and candidate labels from Wikipedia articles containing these terms. \cite{bhatia2016automatic} use word embeddings to map topics and candidate labels derived from Wikipedia article titles, and then select topic labels based on cosine similarity and relative ranking measures. Deploying word embeddings pre-trained on a large corpus like Wikipedia for topic labeling of PubMed abstracts, as in \cite{bhatia2016automatic}, is a simple form of general domain knowledge transfer. More generally, the ability of a machine learning framework to transfer knowledge to new conditions is known as transfer learning \citep[for a survey and formal definition see][]{pan2010survey}.
\section{Unsupervised Topic Modeling with Transfer Topic Labeling} Our main idea is illustrated in Figure~\ref{fig:flowchart}. The dotted box on the right-hand side illustrates traditional unsupervised topic modeling, which stops with estimated latent topics that need manual labeling. In our approach, we use outside expert codebooks to extract topic labels and associated keywords, which we then use to automatically label the estimated latent topics. Retaining a human in the loop allows for adjustment of the labels for specific domains with sparse coverage in the source knowledge base. Hence, we use the term semi-automated topic labeling in this paper. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{flowchart-crop.pdf} \caption{Illustration of the application of transfer learning for semi-automated labeling of estimated topics.} \label{fig:flowchart} \end{figure} In the remainder of this section, we demonstrate the utility of this approach with speeches from the UK House of Commons over the time period from 1935 to 2014. We first explain how we have estimated latent, dynamic topics from the speeches. We then discuss how we have used the codebooks from the Comparative Agendas Project (CAP) \citep{bevan2014} as an existing knowledge base from which to transfer topic labels. \subsection{Estimating Dynamic Topics from House of Commons Speeches, 1935--2014} Our data consist of the complete record of debates from the UK House of Commons during the time period 1935--2014. All debates and information about speakers were downloaded from TheyWorkForYou,\footnote{\url{https://www.theyworkforyou.com/}} a transparency website that provides access to parliamentary records and information about MPs. All data were downloaded in XML format and further processed in Python. The full data set consists of about 4.3 million floor contributions with an average of 49,720 contributions per year (min=17,280, max=118,500, sd=17,596) and a total of 117,914 unique words.
Within each session, we combined each MP's contributions into a single text, excluding contributions that concern the rules of procedure or the business of the House, such as the reading of the parliamentary agenda or formal announcements. We also removed the traditional prayer at the beginning of each sitting and all contributions and announcements by the Speaker. As part of the pre-processing, we applied stemming, removed words that appeared fewer than 50 times and in fewer than 5 documents, and removed punctuation, numbers, symbols, stopwords, hyphens, single letters, and a custom list of high-frequency terms.\footnote{We used the fairly extensive MySQL stopword list, which includes more than 500 words. Our custom list includes the following words: hon, mr, member, members, bill, minister, prime, government(s), friend, year(s), gentleman, gentlemen.} The final data set from which we estimate topics includes 47,524 documents (i.e., an MP's concatenated speeches during a session) and 19,185 unique words. We used the dynamic topic model (DTM) by \cite{DTM} to estimate topics from the speech data. Like any unsupervised topic model, DTM requires setting the number of topics \emph{a priori}. We followed the standard in the literature and picked the number of topics based on semantic coherence and exclusivity \citep[cf.][]{Roberts2016}. Based on these two metrics, we selected the model with 22 topics.\footnote{Further details and references regarding model selection are provided in the supplementary materials, Section A.} \subsection{Extracting Topic Labels and Keywords from Expert Codebooks} We used coding instructions from the Comparative Agendas Project (CAP) \citep{bevan2014} as an external source from which to extract topic labels and associated keywords. We selected the CAP because we expected the majority of parliamentary speeches to be on topics related to public policy-making.
This attention to policy topics is the central interest of the policy agendas set of projects, which have been active since the late 1990s \citep[e.g.][]{baumgartner_comparative_2013}. What has been called the policy agendas code frame is a fuller articulation of the policy topics idea, with a larger number of major topic codes that aims at comprehensive coverage of any topic that is likely to appear. While our demonstration of transfer topic labeling is limited to the CAP codebooks, we note that our method can be easily extended to other codebooks as long as they include written coding instructions or vignettes. Further, as we will demonstrate below, our method for matching topic labels to estimated latent topics produces goodness-of-fit measures for each match, which allows one to evaluate how well the topics derived from a codebook capture the estimated latent dimensions. The codebook for the UK Policy Agendas Project includes 19 major topics with subtopics.\footnote{This codeframe for the UK is summarized in \cite{john_policy_2013} and on the UK project website (\url{http://www.policyagendas.org.uk/}).} For each subtopic, the CAP codebook provides written examples of what is being included in each category. For example, category ``1. Macroeconomics -- 100: General domestic macroeconomic issues'' is described as follows: \begin{quote} \emph{the government's economic plans, economic conditions and issues, economic growth and outlook, state of the economy, long-term economic needs, recessions, general economic policy, promote economic recovery and full employment, demographic changes, population trends, recession effects on regional and local economies, distribution of income, assuring an opportunity for employment to every person seeking work, standard of living.} \end{quote} Because the descriptions of CAP subtopics are relatively short, we combine all subtopics under a major topic label into a single document.
We then apply \emph{tf-idf} weighting to generate 19 weighted word lists (one for each major topic label), where the weight on each word reflects its importance to a topic label.\footnote{Before calculating \emph{tf-idf} weights, we applied the same pre-processing rules that we applied to the speech data to increase the similarity between the two vocabularies.} Table~\ref{tab:policy_agenda_topics} provides an overview of the 19 topics together with their ten highest ranked words. \begin{table}[tp] \centering \caption{Overview of CAP Topics} \label{tab:policy_agenda_topics} \begin{footnotesize} \begin{tabular}{p{0.25\textwidth}p{0.7\textwidth}} \toprule \textbf{Policy agenda topic} & \textbf{Top ten words based on \emph{tf-idf} weighting} \\ \midrule Macroeconomic Issues & tax, inflat, index, treasuri, fiscal, price, taxat, unemploy, bank, gold \\ Civil Rights & discrimin, asylum, immigr, equal, right, citizenship, minor, age, refuge, freedom \\ Health & healthcar, care, health, medic, drug, coverag, nurs, provid, alcohol, mental \\ Agriculture & agricultur, farm, anim, food, livestock, produc, crop, erad, fisheri, diseas \\ Labor and Employment & employ, labour, job, migrant, youth, worker, employe, workplac, work, train \\ Education and Culture & educ, student, school, art, vocat, higher, secondari, teacher, grant, learn \\ Environment & water, pollut, environment, wast, hazard, conserv, emiss, climat, municip, air \\ Energy & electr, gas, energi, coal, oil, power, natur, nuclear, fuel, gasolin \\ Transportation & highway, transport, rail, truck, bus, road, ship, aviat, speed, air \\ Law and Crime & crime, crimin, drug, justic, traffick, polic, juvenil, sentenc, court, offend \\ Social Welfare & benefit, elder, volunt, social, food, welfar, incom, contributori, meal, lunch \\ Community Development, Planning and Housing & hous, mortgag, urban, tenant, veteran, low, homeless, citi, rural, tenanc \\ Banking and Finance & small, bankruptci, copyright, busi, patent, 
consum, mortgag, tourism, sport, mutual \\ Defense & defenc, weapon, arm, intellig, militari, forc, reserv, veteran, armi, war \\ Space Science & scienc, space, radio, communic, satellit, tv, launch, telecommun, broadcast, research \\ Foreign Trade & trade, export, tariff, import, invest, exchang, duti, competit, u.k, restrict \\ International Affairs and Foreign Aid & european, soviet, east, u.n, africa, u.k, peac, polit, europ, treati \\ Government Operations & postal, legislatur, execut, minist, employe, elect, census, elector, offici, prime \\ Public Lands, Water Management & indigen, land, park, convey, histor, water, forest, monument, memori, reclam \\ \bottomrule \end{tabular} \end{footnotesize} \end{table} \subsection{Transfer Topic Labeling} We transfer topic labels from the CAP to the estimated latent topics through a pair-wise matching procedure that finds the most similar CAP topic word list for each latent dimension. For the CAP topics, the word lists are the \emph{tf-idf}-weighted word lists discussed above. For the dynamic topic model, we construct one word list for each of the 22 estimated latent topics.\footnote{Additional information on our implementation of transfer topic labeling is provided in the supplementary materials, Section B.} To identify the best matching topics, we use the Jaccard index, which is a widely used set-based similarity measure.\footnote{We also used an alternative approach using the ROUGE F1 metric, frequently used in the document summarization and machine translation literature. The results using Jaccard and ROUGE were identical.} For two sets $A$ and $B$, the Jaccard index is defined as \begin{equation}\label{eq:jaccard} J(A,B) = \frac{|A\cap B|}{|A\cup B|} \end{equation} where the numerator is the size of the intersection between $A$ and $B$, and the denominator is the size of the union of the two sets. The Jaccard index is bounded between 0 and 1, with higher numbers indicating greater overlap between two sets.
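The pair-wise matching procedure is straightforward to sketch in code. The word lists below are illustrative stand-ins (shortened in the style of the CAP and DTM word lists), not the actual \emph{tf-idf}-ranked vocabularies:

```python
# Sketch of transfer topic labeling: for each estimated topic's word
# list, pick the CAP topic word list with the highest Jaccard index.
# The word lists here are illustrative, not the paper's actual lists.

def jaccard(a, b):
    """Jaccard index J(A, B) = |A ∩ B| / |A ∪ B| of two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def transfer_labels(dtm_topics, cap_topics):
    """Assign each DTM topic the CAP label with the highest Jaccard index."""
    labels = {}
    for topic_id, words in dtm_topics.items():
        best = max(cap_topics, key=lambda lab: jaccard(words, cap_topics[lab]))
        labels[topic_id] = (best, jaccard(words, cap_topics[best]))
    return labels

cap_topics = {
    "Agriculture": ["agricultur", "farm", "anim", "food", "livestock"],
    "Health":      ["healthcar", "care", "health", "medic", "drug"],
}
dtm_topics = {1: ["price", "agricultur", "food", "milk", "farmer"]}
print(transfer_labels(dtm_topics, cap_topics))
# {1: ('Agriculture', 0.25)}
```

Keeping the winning Jaccard value alongside the label, as above, gives the per-match goodness-of-fit measure reported in the results table.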
We calculate the Jaccard index for each pair of word lists consisting of one CAP topic and one estimated DTM topic. Using the highest Jaccard value results in 19 unique matches where the CAP label is transferred to the estimated topic. As a validation exercise, we recruited a group of CAP experts to label the word lists for each topic according to the CAP categorization. The seventeen experts who participated in this exercise could each submit two label choices (most appropriate and second most appropriate). We assess the quality of expert labeling using Fleiss' kappa measure of inter-coder agreement. We also calculate the proportion of experts who agree with the automatically selected topic label as their first or second choice. We provide additional information on our expert coding validation exercise in the supplementary materials, Section C. Table~\ref{tab:labeled_topics} provides an overview of the 22 estimated dynamic topics together with their Jaccard index, the matched CAP topic label, the topic label selected by experts, the proportion of experts who selected the same topic label as the transfer-learning approach as their first or second choice, Fleiss' kappa inter-coder agreement, and the top 20 words from each DTM topic. Three CAP categories from Table~\ref{tab:policy_agenda_topics} have not been matched: Environment, Space Science, and Public Lands and Water Management. It should also be noted that the lists of words conform to common understandings of what the categories mean; for example, the agriculture list contains words such as milk and meat. \begin{landscape} \begin{table}[tbp] \begin{tiny} \begin{center} \caption{DTM topics with Matched Policy Agenda Topic Labels and Comparison to Expert Coding}\label{tab:labeled_topics} \begin{tabular}{cp{0.25\textwidth}p{0.25\textwidth}ccccp{0.58\textwidth}} \toprule \textbf{\#} & \textbf{Topic Label Selected by} & \textbf{Topic Label Selected by} & \multicolumn{2}{c}{\textbf{Prop.
Experts}} & \textbf{Jaccard} & \textbf{Fleiss'} & \textbf{Top 20 Words from Estimated Dynamic Topics} \\\cline{4-5} \textbf{} & \textbf{Transfer-Learning Approach} & \textbf{Experts} & \textbf{1st} & \textbf{2nd} & \textbf{Index} & \textbf{Kappa} & \\ \midrule 1 & Agriculture & Agriculture & 1.00 & 0 & 0.62 & 0.81 & price, agricultur, food, suppli, ask, ration, milk, water, farmer, ministri, market, industri, fisheri, consum, sugar, beef, meat, fish, rural, increas \\[0.2em] 2 & Labour and Employment & Labour and Employment & 0.94 & 0 & 0.47 & 0.54 & employ, industri, polic, men, labour, worker, union, work, unemploy, area, women, trade, law, wage, crime, home, court, factori, train, case \\[0.2em] 3 & International Affairs and Foreign Aid & International Affairs and Foreign Aid & 0.88 & 0.06 & 0.23 & 0.82 & hous, european, question, matter, eu, committe, order, union, communiti, discuss, statement, europ, made, treati, constitut, countri, debat, point, answer, make \\[0.2em] 4 & Defense & Defense & 0.88 & 0 & 0.42 & 0.61 & air, defenc, forc, ministri, civil, aviat, ireland, aircraft, aerodrom, servic, northern, broadcast, imperi, airway, afghanistan, televis, iraq, corpor, offic, fli \\[0.2em] 5 & Community Development, Planning and Housing Issues & Community Development, Planning and Housing Issues & 0.81 & 0.06 & 0.52 & 0.68 & local, hous, author, council, build, road, work, rent, charg, plan, home, region, area, counti, rate, communiti, land, london, peopl, develop \\[0.2em] 6 & Government Operations & Government Operations & 0.75 & 0.12 & 0.27 & 0.40 & scotland, scottish, state, vote, elect, elector, secretari, hous, parliament, commiss, regist, parti, assembl, system, ask, gallant, peopl, glasgow, awar, devolut \\[0.2em] 7 & Foreign Trade & Foreign Trade & 0.69 & 0.06 & 0.32 & 0.60 & trade, hous, question, committe, industri, board, matter, export, countri, import, duti, answer, discuss, refer, presid, agreement, made, film, hope, british \\[0.2em] 8 & Health 
& Health & 0.56 & 0.44 & 0.52 & 0.52 & school, educ, health, servic, care, author, hospit, evacu, nhs, children, patient, local, board, adopt, medic, peopl, teacher, area, univers, doctor \\[0.2em] 9 & Transportation & Transportation & 0.56 & 0.25 & 0.32 & 0.46 & busi, london, steel, product, industri, ship, war, suppli, constitu, british, ministri, transport, research, peopl, aircraft, work, rail, vessel, firm, factori \\[0.2em] 10 & Energy & Energy & 0.56 & 0.19 & 0.52 & 0.43 & agricultur, coal, industri, energi, land, farmer, board, oil, farm, miner, gas, subsidi, power, water, scheme, climat, british, committe, electr, carbon \\[0.2em] 11 & Labour and Employment & Labour and Employment & 0.50 & 0.12 & 0.13 & 0.54 & question, pension, peopl, sir, figur, work, benefit, answer, increas, million, inform, rate, refer, repli, report, cent, part, gallant, committe, matter \\[0.2em] 12 & Energy & Energy / Labour and Employment$^1$ & 0.38 & 0.19 & 0.37 & 0.48 & coal, industri, employ, unemploy, board, area, mine, job, peopl, fuel, develop, train, electr, miner, transport, region, men, work, east, north \\[0.2em] \hline 13 & Government Operations & Law, Crime, and Family Issues & 0.31 & 0.25 & 0.18 & 0.51 & peopl, home, ask, point, speaker, hous, constitu, case, offic, polic, order, general, agre, debat, secretari, prison, man, awar, post, public \\[0.2em] 14 & Social Welfare & Macroeconomics & 0.25 & 0.06 & 0.42 & 0.18 & secretari, state, tax, chancellor, peopl, benefit, pension, exchequ, cut, war, incom, social, hous, purchas, problem, profit, minist, compani, duti, govern \\[0.2em] 15 & Banking, Finance, and Domestic Commerce & Social Welfare & 0.06 & 0.06 & 0.23 & 0.30 & pension, nation, price, unemploy, industri, case, assist, insur, increas, busi, benefit, compani, british, peopl, age, offic, man, widow, board, allow \\[0.2em] 16 & Macroeconomics & Transportation & 0 & 0.25 & 0.32 & 0.46 & secretari, transport, state, railway, tax, road, industri, compani, price, 
bank, subsidi, commiss, vehicl, trade, nationalis, control, servic, chancellor, union, privat \\[0.2em] \hline 17 & Foreign Trade & International Affairs and Foreign Aid & 0 & 0 & 0.37 & 0.82 & countri, state, commonwealth, coloni, leagu, british, intern, unit, india, foreign, secretari, majesti, ask, syria, develop, german, south, peopl, rhodesia, world \\[0.2em] 18 & Social Welfare & Government Operations & 0 & 0 & 0.18 & 0.40 & amend, claus, point, committe, hous, learn, move, order, debat, case, matter, word, beg, line, act, deal, provis, make, legisl, law \\[0.2em] 19 & Social Welfare & Education & 0 & 0 & 0.27 & 0.53 & ask, secretari, state, school, educ, awar, offic, statement, war, armi, servic, make, teacher, children, men, admiralti, view, step, forc, releas \\[0.2em] 21 & International Affairs and Foreign Aid & Transportation & 0 & 0 & 0.18 & 0.46 & ask, wale, welsh, assembl, secretari, road, transport, war, awar, state, view, north, east, railway, learn, author, step, region, number, local \\[0.2em] 22 & Social Welfare & Government Operations & 0 & 0 & 0.27 & 0.40 & matter, question, case, sir, answer, sport, made, act, local, author, fund, inform, report, point, time, nation, person, concern, regul, servic \\[0.2em] 23 & Civil Rights, Minority Issues, and Civil Liberties & Government Operations & 0 & 0 & 0.23 & 0.40 & ireland, northern, countri, point, peopl, polic, war, hous, speech, great, time, parti, irish, speaker, debat, issu, order, opposit, state, agreement \\[0.2em] \bottomrule \end{tabular} \end{center} \vspace{-\baselineskip} \emph{Note}: Column ``Prop. Experts'' is the proportion of experts who selected the same topic as the transfer-learning approach with their first or second choice. \\ $^1$ Experts were tied between topic label ``Energy'' and ``Labour and Employment''. The Fleiss' Kappa reported in this row is the average of the kappas for each label. 
\end{tiny} \end{table} \end{landscape} A majority of experts agreed with the automatic approach on 12 topic labels. The clearest example is the topic of agriculture (\#1), where transfer labeling and all the experts identified farming- and agriculture-related terms. A further four topics show sufficiently large agreement between experts across two choices and automatic labeling (\#13 government operations and \#14 social welfare). The banking topic is labeled by 12\% of experts in total, but it also shows significant disagreement across experts, with Fleiss' kappa at 0.3. For macroeconomics (\#16), a majority of experts labeled it as transport, while 25\% of the experts agreed with the automatic labeling of this topic as macroeconomics as their second choice (kappa = 0.46). The remaining six entries in the table show complete disagreement between our automatic approach and the experts: not a single expert assigned the same label as the transfer-learning approach. These cases are difficult to explain, as they exhibit varied values for both the Jaccard index and Fleiss' kappa. The topics morph into concerns about representation and territorial identity, not in ways that are solely about these political topics, but are still connected to the policy issues that MPs talk about. With transportation the match is for regional policies in Wales and England, which is an amalgam of keywords on transport. Another crossover is social welfare, which combines with legislative procedures and reflects the extent to which MPs focus on social welfare in asking parliamentary questions. The importance of this topic is that it comes up four times with different word formations. The experts are possibly using the government operations label for catch-all procedural issues (e.g. in \#18 and \#22), or fitting a label to a topic that is not represented in CAP, like Northern Ireland (\#23). In the latter case, the algorithm arguably more correctly applies the label of civil rights and minority issues.
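For reference, the Fleiss' kappa values reported in the table can be computed from an item-by-category matrix of rating counts; a minimal sketch with made-up ratings (not the actual expert data), assuming every item is rated by the same number of coders:

```python
# Fleiss' kappa from an n_items x n_categories count matrix.
# Illustrative implementation; ratings below are invented.

def fleiss_kappa(ratings):
    n_items = len(ratings)
    n_raters = sum(ratings[0])          # raters per item (constant)
    total = n_items * n_raters
    # Observed agreement: per-item pairwise agreement, averaged over items.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement: sum of squared marginal category proportions.
    n_cats = len(ratings[0])
    p_e = sum(
        (sum(row[j] for row in ratings) / total) ** 2
        for j in range(n_cats)
    )
    return (p_bar - p_e) / (1 - p_e)
```

With two coders fully agreeing on two items split across two categories, `fleiss_kappa([[2, 0], [0, 2]])` is exactly 1.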
We provide additional validation results in the supplementary materials. \section{Conclusion} Treating text as data is an approach of increasing importance in political science. NLP techniques developed in the computing sciences are routinely added to methodological toolkits. Topic modeling is a favorite tool for document summarization. Political scientists often have to be creative in interpreting and labeling estimated topics; yet such labeling is also often difficult to replicate -- replicability being a sine qua non of modern political science. To address the deficiency of current labeling and to better accommodate changes over time, we present a new method for topic labeling. Our approach provides an automatic labeling method that transfers the wealth of substantive knowledge accumulated in political science into the labeling of topic models. Our transfer labeling approach is also fully transparent and replicable, and allows human expertise to be brought to bear on difficult cases. Hence we call it a semi-automated transfer labeling approach with a human in the loop. The method can be extended to party manifestos, open-ended survey questions, social media data, and legal documents, and indeed to all research domains where topic models have made advances in recent years. \clearpage \singlespacing \bibliographystyle{apsr}
\section{Introduction} When uncertainty is described by probabilities, decision making is usually done by maximising expected utility. Except in degenerate cases, this leads to a unique optimal decision. If, however, the probability measure is only partially specified---for example by lower and upper bounds on the probabilities of specific events---this method no longer works. Essentially, the problem is that two different probability measures that are both compatible with the given bounds may lead to different optimal decisions. In this context, several generalisations of maximising expected utility have been proposed; see~\cite{troffaes2007} for a nice overview. A common feature of many such generalisations is that they yield \emph{set-valued choices}: when presented with a set of options, they generally return a subset of them. If this turns out to be a singleton, then we have a unique optimal decision, as before. If, however, it contains multiple options, this means that they are incomparable and that our uncertainty model does not allow us to choose between them. Obtaining a single decision then requires a more informative uncertainty model, or perhaps a secondary decision criterion, as the information present in the uncertainty model does not allow us to single out an optimal option. Set-valued choice is also a typical feature of decision criteria based on other uncertainty models that generalise the probabilistic ones to allow for imprecision and indecision, such as lower previsions and sets of desirable gambles. \emph{Choice functions} provide an elegant unifying mathematical framework for studying such set-valued choice. They map option sets to option sets: for any given set of options, they return the corresponding set-valued choice. Hence, when working with choice functions, it is immaterial whether there is some underlying decision criterion.
The primitive objects of this framework are simply the set-valued choices themselves, and the choice function that represents all these choices serves as an uncertainty model in and by itself. A major advantage of working with choice functions is that they allow us to impose axioms on choices, aimed at characterising what it means for choices to be rational and internally consistent; see for example the seminal work by Seidenfeld et al.~\cite{seidenfeld2010}. Here, we undertake a similar mission, yet approach it from a different angle. Rather than think of choice in an intuitive manner, we provide it with a concrete interpretation in terms of attitudes towards gambling, borrowing ideas from the theory of sets of desirable gambles \cite{couso2011:desirable,walley2000,cooman2010,cooman2011b}. From this interpretation alone, and nothing more, we develop a theory of coherent choice that includes a full set of axioms, a representation in terms of sets of desirable gambles, and a natural extension theorem. \iftoggle{arxiv}{In order to facilitate the reading, proofs and intermediate results have been relegated to the Appendix.} {Due to length constraints, proofs have been relegated to the appendix of an extended arXiv version of this contribution~\cite{extended}.} \section{Choice Functions} A choice function $\choicefun$ is a set-valued operator on sets of options. In particular, for any set of options $\optset$, the corresponding value of $\choicefun$ is a subset $\choicefun(\optset)$ of $\optset$. The options themselves are typically actions amongst which a subject wishes to choose. As is customary in decision theory, every action has a corresponding reward that depends on the state of a variable~$X$, about which the subject is typically uncertain. Hence, the reward is uncertain too. The purpose of a choice function is to represent our subject's choices between such uncertain rewards. Let us make this more concrete.
First of all, the variable~$X$ takes values~$x$ in some set of states~$\mathcal{X}$. The reward that corresponds to a given option is then a function $\opt$ on $\mathcal{X}$. We will assume that this reward can be expressed in terms of a real-valued linear utility scale, allowing us to identify every option with a real-valued function on $\mathcal{X}$.\footnote{A more general approach, which takes options to be elements of an arbitrary vector space, encompasses the horse lottery approach, and was explored by Van Camp \cite{2017vancamp:phdthesis}. Our results here can be easily extended to this more general framework.} We take these functions to be bounded and call them \emph{gambles}. We use $\mathcal{L}$ to denote the set of all such gambles and also let \begin{equation*} \opts_{\succ0}\coloneqq\cset{\opt\in\mathcal{L}}{\opt\geq0\text{ and }\opt\neq0} \text{ and } \opts_{\preceq0}\coloneqq\cset{\opt\in\mathcal{L}}{\opt\leq0}. \end{equation*} Option sets can now be identified with subsets of $\mathcal{L}$, which we call \emph{gamble sets}. We restrict our attention here to \emph{finite} gamble sets and will use $\mathcal{Q}$ to denote the set of all such finite subsets of $\mathcal{L}$, including the empty set. \begin{definition}[Choice function]\label{def:choicefunction} A \emph{choice function} $\choicefun$ is a map from $\mathcal{Q}$ to $\mathcal{Q}$ such that $\choicefun(\optset)\subseteq\optset$ for every $\optset\in\mathcal{Q}$. \end{definition} Gambles in $\optset$ that do not belong to $\choicefun(\optset)$ are said to be \emph{rejected}. This leads to an alternative representation in terms of so-called rejection functions. \begin{definition}[Rejection function]\label{def:rejectionfunction} The \emph{rejection function} $\rejectfun[\choicefun]$ corresponding to a choice function $\choicefun$ is a map from $\mathcal{Q}$ to $\mathcal{Q}$, defined by $\rejectfun[\choicefun](\optset)\coloneqq\optset\setminus\choicefun(\optset)$ for all $\optset\in\mathcal{Q}$. 
\end{definition} \noindent Since a choice function is completely determined by its rejection function, any interpretation for rejection functions automatically implies an interpretation for choice functions. This allows us to focus on the former. Our interpretation for rejection functions now goes as follows. Consider a subject whose uncertainty about $X$ is represented by a rejection function $\rejectfun[\choicefun]$, or equivalently, by a choice function $\choicefun$. Then for a given gamble set $\optset\in\mathcal{Q}$, the statement that a gamble $\opt\in\optset$ is rejected from $\optset$---that is, that $\opt\in\rejectfun[\choicefun](\optset)$---is taken to mean that \emph{there is at least one gamble $\altopt$ in $\optset$ that our subject strictly prefers over $\opt$}. This interpretation is of course still meaningless, because we have not yet explained the meaning of strict preference. Fortunately, that problem has already been solved elsewhere: strict preference between elements of $\mathcal{L}$ has an elegant interpretation in terms of desirability~\cite{walley2000,quaeghebeur2015:statement}, and it is this interpretation that we intend to borrow here. To allow us to do so, we first provide a brief introduction to the theory of sets of desirable gambles. \section{Sets of Desirable Gambles}\label{sec:SDGs} A gamble $\opt\in\mathcal{L}$ is said to be \emph{desirable} if our subject strictly prefers it over the zero gamble, meaning that rather than not gamble at all, she strictly prefers to commit to the gamble where, after the true value $x$ of the uncertain variable $X$ has been determined, she will receive the (possibly negative) reward $\opt(x)$. A \emph{set of desirable gambles} $\desirset$ is then a subset of $\mathcal{L}$, whose interpretation will be that it consists of gambles $\opt\in\mathcal{L}$ that our subject considers desirable. The set of all sets of desirable gambles is denoted by $\mathbf{D}$. 
In order for a set of desirable gambles to represent a rational subject's beliefs, it should satisfy a number of rationality, or \emph{coherence}, criteria. \begin{definition} A set of desirable gambles $\desirset\in\mathbf{D}$ is called \emph{coherent} if it satisfies the following axioms \emph{\cite{couso2011:desirable,cooman2010,cooman2011b,quaeghebeur2015:statement}}: \begin{enumerate}[label=\textup{D}$_{\arabic*}$.,ref=\textup{D}$_{\arabic*}$,topsep=2pt,leftmargin=*] \item $0\notin\desirset$;\label{ax:desirs:nozero} \item\label{ax:desirs:pos} $\opts_{\succ0}\subseteq\desirset$; \item\label{ax:desirs:cone} if $\opt,\altopt\in\desirset$, $\lambda,\mu\geq0$ and $\lambda+\mu>0$, then $\lambda\opt+\mu\altopt\in\desirset$. \end{enumerate} We denote the set of all coherent sets of desirable gambles by $\overline{\desirsets}$. \end{definition} \noindent Axioms~\ref{ax:desirs:nozero} and~\ref{ax:desirs:pos} follow immediately from the meaning of desirability: zero cannot be strictly preferred to itself, and any gamble that is never negative but sometimes positive should be strictly preferred to the zero gamble. Axiom~\ref{ax:desirs:cone} is implied by the assumed linearity of our utility scale. Every coherent set of desirable gambles $\desirset\in\overline{\desirsets}$ induces a binary preference order $\succ_{\desirset}$---a strict vector ordering---on $\mathcal{L}$, defined by $ \opt\succ_{\desirset}\altopt\Leftrightarrow\opt-\altopt\in\desirset$, for all $\opt,\altopt\in\mathcal{L}$. The intuition behind this definition is that a subject strictly prefers the uncertain reward $\opt$ over $\altopt$ if she strictly prefers trading $\altopt$ for $\opt$ over not trading at all, or equivalently, if she strictly prefers the net uncertain reward $\opt-\altopt$ over the zero gamble. The preference order $\succ_{\desirset}$ fully characterises $\desirset$: one can easily see that $\opt\in\desirset$ if and only if $\opt\succ_{\desirset}0$. 
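To make the coherence axioms concrete, here is a brief check (our own illustration, not part of the original exposition) that the \emph{vacuous} set $\desirset_{\mathrm{v}}\coloneqq\opts_{\succ0}$ is coherent, and in fact the least informative element of $\overline{\desirsets}$:

```latex
% Our illustration: the vacuous set D_v := L_{>0} is coherent.
\begin{itemize}
\item \ref{ax:desirs:nozero}: $0\notin\opts_{\succ0}$, since the condition $\opt\neq0$ fails for $\opt=0$.
\item \ref{ax:desirs:pos}: $\opts_{\succ0}\subseteq\desirset_{\mathrm{v}}$ holds trivially, by definition.
\item \ref{ax:desirs:cone}: consider $\opt,\altopt\in\opts_{\succ0}$ and $\lambda,\mu\geq0$ with
  $\lambda+\mu>0$; assume $\lambda>0$ without loss of generality. Then
  $\lambda\opt+\mu\altopt\geq0$, and for any $x$ with $\opt(x)>0$ we find
  $(\lambda\opt+\mu\altopt)(x)\geq\lambda\opt(x)>0$, so $\lambda\opt+\mu\altopt\in\opts_{\succ0}$.
\end{itemize}
```

Since Axiom~\ref{ax:desirs:pos} forces $\opts_{\succ0}\subseteq\desirset$ for every coherent $\desirset$, the vacuous set is the smallest, and hence most conservative, coherent model.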
Hence, sets of desirable gambles are completely determined by binary strict preferences between gambles. \section{Sets of Desirable Gamble Sets} Let us now go back to our interpretation for choice functions, which is that a gamble $\opt$ in $\optset$ is rejected from $\optset$ if and only if there is some gamble $\altopt$ in $\optset$ that our subject strictly prefers over $\opt$. We will from now on interpret this preference in terms of desirability: we take it to mean that $\altopt-\opt$ is desirable. In this way, we arrive at the following interpretation for a choice function $\choicefun$. Consider any $\optset\in\mathcal{Q}$ and $\opt\in\optset$, then \begin{equation}\label{eq:choiceintermsofdesir} \opt\notin\choicefun(\optset) \Leftrightarrow\opt\in\rejectfun[\choicefun](\optset) \Leftrightarrow(\exists\altopt\in\optset)\,\altopt-\opt \text{~is desirable.} \end{equation} In other words, if we let $\optset-\set{\opt}\coloneqq\cset{\altopt-\opt}{\altopt\in\optset}$, then according to our interpretation, the statement that $\opt$ is rejected from $\optset$ is taken to mean that $\optset-\set{\opt}$ contains at least one desirable gamble. A crucial observation here is that this interpretation does not require our subject to specify a set of desirable gambles. Instead, all that is needed is for her to specify those gamble sets $\optset\in\mathcal{Q}$ that to her contain at least one desirable gamble. We call such gamble sets \emph{desirable gamble sets} and collect them in a \emph{set of desirable gamble sets} $\rejectset\subseteq\mathcal{Q}$.
As can be seen from Equation~\eqref{eq:choiceintermsofdesir}, such a set of desirable gamble sets $\rejectset$ completely determines a choice function $\choicefun$ and its rejection function $\rejectfun[\choicefun]$: \begin{equation*}\label{eq:choiceintermsofK} \opt\notin\choicefun(\optset) \Leftrightarrow\opt\in\rejectfun[\choicefun](\optset) \Leftrightarrow\optset-\set{\opt}\in\rejectset, \text{ for all $\optset\in\mathcal{Q}$ and $\opt\in\optset$}. \end{equation*} The study of choice functions can therefore be reduced to the study of sets of desirable gamble sets. We will from now on work directly with the latter. We will use the collective term \emph{choice models} for choice functions, rejection functions, and sets of desirable gamble sets. Let $\mathbf{K}$ denote the set of all sets of desirable gamble sets $\rejectset\subseteq\mathcal{Q}$, and consider any such~$\rejectset$. The first question to address is when to call $\rejectset$ \emph{coherent}: which properties should we impose on a set of desirable gamble sets in order for it to reflect a rational subject's beliefs? We propose the following axiomatisation, using $(\lambda,\mu)>0$ as a shorthand notation for `$\lambda\geq0$, $\mu\geq0$ and $\lambda+\mu>0$'. 
\begin{definition}[Coherence] A set of desirable gamble sets $\rejectset\subseteq\mathcal{Q}$ is called \emph{coherent} if it satisfies the following axioms: \begin{enumerate}[label=\textup{K}$_{\arabic*}$.,ref=\textup{K}$_{\arabic*}$,leftmargin=*,topsep=2pt,start=0] \item\label{ax:rejects:nonempty} $\emptyset\notin\rejectset$; \item\label{ax:rejects:removezero} $\optset\in\rejectset\Rightarrow\optset\setminus\set{0}\in\rejectset$, for all $\optset\in\mathcal{Q}$; \item\label{ax:rejects:pos} $\set{\opt}\in\rejectset$, for all $\opt\in\opts_{\succ0}$; \item\label{ax:rejects:cone} if $\optset[1],\optset[2]\in\rejectset$ and if, for all $\opt\in\optset[1]$ and $\altopt\in\optset[2]$, $(\lambda_{\opt,\altopt},\mu_{\opt,\altopt})>0$, then \begin{equation*} \cset{\lambda_{\opt,\altopt}\opt+\mu_{\opt,\altopt}\altopt}{\opt\in\optset[1],\altopt\in\optset[2]} \in\rejectset; \end{equation*} \item\label{ax:rejects:mono}$\optset[1]\in\rejectset$ and $\optset[1]\subseteq\optset[2]\Rightarrow\optset[2]\in\rejectset$, for all $\optset[1],\optset[2]\in\mathcal{Q}$. \end{enumerate} We denote the set of all coherent sets of desirable gamble sets by $\overline{\rejectsets}$. \end{definition} Since a desirable gamble set is by definition a set of gambles that contains at least one desirable gamble, Axioms~\ref{ax:rejects:nonempty} and~\ref{ax:rejects:mono} are immediate. The other three axioms follow from the principles of desirability that also lie at the basis of Axioms~\ref{ax:desirs:nozero}--\ref{ax:desirs:cone}: the zero gamble is not desirable, the elements of $\opts_{\succ0}$ are all desirable, and any finite positive linear combination of desirable gambles is again desirable. Axioms~\ref{ax:rejects:removezero} and~\ref{ax:rejects:pos} follow naturally from the first two of these principles. The argument for Axiom~\ref{ax:rejects:cone} is more subtle; it goes as follows. 
Since $\optset[1]$ and $\optset[2]$ are two desirable gamble sets, there must be at least one desirable gamble $\opt\in\optset[1]$ and one desirable gamble $\altopt\in\optset[2]$. Since for these two gambles, the positive linear combination $\lambda_{\opt,\altopt}\opt+\mu_{\opt,\altopt}\altopt$ is again desirable, we know that at least one of the elements of $\cset{\lambda_{\opt,\altopt}\opt+\mu_{\opt,\altopt}\altopt}{\opt\in\optset[1],\altopt\in\optset[2]}$ is a desirable gamble. Hence, it must be a desirable gamble set. \section{The Binary Case}\label{sec:binary} Because we interpret them in terms of desirability, one might be inclined to think that sets of desirable gamble sets are simply an alternative representation for sets of desirable gambles. However, this is not the case: we will see that sets of desirable gamble sets constitute a much more general uncertainty framework than sets of desirable gambles. What lies behind this added generality is that it need not be known which gambles are actually desirable. For example, within the framework of sets of desirable gamble sets, it is possible to express the belief that at least one of the gambles $\opt$ or $\altopt$ is desirable while remaining undecided about which of them actually is; in order to express this belief, it suffices to state that $\set{\opt,\altopt}\in\rejectset$. This is impossible within the framework of sets of desirable gambles. Any set of desirable gamble sets $\rejectset\in\mathbf{K}$ determines a unique set of desirable gambles based on its binary choices only, given by \begin{equation*} \desirset[\rejectset]\coloneqq\cset{\opt\in\mathcal{L}}{\set{\opt}\in\rejectset}. \end{equation*} That choice models typically represent more than just binary choice is reflected in the fact that different $\rejectset$ can have the same $\desirset[\rejectset]$. 
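As a concrete illustration of this added generality (our own example, in the spirit of examples from the choice-function literature), consider a coin that is known to be biased, but in an unknown direction:

```latex
% Our illustration: a belief state no single set of desirable gambles captures.
Let $\mathcal{X}=\set{\mathrm{H},\mathrm{T}}$ and let
$\opt\coloneqq\mathbb{I}_{\mathrm{H}}-\mathbb{I}_{\mathrm{T}}$ (with $\mathbb{I}$
the indicator) be the gamble that wins one utile on heads and loses one on tails.
A subject who is certain that the coin is biased---but not in which
direction---may assess $\set{\opt,-\opt}\in\rejectset$: whichever way the bias
goes, one of the two gambles is desirable. If she remains undecided about which
one, then $\set{\opt}\notin\rejectset$ and $\set{-\opt}\notin\rejectset$, so
neither $\opt$ nor $-\opt$ belongs to $\desirset[\rejectset]$, and $\rejectset$
carries strictly more information than its binary part $\desirset[\rejectset]$.
```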
Nevertheless, there are sets of desirable gamble sets $\rejectset\in\mathbf{K}$ that \emph{are} completely characterised by a set of desirable gambles, in the sense that there is a (necessarily unique) set of desirable gambles $\desirset\in\mathbf{D}$ such that $\rejectset=\rejectset[\desirset]$, with \begin{equation\iftoggle{arxiv}{}{*}}\label{eq:choicefromdesir} \rejectset[\desirset] \coloneqq\cset{\optset\in\mathcal{Q}}{\optset\cap\desirset\neq\emptyset}. \end{equation\iftoggle{arxiv}{}{*}} It follows from the discussion at the end of Section~\ref{sec:SDGs} that such sets of desirable gamble sets are completely determined by binary preferences between gambles. We therefore call them, and their corresponding choice functions, \emph{binary}. For any such binary set of desirable gamble sets $\rejectset$, the unique set of desirable gambles $\desirset\in\mathbf{D}$ such that $\rejectset=\rejectset[\desirset]$ is given by $\desirset[\rejectset]$. \begin{proposition}\label{prop:binaryiff} Consider any set of desirable gamble sets $\rejectset\in\mathbf{K}$. Then $\rejectset$ is binary if and only if $\rejectset[{\desirset[\rejectset]}]=\rejectset$. \end{proposition} The coherence of a binary set of desirable gamble sets is completely determined by the coherence of its corresponding set of desirable gambles. \begin{proposition}\label{prop:coherence:for:binary} Consider any binary set of desirable gamble sets $\rejectset\in\mathbf{K}$ and let $\desirset[\rejectset]\in\mathbf{D}$ be its corresponding set of desirable gambles. Then $\rejectset$ is coherent if and only if $\desirset[\rejectset]$~is. 
\end{proposition} \section{Representation in Terms of Sets of Desirable Gambles}\label{sec:representation} That there are sets of desirable gamble sets that are completely determined by a set of desirable gambles is nice, but such binary choice models are typically \emph{not} what we are interested in here, because then we could just as well use sets of desirable gambles to represent choice. It is the non-binary coherent choice models that we have in our sights here. But it turns out that our axioms lead to a representation result that allows us to still use sets of desirable gambles, or rather, sets of them, to completely characterise \emph{any} coherent choice model. \begin{theorem}[Representation]\label{theo:rejectsets:representation} Every coherent set of desirable gamble sets $\rejectset\in\overline{\rejectsets}$ is dominated by at least one binary set of desirable gamble sets: $\overline{\desirsets}\group{\rejectset}\coloneqq\cset{\desirset\in\overline{\desirsets}}{\rejectset\subseteq\rejectset[\desirset]}\neq\emptyset$. Moreover, $\rejectset=\bigcap\cset{\rejectset[\desirset]}{\desirset\in\overline{\desirsets}\group{\rejectset}}$. \end{theorem} \noindent This powerful representation result allows us to incorporate a number of other axiomatisations~\cite{2017vancamp:phdthesis} as special cases in a straightforward manner, because the binary models satisfy the required axioms, and these axioms are preserved under taking arbitrary non-empty intersections. \section{Natural Extension}\label{sec:natex} In many practical situations, a subject will typically not specify a full-fledged coherent set of desirable gamble sets, but will only provide some partial \emph{assessment} $\mathcal{A}\subseteq\mathcal{Q}$, consisting of a number of gamble sets for which she is comfortable about assessing that they contain at least one desirable gamble. 
We now want to extend this assessment~$\mathcal{A}$ to a coherent set of desirable gamble sets in a manner that is as conservative---or uninformative---as possible. This is the essence of \emph{conservative inference}. We say that a set of desirable gamble sets $\rejectset[1]$ is less informative than (or rather, at most as informative as) a set of desirable gamble sets $\rejectset[2]$, when \mbox{$\rejectset[1]\subseteq\rejectset[2]$}: a subject whose beliefs are represented by $\rejectset[2]$ has more (or rather, at least as many) desirable gamble sets---sets of gambles that definitely contain a desirable gamble---than a subject with beliefs represented by $\rejectset[1]$. The resulting partially ordered set $(\mathbf{K},\subseteq)$ is a complete lattice with intersection as infimum and union as supremum. The following theorem, whose proof is trivial, identifies an interesting substructure. \begin{theorem}\label{theo:conservative:inference} Let $\set{\rejectset[i]}_{i\in I}$ be an arbitrary non-empty family of sets of desirable gamble sets, with intersection $\rejectset\coloneqq\bigcap_{i\in I}\rejectset[i]$. If $\rejectset[i]$ is coherent for all $i\in I$, then so is $\rejectset$. This implies that $(\overline{\rejectsets},\subseteq)$ is a complete meet-semilattice. \end{theorem} \noindent This result is important, as it allows us to extend a partially specified set of desirable gamble sets to the most conservative coherent one that includes it. This leads to the conservative inference procedure we will call natural extension. \begin{definition}[Consistency and natural extension] For any assessment $\mathcal{A}\subseteq\mathcal{Q}$, let $\overline{\rejectsets}(\mathcal{A})\coloneqq\cset{\rejectset\in\overline{\rejectsets}}{\mathcal{A}\subseteq\rejectset}$.
We call the assessment~$\mathcal{A}$ \emph{consistent} if\/ $\overline{\rejectsets}(\mathcal{A})\neq\emptyset$, and we then call\/ $\EX(\mathcal{A})\coloneqq\bigcap\overline{\rejectsets}(\mathcal{A})$ the \emph{natural extension} of $\mathcal{A}$. \end{definition} \noindent In other words: an assessment $\mathcal{A}$ is consistent if it can be extended to some coherent rejection function, and then its natural extension $\EX(\mathcal{A})$ is the least informative such coherent rejection function. Our final result provides a more `constructive' expression for this natural extension and a simpler criterion for consistency. In order to state it, we need to introduce the set $\opts_{\succ0}^{\mathrm{s}}\coloneqq\cset{\set{\opt}}{\opt\in\opts_{\succ0}}$ and two operators on---transformations of---$\mathbf{K}$. The first is denoted by $\RS$, and defined by \begin{equation*} \RS\group{\rejectset}\coloneqq\cset{\optset\in\mathcal{Q}}{(\exists\altoptset\in\rejectset)\altoptset\setminus\opts_{\preceq0}\subseteq\optset} \text{ for all $\rejectset\in\mathbf{K}$}, \end{equation*} so $\RS\group{\rejectset}$ contains all gamble sets $\optset$ in $\rejectset$, all versions of $\optset$ with some of their non-positive options removed, and all supersets of such sets. The second is denoted by $\setposi$, and defined for all $\rejectset\in\mathbf{K}$ by \begin{align*} \setposi\group{\rejectset}\coloneqq\bigg\{ \bigg\{ \sum_{k=1}^n\lambda_{k}^{\opt[1:n]}\opt[k] \colon \opt[1:n]\in\times_{k=1}^n\optset[k] \bigg\} \colon &n\in\mathbb{N},(\optset[1],\dots,\optset[n])\in\rejectset^n,\\[-11pt] &\big(\forall\opt[1:n]\in\times_{k=1}^n\optset[k]\big)\,\lambda_{1:n}^{\opt[1:n]}>0 \bigg\}, \end{align*} where we used the notations $\opt[1:n]$ and $\lambda_{1:n}^{\opt[1:n]}$ for $n$-tuples of options $\opt[k]$ and real numbers $\lambda_{k}^{\opt[1:n]}$, $k\in\set{1,\dots,n}$, so $\opt[1:n]\in\mathcal{L}^{n}$ and $\lambda_{1:n}^{\opt[1:n]}\in\mathbb{R}^{n}$. 
We also used $\lambda_{1:n}^{\opt[1:n]}>0$ as a shorthand for `$\lambda_k^{\opt[1:n]}\geq0$ for all $k\in\set{1,\dots,n}$ and $\sum_{k=1}^n\lambda_k^{\opt[1:n]}>0$'. \begin{theorem}[Natural extension]\label{theo:rejectsets:consistency:and:natex} Consider any assessment $\mathcal{A}\subseteq\mathcal{Q}$. Then $\mathcal{A}$ is consistent if and only if\/ $\emptyset\notin\mathcal{A}$ and\/ $\set{0}\notin\setposi\group{\opts_{\succ0}^{\mathrm{s}}\cup\mathcal{A}}$. Moreover, if $\mathcal{A}$ is consistent, then $\EX(\mathcal{A})=\RS\group{\setposi\group{\opts_{\succ0}^{\mathrm{s}}\cup\mathcal{A}}}$. \end{theorem} \section{Conclusion} Our representation result shows that binary choice \emph{is} capable of representing general coherent choice functions, provided we extend its language with a `disjunction' of desirability statements---as is implicit in our interpretation---next to the `conjunction' and `negation' that are already implicit in the language of sets of desirable gambles---see~\cite{quaeghebeur2015:statement} for a clear exposition of the latter claim. In addition, we have found recently that by adding a convexity axiom, and working with more general vector spaces of options to allow for the incorporation of horse lotteries, our interpretation and corresponding axiomatisation allow for a representation in terms of lexicographic sets of desirable gambles \cite{2017vancamp:phdthesis}, and therefore encompass the one by Seidenfeld et al.~\cite{seidenfeld2010} (without archimedeanity). We will report on these findings in more detail elsewhere.
Future work will address (i) dealing with the consequences of merging our accept-reject statement framework \cite{quaeghebeur2015:statement} with the choice function approach to decision making; (ii) discussing the implications of our axiomatisation and representation for conditioning, independence, and indifference (exchangeability); and (iii) expanding our natural extension results to deal with the computational and algorithmic aspects of conservative inference with coherent choice functions. \section*{Acknowledgements} This work owes a large intellectual debt to Teddy Seidenfeld, who introduced us to the topic of choice functions. His insistence that we ought to pay more attention to non-binary choice if we wanted to take imprecise probabilities seriously, is what eventually led to this work. The discussion in Arthur Van Camp's PhD~thesis \cite{2017vancamp:phdthesis} was the direct inspiration for our work here, and we would like to thank Arthur for providing a pair of strong shoulders to stand on. As with most of our joint work, there is no telling, after a while, which of us two had what idea, or did what, exactly. We have both contributed equally to this paper. But since a paper must have a first author, we decided it should be the one who took the first significant steps: Jasper, in this case.
\section{Acknowledgments} We thank Hardik Sharma, Ecclesia Morain, Michael Brzozowski, Hajar Falahati, and Philip J. Wolfe for insightful discussions and comments that greatly improved the manuscript. Amir Yazdanbakhsh is partly supported by a Microsoft Research PhD Fellowship. This work was in part supported by NSF awards CNS\#1703812, ECCS\#1609823, CCF\#1553192, Air Force Office of Scientific Research (AFOSR) Young Investigator Program (YIP) award \#FA9550-17-1-0274, NSF-1705047, Samsung Electronics, and gifts from Google, Microsoft, Xilinx, and Qualcomm. \section{Architecture Design for GANAX} \label{sec:arch} The execution flow of the generative model in GANs (\emph{i.e.}, zero-insertion and a variable number of operations per convolution window) poses unique architectural challenges that traditional convolution accelerators~\cite{tetris:asplos:2017,eyeriss:jssc:2017,eyeriss:isca:2016,dnnweaver:micro:2016,diannao:isca:2014} cannot adequately address. There are two fundamental architectural challenges for GAN acceleration: \begin{figure} \centering \includegraphics[width=0.47\textwidth]{top-level.pdf} \caption{Top-level block diagram of \ganax architecture.} \label{fig:tlganax} \vspace{-0.8cm} \end{figure} \niparagraph{Resource underutilization.} The first challenge arises from the variable number of operations per convolution window in the transposed convolution operation. Most recent accelerators~\cite{eyeriss:jssc:2017,tetris:asplos:2017,dnnweaver:micro:2016,diannao:isca:2014} mainly target the conventional convolution operation, and their processing engines generally work in a SIMD manner. The convolution windows in the conventional convolution operation follow a regular pattern, and the number of operations for each of these windows is invariant. Given these algorithmic characteristics, a SIMD execution model is efficient and practical for conventional convolutions.
However, since the convolution windows in transposed convolution operations exhibit a variable number of operations, a SIMD execution model is not an adequate design choice for these operations. While a SIMD model exploits the data parallelism among convolution windows with the same number of operations, it is inefficient for windows with different numbers of operations. That is, if one uses a convolution accelerator with a SIMD execution model for transposed convolution operations, the processing engines that perform the operations for a convolution window with fewer operations have to remain idle until the operations for the other convolution windows finish. To address this challenge, we introduce a unified MIMD-SIMD architecture that accelerates the transposed convolution operation without compromising the efficiency of conventional convolution accelerators for convolution operations. This unified MIMD-SIMD architecture maximizes the utilization of accelerator compute resources while exploiting the parallelism among convolution windows with different numbers of operations. \niparagraph{Inconsequential computations.} The second challenge emanates from the large number of zeros inserted in the multidimensional input feature map for transposed convolution operations. Performing MAC operations on these zeros is inconsequential and, if not skipped, wastes accelerator resources (see Figure~\ref{fig:zero-data}). We address this challenge by leveraging the observation that even though the data access patterns in transposed convolution operations are irregular, they are still structured. Furthermore, these structured patterns are repetitive across the execution of transposed convolution operations. Building upon these observations, the \ganax architecture decouples operand access and execution.
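The variable per-window work can be made concrete with a small sketch (our own illustration; the input size, stride, and filter size below are hypothetical): after zero-insertion for a stride-2 transposed convolution, the number of useful multiply-adds differs from one $3\times3$ window to the next, which is precisely what leaves SIMD lanes idle.

```python
import numpy as np

def zero_insert(x, stride=2):
    """Insert (stride - 1) zeros between the elements of a 2D input,
    as done before the convolution step of a transposed convolution."""
    n = x.shape[0]
    out = np.zeros((stride * (n - 1) + 1,) * 2, dtype=x.dtype)
    out[::stride, ::stride] = x
    return out

x = np.arange(1, 10, dtype=float).reshape(3, 3)  # hypothetical 3x3 input
xz = np.pad(zero_insert(x, stride=2), 1)         # zero-insert, then pad by 1
k = 3                                            # 3x3 filter
# Count the useful (nonzero-input) MACs in every convolution window.
ops = [int(np.count_nonzero(xz[i:i + k, j:j + k]))
       for i in range(xz.shape[0] - k + 1)
       for j in range(xz.shape[1] - k + 1)]
print(sorted(set(ops)))  # distinct per-window op counts: [1, 2, 4]
```

A SIMD array sized for the busiest window idles on the cheaper ones; in this sketch the worst-case window performs 4$\times$ the work of the lightest one.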
Each processing engine in this architecture contains a simple access engine that repetitively generates the addresses for operand accesses without interrupting the execute engine. In the next sections, we examine these architectural challenges in detail for GAN acceleration and expound the proposed microarchitectural solutions. \subsection{Unified MIMD-SIMD Architecture} In order to mitigate the resource underutilization, we devise a unified MIMD-SIMD architecture that reaps the benefits of the SIMD and MIMD execution models at the same time. That is, while our architecture executes the operations for convolution windows with distinct computation patterns in a MIMD manner, it performs the operations of the convolution windows with the same computation pattern in a SIMD manner. Figure~\ref{fig:tlganax} illustrates the high-level diagram of the \ganax architecture, which is comprised of a set of identical processing engines (PEs). The PEs are organized in a 2D array and connected through a dedicated network. Each PE consists of two $\mu$-engines, namely the access $\mu$-engine and the execute $\mu$-engine. The access $\mu$-engine generates the addresses for source and destination operands, whereas the execute $\mu$-engine merely performs simple operations such as multiplication, addition, and multiply-add. The memory hierarchy is composed of an off-chip memory and two separate on-chip global buffers, one for data and one for $\mu$ops. These global on-chip buffers are shared across all the PEs. Each PE operates on one row of the filter and one row of the input and generates one row of partial-sum values. The partial-sum values are further accumulated horizontally across the PEs to generate the final output value. Using a SIMD model for transposed convolution operations leads to resource underutilization: the PEs that perform the computation for convolution windows with fewer operations remain idle, wasting computational resources.
A simple solution is to replace the SIMD model with a fully MIMD computing model and utilize the parallelism between the convolution windows with different numbers of operations. However, a MIMD execution model requires augmenting each processing engine with a dedicated operation buffer. While this design resolves the underutilization of resources, it imposes a large area overhead, increasing area consumption by $\approx$3$\times$. Furthermore, fetching and decoding instructions from each of these dedicated operation buffers significantly increases the von Neumann overhead. To address these challenges, we build the \ganax architecture on the observation that PEs in the same row perform the same operations for long stretches of time. As such, the proposed architecture leverages this observation to develop a middle ground between a fully SIMD and a fully MIMD execution model. The goal of the \ganax architecture design is twofold: (1) to mitigate PE underutilization by combining MIMD and SIMD models of computation for transposed convolution operations, (2) without compromising the efficiency of the SIMD model for conventional convolution operations. Next, we explain the two novel microarchitectural components that enable an efficient MIMD-SIMD accelerator design for GAN acceleration. \niparagraph{Hierarchical $\mu$op buffers.} To enable a unified MIMD and SIMD model of execution, we introduce a two-level $\mu$op buffer. Figure~\ref{fig:tlganax} illustrates the high-level structure of the two-level $\mu$op buffer, which consists of a global and a local $\mu$op buffer. The local and global $\mu$op buffers work cooperatively to perform the computations for GANs. Each horizontal group of PEs, called a processing vector (PV), shares a local $\mu$op buffer, whereas the global $\mu$op buffer is shared across all the PVs. The \ganax accelerator can operate in two distinct modes: SIMD mode and MIMD-SIMD mode.
Since all the convolution windows in the convolution operation have the same number of multiply-adds, the SIMD execution model is the best fit. In this case, the global $\mu$op buffer bypasses the local $\mu$op buffers and broadcasts the fetched $\mu$op to all the PEs. On the other hand, since the number of operations varies from one convolution window to another in the transposed convolution operation, the accelerator works in MIMD-SIMD mode. In this mode, the global $\mu$op buffer sends distinct indices to each local $\mu$op buffer. Upon receiving the index, each local $\mu$op buffer broadcasts a $\mu$op, at the location pointed to by the received index, to all the underlying PEs. The MIMD-SIMD mode enables the \ganax accelerator to utilize not only the parallelism between the convolution windows with the same number of operations, but also the parallelism across the windows with distinct numbers of operations. \niparagraph{Global $\mu$op buffer.} Before the computations of a layer start, a sequence of high-level instructions, which defines the structure of each GAN layer, is statically translated into a series of $\mu$ops. These $\mu$ops are pre-loaded into the global $\mu$op buffer, and then the execution starts. Each of the $\mu$ops either performs an operation across all the PEs (SIMD) or initiates a $\mu$op in each PV (MIMD-SIMD). The initiated operation in the MIMD-SIMD mode may vary from one PV to another. The SIMD and MIMD $\mu$ops can be stored in the global $\mu$op buffer in any order. A 1-bit field in the global $\mu$op identifies the type of $\mu$op: SIMD or MIMD-SIMD. In the SIMD mode\emdash{}all the PEs share the same $\mu$op globally but execute it on distinct data\emdash the global $\mu$op defines the intended operation to be performed by all the PEs. In this mode, the local $\mu$op buffer is bypassed and the global $\mu$op is broadcast to all the PEs at the same time.
Upon receiving the $\mu$op, all the PEs perform the same operation, but on distinct data. In the MIMD-SIMD mode\emdash{}all the PEs within the same PV share the same $\mu$op but different PVs may execute different $\mu$ops\emdash the global $\mu$op is partitioned into multiple fields (one field per PV), each of which defines an index for accessing an entry in the local $\mu$op buffer. Upon receiving the index, each local $\mu$op buffer retrieves the corresponding $\mu$op stored at the given index and broadcasts it to all the PEs which it controls. The global $\mu$op buffer is double-buffered so that the next set of $\mu$ops for performing the computations of GAN layer$_{i+1}$ can be loaded into the buffer while the $\mu$ops for GAN layer$_i$ are being executed. \niparagraph{Local $\mu$op buffer.} In the \ganax architecture, each PV has a dedicated local $\mu$op buffer. In the SIMD mode, the local $\mu$op buffers are completely bypassed and all the PEs perform the same operation sent from the global $\mu$op buffer. In the MIMD-SIMD mode, each local $\mu$op buffer is accessed at the location specified by a dedicated field in the global $\mu$op. This location may vary from one local $\mu$op buffer to another. The fetched $\mu$op is then broadcast to all the PEs within a PV, which perform the same operation but on distinct data. Each GAN layer may require a distinct sequence of $\mu$ops both globally and locally. Furthermore, each PE may need to access millions of operands at different locations to perform the computations of a GAN layer. Therefore, we would need not only to add large $\mu$op buffers to each PE, but also to drain and refill these buffers multiple times. Adding large buffers to the PEs imposes a large area overhead, area that could otherwise be utilized to improve the computing power of the accelerator. Moreover, the process of draining and refilling the $\mu$op buffers imposes a significant overhead in terms of both performance and energy.
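The two-level dispatch can be summarized with a short sketch (our own simplification; the $\mu$op encoding and field names are hypothetical): a 1-bit mode field either broadcasts the global $\mu$op to every PE, or hands each PV an index into its local $\mu$op buffer.

```python
def dispatch(global_uop, local_buffers):
    """Resolve one global uop into the per-PV uops that actually execute.

    global_uop is a dict with a 1-bit 'mode' field:
      {'mode': 'SIMD', 'op': ...}            -> same op for every PV
      {'mode': 'MIMD', 'indices': [i0, ...]} -> one local-buffer index per PV
    Within a PV, every PE executes the returned uop in lockstep (SIMD).
    """
    if global_uop['mode'] == 'SIMD':
        # Local buffers are bypassed; the global uop is broadcast to all PVs.
        return [global_uop['op']] * len(local_buffers)
    # MIMD-SIMD: each PV fetches its own uop at the index it was handed.
    return [buf[i] for buf, i in zip(local_buffers, global_uop['indices'])]

local_bufs = [['mul', 'madd'], ['mul', 'madd'], ['mul', 'madd']]  # 3 PVs
print(dispatch({'mode': 'SIMD', 'op': 'madd'}, local_bufs))
print(dispatch({'mode': 'MIMD', 'indices': [0, 1, 0]}, local_bufs))
```

Because the local buffers hold only the small set of distinct per-PV $\mu$ops, they can stay small while the global stream sequences them, which is the point of the hierarchy.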
To mitigate these overheads, we introduce a decoupled access-execute microarchitecture that enables us to significantly reduce the size of the $\mu$op buffers and eliminate the need to drain and refill the local $\mu$op buffers for each GAN layer. \subsection{Decoupled Access-Execute $\mu$Engines} Though the data access patterns in the transposed convolution operation are irregular, they are still structured. Furthermore, the data access patterns are repetitive across the convolution windows. Building upon this observation, we devise a microarchitecture that decouples the data accesses from the data processing. Figure~\ref{fig:access:execute} illustrates the organization of the proposed decoupled access-execute architecture, which consists of two major microarchitectural units, one for address generation (access $\mu$-engine) and one for performing the operations (execute $\mu$-engine). The access $\mu$-engine generates the addresses for the input, weight, and output buffers. The input, weight, and output buffers consume the generated addresses for each data read/write. The execute $\mu$-engine, on the other hand, receives the data from the input and weight buffers, performs an operation, and stores the result in the output buffer. The $\mu$ops of these two engines are entirely segregated; however, the access and execute $\mu$-engines work cooperatively to perform an operation. The $\mu$ops for the access $\mu$-engine handle the configuration of the index generator units. The $\mu$ops for the execute $\mu$-engine \emph{only} specify the type of operation to be performed on the data. As such, the execute $\mu$ops do \emph{not} need to include any fields for specifying the source/destination operands. Every cycle, the access $\mu$-engine sends out the addresses for source and destination operands based on its preconfigured parameters. Then, the execute $\mu$-engine performs an operation on the source operands.
The result of the operation is then stored in the location defined by the access $\mu$-engine. Having decoupled $\mu$-engines for accessing the data and executing the operations has the paramount benefit of enabling the reuse of execute $\mu$ops. Since there is no address field in the execute $\mu$ops, we can reuse the same execute $\mu$op on distinct data over and over again without the need to change any fields in the $\mu$ops. Reusing the same $\mu$op on distinct data helps to significantly reduce the size of the $\mu$op buffers. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{access-execute.pdf} \caption{Organization of the decoupled access-execute architecture.} \label{fig:access:execute} \vspace{-0.8cm} \end{figure} \niparagraph{Access $\mu$-engine.} Figure~\ref{fig:access:execute} illustrates the microarchitectural units of the access $\mu$-engine. The main function of the access $\mu$-engine is to generate the addresses for the source and destination operands based on a preloaded configuration. While a full-fledged access $\mu$-engine capable of generating arbitrary patterns of data addresses would add flexibility to the \ganax accelerator, it is overkill for our target application (\ie, GANs). As mentioned in the dataflow section (Section~\ref{sec:dataflow}), the data access patterns for transposed convolution operations are irregular, yet structured. Based on our analysis of the evaluated GANs, we observe that the data accesses in the \ganax dataflow are either \emph{strided} or \emph{sequential}. The stride value for a strided data access pattern depends on the number of inserted zeros in the multidimensional input activation. Furthermore, these data access patterns are repetitive across a large number of convolution windows and for a large number of cycles. We leverage these observations to simplify the design of the access $\mu$-engine. Figure~\ref{fig:access:execute}(a) depicts the block diagram of the access $\mu$-engine in \ganax.
The access $\mu$-engine mainly consists of one or more strided $\mu$index generators. The $\mu$index generator can generate one address every cycle, following a pattern governed by a preloaded configuration. Since the data access patterns may vary from one layer to another, we design a reconfigurable $\mu$index generator. Figure~\ref{fig:access:execute}(b) depicts the block diagram of the proposed reconfigurable $\mu$index generator. There are five configuration registers that govern the pattern of data address generation. The \xx{Addr.} configuration register specifies the initial address from which the data address generation starts, while the \xx{Offset} configuration register can be used to offset the range of generated addresses as needed. The \xx{Step} configuration register specifies the step size between two consecutive addresses, while the \xx{End} configuration register specifies the final value up to which the addresses should be generated. Finally, the \xx{Repeat} configuration register indicates the number of times that a configured data access pattern should be replayed. The modulo adder, which consists of an adder and a subtractor, enables data address generation in a rotating manner. The modulo adder performs a modulo addition on the values stored in the \xx{Addr.} and \xx{Step} registers. If the result of this modulo addition is less than the value in the \xx{End} register, the calculated result is sent to the output. This means that the next address to be generated is still within the range of the \xx{Addr.} and \xx{End} register values. Otherwise, the result of the modulo addition minus the value of the \xx{End} register is sent to the output. That is, the next address to be generated is beyond the \xx{End} register value and the address generation process must start over from the beginning.
In this scenario, the \xx{Decrement} signal is also asserted, which causes the value of the \xx{Repeat} register to be decremented by one, indicating that one round of address generation has finished. Once the \xx{Repeat} register reaches zero, the \xx{Stop} signal is asserted and no more addresses are generated. After configuring the parameters, the strided $\mu$index generator can yield one address per cycle without any further intervention from the controller. Using this configurable $\mu$index generator along with the observation that the data address patterns in GANs are structured, the \ganax architecture can bypass the inconsequential computations and save both cycles and energy. \niparagraph{Execute $\mu$-engine.} Figure~\ref{fig:access:execute}(c) depicts the microarchitectural units of the execute $\mu$-engine. The execute $\mu$-engine consists of an ALU, which can perform simple operations such as addition, multiplication, comparison, and multiply-add. The main job of the execute $\mu$-engine is \emph{just} to perform an operation on the received data. Each cycle, the execute $\mu$-engine consumes one $\mu$op from the $\mu$op FIFO, performs the operation on the source operands, and stores the result into the destination operand. If the $\mu$op FIFO becomes empty, the execute $\mu$-engine halts and no further operation is performed. In this case, all the input/weight/output buffers are notified to stop their reads/writes. The decoupling between the access and execute $\mu$-engines enables us to remove the address field from the execute $\mu$ops. Removing the address field from the execute $\mu$ops allows us to reuse the same $\mu$ops over and over again on different data. Furthermore, we leverage this $\mu$op reuse and the fact that the computation of a CNN requires a small set of $\mu$ops ($\approx$ 16) to simplify the design of the $\mu$op buffers.
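Under the register semantics described above, the strided $\mu$index generator admits a compact behavioral model. The following sketch is inferred from the text (register names reused for readability) and is not the actual RTL:

```python
def strided_index_generator(addr, offset, step, end, repeat):
    """Software model of the strided micro-index generator.

    Yields one address per 'cycle', rotating through [0, end) with the
    configured step, for 'repeat' full rounds; 'offset' shifts the range
    of generated addresses.
    """
    current = addr
    while repeat > 0:
        yield current + offset
        nxt = current + step        # modulo adder: Addr + Step
        if nxt < end:               # still within the Addr..End range
            current = nxt
        else:                       # wrap around: Decrement asserted,
            current = nxt - end     # one round of generation finished
            repeat -= 1
    # Repeat reached zero: Stop asserted, no more addresses generated
```

For example, a configuration with `addr=0, offset=0, step=2, end=6, repeat=2` replays the pattern 0, 2, 4 twice, matching the strided accesses that skip the inserted zero rows.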
Instead of draining and refilling the $\mu$op buffers, we preload all the necessary $\mu$ops for convolution and transposed convolution operations into the $\mu$op buffers. For the local $\mu$op buffers, we load \emph{all} the $\mu$ops before starting the computation of a GAN. \niparagraph{Synchronization between $\mu$-engines.} In the \ganax architecture (Figure~\ref{fig:access:execute}), there is one address FIFO for each strided $\mu$index generator. The address FIFOs perform the synchronization between the access $\mu$-engine and the execute $\mu$-engine. Once an address is generated by a strided $\mu$index generator, the generated address is pushed into the corresponding address FIFO. The addresses in the address FIFOs are later consumed to read/write data from/into the data buffers (\ie, the input/weight/output buffers). If any of the address FIFOs is full, the corresponding strided $\mu$index generator stops generating new addresses. If any of the address FIFOs is empty, no data is read/written from/into its corresponding data buffer. \section{Conclusion} \label{sec:conclusion} Generative adversarial networks harness both generative and discriminative deep models in a game-theoretic framework to generate close-to-real synthetic data. The generative model uses a fundamentally different mathematical operator, called transposed convolution, as opposed to the conventional convolution operator. Transposed convolution extrapolates information by first inserting zeros and then applying a convolution that needs to cope with the irregular placement of non-zero data. To address the associated challenges of executing generative models without sacrificing accelerator performance for conventional DNNs, this paper devised the \ganax accelerator. In the proposed accelerator, we introduced a unified architecture that conjoins the SIMD and MIMD execution models to maximize the efficiency of the accelerator for both generative and discriminative models.
On the one hand, to conform to the irregularities in the generative models, which arise from the zero-insertion step, \ganax supports selective execution of only the required computations by switching to a MIMD-SIMD mode. To support this mixed execution mode, \ganax offers a decoupled access-execute paradigm at the finest granularity of its processing engines. On the other hand, for the conventional discriminator DNNs, it sets the architecture to a purely SIMD mode. The evaluation results across a variety of generative adversarial networks reveal that the \ganax accelerator delivers, on average, 3.6$\times$ speedup and 3.1$\times$ energy reduction for the generative models. These significant benefits are attained without sacrificing the execution efficiency of the conventional discriminator DNNs. \section{Flow of Data in Generative Models} \label{sec:dataflow} \input{gan-motiv.tex} \niparagraph{Challenges and opportunities for GAN acceleration.} \begin{figure} \captionsetup[subfigure]{justification=centering} % \centering \subfloat[\scriptsize{Conventional Convolution\newline(Data Reduction)}]{\includegraphics[width=0.24\textwidth]{conv.pdf} \label{fig:conv}} % \subfloat[\scriptsize{Transposed Convolution\newline(Data Expansion)}]{\includegraphics[width=0.24\textwidth]{transposed-conv.pdf} \label{fig:tconv}} % \caption{(a) The convolution operation decreases the size of the data (data reduction). (b) The transposed convolution operation increases the size of the data (data expansion).} % \vspace{-0.5cm} \label{fig:conv-tconv} \end{figure} The generative models in GANs are fundamentally different from the discriminative models. As Figure~\ref{fig:dec-gen} illustrates, while the discriminative model mostly consists of convolution operations, the generative model uses transposed convolution operations.
Accelerating convolution operations has been the focus of numerous studies~\cite{bitfusion:isca:2018,eyeriss:jssc:2017,scnn:isca:2017,tetris:asplos:2017,cnn:loop:fpga:2017,flexflow:hpca:2017,pipelayer:hpca:2017,cambricon:micro:2016,eie:isca:2016,truenorth:arxiv:2016,tabla:hpca:2016,eyeriss:isca:2016,prime:isca:2016,dnnweaver:micro:2016,cnvlutin:isca:2016,blocking:systematic:arxiv:2016,deep-comp:iclr:2016,isaac:isca:2016,shidiannao:isca:2015,FPGADeep:fpga:2015,snnap:hpca:2015}; however, accelerating transposed convolution operations has remained unexplored. Figure~\ref{fig:conv-tconv} depicts the fundamental difference between the conventional convolution and transposed convolution operations. The convolution operation performs \emph{data reduction} and generally transforms the input data to a smaller representation. In contrast, the transposed convolution operation implements \emph{data expansion} and transforms the input data to a larger representation. The transposed convolution operation expands the data by first transforming the input through inserting zeros between the input rows and columns and then performing the computations by sliding a convolution window over the transformed input data. Due to this fundamental difference between convolution and transposed convolution operations, using the same conventional convolution dataflow for the generative model leads to inefficiency. The main reason for this inefficiency is the variable number of operations per convolution window in the transposed convolution, which is a direct result of the zero-insertion step.
Because of this zero-insertion step, distinct convolution windows may involve different numbers of consequential multiplications between inputs and weights.\footnote{A consequential multiplication is a multiplication in which none of the source operands are zero and which contributes to the final value of the convolution operation.} This discrepancy in the number of operations is the root cause of inefficiency in the computations of generative models if a conventional convolution dataflow is used. As such, we aim to design an efficient flow of data for GANs by focusing on: (1) managing the discrepancy in the number of operations per convolution window in order to mitigate the inefficiencies in the execution of generative models, (2) leveraging the similarities between convolution and transposed convolution operations in order to accelerate both discriminative and generative models on the same hardware platform, and (3) improving the data reuse in discriminative and generative models. \niparagraph{Why is using a conventional convolution dataflow inefficient for transposed convolution?} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{conv_dataflow.pdf} \caption{(a) Zero-insertion step in a transposed convolution operation for a 4$\times$4 input and the transformed input. The light-colored squares display zero values in the transformed input. (b) Using the conventional dataflow for performing a transposed convolution operation.} \vspace{-0.7cm} \label{fig:conv_dataflow} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{dataflow_fig.pdf} \caption{The \ganax flow of data after applying (a) output row reorganization and (b) filter row reorganization. (c) The \ganax flow of data after applying both output and filter row reorganization and eliminating the idle compute nodes.
The combination of these flow optimizations reduces the idle (white) compute nodes and improves the resource utilization.} \vspace{-0.7cm} \label{fig:dataflow} \end{figure*} Going through a simple example of a 2-D transposed convolution, we illustrate the main sources of inefficiency in performing transposed convolution when a conventional convolution dataflow is used. Figure~\ref{fig:conv_dataflow}(a) illustrates an example of performing a transposed convolution operation using a conventional convolution dataflow. In this transposed convolution operation, a 5$\times$5 filter with a stride of one and padding of two is applied to a 4$\times$4 2D input. In the initial step, the transposed convolution operation inserts one row and one column of zeros between successive rows and columns (white squares). After this zero-insertion step, the input is expanded from a 4$\times$4 matrix to an 11$\times$11 one. The number of zeros to be inserted for each transposed convolution layer in the generative models may vary from one layer to another and is a parameter of the network. After performing the zero-insertion, the next step is to slide a convolution window over the transformed input and perform the multiply-add operations. Figure~\ref{fig:conv_dataflow}(b) illustrates performing this convolution operation using a conventional convolution dataflow~\cite{eyeriss:jssc:2017,tetris:asplos:2017,eyeriss:isca:2016}. To avoid clutter in Figure~\ref{fig:conv_dataflow}(b), we only show the dataflow for generating output rows 2-5. Each circle in Figure~\ref{fig:conv_dataflow}(b) represents a compute node that can perform vector-vector multiplications between a row of the filter and a row of the zero-inserted input. The filter rows are spatially reused across the computation nodes in a vertical manner.
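The zero-insertion step of this example can be reproduced with a short NumPy sketch; the helper name `zero_insert` is ours, and the sketch also measures the fraction of zero operands that would make multiplications inconsequential:

```python
import numpy as np

# Sketch of the zero-insertion step for the 4x4 example above: one zero
# row/column between elements and a padding of two yields the 11x11
# transformed input that the 5x5 filter (stride 1) slides over.

def zero_insert(x, zeros=1, pad=2):
    """Insert `zeros` zero rows/columns between elements, then zero-pad."""
    n = x.shape[0]
    expanded = np.zeros((n + (n - 1) * zeros,) * 2, dtype=x.dtype)
    expanded[::zeros + 1, ::zeros + 1] = x   # scatter original values
    return np.pad(expanded, pad)

x = np.arange(1, 17, dtype=float).reshape(4, 4)
t = zero_insert(x)                           # 11x11 transformed input
frac_zero = 1.0 - np.count_nonzero(t) / t.size
```

Only the original 16 values survive in the 121-entry transformed input, so the overwhelming majority of operand fetches would touch a zero, which is consistent with the large fraction of inconsequential multiply-adds discussed earlier.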
Once the vector-vector multiplications finish, the partial sums are aggregated horizontally to yield the result of the transposed convolution operation for each output row. The black circles represent the compute nodes that perform consequential operations, whereas the white circles represent the compute nodes that perform inconsequential operations. As depicted in Figure~\ref{fig:conv_dataflow}(b), there will be inconsequential operations (white circles) if a conventional convolution dataflow is used for the execution of transposed convolution operations. Because of the inserted zeros, some of the filter rows are not used to compute the value of an output row. For example, since the 1$^\text{st}$, 3$^\text{rd}$, and 5$^\text{th}$ rows of the input are zero, the 2$^\text{nd}$ output row only needs to perform the operations for the non-zero elements; hence, it uses only the 2$^\text{nd}$ and 4$^\text{th}$ filter rows, leaving three compute nodes idle. Overall, in this example, 50$\%$ of the compute nodes remain idle during the execution of this transposed convolution operation. Analyzing this transposed convolution operation reveals three main sources of inefficiency when a conventional convolution dataflow is used. \begin{enumpackedp} % \item \textbf{Coarse-grain resource underutilization:} Since the consequential filter rows vary from one output row to another, a significant number of compute nodes remain idle. In the aforementioned example, this underutilization applies to 50$\%$ of the compute nodes, which perform vector-vector multiplications. % \item \textbf{Fine-grain resource underutilization:} Even within a compute node, a large fraction of the multiply-add operations are inconsequential due to the columnar zero insertion. % \item \textbf{Reuse reduction:} While the compute units pass along the filter rows for data reuse, the inserted zeros render this data transfer futile.
\end{enumpackedp} \noindent{} We address the first two sources of inefficiency with a series of optimizations on the flow of data in GANs. Then, to address the last source of inefficiency, which arises because of the inconsequential multiply-add operations within each compute node, we introduce an architectural solution (Section~\ref{sec:arch}). \niparagraph{Flow of data for generative models in GANAX.} Figure~\ref{fig:dataflow} illustrates the proposed flow of data optimizations for generative models in \ganax. To mitigate the challenges of using a conventional convolution dataflow for the transposed convolution operations in generative models, we leverage the insight that even though the patterns of computation may vary from one output row to another, they are still structured. Taking a closer look at Figure~\ref{fig:conv_dataflow}, we learn that there are only two distinct patterns\footnote{The location of white and black circles (compute nodes) defines each pattern.} in the output row computations. In this example, the even output rows (\ie, 2$^\text{nd}$ and 4$^\text{th}$) use one pattern of computation, whereas the odd output rows (\ie, 3$^\text{rd}$ and 5$^\text{th}$) use a different pattern for their computations. Building upon this observation, we introduce a series of flow of data optimizations to mitigate the aforementioned inefficiencies in the computation of the transposed convolution operation when a conventional convolution dataflow is used. The first optimization maximizes the data reuse by reorganizing the computation of the output rows so that rows with the same computation pattern become adjacent. Figure~\ref{fig:dataflow}(a) illustrates the flow of data after applying this output row reorganization. Applying the output row reorganization in this example makes the even-indexed output rows (the 2$^\text{nd}$ and 4$^\text{th}$) adjacent.
A similar adjacency is established for the odd-indexed output rows (the 3$^\text{rd}$ and 5$^\text{th}$). Although this optimization addresses the data reuse problem, it does not deal with the resource underutilization (\ie, idle compute nodes (white circles) still exist). To mitigate this resource underutilization, we introduce a second optimization that reorganizes the filter rows. As shown in Figure~\ref{fig:dataflow}(b), applying the filter row reorganization establishes an adjacency for the 1$^\text{st}$, 3$^\text{rd}$, and 5$^\text{th}$ filter rows. Similarly, the 2$^\text{nd}$ and 4$^\text{th}$ filter rows become adjacent. After applying the output and filter row reorganizations, as shown in Figure~\ref{fig:dataflow}(b), the idle compute nodes can simply be eliminated from the dataflow. Figure~\ref{fig:dataflow}(c) illustrates the \ganax flow of data after performing both optimizations, which improves the resource utilization for the transposed convolution operation from 50\% to 100\%. The proposed \ganax flow of data also addresses the inefficiency in performing the horizontal accumulation of partial sums. As shown in Figure~\ref{fig:conv_dataflow}(b), the conventional convolution dataflow requires five cycles to perform the horizontal accumulation for each output row, regardless of its location. However, comparing Figure~\ref{fig:conv_dataflow}(b) and Figure~\ref{fig:dataflow}(c), we observe that after applying the output and filter row reorganization optimizations, the number of required cycles for performing the horizontal accumulation reduces from five to two for even-indexed output rows and from five to three for odd-indexed output rows.
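The two computation patterns and the grouping performed by the output row reorganization can be modeled in a few lines of Python. This sketch ignores image-boundary effects, so it captures only the interior rows of the example (5$\times$5 filter, one inserted zero row, stride one):

```python
def consequential_filter_rows(out_row, k=5, period=2):
    """Filter rows that touch a non-zero input row for a given output row
    (0-indexed). With one inserted zero row, non-zero rows recur with
    period 2, so the pattern depends only on the output row's parity."""
    return tuple(f for f in range(k) if (out_row + f) % period == 0)

def reorganize(num_out_rows, k=5, period=2):
    """Group output rows by their computation pattern. Rows sharing a
    pattern are made adjacent (output row reorganization); within each
    group the unused filter rows can then be dropped (filter row
    reorganization), eliminating the idle compute nodes."""
    groups = {}
    for r in range(num_out_rows):
        groups.setdefault(consequential_filter_rows(r, k, period), []).append(r)
    return groups
```

Running `reorganize(6)` yields exactly two groups, one needing three filter rows and one needing two, which mirrors the three-cycle and two-cycle horizontal accumulations noted above.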
While the proposed flow of data optimizations effectively improve the resource utilization for transposed convolution, an interesting architectural challenge arises: \emph{how can we fully utilize the parallelism between the computations of output rows that require different numbers of cycles for horizontal accumulation (two cycles for even-indexed and three cycles for odd-indexed output rows)?} If a SIMD execution model is used, some of the compute nodes have to remain idle until the accumulations for the output rows that require more cycles finish. The next section elaborates on the \ganax architecture, which exploits the introduced flow of data for transposed convolution and fully utilizes the parallelism between distinct output rows by conjoining the MIMD and SIMD execution models. \section{Evaluation and Methodology} \label{sec:eval} \begin{table} \centering \caption{Energy comparison between \ganax microarchitectural units and memory. PE energy includes the energy consumption of an arithmetic operation and the strided $\mu$index generators.} \includegraphics[width=0.49\textwidth]{energy-cost.pdf} \vspace{-0.5cm} \label{tab:energy} \end{table} \begin{table} \centering \caption{Area measurement of the major hardware units with \code{TSMC\,45nm}.} \includegraphics[width=0.49\textwidth]{area-breakdown.pdf} \vspace{-0.7cm} \label{tab:area} \end{table} \niparagraph{Overall performance and energy consumption comparison.} Figure~\ref{fig:speedup-eyeriss} depicts the speedup of the generative models with \ganax over \eyeriss~\cite{eyeriss:isca:2016}. On average, \ganax yields \xx{3.6$\times$} speedup over \eyeriss. The generative models with a larger fraction of inserted zeros in the input data and a larger number of inconsequential operations in the transposed convolution layers enjoy a higher speedup with \ganax. Across all the evaluated models, \bench{3D-GAN} achieves the highest speedup (\xx{6.1}$\times$).
This higher speedup is mainly attributed to the larger number of inserted zeros in its transposed convolution layers. On average, the fraction of inserted zeros for \bench{3D-GAN} is around \xx{80\%} (see Figure~\ref{fig:zero-data}). On the other extreme, \bench{MAGAN} enjoys a speedup of merely \xx{1.3$\times$}, which is attributed to it having the fewest inserted zeros in its transposed convolution layers among the evaluated GANs. Figure~\ref{fig:energy-eyeriss} shows the energy reduction achieved by \ganax over \eyeriss. On average, \ganax effectively reduces the energy consumption by \xx{3.1$\times$} over the \eyeriss accelerator. The GANs (\bench{3D-GAN}, \bench{DCGAN}, and \bench{GP-GAN}) with the highest fraction of zeros and inconsequential operations in the transposed convolution layers enjoy an energy reduction of more than \xx{4.0$\times$}. These results reveal that our proposed architecture is effective in addressing the main sources of inefficiency in the generative models. Figure~\ref{fig:runt-enrg-bd} shows the normalized runtime and energy breakdown between the discriminative and generative models. For each network, the first and second bars show the normalized values for \eyeriss and \ganax, respectively. To be consistent across all the networks, for the discriminative model of \bench{MAGAN}, we \emph{only} consider the contribution of convolution layers to the overall runtime and energy consumption. As the results show, while \ganax significantly reduces both the runtime and energy consumption of the generative models, it delivers the same level of efficiency as \eyeriss for the discriminative models. \niparagraph{Energy breakdown of the microarchitectural units.} Figure~\ref{fig:enrg_bd_micro} illustrates the overall normalized energy breakdown of the generative models across the distinct microarchitectural components of the \ganax architecture. The first and second bars show the normalized energy consumed by \eyeriss and \ganax, respectively.
As the results show, \ganax reduces the energy consumption of all the microarchitectural units. This reduction is mainly attributed to the efficient flow of data in \ganax and the decoupled access-execute architecture, which cooperatively diminish the sources of inefficiency in the execution of transposed convolution operations. \begin{figure} \centering \subfloat[Runtime]{\includegraphics[width=0.48\textwidth]{runtime-breakdown.pdf} \label{fig:runt-bd-eyeriss}} \vspace{-0.cm} \\ \subfloat[Energy]{ \includegraphics[width=0.48\textwidth]{energy-breakdown.pdf} \label{fig:enrg-bd-eyeriss}} \caption{Breakdown of (a) runtime and (b) energy consumption between discriminative and generative models normalized to the runtime and energy consumption of \eyeriss. For each network, the first (second) bar shows the normalized value when the application is executed on \eyeriss (\ganax).} \vspace{-0.5cm} \label{fig:runt-enrg-bd} \end{figure} \niparagraph{Processing elements utilization.} To show the effectiveness of the \ganax dataflow in improving the resource utilization, we measure the percentage of the total runtime during which the PEs are actively performing a consequential operation. Figure~\ref{fig:peutil} depicts the utilization of the PEs for \eyeriss and \ganax. \ganax exhibits a high PE utilization, around \xx{90\%}, across all the evaluated GANs. This high resource utilization in \ganax is mainly attributed to the proposed dataflow, which places the computations of rows with similar computation patterns adjacent to each other. This forced adjacency of similar computation patterns eliminates inconsequential operations, which leads to a significant improvement in the utilization of the processing engines. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{energy-breakdown-microarch.pdf} \caption{Breakdown of energy consumption of the generative models between different microarchitectural units.
The first bar shows the normalized energy breakdown for \eyeriss. The second bar shows the energy breakdown for \ganax normalized to \eyeriss.} \vspace{-0.2cm} \label{fig:enrg_bd_micro} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{pe-utilization.pdf} \caption{Average PE utilization for the generative models in \eyeriss and \ganax.} \vspace{-0.5cm} \label{fig:peutil} \end{figure} \section{Introduction} \label{sec:intro} Deep Neural Networks (DNNs) have been widely used to deliver unprecedented levels of accuracy in various applications. However, they rely on the availability of copious amounts of labeled training data, which can be costly to obtain as it requires human effort to label. To address this challenge, a new class of deep networks, called Generative Adversarial Networks (GANs), has been developed with the intention of automatically generating larger and richer datasets from a small initial labeled training dataset. GANs combine a generative model, which attempts to create synthetic data similar to the original training dataset, with a discriminative model, a conventional DNN that attempts to discern whether the data produced by the generative model is synthetic or belongs to the original training dataset~\cite{gan:goodfellow:nips:2014}. The generative and discriminative models compete with each other in a minimax game, resulting in a stronger generator and discriminator. As such, GANs can create impressive new datasets that are hardly discernible from the original training datasets. With this power, GANs have gained popularity in numerous domains, such as medicine, where overly costly human-centric studies need to be conducted to collect relatively small labeled datasets~\cite{medical:image:syn,retinal:image:syn}.
Furthermore, the ability to expand training datasets has gained considerable popularity in robotics~\cite{gan:robot:nips:2016}, autonomous driving~\cite{sadgan:arXiv:2016}, and media synthesis~\cite{artgan:arxiv:2017,gpgan:arxiv:2017,discogan:arxiv:2017,gan:music_synth:arxiv:2017,gan:midinet:arxiv:2017,3dgan:nips:2016,dcgan:arxiv:2015} as well. Currently, advances in acceleration for conventional DNNs are breaking the barriers to adoption~\cite{brainwave:2017,tpu:isca:2017,eie:isca:2016,eyeriss:isca:2016,dnnweaver:micro:2016,diannao:isca:2014}. However, while GANs are set to push the frontiers in deep learning, there is a lack of hardware accelerators that address their computational needs. This paper sets out to explore this state-of-the-art dimension of deep learning from the hardware acceleration perspective. Given the abundance of accelerators for conventional DNNs~\cite{bitfusion:isca:2018,eyeriss:jssc:2017,scnn:isca:2017,tetris:asplos:2017,cnn:loop:fpga:2017,flexflow:hpca:2017,pipelayer:hpca:2017,cambricon:micro:2016,eie:isca:2016,truenorth:arxiv:2016,tabla:hpca:2016,eyeriss:isca:2016,prime:isca:2016,dnnweaver:micro:2016,cnvlutin:isca:2016,blocking:systematic:arxiv:2016,deep-comp:iclr:2016,isaac:isca:2016,shidiannao:isca:2015,FPGADeep:fpga:2015,snnap:hpca:2015,nn:general:pact:2015,ngpu:micro:2015,divergent:taco:2015,anpu:isca:2014,diannao:isca:2014,olivier:isca:2013,npu:micro:2012,neuflow:cvpr:2011}, designing an accelerator for GANs will only be attractive if they pose new challenges in architecture design. By studying the structure of emerging GAN models~\cite{artgan:arxiv:2017,gpgan:arxiv:2017,discogan:arxiv:2017,gan:music_synth:arxiv:2017,gan:midinet:arxiv:2017,3dgan:nips:2016,dcgan:arxiv:2015}, we observe that they use a fundamentally different type of mathematical operator in their generative model, called \emph{transposed convolution}, that operates on multidimensional input feature maps.
The transposed convolution operator aims to extrapolate information from input feature maps, in contrast to the conventional convolution operator, which aims to interpolate the most relevant information from input feature maps. As such, the transposed convolution operator first inserts zeros within the multidimensional input feature maps and then convolves a kernel over this expanded input to augment the inserted zeros with information. The transposed convolution in GANs fundamentally differs from the operators in the backward pass of training conventional DNNs, as these do not insert zeros. Moreover, although there is a convolution stage in the transposed convolution operator, the inserted zeros lead to underutilization of the compute resources if a conventional convolution accelerator were to be used. The following highlights the sources of underutilization and outlines the contributions of this paper, which presents the first accelerator design for GANs. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{num-operations-tconv.pdf} \caption{The fraction of multiply-add operations in transposed convolution layers that are inconsequential due to the inserted zeros in the inputs.} \vspace{-0.cm} \label{fig:zero-data} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{disc-gen-v1.pdf} \caption{High-level visualization of a Generative Adversarial Network (GAN).} \label{fig:dec-gen} \vspace{-0.7cm} \end{figure} \begin{enumpacked} \item \niparagraph{Performing multiply-add on the inserted zeros is inconsequential.} % Unlike conventional convolution, the accelerator should skip over the zeros, as they constitute more than \xx{60\%} of all the multiply-add operations, as Figure~\ref{fig:zero-data} illustrates. Skipping the zeros creates an irregular dataflow and diminishes data reuse if not handled adequately in the microarchitecture.
To address this challenge, we propose a reorganization of the output computations that allocates computing rows with similar patterns of zeros to adjacent processing engines. This forced adjacency reclaims data reuse across the neighboring compute units. \item \niparagraph{Reorganizing the output computations is inevitable but breaks the SIMD execution model.} % The inserted zeros, even with the output computation reorganization, create distinct patterns of computation when sliding the convolution window. As such, the same sequence of operations cannot be repeated across all the processing engines, breaking the full SIMD execution model. Therefore, we propose a unified MIMD-SIMD accelerator architecture that exploits repeated patterns in the computations to create different microprograms that can execute concurrently in SIMD mode. To maximize the benefits from both levels of parallelism, we propose an architecture, called \ganax, that supports interleaving MIMD and SIMD operations at the finest granularity of a single microprogrammed operation. \item \niparagraph{MIMD is inevitable but its overhead needs to be amortized.} % Changes in the dataflow and the computation order necessitate irregular accesses to multiple different memory structures while the operations remain the same. That is, the data processing part can be SIMD, but the irregular data access patterns prevent the use of this execution model. For \ganax, we propose decoupling the data accesses from the data processing. This decoupling breaks each processing engine into an access micro-engine and an execute micro-engine. The proposed architecture extends the concept of the access-execute architecture~\cite{stream:accl:isca:2017,decoupled:affine:isca:2017,decoupled:prefetching:micro:2016,decoupled:smith:sigarch:1982} to the finest granularity of computation for each individual operation.
\end{enumpacked} Although \ganax addresses these challenges to enable efficient execution of the transposed convolution operator, it imposes no extra overhead for conventional convolution, offering the same level of performance and efficiency as a conventional accelerator. To establish the effectiveness of our architectural innovation, we evaluate \ganax using six recent GAN models from distinct application domains. On average, \ganax delivers 3.6$\times$ speedup and 3.1$\times$ energy savings over a conventional convolution accelerator. These results indicate that \ganax is an effective step towards designing accelerators for the next generation of deep networks. \section{Instruction Set Architecture Design ($\mu$Ops)} \label{sec:isa} The \ganax ISA should provide a set of $\mu$ops to efficiently map the proposed flow of data for both generative and discriminative models onto the accelerator. Furthermore, these $\mu$ops should be sufficiently \emph{flexible} to serve the distinct patterns in the computation of both convolution and transposed convolution operations. Finally, to keep the size of the $\mu$op buffers modest, the set of $\mu$ops should be \emph{succinct}. To achieve these multifaceted goals, we first introduce a set of algorithmic observations associated with GAN models. Then, we introduce the major $\mu$ops that enable the execution of GAN models on \ganax. \subsection{Algorithmic Observations} The following elaborates a set of algorithmic observations that are the foundation of the \ganax $\mu$ops. \niparagraph{(1) MIMD/SIMD execution model.} Due to the regular and structured patterns in the computation across the convolution windows, conventional DNNs are best suited for SIMD processing. However, the patterns in the computation of GANs are inherently different between generative and discriminative models. Due to the inserted zeros in the generative models, their patterns in the computation vary from one convolution window to another.
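To make this window-to-window variation concrete, the following is a minimal NumPy sketch (the toy sizes and the \texttt{zero\_insert} helper are our own, not from the paper) that inserts zeros for a stride-2 transposed convolution and counts the multiply-adds that become inconsequential:

```python
import numpy as np

def zero_insert(ifmap, stride=2, pad=1):
    """Insert (stride - 1) zeros between elements, then zero-pad the border."""
    h, w = ifmap.shape
    expanded = np.zeros(((h - 1) * stride + 1, (w - 1) * stride + 1))
    expanded[::stride, ::stride] = ifmap
    return np.pad(expanded, pad)

ifmap = np.ones((4, 4))
x = zero_insert(ifmap, stride=2, pad=1)     # 4x4 -> 9x9 after insertion + padding
k = 3                                       # toy 3x3 kernel
windows = [x[i:i + k, j:j + k]
           for i in range(x.shape[0] - k + 1)
           for j in range(x.shape[1] - k + 1)]
total_macs = len(windows) * k * k           # multiply-adds a dense engine would do
zero_macs = sum(int((win == 0).sum()) for win in windows)
print(f"inconsequential MACs: {zero_macs / total_macs:.0%}")  # 77% for this toy case
# Adjacent windows see *different* zero patterns, so one op sequence
# cannot simply be replayed across all PEs (the SIMD-breaking effect):
assert not np.array_equal(windows[0] != 0, windows[1] != 0)
```

Even in this toy configuration, well over 60\% of the multiply-adds touch an inserted zero, consistent with the trend in Figure~\ref{fig:zero-data}.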
We observe that exploiting a combination of the SIMD and MIMD execution models can be more efficient in accelerating GAN models than solely relying on SIMD. Therefore, the focus of the \ganax $\mu$ops is to include the operations that enable \ganax to fully utilize the SIMD and MIMD execution models. \niparagraph{(2) Repetitive computation patterns.} We observe that even though GANs require a large number of computations, most of these computations are similar between generative and discriminative models. In addition, these computations are repetitive over a long period of time. Building upon this observation, we introduce a customized \codebold{repeat} $\mu$op that significantly reduces the $\mu$op footprint. In addition, the commonality between the operations in generative and discriminative models allows us to design a succinct, yet representative, set of $\mu$ops. To further reduce the $\mu$op footprint, we introduce a dedicated set of execute $\mu$ops that only define the type of operations. These $\mu$ops are reused for distinct data during the execution of generative and discriminative models on the \ganax architecture. \niparagraph{(3) Structured and repetitive memory access patterns.} We observe that despite the irregularity of memory access patterns in generative models, they are still structured and repetitive. Analyzing the data access patterns of various GANs reveals that their memory access patterns are either sequential or strided. Building upon this observation and our decoupled access-execute architecture, we introduce a set of access $\mu$ops that are used merely to configure the access $\mu$engines and initiate the address generation process. Once initiated, the access $\mu$engines generate the configured access patterns over and over until they are explicitly stopped. \subsection{Access $\mu$Ops} \ganax access $\mu$ops are used to configure the access $\mu$engine and initiate/stop the process of address generation.
These $\mu$ops are executed across all the PEs within a PV whose index is indicated by the \code{pv\_index} field in the $\mu$ops. Furthermore, in all of these $\mu$ops, \code{\%addrgen\_idx} specifies the index of the targeted address generator in the access $\mu$engine. The supported $\mu$ops in the access $\mu$engines are as follows: \begin{enumpacked} \item \codebold{access.cfg} \code{\%pv\_idx}, \code{\%addrgen\_idx}, \code{\%dst}, \code{imm}: This $\mu$op loads a 16-bit \code{imm} value into one of the five \code{\%dst} configuration registers (\emph{i.e.,} as shown in Figure~\ref{fig:access:execute}(b), these configuration registers are \xx{Addr.}, \xx{Offset}, \xx{Step}, \xx{End}, and \xx{Repeat}) of one of the address generators in the access $\mu$engine. \item \codebold{access.start} \code{\%pv\_idx}, \code{\%addrgen\_idx}: This $\mu$op initiates the address generation in one of the address generators in the access $\mu$engine. The process of address generation continues until an \codebold{access.stop} $\mu$op is executed or the iteration register reaches zero. \item \codebold{access.stop} \code{\%pv\_idx}, \code{\%addrgen\_idx}: This $\mu$op interrupts the address generation of one of the address generators in the access $\mu$engine. The address generation can be re-initiated by executing an \code{access.start} $\mu$op. \end{enumpacked} \subsection{Execute $\mu$Ops} Execute $\mu$ops are categorized into two groups: (1) SIMD $\mu$ops, which are fetched from each PE's local $\mu$op buffer and executed locally within each PE, and (2) MIMD $\mu$ops, which are fetched from the global $\mu$op buffer and executed across all PEs. The SIMD $\mu$ops can be executed in a MIMD manner as well. That is, the MIMD $\mu$ops are a superset of the SIMD $\mu$ops. We first introduce the SIMD $\mu$ops, then explain the extra $\mu$ops that belong to the MIMD group.
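As a behavioral illustration, the following Python sketch models one address generator in an access $\mu$engine. The register names (Addr, Offset, Step, End, Repeat) follow the text above; the exact address arithmetic is our assumption, not the paper's RTL:

```python
class AddrGen:
    """One address generator inside an access micro-engine (behavioral model).
    Register names follow the text; the address arithmetic is an assumption."""
    def __init__(self):
        self.cfg = {"addr": 0, "offset": 0, "step": 1, "end": 0, "repeat": 1}
        self.running = False

    def access_cfg(self, dst, imm):
        assert 0 <= imm < 2 ** 16        # access.cfg loads a 16-bit immediate
        self.cfg[dst] = imm

    def access_start(self):              # initiate address generation
        self.running = True

    def access_stop(self):               # interrupt address generation
        self.running = False

    def addresses(self):
        """Emit the configured sequential/strided pattern `repeat` times,
        or until access_stop clears the running flag."""
        for _ in range(self.cfg["repeat"]):
            a = self.cfg["addr"] + self.cfg["offset"]
            while a <= self.cfg["end"]:
                if not self.running:
                    return
                yield a
                a += self.cfg["step"]

gen = AddrGen()
gen.access_cfg("addr", 0)
gen.access_cfg("step", 4)
gen.access_cfg("end", 12)
gen.access_cfg("repeat", 2)
gen.access_start()
print(list(gen.addresses()))             # -> [0, 4, 8, 12, 0, 4, 8, 12]
```

Once started, the generator streams the configured pattern repeatedly with no further $\mu$op traffic, which is what lets the execute $\mu$ops stay free of source and destination fields.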
\niparagraph{SIMD $\mu$ops.} The SIMD group comprises only a succinct, yet representative, set of $\mu$ops for performing convolution and transposed convolution operations. The combination of SIMD $\mu$ops and the decoupled access-execute architecture in \ganax helps to reduce the size of the local $\mu$op buffers. The SIMD $\mu$ops do not have source or destination fields and only specify the type of operation to be executed. Once a $\mu$op executes, depending on the type of operation, a given PE consumes the addresses generated by the $\mu$index generators and delivers the data to the execute $\mu$engine. Since these $\mu$ops do not have any source or destination register, they are pre-loaded into the local $\mu$op buffers before the execution starts. Then, they are reused repeatedly on distinct data whose addresses are generated by the access $\mu$engines. The SIMD $\mu$ops are as follows: \begin{enumpacked} \item \codebold{add}, \codebold{mul}, \codebold{mac}, \codebold{pool}, and \codebold{act}: Depending on the type, these $\mu$ops consume one or more addresses from the $\mu$index generators for source and destination operands. For example, \codebold{add} consumes two addresses for the source operands and one address for the destination operand, whereas \codebold{act} uses one address for the source operand and one address for the destination operand. \item \codebold{repeat}: This $\mu$op causes the next fetched $\mu$op to be repeated a specified number of times. This number is specified in a microarchitectural register in each PE. This register is pre-loaded with a MIMD $\mu$op before the execution starts. \end{enumpacked} \niparagraph{MIMD $\mu$ops.} The MIMD $\mu$ops are loaded into the global $\mu$op buffers and executed globally across all the PEs.
In addition to all the SIMD $\mu$ops, the following $\mu$ops execute in a MIMD manner: \begin{enumpacked} \item \codebold{mimd.ld} \code{\%pv\_idx}, \code{\%dst}, \code{imm}: This $\mu$op loads the immediate value (\code{imm}) into one of the microarchitectural registers (\code{\%dst}) of all the PEs within a PV. The \code{\%pv\_idx} field specifies the index of the target PV. This $\mu$op is mainly used to load an immediate value into the repeat register. \item \codebold{mimd.exe} \code{\%$\mu$op\_index\textsubscript{1}}, ..., \code{\%$\mu$op\_index\textsubscript{i}}: Upon receiving this $\mu$op, the i\textsuperscript{th} PV fetches the $\mu$op at location \code{\%$\mu$op\_index\textsubscript{i}} from its local $\mu$op buffer and executes it horizontally across all its PEs. Since the value of \code{\%$\mu$op\_index} may vary from one PV to another, this $\mu$op causes \ganax to operate in a MIMD manner. \end{enumpacked} \begin{table} \centering \caption{The evaluated GAN models, their release year, and the number of convolution (\code{Conv}) and transposed convolution (\code{TConv}) layers per generative and discriminative models.} \vspace{-0.2cm} \includegraphics[width=0.48\textwidth]{benchmarks.pdf} \label{tab:bench} \vspace{-0.65cm} \end{table} \section{Methodology} \label{sec:eval} \niparagraph{Workloads.} We use several state-of-the-art GANs to evaluate the \ganax architecture. Table~\ref{tab:bench} shows the evaluated GANs, a brief description of their applications, and the number of convolution (\code{Conv}) and transposed convolution (\code{TConv}) layers per generative and discriminative models. \niparagraph{Hardware design and synthesis.} We implement the \ganax microarchitectural units, including the strided $\mu$index generator, the arithmetic logic of the PEs, the controllers, the non-linear function units, and other logic hardware units, in Verilog.
We use the \code{TSMC 45\,nm} standard-cell library and \code{Synopsys Design Compiler (L-2016.03-SP5)} to synthesize these units and obtain the area, delay, and energy numbers. \niparagraph{Energy measurements.} Table~\ref{tab:energy} shows the energy numbers for the major microarchitectural units, memory operations, and buffer accesses in \code{TSMC 45nm} technology. To measure the area and read/write access energy of the register files, SRAMs, and local/global buffers, we use \bench{CACTI-P}~\cite{cactip}. For a fair comparison, we use the energy numbers reported in \tetris~\cite{tetris:asplos:2017}, which has a PE architecture similar to \eyeriss. In Table~\ref{tab:energy}, the energy overhead of the strided $\mu$index generators is included in the normalized energy cost of a PE. For DRAM accesses, we use Micron's DDR4 system power calculator~\cite{micron:ddr4}. The same frequency (\xx{500 MHz}) is used for both \eyeriss and \ganax in all the experiments. \begin{figure}[t] \centering \subfloat[Speedup]{\includegraphics[width=0.48\textwidth]{speedup-vs-eyeriss.pdf} \label{fig:speedup-eyeriss}} \vspace{-0.0cm} \\ \subfloat[Energy Reduction]{ \includegraphics[width=0.48\textwidth]{energy-vs-eyeriss.pdf} \label{fig:energy-eyeriss}} \caption{Speedup and energy reduction of generative models compared to \eyeriss~\cite{eyeriss:isca:2016}.} \label{fig:speedup-energy-eyeriss} \vspace{-0.5cm} \end{figure} \niparagraph{Architecture configurations.} In this paper, we study a configuration of \ganax with 16 Processing Vectors (PVs), each with 16 Processing Engines (PEs). We use the default \eyeriss configurations for on-chip memories, such as the size of the input and partial sum registers, the weight SRAM, and the global data buffer. The same on-chip memory sizes are used for \ganax. Each local $\mu$op buffer has 16 entries. This number of entries is sufficient to encompass all the execute $\mu$ops. The global $\mu$op buffer has 32 entries, each with 64 bits, four bits per PV.
Each PV uses these four bits to index its local $\mu$op buffer. An extra bit in each global $\mu$op determines the execution model of the accelerator for the current operation (\ie, SIMD or MIMD-SIMD). \niparagraph{Area analysis.} Table~\ref{tab:area} shows the major architectural components for the baseline architecture (\eyeriss~\cite{eyeriss:jssc:2017,eyeriss:isca:2016}) and \ganax in the \code{45\,nm} technology node. For the logic of the microarchitectural units, we use the reported area from the synthesis. For the memory elements, we use \code{CACTI-P}~\cite{cactip} and the numbers reported in \eyeriss~\cite{eyeriss:jssc:2017}. To be consistent in the results, we scaled down the reported area numbers in \eyeriss from \code{65\,nm} to \code{45\,nm}. For a fair comparison between \eyeriss and \ganax, the same number of PEs and the same on-chip memory are used for both accelerators. Under this setting, \ganax has an area overhead of \ganaxarea compared to \eyeriss. \niparagraph{Microarchitectural simulation.} Table~\ref{tab:area} shows the major microarchitectural parameters of \ganax. We implement a microarchitectural simulator on top of the \eyeriss simulator~\cite{tetris:asplos:2017}. The energy numbers extracted from logic synthesis and \bench{CACTI-P} are integrated into the simulator to measure the energy consumption of the evaluated network models on \ganax. To evaluate our proposed accelerator, we extend the \eyeriss simulator with the proposed ISA extensions and the \ganax flow of data. For all the baseline numbers, we use the plain version of the simulator. \section{Background and Motivation} \label{sec:background} To go beyond image and text recognition, machines need a model of how the world functions, that is, the ability to predict. The most notable successes in deep learning have come from using supervised learning to map a high-dimensional input to a class label.
Conventional supervised learning scales poorly, as it requires large amounts of labeled data. Hence, there are two main challenges: (1) labeling millions of data points requires extensive time and effort and (2) in various applications, even generating the data itself is complex and requires significant time and effort. To overcome these challenges, various semi-supervised and unsupervised learning techniques have been introduced. The goal of unsupervised learning is to learn, purely from unlabeled data, representations that are (i) interpretable, (ii) easily transferable to novel tasks and novel object categories, and (iii) able to disentangle the informative representation of the data from the noise [1]. This is key to enabling machines to predict and comprehend without significant human intervention for training. Adversarial networks have recently emerged as an alternative way to efficiently train machines. An adversarial network consists of a generator and a discriminator model, where the former tries to generate outputs that are as close as possible to the real counterparts, while the latter tries to distinguish the outputs of the generator from the real counterparts. Given the positive feedback loop between these two networks, they optimize themselves such that (1) the generator produces more realistic outputs and (2) the discriminator makes significantly more accurate predictions. Generative adversarial networks were recently introduced by Goodfellow et al.~\cite{gan:goodfellow} to bridge the gap between the success of supervised and unsupervised learning. In their proposed GAN, the generative model is set in competition with an adversary: a discriminative model that learns to predict whether a sample is from the model distribution or the data distribution. In another work, Mirza et al. [3] introduced conditional GANs to provide control over data generation for various modes.
Radford et al.~\cite{dcgan} proposed a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that shows competitive performance compared with other unsupervised learning algorithms while exhibiting stable training in most settings. Salimans et al. [5] proposed new architectural features and training procedures applicable to GANs. Their primary objective is to improve the effectiveness of generative adversarial networks for semi-supervised learning by learning from additional unlabeled data. Finally, Liu et al. [6] recently proposed the coupled generative adversarial network (CoGAN) in the context of image recognition, where CoGAN can learn a joint distribution without any tuple of corresponding images. As machine learning becomes ubiquitous in every imaginable application, unsupervised learning becomes significantly important, as it has the capability to unlock the true potential of artificial intelligence. As mentioned above, the existing literature primarily focuses on various system-level and algorithmic-level enhancements of the GANs corresponding to unsupervised learning. For unsupervised learning methods and algorithms to become practical, we need proper hardware solutions that optimize the overall power/performance envelope to allow us to sustain the exponentially increasing demand for computational power. \subsection{Generative Adversarial Neural Networks} \niparagraph{Discriminative model.} A discriminative model is a binary classifier, in the form of a conventional deep CNN, that determines whether a given input is genuine or has been artificially generated. \niparagraph{Generative model.} A transposed convolutional layer carries out a regular convolution but reverts its spatial transformation. \subsection{Motivation} There are three main motivations for designing a new accelerator for generative adversarial neural networks: \begin{enumerate} \item Since there is an imbalance in the number of non-zero rows, some of the PEs in \eyeriss remain idle during the computation of the generative models. \item A large fraction of MAC operations within each deconvolution window are zero.
Therefore, each PE wastes a large fraction of its cycles performing ineffectual operations. \item For the deconvolution operation, each input feature map is padded with a large number of zeros. Sending these zeros to each PE wastes a significant amount of energy and cycles. \end{enumerate} \ganax aims to resolve these problems while efficiently accelerating both generative and discriminative models. \if 0 \section{Generative Adversarial Networks} \label{sec:overview} \begin{figure} \centering \end{figure} A Generative Adversarial Neural Network (GANN) is a class of neural network that converts latent space representations into high-dimensional data that is relatively similar to the given input data. A GANN comprises two main neural network models: (1) a \emph{generative model} and (2) a \emph{discriminative model}. These two models work together with the ultimate goal of learning to generate synthetic data that look authentic to a human observer. More specifically, a GANN is a neural network model that learns the data distribution of the given input data. From the learned data distribution, the GANN can later generate synthetic output data that are relatively similar to the given input training data. \niparagraph{How does a GANN model work?} Figure~\ref{fig:gan-hl} shows the high-level structure of a GANN for generating synthetic images of dogs. The generative model generates synthetic data and sends it to the discriminative model. In the first iterations of training, the discriminative model marks the generated image as fake. The discriminative model also provides feedback to the generative model on how far the generated image is from the genuine input data. Based on the feedback received from the discriminative model, the generative model attempts to make the generated images closer to the genuine input image data.
During this cycle of competition between the discriminative and generative models, both models attempt to improve the quality of their work. The discriminative model gets better at differentiating the synthetic data from genuine data, and the generative model improves at generating synthetic data that look like the genuine data. This cycle of training continues until the discriminative model gets fooled by the generated synthetic image and identifies the generated image as a genuine image. In the following paragraphs, we delve into the architecture of each of these models and their main operations. \niparagraph{Generative model.} A generative model (G) in a GANN is composed of multiple layers, in which each layer performs a specific operation on its input. The operation performed at each layer transforms the input into another useful representation. The main task of a generative model is to artificially generate synthetic data from latent representations. The latent representations are the conceptual representation (high-level abstraction) of a possible output data item, such as an image or the sound of a musical instrument, in a significantly succinct and compressed form. The latent representations are given to the generative model as inputs. Then, the generative model applies a sequence of operations at each layer of the model and \emph{upsamples} the latent representations into more detailed and uncompressed representations. Finally, the last layer of the generative model produces the synthetic output data. The basic operation in the layers of a generative model is the \emph{transposed convolution} (also known as fractionally strided convolution or, incorrectly, as deconvolution). The transposed convolution operation is relatively similar to the convolution operation, in the sense that it performs a point-wise product and summation between an input feature map (ifmap) and a sliding window of a filter to produce an upsampled output feature map (ofmap).
However, the transposed convolution operation is more complex than the convolution operation. It differs from the convolution operation mainly in the way that the ifmap is padded before applying the actual operation. \niparagraph{Transposed convolution.} The main task of the transposed convolution operation is to gradually upsample the input data at each layer in order to finally convert the latent representations into synthetic data. The transposed convolution operation applies a point-wise product between a transformed ifmap and a filter. The results of the point-wise product are accumulated together to generate an element of the output feature map. In contrast to the convolution operation, the transposed convolution operation usually requires adding multiple columns and rows of zeros to the ifmap before kicking off the actual point-wise product operation. Figure~\ref{fig:trans} shows an example of a 2D 5$\times$5 transposed convolution operation with stride one, as mainly used in GANNs. The sizes of the nontransformed ifmap and the filter are 4$\times$4 and 5$\times$5, respectively. First, we need to transform the ifmap by inserting columns and rows of zeros both at the borders and between the rows and columns of the nontransformed ifmap. As shown in Figure~\ref{fig:trans}, applying the transformation on the ifmap increases its size from a 2D ifmap of 4$\times$4 to a 2D ifmap of 12$\times$12. After applying the transformation on the ifmap, the transposed convolution operation applies a point-wise product of the sliding window of the 5$\times$5 filter to the ifmap. Then, the results of the product are accumulated together to generate one element of the ofmap. Applying the sliding window of the filter to all the sub-windows of the ifmap generates an upsampled version of the ifmap with the size of 8$\times$8.
\niparagraph{Discriminative model.} A discriminator model (D) is basically a binary classifier in the form of a normal deep Convolutional Neural Network (CNN). The main responsibility of the discriminator model is to determine whether a given input data item is genuine or has been artificially created by the generator model. The main operations of the discriminative model are similar to those of conventional deep CNNs~\cite{alexnet, googlenet, vgg, resnet} and include: \emph{convolution, activation, pooling, normalization,} and \emph{fully-connected}. Recent work~\cite{eyeriss,tetris} shows that the primary computation of a CNN occurs in the convolution layers. Hence, the main effort in many recent works is to accelerate the convolution operation. The convolution operation applies a point-wise product of a sliding window of a filter to an ifmap. Finally, the elements of the point-wise product of each window are summed together to generate one element of the output feature map (ofmap). After each convolution operation, a non-linear activation function (mostly ReLU in recent CNN architectures) is applied to the result to generate the final value of the output feature map. Different channels of the output feature map are produced by applying different filters to the ifmap. The dimensions of a filter and an ifmap are normally 3D. The convolution operation is usually performed on a batch of 3D filters and ifmaps. Equation~\ref{eq:conv} shows the basic computation for a convolution operation between a filter ($F$) and an ifmap ($I$) to produce one element of an ofmap ($O$): \begin{equation} \label{eq:conv} O[k][i][j] = f\Big(\sum_{c}\sum_{x}\sum_{y} I[c][i+x][j+y] \times F_{k}[c][x][y]\Big) \end{equation} \fi \end{comment} \section{Related Work} \label{sec:related} \ganax has a fundamentally different accelerator architecture than prior proposals for deep network acceleration.
In contrast to prior work that mostly focuses on the convolution operation, \ganax accelerates the transposed convolution operation, a fundamentally different operation than conventional convolution. Below, we overview the work most relevant to ours along two dimensions: neural network acceleration and MIMD-SIMD acceleration. \niparagraph{Neural network acceleration.} Accelerator design for neural networks has become a major line of computer architecture research in recent years. A large body of prior work has explored the design space of neural network acceleration, which can be categorized into ASICs~\cite{bitfusion:isca:2018,tetris:asplos:2017,eyeriss:jssc:2017,scnn:isca:2017,eyeriss:isca:2016,eie:isca:2016,cambricon:micro:2016,cnvlutin:isca:2016,truenorth:arxiv:2016,shidiannao:isca:2015,ngpu:micro:2015,nn:general:pact:2015,diannao:isca:2014,olivier:isca:2013,npu:micro:2012}, FPGA implementations~\cite{tabla:hpca:2016,dnnweaver:micro:2016,snnap:hpca:2015,FPGADeep:fpga:2015,neuflow:cvpr:2011}, using unconventional devices for acceleration~\cite{isaac:isca:2016,prime:isca:2016,anpu:isca:2014}, and dataflow optimizations~\cite{cnn:loop:fpga:2017,pipelayer:hpca:2017,flexflow:hpca:2017,deep-comp:iclr:2016,eyeriss:isca:2016,blocking:systematic:arxiv:2016,divergent:taco:2015}. Most of these studies have focused on accelerator design and optimization of merely the convolution operation, the most compute-intensive operation in deep convolutional neural networks. \eyeriss~\cite{eyeriss:isca:2016} proposes a row-stationary dataflow that yields high energy efficiency for the convolution operation. \eyeriss exploits data gating to skip zero inputs and further improve the energy efficiency of the accelerator. However, \eyeriss still wastes cycles detecting the zero-valued inputs. Cnvlutin~\cite{cnvlutin:isca:2016} saves compute cycles and energy for zero-valued inputs but still wastes resources on zero-valued weights.
In contrast, Cambricon-X~\cite{cambricon:micro:2016} can skip zero-valued weights but still wastes compute cycles and energy on zero-valued inputs. SCNN~\cite{scnn:isca:2017} proposes an accelerator that can skip both zero-valued inputs and weights and efficiently performs convolution on highly sparse data. However, SCNN can neither handle the dynamic zero-insertion in input feature maps nor efficiently perform the non-sparse vector-vector multiplications that dominate the discriminative models of GANs. None of these works can perform zero-insertion into the input feature maps, which is a fundamental requisite for the transposed convolution operation in the generative models. Compared to these successful prior works in neural network acceleration, \ganax proposes a unified architecture for efficient acceleration of both conventional convolution and transposed convolution operations. As such, \ganax encompasses the acceleration of a wider range of neural network models. \niparagraph{MIMD-SIMD accelerators.} While the idea of access-execute is not new, \ganax extends the concept of access-execute architecture~\cite{stream:accl:isca:2017,decoupled:affine:isca:2017,decoupled:prefetching:micro:2016,decoupled:smith:sigarch:1982} to the finest granularity of computation for each individual operand for deep network acceleration. A wealth of research has studied the benefits of MIMD-SIMD architectures in accelerating specific applications~\cite{pasm,precision,netra,superb,simd-mimd:image,simd-mimd:reconf,simd-mimd:fpga,mixed-mode:fpga,simd-mimd:augmented}. Most of these works have focused on accelerating computer vision applications. For example, PRECISION~\cite{precision} proposes a reconfigurable hybrid MIMD-SIMD architecture for embedded computer vision. In the same line of research, a recent work~\cite{simd-mimd:augmented} proposes a multicore architecture for real-time processing of augmented reality applications.
The proposed architecture leverages SIMD and MIMD for data- and task-level parallelism, respectively. While these works have studied the benefits of MIMD-SIMD acceleration mostly for computer vision applications, they have not studied the potential gains of using MIMD and SIMD accelerators for modern machine learning applications. Prior to this work, the benefits, limits, and challenges of MIMD-SIMD architectures for modern deep model acceleration were unexplored. The \ganax architecture is thus the first to explore this uncharted territory of MIMD-SIMD acceleration for the next generation of deep networks.
\section{Implementation}\label{approach} In this section, we present three approaches for implementing our SP2 framework. \subsection{Naive Approach}\label{naive} In this approach, the auxiliary public model data contains the entire item factor matrix (i.e., all the latent item vectors and their biases). Each user's on-device \emph{private} model is then built following the steps shown in Algorithm~\ref{naivealgo}. The update equations used in this algorithm are similar to those used in the MF model in Section~\ref{background}. \begin{algorithm} \caption{Naive method to build on-device \emph{private} model} \label{naivealgo} \begin{algorithmic}[1] \Require $\delta \leftarrow$ learning rate, $\lambda \leftarrow$ reg. parameter, $epochs \leftarrow$ number of epochs, $Q \leftarrow$ aux. public model data containing all latent item vectors $(q_i)$, item biases $(b_i)$ and global ratings mean $(\mu)$, $p_u \leftarrow$ \emph{public} user latent vector for $u$, $b_u \leftarrow$ \emph{public} user bias for $u$, $\Omega^u_{\text{private}} \leftarrow$ private ratings by $u$ \Ensure $p_u^* \leftarrow$ \emph{private} user latent vector for $u$, $b_u^* \leftarrow$ \emph{private} user bias for $u$ \Procedure{BuildPrivateModel}{$\delta,\lambda,epochs, Q, p_u, b_u, \Omega^u_{\text{private}}$} \State $p_u^* \leftarrow p_u, b_u^* \leftarrow b_u$ \For{$e=0;e<epochs;e++$} \ForAll{$ r_{ui} \in \Omega^u_{\text{private}}$} \State $\hat{r}_{ui} = \mu + b_u^* + b_i + q_i^Tp_u^*$ \State $e_{ui} = r_{ui} - \hat{r}_{ui}$ \State $b_u^* \leftarrow b_u^* + \delta(e_{ui} - \lambda b_u^*) $ \State $b_i \leftarrow b_i + \delta(e_{ui} - \lambda b_i) $ \State $p_u^* \leftarrow p_u^* + \delta(e_{ui}q_i - \lambda p_u^*)$ \State $q_i \leftarrow q_i + \delta(e_{ui}p_u^* - \lambda q_i)$ \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection{Top-$N$ recommendation} Once the private model is built for user $u$, we can locally predict the rating for
any item, as shown in Equation~(\ref{ratestimate}), using $p_u^*, b_u^*$, since $q_i, b_i$ are known for all the items as part of the auxiliary public model data. These predictions can be ranked locally on the user device to provide the top-$N$ recommendations. \subsubsection{Privacy Consideration} It is important to highlight some privacy considerations behind our naive approach: \noindent $\bullet$ Even though a user only needs the corresponding item factors for each of the privately rated items to compute the on-device \emph{private} model, the user cannot simply fetch only the desired item factors from the central recommender system, since the request itself would reveal the \emph{private} ratings. \noindent $\bullet$ Consider an alternative scenario, where a user downloads only some additional irrelevant item factors to obfuscate the \emph{private} user information. This would require downloading significantly fewer item factors, as compared to downloading the entire item factor matrix. However, this would make the top-$N$ computation infeasible locally: the user would need to send $p_u^*, b_u^*$ back to the server, which would allow the server to guess the user's \emph{private} ratings. Similarly, sending a randomly perturbed \emph{private} user factor back to the server can obfuscate the \emph{private} information, but will degrade the quality of the top-$N$ recommendations. \noindent $\bullet$ Consider another alternative strategy, where the actual \emph{private} user factor is sent along with multiple ($k$) fake user factors, thereby obfuscating the private information and making it $k$-anonymous~\cite{MicrosoftDiffPrivacy}. However, upload speeds are considerably lower than download speeds. In addition, the overall computation and communication costs can also increase by orders of magnitude, as the central servers need to compute multiple top-$N$ recommendation lists for every user and then send all of them back.
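For reference, the on-device refinement of Algorithm~\ref{naivealgo} can be sketched in a few lines of NumPy. The toy dimensions, random data, and the function name are ours; the update rules mirror the algorithm's steps, applied in the same order:

```python
import numpy as np

def build_private_model(Q, b_items, mu, p_u, b_u, private_ratings,
                        delta=0.01, lam=0.02, epochs=100):
    """SGD refinement of the public user factor against private ratings,
    following the update order of Algorithm 1 (toy sketch)."""
    p, b = p_u.copy(), b_u
    for _ in range(epochs):
        for i, r_ui in private_ratings.items():
            r_hat = mu + b + b_items[i] + Q[i] @ p       # predicted rating
            e = r_ui - r_hat                             # prediction error
            b += delta * (e - lam * b)                   # private user bias
            b_items[i] += delta * (e - lam * b_items[i]) # local item bias
            p += delta * (e * Q[i] - lam * p)            # private user factor
            Q[i] += delta * (e * p - lam * Q[i])         # local copy of q_i
    return p, b

rng = np.random.default_rng(0)
Q = rng.normal(scale=0.1, size=(50, 8))   # downloaded public item factors
b_items = np.zeros(50)
p_u, b_u, mu = rng.normal(scale=0.1, size=8), 0.0, 3.5
private = {3: 5.0, 17: 1.0}               # ratings that never leave the device
p_star, b_star = build_private_model(Q, b_items, mu, p_u, b_u, private)
```

A rating for any item $i$ can then be predicted locally as `mu + b_star + b_items[i] + Q[i] @ p_star` and the items ranked on-device for the top-$N$ list, so nothing private is uploaded.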
It is important to note that the item factor matrix is downloaded only once during model building. In some situations, this does not involve unreasonable communication or storage overhead from the user end. For example, the total size (in MB) of the factors for all items in the item set $I$, each of dimension $k$, is given by $k \times |I| \times 8 / 2^{20}$, where each item factor is assumed to be an array of type \texttt{double}. Assuming $k=100$, the download sizes for all the item factors (in raw uncompressed format) for real datasets like MovieLens \cite{wsdm16} and Netflix \cite{wsdm16} are 4MB and 10MB respectively. However, for large industrial datasets (like Amazon \cite{mcauley2015image}) with close to 1 million items, the raw size of all item factors (of dimension 100) grows linearly to around 763MB. \subsection{Clustering} We propose this method to ensure scalability of the SP2 framework as the number of items becomes large. The intuition behind this approach is that the public auxiliary model data should consist of a set of approximate item factors $(Q')$, which is much smaller than the set of all item factors $(Q)$, i.e. $|Q'| \ll |Q|$. Now, for each private rating $r_{ui}$, user $u$ should use the approximate item factor $\tilde{q_i'}$ instead of the actual item factor $q_i'$ to compute the \emph{private} model. This approximation introduces an error in the $e_{ui}$ calculation for each private rating $r_{ui}$, given by $p_u'^{*T}(q_i' - \tilde{q_i'})$, where $p_u'^*$ is the \emph{private} user factor for $u$ and $\tilde{q_i'} \in Q'$. Now, for each user $u$, we should minimize these approximation errors across all his/her \emph{private} ratings, i.e. minimize $\sum_{i \in \Omega^u_{\text{private}}}p_u'^{*T}(q_i' - \tilde{q_i'})$, or equivalently $p_u'^{*T}\sum_{i \in \Omega^u_{\text{private}}}(q_i' - \tilde{q_i'})$.
Since the central recommender system does not know $\Omega^u_{\text{private}}$ for any user, it prepares the public auxiliary model data by minimizing the approximation errors across all item factors, i.e. by minimizing $\sum_{q_i' \in Q} \parallel q_i' - \tilde{q_i'} \parallel^2_2$. This minimization goal is similar to the objective function used in clustering \cite{KMeansClustering}. Thus, the central recommender system performs this approximation through clustering, particularly using $K$-means clustering with Euclidean distance \cite{kmeans++}. The individual cluster mean is treated as the approximate item factor for all the items in the cluster. In summary, the public auxiliary model data for this method comprises (1) the $K$ cluster centroids obtained after applying the $K$-means algorithm on all the item factors, (2) the cluster membership information, which identifies which cluster an item belongs to, and (3) the global ratings average. Using this public auxiliary model data, algorithm \ref{clusteringalgo} computes the on-device \emph{private} model for each user. \begin{algorithm} \caption{Building on-device \emph{private} model via clustering} \label{clusteringalgo} \begin{algorithmic}[1] \Require $\delta \leftarrow$ learning rate, $\lambda \leftarrow$ regularization parameter, $epochs \leftarrow$ number of epochs, $Q' \leftarrow$ Aux.
public model data containing all cluster centers having latent vectors $(c_i)$, biases $({b^c_i})$ and global ratings mean $(\mu)$, $\rho \leftarrow$ Cluster membership function, where item $i$ is mapped to cluster $\rho(i)$, $p_u \leftarrow$ \emph{public} user latent vector for $u$, $b_u \leftarrow$ \emph{public} user bias for $u$, $\Omega^u_{\text{private}} \leftarrow$ private ratings by $u$ \Ensure $p_u^* \leftarrow$ \emph{private} user latent vector for $u$, $b_u^* \leftarrow$ \emph{private} user bias for $u$ \Procedure{}{$\delta,\lambda,epochs, Q', p_u, b_u, \Omega^u_{\text{private}}$} \State $p_u^* \leftarrow p_u, b_u^* \leftarrow b_u$ \For{\textbf{each} cluster $c$} \State $N_c \leftarrow$ number of items in $c$, computed from $\rho$ \EndFor \For{$e=0;e<epochs;e++$} \ForAll{$r_{ui} \in \Omega^u_{\text{private}}$} \State $\hat{r_{ui}} = \mu + b_u^* + b^c_{\rho(i)} + c_{\rho(i)}^Tp_u^*$ \State $e_{ui} = r_{ui} - \hat{r_{ui}}$ \State $b_u^* \leftarrow b_u^* + \delta(e_{ui} - \lambda b_u^*)$ \State $b^c_{\rho(i)} \leftarrow b^c_{\rho(i)} + \delta(e_{ui} - \lambda b^c_{\rho(i)})/N_{\rho(i)}$ \State $p_u^* \leftarrow p_u^* + \delta(e_{ui}c_{\rho(i)} - \lambda p_u^*)$ \State $c_{\rho(i)} \leftarrow c_{\rho(i)} + \delta(e_{ui}p_u^* - \lambda c_{\rho(i)})/N_{\rho(i)}$ \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm} Note, the cluster membership information for an item set $I$ would require $4 \times |I|$ bytes, assuming each cluster id is an integer which takes 4 bytes. For $K$ clusters, this membership information size can be further reduced drastically using $K$ bloom filters \cite{Bloom1970,BloomFilterWeb}, where each bloom filter represents a cluster. \subsubsection{Top-$N$ recommendation} In this method, recall that the actual item factors are not available locally on the user device.
Therefore, we pursue a different strategy here: user $u$ requests the \emph{public} item factors for the top-$N'$ recommended items $(N' > N)$ from the central recommender system. The latter computes this list using $u$'s \emph{public} user factor $(p_u')$ and then sends the top $N'$ items and their corresponding \emph{public} item factors to $u$. User $u$ can re-rank these $N'$ items based on his/her \emph{private} user factor $(p_u^*)$ and then select the top-$N$. Note, this top-$N'$ computation by the central servers is not a privacy threat, as it can be easily calculated without any information about the user's private ratings. Also, recall Assumption 2 in Section \ref{model}, which ensures that incorrect top-$N'$ information will not be sent by the central servers. \subsection{Joint Optimization} Our previous approach was based on hard assignment, where each item was assigned to only one cluster. However, soft clustering techniques like non-negative matrix factorization (NMF) \cite{MultUpdateProjGradDescent} consider each point as a weighted sum of different cluster centers. In this approach, we perform soft clustering on all the item factors while the \emph{public} recommendation model is being built. In other words, the central recommender system jointly learns the \emph{public} model and the soft cluster assignments. For this, we revise equations (\ref{ratestimate}) and (\ref{l2loss}) to (\ref{newratestimate}) and (\ref{newl2loss}), where $C$ denotes the cluster center matrix of dimension ${k \times z}$ ($z$ being the number of clusters), and $w_i$ is a column vector representing the different (non-negative) cluster weights for item $i$. This problem can be formulated as a constrained optimization problem, and algorithm \ref{jointalgo} shows how the central recommender system performs this joint optimization.
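One SGD step of the joint model, including the non-negativity projection on the cluster weights, can be sketched as follows. This is a minimal NumPy sketch with illustrative variable names and toy dimensions; the item factor is represented implicitly as $Cw_i$, and the weight update is followed by a clip at zero.

```python
import numpy as np

def joint_sgd_step(r_ui, mu, b_u, b_i, p_u, w_i, C, delta=0.01, lam=0.02):
    """One SGD step of the joint objective: the item factor is the
    weighted sum C @ w_i of cluster centers; after the gradient step
    the cluster weights are clipped at zero (projected gradient
    descent) so they stay non-negative."""
    q_i = C @ w_i                                  # implicit item factor (k,)
    e = r_ui - (mu + b_u + b_i + q_i @ p_u)
    b_u_new = b_u + delta * (e - lam * b_u)
    b_i_new = b_i + delta * (e - lam * b_i)
    p_u_new = p_u + delta * (e * q_i - lam * p_u)
    C_new = C + delta * (e * np.outer(p_u, w_i) - lam * C)     # k x z
    w_i_new = np.maximum(w_i + delta * (e * C.T @ p_u - lam * w_i), 0.0)
    return b_u_new, b_i_new, p_u_new, w_i_new, C_new

# Toy example: factor dimension k = 8, z = 4 clusters.
rng = np.random.default_rng(0)
k, z = 8, 4
C = rng.normal(0.0, 0.1, (k, z))
p_u, w_i = rng.normal(0.0, 0.1, k), rng.uniform(0.0, 1.0, z)
b_u2, b_i2, p_u2, w_i2, C2 = joint_sgd_step(4.0, 3.5, 0.0, 0.0, p_u, w_i, C)
```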
One key aspect of this algorithm is that the weights are updated using projected gradient descent (PGD) \cite{ProjectedGradientDescent} (the $\text{Max}(w,0)$ step in algorithm \ref{jointalgo}), in order to ensure that all cluster weights remain non-negative. This facilitates finding the top-$R$ cluster assignments for any item by selecting the $R$ highest corresponding weights. Finally, the auxiliary model data for this approach consists of the following: (1) the cluster center matrix $C$, (2) the item biases $b_i$, (3) the top-$R$ cluster weights (in descending order) for each item $i$ together with the corresponding cluster ids, and (4) the global ratings mean. Using $C$ and the top-$R$ cluster weights for any item $i$, user $u$ can locally approximate the \emph{public} item factor of any item by the weighted sum of its top-$R$ cluster centers, i.e. $\sum\limits_{n \in \text{top $R$}}w_nC_n$ ($C_n$ represents the $n^{th}$ cluster center). With this approximation, $u$ can again use algorithm \ref{naivealgo} to compute the on-device \emph{private} model. Note, when $R$ is small, we can save significant communication cost by sending only the top-$R$ weights, as compared to the naive approach. \begin{equation}\label{newratestimate} \begin{aligned} \hat{r_{ui}} = \mu + b_u + b_i + w_i^TC^Tp_u \end{aligned} \end{equation} \begin{equation}\label{newl2loss} \begin{aligned} \text{min} \sum\limits_{r_{ui}\in \Omega_{\text{public}}'}{ (r_{ui}-\hat{r_{ui}})^2 + \lambda(b_i^2 + b_u^2 + \parallel w_i\parallel^2_2 + \parallel C\parallel^2_2 + \parallel p_u\parallel^2_2 )}\\ \text{s.t. } w_{ij} \geq 0.
\end{aligned} \end{equation} \begin{algorithm} \caption{Joint optimization based matrix factorization} \label{jointalgo} \begin{algorithmic}[1] \Require $\delta \leftarrow$ learning rate, $\lambda \leftarrow$ regularization parameter, $epochs \leftarrow$ number of epochs, $\Omega_{\text{public}}' \leftarrow$ set of all \emph{public} ratings \Ensure $C, p_u, b_u, b_i, w_i$ for all users and items \Procedure{}{$\delta,\lambda,epochs, \Omega_{\text{public}}'$} \State $\mu = \text{Mean}(\Omega_{\text{public}}')$ \State Initialize $b_u, p_u, b_i, w_i, C$ with values from $N(0,0.01)$. \For{$e=0;e<epochs;e++$} \ForAll{$r_{ui} \in \Omega_{\text{public}}'$} \State $\hat{r_{ui}} = \mu + b_u + b_i + w_i^TC^Tp_u$ \State $e_{ui} = r_{ui} - \hat{r_{ui}}$ \State $b_u \leftarrow b_u + \delta(e_{ui} - \lambda b_u)$ \State $b_i \leftarrow b_i + \delta(e_{ui} - \lambda b_i)$ \State $p_u \leftarrow p_u + \delta(e_{ui}Cw_i - \lambda p_u)$ \State $C \leftarrow C + \delta(e_{ui}p_uw_i^T - \lambda C)$ \State $w_i \leftarrow w_i + \delta(e_{ui}C^Tp_u - \lambda w_i)$ \For{\textbf{each} $w \in w_i$} \State $w \leftarrow \text{Max}(w,0)$ \Comment{PGD} \EndFor \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection{Top-$N$ recommendation} Interestingly, with the auxiliary model data for this method, user $u$ can locally compute the approximation of each item factor, as mentioned above. As a consequence, $u$ is also able to locally compute the top-$N$ recommendations using these approximate item factors. \subsection{SP2 vs. Different Baselines} \noindent $\bullet$ \emph{Absolute Optimistic (Everything public)}: Here, we assume that every user optimistically shares everything publicly without any privacy concern, i.e. a single MF model is built on the entire training data. Theoretically, this should have the best performance, thus providing the overall upper bound.
\noindent $\bullet$ \emph{Absolute Pessimistic (Everything private)}: Here, we assume that every user is pessimistic and does not share anything publicly due to privacy concerns. Thus, separate models are built for each user based \emph{only} on their individual ratings, which, in practice, is as good as using that user's average rating for all his/her predictions. \noindent $\bullet$ \emph{Only Public}: This mimics the standard CF scenario, where privacy preserving mechanisms are absent. Consequently, the users only rate the items they are comfortable sharing; they refrain from explicitly rating sensitive items. We build a single MF model using \textbf{only} the \emph{public} ratings and ignore the \emph{private} ratings completely. \noindent $\bullet$ \emph{Distributed aggregation}: Shokri et al. \cite{ShokriRECSYS09} proposed peer-to-peer based data obfuscation policies, which obscure the user ratings information before uploading it to a central server that eventually builds the final recommendation model. The three obfuscation policies are: (1) \emph{Fixed Random (FR) Selection}: A fixed set of ratings is randomly selected from other peers for obfuscation. (2) \emph{Similarity-based Random (SR) Selection}: A peer randomly sends a fraction of its ratings to the user for obfuscation depending on its similarity (Pearson, cosine, etc.) with the user. (3) \emph{Similarity-based Minimum Rating (SM) Frequency Selection}: This approach is similar to the previous one, except that instead of randomly selecting the ratings, higher preference is given to the ratings of those items that have been rated the least number of times. \noindent $\bullet$ \emph{Fully decentralized recommendation}: Berkovsky et al. \cite{BerkovskyRECSYS07} proposed a fully decentralized peer-to-peer architecture, where each user requests a rating for an item by exposing a part of his/her ratings to a few trusted peers.
The peers obfuscate their profiles by generating fake ratings and then compute their profile similarities with the user. Finally, the user computes the rating prediction for the item based on the ratings received from the peers and the similarities between them. \noindent $\bullet$ \emph{Differential Privacy}: McSherry et al. \cite{MicrosoftDiffPrivacy} sufficiently mask the ratings matrix by adding random noise, drawn from a normal distribution, to generate a noisy global average rating for each movie. These global averages are then used to generate $\beta_{m}$ fictitious ratings to further obscure the ratings matrix. This method ensures that the final model obtained does not allow inference of the presence or absence of any user rating. For all MF models, the hyper-parameters were initialized with default values from the \texttt{Surprise} package\footnote{\url{http://surprise.readthedocs.io/en/stable/matrix\_factorization.html}}. \subsection{Private Ratings Allocation} We first provide the following two definitions, which are used later for the private ratings allocation: \noindent $\bullet$ \emph{User privacy ratio} for a user $u$ is defined as the fraction of $u$'s total ratings which are marked \emph{private} by $u$. \noindent $\bullet$ \emph{Item privacy ratio} for an item $i$ is likewise defined as the fraction of users (among those who rated $i$) who have marked their rating of $i$ as \emph{private}. In order to examine the SP2 framework under the two different hypotheses stated in Section \ref{model}, we preprocess the datasets as discussed below: \noindent $\bullet$ \textbf{H1.} We generate user privacy ratios in the interval $[0, 1]$ for all $n$ users from a beta distribution \cite{BetaDistr1} with parameters $\alpha, \beta$. For each user $u$ with user privacy ratio $\gamma_u$, a $(1 - \gamma_u)$ fraction of $u$'s ratings is randomly selected and marked as \emph{public}, while the remainder of $u$'s ratings are considered \emph{private}.
\noindent $\bullet$ \textbf{H2.} Here, we generate item privacy ratios for all $m$ items from a beta distribution. For each item $i$ with item privacy ratio $\gamma_i$, a $(1 - \gamma_i)$ fraction of the ratings assigned to $i$ is randomly selected and marked as \emph{public}, while the remainder of $i$'s ratings are considered \emph{private}. For all our empirical analysis, we considered the following four beta distributions, shown in Figure \ref{fig:BetaDists}: \noindent $1.$ \emph{Mostly Balanced $(\alpha = 2, \beta = 2)$}: Most user/item privacy ratios are likely to be close to the theoretical mean value of 0.5. \noindent $2.$ \emph{Mostly Extreme $(\alpha = 0.5, \beta = 0.5)$}: Most users/items have either very high or very low privacy ratios. The overall average of the privacy ratios will be close to 0.5. \noindent $3.$ \emph{Mostly Pessimistic $(\alpha = 5, \beta = 1)$}: Most users/items have very high privacy ratios. \noindent $4.$ \emph{Mostly Optimistic $(\alpha = 1, \beta = 5)$}: Most users/items have very low privacy ratios. \begin{figure} \includegraphics[width=0.8\linewidth]{beta_dist} \caption{Probability density functions of the four beta distributions used in private ratings allocation.} \label{fig:BetaDists} \end{figure} \section{Conclusion}\label{conclusion} In this paper, we proposed a novel selective privacy preserving (SP2) paradigm for CF based recommender systems that allows users to keep a portion of their ratings \emph{private} while delivering better recommendations than other privacy preserving techniques. We have demonstrated the efficacy of our approach under different configurations by comparing it against other baselines on two real datasets. Finally, our framework empowers users to define their own privacy policy by determining which ratings should be \emph{private} and which ones should be \emph{public}.
\section{Experiments}\label{experiments} \input{baselines} \input{results} \section{SP2 Architecture}\label{architecture} Our proposed \emph{selective privacy preserving} (SP2) framework for CF algorithms is broadly based on the popular matrix factorization (MF) method, mainly due to its better performance, scalability and industrial applicability \cite{MFNetflix, MF16, Mahout, MLlib, DasCIKM17}. However, some of our discussions can also be extended to traditional nearest neighbor based CF algorithms \cite{MFNeighbor08}. We next briefly review the MF technique in Section \ref{background}. \subsection{Background}\label{background} In the classic biased MF model \cite{MFNetflix}, we try to learn the latent user and item factors (assumed to be in the same feature space of dimension $k$) from an incomplete ratings matrix \cite{MFNeighbor08}. More formally, the estimated rating $\hat{r_{ui}}$ for a user $u$ on item $i$ is given by equation (\ref{ratestimate}). The corresponding symbol definitions are provided in Table \ref{tab:symbols}. We compute the user and item latent factors by minimizing the regularized squared error over all the known ratings, as shown in (\ref{l2loss}). This is done either using the classic Alternating Least Squares (ALS) method \cite{AlsRECSYS12,DasCIKM17,MLlib}, which computes closed form solutions, or via Stochastic Gradient Descent (SGD) \cite{MFNetflix}, which enjoys strong convergence guarantees \cite{NipsMFTheory,PMLRGDTheory} and many desirable properties for scalability \cite{FastSGDMFKDD15,AsyncSGD}. The variable update equations for SGD are given by equation (\ref{sgdupdates}). For simplicity, we assume from now on that the user and item factors contain the respective biases, i.e. the user factor for $u$ ($p_u'$) denotes the column vector $[\, b_u \quad 1 \quad p_u^T\,]^T$ and the item factor for $i$ ($q_i'$) denotes the column vector $[\, 1 \quad b_i \quad q_i^T\,]^T$.
\begin{equation}\label{ratestimate} \begin{aligned} \hat{r_{ui}} = \mu + b_u + b_i + q_i^Tp_u = \mu + q_i'^Tp_u' \end{aligned} \end{equation} \begin{table} \caption{Definitions of symbols used in (\ref{ratestimate}) - (\ref{sgdupdates})} \label{tab:symbols} \begin{tabular}{llll} \toprule Symbol & Definition & Symbol & Definition\\ \midrule $\mu$ & global mean of ratings & $\Omega$ & set of observed ratings\\ $b_u$ & bias for user $u$ & $p_u$ & latent vector for user $u$\\ $b_i$ & bias for item $i$ & $q_i$ & latent vector for item $i$\\ $\delta$ & learning rate & $\lambda$ & regularization parameter \\ $r_{ui}$ & actual rating of $i$ by $u$ & $\hat{r_{ui}}$ & prediction of $u$'s rating for $i$\\ $e_{ui}$ & prediction error $r_{ui}-\hat{r_{ui}}$ \\ \bottomrule \end{tabular} \end{table} \begin{equation}\label{l2loss} \begin{aligned} \text{min} \sum\limits_{r_{ui}\in \Omega}{ (r_{ui}-\hat{r_{ui}})^2 + \lambda(b_i^2 + b_u^2 + \parallel q_i \parallel^2_2 + \parallel p_u \parallel^2_2 )} \end{aligned} \end{equation} \begin{equation} \label{sgdupdates} \begin{aligned} b_u &\leftarrow b_u + \delta(e_{ui} - \lambda b_u) \\ b_i &\leftarrow b_i + \delta(e_{ui} - \lambda b_i) \\ p_u &\leftarrow p_u + \delta(e_{ui}q_i - \lambda p_u) \\ q_i &\leftarrow q_i + \delta(e_{ui}p_u - \lambda q_i) \end{aligned} \end{equation} \subsection{Problem Formulation}\label{probform} In an SP2 framework, each user $u$ has a set of \emph{public} ratings, denoted by $\Omega^u_{\text{public}}$, and a set of \emph{private} ratings, denoted by $\Omega^u_{\text{private}}$. However, since $\Omega^u_{\text{private}}$ is known only to $u$, the set of ratings observed here by the central recommender system is $\bigcup\limits_{u} \Omega^u_{\text{public}}$. We denote the latter by the notation $\Omega_{\text{public}}'$.
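As a quick numerical check of the bias-absorbing convention $p_u' = [\, b_u \quad 1 \quad p_u^T\,]^T$, $q_i' = [\, 1 \quad b_i \quad q_i^T\,]^T$ from Section \ref{background}, the long and compact forms of the rating estimate in (\ref{ratestimate}) agree (a minimal NumPy sketch with arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5
p_u, q_i = rng.normal(size=k), rng.normal(size=k)
b_u, b_i, mu = 0.3, -0.1, 3.5

# Bias-absorbed vectors: p_u' = [b_u, 1, p_u] and q_i' = [1, b_i, q_i],
# so that q_i' . p_u' = b_u + b_i + q_i . p_u.
p_aug = np.concatenate(([b_u, 1.0], p_u))
q_aug = np.concatenate(([1.0, b_i], q_i))

r_hat = mu + b_u + b_i + q_i @ p_u      # long form of the rating estimate
r_hat_aug = mu + q_aug @ p_aug          # compact form with absorbed biases
```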
Now, our problem can be formulated as a \emph{multi-objective} optimization problem, where we attempt to minimize $n$ regularized L2 loss functions together for the $n$ users: $\text{min } (f_1, f_2, \ldots, f_n)$, where the L2 loss $f_v$ for user $v$ is given by \[ f_v = \Big[ \sum\limits_{r_{vj}\in \Omega^v_{\text{private}}}{(r_{vj}-\hat{r_{vj}})^2} \Big] + \frac{1}{n}\sum\limits_{r_{ui}\in \Omega_{\text{public}}'}{ (r_{ui}-\hat{r_{ui}})^2} + \frac{\lambda}{n}(b_i^2 + b_u^2 + \parallel q_i \parallel^2_2 + \parallel p_u \parallel^2_2 ) \] Note, traditionally multi-objective optimization problems are solved with classic techniques like \emph{linear scalarization} (also known as the weighted sum method \cite{linearScalarization}). In fact, if we assign equal weights to each user's L2 loss function, then linear scalarization \cite{linearScalarization} can reduce this problem to a single-objective mathematical optimization problem (constructed as the weighted sum of the individual objective functions), which is similar to the one discussed in Section \ref{background}. However, due to privacy considerations, all of the data (users' ratings) cannot be pooled together; this makes classic solutions to multi-objective optimization problems inapplicable here. We next outline a privacy-aware model to solve this problem. \subsection{Model}\label{model} We posit the following assumptions before summarizing our model. \\ \textbf{Assumption 1.} The central recommender system is \emph{semi-adversarial} in nature, i.e. it logs any information requested by a user and can later utilize it to guess what the user has rated privately. \\ \textbf{Assumption 2.} The central recommender system is \emph{not malicious} in nature, i.e. it will not deliberately send incorrect information to a user to adversely impact his/her recommendations.
It has an incentive to provide high quality recommendations to the users.\\ \noindent \textbf{Framework.} Based on the earlier discussions, we now outline the working of our SP2 framework: \noindent $(1)$ The central recommender system first builds a \emph{public} model based on all the users' shared \emph{public} ratings using SGD. We obtain the \emph{public} user and item factors when the error converges after a certain number of epochs. \noindent $(2)$ Each user then downloads his/her corresponding \emph{public} user factor from the central recommender system. \noindent $(3)$ Additionally, all users also download common \emph{auxiliary public model} data on their devices. This data is the same for all users, and hence can be broadcasted by the central recommender system (with authentication, in case the server cannot be fully trusted). \noindent $(4)$ Once the \emph{auxiliary public model} data and the \emph{public} user factor are locally available on the device, local updates are performed on the \emph{public} user factor using the \emph{auxiliary model} information and the \emph{private} ratings, which the user has saved on the device and has not shared with anyone. \noindent $(5)$ The final \emph{private} user factor and the \emph{private} model are stored on the user's device and never shared or communicated. \begin{figure} \includegraphics[width=0.85\linewidth]{SP2-Framework} \caption{Architecture of a selective privacy preserving (SP2) framework.} \label{SP2Framework} \end{figure} Figure \ref{SP2Framework} presents the overall architecture. Interestingly, in our framework users never upload or communicate any \emph{private} rating, even in encrypted format, thus guaranteeing privacy preservation. This is notably different from the general federated machine learning philosophy \cite{OnDevice,SecuredAggProtocol}. We elaborate on the need for this difference in Section \ref{related}.
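The client-side portion of the framework outlined above (download the public user factor and auxiliary data, update locally on private ratings, store the result on-device) can be sketched as follows. All names here (\texttt{DeviceStore}, \texttt{fetch\_*}, \texttt{local\_sgd}) are hypothetical stand-ins for real network and training code, not part of any actual API.

```python
# Hypothetical sketch of the SP2 client flow; the key property is that
# private ratings and the private model never leave the device store.

class DeviceStore:
    """Minimal stand-in for on-device storage."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def sp2_client(fetch_user_factor, fetch_aux_data, local_sgd, store):
    p_u, b_u = fetch_user_factor()            # download public user factor
    aux = fetch_aux_data()                    # broadcast aux. public data
    ratings = store.load("private_ratings")   # private ratings stay local
    p_star, b_star = local_sgd(p_u, b_u, aux, ratings)  # local updates only
    store.save("private_model", (p_star, b_star))       # stored, never sent
    return p_star, b_star

store = DeviceStore()
store.save("private_ratings", [(3, 5.0)])
model = sp2_client(lambda: ([0.1, 0.2], 0.0),
                   lambda: {"item_factors": "aux-data"},
                   lambda p, b, aux, R: (p, b + 0.01 * len(R)),  # dummy SGD
                   store)
```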
\noindent \textbf{Auxiliary public model data.} It is also important to note that the central recommender system cannot share many parts of the \emph{public} model with all the users. In many systems like Netflix, users only consent to rate a video on the condition that the central recommender system displays only the average rating for the video, instead of the individual ratings from the users' community. In such cases, the \emph{auxiliary public model} data cannot contain ratings from $\Omega_{\text{public}}'$, as that scenario also constitutes a user-privacy breach, since the users may not be comfortable sharing their explicit ratings information with each other. In the same vein, consider the example where the \emph{auxiliary public model} data comprises both \emph{public} user factors and item factors. This information alone is sufficient to identify the corresponding \emph{public} ratings of other users with reasonable confidence, thus again breaching user privacy. Furthermore, even anonymizing this information is not enough to prevent privacy leaks, as demonstrated through de-anonymization attacks on the Netflix dataset \cite{Deanonattacks}. Thus, the \emph{auxiliary public model} data needs to be designed carefully so that it not only facilitates building a better \emph{private} model on the user's device, but also simultaneously safeguards the SP2 framework from privacy breaches. In light of this, observe that the \emph{auxiliary public model} data can comprise the \emph{public} item factors alone. Each \emph{public} item factor is updated over a set of users based on their \emph{public} user factors and ratings. Thus, the final \emph{public} item factors alone do not constitute a user-privacy breach. \noindent \textbf{Private ratings distribution.} For analyzing the efficacy of our SP2 framework, it is also important to consider how users privately rate an item.
We examine two different hypotheses for modeling this: \noindent $\bullet$ \textbf{Hypothesis 1 (H1).} Each user decides independently which of his/her ratings are \emph{private}. Formally, for any two users $x$ and $y$, who have rated an item $i$ with ratings $r_{xi}$ and $r_{yi}$ respectively, $P(r_{xi}$ \text{is private} $|$ $r_{yi}$ \text{is private}$) = P(r_{xi}$ \text{is private}$)$. \noindent $\bullet$ \textbf{Hypothesis 2 (H2).} Users do not decide independently which of their ratings are \emph{private}. In other words, ratings for some items are more likely to be marked as private. Formally, using the same notation as above, $P(r_{xi}$ \text{is private} $|$ $r_{yi}$ \text{is private}$) \neq P(r_{xi}$ \text{is private}$)$. In Section \ref{experiments}, we further discuss how \emph{private} ratings are allocated in our experiments based on these two hypotheses. \section{Introduction}\label{intro} Collaborative filtering (CF) based recommender systems are ubiquitously used across a wide spectrum of online applications ranging from e-commerce (e.g. Amazon) to recreation (e.g. Spotify, Netflix, Hulu, etc.) for delivering a personalized user experience \cite{mishra2016bottom}. CF techniques are broadly classified into two types -- (i) classic \emph{nearest neighbor} based algorithms \cite{MFNeighbor08} and (ii) more recent \emph{matrix factorization} techniques \cite{MFNetflix}, of which the latter has been predominantly adopted in industrial applications \cite{DasCIKM17} for building large-scale recommender models, due to its superiority in terms of accuracy \cite{MFNetflix} and massive scalability \cite{FastSGDMFKDD15, MF16, Mahout, MLlib, Petuum, DiFacto}.
Regardless of the underlying technique, the performance of a CF system is generally driven by the ``homophilous diffusion'' \cite{CannySIGIR02} process, where users must share some of their preferences in order to identify others with similar tastes and get good recommendations from them. The performance of CF algorithms often deteriorates without adequate information, as observed in the classic \emph{cold start} \cite{ColdStart} problem. This inherent need for a user to share his/her preferences sometimes leads to serious privacy concerns. To make things more complicated, privacy is not a static concept and may vary greatly across different users, items and places. For example, different users under changing geopolitical, social and religious influences may have varying degrees of reservation about explicitly sharing their ratings on sensitive items that deal with subjects like politics, religion, sexual orientation, alcoholism, substance abuse, adultery, etc. \cite{ChowICDMW12}. Overall, these privacy concerns can prevent a user from explicitly rating many items, which reduces the overall performance of a CF algorithm, as compared to an ideal scenario where everyone freely rates all the items they consume. \subsection{Motivation}\label{motiv} In this paper, we explore the idea of letting each user define his/her own privacy. In other words, the user decides which ratings he/she can comfortably share \emph{publicly} with others, while his/her remaining ratings are considered \emph{private}, which means that they are stored only on the user's device locally and are never shared with anyone, including any peers or a centralized recommender system. Thus, this scheme enables each user to selectively define his/her own privacy. Figure \ref{fig:sp2-gui} shows an example of such an operational setup.
\begin{figure} \includegraphics[width=0.55\linewidth]{SP2-gui} \caption{Working of a selective privacy preserving (SP2) framework from a user's perspective.} \label{fig:sp2-gui} \end{figure} In this paper, we attempt to build a CF framework that preserves each user's \emph{selective privacy}, and we investigate the following issues in enabling such a framework: \noindent $\bullet$ How can we build a \emph{selective privacy preserving} (\textbf{SP2}) CF model that assimilates information from two kinds of ratings -- all users' \emph{public} ratings and each user's on-device \emph{private} ratings? \vspace{2pt} \noindent $\bullet$ How can we ensure that there is no loss of \emph{private} information in our SP2 framework? \vspace{2pt} \noindent $\bullet$ Can the SP2 framework improve the performance of a CF algorithm? In other words, does the SP2 framework improve the overall recommendation quality at all by taking into account each user's private ratings? Or should the users simply hold back from rating sensitive materials if they have any privacy concern? \vspace{2pt} \noindent $\bullet$ Can this SP2 CF model ensure scalability with respect to industrial-scale datasets? Interestingly, the selective privacy preserving framework proposed in this paper is somewhat analogous to the rules of a classic poker game\footnote{\href{https://en.wikipedia.org/wiki/Omaha_hold\_\%27em}{https://en.wikipedia.org/wiki/Omaha\_hold\_'em}} (\emph{Omaha hold 'em}), where each player tries to form the best hand combining some of the community cards (which are publicly visible to everyone) and some of the hole cards (which are privately dealt to each player). \subsection{Contributions}\label{contri} In the rest of this paper, we address the questions listed in Section \ref{motiv} and make the following contributions: \noindent $\bullet$ We mathematically formulate the selective privacy preservation problem and present a formal framework to study it (Section \ref{architecture}).
To the best of our knowledge, this is the first work under the umbrella of \emph{federated machine learning} \cite{OnDevice} that supports a \emph{private on-device} recommendation model for CF algorithms. \vspace{2pt} \noindent $\bullet$ We propose three different strategies (Section \ref{approach}) for efficiently implementing an end-to-end SP2 framework, each of which is suited to different situations. These underlying techniques ensure that an SP2 CF model incurs only a reasonable cost in terms of storage and communication overhead, even when dealing with massive industrial datasets or large machine learning models. \vspace{2pt} \noindent $\bullet$ We present analytical results on two real datasets comparing different privacy preserving and data obfuscation techniques to show the effectiveness of our SP2 framework (Section \ref{experiments}). We also empirically study what constitutes a good information sharing strategy for a user in an SP2 framework, and how much a user's recommendations are affected when he/she refrains from rating an item instead of marking it as \emph{private}. \vspace{2pt} \noindent $\bullet$ We present the results of a pilot study (Section \ref{survey}), which demonstrates that an overwhelming majority of participants are willing to adopt this technology in order to receive more relevant recommendations without sacrificing their individual privacy. \section{Related Work}\label{related} Privacy preserving recommender systems have been well explored in the literature. Peer-to-peer (P2P) techniques \cite{BerkovskyRECSYS07} are largely meant to protect users from untrusted servers. However, they also require users to share their private information with peers, which is a privacy breach in itself. In addition, P2P architectures lack scalability due to the limited number of trusted peers, and are vulnerable to malicious interference by rogue actors.
Differential privacy methods \cite{MicrosoftDiffPrivacy} provide theoretical privacy guarantees for all users, but can also adversely impact the performance of recommender systems due to data obfuscation. The literature also includes cryptology-based techniques \cite{zhan2008towards} that approach the problem a little differently. For example, Zhan et al.~\cite{zhan2008towards} used ``homomorphic encryption'' to integrate multiple sources of encrypted user ratings in a privacy preserving manner. However, the extreme computation time and scalability issues associated with homomorphic encryption pose a serious practicality question \cite{HomoEncryPract}, even for moderate-sized datasets. Lastly, recent federated machine learning approaches \cite{OnDevice} have proposed privacy-preserving techniques to build machine learning models using a secure aggregation protocol \cite{SecuredAggProtocol}. However, in the case of CF algorithms, this would require a user to share an update (in encrypted form) performed on an item factor locally. In our case, this means that the server would be able to identify from the encrypted updates which items the user had rated privately, even though the exact ratings remain unknown. This itself constitutes a serious privacy breach \cite{ChowICDMW12,Annecdote,VideoPrivacyAct}. Hence, in our SP2 framework, no private user information is ever uploaded or communicated. \subsection{Results} We evaluate our SP2 framework using accuracy-based as well as ranking-based metrics. The 5-fold average RMSE and NDCG@10 scores \cite{KDDNdcg} along with their corresponding standard deviations are reported in Table \ref{tab:results} for the MovieLens and Amazon Electronics datasets. 
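For reference, the two evaluation metrics can be sketched as follows. This is a standard formulation (linear gains and $\log_2$ discounts for NDCG); the exact convention used in the paper, per the cited NDCG reference, may differ.

```python
import math

def rmse(y_true, y_pred):
    # Root-mean-square error over held-out ratings.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def ndcg_at_k(rels_in_predicted_order, k=10):
    # DCG over the top-k items in predicted order, normalized by the ideal DCG.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(rels_in_predicted_order, reverse=True))
    return dcg(rels_in_predicted_order) / ideal if ideal > 0 else 0.0
```

A perfect predicted ordering yields NDCG@10 of 1, and any misordering strictly less.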
\begin{table*}[ht] \centering \caption{Experimental results on MovieLens and Amazon Electronics datasets} \label{tab:results} \begin{minipage}{\textwidth} \begin{tabular}{lllcccc} \toprule \multirow{2}{*}{Category} & \multirow{2}{*}{Method} & \multirow{2}{*}{Model Parameters} & \multicolumn{2}{c}{Movielens} & \multicolumn{2}{c}{Amazon Electronics} \\ & & & RMSE & NDCG@10 & RMSE & NDCG@10 \\ \midrule \multirow{4}{*}{\pbox{20cm}{Peer-to-peer \\ Based}} & Shokri et al. (FR) & \#Peers$ = 10$ & 1.1624$\pm$0.00189 & 0.4873$\pm$0.0055 & 1.2216$\pm$0.01229 & 0.7757$\pm$0.00841 \\ & Shokri et al. (SR) & \#Peers$ = 10$ & 1.1624$\pm$0.00562 & 0.4891$\pm$0.00773 & 1.2048$\pm$0.00889 & 0.7774$\pm$0.00686 \\ & Shokri et al. (SM) & \#Peers$ = 10$ & 1.1447$\pm$0.00629 & 0.4922$\pm$0.0094 & 1.2028$\pm$0.00985 & 0.7748$\pm$0.00887 \\ & Berkovsky et al. & \#Peers$ = 40$ & \multicolumn{1}{c}{1.132$\pm$0.00411} & \multicolumn{1}{c}{0.4876$\pm$0.00599} & \multicolumn{1}{c}{1.3405$\pm$0.00562} & \multicolumn{1}{c}{0.7619$\pm$0.00756} \\ \midrule \midrule Diff. Privacy & McSherry et al. & $\beta_m = 15$ & 1.201$\pm$0.00675 & 0.4795$\pm$0.00911 & 1.1349$\pm$0.00664 & 0.7719$\pm$0.00675 \\ \midrule \midrule \multirow{2}{*}{\pbox{20cm}{Extreme \\ Baselines} } & {\color{red}Abs. Pessimistic} & $k=100, \text{\#epochs}=20$ & 0.9632$\pm$0.00489 & 0.4132$\pm$0.00661 & 0.9788$\pm$0.00368 & 0.7379$\pm$0.00535 \\ & {\color{blue}Abs. 
Optimistic} & $k=100, \text{\#epochs}=20$ & 0.8923$\pm$0.00576 & 0.5426$\pm$0.0072 & 0.9538$\pm$0.00955 & 0.788$\pm$0.00818 \\ \midrule \midrule \multirow{8}{*}{ \pbox{20cm}{Classic \\ Collaborative \\ Filtering} } & \multirow{4}{*}{Only Public (H1)} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & 0.9183$\pm$0.00725 & 0.545$\pm$0.00726 & 0.971$\pm$0.00516 & 0.7892$\pm$0.00334 \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & 0.925$\pm$0.0075 & 0.5468$\pm$0.00688 & 0.9775$\pm$0.00455 & 0.788$\pm$0.00204 \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & 0.9518$\pm$0.00822 & 0.5363$\pm$0.00727 & 0.9957$\pm$0.00763 & 0.7738$\pm$0.00233 \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & 0.9033$\pm$0.00641 & 0.5534$\pm$0.00179 & 0.96$\pm$0.00411 & 0.7955$\pm$0.00446 \\ \cmidrule{2-7} & \multirow{4}{*}{Only Public (H2)} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & 0.9206$\pm$0.00328 & 0.5391$\pm$0.00179 & 0.9692$\pm$0.00895 & 0.787$\pm$0.00463 \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & 0.9287$\pm$0.00228 & 0.528$\pm$0.00596 & 0.969$\pm$0.00931 & 0.7853$\pm$0.00242 \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & 0.9522$\pm$0.00213 & 0.517$\pm$0.00797 & 0.9851$\pm$0.00808 & 0.7718$\pm$0.00517 \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & 0.9063$\pm$0.00294 & 0.5466$\pm$0.00212 & 0.9581$\pm$0.00946 & 0.7929$\pm$0.00456 \\ \midrule \midrule \multirow{24}{*}{\pbox{20cm}{\textbf{Selective} \\ \textbf{Privacy} \\ \textbf{Preserving} \\ \textbf{(SP2)}}} & \multirow{4}{*}{\textbf{Naive (H1)}} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & \textbf{0.9051$\pm$0.00654} & \textbf{0.5558$\pm$0.00511} & \textbf{0.9613$\pm$0.00534} & \textbf{0.7991$\pm$0.00322} \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & \textbf{0.9072$\pm$0.00873 } & \textbf{0.5542$\pm$0.00727 } & \textbf{0.9641$\pm$0.00555 } & \textbf{0.7978$\pm$0.0012 } \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & \textbf{0.9316$\pm$0.0088} & \textbf{0.5444$\pm$0.00666 } & 
\textbf{0.9808$\pm$0.00733 } & \textbf{0.7868$\pm$0.00297 } \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & \textbf{0.8953$\pm$0.00688 } & \textbf{0.5594$\pm$0.00696 } & \textbf{0.9526$\pm$0.00525 } & \textbf{0.8048$\pm$0.00318 } \\ \cmidrule{2-7} & \multirow{4}{*}{\textbf{Naive (H2)}} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & \textbf{0.907$\pm$0.00377} & \textbf{0.5514$\pm$0.00123 } & \textbf{0.9589$\pm$0.00921 } & \textbf{0.7977$\pm$0.00378 } \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & \textbf{0.914$\pm$0.00302} & \textbf{0.5383$\pm$0.00491 } & \textbf{0.9603$\pm$0.00937 } & \textbf{0.793$\pm$0.00222 } \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & \textbf{0.9316$\pm$0.00215} & \textbf{0.5274$\pm$0.00802 } & \textbf{0.9705$\pm$0.00795 } & \textbf{0.7824$\pm$0.00459 } \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & \textbf{0.8946$\pm$0.0032} & \textbf{0.5532$\pm$0.00306 } & \textbf{0.9517$\pm$0.00949 } & \textbf{0.8034$\pm$0.00411 } \\ \cmidrule{2-7} & \multirow{4}{*}{Clustering (H1)} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & 0.9165$\pm$0.00766 & 0.5457$\pm$0.00695 & 0.966$\pm$0.00565 & 0.7893$\pm$0.00353\footnote{$P\text{value} < 0.02$} \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & 0.9146$\pm$0.01183 & 0.5494$\pm$0.00795 & 0.9695$\pm$0.00774 & 0.7876$\pm$0.00309 \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & 0.9387$\pm$0.00854 & 0.5366$\pm$0.00681 & 0.9847$\pm$0.00741 & 0.7736$\pm$0.00241 \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & 0.9037$\pm$0.00634 & 0.5538$\pm$0.00206 & 0.958$\pm$0.0048 & 0.7945$\pm$0.00373 \\ \cmidrule{2-7} & \multirow{4}{*}{Clustering (H2)} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & 0.9183$\pm$0.00395 & 0.5401$\pm$0.00172 & 0.9653$\pm$0.00924 & 0.7871$\pm$0.00464\footnote{$P\text{value} < 0.1$, statistically insignificant} \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & 0.9249$\pm$0.00206 & 0.5287$\pm$0.00598 & 0.9651$\pm$0.00938 & 0.7852$\pm$0.00228 \\ & & {\small$\alpha 
= 5, \beta = 1, \mu =0.82$} & 0.9405$\pm$0.00166 & 0.5174$\pm$0.00718 & 0.9757$\pm$0.00812 & 0.7716$\pm$0.00528 \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & 0.9047$\pm$0.00319 & 0.5473$\pm$0.00121 & 0.9566$\pm$0.00971 & 0.7926$\pm$0.00436 \\ \cmidrule{2-7} & \multirow{4}{*}{\textbf{Joint Opt. (H1)}} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & \textbf{0.9051$\pm$0.00654 } & \textbf{0.556$\pm$0.00502 } & \textbf{0.9612$\pm$0.00533 } & \textbf{0.7989$\pm$0.00315 } \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & \textbf{0.9072$\pm$0.00873 } & \textbf{0.5537$\pm$0.00735 } & \textbf{0.964$\pm$0.00556 } & \textbf{0.7975$\pm$0.00098 } \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & \textbf{0.9316$\pm$0.0088 } & \textbf{0.5447$\pm$0.00646 } & \textbf{0.9808$\pm$0.00734 } & \textbf{0.7869$\pm$0.00338 } \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & \textbf{0.8953$\pm$0.00689} & \textbf{0.5592$\pm$0.00706 } & \textbf{0.9526$\pm$0.00524 } & \textbf{0.8045$\pm$0.00318 } \\ \cmidrule{2-7} & \multirow{4}{*}{\textbf{Joint Opt. 
(H2)}} & {\small$\alpha = 2, \beta = 2, \mu =0.48$} & \textbf{0.907$\pm$0.00377} & \textbf{0.551$\pm$0.00095} & \textbf{0.9589$\pm$0.0092} & \textbf{0.7978$\pm$0.00371} \\ & & {\small$\alpha = 0.5, \beta = 0.5, \mu =0.48$} & \textbf{0.914$\pm$0.00302} & \textbf{0.5383$\pm$0.00507} & \textbf{0.9602$\pm$0.00939} & \textbf{0.793$\pm$0.0022} \\ & & {\small$\alpha = 5, \beta = 1, \mu =0.82$} & \textbf{0.9319$\pm$0.00241} & \textbf{0.5278$\pm$0.00851 } & \textbf{0.9705$\pm$0.00792 } & \textbf{0.782$\pm$0.00479 } \\ & & {\small$\alpha = 1, \beta = 5, \mu =0.17$} & \textbf{0.8947$\pm$0.0032 } & \textbf{0.5533$\pm$0.00289 } & \textbf{0.9517$\pm$0.0095 } & \textbf{0.8034$\pm$0.00423 } \\ \bottomrule \end{tabular} \end{minipage} \end{table*} \begin{figure} \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{fig_up_v_RMSE} \caption{RMSE comparison} \label{fig:up_rmse} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{fig_up_v_NDCG} \caption{NDCG comparison} \label{fig:up_ndcg} \end{subfigure} \caption{Performance comparison among various baselines for different user privacy ratios on MovieLens dataset.}\label{fig:up_comp} \end{figure} \begin{figure} \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{revised_cvsd} \caption{Aux. 
data size comparison} \label{fig:clus_data} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{fig_Clusters_v_RMSE} \caption{RMSE comparison} \label{fig:clus_rmse} \end{subfigure} \caption{Comparing SP2 implementations for different numbers of clusters on the Amazon Electronics dataset (preprocessed using the \emph{mostly balanced} beta distribution and \emph{H1}).}\label{fig:clus_comp} \end{figure} As indicated by the results in Table \ref{tab:results}, the peer-to-peer based techniques and the differential privacy method, which attempt to ensure \emph{complete} user privacy from the central recommender system, end up performing worse than the standard \emph{only public} baseline due to the data obfuscation policies. In addition, the fully decentralized approach in \cite{BerkovskyRECSYS07} is not scalable due to the limited number of trusted peers. In the same vein, the distributed aggregation approaches in \cite{ShokriRECSYS09} suffer from poor performance as the number of peers increases due to higher obfuscation; however, lowering the number of peers risks a significant privacy breach by the central recommender system. Table \ref{tab:results} further shows that our joint optimization approach (with only top-3 cluster weights) performs as well as the naive approach. Our clustering approach for the SP2 framework performs worse than the naive and joint optimization approaches, but is largely better than the \emph{only public} baseline across both evaluation metrics. Unless otherwise mentioned in the table, the $P$-value for all results related to the SP2 framework (computed using a two-tailed test with respect to the \emph{only public} baseline) is less than $0.001$. As evident from the table, our results hold across both hypotheses. However, the performance of all the implementations improves as the privacy ratio decreases. 
This is further demonstrated by Figures~\ref{fig:up_rmse} and \ref{fig:up_ndcg}, which plot the RMSE and NDCG values, respectively, against the average user privacy ratio across all users. Finally, Figures~\ref{fig:clus_data} and \ref{fig:clus_rmse} present an ablation study of the performance and communication cost of the different SP2 implementations as the number of clusters varies. The naive method has the best performance but requires the largest auxiliary model data. The joint optimization technique requires an order of magnitude less data than the naive one but can reach the same performance for an optimal number of clusters. \section{Survey}\label{survey} We conducted a survey\footnote{\url{https://goo.gl/yK2FDd}} to gauge public interest in using our SP2 framework. In total, 74 users responded, of which 74\% were male and 24\% were female. 92\% of our respondents were within the age bracket $(18-30)$. In our survey, we found that 57\% of the participants do not rate items on any platform, whereas around 20\% of the users provide a lot of ratings. About 48\% of the respondents reported that they hesitate to rate an item because they do not want to share their opinion publicly or because they do not trust the platform. The last two questions in our survey were aimed at estimating how likely a user is to provide a rating if he/she can use our selective privacy preserving framework. When users were asked if they would rate more items \emph{privately} on their device if it guaranteed to improve the quality of their recommendations, about 56\% of the users responded affirmatively, while 22\% said `maybe' and 22\% responded negatively. The responses to this survey indicate that an overwhelming majority of users are willing to use our proposed selective privacy preserving framework in order to improve their recommendations as well as safeguard their \emph{private} information.
\section{Introduction} \label{sec:intro} Classification is a longstanding field of study in statistical learning. In the supervised setting where both instances and their labels are available, a solid theoretical foundation \cite{devroye1996probabilistic} has been established to guide the design of an effective learning algorithm. A key piece of this theory concerns the hypothesis space. To ensure successful supervised learning with a limited amount of labeled data, it has to be neither too large nor too small. In reality, this theoretical dilemma has been played out to the full. Just a few decades ago, the hypothesis spaces built for machines were too small to capture meaningful concepts. They underfitted. Deep learning has fundamentally changed the landscape, in vision-related tasks in particular \cite{lecun1998gradient} \cite{krizhevsky2009learning}, because of its ability to learn sophisticated features automatically. Equipped with these highly expressive hypothesis spaces, researchers now face the opposite issue, i.e. how to take full advantage of them without causing overfitting, since labeled data typically requires human annotation and is thus in limited supply. Its solution, from the viewpoint of statistical learning theory, is clear: regularization. In this paper, we take the functional view of a generic probabilistic discriminative learner and propose a scalable unsupervised framework for its regularization. The core idea is to constrain its hypothesis space to a set of non-trivial piecewise constant functions. It can be motivated as follows. In most classification tasks, for the vast majority of instances, one is often unequivocal regarding the category to which they should be assigned. This \textit{factual confidence}, translated mathematically, means that with overwhelming probability, the true predictive distribution conditional on an instance is a unit mass. As a function, it is thus approximately discrete-valued. 
Furthermore, frequently, our certainty regarding the label of an instance is such that even when altered to some degree, the instance will still be recognized as a member of the same category. Interpreted again in terms of the true conditional distribution, this property suggests its \textit{smoothness}. As a result, if a probabilistic discriminative model is to match its learning target, namely the true conditional distribution, it is naturally necessary for it to be \textit{factually} confident and smooth, hence \textit{non-trivially} piecewise constant. To encourage a model to be non-trivially confident in its predictions, we draw inspiration from a familiar \textit{Taboo} game. We analyse the game as a communication process in an adversarial environment and derive from it a scalable unsupervised regularization surrogate, whose minimization leads to a factually confident discriminator. Hence referred to as \textit{confidence regularization}, it can be shown to be a scalable approach to spectral clustering \cite{von2007tutorial}. To ensure a model's smoothness, we introduce an unsupervised \textit{smoothness regularization}. Its main inspiration is \textit{virtual adversarial training} \cite{miyato2017virtual}. In particular, the functional view of a discriminator allows us to derive the loss rigorously from a uniform smoothness property. It also leads to a criterion for measuring a discriminator's local predictive stability. Moreover, our study suggests that confidence and smoothness are not isolated properties, in that when a smooth discriminator is confident of its predictions, under certain conditions, it becomes immune to attacks \cite{goodfellow2014explaining} and able to generalize. In the rest of the paper, we begin by formulating the \textit{Taboo} game, which leads to the first regularization loss that induces a non-trivially discrete-valued discriminator. 
Next we define the smoothness for a discriminative model and derive the second regularization loss in Section \ref{sec:vat}. Then we validate our approach with some experiments on unsupervised discriminative learning in Section \ref{sec:experiments}. Finally, we conclude by relating our framework to prior works and discussing future research directions. \paragraph{Notations.} For some positive integer $d \in \mathbb{Z}_+$, $\mathbb{R}^d$ denotes the $d$-dimensional Euclidean space and $\|\cdot\|_p$ its $p$-norm. Assume w.l.o.g. the instance and label space $(\mathcal{X}, \mathcal{Y}) \subseteq \mathbb{R}^h\times \mathbb{Z}_+$. Consider a random pair $(X, Y) \in \mathcal{X} \times \mathcal{Y}$. Let $\mathbb{Q}^*$ denote its true distribution and $\mathbb{Q}(\cdot|X; \theta)$ a model for $\mathbb{Q}^*(\cdot|X)$ with parameter $\theta\in \mathbb{R}^q$. Denote by $c^*: \mathcal{X} \mapsto \mathcal{Y}$ the Bayes classifier i.e. $c^*(x) := \argmax_{y\in \mathcal{Y}} \mathbb{Q}^*(y|x)$ with ties broken arbitrarily. For any label $y \in \mathcal{Y}$, we denote the collection of all $y$-labeled instances by $\mathcal{X}_y := \{x \in \mathcal{X}\;| \; c^*(x) = y\}$. The indicator of any set $\mathcal{A}$ is denoted by $1_{\mathcal{A}}(z)$. It equals $1$ if $z\in \mathcal{A}$ and $0$ otherwise. Finally, $f(\cdot||\cdot)$ denotes an f-divergence that measures the discrepancy of two laws. \section{From \textit{Taboo} to a factually confident discriminator} \label{sec:taboo} Our \textit{Taboo} game involves three players, Ann, Bob and Cal. Ann plays an adversarial role against Bob and Cal. Assume a fixed number of categories known to all of them. The game goes as follows. First, Ann gives Bob a diverse collection of unlabeled instances. Next she selects a category, reveals it to Bob, and asks him to describe it to Cal using a single instance from his collection. Bob and Cal win the game if Cal, upon receiving the instance, is able to correctly identify the category. 
We now formulate Bob and Cal's interaction as a communication process and show that if they have to win regardless of how Ann picks the initial category, they must be confident in their moves. \paragraph{\textit{Taboo} as label transmission.} Consider a discriminative model $\mathbb{Q}(\cdot|x), \; x \in \mathcal{X}$ and a finite instance set $\mathcal{S} \subseteq \mathcal{X}$. We model the communication between Bob and Cal with the \textit{label transition matrix} \begin{align} \forall (y, y') \in \mathcal{Y}\times\mathcal{Y}, \quad \mathbb{T}(y'|y; \mathbb{Q}, \mathcal{S}) := \sum_{x \in \mathcal{S}} \mathbb{P}(x|y; \mathbb{Q}, \mathcal{S}) \mathbb{Q}(y'|x) \label{labeltm} \end{align} where \begin{align} \mathbb{P}(x|y; \mathbb{Q},\mathcal{S}) := 1_{\mathcal{S}}(x)\frac{\mathbb{Q}(y|x)}{\sum_{x' \in \mathcal{S}} \mathbb{Q}(y|x')}. \label{l2i} \end{align} For any $y \in \mathcal{Y}$ such that $\sum_{x' \in \mathcal{S}} \mathbb{Q}(y|x') = 0$, the probability $\mathbb{P}(x|y; \mathbb{Q},\mathcal{S})$ can be defined arbitrarily. Specifically, Eq. \eqref{labeltm} and \eqref{l2i} describe how Bob, endowed with the model $\mathbb{Q}(\cdot|x)$ and the set $\mathcal{S}$, selects a single unlabeled instance to convey the label information to Cal, who then decodes using the same $\mathbb{Q}(\cdot|x)$. Since the matrix $\mathbb{T}$ quantifies the likelihood for Cal to correctly infer the label intended by Bob, an ideal communication requires it to be an identity. Intuitively, a factually confident discriminator $\mathbb{Q}(\cdot|x)$ could meet this requirement. Because then Bob would \textit{only} select instances that are representative of the category chosen by Ann, whereas Cal would not confuse a received instance with one from another, different category. The next theorem shows that it is indeed the case. \begin{thm} Consider an arbitrary finite set $\mathcal{S} \subseteq \mathcal{X}$. 
The transition matrix $\mathbb{T}(\cdot|\cdot; \mathbb{Q}, \mathcal{S})$ is diagonal if and only if for all $x \in \mathcal{S}$, $\mathbb{Q}(\cdot|x)$ is valued in $\{0, 1\}$ and $\forall y \in \mathcal{Y}, \; \{x \in \mathcal{S}\;| \; \mathbb{Q}(y|x) = 1\}\neq \emptyset$. \label{thm1} \end{thm} Hence, perfect label transmission through a single instance implies a confident model $\mathbb{Q}(\cdot|x)$ which additionally has to partition $\mathcal{S}$ into $|\mathcal{Y}|$ components, i.e. this confidence is fact based and not blind. This result suggests a necessary condition for a model $\mathbb{Q}(\cdot|x)$ to match its learning target i.e. $\mathbb{Q}^*(\cdot|x)$. \begin{col} Assume a classification setup with zero Bayes error i.e. $\mathbb{Q}^*(c^*(X) \neq Y) = 0$. $\mathbb{Q}(\cdot|x) = \mathbb{Q}^*(\cdot|x), \forall x\in \mathcal{X}$ holds only if the transition matrix $\mathbb{T}(\cdot|\cdot; \mathbb{Q}, \mathcal{S})$ is diagonal for any set $\mathcal{S} \subseteq \mathcal{X}$ such that $\forall y \in \mathcal{Y}, \;\mathcal{S} \cap \mathcal{X}_y \neq \emptyset$ i.e. $\mathcal{S}$ contains at least one instance from every category in $\mathcal{Y}$.\label{col1} \end{col} All the proofs are in the Appendix. Zero Bayes error assumption, also called the \textit{noiseless} condition, is common for analyzing learning algorithms in classification \cite{devroye1996probabilistic}. In the following, a set that has at least one representative for each \textit{true} category will be referred to as \textit{label complete}. \paragraph{Confidence regularization.} Hence, a good discriminator should allow for successful transmission of \textit{any} label with \textit{any} label complete set. This observation leads to the confidence regularization. Specifically, consider an unlabeled set $\mathcal{U}$. 
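As a concrete illustration of the label transition matrix \eqref{labeltm}--\eqref{l2i} and of Theorem \ref{thm1}, the following minimal numpy sketch represents the model directly as a matrix of probabilities over a finite set $\mathcal{S}$ (the matrices below are illustrative, not taken from the paper):

```python
import numpy as np

def label_transition_matrix(Q):
    # Q: (|S|, |Y|) array with Q[x, y] the model probability of label y given x.
    col = Q.sum(axis=0)                    # sum_x Q(y|x) for each label y
    P = Q / np.where(col == 0, 1.0, col)   # P[x, y] = P(x|y); arbitrary when the column is 0
    return P.T @ Q                         # T[y, y'] = sum_x P(x|y) Q(y'|x)

# A confident model that partitions S into both labels: T is the identity (Theorem 1).
Q_confident = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# An uncertain model: probability mass leaks off the diagonal of T.
Q_uncertain = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
```

Each row of $\mathbb{T}$ sums to one by construction, and the diagonal reads off Cal's chance of recovering the label Bob intended.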
Define the collection of its label complete subsets as \begin{align*} \Lambda(\mathcal{U}) := \{\mathcal{S} \subseteq \mathcal{U} | \; \forall y \in \mathcal{Y}, \; \mathcal{S} \cap \mathcal{X}_y \neq \emptyset\}. \end{align*} In view of Theorem \ref{thm1} and Corollary \ref{col1}, denoting by $1_y$ the discrete probability over $\mathcal{Y}$ with a unit mass on $y$, we define the following loss for a parametrized discriminative model $\mathbb{Q}(\cdot|x; \theta)$ \begin{align} L_c(\theta;\mathcal{U}):=\max_{(\mathcal{S}, y)\in \Lambda(\mathcal{U}) \times \mathcal{Y}}f (1_y ||\mathbb{T}(\cdot|y; \theta, \mathcal{S})). \label{uform0} \end{align} Its interpretation in the \textit{Taboo} game is simple: knowing Bob and Cal's discriminative model, Ann picks a label complete set and a category so as to make it as hard as possible for them to conduct a successful communication. Eq. \eqref{uform0} quantifies the resulting worst-case label transmission failure rate. Since the subsets $\mathcal{X}_y, y\in \mathcal{Y}$ are unknown, so is $\Lambda(\mathcal{U})$. As a result, Eq. \eqref{uform0} is impractical. But if we can somehow sample from $\Lambda(\mathcal{U})$ according to some law $\mathbb{B}$, Eq. \eqref{uform0} can be relaxed to \begin{align} L'_c(\theta;\mathbb{B}):=\mathbb{E}_{\mathcal{S}\sim \mathbb{B}(\Lambda(\mathcal{U}))}\left[\max_{y\in \mathcal{Y}}f (1_y ||\mathbb{T}(\cdot|y; \theta, \mathcal{S}))\right]. \label{uform1} \end{align} Compared to Eq. \eqref{uform0}, this loss assumes a friendlier Ann in that she now selects Bob's label complete set at random according to $\mathbb{B}(\Lambda(\mathcal{U}))$, after which she still picks a category adversarially. Sampling label complete sets is indeed possible. To see it, consider a partition of $\Lambda$: $\cup_{b \geq |\mathcal{Y}|} \Lambda_b$ with $\Lambda_b := \{\mathcal{S} \in \Lambda\;|\; |\mathcal{S}| = b\}$. 
The next theorem shows that for any sufficiently large $b$, we can reliably sample from $\Lambda_b$, hence $\Lambda$, by simply putting together $b$ random training instances.\begin{thm} Assume $\min_{y\in \mathcal{Y}}\mathbb{Q}^*(Y=y) > 0$. Let $(\mathcal{S}_j)_{j=1,\ldots,T}$ be unlabeled sets of the same size $b$ consisting of $\mathbb{Q}^*$-i.i.d. instances. For any $\epsilon \in (0,1)$, if $b > \ln \left(T |\mathcal{Y}|\epsilon^{-1}\right)/\min_{y\in \mathcal{Y}}\mathbb{Q}^*(Y=y)$, then $\mathbb{Q}^*\left(\cap_{t=1}^T\{\mathcal{S}_t \in \Lambda_b\}\right) > 1- \epsilon$, i.e. with probability at least $1-\epsilon$, all $T$ sets are label complete. \label{thm2} \end{thm} See the Appendix for its proof. As an illustration, consider a balanced dataset $\mathcal{U}$. To approximately sweep it $r$ times and ensure with probability at least $1-\epsilon$ that all the random batches sampled in the process are label complete, the batch size $b$ needs to satisfy $b > |\mathcal{Y}|\ln \left(r|\mathcal{U}||\mathcal{Y}|\epsilon^{-1}b^{-1}\right)$. For $|\mathcal{Y}|=10, |\mathcal{U}|=6\times10^4, \epsilon=10^{-4}$ and $r=1000$, it implies $b\geq240$. As a result, we sample label complete sets in this way and will refer to the \textit{confidence regularization} loss Eq. \eqref{uform1} as $L_c'(\theta; \mathcal{U}, b)$ with some training batch size $b$ set according to Theorem \ref{thm2}. As a side note, confidence regularization can be seen as a \textit{scalable} approach to spectral clustering \cite{von2007tutorial} and is connected to association learning \cite{haeusser2017learning}. We defer this discussion to Section \ref{sec:relation}. \section{Model smoothness and immunity to adversarial attacks} \label{sec:vat} Even when confidence regularized, a complex model can still be erratic. To see it, assume $\mathcal{Y}=\{0,1\}$. 
Then an arbitrary definition $\mathbb{Q}(Y=1|\cdot): x\in\mathcal{X} \mapsto \{0,1\}$ such that $\mathbb{E}_{X\sim \mathbb{Q}^*}[\mathbb{Q}(Y=1|X)]=1/2$ (i.e. the high-entropy regime) is likely to result in a confident yet jittery model. We want to avoid that. Enter the smoothness requirement. A discriminative model is smooth if its output, a distribution over $\mathcal{Y}$, varies continuously as one moves around in the instance space $\mathcal{X} \subseteq \mathbb{R}^h$. Formally, in the noiseless setting, for some f-divergence $f(\cdot||\cdot)$, a model $\theta$ is said to be smooth w.r.t. $\mathbb{Q}^*$ if it satisfies \begin{align}\text{for $\mathbb{Q}^*$-almost all instances $x$}, \quad \lim_{\rho \to 0}\sup_{\|r\|_2 \leq \rho}f(\mathbb{Q}(\cdot|x; \theta) || \mathbb{Q}(\cdot|x+r; \theta)) =0\label{continuity} \end{align} where $r \in \mathbb{R}^h$ is a perturbation vector. Note that if the instance space $\mathcal{X}$ is a manifold in $\mathbb{R}^h$, at some $x$, it may be more appropriate to use a subset of the $\ell_2$-ball $\{r\;| \; \|r\|_2\leq \rho\}$ in the definition. However, since an instance space is typically hard to describe analytically and may additionally be task dependent, the $\ell_2$-ball is a convenient choice out of the worst-case consideration. \paragraph{Smoothness regularization.} If the true conditional distribution $\mathbb{Q}^*(\cdot|x), x \in \mathcal{X}$, as has been argued in the introduction, is piecewise constant, not only does it satisfy Eq. \eqref{continuity}, which is a local uniform smoothness property, it is also globally uniformly smooth w.r.t. $\mathbb{Q}^*$. This observation leads us to define $\mathbb{Q}^*$'s \textit{attack-free margin} as \begin{align} \rho^* :=\sup\left\{\rho \;\Big| \; \text{for $\mathbb{Q}^*$-almost all instances $x$}, \; \sup_{\|r\|_2 \leq \rho}f(\mathbb{Q}^*(\cdot|x) || \mathbb{Q}^*(\cdot|x+r)) =0\right\}. 
\label{radius} \end{align} It measures the maximum extent to which \textit{any} $\mathbb{Q}^*$-a.s. instance can be perturbed without causing \textit{true} confidence decline. In this light, a new necessary condition arises for a discriminative model $\theta$ to match its learning target. Specifically, under $\mathbb{Q}^*$, given an unlabeled set $\mathcal{U}$ and an f-divergence, we define the next \textit{smoothness regularization} loss to assess a model $\theta$'s global uniform smoothness \begin{align} L_s(\theta;\mathcal{U}, \rho^*) := \max_{x\in \mathcal{U}} \sup_{\|r\|_2\leq\rho^*}f(\mathbb{Q}(\cdot|x; \theta) || \mathbb{Q}(\cdot|x+r; \theta)). \label{cont0} \end{align} But it is impractical since 1) the attack-free margin $\rho^*$, a property of $\mathbb{Q}^*$, is unknown in general and 2) the maximum over the instances makes it hard for a learner to scale. Hence, we relax Eq. \eqref{cont0} to \begin{align} L'_s(\theta;\mathcal{U}, \rho) := |\mathcal{U}|^{-1}\sum_{x\in \mathcal{U}} \sup_{\|r\|_2\leq\rho}f(\mathbb{Q}(\cdot|x; \theta) || \mathbb{Q}(\cdot|x+r; \theta)) \label{cont1} \end{align} where the maximum over the instances is replaced by a sample average and the attack-free margin by a tunable parameter $\rho$ set by e.g. cross-validation. The remaining issue is to find the supremum in Eq. \eqref{cont1} or at least a good lower bound to it for an arbitrary instance $x$ and a positive $\rho$. To this end, denote $\phi_f(r; x, \theta):= f\left(\mathbb{Q}(\cdot|x; \theta) || \mathbb{Q}(\cdot|x+ r; \theta)\right)$. For some fixed unit vector $e \in \mathbb{R}^h$, consider $\psi_f(\nu; e, x, \theta) := \phi_f(\nu e; x, \theta)$ with $\nu \in \mathbb{R}$. It has some interesting properties. \begin{thm} Assume $\psi_f(\cdot; e, x, \theta)$ twice differentiable around $\nu=0$. Then both its value and first derivative vanish at $\nu=0$ i.e. $\psi_f(0; e, x, \theta) = \psi_f'(0; e, x, \theta) = 0$. 
In addition, when the f-divergence is the Kullback-Leibler divergence $\kl(\cdot||\cdot)$ or the squared Hellinger distance $\hel^2(\cdot||\cdot)$, we find \begin{align*} \psi''_{f}(0; e, x, \theta) = c_f e^t \left(\sum_{y \in \mathcal{Y}} \mathbb{Q}(y|x;\theta) \nabla_x \log \mathbb{Q}(y|x; \theta) \nabla^t_x \log \mathbb{Q}(y|x;\theta)\right) e := c_f e^t I_F(x, \theta) e \end{align*} with $I_F(x,\theta)$ the model $\theta$'s Fisher information at $x$ and $c_f$ a constant: $c_{\kl}=1$ and $c_{\hel^2}=1/4$. \label{lem1} \end{thm} Its proof (omitted due to page limit) follows directly from the definition of an f-divergence. This result is natural given the Fisher information's instrumental role in estimator design e.g. the Cramer-Rao bound and information geometry \cite{amari1987differential}. From this perspective, Condition \eqref{continuity} may be interpreted as requiring that locally, the label $Y$ reveals as little information as possible about the instance $X$. To anticipate the following development, note that by construction, the Fisher information $I_F(x,\theta)$ has its rank upper bounded by $\min(|\mathcal{Y}|, h)$. In particular, at any instance $x$, let $A(x, \theta)$ be an $h\times |\mathcal{Y}|$ matrix whose $y$-th column corresponds to $\sqrt{\mathbb{Q}(y|x;\theta)} \nabla_x \log \mathbb{Q}(y|x; \theta)$. Then $I_F(x,\theta) = A(x,\theta)A^t(x,\theta)$. Theorem \ref{lem1} implies that $\phi_f(\cdot; x,\theta)$ is approximately convex in an infinitesimal neighborhood around $r=0$. It further indicates that, at any $x$, to get a good lower bound to $\sup_{\|r\|_2\leq \rho}\phi_f(r;x,\theta)$ for a small but not infinitesimal $\rho$, we pay special attention to the subspace spanned by $I_F(x,\theta)$'s \textit{leading} eigenvectors. It is thus natural to sample $\phi_f(\cdot;x,\theta)$ along the dimensions emphasized by the Gaussian vector $\mathcal{N}(0, I^k_F(x,\theta))$ for some positive integer $k$. 
A larger $k$ puts more focus on eigendimensions corresponding to $I_F(x,\theta)$'s larger eigenvalues. This reasoning leads to the \textit{random} lower bound \begin{align} \max_{\nu\in A, 1\leq i\leq m}\phi_f(\nu\rho e_i(x, \theta)/\|e_i(x,\theta)\|_2; x, \theta), \; e_i(x,\theta) \sim \mathcal{N}(0, I_F^k(x,\theta)), \; i=1,\ldots, m \end{align} where $A$ denotes a finite search set such that $\{-1,1\} \subseteq A\subset[-1,1]$ and the $e_i(x,\theta)$ are $m$ i.i.d. samples. Due to the Fisher information $I_F(x,\theta)$'s particular structure, it costs $O(k|\mathcal{Y}|h)$, rather than $O(kh^2)$, to generate a sample $e_i(x,\theta)$, which can be obtained as $A(x,\theta) n_i$ for $k=1$ and $A(x,\theta)A^t(x, \theta)n'_i$ for $k=2$ etc. with $n_i$ and $n'_i$ standard Gaussian vector samples. Hence, this random lower bound can be computed efficiently, especially in high dimension, i.e. when $h \gg |\mathcal{Y}|$. In practice, we take $m=1$, $k=4$ and $A$ a uniform grid, containing e.g. $10$ evenly spaced points. Since these choices yielded good experimental results, no further attempt was made to optimize these parameters. \paragraph{Fisher criterion for local stability and immunity to attacks.} Write the Fisher information's largest eigenvalue as $\beta(x,\theta)$. Theorem \ref{lem1} suggests it be used for assessing a model $\theta$'s curvature, or stability at instance $x$. Again, due to $I_F(x,\theta)$'s low-rankedness, it can be computed efficiently especially when $h \gg |\mathcal{Y}|$ because $A^t(x,\theta)A(x,\theta)$ has the same nonzero spectrum as $I_F(x,\theta)$. However, a more appealing alternative is the trace, as the two are equivalent up to a constant factor: $\tr(I_F(x, \theta))\min(|\mathcal{Y}|, h)^{-1} \leq \beta(x, \theta) \leq \tr(I_F(x, \theta))$. Henceforth, we refer to the trace as the \textit{Fisher criterion for local stability}.
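The sampling trick and the trace criterion can both be sketched in a few lines of numpy. The code below is our own illustration on a toy linear softmax model (not the paper's implementation); it draws $e\sim\mathcal{N}(0,I_F^k)$ using only matrix-vector products with $A$ and $A^t$, grid-searches along the sampled direction with the squared Hellinger distance, and computes the trace and largest eigenvalue from the small $|\mathcal{Y}|\times|\mathcal{Y}|$ Gram matrix $A^tA$:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hellinger2(p, q):
    """Squared Hellinger distance (the f-divergence used for smoothness)."""
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def random_lower_bound(W, x, rho, k=4, m=1, n_grid=10, rng=None):
    """Random lower bound on sup_{||r||_2 <= rho} Hel^2(Q(.|x) || Q(.|x+r))
    for the toy linear softmax model Q(.|x) = softmax(W x)."""
    rng = np.random.default_rng(0) if rng is None else rng
    q0 = softmax(W @ x)
    A = (np.sqrt(q0)[:, None] * (W - q0 @ W)).T       # I_F = A A^t
    grid = np.linspace(-1.0, 1.0, n_grid)             # {-1,1} subset A subset [-1,1]
    best = 0.0
    for _ in range(m):
        # e ~ N(0, I_F^k) via k matrix-vector products: cost O(k |Y| h)
        e = A @ rng.normal(size=A.shape[1]) if k % 2 else rng.normal(size=A.shape[0])
        for _ in range(k // 2):
            e = A @ (A.T @ e)
        e /= np.linalg.norm(e)
        for nu in grid:
            best = max(best, hellinger2(q0, softmax(W @ (x + nu * rho * e))))
    return best

rng = np.random.default_rng(1)
W, x = rng.normal(size=(4, 20)), rng.normal(size=20)
lb = random_lower_bound(W, x, rho=0.1)

# Fisher criterion: use the |Y| x |Y| Gram matrix A^t A instead of the h x h I_F
q0 = softmax(W @ x)
A = (np.sqrt(q0)[:, None] * (W - q0 @ W)).T
gram = A.T @ A
beta = np.linalg.eigvalsh(gram).max()    # largest eigenvalue of I_F
trace = np.trace(gram)                   # = tr(I_F), the stability criterion
```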
Furthermore, Theorem \ref{lem1} suggests that an attack \cite{goodfellow2014explaining} might succeed at instance $x$ only if the perturbation is not orthogonal to the space spanned by the score vectors $\nabla_x \log \mathbb{Q}(y|x;\theta), y \in \mathcal{Y}$. In particular, if a smooth discriminator is discrete-valued as we seek to achieve by confidence regularization, an attack would be difficult because all the score vectors are zero. This analysis further suggests that when well trained, a factually confident and smooth model should be able to generalize, especially if the true distribution $\mathbb{Q}^*$ it attempts to match admits a large attack-free margin $\rho^*$, i.e. Eq. \eqref{radius}. \section{Experiments} \label{sec:experiments} For some positive hyper-parameters $(\lambda, \rho)$, our study leads to the unsupervised regularization loss $R(\theta;\mathcal{U}, b, \rho, \lambda):=L'_c(\theta; \mathcal{U}, b)+ \lambda L'_s(\theta; \mathcal{U}, \rho)$. Though in principle it applies to all probabilistic discriminative learners, we took neural nets in the experiments due to their large hypothesis spaces. They had ReLU units and batch normalization \cite{ioffe2015batch} and were optimized by ADAM \cite{kingma2014adam} with a constant learning rate $10^{-3}$. The training batch size $b$ was set with the sampling failure rate $\epsilon=10^{-4}$. For the f-divergence, we took the squared Hellinger distance in smoothness regularization for its symmetry and the Kullback-Leibler divergence in confidence regularization. \textit{Unsupervised clustering accuracy} was used for result evaluation. For a set of $m$ instances, it is defined as $m^{-1}\max_{\pi}\sum_{i=1}^m 1_{y_i}(\pi(l_i))$ where $l_i$ denotes a model's predicted label for instance $i$ and the maximum is over all permutations of $\mathcal{Y}$. It was computed using the Munkres algorithm \cite{munkres1957algorithms}. The neural nets were written in \textit{PyTorch} with a fixed random seed $0$.
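The accuracy metric above can be sketched directly from its definition. The brute-force version below (ours) searches all label permutations, which is exact but exponential in $|\mathcal{Y}|$; the Munkres (Hungarian) algorithm used in the paper computes the same optimum in polynomial time:

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """m^{-1} max_pi sum_i 1[y_i == pi(l_i)], by brute force over label
    permutations.  (The Munkres algorithm finds the same optimum and
    scales to larger |Y|; this sketch only illustrates the definition.)"""
    labels = sorted(set(y_true) | set(y_pred))
    best = 0
    for pi in permutations(labels):
        remap = dict(zip(labels, pi))
        best = max(best, sum(t == remap[p] for t, p in zip(y_true, y_pred)))
    return best / len(y_true)

# Cluster ids are arbitrary: a relabeling of a perfect clustering scores 1.0
print(clustering_accuracy([0, 0, 1, 1, 2, 2], [2, 2, 0, 0, 1, 1]))  # 1.0
```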
To test their generalization ability, a network was trained only on training data. For clustering, both training and test data were used. We report the best results. For rival algorithms whose accuracy is reported as both a mean $\eta$ and a standard deviation $\sigma$, we report it as $\eta + \sigma$. \subsection{Synthetic data} To validate our approach, we tested it on a low-noise 2-c dataset \cite{shaham2018spectralnet}. Both training and test data contain $300$ random points per class. We took a net of two fully connected hidden layers comprising respectively $200$ and $100$ neurons and trained it only on the training data with $\lambda=1000$ and $\rho=0.04$. Note that too large a $\rho$ requires the model to be smooth even for two points from different classes, provided that their Euclidean distance is smaller than $2\rho$. Too small a $\rho$ fails to enforce model smoothness among points of the same class, hence $\rho$'s alternative interpretation as \textit{neighborhood width}. Fig. \ref{fig0} shows that the trained net succeeded in clustering the test data. Moreover, it did end up being $\mathbb{Q}^*$-piecewise constant with a good model attack-free margin, which allows it to generalize. \begin{figure} \centering \subfigure[][epoch 1]{\includegraphics[height = 0.09\textheight, width = 0.325\textwidth]{db_0}} \subfigure[][epoch 200]{\includegraphics[height = 0.09\textheight, width = 0.325\textwidth]{db_200}} \subfigure[][epoch 1200]{\includegraphics[height = 0.09\textheight, width = 0.325\textwidth]{db_1200}} \caption{Unsupervised discriminative learning on a 2-c dataset. Each subfigure consists of (from left to right) model conditional probability, its Fisher criterion and conditional entropy.
The model did end up being piecewise constant with a good model attack-free margin, which allows it to generalize.} \label{fig0} \end{figure} \subsection{MNIST} MNIST has roughly balanced training and test sets containing respectively $60000$ and $10000$ labeled $28$-by-$28$ grayscale digits. For feature extraction, our net, followed by a ten-way softmax layer, was \begin{align*} C(64, 3) \rightarrow C(64, 3) \rightarrow P(2) \rightarrow C(64, 3) \rightarrow P(2) \rightarrow FC(128) \end{align*} where $C(64,3)$, $P(2)$ and $FC(128)$ denote a layer of $64$ $3$-by-$3$ convolutional filters, a max-pooling layer of $2$-by-$2$ windows with stride $2$, and a fully connected layer of $128$-dimensional fan-out. We ran experiments \textit{without data augmentation}. The digits were linearly scaled to make their intensity distribution have zero mean and unit variance. First, the net was trained for $500$ epochs with $\lambda=500$ and $\rho=0.1$ on the training data. Its resultant clustering accuracy was $0.9838$. Its test accuracy was even better (Tab. \ref{utable}). We then ran a fresh training on the full dataset. With $\lambda=100$ and $\rho=0.1$, the same training also yielded a good result (Tab. \ref{utable}). Note that IMSAT \cite{hu2017learning} used data augmentation. \begin{table} \caption{Accuracy of unsupervised discriminative learning on MNIST and Reuters} \centering \ra{0.1} \begin{tabular}{lllllllll} \toprule & \multicolumn{4}{c}{clustering} & \multicolumn{2}{c}{generalization (training)}\\ \cmidrule(r{8pt}){2-5} \cmidrule(lr){6-7} & VaDE \cite{jiang2016variational} & IMSAT \cite{hu2017learning} & S-Net \cite{shaham2018spectralnet} & Ours & S-Net \cite{shaham2018spectralnet} & Ours\\ \midrule MNIST & 0.9446 & \textbf{0.988} & 0.972 & 0.9692 & 0.970 (n.a.) & \textbf{0.9878 (0.9838)}\\ \midrule Reuters & 0.7938 & 0.719 & 0.809 & \textbf{0.8323} & 0.798 (n.a.)
& \textbf{0.8094 (0.8107)} \\ \bottomrule \end{tabular} \label{utable} \end{table} To better understand the effects of regularization in terms of model confidence and stability, we also trained the same net in two supervised settings, one on the full training set (test error $0.54\%$) and the other on only $100$ random labeled digits, $10$ per category (test error $11.07\%$). We recorded their predictive confidence and Fisher criterion at all the test digits. Fig. \ref{fig1} shows the most and least stable digits according to the two well trained models whereas Fig. \ref{fig2} illustrates the effect of regularization, either by additional data or by our functional constraints. It shows that our functional regularization lowers the average Fisher criterion for all categories and also attains high model predictive confidence. \begin{figure} \centering \includegraphics[width = 0.48\textwidth]{worst_sup} \hspace{0.3cm} \includegraphics[width = 0.48\textwidth]{worst_uns}\\ \vspace{0.1cm} \includegraphics[width = 0.48\textwidth]{best_sup} \hspace{0.3cm} \includegraphics[width = 0.48\textwidth]{best_uns}\\ \vspace{0.1cm} \includegraphics[width = 0.48\textwidth, height = 0.12\textheight]{sup_box}\hspace{0.4cm} \includegraphics[width = 0.48\textwidth, height = 0.12\textheight]{unsup_box} \caption{Comparison of a supervised (left column; test error $0.54\%$) and unsupervised (right column; test error $1.22\%$) model trained on the same set of $60000$ digits. The top two rows are \textit{test} digits of the smallest and largest Fisher criterion in their respective category under the two models. The unsupervised model's labeling was remapped by the Munkres algorithm. The caption $a (b)$ on top of a digit reads a model's prediction $a$ and its confidence $b$. Red caption indicates a mis-classified digit. The numbers beneath are their model specific Fisher criterion. A larger criterion indicates a stronger vulnerability to attacks. 
The bottom row is a box plot of the category-wise Fisher criterion. Our functional regularization lowers the average Fisher criterion for all categories.} \label{fig1} \end{figure} \begin{figure} \centering \includegraphics[height=0.1\textheight, width = 0.32\textwidth]{ovf_corr} \includegraphics[height=0.1\textheight, width = 0.32\textwidth]{sup_corr} \includegraphics[height=0.1\textheight, width = 0.32\textwidth]{unsup_corr} \caption{Joint distributions of predictive confidence and Fisher criterion of the same net trained under different settings. We collected the statistics on the MNIST test set. From left to right are an overfitted model (trained on 100 labeled digits only), a supervised model trained on $60000$ labeled digits and an unsupervised model trained on the same digits but with no labels. The overfitted model is smooth but not confident whereas both well trained models are confident. The unsupervised model is even smoother.} \label{fig2} \end{figure} \subsection{Reuters} Reuters \cite{lewis2004rcv1} is a labeled corpus of English news. Following the established practice \cite{xie2016unsupervised,jiang2016variational,hu2017learning,shaham2018spectralnet}, we took four categories, i.e. corporate/industrial, government/social, markets and economics, and computed normalized tf-idf features on the 2000 most frequent words. This preprocessing represents each document as a vector of squared 2-norm equal to $2000$. The resulting dataset was skewed with the least frequent category representing $7.98\%$ of its $685,071$ documents. It was randomly divided into a $90\%$-$10\%$ split with the larger subset used for training. We took the same net as in \cite{shaham2018spectralnet}, which has two fully connected hidden layers containing respectively $512$ and $256$ neurons for feature extraction. With $\lambda=100$ and $\rho = 0.02$, $150$ training epochs resulted in a net with clustering accuracy $0.8107$ and test accuracy $0.8094$.
When trained on the full dataset with the same parameters, its clustering accuracy was even higher (Tab. \ref{utable}). \section{Discussion} \label{sec:relation} To conclude, in this paper, we have presented an unsupervised framework to constrain a probabilistic discriminative learner's hypothesis space to a set of non-trivial piecewise constant functions. This functional constraint enforces a learned model's predictive confidence and smoothness, allowing it to generalize. Our approach is generic in that it applies to all probabilistic discriminative learners and can be used for scalable unsupervised discriminative learning. Due to the page limit, we now discuss only several prior works which directly inspired ours, together with future research directions. \paragraph{Confidence regularization and spectral clustering.} First we state a result which underpins this part of the discussion. Consider a discriminative model $\mathbb{Q}(\cdot|x), x \in \mathcal{X}$ and a finite set of instances $\mathcal{S} \subseteq \mathcal{X}$, over which we define a Markov chain with the following \textit{instance transition matrix} \begin{align} \forall (x', x) \in \mathcal{S} \times \mathcal{S}, \quad \mathbb{S}(x'|x; \mathbb{Q}, \mathcal{S}) := \sum_{y \in \mathcal{Y}} \mathbb{P}(x'|y; \mathbb{Q}, \mathcal{S}) \mathbb{Q}(y|x). \label{mchain} \end{align} Note that by the very definition of $\mathbb{P}(\cdot|y; \mathbb{Q}, \mathcal{S})$, i.e. Eq. \eqref{l2i}, this transition matrix is symmetric. The next theorem states the implication of a diagonal label transition matrix in terms of this Markov chain. \begin{thm} The Markov chain has $|\mathcal{Y}|$ irreducible recurrent classes if and only if the label transition matrix $\mathbb{T}(\cdot|\cdot; \mathbb{Q}, \mathcal{S})$, i.e. Eq. \eqref{labeltm}, is diagonal.\label{thm3} \end{thm} Spectral clustering is one of the most popular approaches to unsupervised clustering \cite{shi2000normalized,ng2002spectral,von2007tutorial}.
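The transition matrix of Eq. \eqref{mchain} is easy to sketch numerically. Since Eq. \eqref{l2i} is not reproduced in this excerpt, the code below assumes the natural normalization $\mathbb{P}(x'|y)=\mathbb{Q}(y|x')/\sum_{z\in\mathcal{S}}\mathbb{Q}(y|z)$, under which $\mathbb{S}$ is indeed symmetric (and doubly stochastic), consistent with the symmetry claim above:

```python
import numpy as np

def instance_transition(Qm):
    """S(x'|x) = sum_y P(x'|y) Q(y|x), assuming (our reading of Eq. (l2i))
    P(x'|y) = Q(y|x') / sum_z Q(y|z).  Qm[i, y] = Q(y | x_i)."""
    P = Qm / Qm.sum(axis=0, keepdims=True)   # P[i, y] = P(x_i | y)
    return P @ Qm.T                          # S[i, j] = S(x_i | x_j)

rng = np.random.default_rng(0)
Qm = rng.dirichlet(np.ones(3), size=6)       # 6 instances, 3 labels
S = instance_transition(Qm)
# S = Qm diag(1/column sums) Qm^t: symmetric, each column sums to 1, hence
# doubly stochastic.  A near one-hot Qm yields |Y| near-decoupled blocks,
# matching the |Y| irreducible recurrent classes of Theorem thm3.
```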
For its application, an adjacency metric is needed to measure pairwise instance similarity. Here we use the transition matrix $\mathbb{S}(\cdot|\cdot; \mathbb{Q}, \mathcal{S})$ to model it and examine under which conditions it can be made useful for clustering $|\mathcal{Y}|$ classes of instances. The answer is clear: when the matrix $\mathbb{S}$ leads to $|\mathcal{Y}|$ irreducible recurrent classes, which according to Theorem \ref{thm3} is equivalent to a diagonal label transition matrix. Moreover, as our confidence regularization allows the unlabeled set $\mathcal{S}$ to vary as long as it is label complete, it can be seen as a scalable approach to adjacency and spectral learning when training data is not too imbalanced (cf. the proof of Theorem \ref{thm3} in the Appendix). A recent work \cite{haeusser2017learning} explores a similar idea, but directly models the instance transition matrix. Unlike our approach, theirs does not explicitly specify the number of irreducible classes in data, hence the risk of over-fragmentation, i.e. leaving some data unvisited. The authors thus introduced an entropy loss to favor a uniform visit. Our model, by operating directly at the category level, avoids this issue. \begin{table} \caption{Ideal loss under four discriminative training schemes} \centering \ra{0.1} \begin{tabular}{llll} \toprule \multicolumn{2}{c}{}\\ ord. supervised &$L_o(\theta):=\mathbb{E}_{X\sim \mathbb{Q}^*}\kl(\mathbb{Q^*}(\cdot|X) || \mathbb{Q}(\cdot|X; \theta))$\\ adversarial \cite{goodfellow2014explaining} &$L_a(\theta;\rho):=\mathbb{E}_{X\sim \mathbb{Q}^*}\sup_{\|r\|_2 \leq \rho}\kl(\mathbb{Q^*}(\cdot|X) || \mathbb{Q}(\cdot|X+r; \theta))$\\ VAT \cite{miyato2017virtual} &$L_v(\theta;\hat{\theta}, \rho):=\mathbb{E}_{X\sim \mathbb{Q}^*}\sup_{\|r\|_2 \leq \rho}\kl(\mathbb{Q}(\cdot|X; \hat{\theta}) || \mathbb{Q}(\cdot|X+r; \theta))$\\ smoothness reg.
&$L'_s(\theta;\rho):=\mathbb{E}_{X\sim \mathbb{Q}^*}\sup_{\|r\|_2 \leq \rho}\hel^2(\mathbb{Q}(\cdot|X; \theta) || \mathbb{Q}(\cdot|X+r; \theta))$\\ \bottomrule \end{tabular} \label{table2} \end{table} \paragraph{Smoothness regularization and virtual adversarial training.} The two differ in formulation. Specifically, the former favors \textit{all} $\mathbb{Q}^*$-piecewise constant models in a hypothesis space \textit{equally} whereas the latter, as stated in \cite{miyato2017virtual}, is intended as an unsupervised alternative to adversarial training \cite{goodfellow2014explaining} with the sole goal of approximating $x\mapsto \mathbb{Q}^*(\cdot|x)$ itself. Tab. \ref{table2} shows the ideal losses under various schemes. Both being unsupervised, the two are related. VAT, when searching for an optimal perturbation direction at instance $x$, implicitly solves for the leading eigenvector of the Fisher information $I_F(x, \hat{\theta})$. However, since this vector must lie in the subspace spanned by the score vectors, in high dimension, it is much more efficient to exploit its low-rankedness. Moreover, our study interprets VAT's power iteration as a sampling procedure. Given a Gaussian \textit{direction} $e$, we suggest it be used as a \textit{dimension} for a grid search, because it necessarily results in a tighter lower bound to $\phi_f(\cdot; x,\theta)$. Our formulation also gives $\rho$ a probabilistic meaning as an estimate of the $\mathbb{Q}^*$-dependent attack-free margin, i.e. Eq. \eqref{radius}. In the future, we plan to study how to take noise into account in confidence regularization. For smoothness regularization, we will see whether explicit instance-manifold modeling would lead to a better result. Moreover, we consider applying our framework to other learning modes, such as semi-supervised learning \cite{haeusser2017learning}, active learning \cite{dasgupta2009two} and outlier detection.
\section{Conclusion} \label{S:concl} A parametric mapping $r:\C{P}\to\C{U}$ has been analysed in a variety of settings. The basic idea is the associated linear map $R:\C{U}\to\D{R}^{\C{P}}$, which both generalises the parametric mapping and enables the linear analysis. This leads immediately to the RKHS setting, and a first equivalent representation on the RKHS in terms of tensor products. By choosing other inner products than the one coming from the RKHS one can analyse the importance of different features, i.e.\ subsets of the parameter set. Importance is measured by the spectrum of the `correlation' operator $C$, and the representation is again in terms of tensor products. The correlation is factored by the associated linear map, and it is shown that on one hand all factorisations are unitarily equivalent, and on the other hand that each factorisation leads to differently parametrised representations, indeed linear resp.\ affine representations, if the correlation is a nuclear or trace class operator. In fact, each such factorisation corresponds to a representation, and vice versa. This equivalence is due to our strict assumptions on the associated linear map, but in general the associated linear map is a truly more general concept. These linear representations are in terms of real valued functions, which may be seen as some kind of `co-ordinates' on the parameter set. In the RKHS case, they are truly co-ordinates or an embedding of the parameter set into the RKHS, as each parameter point $p\in\C{P}$ can be identified with the evaluation functional $\updelta_p$ and hence the kernel $\varkappa(p,\cdot)$ at that point. An equivalent spectral analysis can be carried out for the kernel space in terms of integral equations and integral transforms. 
These spectral decompositions or other factorisations also lend themselves to the construction of parametric reduced order models, as the importance of different terms in the representing Karhunen-Lo\`eve- or POD-series can be measured. But other factorisations also lead to, not necessarily optimal, reduced order models. For the sake of simplicity for the representation only orthonormal bases have been considered here as they appear quite natural in the Hilbert space setting, but obviously other bases can be considered. The tensor product nature of this series makes it natural to employ this factorisation in a recursive fashion and thereby to generate a representation through high-order tensors. These often allow very efficient low-rank approximations, which is in fact another, but this time nonlinear, model order reduction. Certain refinements are possible in case the representation space has the structure of a `vector'- or `tensor'-field, a point which is only briefly touched. It was also shown that the structure of the spectrum of the correlation operator attached to such a tensor-space factorisation, or equivalently the structure of the set of singular values of the associated linear map, determines how many terms are needed in a reduced order model to achieve a certain degree of accuracy. Thus the functional analytic view on parametric problems via decompositions of linear maps gives a certain unity to seemingly different procedures which turn out to be closely related, at least if one looks for the similarities. This constitutes a natural introduction and background to low-rank tensor product representations, which are crucial for efficient computation. They are naturally employed in a \emph{functional approximation} approach to parametric problems. 
\section{Correlation} \label{S:correlat} As already alluded to at the end of \refSS{ass-lin-map}, the RKHS construction $\C{R}$ with the inner product $\bkt{\cdot}{\cdot}_{\C{R}}$ just mirrors or reproduces the inner product structure on the original space $(\C{U},\bkt{\cdot}{\cdot}_{\C{U}})$ on the RKHS space of real-valued functions $\C{R}$. Up to now there is no way of telling what is important in the parameter set $\C{P}$. Closely connected to this question is the question of which subset of functions to choose for model reduction. Unfortunately, up to now we have no indication which subset may be particularly good. For this one needs additional information, a topic which will be taken up now. As a way of indicating what is important on the set $\C{P}$, assume that there is another inner product $\bkt{\cdot}{\cdot}_{\C{Q}}$ for scalar functions $\phi \in \D{R}^{\C{P}}$, and denote the Hilbert space of functions with that inner product by $\C{Q}$. Abusing the notation a bit, we denote the map $R:\C{U}\to\C{Q}$, defined as in \refeq{eq:IV} but with range $\C{Q}$, still by $R$.
Generally one would also assume that the subspace $\mathop{\mathrm{dom}}\nolimits R = \{ u\in\C{U} \mid \nd{Ru}_{\C{Q}}< \infty\}$ is, if not the whole space $\C{U}$, at least dense in $\C{U}$. Additionally, one would assume that the densely defined operator $R$ is closed. For simplicity assume here that $R$ is defined on the whole space and hence continuous. Furthermore, assume that the map $R:\C{U}\to\C{Q}$ is still injective, i.e.\ for $\phi\in\C{R}$ one has $\nd{\phi}_{\C{R}}\ne 0 \Rightarrow \nd{\phi}_{\C{Q}}\ne 0$. Without loss of generality we assume then that $R$ is surjective---by restricting ourselves to the closed Hilbert subspace $R(\C{U})$ which we may call again $\C{Q}$. \begin{defi}[Correlation] \label{D:corr} With this, one may define \citep{kreeSoize86} a densely defined map $C$ in $\C{U}$ through the bilinear form \begin{equation} \label{eq:IX} \forall u, v \in \C{U}:\quad \bkt{Cu}{v}_{\C{U}} := \bkt{Ru}{Rv}_{\C{Q}} . \end{equation} The map $C$, which may also be written as $C=R^* R$, may be called the \emph{`correlation'} operator. By construction it is self-adjoint and positive. In case $R$ is defined on the whole space and hence continuous, so is $C$. \end{defi} The last statements are standard results from the theory of linear operators \citep{yosida-fa-1980}. Observe that in contrast to \refSS{RKHS} the adjoint is now different from the inverse as normally $R$ is not unitary, i.e.\ the adjoint is that of the map $R:\C{U}\to\C{Q}$ w.r.t.\ the $\C{Q}$-inner product $\bkt{\cdot}{\cdot}_{\C{Q}}$.
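The defining bilinear form \refeq{eq:IX} can be checked directly in a finite, discrete setting. The numpy sketch below is our own toy discretisation: a finite parameter set with probability weights standing in for the measure, rows of a matrix standing in for $r(p)$; it verifies $\bkt{Cu}{v}_{\C{U}} = \bkt{Ru}{Rv}_{\C{Q}}$ and that $C$ is self-adjoint and positive:

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_u = 7, 4                         # |P| parameter points, dim of U
r = rng.normal(size=(n_p, n_u))         # row p holds r(p), an element of U
w = rng.dirichlet(np.ones(n_p))         # probability weights for the Q-inner product

# (R u)(p) = <r(p), u>_U ;  <phi, psi>_Q = sum_p phi(p) psi(p) w_p
C = r.T @ (w[:, None] * r)              # C = R^* R = sum_p w_p r(p) (x) r(p)

u, v = rng.normal(size=n_u), rng.normal(size=n_u)
lhs = v @ (C @ u)                       # <Cu, v>_U
rhs = np.sum(w * (r @ u) * (r @ v))     # <Ru, Rv>_Q
```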
Often the inner product $\bkt{\cdot}{\cdot}_{\C{Q}}$ comes from a measure $\varpi$ on $\C{P}$, so that for two measurable scalar functions $\phi$ and $\psi$ on $\C{P}$ one has \[ \bkt{\phi}{\psi}_{\C{Q}} := \int_{\C{P}} \phi(p) \psi(p) \; \varpi(\mathrm{d} p), \] where the space $\C{Q}$ may then be taken as $\C{Q}:=\mrm{L}_2(\C{P},\varpi)$; or more generally with some kernel $\beta(p_1,p_2)$ \[ \bkt{\phi}{\psi}_{\C{Q}} := \int_{\C{P}\times\C{P}} \phi(p_1) \beta(p_1,p_2) \psi(p_2) \; \varpi(\mathrm{d} p_1) \varpi(\mathrm{d} p_2) = \ip{\beta}{\phi\otimes\psi}. \] One important sub-class of such situations is when $\varpi$ is a probability measure on $\C{P}$, i.e.\ $\varpi(\C{P}) = 1$. This is where the name `correlation' is borrowed from. In the first case \[ C = R^* R = \int_{\C{P}} r(p) \otimes r(p) \; \varpi(\mathrm{d} p). \] Often the set $\C{P}$ has more structure, like being in a topological space, a differentiable (Riemann) manifold, or a Lie group, which then may induce the choice of $\sigma$-algebra or measure. \subsection{Spectral decomposition} \label{SS:spec-dec} Before, in \refSS{RKHS} it was the factorisation of $C= R^*R$ which allowed the RKHS representation in \refeq{eq:VII0}. For other representations, one needs other factorisations. Most common is to use the spectral decomposition (e.g.~\citep{gelfand64-vol3, gelfand64-vol4, Segal1978, reedSimon-vol1, reedSimon-vol2, DautrayLions3}) of $C$ to achieve such a factorisation. 
In case the correlation were defined on a finite-dimensional space, represented as a matrix $\vek{C}$, the eigenvalue problem---where $\lambda\in\D{C}$ is an eigenvalue iff $\vek{C}-\lambda\vek{I}$ is not invertible---and eigen-decomposition would be written with eigenvectors $\vek{v}_m$ and eigenvalues $\lambda_m$ as \begin{equation} \label{eq:ev-fd} \vek{C}\vek{v}_m = \lambda_m \vek{v}_m,\quad \vek{C} =\sum_m \lambda_m \vek{v}_m \vek{v}_m^{\ops{T}}=\sum_m \lambda_m \vek{v}_m \otimes\vek{v}_m = \sum_m \lambda_m \upDelta\vek{E}_m = \vek{V}\vek{\Lambda}\vek{V}^{\ops{T}} . \end{equation} As $\vek{C}$ is self-adjoint and positive, this implies $\lambda\in\D{R}$ and $\lambda_m\ge 0$. The set of all eigenvalues $\sigma(\vek{C}):=\{\lambda_m\}_m\subset\D{C}$ is called the \emph{spectrum} of $\vek{C}$. Here we assume the ordering $0\le\lambda_1\dots\le\lambda_m$, each eigenvalue counted with appropriate multiplicity. The $\vek{v}_m$ are normalised eigenvectors, and are mutually orthogonal ($\vek{v}_m^{\ops{T}}\vek{v}_n=\updelta_{m,n}$). The first two decompositions---which are only different notations---are into weighted sums of simple tensor products of orthonormal vectors, or one-dimensional orthogonal projections $\upDelta \vek{E}_m := \vek{v}_m \vek{v}_m^{\ops{T}} = \vek{v}_m \otimes\vek{v}_m$, which define the spectral resolution $\vek{E}_m := \sum_{k\le m} \upDelta \vek{E}_k$. The $\vek{E}_m$ are hence the orthogonal projections onto the subspaces $\mathop{\mathrm{span}}\nolimits\{\vek{v}_k \mid k \le m \}$. The columns of $\vek{V}=[\vek{v}_1,\dots,\vek{v}_m,\dots]$ are the normalised eigenvectors, so that $\vek{V}$ is unitary resp.\ orthogonal, and $\vek{\Lambda} = \mathop{\mathrm{diag}}\nolimits(\lambda_m)$ is a diagonal matrix \citep{strang}, a `multiplication' operator, as for $\vek{\Lambda}\vek{u}=\vek{w}$, each component $u_m$ of $\vek{u}=[u_1,\dots, u_m, \dots]^{\ops{T}}$ is just multiplied by $\lambda_m$: $w_m = \lambda_m u_m$. 
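The finite-dimensional decompositions in \refeq{eq:ev-fd} can be verified with numpy's symmetric eigensolver; the matrix below is our own random example:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5))
C = B.T @ B                                   # self-adjoint, positive

lam, V = np.linalg.eigh(C)                    # C v_m = lam_m v_m, V orthogonal
# C = sum_m lam_m v_m (x) v_m : a weighted sum of rank-one orthogonal projections
C_rebuilt = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
# equivalently C = V Lambda V^T, unitary equivalence to a multiplication operator
C_mult = V @ np.diag(lam) @ V.T
```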
The last decomposition in \refeq{eq:ev-fd} hence means that $\vek{C}$ is unitarily equivalent to a multiplication operator by a diagonal matrix with real non-negative entries. In contrast, on infinite dimensional Hilbert spaces the decompositions in \refeq{eq:ev-fd} are materially different formulations of the spectral theorem for self-adjoint operators, e.g.~\citep{gelfand64-vol3, Segal1978, reedSimon-vol1, reedSimon-vol2, DautrayLions3}. A number $\lambda\in\D{C}$ is in the spectrum $\sigma(C)$ iff $C-\lambda I$ is not invertible as a \emph{continuous operator}. But now there may be spectral values $\lambda\in\sigma(C)$ which are \emph{not} eigenvalues---this has to do with the possibility of a \emph{continuous spectrum}---and the sums in \refeq{eq:ev-fd} have to become integrals. Probably best known is the generalisation of the second last form in \refeq{eq:ev-fd} ($\vek{C}=\sum_m \lambda_m \upDelta\vek{E}_m$), namely \citep{gelfand64-vol3, Segal1978, reedSimon-vol1, reedSimon-vol2, DautrayLions3}: \begin{thm}[First spectral theorem] \label{T:1st-spec} The self-adjoint and positive operator $C:\C{U}\to\C{U}$, where $C=R^* R$, may be decomposed into an integral of orthogonal projections $E_\lambda$, \begin{equation} \label{eq:XII} C = \int_0^\infty \lambda \; \mathrm{d} E_\lambda = \int_{\sigma(C)} \lambda \; \mathrm{d} E_\lambda. \end{equation} Here $E_\lambda$ is the projection-valued spectral measure corresponding to $\vek{E}_m$ in \refeq{eq:ev-fd}, with a non-negative spectrum $\sigma(C) \subseteq \D{R}_+$. \end{thm} Observe that the factorised form $C=R^* R$ is actually equivalent to the statement that $C$ is self-adjoint and positive. For the sake of brevity and simplicity of exposition let us assume that $C$ has a pure point spectrum $\sigma_p(C) = \sigma(C)$, i.e.\ all $\lambda_m \in \sigma_p(C)$ are eigenvalues with eigenvector $v_m$, $C v_m = \lambda_m v_m, m\in\D{N}$, each eigenvalue repeated with appropriate finite multiplicity.
In this case \refeq{eq:XII} becomes just a sum, and may be written with the CONS of unit-$\C{U}$-norm eigenvectors $\{v_m\}_m \subset \C{U}$. Here we assume the opposite ordering of the $\lambda_m$ as before in \refeq{eq:ev-fd}, namely $\lambda_1\ge\dots\ge\lambda_m\dots\ge 0$, and set \begin{equation} \label{eq:def-E_m} E_0 := I,\quad E_m := \sum_{k>m} v_k\otimes v_k; \quad \text{and for } m\ge 1: \; \upDelta E_{\lambda_m} := E_{m-1} - E_{m}. \end{equation} The spectral projection-valued measure $\mathrm{d} E_\lambda$ in \refeq{eq:XII} becomes a point measure $\mathrm{d} E_\lambda = \sum_m \updelta_{\lambda_m} \upDelta E_{\lambda_m}$, where $\updelta_{\lambda_m}$ is the \emph{Dirac}-$\updelta$. For the second part of the following \refT{1st-spec-rep}, also assume that the correlation $C$ is a \emph{trace class} or \emph{nuclear} operator, which means that the trace is finite ($\mathop{\mathrm{tr}}\nolimits C = \sum_m \lambda_m < \infty$), and $C$ is then necessarily also a Hilbert-Schmidt and a compact operator. \begin{thm}[First spectral representation and Karhunen-Lo\`eve{} expansion] \label{T:1st-spec-rep} The spectral decomposition of \refT{1st-spec}, \refeq{eq:XII} becomes \begin{equation} \label{eq:XIII} C = \sum_m \lambda_m (v_m \otimes v_m) = \sum_m \lambda_m \upDelta E_{\lambda_m}. \end{equation} Define a new CONS $\{s_m\}_m$ in $\C{Q}$: $\lambda_m^{1/2} s_m := R v_m$, to obtain the corresponding \emph{singular value decomposition} (SVD) of $R$ and $R^*$. The set $\varsigma(R)=\{\lambda_m^{1/2}\} = \sqrt{\sigma(C)}\subset \D{R}_+$ are the \emph{singular values} of $R$ and $R^*$. \begin{equation} \label{eq:XIV} R = \sum_m \lambda_m^\frac{1}{2} (s_m \otimes v_m); \quad R^* = \sum_m \lambda_m^\frac{1}{2} (v_m \otimes s_m); \quad r(p) = \sum_m \lambda_m^\frac{1}{2} \, s_m(p) v_m = \sum_m s_m(p) \, R^* s_m.
\end{equation} The last relation is the so-called \emph{Karhunen-Lo\`eve{} expansion} or \emph{proper orthogonal decomposition} (POD). If in that relation the sum is \emph{truncated} at $n\in\D{N}$, i.e.\ \begin{equation} \label{eq:best-n-term} r(p) \approx r_n(p) = \sum_{m=1}^n \lambda_m^\frac{1}{2} \, s_m(p) v_m = \sum_{m=1}^n R^* s_m(p), \end{equation} we obtain the \emph{best $n$-term approximation} to $r(p)$ in the norm of $\C{U}$. \end{thm} \begin{proof} The spectral decompositions \refeq{eq:XIII}---analogues of the first three in \refeq{eq:ev-fd}---are a consequence of the fact that for a point spectrum the projection-valued measure $\mathrm{d} E_\lambda$ in \refeq{eq:XII} becomes a discrete projection-valued measure $\upDelta E_{\lambda_m}$. That the system $\{s_m\}_m$ is a CONS follows from \[ \bkt{C v_m}{v_n}_{\C{U}} = \lambda_m \updelta_{m,n} = \bkt{R v_m}{R v_n}_{\C{Q}} = \bkt{\lambda_m^{1/2} s_m}{\lambda_n^{1/2} s_n}_{\C{Q}} = \lambda_m \bkt{s_m}{s_n}_{\C{Q}} . \] The representations in \refeq{eq:XIV} are shown in the same way as in \refC{RKHS-decomp}. It still remains to show that the function $p\mapsto r(p)$ defined in \refeq{eq:XIV} is in $\C{U}\otimes\C{Q}$.
For that, using the orthonormality of $\{s_m\}_m$ and $\{v_m\}_m$, and the nuclearity of $C$, one computes \[ \nd{r}_{\C{U}\otimes\C{Q}}^2 = \bkt{r}{r}_{\C{U}\otimes\C{Q}} = \sum_{m,n} \sqrt{\lambda_m \lambda_n}\,\bkt{s_m}{s_n}_{\C{Q}}\bkt{v_m}{v_n}_{\C{U}} = \sum_{m,n} \sqrt{\lambda_m \lambda_n}\,\updelta_{m,n} \updelta_{m,n} = \sum_m \lambda_m < \infty. \] The statement about the best-$n$-term approximation follows from the well-known optimality \citep{strang, Janson1997} of the SVD. \end{proof} Observe that, similarly to \refeq{eq:VII0}, $r$ is linear in the $s_m$. This means that by choosing the `co-ordinate transformation' $\C{P}\ni p\mapsto (s_1(p),\dots,s_m(p),\dots)\in\D{R}^{\D{N}}$ one obtains a \emph{linear / affine} representation where the first co-ordinates are the most important ones, i.e.\ they catch most of the variability in that the best-$n$-term approximation in the norm $\nd{\cdot}_C$ requires only the first $n$ co-ordinate functions $\{s_m\}_{m\le n}$. This is one possible criterion on how to build good reduced order models $r_n(p)$, i.e.\ how to choose a good subspace for approximation. Note that in case $\C{P}$ is a probability space, the condition that $C$ be a trace class or nuclear operator is also a necessary condition that $r$ have finite variance and that the distribution of $r$ be a \emph{probability measure} on $\C{U}$. When stating other series representations in the sequel, it will always be assumed that this condition of nuclearity is satisfied. Hence the definition of models via linear maps is much more general and allows one to consider \emph{generalised} resp.\ \emph{weak}, or in some way ideal representations \citep{segal58-TAMS, LGross1962, gelfand64-vol4, segalNonlin1969, kreeSoize86}. 
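The best-$n$-term property of the truncated Karhunen-Lo\`eve{} expansion \refeq{eq:best-n-term} can be checked numerically in a finite-dimensional analogue. The following sketch (NumPy; the random matrix $A$ standing in for samples of $r(p)$ is purely an illustrative assumption) verifies that truncating the SVD leaves exactly the tail eigenvalues $\sum_{m>n}\lambda_m$ as squared error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-in: the columns of A are samples r(p_j) in R^d,
# so A plays the role of a factor of the 'correlation' C = A @ A.T.
A = rng.standard_normal((8, 20))

# SVD A = U diag(s) Vt; truncating at rank n gives the best n-term
# approximation in the Frobenius norm (Eckart-Young), the analogue of
# the truncated Karhunen-Loeve expansion r_n(p).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
n = 3
A_n = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]

# The squared truncation error equals the sum of the discarded
# eigenvalues lambda_m = s_m^2 of C.
err2 = np.linalg.norm(A - A_n, 'fro')**2
tail = np.sum(s[n:]**2)
assert np.isclose(err2, tail)

# The eigenvalues of C = A A^T coincide with the squared singular values.
lam = np.linalg.eigvalsh(A @ A.T)[::-1]
assert np.allclose(lam, s**2)
```

The same computation with the spectral norm instead of the Frobenius norm would give the error $\lambda_{n+1}^{1/2}$, the first discarded singular value.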
\subsection{Singular value decomposition} \label{SS:SVD} To treat the analogues of the first two decompositions of $C$ in \refeq{eq:ev-fd} in the case where $C$ also has a continuous spectrum directly requires technical tools such as Gel'fand triplets (rigged Hilbert spaces), direct integrals of Hilbert spaces \citep{gelfand64-vol3, gelfand64-vol4, DautrayLions3}, and generalised eigenvectors, which are beyond the scope of this short note. This also applies to representations which go beyond the case of a nuclear correlation, and typically become some kind of integral transform. We content ourselves with an alternative, and materially stronger, formulation of the spectral decomposition than \refeq{eq:XII} \citep{gelfand64-vol3, Segal1978, reedSimon-vol1, DautrayLions3}, the analogue of the last decomposition in \refeq{eq:ev-fd} ($\vek{C}=\vek{V}\vek{\Lambda}\vek{V}^{\ops{T}}$), which will lead us to the singular value decomposition, an analogue of \refeq{eq:XIV}. The results in this \refSS{SVD} do not require $C$ to be nuclear, nor do they require $C$ or $R$ to be continuous. \begin{thm}[Second spectral theorem] \label{T:2nd-spec} The densely defined, self-adjoint and positive operator $C:\C{U}\to\C{U}$ is unitarily equivalent with a multiplication operator $M_{\mu}$, \begin{equation} \label{eq:ev-mult} C = V M_{\mu} V^*, \end{equation} where $V:\mrm{L}_2(\C{T})\to\C{U}$ is unitary between some $\mrm{L}_2(\C{T})$ on a measure space $\C{T}$ and the Hilbert space $\C{U}$, and $M_{\mu}$ is a multiplication operator, multiplying any $\psi\in\mrm{L}_2(\C{T})$ with a real-valued function $\mu$ --- $\mrm{L}_2(\C{T})\ni\psi\mapsto \mu \psi\in\mrm{L}_2(\C{T})$. In case $C$ is bounded, ${\mu}\in\mrm{L}_\infty(\C{T})$. As $C$ is positive, $\mu(t)\ge 0$ for almost all $t\in\C{T}$, and the essential range of $\mu$ is the spectrum of $C$. As $M_{\mu}$ with a real valued non-negative $\mu$ is self-adjoint and positive, one may define \begin{equation} \label{eq:ev-mult-sqrt} M_{\mu}^{1/2} := M_{\sqrt{\mu}}:\quad \mrm{L}_2(\C{T})\ni\psi\mapsto \sqrt{\mu} \psi\in\mrm{L}_2(\C{T}), \end{equation} from which one obtains the square-root of $C$ via its spectral decomposition \begin{equation} \label{eq:ev-mult-sqrt-2} C^{1/2} = V M_{\sqrt{\mu}} V^*. \end{equation} The factorisation corresponding to $C=R^* R$ in \refT{1st-spec} is here (with $M_{\sqrt{\mu}}=M_{\sqrt{\mu}}^*$) \begin{equation} \label{eq:ev-mult-svd-2} C= (V M_{\sqrt{\mu}})(V M_{\sqrt{\mu}})^* = (V M_{\sqrt{\mu}}I)(V M_{\sqrt{\mu}}I)^*. \end{equation} \end{thm} \begin{proof} The statement about the unitary equivalence is a standard result \citep{gelfand64-vol3, Segal1978, reedSimon-vol1, DautrayLions3} for self-adjoint operators, as well as the positivity of the multiplier $\mu$.
Computation of the square-root $M_{\sqrt{\mu}}$ is obvious, as $M_{\sqrt{\mu}}^2=M_{\mu}$; on the other hand \refeq{eq:ev-mult-sqrt-2} is standard functional calculus of operators. In the last \refeq{eq:ev-mult-svd-2}, it obviously holds that $(V M_{\sqrt{\mu}})(V M_{\sqrt{\mu}})^* = V M_{\sqrt{\mu}} M_{\sqrt{\mu}} V^* = V M_{\mu} V^* $; observe that $M_{\mu}^*=M_{\mu}$ and $M_{\sqrt{\mu}}^*= M_{\sqrt{\mu}}$, as $\mu$ is real. \end{proof} From this spectral decomposition follow decompositions of $R$ and some spectrally connected factorisations of $C$: \begin{coro}[Singular value decomposition and further factorisations] \label{C:SVD-R-fact} The singular value decomposition (SVD) of $R$ is \begin{equation} \label{eq:ev-mult-svd} R = U M_{\sqrt{\mu}} V^*, \end{equation} where $U:\mrm{L}_2(\C{T})\to\C{Q}$ is a unitary operator, $M_{\sqrt{\mu}}$ is from \refeq{eq:ev-mult-sqrt}, and the unitary $V$ from \refeq{eq:ev-mult}. Further decompositions of $C$ arising from \refT{2nd-spec} are $C=G^* G$ with $G := I M_{\sqrt{\mu}} V^*$, and $C = (C^{1/2})^* C^{1/2} = C^{1/2} C^{1/2}$, with the SVD of $C^{1/2}$ given by \refeq{eq:ev-mult-sqrt-2}. \end{coro} \begin{proof} The SVD of $R$ in \refeq{eq:ev-mult-svd} is a standard result \citep{Segal1978, reedSimon-vol1}, and $U$ is unitary as $R$ was assumed surjective. The decomposition with $G$ follows directly from \refeq{eq:ev-mult-svd-2}. The last decomposition $C = (C^{1/2})^* C^{1/2}$ follows from the fact that with $C$ also $C^{1/2}$ is self-adjoint, and as $C^{1/2}$ is also positive, its SVD is equal to its spectral decomposition \refeq{eq:ev-mult-sqrt-2}. \end{proof} \subsection{Other factorisations and representations} \label{SS:factor} In the preceding \refSS{SVD} in \refC{SVD-R-fact} it was shown that there are several ways to factorise $C=R^* R$. 
Let us denote a general factorisation by $C = B^* B$, where $B:\C{U}\to\C{H}$ is a map to a Hilbert space $\C{H}$ with all the properties demanded from $R$---see the beginning of this section. Sometimes such a factor $B$ is called a \emph{square root} of $C$, but we shall reserve that name for the \emph{unique} factorisation with the self-adjoint factor $C^{1/2}$ from \refeq{eq:ev-mult-sqrt-2}, $C = (C^{1/2})^* C^{1/2}=C^{1/2} C^{1/2}$. In some way, all such factorisations are equivalent: \begin{thm}[Equivalence of factorisations] \label{T:equi-fact} Let $C=B^*B$ with $B:\C{U}\to\C{H}$ be any factorisation satisfying the conditions at the beginning of this section. Any two such factorisations $B_1:\C{U}\to\C{H}_1$ and $B_2:\C{U}\to\C{H}_2$ with $C=B_1^*B_1=B_2^*B_2$ are \emph{unitarily equivalent} in that there is a unitary map $X_{21}:\C{H}_1\to\C{H}_2$ such that $B_2 = X_{21} B_1$. Equivalently, each such factorisation is unitarily equivalent to $R$, i.e.\ for $C=B^* B$ there is a unitary $X:\C{H}\to\C{Q}$ such that $R= X B$. \end{thm} \begin{proof} Let $C=B_1^*B_1=B_2^*B_2$ be two such factorisations, each unitarily equivalent to $R= X_1 B_1= X_2 B_2$. As $X_2^*=X_2^{-1}$, it follows easily that $B_2 = X_2^* X_1 B_1$, so $B_1$ and $B_2$ are unitarily equivalent with the unitary $X_{21}:= X_2^* X_1$. So what is left is to show that an arbitrary factorisation is equivalent to $R$. From the SVD of $R$ in \refeq{eq:ev-mult-svd}, one sees easily that $R$ and $G$ in \refC{SVD-R-fact} are unitarily equivalent, as $R = U M_{\sqrt{\mu}} V^* = U (M_{\sqrt{\mu}} V^*) = U ( I M_{\sqrt{\mu}} V^*) = U G$. Now let $C=B^* B$ be an arbitrary factorisation with the required properties. Then, just as $R$ in \refC{SVD-R-fact}, the factor $B$ has a SVD \citep{Segal1978, reedSimon-vol1}, $B=W M_{\sqrt{\mu}} V^*$, with $ M_{\sqrt{\mu}}$ and $V$ from \refC{SVD-R-fact}, and a unitary $W:\mrm{L}_2(\C{T})\to\C{H}$. 
Hence $B = W G$ or $G=W^* B$, and finally $R = U G=U W^* B=X B$ with a unitary $X:=U W^*$. \end{proof} For finite dimensional spaces, a favourite choice for such a decomposition of $C$ is the Cholesky factorisation $\vek{C} = \vek{L} \vek{L}^{\ops{T}}$, where $B=\vek{L}^{\ops{T}}$. Now let us go back to the situation described in \refT{1st-spec-rep}, where for the sake of simplicity of exposition we assume that $C$ has a purely discrete spectrum and a CONS of eigenvectors $\{v_m\}_m$ in $\C{U}$, and let us have a look how the results up to now may be used to build new representations. First transport the eigenvector CONS from $\C{U}$ to $\mrm{L}_2(\C{T})$: \begin{lem} \label{L:eig-xi} Setting for all $m\in\D{N}$: $\xi_m := V^* v_m$, the system $\{\xi_m\}_m$ is a CONS in $\mrm{L}_2(\C{T})$, and $M_\mu \xi_m = \lambda_m \xi_m$, i.e.\ the $\xi_m$ are an eigenvector CONS of $M_\mu = V^* C V$. \end{lem} \begin{proof} Orthonormality and completeness are due to $V^*$ being unitary. With \refeq{eq:ev-mult} one computes \[ M_\mu \xi_m = V^* C V V^* v_m = \lambda_m V^* v_m = \lambda_m \xi_m, \] which shows the eigenvector property. \end{proof} \begin{prop} \label{P:new-cons} With the help of the CONS $\{s_m\}_m$ in $\C{Q}$ or $\{v_m\}_m$ in $\C{U}$, define a CONS $\{h_m\}_m$ in $\C{H}$: $\overline{\mathop{\mathrm{span}}\nolimits}\{h_m \mid m\in\D{N} \} = \C{H}$: \begin{equation} \label{eq:equi-cons} \forall m\in\D{N}: \quad h_m := B C^{-1} R^* s_m = B C^{-1/2} v_m. \end{equation} The CONS $\{h_m\}_m$ in $\C{H}$ is an eigenvector CONS of the operator \begin{equation} \label{eq:def-C-H} C_{\C{H}} := B B^*:\C{H}\to\C{H}, \end{equation} \begin{equation} \label{eq:eig-v-C-H} \forall m\in\D{N}: \quad C_{\C{H}} h_m := \lambda_m h_m. 
\end{equation} \end{prop} \begin{proof} The stated orthonormality of the $\{h_m\}_m$ is easily computed, as with \refT{2nd-spec}, \refC{SVD-R-fact}, and the SVD of $B=W M_{\sqrt{\mu}} V^*$ from \refT{equi-fact}, one obtains after a bit of computation $B C^{-1} R^* = W U^*$, and $B C^{- 1/2} = W V^*$, hence $h_m = W U^* s_m$, and therefore orthonormality follows from the unitarity of $W U^*$ and orthonormality of the $\{s_m\}_m$. Completeness follows from the completeness of $\{s_m\}_m$ and surjectivity of $B$. Similarly to $h_m = W U^* s_m$ one obtains with \refL{eig-xi}: \[ v_m = V U^* s_m \Rightarrow h_m = W U^* s_m = W V^* (V U^* s_m) = W V^* v_m = W \xi_m . \] From this follows, again with \refL{eig-xi}, \[ C_{\C{H}} h_m = (B B^*) W \xi_m = W M_\mu \xi_m = \lambda_m W \xi_m = \lambda_m h_m , \] which is the claimed eigenvector property \refeq{eq:eig-v-C-H}. \end{proof} One may see the statement in \refL{eig-xi} as a special case of Proposition~\ref{P:new-cons} with $B = I M_{\sqrt{\mu}} V^*$, as then $\xi_m = V^* v_m = U^* s_m$. Collecting, an immediate consequence is: \begin{coro} \label{C:other-eig} One has the following equivalent eigensystems \begin{itemize} \item on $\C{U}$ with $C = R^* R$ --- $C v_m = \lambda_m v_m$, $v_m = V U^* s_m$ and $C = V M_\mu V^*$; \item on $\C{H}$ with $C_{\C{H}} = B B^*$ --- $C_{\C{H}} h_m = \lambda_m h_m$, $h_m = W V^* v_m$ and $C_{\C{H}} = W V^* C V W^*$; \item on $\C{Q}$ with $C_{\C{Q}} = R R^*$ --- $C_{\C{Q}} s_m = \lambda_m s_m$, $s_m = U V^* v_m$ and $C_{\C{Q}} = U V^* C V U^*$; \item on $\mrm{L}_2(\C{T})$ with $C_{\mrm{L}_2(\C{T})} = M_\mu$ --- $C_{\mrm{L}_2(\C{T})} \xi_m = \lambda_m \xi_m$, $\xi_m = V^* v_m$ and $C_{\mrm{L}_2(\C{T})} = V^* C V$. \end{itemize} \end{coro} The last two statements are special cases of the second one with $\C{H}=\C{Q}$ and $B = R$, resp.\ $\C{H}=\mrm{L}_2(\C{T})$ with $B = M_{\sqrt{\mu}} V^*$. Hence each factorisation $C=B^* B$ with $B:\C{U}\to\C{H}$ gives a new equivalent eigensystem on $\C{H}$ for the operator $C_{\C{H}} = B B^*$.
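The equivalence of factorisations in \refT{equi-fact} and the shared spectrum of $C=B^*B$ and $C_{\C{H}}=BB^*$ can be illustrated in finite dimensions. The sketch below (NumPy; the test matrix is an arbitrary illustrative assumption) compares the Cholesky factor with the symmetric square root $C^{1/2}$ and exhibits the connecting unitary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric positive definite 'correlation' C.
M = rng.standard_normal((5, 5))
C = M @ M.T + 5 * np.eye(5)

# Factorisation 1: Cholesky, C = L L^T, i.e. B1 = L^T with C = B1^T B1.
L = np.linalg.cholesky(C)
B1 = L.T

# Factorisation 2: the symmetric square root C^{1/2} from the spectral
# decomposition, B2 = C^{1/2}, again C = B2^T B2.
lam, V = np.linalg.eigh(C)
B2 = V @ np.diag(np.sqrt(lam)) @ V.T

assert np.allclose(B1.T @ B1, C)
assert np.allclose(B2.T @ B2, C)

# The two factors are unitarily equivalent: X = B2 B1^{-1} is orthogonal
# and B2 = X B1, the finite-dimensional version of Theorem 'equi-fact'.
X = B2 @ np.linalg.inv(B1)
assert np.allclose(X.T @ X, np.eye(5))
assert np.allclose(X @ B1, B2)

# The 'transported correlation' B B^T has the same spectrum as C = B^T B.
assert np.allclose(np.linalg.eigvalsh(B1 @ B1.T), lam)
```

Any further factorisation, e.g. from a pivoted QR of $C^{1/2}$, would be related to these two by another such orthogonal matrix.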
From \refeq{eq:XIV} in \refT{1st-spec-rep} one has $r = \sum_m R^* s_m$. This in conjunction with another equivalent factorisation according to \refT{equi-fact} immediately leads to new representations of $r(p)$, by replacing $R^* s_m$ in the Karhunen-Lo\`eve{} expansion in \refT{1st-spec-rep} by the equivalent $B^* h_m$. \begin{coro}[Representation from factorisation] \label{C:fact-rep} With a factorisation $C=B^* B$ and CONS $\{h_m\}_m$ in $\C{H}$ as in Proposition~\ref{P:new-cons}, one obtains the following representation of $r(p)$: \begin{equation} \label{eq:fact-rep} r(p) = \sum_m B^* h_m= \sum_m V M_{\sqrt{\mu}}W^* h_m, \quad \text{in particular also} \quad r(t) = \sum_m V M_{\sqrt{\mu}} \xi_m(t). \end{equation} \end{coro} In the special case of a purely discrete spectrum we are dealing with here it is possible to formulate analogues of the decompositions in \refC{RKHS-decomp}. This is an analogue of \refT{1st-spec-rep} for the general case $C=B^* B$ with $B:\C{U}\to\C{H}$: \begin{coro} \label{C:fact-rep-tens} With a factorisation $C=B^* B$ and CONS $\{v_m\}_m$ in $\C{U}$, CONS $\{h_m\}_m$ in $\C{H}$ as in Proposition~\ref{P:new-cons}, one obtains the following tensor representations of the map $C_{\C{H}}= B B^*$: \begin{equation} \label{eq:C-tens-rep} C_{\C{H}} = \sum_m \lambda_m h_m\otimes h_m. \end{equation} Specifically, for $\C{H}=\C{Q}$ and $C_{\C{H}}=C_{\C{Q}}$, \begin{equation} \label{eq:C-tens-rep-Q} C_{\C{Q}} = \sum_m \lambda_m s_m\otimes s_m. \end{equation} The corresponding expansions of $B$ and its adjoint are: \begin{equation} \label{eq:B-tens-rep} B = \sum_m \lambda_m^{1/2}\, h_m\otimes v_m; \quad \text{and} \quad B^* = \sum_m \lambda_m^{1/2}\, v_m\otimes h_m. 
\end{equation} In case the space $\C{H}$ is a space of functions on a set $\C{A}$, as $\mrm{L}_2(\C{T})$ is on $\C{T}$, this results in the Karhunen-Lo\`eve{} expansions for the representation of $r(p)$: \begin{equation} \label{eq:r-fact-rep-tens} r(a) = \sum_m \lambda_m^{1/2}\, h_m(a) v_m, \quad \text{in particular also} \quad r(t) = \sum_m \lambda_m^{1/2}\, \xi_m(t) v_m. \end{equation} \end{coro} In this last \refeq{eq:r-fact-rep-tens} the function $r(p)$ has become a function of the new parameter $a\in\C{A}$ or $t\in\C{T}$, having implicitly performed a transformation $\C{P}\to\C{A}$ or $\C{P}\to\C{T}$. The new parametrisation covers the same range as $r(p)$ before. As a summary of the analysis let us put everything together: \begin{thm}[Equivalence of representation and factorisation] \label{T:rep-fact} A parametric mapping $r:\C{P}\to \C{U}$ into a Hilbert space $\C{U}$ with the conditions stated at the beginning of \refSS{ass-lin-map} and this section induces a linear map $R:\C{U}\to\C{Q}$, where $\C{Q}$ is a Hilbert space of functions on $\C{P}$. The \emph{reproducing kernel Hilbert space} is a special case of this. Any other factorisation of the `correlation' $C=R^* R$ on $\C{U}$, say $C = B^* B$ with $B:\C{U}\to\C{H}$ into a Hilbert space $\C{H}$ with the same properties as $R$, is unitarily equivalent, i.e.\ there is a unitary $W:\C{Q}\to\C{H}$ such that $B = W R$. Any such factorisation induces a representation of $r$. Especially if $\C{H}$ is a space of functions on a set $\C{A}$, one obtains a representation $r(a), (a\in\C{A})$, such that $r(\C{P}) = r(\C{A})$. The associated `correlations' $C_{\C{Q}} = R R^*$ on $\C{Q}$ resp.\ $C_{\C{H}} = B B^*$ on $\C{H}$ have the same spectrum as $C$, and factorisations of $C_{\C{Q}}$ resp.\ $C_{\C{H}}$ induce new factorisations of $C$. \end{thm} \section{Introduction} \label{S:intro} Parametric models are used in many areas of science, engineering, and economics.
They appear in cases of \emph{design} of some systems, where the parameters may be design variables of some kind, and the variation of the parameters may show different possibilities, or display the envelope of the system over a range of possible parameter values. Other possibilities arise when one wants to \emph{control} the behaviour of some system, and the parameters are control variables. This is closely connected with situations where one wants to \emph{optimise} the behaviour of some system more generally by changing some parameters. Another important area is where some of the parameters may be uncertain, i.e.\ they could be random variables, and with respect to these one wants to perform \emph{uncertainty quantification}. Of course it is also possible that the parameter set serves several purposes at once, for example that some of the parameters model design variables, while others are uncertain, cf.\ \citep{SoizeFarhat2017}. Often the problem of having to deal with a parametric system is compounded by the fact that one also has to approximate the system behaviour through a \emph{reduced order model} due to the high computational demands of the full system. This reduced model therefore becomes a parametrised reduced order model. The survey \citep{BennWilcox-paramROM2015} and the recent collection \citep{MoRePaS2015} as well as the references therein provide a good view not only of reduced order models which depend on parameters, but also of parametric problems in general and some of the areas where they appear. So for further information on parametrised reduced order models and how to generate them we refer to these references. Here, we want to concentrate on a certain topic illuminating the theoretical background of such parametrised models. This is the connection between separated series representations, associated linear maps, the singular value decomposition (SVD) and proper orthogonal decomposition (POD), and tensor products.
This then immediately opens the connections to reduced order models and low-rank tensor approximations. It will be seen that the distribution of singular values of the associated linear map determines how many terms are necessary for a good approximation with a reduced order model. For higher order tensor representations in the context of hierarchical tensor approximations, the analogous role is played by the SVD structure of the tensor product splittings associated with the tree structure of the index set partitions. Typically, the parameters are assumed to be tuples of independent real numbers, but here no assumptions are made about the parameter set. The geometry and topology of the parameter set is reflected by the set of real functions defined on it, which can be viewed as co-ordinates in this context. In some cases, like design evaluations and uncertainty quantification, the parameter set in itself is not important, but only the range or distribution of the parametric model. Here the analysis of the associated linear map allows the re-parametrisation of the parametric model with the parameters taken from a different set. The principal result is then that within a certain class of representations of parametric models and associated linear maps there is a one-to-one correspondence between separated series representations and factorisations of the associated linear map. But the representation through a linear map is more general still, and allows the modelling of \emph{weak} or \emph{generalised} models.
As a possible starting point to introduce the subject, assume that some physical system is investigated, which is modelled by an evolution equation for its state: \begin{equation} \label{eq:I} \frac{\mathrm{d}}{\mathrm{d} t}u(t) = A(q;u(t)) + f(q;t);\quad u(0) = u_0, \end{equation} where $u(t) \in \C{U}$ describes the state of the system at time $t \in [0,T]$ lying in a Hilbert space $\C{U}$ (for the sake of simplicity), $A$ is an operator modelling the physics of the system, and $f$ is some external influence (action / excitation / loading). Assume that the model depends on some quantity $q$, and assume additionally that for all $q$ of interest the system \refeq{eq:I} is well-posed. One part of these parameters $q$ may describe the specific nature of the system \refeq{eq:I}, whereas another part of the parameters, here denoted by $p\in\C{P}$, has to be varied for one reason or another in the analysis. One is often interested in how the system changes when these \emph{parameters} $p$ change. The parameter $p$ can be for example \begin{itemize} \item just the quantity, $p=q$; or \item the quantity and the action, $p=(q,f)$; or \item as before, but including the initial condition, $p=(q,f,u_0)$; or \item many other combinations. \end{itemize} To deal with all these different possibilities under one notation, the state equation \refeq{eq:I} can be rewritten as \begin{equation} \label{eq:I-p} \frac{\mathrm{d}}{\mathrm{d} t}u(t) = A(p;u(p;t)) + f(p;t);\quad u(p;0) = u_0, \end{equation} with the solution $u(p;t)$ denoting the explicit dependence on the parameter $p\in\C{P}$. Frequently, the interest is in functionals of the state $\Psi(p,t)=\Psi(p,u(p;t))$, and the functional dependence of $u$ on $p$ becomes important.
Such situations arise in design, where $p$ may be a design parameter still to be chosen, and one may seek a design such that a functional $\Psi(p,t)$ or some kind of temporal integral or average $\tns{\psi}(p) = \int_0^T \Psi(p,t) \rho(t)\,\mathrm{d} t$ is, e.g., maximised \citep{Luenberger}. Optimal control is a special case of this, as one may try to influence the time evolution in such a way that $\Psi(p,T)$ (or $\tns{\psi}(p)$ above) is minimised or maximised. Another example is when the $p\in\C{P}$ are uncertain parameters, modelled by random variables. In the process of uncertainty quantification \citep{Matthies_encicl, xiu2010numerical, knio2010spectral, Smith2014} one then may want to compute expected values $\D{E}_p(\Psi(p,t))$. It may also be that the parameters have to be determined or identified to allow the model to match some observed behaviour; this is called an inverse problem, see \citep{hgm17-ECM} and the references therein. Another case is a general design evaluation, where one is interested in the range of $u(p;t)$ --- or $\Psi(p,t)$ or $\tns{\psi}(p)$ --- as $p$ varies over $\C{P}$. The situation just sketched involves a number of objects which are functions of the parameter values $p$. While evaluating $A(p)$ or $f(p)$ in \refeq{eq:I-p} for a certain $p$ may be straightforward, one may easily envisage situations where evaluating $u(p;t)$ or $\Psi(p,t)$ may be very costly, as it may involve some very time consuming simulation or computation. Therefore one is interested in representations of $u(p;t)$ or $\Psi(p,t)$ which allow the evaluation in a cheaper way. These simpler representations are known by many names, such as \emph{proxy}- or \emph{surrogate}-models. As will be shown in the following \refS{parametric}, any such parametric object may be represented in many different ways, many of which can be analysed by linear maps which are associated with such representations.
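To make the surrogate argument concrete, consider a hypothetical toy model $u(p;t) = u_0\mathrm{e}^{-pt}$, the solution of $\dot u = -pu$; the grids and parameter ranges below are illustrative assumptions. The fast decay of the singular values of the snapshot matrix is exactly what makes a cheap reduced order surrogate possible:

```python
import numpy as np

# Toy parametric model (illustrative assumption): u(p; t) = u0 * exp(-p t),
# sampled on grids of parameters p and times t. The snapshot matrix
# U[i, j] = u(p_j; t_i) is a finite-dimensional stand-in for the
# parametric map.
p_grid = np.linspace(0.5, 2.0, 40)
t_grid = np.linspace(0.0, 3.0, 100)
u0 = 1.0
U = u0 * np.exp(-np.outer(t_grid, p_grid))

# For such a smooth parameter dependence the singular values of the
# snapshot matrix decay extremely fast, so a few POD modes already give
# an accurate, cheap-to-evaluate surrogate.
s = np.linalg.svd(U, compute_uv=False)
rel = s / s[0]
assert rel[9] < 1e-4      # ten terms: far below a 0.01% relative level
assert rel[19] < 1e-10    # twenty terms: down at numerical noise level
```

A surrogate is then obtained by keeping only the leading left singular vectors and fitting the corresponding coefficient functions of $p$, instead of re-solving the evolution equation for every new parameter value.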
It will be shown that these representations may be seen as an element of a tensor product space. This in turn can be used to find very sparse approximations to those objects, and in turn much cheaper ways to evaluate the functional $\Psi$ or $\tns{\psi}$ for other parameter values. This association of parametric models and linear mappings has probably been known for a long time. The first steps which the authors are aware of in this direction of using linear methods are \citep{Loeve1945, Loeve1946}. A seminal work in this kind of inquiry is \citep{Karhunen1947} (with English \citep{Karhunen1947-e} and Spanish \citep{Karhunen1947-s} translations available online), which contains a first rather thorough exploration in the context of probability on infinite dimensional vector or function spaces, and many influential ideas, see also \citep{Karhunen1946, Karhunen1950} for similar work. The name Karhunen-Lo\`eve{} expansion for approximations of this kind in the context of probability theory was coined after the authors of \citep{Karhunen1947} and \citep{Loeve1945, Loeve1946}. This name is used in this context; in other areas the representation is now often known under the name \emph{proper orthogonal decomposition} (POD), which is firmly connected with the \emph{singular value decomposition} (SVD) of the associated linear map. In subsequent publications, see \citep{segal56-TAMS, segal58-TAMS, LGross1962, segalNonlin1969}, the terms \emph{generalised processes}, \emph{linear process}, \emph{weak distribution} or equivalently \emph{weak measure} or \emph{weak process} appear for the associated linear map. This is indicative of the generalisation possible with these linear methods, see also the monographs \citep{gelfand64-vol4, LoeveII, kreeSoize86}.
A first step of reviving and also connecting these methods of analysis with the theory of low-rank tensor approximations \citep{Hackbusch_tensor} was undertaken in \citep{boulder:2011} in the context of uncertainty quantification and inverse problems. It is furthermore also connected with non-orthogonal decompositions which are easier to compute, like the \emph{proper generalised decomposition} (PGD) \citep{chinestaBook}. Here we continue that endeavour of exhibiting the connection between parametric models, model reduction of parametric models, and sparse numerical approximations, now in a somewhat more general setting. It is on this theoretical background that one may analyse modern numerical methods which allow the numerical handling of very high-dimensional objects, i.e.\ where one has to deal with an essentially high-dimensional space for the parameters $p\in\C{P}$. Whereas the parametric map may be quite complicated, the association with a linear map translates the whole problem into one of linear functional analysis, and into linear algebra upon approximation and actual numerical computation. Also, whereas the set $\C{P}$ might have a quite arbitrary structure, this is replaced by a subspace of the vector space $\D{R}^{\C{P}}$ of real valued functions on $\C{P}$; in some sense this is a `problem oriented co-ordinate system' on the set. This is a frequent technique in mathematics, and it replaces the quite arbitrary set by a vector space, which is much more accessible. Let us recall a situation which is similar and may be well-known to many readers. When the need arose to deal with very singular functions, especially when Dirac needed an `ideal' object like the $\updelta$-`function', a fruitful mathematical formulation for this and other so-called \emph{generalised} functions or \emph{distributions} turned out to be the model of a linear map into the real numbers on a space of smooth regular functions, see e.g.\ \citep{gelfand64-vol1, gelfand64-vol2}.
The association with a linear map is quickly shown to be related to representations connected with the adjoint of the map, and the precise definition and properties of the associated linear map are given in \refS{parametric}. The connection with reproducing kernel Hilbert spaces (RKHS) \citep{berlinet} also appears naturally here, and it is briefly sketched. From the map and its adjoint we obtain the `correlation', which will be analysed in \refS{correlat}. Here the spectral analysis and factorisation of the correlation will become important \citep{Segal1978, reedSimon-vol1, reedSimon-vol2, DautrayLions3}. This also connects the whole idea of linear methods for representation with tensor representations, which appear naturally in the spectral analysis. The kernel, which on the RKHS is the reproducing kernel, now appears in another context than the one already alluded to in \refS{parametric}, and in \refS{kernel} the kernel side of the representation is analysed, which is the classical domain of integral transforms and integral equations as already envisaged in \citep{Karhunen1947}. Some examples and interpretations are explained in \refS{xmpls}, to give an idea of the breadth of possible applications of the theory. Here the connection of these linear methods to both linear model reduction and nonlinear model reduction in the form of low-rank tensor approximations \citep{Hackbusch_tensor} is mentioned and briefly explained. The last \refS{refine} before the conclusion in \refS{concl} deals briefly with certain refinements which are possible when some a priori structure of the represented spaces is known; we have connected it here with vector- and tensor-fields. \section{Kernel space} \label{S:kernel} In this section we take a closer look at the operator defined in \refeq{eq:def-C-H} in Proposition~\ref{P:new-cons}, especially for the case $\C{H}=\C{Q}$ and $B = R$, i.e.\ we analyse the operator $C_{\C{Q}} = R R^*$.
We shall restrict ourselves again to the case of a pure point spectrum. From \refC{other-eig} and \refeq{eq:C-tens-rep-Q} in \refC{fact-rep-tens} one knows that in an abstract sense \begin{equation} \label{eq:C-Q-decs} C_{\C{Q}} = U V^* C V U^* = U M_\mu U^* = \sum_m \lambda_m s_m \otimes s_m . \end{equation} But the point here is to spell this out in more analytical detail, especially for the case when, as indicated already at the beginning of \refS{correlat}, the inner product on $\C{Q}$ is given by a measure $\varpi$ on $\C{P}$: \begin{equation} \label{eq:Q-ip} \forall \varphi, \psi \in \C{Q}: \quad \bkt{\varphi}{\psi}_{\C{Q}} = \int_{\C{P}} \varphi(p) \psi(p)\, \varpi(\mathrm{d} p) . \end{equation} \subsection{Kernel spectral decomposition} \label{SS:kernel-spec-dec} Then $C$ is given by \begin{equation} \label{eq:C-tens-int} C = \int_{\C{P}} r(p)\otimes r(p)\, \varpi(\mathrm{d} p) , \end{equation} and $C_{\C{Q}}$ is represented by the kernel \begin{equation} \label{eq:C-Q-kernel} \varkappa(p_1,p_2) = \bkt{r(p_1)}{r(p_2)}_{\C{U}}, \end{equation} so that for all $\varphi, \psi \in \C{Q}$ \begin{equation} \label{eq:XVIII} \bkt{C_{\C{Q}}\varphi}{\psi}_{\C{Q}} = \bkt{R^* \varphi}{R^* \psi}_{\C{U}} = \iint_{\C{P}\times\C{P}} \varphi(p_1) \varkappa(p_1, p_2) \psi(p_2)\; \varpi(\mathrm{d} p_1) \varpi(\mathrm{d} p_2), \end{equation} i.e.\ $C_{\C{Q}}$ is a Fredholm integral operator \begin{equation} \label{eq:XVIII-int-op} (C_{\C{Q}} \psi)(p_1) = \int_{\C{P}} \varkappa(p_1, p_2) \psi(p_2)\; \varpi(\mathrm{d} p_2).
\end{equation} The abstract eigenvalue problem described in \refC{other-eig} for the operator $C_{\C{Q}}$, when taking into account the explicit description \refeq{eq:XVIII-int-op}, is translated into finding an eigenfunction $s\in\C{Q}$ and eigenvalue $\lambda$ such that \begin{equation} \label{eq:XVIII-int-ev} (C_{\C{Q}} s)(p_1) = \int_{\C{P}} \varkappa(p_1, p_2) s(p_2)\; \varpi(\mathrm{d} p_2) = \lambda \, s(p_1) , \end{equation} a \emph{Fredholm} integral equation \citep{courant_hilbert, atkinson97}. \begin{prop} \label{P:kernel-eig-fcts} From \refC{other-eig} and \refeq{eq:C-Q-decs} one knows that the eigenfunctions are $\{s_m\}_m \subset \C{Q}$, hence, in particular with the kernel $\varkappa$, Mercer's theorem \citep{courant_hilbert} gives \begin{equation} \label{eq:XVIIIb} \int_{\C{P}} \varkappa(p_1, p_2) s_m(p_2)\; \varpi(\mathrm{d} p_2) = \lambda_m \, s_m(p_1); \quad \varkappa(p_1, p_2) = \sum_m \lambda_m\, s_m(p_1) s_m(p_2), \end{equation} giving a decomposition of $\varkappa$, which is of course essentially identical to \refeq{eq:C-Q-decs}. \end{prop} In \refS{correlat} the analysis was based to a large extent on factorisations of the operator $C = R^* R$. Similarly, now one looks at factorisations of $C_{\C{Q}} = R R^*$. One example situation which occurs quite frequently fits nicely here, rather than in the later \refS{xmpls}: the case when $\C{P} = \D{R}^n$ with the usual Lebesgue measure, and the kernel is a convolution kernel, i.e.\ $\varkappa(p_1, p_2) = \kappa(p_1 - p_2)$. This means that the kernel is invariant under arbitrary displacements or shifts $z\in\D{R}^n$: $\varkappa(p_1, p_2) = \varkappa(p_1 + z, p_2 + z) = \kappa(p_1 - p_2)$. The eigenvalue equation \refeq{eq:XVIII-int-ev} becomes \begin{equation} \label{eq:XVIII-int-ev-z} (C_{\C{Q}} s)(p_1) = \int_{\D{R}^n} \kappa(p_1 - p_2) s(p_2)\; \mathrm{d} p_2 = \lambda \, s(p_1) .
\end{equation} As is well known, the symmetry of $\varkappa$ implies now that the function $\kappa$ has to be an \emph{even} function \citep{bracewell}, $\kappa(z) = \kappa(-z)$. It is clear that this form of equation can be treated by Fourier analysis \citep{courant_hilbert, atkinson97}; performing a Fourier transform on \refeq{eq:XVIII-int-ev-z} and denoting transformed quantities by a hat, e.g.\ $\hat{s}$, one obtains for all $\zeta \in \D{R}^n$ \begin{equation} \label{eq:XVIII-int-ev-F} \widehat{(C_{\C{Q}} s)}(\zeta) = \hat{\kappa}(\zeta) \hat{s}(\zeta) = \lambda \, \hat{s}(\zeta) . \end{equation} In this representation, $\widehat{(C_{\C{Q}} s)}$ has become a multiplication operator $M_{\hat{\kappa}}$ with the non-negative multiplier function $\hat{\kappa}(\zeta)\ge 0$ on the domain $\zeta\in\D{R}^n$. This is a concrete version of the case in the second spectral \refT{2nd-spec}; the multiplier function $\mu(t)$ in that theorem is $\hat{\kappa}(\zeta)$ here. The unitary transformation which has effectively \emph{diagonalised} the integral operator \refeq{eq:XVIII-int-ev} is the \emph{Fourier transform}, and the essential range of $\hat{\kappa}$ is the spectrum. This relates to the fact that the Fourier transform of the correlation $\kappa$ --- or more precisely the covariance, but we do not distinguish this here --- is usually called the spectrum, or more precisely the spectral density. In the terminology here the spectrum is the \emph{set of values} of $\hat{\kappa}$. It is now also easy to see how a continuous spectrum appears: on an infinite domain the integral operator \refeq{eq:XVIII-int-op} is typically not compact, and unless $\kappa$ is almost-periodic the operator has no point spectrum. The Fourier functions are \emph{generalised} eigenfunctions \citep{gelfand64-vol3, gelfand64-vol4, DautrayLions3}, as they are \emph{not} in $\mrm{L}_2(\D{R}^n)$. We shall not dwell further on this topic here.
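The Fredholm eigenproblem \refeq{eq:XVIII-int-ev} and its Mercer-type decomposition can be made concrete numerically. The following is a minimal sketch (not part of the original development), assuming $\C{P}=[0,1]$ with uniform quadrature weights for $\varpi$ and an arbitrary squared-exponential example kernel; a Nystr\"om-type symmetrisation turns the integral operator into a symmetric matrix eigenproblem:

```python
import numpy as np

# Nystroem-type discretisation of the Fredholm eigenproblem (eq. XVIII-int-ev):
# the integral against the measure is replaced by a quadrature sum, so the
# operator becomes the matrix K @ diag(w).  Kernel and grid are example choices.
n = 200
p = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                              # uniform quadrature weights
K = np.exp(-(p[:, None] - p[None, :]) ** 2 / 0.1)    # example kernel kappa(p1, p2)

# Symmetrised problem: D^{1/2} K D^{1/2} shares its spectrum with K @ diag(w)
D_half = np.diag(np.sqrt(w))
lam, S = np.linalg.eigh(D_half @ K @ D_half)
lam, S = lam[::-1], S[:, ::-1]                       # eigenvalues, descending

# Discrete eigenfunctions s_m(p_i), orthonormal in the weighted l2 sense
s = S / np.sqrt(w)[:, None]

# Mercer-type reconstruction: kappa(p1,p2) = sum_m lam_m s_m(p1) s_m(p2)
K_mercer = (s * lam) @ s.T
mercer_err = np.max(np.abs(K - K_mercer))
```

On the grid the eigenvectors satisfy the discrete analogue of \refeq{eq:XVIII-int-ev}, and the weighted outer-product sum reproduces the kernel, mirroring \refeq{eq:XVIIIb}.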
If we denote the Fourier transform on $\mrm{L}_2(\D{R}^n)$ by \begin{equation} \label{eq:fourier} F: f(p) \mapsto \hat{f}(\zeta)= (F f)(\zeta) = \int_{\D{R}^n} \exp(-2 \uppi\, \mathrm{i}\, p\cdot \zeta) f(p) \,\mathrm{d} p , \end{equation} where $p\cdot \zeta$ is the Euclidean inner product in $\D{R}^n$, then one may write this spectral decomposition and factorisation of $C_{\C{Q}}$ in this special case corresponding to \refC{fact-rep-tens} in the following way. \begin{coro} \label{C:rep-kern-spec} The operator $C_{\C{Q}}=R R^*$ has in the stationary case of \refeq{eq:XVIII-int-ev-z} the spectral decomposition \begin{equation} \label{eq:C-Q-spec-dec} C_{\C{Q}} = F^* M_{\hat{\kappa}} F. \end{equation} As $\hat{\kappa}(\zeta)\ge 0$, the square-root multiplier is given by \begin{equation} \label{eq:M-Q-spec-sqrt} M_{\hat{\kappa}}^{1/2} = M_{\sqrt{\hat{\kappa}}} . \end{equation} This induces the following factorisation of $C_{\C{Q}}=R R^*$: \begin{equation} \label{eq:C-Q-spec-fact} C_{\C{Q}} = (M_{\sqrt{\hat{\kappa}}}F)^* (M_{\sqrt{\hat{\kappa}}} F). \end{equation} \end{coro} From \refC{other-eig} one has $C_{\C{Q}} = U V^* C V U^*$, which gives further \begin{equation} \label{eq:fourier-dent-U-F} C_{\C{Q}} = U V^* C V U^* = U V^* V M_\mu V^* V U^* = U M_\mu U^*, \end{equation} meaning that essentially in this case $U = F^*$, the inverse Fourier transform. This implies the well-known Fourier representations of stationary random functions.
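A discrete, periodic analogue of this diagonalisation can be checked directly: on a cyclic grid a shift-invariant kernel yields a circulant matrix, and the DFT plays the r\^ole of $F$ in \refeq{eq:C-Q-spec-dec}. A hedged sketch, with an arbitrary even example kernel (all choices below are illustrative, not from the text):

```python
import numpy as np

# Periodic, discrete analogue of the stationary case: a shift-invariant kernel
# on a cyclic grid gives a circulant matrix, which the DFT diagonalises,
# so C_Q = F* M_kappahat F.  The kernel below is an arbitrary even example.
n = 128
idx = np.arange(n)
dist = np.minimum(idx, n - idx)                      # periodic distance
kappa = np.exp(-dist ** 2 / 20.0)                    # even kernel row kappa(z)
C = np.array([np.roll(kappa, i) for i in range(n)])  # circulant: C[i, j] = kappa(i - j)

kappa_hat = np.fft.fft(kappa).real                   # multiplier; real since kappa is even

# Apply C through the factorisation F* M_kappahat F and compare directly
x = np.random.default_rng(0).standard_normal(n)
Cx_direct = C @ x
Cx_fourier = np.fft.ifft(kappa_hat * np.fft.fft(x)).real
fourier_err = np.max(np.abs(Cx_direct - Cx_fourier))
```

The entries of `kappa_hat` are the eigenvalues of the circulant operator, the discrete counterpart of the multiplier function $\hat{\kappa}(\zeta)$.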
Denoting the shift operator for $z\in\D{R}^n$ as $T_z: f(p)\mapsto f(p+z)$, it is elementary that with $\eta_\zeta(p) := \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta)$ \[ T_z \eta_\zeta(p) = T_z \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta) = \exp(2 \uppi\, \mathrm{i}\, z\cdot \zeta)\, \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta) = \exp(2 \uppi\, \mathrm{i}\, z\cdot \zeta)\, \eta_\zeta(p), \] which says that the $\eta_\zeta(p)$ are `generalised' eigenfunctions of $T_z$ \citep{gelfand64-vol4, DautrayLions3}. They are \emph{not} true eigenfunctions as they are not in $\mrm{L}_2(\D{R}^n)$. Shift-invariance means that $T_z\,C_{\C{Q}} = C_{\C{Q}}\, T_z$, i.e.\ the operators commute. This implies that \citep{gelfand64-vol3, Segal1978, DautrayLions3} they have the same spectral resolution, i.e.\ the same true and generalised eigenfunctions. Both $T_z$ and $C_{\C{Q}}$ are effectively diagonalised by the Fourier transform $F$. This particular case of covariance has been treated extensively in the literature \citep{Karhunen1946, Karhunen1950, Yaglom-62-04, gelfand64-vol4, Yaglom-68-I, Yaglom-68-II, LoeveII, kreeSoize86, Matthies_encicl}. As is well known, the functions $\eta_\zeta(p)$ are formal or generalised eigenfunctions of a shift-invariant operator such as the one in \refeq{eq:XVIII-int-ev-z} \citep{gelfand64-vol3, gelfand64-vol4, DautrayLions3}.
This results in the following Karhunen-Lo\`eve{} expansions, also known as spectral expansions, for the formal representation of $r(p)$ in the case of a discrete spectrum \refeq{eq:C-tens-rep-Q}: \begin{equation} \label{eq:r-fact-rep-C-Q-d} r(p) = \sum_m \lambda_m^{1/2}\, \eta_{\zeta_m}(p) v_m, \end{equation} or, in the case of a continuous spectrum with generalised eigenvectors $v_\zeta$ of $C$, formally, \begin{equation} \label{eq:r-fact-rep-C-Q-c} r(p) = \int_{\D{R}^n} \sqrt{\hat{\kappa}(\zeta)}\, \eta_{\zeta}(p) v_\zeta \,\mathrm{d} \zeta = \int_{\D{R}^n} \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta)\,M_{\sqrt{\hat{\kappa}}}\, v_\zeta \,\mathrm{d} \zeta = F^* (M_{\sqrt{\hat{\kappa}}} v_\zeta), \end{equation} or, in the most general case, a combination of \refeq{eq:r-fact-rep-C-Q-d} and \refeq{eq:r-fact-rep-C-Q-c}. In that last \refeq{eq:r-fact-rep-C-Q-c}, the formal term $v_\zeta\,\mathrm{d} \zeta$ may be interpreted as a vector-valued measure $\tilde{v}(\mathrm{d} \zeta)$; in the case of a \emph{random} process or field $r(p)$ on $p\in\D{R}^n$ it is called a \emph{stochastic} measure. \subsection{Kernel factorisations} \label{SS:kernel-fact} The concrete realisation of the operator $C_{\C{Q}}$ as an integral kernel \refeq{eq:XVIII-int-op} opens the possibility to look for factorisations in the concrete setting of integral transforms. If, on the other hand, one has some other factorisation of the kernel, for example on some measure space $(\C{X},\nu)$: \begin{equation} \label{eq:XIX} \varkappa(p_1,p_2) = \int_{\C{X}} g(p_1,x) g(p_2,x)\; \nu(\mathrm{d} x) = \bkt{g(p_1,\cdot)}{g(p_2,\cdot)}_{\mrm{L}_2(\C{X})}, \end{equation} then the integral transform with kernel $g$ will play the r\^ole of a factor, as the mappings $R$ or $B$ did before. Let us recall that in the context of RKHS, cf.\ \refSS{RKHS}, such a factorisation is often used as a so-called \emph{feature map}.
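With a counting measure on a finite feature set, the factorisation \refeq{eq:XIX} reduces to a matrix statement $K = G G^{\ops{T}}$. The sketch below is illustrative only (the feature matrix is random example data); it also checks that the system $\chi_m = \lambda_m^{-1/2}\, X^* s_m$ built from such a factorisation is orthonormal:

```python
import numpy as np

# Kernel factorisation (eq. XIX) with counting measure: kappa = G @ G.T for a
# feature matrix G with G[i, x] = g(p_i, x).  Random example data throughout.
rng = np.random.default_rng(1)
n, d = 60, 10
G = rng.standard_normal((n, d))                      # feature map g(p_i, x)
K = G @ G.T                                          # kernel Gram matrix, rank <= d

lam, S = np.linalg.eigh(K)
lam, S = lam[::-1][:d], S[:, ::-1][:, :d]            # the d nonzero eigenpairs

# chi_m(x) = lam_m^{-1/2} sum_i g(p_i, x) s_m(p_i): discrete version of X* s_m
chi = (G.T @ S) / np.sqrt(lam)[None, :]
ortho_err = np.max(np.abs(chi.T @ chi - np.eye(d)))  # {chi_m} should be orthonormal
```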
\begin{defi} \label{D:int-trfm} Define the integral transform $X: \mrm{L}_2(\C{X}) \to \C{Q}$ with kernel $g$ as \begin{equation} \label{eq:XIX-int-trfm} X: \xi \mapsto \int_{\C{X}} g(\cdot,x)\xi(x)\; \nu(\mathrm{d} x). \end{equation} \end{defi} This results immediately in a new factorisation of $C_{\C{Q}}$ and a new representation: \begin{coro} \label{C:rep-kern-trfm} The operator $C_{\C{Q}}=R R^*$ with decomposition \refeq{eq:C-Q-decs} has the factorisation \begin{equation} \label{eq:C-Q-fact-g-X} C_{\C{Q}} = X X^*. \end{equation} Defining the orthonormal system $\{\chi_m\}_m \subset \mrm{L}_2(\C{X})$ by \begin{equation} \label{eq:XIX-k-trfm} \lambda_m^{1/2} \chi_m = X^* s_m; \quad \lambda_m^{1/2} \chi_m(x) = \int_{\C{P}} g(p,x) s_m(p)\; \varpi(\mathrm{d} p), \end{equation} this induces the following Karhunen-Lo\`eve{} representation of $r(x)$ on $\C{X}$: \begin{equation} \label{eq:r-C-Q-X-rep} r(x) = \sum_m \lambda_m^{1/2} \chi_m(x) v_m. \end{equation} \end{coro} \begin{proof} To prove \refeq{eq:C-Q-fact-g-X}, compute for any $\phi\in\C{Q}$ its adjoint transform $(X^* \phi)(x) = \int_{\C{P}} g(p,x) \phi(p)\; \varpi(\mathrm{d} p)$. Now for all $\varphi, \psi \in\C{Q}$ it holds that \begin{multline*} \bkt{X X^* \varphi}{\psi}_{\C{Q}} = \bkt{X^* \varphi}{X^* \psi}_{\mrm{L}_2(\C{X})} = \int_{\C{X}} (X^* \varphi)(x)(X^* \psi)(x) \; \nu(\mathrm{d} x) = \\ \int_{\C{X}} \left(\int_{\C{P}} g(p_1,x) \varphi(p_1)\; \varpi(\mathrm{d} p_1) \right) \left( \int_{\C{P}} g(p_2,x) \psi(p_2)\; \varpi(\mathrm{d} p_2) \right) \, \nu(\mathrm{d} x) = \\ \iint_{\C{P}\times\C{P}} \left( \int_{\C{X}} g(p_1,x) g(p_2,x)\; \nu(\mathrm{d} x) \right) \varphi(p_1) \psi(p_2)\; \varpi(\mathrm{d} p_1) \varpi(\mathrm{d} p_2) = \\ \iint_{\C{P}\times\C{P}} \varkappa(p_1,p_2) \varphi(p_1) \psi(p_2)\; \varpi(\mathrm{d} p_1) \varpi(\mathrm{d} p_2) = \bkt{C_{\C{Q}} \varphi}{\psi}_{\C{Q}}, \end{multline*} which is the bilinear form for \refeq{eq:C-Q-fact-g-X}. 
The rest follows from \refC{fact-rep-tens} with $\C{H}=\mrm{L}_2(\C{X})$ and $B = X^* U V^*: \C{U}\to\mrm{L}_2(\C{X})=\C{H}$, as from \refeq{eq:C-Q-decs} \[ X X^* = C_{\C{Q}} = U V^* C V U^*\quad \Rightarrow \quad C = (V U^* X) (X^* U V^*) = B^* B . \] \end{proof} The decomposition in Proposition~\ref{P:kernel-eig-fcts} may now also be seen in this light by setting $\C{X}=\D{N}$ with \emph{counting} measure $\nu$, such that $\mrm{L}_2(\C{X}) = \ell_2$, and $X$-transformation kernel $g(p,m) := \lambda_m^{1/2} s_m(p)$. Then \refeq{eq:XIX} becomes \refeq{eq:XVIIIb}, the concrete version of \refeq{eq:C-Q-decs}. The result in \refeq{eq:C-Q-spec-fact}, $C_{\C{Q}} = (M_{\sqrt{\hat{\kappa}}} F)^* (M_{\sqrt{\hat{\kappa}}} F)$, shows that the Fourier diagonalisation in \refC{rep-kern-spec} is a special case of such a kernel factorisation with $X := (M_{\sqrt{\hat{\kappa}}}\,F)^*$. As $\kappa$ is the inverse Fourier transform of $\hat{\kappa}$, \[ \kappa(p) = \int_{\D{R}^n} \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta)\,\hat{\kappa}(\zeta) \,\mathrm{d} \zeta , \] remembering that with the Fourier transform one has to consider the \emph{complex} space $\C{C}=\mrm{L}_2(\D{R}^n,\D{C})$ with inner product \[ \forall \varphi, \psi \in \C{C}: \quad \bkt{\varphi}{\psi}_{\C{C}} := \int_{\D{R}^n} \bar{\varphi}(\zeta) \psi(\zeta) \,\mathrm{d} \zeta \] ($\bar{\varphi}(\zeta)$ is the complex conjugate of $\varphi(\zeta)$), and by defining the $X$-transformation kernel $\gamma(p,\zeta) := \exp(2 \uppi\, \mathrm{i}\, p\cdot \zeta)\,\sqrt{\hat{\kappa}(\zeta)}$, one obtains the kernel factorisation \begin{multline*} \varkappa(p_1,p_2) = \kappa(p_1 - p_2) = \int_{\D{R}^n} \exp(2 \uppi\, \mathrm{i}\, (p_1 - p_2) \cdot \zeta)\, \hat{\kappa}(\zeta) \,\mathrm{d} \zeta = \\ \int_{\D{R}^n}\left(\exp(-2\uppi\,\mathrm{i}\,p_2\cdot\zeta)\, \sqrt{\hat{\kappa}(\zeta)}\right)
\left(\exp(2\uppi\,\mathrm{i}\,p_1\cdot\zeta)\,\sqrt{\hat{\kappa}(\zeta)}\right)\,\mathrm{d} \zeta = \bkt{\gamma(p_2,\cdot)}{\gamma(p_1,\cdot)}_{\C{C}}. \end{multline*} \section{Interpretations, decompositions, and reductions} \label{S:xmpls} After all the abstract deliberations it is now time to see some concrete examples, which will show that the above description is in many cases an abstract statement of already very familiar constructions. An important example of these decompositions is when $\C{U}$ is also a space of functions. Imagine for example a scalar \emph{random field} $u(x,\omega)$, where $x\in\C{X}\subset\D{R}^n$ is a spatial variable, and $\omega\in\Omega$ is an elementary event in a probability space $\Omega$ with probability measure $\D{P}$. Na\"ively, at each $x\in\C{X}$ there is a random variable (RV) $u(x,\cdot)$, and for each realisation $\omega\in\Omega$ one has an instance of a spatial field $u(\cdot,\omega)$. To make things simple, assume that $u\in\mrm{L}_2(\C{X}\times\Omega)$, which is isomorphic to the tensor product $\mrm{L}_2(\C{X})\otimes \mrm{L}_2(\Omega) \cong \mrm{L}_2(\C{X}\times\Omega)$. Now one may investigate the splitting $p=x, \C{P}=\C{X}$ and $r(p) = u(p,\cdot)$ with $\C{U}= \mrm{L}_2(\Omega)$ and $\C{Q}=\mrm{L}_2(\C{X})$, where for each $p\in\C{X}$ the model $r(p)$ is an RV. Then the operator $C$ is on $\C{U}=\mrm{L}_2(\Omega)$, and one usually investigates $C_{\C{Q}}$ on $\C{Q}=\mrm{L}_2(\C{X})$, an operator on a spatial field. Turning everything around, one may investigate the splitting $p=\omega, \C{P}=\Omega$ and $r(p)=u(\cdot,p)$ with $\C{U}=\mrm{L}_2(\C{X})$ and $\C{Q}=\mrm{L}_2(\Omega)$, where for each $p\in\Omega$ the model $r(p)$ is a spatial field. The operator $C$ on $\C{U}=\mrm{L}_2(\C{X})$ is what was before the operator $C_{\C{Q}}$ and vice versa.
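In a finite, sampled setting the two splittings just described become the two Gram matrices of a single data matrix. A small illustrative sketch (random example data, uniform weights assumed), showing that the two correlation operators share their nonzero spectrum:

```python
import numpy as np

# Discrete analogue of the two splittings of u(x, omega): a sample matrix
# U with U[i, s] = u(x_i, omega_s).  The 'spatial' correlation U U^T / S and
# the 'stochastic' correlation U^T U / S swap roles under the two splittings
# but share their nonzero eigenvalues.  Random example data.
rng = np.random.default_rng(2)
nx, ns = 40, 25
U = rng.standard_normal((nx, ns))

C_spatial = U @ U.T / ns                             # operator on spatial fields
C_stoch = U.T @ U / ns                               # operator on random variables

ev_spatial = np.sort(np.linalg.eigvalsh(C_spatial))[::-1]
ev_stoch = np.sort(np.linalg.eigvalsh(C_stoch))[::-1]
k = min(nx, ns)
swap_err = np.max(np.abs(ev_spatial[:k] - ev_stoch[:k]))
```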
\subsection{Examples and interpretations} \label{SS:xmpl} \begin{enumerate} \item Taking up this first example, assume that the Hilbert space $\C{U}$ is a space of centred (zero mean) random variables (RVs), e.g.\ $\C{U}=\mrm{L}_2(\Omega)$ with inner product $\bkt{\xi}{\eta}_{\C{U}} := \EXP{\xi \eta} = \int_\Omega \xi(\omega) \eta(\omega)\,\D{P}(\mathrm{d} \omega)$, the covariance. Let $r$ be a zero-mean scalar random field; each $r(p)=u(p,\cdot)$ is then a zero-mean RV, or, for $n=1$, a stochastic process indexed by $p\in\C{P} \subseteq \D{R}^n$. Then $R:\C{U}=\mrm{L}_2(\Omega)\to\C{Q}=\mrm{L}_2(\C{X})$ maps the RV $\xi$ to its spatial covariance with the random field, $R \xi = \left(p\mapsto \EXP{\xi(\cdot) u(p,\cdot)}\right)$. The representation operator $R^*$ maps fields into random variables, $R^* v = \int_{\C{X}} v(x) u(x,\cdot)\,\mathrm{d} x$. The operator $C$ on $\C{U}=\mrm{L}_2(\Omega)$ is rarely investigated; more typically one looks at $C_{\C{Q}}$ on $\C{Q}=\mrm{L}_2(\C{X})$, represented by its kernel $\varkappa$ as an integral equation \refeq{eq:XVIII-int-ev} on the spatial domain $\C{X}$. The kernel is the usual covariance function $\varkappa(p_1,p_2)$. This is the application where the name Karhunen-Lo\`eve{} expansion was originally used. We have used it here in a more general fashion. \item Similar to the previous example, but the random field is assumed to be stochastically homogeneous, which means that the covariance function $\varkappa(p_1,p_2)$ is \emph{shift invariant} or \emph{translation invariant}. This example has already been briefly described at the end of \refSS{kernel-spec-dec}, and there is much literature on this subject, e.g.\ \citep{Karhunen1946, Karhunen1950, Yaglom-62-04, gelfand64-vol4, Yaglom-68-I, Yaglom-68-II, LoeveII, kreeSoize86, Matthies_encicl}, so we will not dwell further on this. \item Here we look at the second interpretation of the random field described above.
This is a special case of what has been described at the beginning of \refS{correlat}, where the measure $\varpi$ on $\C{P}$ is the probability measure $\varpi = \D{P}$. For simplicity let $r(p)$ be a centred $\C{U}$-valued RV, and each $r(p)=u(\cdot,p)$ is an instance of a spatial field. The associated linear operator $R:\C{U}=\mrm{L}_2(\C{X})\to \C{Q}=\mrm{L}_2(\Omega)$ maps spatial fields to \emph{weighted averages}, an RV; $R v = \int_{\C{X}} v(x) u(x,\cdot)\,\mathrm{d} x\in\C{Q}=\mrm{L}_2(\Omega)$. It is what $R^*$ was in the first example. And here the representation operator $R^*$ is what $R$ was in the first example. Then $C$ is the covariance operator, operating on spatial fields. This was $C_{\C{Q}}$ in the first example. \item If $\C{P}=\{1,2,\ldots,n\}$, then $\C{U}=\mathop{\mathrm{span}}\nolimits\{r(\C{P})\}\cong \D{R}^n$ is finite dimensional, and $\D{R}^{\C{P}} = \D{R}^n$ by definition. Hence both $\C{R} =\D{R}^n$ and $\C{Q}=\D{R}^n$, possibly with different inner products. In any case, $\varkappa$ is the Gram matrix of the vectors $\{r_1,\ldots,r_n\}$. The SVDs of $R$ are matrix SVDs, and the representation map $R^*$ is connected to the Karhunen-Lo\`eve{} expansion, which here is called the \emph{proper orthogonal decomposition} (POD). \item If $\C{P}=[0,T]$ and $r(t)$ ($t\in [0,T]$) is the response of a dynamical system with state space $\C{U}$, one may take $\C{Q}=\mrm{L}_2([0,T])$. The associated linear map $R$ tells us the dynamics of certain components. To illustrate this, assume for the moment that $\C{U}=\D{R}^n$, a dynamical system with $n$ degrees of freedom. Taking each canonical unit vector $\vek{e}_j$ in turn, one sees that $R \vek{e}_j = (t\mapsto \vek{e}_j^{\ops{T}}\vek{u}(t) = u_j(t))$, i.e.\ the time evolution of the $j$-th component. The representation operator $R^*:\C{Q}\to\C{U}$, $\C{Q}\subseteq\mrm{L}_2([0,T])$, maps scalar time-functions to their weighted average with the dynamics $R^* \phi = \int_{[0,T]} \phi(t) u(t)\,\mathrm{d} t \in \C{U}$.
\item Combining the two previous examples gives the method of \emph{temporal} snapshots, and the Karhunen-Lo\`eve{} expansion becomes the POD for a dynamical system. \item If $\C{P}=\{ \omega_s|\; \omega_s \in \Omega\}$ are samples from some probability space $\Omega$, then one obtains the POD method for samples of some $\C{U}$-valued RV. \end{enumerate} \subsection{Decompositions and model reduction} \label{SS:mod-red} Let us go back to the example at the beginning in \refS{intro}, where the quantity of interest is the time evolution of a dynamical system, $t\mapsto v(t,q)$ with state space $\C{V}$, dependent on a parameter $q\in\C{S}$. Assume for simplicity that the whole process can be thought of as an element of $\C{V} \otimes \mrm{L}_2([0,T]\times \C{S}) \cong \C{V} \otimes \mrm{L}_2([0,T]) \otimes \mrm{L}_2(\C{S})$. One may take $\C{U} = \C{V}\otimes\mrm{L}_2([0,T])$, the time-histories in state space, and $p=q$, $\C{P}=\C{S}$, and $\C{Q}=\mrm{L}_2(\C{S})$. But it is also possible to take $\C{U}=\C{V}$ and $p=(t,q)$, $\C{P}=[0,T]\times\C{S}$, $\C{Q}=\mrm{L}_2([0,T]) \otimes\mrm{L}_2(\C{S})$. Staying with the latter split, for example the representation \refeq{eq:XIV} in \refT{1st-spec-rep} becomes \begin{equation} \label{eq:XV} r(p) = \sum_m \lambda_m^{1/2} \, s_m(p) v_m = \sum_m \lambda_m^{1/2} \, s_m((t,q)) v_m. \end{equation} Now each of the scalar functions $s_m(t,q)$ may be seen as a parametric model $q\mapsto s_m(\cdot,q)$ of time functions in $\mrm{L}_2([0,T])$. So now one may investigate the parametric model based on $\C{U}_* = \mrm{L}_2([0,T])$ and $\C{Q}_* = \mrm{L}_2(\C{S})$ for each of the $s_m$. Frequently the parameter space is a product space \[ \C{S} = \C{S}_{I} \times \C{S}_{II} \times \ldots = \prod_K \C{S}_K,\quad K=I, II, \dots , \] with a product measure $\varpi = \varpi_I \otimes \varpi_{II} \dots $, with $s_m(t,q)= s_m(t,(q_I,q_{II},\dots))$.
As then \[ \mrm{L}_2(\C{S}) = \mrm{L}_2(\prod_K \C{S}_K) = \mrm{L}_2(\C{S}_I) \otimes \mrm{L}_2(\C{S}_{II}) \otimes \dots = \bigotimes_K \mrm{L}_2(\C{S}_K), \quad K=I, II, \dots , \] one obtains \begin{equation} \label{eq:XXIII-tens} \C{Q} = \C{U}_*\otimes \C{Q}_* = \C{U}_* \otimes \C{Q}_I \otimes \C{Q}_{II} \otimes \dots , \end{equation} with $\C{Q}_K = \mrm{L}_2(\C{S}_K)$ for $K=I, II, \dots$. It is clear that $\C{Q}_* = \bigotimes_K \C{Q}_K$ may be further split by different associations depending on the value of $J$: \begin{equation} \label{eq:XXIII-tens-p} \C{Q}_* = \C{U}_{**}\otimes\C{Q}_{**} = \left(\bigotimes_{K=I}^J \C{Q}_K\right) \otimes \left(\bigotimes_{K>J} \C{Q}_K\right) . \end{equation} As will be seen, this leads to hierarchical tensor approximations, e.g.\ \citep{Hackbusch_tensor, boulder:2011}. The model space has now been decomposed as \begin{equation} \label{eq:XXIII-tens-p-all} \C{U}\otimes\C{Q} = \C{U} \otimes \C{U}_* \otimes \C{Q}_* = \C{U} \otimes \C{U}_* \otimes \left(\bigotimes_K \C{Q}_K\right) . \end{equation} Computations usually require that one chooses finite dimensional subspaces and bases therein; in the example case of \refeq{eq:XXIII-tens-p-all} assume that these are \begin{align} \label{eq:XXXa} \text{span }\{ u_n\}_{n=1}^N =\C{U}_N \subset\C{U},\quad &\dim \C{U}_N = N,\\ \label{eq:XXXb} \text{span } \{\tau_j\}_{j=1}^J =\C{U}_{*J} \subset L_2([0,T]) = \C{U}_*, \quad &\dim \C{U}_{*J} =J,\\ \nonumber \forall \ell_K=1,\ldots,L_K, \; K=I, II, \dots :&\\ \label{eq:XXXc} \text{span } \{s_{\ell_K}\}_{\ell_K=1}^{L_K} = \C{Q}_{K, L_K} \subset L_2(\C{S}_K) = \C{Q}_{K}, \quad &\dim \C{Q}_{K,L_K} = L_K.
\end{align} An approximation to $u\in \C{U}\otimes\C{Q}$ in the space described in \refeq{eq:XXIII-tens-p-all} is thus given by \begin{equation} \label{eq:XXXI} u(x,t,q_I, \ldots) \approx \sum_{n=1}^N \sum_{j=1}^J \sum_{\ell_I=1}^{L_I} \ldots \sum_{\ell_K = 1}^{L_K}\dots \tns{u}^{\ell_I,\ldots,\ell_K,\dots}_{n,j} u_n(x) \tau_j(t) \left(\prod_{K} s_{\ell_K}(q_K)\right). \end{equation} Via \refeq{eq:XXXI} one sees that the tensor \begin{equation} \label{eq:XXIII-tens-concr} \tnb{u} = \left(\tns{u}^{\ell_I,\ldots,\ell_K,\dots}_{n,j}\right) \in \D{R}^{(N \times J \times \prod_K L_K)} \cong \D{R}^N \otimes \D{R}^J \otimes \bigotimes_K \D{R}^{L_K} \end{equation} represents the total parametric response $u(x,t,q_I, \ldots)$. One way to perform model reduction is to apply the techniques described before on the finite dimensional approximation space of the one described in \refeq{eq:XXIII-tens-p-all} \begin{equation} \label{eq:XXIII-tens-p-all-f} \C{U}_N \otimes \C{Q}_M = \C{U}_N \otimes \C{U}_{*J} \otimes \left(\bigotimes_K \C{Q}_{K,L_K}\right) \subset \C{U} \otimes \C{U}_* \otimes \left(\bigotimes_K \C{Q}_K\right) , \end{equation} with $\C{Q}_{M} = \C{U}_{*J} \otimes \left(\bigotimes_K \C{Q}_{K,L_K}\right)$ and dimension $\dim \C{Q}_M = M= J \times \prod_K L_K$, but not using the full dimension, as the spectral analysis of the `correlation' operator $C$ picks out the important parts. Another kind of reduction works directly with the tensor $\tnb{u}$ in \refeq{eq:XXIII-tens-concr}. It has formally $R^\prime= N \times J \times \prod_K L_K$ terms. Here we only touch on this subject, which is a \emph{nonlinear} kind of model reduction, and that is to represent this tensor with far fewer terms, $R \ll R^\prime$, through what is termed a low-rank approximation; for a thorough treatment see the monograph \citep{Hackbusch_tensor}.
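The simplest instance of such a reduction, a truncated SVD of one matricisation of $\tnb{u}$, can be sketched as follows (all sizes and data are arbitrary example choices); the discarded singular values give exactly the approximation error, which is why the spectral structure governs how many terms a good reduced model needs:

```python
import numpy as np

# Truncated SVD of one matricisation of the coefficient tensor: the simplest
# model reduction over a splitting.  The example tensor is built with rapidly
# decaying 'ranks' to mimic a smooth parametric response (arbitrary data).
rng = np.random.default_rng(3)
N, J, L = 12, 10, 8
a = rng.standard_normal((N, 5))
b = rng.standard_normal((J, 5))
c = rng.standard_normal((L, 5))
weights = 2.0 ** -np.arange(5)                       # decaying term weights
u = np.einsum('r,nr,jr,lr->njl', weights, a, b, c)

M = u.reshape(N, J * L)                              # matricisation for the split
W, sv, Vt = np.linalg.svd(M, full_matrices=False)

r = 3                                                # keep r separated terms
M_r = (W[:, :r] * sv[:r]) @ Vt[:r, :]
err = np.linalg.norm(M - M_r)                        # Frobenius error
tail = np.sqrt(np.sum(sv[r:] ** 2))                  # equals err (Eckart-Young)
```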
Whereas the so-called \emph{canonical polyadic} (CP) decomposition uses the flat tensor product in \refeq{eq:XXIII-tens-p-all} --- under the name \emph{proper generalised decomposition} (PGD) \citep{Nouy2009, NouyACM:2010, ammChin2010, LadevezeCh2011} this is also a computational method to solve an effectively high-dimensional problem such as \refeq{eq:I} or \refeq{eq:I-p}, see the review \citep{chinestaPL2011} and the monograph \citep{chinestaBook} --- the recursive use of splittings \refeq{eq:XXIII-tens-p} leads to \emph{hierarchical} tensor approximations, e.g.\ \citep{Grasedyck2010}. The index set can be thought of as partitioned and arranged into a binary tree, each time causing a split as in \refeq{eq:XXIII-tens-p}, or rather on the finite dimensional approximation \refeq{eq:XXIII-tens-p-all-f}, or equivalently in the concrete tensor in \refeq{eq:XXIII-tens-concr}. Particular cases of this are the \emph{tensor train} (TT) \citep{oseledetsTyrt2010, oseledets2011} and more generally the \emph{hierarchical Tucker} (HT) decompositions; see the review \citep{GrasedyckKressnerTobler2013} and the monograph \citep{Hackbusch_tensor}. An example of how this representation also allows fast post-processing, such as finding maxima, is given in \citep{Espig:2013}. Let us also mention that these sparse or low-rank tensor representations are connected with the expressive power of deep neural networks \citep{CohenSha2016, KhrulkovEtal2018}. Neural networks are one possibility of choosing the approximation functions in \refeq{eq:XXXc}. Obviously, a good reduced order model is one with only a few terms. One recognises immediately that the SVD structure of the associated linear map of such a split determines how many terms are needed for a good approximation. Equivalently, it is the structure of the spectrum of the appropriate correlation operator associated with the splitting.
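A minimal sketch of the recursive splitting idea is the tensor-train construction via sequential reshaped SVDs; the version below keeps full ranks (no truncation), so the reconstruction is exact. It illustrates the mechanism only, not any specific cited algorithmic variant:

```python
import numpy as np

# Tensor-train construction by sequential reshaped SVDs: each step performs
# one binary splitting of the index set.  Full ranks are kept, so the
# contraction of the cores reproduces the tensor exactly.  Example data.
rng = np.random.default_rng(4)
shape = (4, 5, 6, 7)
u = rng.standard_normal(shape)

cores, mat, r_prev = [], u.reshape(shape[0], -1), 1
for nk in shape[:-1]:
    mat = mat.reshape(r_prev * nk, -1)
    W, sv, Vt = np.linalg.svd(mat, full_matrices=False)
    r_k = len(sv)                                    # no truncation
    cores.append(W.reshape(r_prev, nk, r_k))
    mat = sv[:, None] * Vt                           # carry the rest to the right
    r_prev = r_k
cores.append(mat.reshape(r_prev, shape[-1], 1))

# Contract the train back together and compare with the original tensor
rec = cores[0]
for core in cores[1:]:
    rec = np.tensordot(rec, core, axes=1)
tt_err = np.max(np.abs(u - rec.reshape(shape)))
```

Truncating the singular values in each step would turn this exact decomposition into a compressed, hierarchical low-rank approximation.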
\section{Parametric problems} \label{S:parametric} Let $r: \C{P} \rightarrow \C{U}$ be a parametric description of one of the objects alluded to in the introduction, or the state or response of some system, where $\C{P}$ is some set, and $\C{U}$ is assumed --- for the sake of simplicity --- as a separable Hilbert space with inner product $\langle\cdot|\cdot\rangle_{\C{U}}$. More general locally convex vector spaces are possible, but the separable Hilbert space is in many ways the simplest model. The situation in its purest form may be thought of in an abstract way as follows: $ F: \C{U} \times \C{P} \to \C{U} $ is some parameter-dependent mapping like \refeq{eq:I-p} in \refS{intro}, which is well-posed in the sense that for each $p\in\C{P}$ it has a unique solution $r(p)$ satisfying \begin{equation} \label{eq:II} F(r(p),p) = 0, \end{equation} implicitly defining the function $r(p)$ alluded to above. The mapping $F$ is representative of the conditions which $r(p)$ has to satisfy. \ignore{ one of the objects alluded to in the introduction, where $\C{P}$ is some set, and $\C{V}$ for the sake of simplicity is assumed as a separable Hilbert space with inner product $\langle\cdot|\cdot\rangle_{\C{U}}$ (the meaning of the index $\C{U}$ will soon become clear). } What we desire is a simple representation / approximation of that function, which avoids solving \refeq{eq:II} every time one wants to know $r(p)$ for a new $p\in\C{P}$, i.e.\ a \emph{proxy-} or \emph{surrogate model}. Of course, the relation \refeq{eq:II} or its possible source \refeq{eq:I-p} not only defines $r(p)$; it is also an important relation which any candidate approximation has to satisfy as well as possible, and there may be further such relations. This is important, as a proxy-model will often also be used in the sense of a model order reduction, so that the computed $r_c(p)$ will be an approximation.
Then the degree to which a relation like \refeq{eq:II} is satisfied can be the basis for estimating how good a particular approximation $r_c(p)$ is. One relatively well-known approach when dealing with random models \citep{segal58-TAMS, LGross1962, gelfand64-vol4, segalNonlin1969, kreeSoize86} turns the problem into the study, and ultimately the approximation, of a linear mapping. The details in the simplest case are as follows. \subsection{Associated linear map} \label{SS:ass-lin-map} Assume without significant loss of generality that \ignore{ \begin{equation} \label{eq:III} \C{U} = \mathop{\mathrm{span}}\nolimits r(\C{P}) = \mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r \subseteq \C{V} \end{equation} } $\mathop{\mathrm{span}}\nolimits r(\C{P}) = \mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r \subseteq \C{U} $, the subspace of $\C{U}$ which is spanned by all the vectors $\{r(p)|\; p\in\C{P}\}$, is dense in $\C{U}$. \begin{defi} \label{D:ass-lin} Then to each such function $r:\C{P}\to \C{U}$ one may associate a linear map \begin{equation} \label{eq:IV} R: \C{U} \ni u \mapsto \bkt{r(\cdot)}{u}_{\C{U}} \in \D{R}^{\C{P}}, \end{equation} where $\D{R}^{\C{P}}$ is the space of real-valued functions on $\C{P}$ and $\bkt{r(\cdot)}{u}_{\C{U}}$ is the real-valued map on $\C{P}$ given by $\C{P}\ni p \mapsto \bkt{r(p)}{u}_{\C{U}}\in\D{R}$. \end{defi} \begin{lem} \label{L:inj} By construction, $R$ restricted to $\mathop{\mathrm{span}}\nolimits\mathop{\mathrm{im}}\nolimits r = \mathop{\mathrm{span}}\nolimits r(\C{P})$ is injective, and hence has an inverse on its restricted range $\tilde{\C{R}}:=R(\mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r)$. \end{lem} \begin{proof} Assume that for $u\in \mathop{\mathrm{im}}\nolimits r=r(\C{P})$, $u\ne 0$, it holds that $R u = 0$. This means that $\exists p_1\in\C{P}$ such that $u=r(p_1)$, and $(Ru)(p)=\bkt{r(p)}{u}_{\C{U}}= \bkt{r(p)}{r(p_1)}_{\C{U}}=0$ for all $p\in\C{P}$.
Taking $p=p_1$, this means that $\bkt{r(p_1)}{u}_{\C{U}}=\bkt{r(p_1)}{r(p_1)}_{\C{U}}= \nd{r(p_1)}_{\C{U}}^2=\nd{u}_{\C{U}}^2=0$. This can only hold for $u = 0$, contradicting the assumption $u\ne 0$, and so $R$ is injective on $\mathop{\mathrm{im}}\nolimits r$ and by linearity also on $\mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r$. It is obviously also surjective from $\mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r$ to $\tilde{\C{R}}$, therefore \emph{bijective}, hence has an inverse $R^{-1}$ on $\tilde{\C{R}}$. \end{proof} \begin{defi} \label{D:R-ip} This may be used to define an inner product on $\tilde{\C{R}}$ as \begin{equation} \label{eq:V} \forall \phi, \psi \in \tilde{\C{R}} \quad \langle \phi | \psi \rangle_{\C{R}} := \langle R^{-1} \phi | R^{-1} \psi \rangle_{\C{U}}, \end{equation} and to denote the completion of $\tilde{\C{R}}$ with this inner product by $\C{R}$. \end{defi} From \refL{inj} and \refD{R-ip} one immediately obtains \begin{prop} \label{P:R-unitary} It is obvious from \refeq{eq:V} that $R^{-1}$ is a bijective isometry between $\mathop{\mathrm{span}}\nolimits \mathop{\mathrm{im}}\nolimits r$ and $\tilde{\C{R}}$, hence continuous, and the same holds also for $R$. Hence extended by continuity to the completion Hilbert spaces, $R$ and $R^*=R^{-1}$ are \emph{unitary} maps. \end{prop} Up to now, no structure on the set $\C{P}$ has been assumed, whereas on $\C{U}$ the inner product is assumed to measure what is important for the state $r(p) \in \C{U}$, i.e.\ vectors with large norm are considered important. This is carried via the map $R$ defined in \refeq{eq:IV} onto the space of scalar functions $\C{R}$ on the set $\C{P}$, and the inner product there measures essentially the same thing as the one on $\C{U}$.
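In finite dimensions the associated map of \refeq{eq:IV} is just the matrix whose rows are the vectors $r(p_i)$, and the Moore--Penrose pseudoinverse realises $R^{-1}$ on the restricted range $\tilde{\C{R}}$. A hedged numerical sketch with random example data:

```python
import numpy as np

# Finite-dimensional associated map (eq. IV): rows of Rm are the vectors
# r(p_i), so (R u)(p_i) = <r(p_i), u>.  On span{r(p_i)} the pseudoinverse
# realises R^{-1}, and the inner product (eq. V) makes R an isometry there.
rng = np.random.default_rng(5)
k, n = 4, 7
Rm = rng.standard_normal((k, n))                     # example vectors r(p_i) as rows

u = Rm.T @ rng.standard_normal(k)                    # some u in span{r(p_i)}
phi = Rm @ u                                         # phi = R u, a function on P
u_back = np.linalg.pinv(Rm) @ phi                    # R^{-1} phi on the range
inv_err = np.linalg.norm(u - u_back)

v = Rm.T @ rng.standard_normal(k)                    # second vector in the span
psi = Rm @ v
ip_U = u @ v                                         # <u, v>_U
ip_R = u_back @ (np.linalg.pinv(Rm) @ psi)           # <phi, psi>_R via eq. (V)
iso_err = abs(ip_U - ip_R)
```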
The only thing that changes is that now one does not have to work with the space $\C{U}$, as everything is mirrored by the real functions $\phi \in \C{R}$, which may be seen as a `problem-oriented co-ordinate system' on $\C{P}$. \subsection{Reproducing kernel Hilbert space} \label{SS:RKHS} Given the maps $r:\C{P}\to\C{U}$ and $R:\C{U}\to\C{R}$, one may define the \emph{reproducing kernel} \citep{berlinet, Janson1997}: \begin{defi} \label{D:RK} The reproducing kernel associated with $r:\C{P}\to\C{U}$ and $R:\C{U}\to\C{R}$ is the function $\varkappa \in \D{R}^{\C{P} \times\C{P}}$ given by: \begin{equation} \label{eq:VI} \C{P} \times\C{P} \ni (p_1, p_2) \mapsto \varkappa(p_1, p_2) := \bkt{r(p_1)}{r(p_2)}_{\C{U}} \in \D{R}. \end{equation} \end{defi} It is straightforward to verify that: \begin{thm} \label{T:rep-prop} For all $p\in\C{P}$: $\varkappa(p,\cdot)\in\tilde{\C{R}}\subseteq\C{R}$, and $\mathop{\mathrm{span}}\nolimits \{ \varkappa(p,\cdot)\;\mid\; p\in\C{P} \}=\tilde{\C{R}}$, i.e.\ the kernel $\varkappa$ generates the space $\C{R}$. The point evaluation functional $\updelta_p$ is a continuous map on $\C{R}$, given by the inner product with the reproducing kernel: \begin{equation} \label{eq:VII} \updelta_p : \C{R} \ni \phi \mapsto \updelta_p(\phi) = \ip{\updelta_p}{\phi}_{\C{R}^* \times \C{R}} := \phi(p) = \bkt{\varkappa(p,\cdot)}{\phi}_{\C{R}} \in \D{R}. \end{equation} This reproduction of $\phi$ leads to the name \emph{reproducing kernel}. \end{thm} \begin{proof} The first statement is due to the fact that $\varkappa(p,\cdot)=\ip{r(p)}{r(\cdot)}_{\C{U}} =Rr(p)(\cdot)$. For the reproducing property, similarly as in \refL{inj}, we take $\phi\in \tilde{\C{R}}$, i.e.\ $\exists u\in\C{U}$ with $\phi(\cdot)=\bkt{r(\cdot)}{u}_{\C{U}} = R u(\cdot)$, and then extend by continuity to $\C{R}$.
It holds for all $p\in\C{P}$: \[ \updelta_p(\phi) = \bkt{\varkappa(p,\cdot)}{\phi}_{\C{R}} = \bkt{R r(p)}{R u}_{\C{R}} = \bkt{r(p)}{u}_{\C{U}} = Ru(p) =\phi(p) ,\] which is the reproducing property. As $\updelta_p$ is defined via the inner product, it is obviously continuous in $\phi$, hence this extends to the closure of $\tilde{\C{R}}$, which is $\C{R}$. \end{proof} Hilbert spaces with such a reproducing kernel are called a \emph{reproducing kernel Hilbert space} (RKHS) \citep{berlinet, Janson1997}. In other settings like classification or machine learning with support vector machines, where $p \in \C{P}$ has to be classified as belonging to a certain subsets of $\C{P}$, one can use such a map $r:\C{P}\to\C{U}$, the so-called \emph{feature map}, implicitly through using an appropriate kernel. This is then referred to as the `kernel trick', and classification may be achieved by mapping these subsets with $r$ into $\C{U}$ and separating them with hyperplanes---a linear classifier. Observe also that the set $\C{P}$ is embedded in $\C{R}$ via the correspondence $\C{P}\ni p \mapsto \varkappa(p,\cdot) \in \C{R}$, which is the Riesz-representation in $\C{R}$ of the continuous linear functional $\updelta_p \in \C{R}^*$. Now we have a Hilbert space $\C{R}$ of real-valued functions on $\C{P}$ and a linear surjective map $R^{-1}:\C{R}\to\C{U}$ which can be used for representation. In fact, as $\C{U}$ was assumed separable, so is the isomorphic space $\C{R}$, one may now choose a basis $\{\varphi_m\}_{m\in\D{N}}$ in $\C{R}$, which may be assumed to be a complete orthonormal system (CONS). 
\begin{coro} \label{C:RKHS-decomp} With the CONS $\{y_m \;|\; y_m= R^{-1} \varphi_m = R^*\varphi_m\}_{m\in\D{N}}$ in $\C{U}$, the operator $R$, and its adjoint or inverse $R^*=R^{-1}$, and the parametric element $r(p)$ become \begin{equation} \label{eq:VII0} R = \sum_m \varphi_m \otimes y_m; \quad R^*=R^{-1} = \sum_m y_m \otimes \varphi_m; \quad r(p) = \sum_m \varphi_m(p) y_m = \sum_m R^* \varphi_m . \end{equation} These decompositions may also be seen as the \emph{singular value decompositions} of the maps $R$ and $R^*=R^{-1}$, and are akin to the \emph{Karhunen-Lo\`eve{} decomposition} of $r(p)$. \end{coro} \begin{proof} As $R$ is unitary, its singular values are all equal to unity, and any CONS such as $\{\varphi_m\}_m$ is a set of \emph{right singular vectors}, giving the SVD of $R$ and hence of $R^*=R^{-1}$. For any $m, n\in\D{N}$ one has from the CONS property of $\{\varphi_m\}_{m}$ that \[ \bkt{y_m}{y_n}_{\C{U}}=\bkt{R y_m}{R y_n}_{\C{R}}=\bkt{\varphi_m}{\varphi_n}_{\C{R}} =\updelta_{m,n} , \] and hence for any $p\in\C{P}$ and any $n\in\D{N}$: $\varphi_n(p) = \bkt{r(p)}{y_n}_{\C{U}} = (R y_n)(p)$, due to \refeq{eq:IV} and the definition of the CONS $\{y_m\}$. The last in \refeq{eq:VII0} follows from definition of $R^*$. \end{proof} Observe that the relations \refeq{eq:VII0} exhibit the tensorial nature of the representation mapping. Looking at the Karhunen-Lo\`eve{} representation of $r(p)$, one may see two things. One is that this tensorial decomposition is a \emph{separated} representation, as the $p$-dependence and the vector space have been separated. The other is the observation that \emph{model reductions} may be achieved by choosing only subspaces of $\C{R}$, i.e.\ a---typically finite---subset of $\{\varphi_m\}_{m}$. A good reduced order model (ROM) is hence a representation where one only needs a few terms for a good approximation. Which subsets give a good approximation will be addressed in the next \refS{correlat}. 
Furthermore, the representation of $r(p)$ in \refeq{eq:VII0} is \emph{linear} in the new `parameters' $\varphi_m$. This means that by choosing the \emph{`co-ordinates'} $\varphi_m$ on $\C{P}$, i.e.\ transforming $\C{P}\ni p \mapsto (\varphi_1(p),\dots,\varphi_m(p),\dots)\in\D{R}^{\D{N}}$, one obtains a \emph{linear / affine} representation on $\D{R}^{\D{N}}$. \section{Refinements} \label{S:refine} Often the parametric element has more structure than is resolved by saying that for each $p\in\C{P}$ one has $r(p)$ in some Hilbert space $\C{U}$. Most of the preceding had to do with alternative ways of describing the dependence on the parameter $p$. Here a short look is taken on the case when the Hilbert space $\C{U}$ has more structure, which one might want to treat separately. One big area, which we only entered slightly, are invariance properties as the invariance w.r.t.\ shifts for stationary or stochastically homogeneous random fields touched on in \refSS{kernel-fact}. We shall look only at two simple but instructive cases. \subsection{Vector fields} \label{SS:vector_fields} One of the simplest variations on the modelling in the previous sections is the refinement that the r\^ole of the Hilbert space $\C{Q}$ is taken by a tensor product $\C{W}=\C{Q} \otimes \C{E}$, where as before $\C{Q}$ is a Hilbert space of scalar functions and $\C{E}$ a \emph{finite-dimensional} inner-product (Hilbert) space \citep{kreeSoize86}. The associated linear map is then a map \begin{equation} \label{eq:vect-R} R_{\C{E}}: \C{V} \to \C{W} = \C{Q}\otimes \C{E}. \end{equation} One possible situation where this occurs is when, similar to the third example in \refSS{xmpl}, the random field $\vek{u}(x,p)$ is not scalar- but vector valued, i.e.\ $\vek{u}(x,p) \in \C{E}$. 
It could be that several correlated scalar fields have to be described which have been collected into a vector $[u_a(x,p), \dots, u_j(x,p)]$, or that $\vek{u}(x,p) \in \D{R}^n$ is actually to be interpreted as a vector in the space $\D{R}^n$, e.g.\ a velocity vector field. Without loss of generality we shall assume that $\C{E} = \D{R}^n$. This obviously also covers the case when $\C{E}$ is a space of tensors of higher degree; although for tensors of even degree we shall show a further simplification in \refSS{tensor_fields}. In this case, when $\C{V} = \C{U} \otimes \C{E}$, the parametric map is \begin{equation} \label{eq:param-vect-r} \tns{r}:\C{P}\to\C{V} = \C{U} \otimes \C{E};\quad \tns{r}(p) = \sum_k r_k(p) \vek{r}_k, \end{equation} where as before $r_k(p) \in \C{U}$ --- here in the motivating example a Hilbert space of scalar fields --- and the $\vek{r}_k \in \C{E}$. In this case the associated map $R_{\C{E}}$ is chosen \citep{kreeSoize86} to be \begin{equation} \label{eq:vect-R-ex} R_{\C{E}} = \sum_k R_k\otimes\vek{r}_k:\C{U} \ni u \mapsto \sum_k R_k(u) \vek{r}_k = \sum_k \bkt{u}{r_k(\cdot)}_{\C{U}}\vek{r}_k \in \C{Q}\otimes \C{E}. \end{equation} where the maps $R_k:\C{U}\to\C{Q}$ are just the maps from \refeq{eq:IV}, but each $R_k$ is the associated map to $r_k(p)$. The `correlation' can now be given by a bilinear form, but not with values in $\D{R}$ as in \refD{corr}, but with values in $\C{E}\otimes\C{E}$. For this we define on $\C{W}^2 = (\C{Q}\otimes\C{E})^2$ a bilinear form $[\cdot\mid\cdot]$ with values in $\C{E}\otimes\C{E}$ first on elementary tensors, \begin{equation} \label{eq:vect-biform} \forall \tns{s} = s\otimes\vek{s}, \tns{t}= \tau\otimes\vek{\tau} \in \C{W}=\C{Q}\otimes\C{E}:\quad [s\otimes\vek{s}\mid \tau\otimes\vek{\tau}] := \bkt{s}{\tau}_{\C{Q}}\, \vek{s}\otimes\vek{\tau} , \end{equation} and then extended by linearity. Concerning $\C{U}$ and $\C{Q}$ we make the same assumptions as before in \refSS{ass-lin-map} and \refS{correlat}. 
\begin{defi}[Vector-Correlation] \label{D:corr-vec} Define a densely defined map $C_{\C{E}}$ in $\C{V}=\C{U}\otimes\C{E}$ on elementary tensors as \begin{multline} \label{eq:vect-corr} \forall \tns{u} = u\otimes\vek{u}, \tns{v}=v\otimes\vek{v} \in \C{V}=\C{U}\otimes\C{E}:\\ \bkt{C_{\C{E}}\tns{u}}{\tns{v}}_{\C{U}} := \vek{u}^{\ops{T}}\,[ R_{\C{E}} u \mid R_{\C{E}}v]\, \vek{v} = \sum_{k,j} \bkt{R_k(u)}{R_j(v)}_{\C{Q}} \,(\vek{u}^{\ops{T}}\vek{r}_k)\, (\vek{r}_j^{\ops{T}}\vek{v}) \end{multline} and extend it by linearity. It may be called the \emph{`correlation'} operator. By construction it is self-adjoint and positive. \end{defi} The factorisations and decompositions then have to be of this operator. The eigenproblem on $\C{V}$ is: Find $\lambda\in\D{R}$, $\tns{v}\in\C{U}\otimes\C{E}$ with $\tns{v}= \sum_\ell v_\ell \otimes\vek{r}_\ell$, such that \begin{equation} \label{eq:vect-eig-U} C_{\C{E}}\tns{v} = \sum_{k,j,\ell} (R_j^* R_k v_\ell)\, (\vek{r}_k^{\ops{T}}\vek{r}_\ell)\, \vek{r}_j = \lambda \tns{v} = \lambda \sum_j v_j \otimes\vek{r}_j . \end{equation} The kernel $\vek{\varkappa}_{\C{E}}: \C{P}^2 \to (\C{E}\otimes\C{E})$ for the eigenvalue problem on $\C{W}=\C{Q}\otimes\C{E}$ is \begin{equation} \label{eq:vect-kernel} \vek{\varkappa}_{\C{E}}(p_1,p_2) = \sum_{k,j} \bkt{r_k(p_1)}{r_j(p_2)}_{\C{V}} \; \vek{r}_k\otimes\vek{r}_j . \end{equation} So $\vek{\varkappa}$ is matrix-valued. These are actually `correlation' matrices. 
In case the function space $\C{Q}$ has the structure of $\mrm{L}_2(\C{P})$ with measure $\varpi$ on $\C{P}$, the Fredholm eigenproblem has the following form: Find $\lambda\in\D{R}$, $\tns{s}\in\C{Q}\otimes\C{E}$ with $\tns{s}= \sum_\ell \varsigma_\ell(\cdot)\vek{r}_\ell$ such that \begin{multline} \label{eq:vect-Fredh-eig} \int_{\C{P}} \vek{\varkappa}_{\C{E}}(p_1,p_2) \left(\sum_\ell \varsigma_\ell(p_2) \vek{r}_\ell\right) \, \varpi(\mathrm{d} p_2)=\\ \sum_{k,j,\ell} \left( \int_{\C{P}} \bkt{r_k(p_1)}{r_j(p_2)}_{\C{U}}\, \varsigma_\ell(p_2) \; \varpi(\mathrm{d} p_2) \right)(\vek{r}_k^{\ops{T}}\vek{r}_\ell) \vek{r}_j = \lambda \sum_j \varsigma_j(p_1) \vek{r}_j . \end{multline} Both of these eigenproblems then combine into a generalised Karhunen-Lo\`eve{} expansion, the analogue of \refeq{eq:XIV} in \refT{1st-spec-rep}: \begin{equation} \label{eq:vect-KL} \tns{r}(p) = \sum_k r_k(p) \vek{r}_k = \sum_m \lambda_m^{1/2} \left( \sum_k \varsigma_{m,k}(p) v_{m,k} \,\vek{r}_k \right) = \sum_k \left( \sum_m \lambda_m^{1/2} \varsigma_{m,k}(p) v_{m,k} \right) \,\vek{r}_k. \end{equation} \subsection{Tensor fields} \label{SS:tensor_fields} Some situations as described in the previous \refSS{vector_fields} allow an even somewhat simpler approach. This is the case when the vector space $\C{E}$ in \refeq{eq:vect-R} consist of tensors of \emph{even} degree. Formally this means that $\C{E} = \C{F}\otimes\C{F}$ for some space of tensors $\C{F}$ of half the degree. A tensor of even degree can always be thought of as a linear map from a space of tensor of half that degree into itself. Namely, let for example $\tensor*{\tnb{A}}{_a_b_c^d^e^f}\in \C{E}$ be a tensor of \emph{even} degree---here six. 
Then this tensor acts as a linear map on the space of tensors of e.g.\ the form $\tensor*{\tnb{f}}{^b_e_f}\in\C{F}$ (the Einstein summation convention for tensor contraction is used in this symbolic index notation): \[ \tensor*{\tnb{A}}{_a_b_c^d^e^f} \tensor*{\tnb{f}}{^b_e_f} = \tensor*{\tnb{q}}{^d_a_c}. \] Often the particular application domain will dictate which space of tensors it acts on. Being a linear map, it can be represented as a \emph{matrix} $\vek{A}\in\D{R}^{n\times n}$, which we shall assume from now on. Often these linear maps / matrices have to satisfy some additional properties, for example they have to be positive definite or orthogonal. It is maybe now the opportunity to make an important remark: The representation methods which have been shown here are \emph{linear} methods, which means they work best when the object to be represented is in a \emph{linear} or \emph{vector space}, essentially free from nonlinear constraints. Consider two illustrative examples: As a first example, assume that $\vek{A}$ has to be orthogonal, then one requires $\vek{A}^{\ops{T}}\vek{A} =\vek{I}=\vek{A}\vek{A}^{\ops{T}}$, a nonlinear constraint. It is well known that the orthogonal matrices $\ops{O}(n)$ form a compact Lie group, just as the sub-group of special orthogonal matrices $\ops{SO}(n)$. Here it is important to notice that their Lie algebra $\F{o}(n) = \F{so}(n)$, the tangent space at the group identity $\vek{I}$, are the \emph{skew} symmetric matrices, a \emph{free} linear space. And each $\vek{Q}\in \ops{SO}(n)$ can be represented with the exponential map $\vek{Q} = \exp(\vek{S})$ with $\vek{S}\in\F{so}(n)$ and `$\exp(\cdot)$' the matrix exponential. This recipe, using the exponential map from the Lie algebra, which is a vector space, to its corresponding Lie group, is a very general one. One has to deal only with representations on free linear spaces, the Lie algebra, but models entities in the Lie group. 
For another example, assume that the matrix $\vek{A}\in\ops{Sym}^+(n)$ has to be symmetric positive definite (spd), as is often required when one wants to model constitutive material tensors. One defining condition is that it can be factored as $\vek{A}=\vek{G}^{\ops{T}}\vek{G}$ with invertible $\vek{G}\in\ops{GL}(n)$. Both of these are nonlinear constraints. In fact the spd matrices are an open cone, a Riemannian manifold, in the space of all symmetric matrices $\F{sym}(n)$. There are different ways how to make $\ops{Sym}^+(n)$ into a Lie group, but the important thing here is that any $A\in\ops{Sym}^+(n)$ can be represented again with the matrix exponential as $\vek{A}=\exp(\vek{H})$ with $\vek{H}\in\F{sym}(n)$, a \emph{free} linear space. Let us point out that this also important in the case $n=1$, i.e.\ when $\vek{A}$ is a positive scalar. Here $\F{sym}(1)=\D{R}$ and the map $\exp(\cdot)$ is the usual exponential. A parametric element in this special case of \refeq{eq:param-vect-r}, let us say in the example of positive definite matrices $\vek{A}(p)\in\C{Q}\otimes\ops{Sym}^+(n)$, would be represented by an element $\vek{H}(p)\in\C{Q}\otimes\F{sym}(n)$ and then exponentiated: \begin{equation} \label{eq:exp-rep-spd} \vek{H}(p) = \sum_k \varsigma_k(p) \vek{H}_k,\qquad \vek{H}(p) \mapsto \exp(\vek{H}(p)) = \vek{A}(p) . \end{equation} This way one is sure that $\vek{A}(p)\in\ops{Sym}^+(n)$ for each $p\in\C{P}$. Therefore we can now concentrate on the problem of representing $\vek{H}(p)$. Here everything is very similar to the previous \refSS{vector_fields}. The associated linear map in \refeq{eq:vect-R} remains as it is, only that now $\C{E}=\F{sym}(n)$. The parametric map would be written as \begin{equation} \label{eq:tens-r-param} \tns{R}(p) = \sum_k r_k(p)\otimes\vek{R}_k \in \C{U}\otimes \C{E},\quad \text{ with } \quad \vek{R}_k \in \F{sym}(n). 
\end{equation} The correlation corresponding to \refD{corr-vec} is now defined as \begin{defi}[Tensor-Correlation] \label{D:corr-tens} Define a densely defined map $C_{\C{E}}$ in $\C{W}=\C{U}\otimes\C{F} = \C{U}\otimes\D{R}^n$ --- observe, \emph{not} $\C{U}\otimes\C{E}=\C{U}\otimes\F{sym}(n) = \C{U}\otimes\D{R}^{n(n+1)/2}$ --- on elementary tensors through a bilinear form as \begin{multline} \label{eq:tens-corr} \forall (\tns{u} = u\otimes\vek{v}), (\tns{v}=v\otimes\vek{v}) \in \C{W}=\C{U}\otimes\C{F}:\\ \bkt{C_{\C{F}}\tns{u}}{\tns{v}}_{\C{U}} := \sum_{k,j} \bkt{R_k(u)}{R_j(v)}_{\C{Q}} \,(\vek{R}_k\vek{u})^{\ops{T}}(\vek{R}_j \vek{v}), \end{multline} and extend it by linearity. It may be called the \emph{`correlation'} operator. By construction it is self-adjoint and positive. \end{defi} The eigenproblem for $C_{\C{F}}$ corresponding to \refeq{eq:vect-eig-U} is now formulated on $\C{W}=\C{U}\otimes\C{F} = \C{U}\otimes\D{R}^n$, i.e.\ like \refeq{eq:vect-eig-U} but with $\C{E}$ replaced by $\C{F}$, otherwise everything is as before and hence will not be spelled out in detail. The eigenvectors, analogous to \refeq{eq:vect-eig-U}, will look the same as there but with $\vek{v}_{m,j}\in\C{F}$: \begin{equation} \label{eq:eig-tens-U} \C{W}\ni \tns{v}_m = \sum_j v_{m,j} \otimes \vek{v}_{m,j} \in \C{U}\otimes\C{F} . \end{equation} The kernel corresponding to \refeq{eq:vect-kernel} is \begin{equation} \label{eq:tens-kernel} \vek{\varkappa}_{\C{F}}(p_1,p_2) = \sum_{k,j} \bkt{r_k(p_1)}{r_j(p_2)}_{\C{U}} \; \vek{R}_k^{\ops{T}} \vek{R}_j , \end{equation} with eigenvectors of the form $\tns{s}_m(p) = \sum_j \varsigma_{m,j}(p) \vek{v}_{m,j} \in\C{Q}\otimes\C{F}$. So $\vek{\varkappa}_{\C{F}}$ is matrix-valued here as well. 
From these the final representation, the analogue of \refeq{eq:XIV} in \refT{1st-spec-rep} and \refeq{eq:vect-KL}, is obtained as: \begin{equation} \label{eq:tens-KL} \tns{R}(p) = \sum_k r_k(p) \vek{R}_k = \sum_m \lambda_m^{1/2} \left( \sum_j \varsigma_{m,j}(p) v_{m,j} \,\vek{v}_{m,j} \otimes \vek{v}_{m,j} \right) . \end{equation}
{ "timestamp": "2018-06-19T02:12:21", "yymm": "1806", "arxiv_id": "1806.01101", "language": "en", "url": "https://arxiv.org/abs/1806.01101" }
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{I}{}ncrementally described in the form of graph in big data applications, graph are processed in an iterative manner. For example, search services (such as Google~\cite{Google}) use PageRank algorithm to sort results, social networks (such as Facebook~\cite{Facebook}) use Clustering algorithm to analyze user communities, knowledge sharing sites (such as Wikipedia~\cite{Wikipedia}) use Named Entity Recognition algorithm to identify text information, video sites (such as Netflix~\cite{Netflix} and Anysee~\cite{Anysee}) Based on Collaborative Filtering algorithm to provide film and television recommendations. Relevant studies indicate that the computational and storage characteristics of graph computing make it difficult for data-oriented parallel programming models to provide efficient support. The lack of description of correlation between data and inefficient support for iterative calculations can result in multiple times. Dozens of times the performance loss. The urgent need for an efficient Graph Computation system has made it one of the most important issues to be solved in the field of parallel and distributed processing. Current graph system processing strategy~\cite{GraphLab, PowerSwitch, HybirdGraph, Gemini, GraphChi, NXgraph, Mosaic} still lack of efficiency which listed below: (1) High cache miss rate; (2) Large I/O access overhead; (3) Slow convergence rate of large-scale graph data. We profiled the solutions that resulted in the low performance of the existing representative graph systems.Due to the small-world phenomenon, the graph vertices will obey the power function distribution. A few graph vertices will connect the vast majority of graph vertices, while the vast majority of these vertices need to transfer state through these few vertices. 
Therefore, frequent visits and updates are needed for these core graph vertices while other vertices shortly converge, resulting in low frequency of access, thus confronting the problem mentioned above. So this paper adopts the graph partition of the dynamic increment, which will be explained explicitly in Section 3. Currently, some work has already been done for graph partition of power law graph, but most of them are based on a distributed environment, regarding the underlying computing nodes as equivalent nodes. However, most graph processing methods treat the underlying graph data as black boxes, lacking research on dynamic graph partitioning and graph processing based on graph structure. However, in the real world, the graph structure is constantly changing. With iteration, a large number of graph vertices may converge in the graph partition. Frequent accesses to a small number of active vertices may result in repetitive loading of the entire graph partition including convergence, but these convergent vertices do not require access and processing, which leads to the severe waste of memory bandwidth and cache. The existing method does not consider the structural features of each partition, and the graph algorithm requires more update times for convergence and each update requires large overhead. The graph vertex degree and its state degree have particularly critical influence on the convergence of graph vertices. Meanwhile, they also determine the processing order of the graph vertices. In the case of PowerSwitch system as shown in figure~\cite{PowerSwitch}, vertex 1 has a large degree and is more active. Theoretically, asynchronous method should be adopted to increase the convergence speed as a large number of graph vertices ($v_2,v_3,v_4,v_5$) require state transfer through active vertices. 
After updating its own data by asynchronous method, each vertex will be immediately updated through sending messages, so that the neighbors can be calculated by using the latest data. The vertices ($v_2,v_4,v_6$) have lower degree and will shortly converge, and it is of no high value to adopt asynchronous system to increase the convergence rate. The synchronization system should be adopted to reduce the cache miss rate and the time required for state updates of graph vertices. Presently, the graph structure can be diverse, and its processing performance can be more different in a uniform way. Secondly, the graph structure formed by the unconverged graph vertices are constantly changing in operation, causing large fluctuations in performance. According to the above reasons, the paper proposes graph processing methods for graph structure perception. This paper incrementally obtain the graph structure characteristics formed by unconverged graph vertices in accordance with the analysis, adopting a suitable graph processing method for each graph partition block adaptively according to the underlying operation environment (the processor load, cache miss rate, etc. in each graph partition). More specifically, the main contributions of this work are summarized as follows: \begin{itemize} \item This paper analyzes the existing problems in the state-of-the-art distributed graph processing system and points out that the current graph processing system is lacked with targeted processing in the graph structure, affecting system performance. \item This paper proposes the structure-centered graph partition and graph processing. According to the graph structure (graph vertices heat, etc.), the graph is partitioned by dynamic increment manner. The order of block partition is processed according to the graph schedule map of graph partition state degree. 
\item This paper uses the graph structure perception combined with feature analysis in operation to switch each block of graph partition to the appropriate processing method. \item The method is applied in the latest system. Experiments with five applications on five real-world graphs show that Gemini significantly outperforms existing distributed implementations, and the performance is improved by 2 times. \end{itemize} The remainder of this paper is organized as follows. Section $2$ analyzed the defects of the existing graph processing system, which puts forward the dynamic graph partitioning and adaptive graph processing optimization strategy. Section $3$ presents the dynamic graph partitioning modus, followed by adaptive graph processing method in Section $4$. Section $5$ shows experimental results. The related work is surveyed in Section $6$, and finally, Section $7$ concludes this work. \section{Background and Motivation} With the present of big data era, increasing data applications needed to be expressed in the form of vertices and edges, and processed through iterations. While state-of-the-art graph processing systems mainly concentrated on solving load balancing and communication overhead among varies of runtime environment, therefor ignoring the graph structural features of input data which have great impact on system performance. First, Assorted graph structure been processed may lead to immense performance differences with unified method; Second, Structure variations of the vertices that haven't converged in operation bring out volatile performance. \begin{figure}[h] \centering \includegraphics[scale=0.25]{fig1_PowerSwitch.pdf} \caption{Ineffective graph processing of partitions} \label{fig:PowerSwitch} \end{figure} Graph processing methods are very sensitive to the graph structure, therefore graph processing systems show quite different performance among diverse data sets. 
Most of the previous graph processing methods treats underlying data as black boxes, take neither partitioning nor processing strategy accordingly. Current graph computing model research work are mainly carried out in two aspects: one is focusing on performance optimization for a certain pattern, the other providing a same interface for two patterns(Synchronous mode and Asynchronous mode) that allows the user to choose according to the algorithmic features.Three issues has arisen due to the ignorance of above model, including low cache hit ratios, high input/output overhead, and slow convergence of large scale data. \subsection{Disadvantages of Existing Methods} To study the performance lose, We select some typical graph processing algorithms: PR (PageRank), CC (Connected Components), SSSP (Single-Source Shortest Paths), BFS (Breadth-First Search) and BC (Betweenness Centrality), along with commonly used graph data sets: amazon-2008, WikiTalk and twitter-2010 to evaluate the performance otherness among different algorithms and data sets. We set up experiments on an 8-node high-performance cluster interconnected with Infiniband EDR network (with up to 100Gbps bandwidth), each node containing two Intel Xeon E5-2670 v3 CPUs (12 cores and 30MB L3 cache per CPU) and 128 GB DRAM. We run 100 iterations on Gemini. Figure~\ref{fig:6} shows the vertex convergence of six data sets with different structures under four algorithms through iterations. Figure~\ref{fig:6} gives detailed cache miss rate for different algorithms under different data sets. As shown in Figure~\ref{fig:6}, for the same data set, The structure of subgraph that non-convergent vertices composed of change continuously, traditional methods lack of the reflection to the diversity and dynamic changes of the graph structure, but a integrated graph partitioning and processing methods. 
The above strategies may depress convergence rate of the whole algorithm: In the iteration, some less active vertices have already converged while other remain active, which keep the entire partition loaded uninterruptedly and lead to decline in cache miss rate. (See Figure~\ref{fig:3}) \subsection{Optimized strategy} We argue that inefficiency of traditional strategies mainly illustrated by following three points: (1) Static graph partition methods. Structural diversification caused by vertex convergence is not considered. After one iteration. large number of vertices in each partition may converge, several vertices remain active, which result in frequent loading of a whole cache block, eventually wasting memory bandwidth, reducing cache hit rate, and frequent IO as well. (2) Unified message processing mechanism. In terms of graph processing, The structural differences among graph partitions haven't been considered by the message passing model of existing systems, instead, they adopt a unified message processing mechanism. Some graph processing systems, such as $PowerSwitch$~\cite{PowerSwitch}, allow switching execution modes between synchronous and asynchronism, but are indistinguishably operated on all blocks. When synchronous message passing mechanism is adopted, the convergence speed of graph partition with more active vertices is limited. When asynchronous, High cache miss rate occur in partitions with less active vertices. (3) Equal treatment to all graph partitions. The partitions are all treated the same as giving the same weight, nevertheless, It is known that natural graphs subject to skewed power-law degree distribution, which means small portion of vertex connects bulk of edge. Therefore, Frequent IO and high cache miss rate will arise in the event of average vertices partition. 
For the reasons mentioned above, We present a novel graph structure-aware technique in the paper that obtains graph structure of the vertices that are not convergent by the analysis, and then incrementally partition the graph. After dynamic partition, We schedule the processing order of graph partitions, and for each iteration, adaptively choosing appropriate way to processing the graph partitions.In Summary, we have the following contributions: \begin{itemize} \item Our partition method separates the hot vertices from the cold, which endues the former with frequent update and significant change a higher priority, and reach the convergence faster, eventually reduce the average number of updates that an input graph needs to achieve converge. \item After The graph partition s with dramatically drop-off in active vertices will be repartitioned after specific times of iterations. This method, on the one hand, takes the load balance problem caused by the change of graph structure into consideration, on the other hand, controls the computation overhead caused by the migration of vertices during dynamic graph partition. \item We put high activity vertices with frequent updates into the same cache, for the vertices will be loaded in memory at the same time. By doing this, we reduce the overhead caused by inactive vertices and their loading times as well. \end{itemize} \section{Dynamic Graph Partition} Due to the small-world phenomenon, the graph vertices will obey the power function distribution. A few graph vertices will connect with the vast majority of graph vertices, while the vast majority of these vertices need to transfer state through these few vertices. Therefore, frequent visits and updates are needed for these core graph vertices while other vertices rapidly reaching convergence, resulting in low frequency of access, thus confronting the problem mentioned above. 
Consequently, according to changes in graph structure caused by the convergence of some vertices during iteration. In this paper, partitions will be redivided, the less active vertices will be moved together to decrease the calculation frequency by graph partition manner of dynamic increment, thereby reducing the I/O overhead caused by active vertices and lowering the cache miss rate. \begin{figure}[h] \centering \includegraphics[scale=0.18]{example_graph.png} \caption{Example graph} \label{fig:6} \end{figure} \subsection{Active Degree and State Degree} Before getting to details, let us first give the targeted graph processing concepts. As the graph data increases dramatically, the researchers divided the graph data into several partitions and assigned the closely-related graph vertices to the same partition in order to accelerate convergence of the graph vertices. The input graph data is represented by $G = (V, E)$. While $V$ represents all the vertices and $E$ represents the edges of all the connected vertices. The current graph processing system stores the updated messages in the vertices by default, and the edges exist as fixed values. Therefore, the vertex degree is regarded as a fixed value in the computing. \begin{table} \begin{center} \setlength\tabcolsep{1pt} \begin{tabular}{cc} \toprule Symbol&Definition\\ \midrule $D_i (v_i)$&In-degree of vertex $i$\\ $D_o (v_i)$&Out-degree of vertex $i$\\ $D(v_i)$&Degree function of vertex $i$\\ $D_{Max}(V)$&The maximum degree of all vertices\\ $SD(v_i)$&State degree of vertex $i$\\ $AD(v_i)$&Active degree of vertex $i$\\ $I_1$&Iteration that re-partitioning the partitions\\ $I_2$&Iteration that schedule cold partitions to compute\\ $T_1$&Threshold of vertices active degree\\ $T_2$&Threshold of vertices convergence\\ \bottomrule \end{tabular} \caption{Definitions of symbols} \label{fig:3} \end{center} \end{table} {\bf Degree}\quad In this paper, $D_i (v_i)$ is used to represent in-degree of vertex $i$. 
The larger the in-degree, the more easily the vertex is affected by the neighbors. Which means, only when most neighbor vertex converge can the vertex tend to converge. Therefore, in the practical computation, vertices with large in-degree should be delayed to reduce the number of unnecessary updates. $D_o (v_i)$ indicates out-degree of vertex $i$. The greater the out-degree, more vertices will be affected by its update state. That indicates that only when the vertex converges can its neighbors tend to converge. Thence, in the practical computing, vertices with large out-degree should be processed in priority to accelerate the entire graph convergence. Regarding which mentioned above, the paper puts forward the concept of vertex power function, which is used to quantify the static structure features of graph vertices. Its formula is as follows: \begin{align} D(v_i) = D_o(v_i) + \alpha*D_i(v_i) \end{align} The parameter $\alpha$ $($0.5$<\alpha<$1$)$ is an adjustable parameter, which is dynamically adjusted according to different data sets in the actual computation in order to achieve optimal performance. It can be a challenge to select the condition to match the value when computing heat value. The basis for selection is: value $α$ is adjusted according to the structure of input graph data. In the case of road network, a data set, each vertex has even in-edge and out-edge distributions and most graphs have similar vertex activity. The entire graph is of even distribution with value a trending to 0.5. However, in the case of a data set focused on by Weibo users, a few celebrities will have a large number of followers while most people have few followers, which leads to data skew. It amplifies the influence of vertex out-edge on the convergence of the entire graph, so value $\alpha$ will trend to 1 accordingly. {\bf Active Degree}\quad The vertex activity depends not only on its degree function, but also on its neighbor vertex structure. 
In order to predict the initial activity of each vertex in an input graph, the graph data is partitioned so as to guarantee load balancing while improving the computing efficiency of subsequent iterations. This paper quantifies these structural features as the \emph{active degree}, which serves as the reference factor for the initial partitioning of the graph. It relies on the in-degree $D_i(v_i)$ and out-degree $D_o(v_i)$ of a vertex as well as the degrees $D(v_k)$ of its neighbors. To this end, we adopt the hot--cold notion of HotGraph and define the active degree $AD(v_i)$ as: \begin{equation} AD(v_i) = D(v_i) + \frac{\sum_{v_k \in N(v_i)} D(v_k)}{\sqrt{D_{Max}(V)} * D(v_i)} \end{equation} \begin{figure*}[!tb] \centering \includegraphics[scale=0.15]{Initial_graph_partition.png} \caption{Initial Chunk-based partitioning} \label{fig:5} \end{figure*} Here $N(v_i)$ is the set of neighbors of $v_i$ and $D_{Max}(V)$ is the maximum degree over all vertices. We thereby extend the HotGraph design with a fine-grained quantification of the graph structure. The major difference is that in-degree and out-degree are decoupled: unlike in HotGraph, $D(v_i)$ here is a degree function that takes both into account, which extends the approach to general directed graphs. $T_1$ is the active degree threshold; following the threshold $T$ in HotGraph, it is determined by a user-defined sample size and hot-vertex ratio. For example, if the graph has $|V| = 10000$ vertices, the user-defined sample size is 1000, and the hot-vertex ratio $R$ is 0.1, then the threshold is $T_1 = AD(v_{100})$, i.e., the active degree of the 100th vertex in the sample, sorted in descending order.
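A minimal sketch of the active-degree computation and the sample-based threshold (identifiers are ours; we assume, as stated above, that the sum in the numerator ranges over the neighbors of $v_i$):

```python
import math

def active_degree(D, neighbors):
    """AD(v) = D(v) + (sum of neighbours' D) / (sqrt(D_max) * D(v)).
    `D` maps vertex -> degree-function value; `neighbors` maps
    vertex -> iterable of adjacent vertices."""
    d_max = max(D.values())
    ad = {}
    for v, dv in D.items():
        s = sum(D[k] for k in neighbors.get(v, ()))
        ad[v] = dv + s / (math.sqrt(d_max) * dv)
    return ad

def threshold_from_sample(ad, sample_size, hot_ratio):
    """T_1 is the AD value of the (hot_ratio * sample_size)-th vertex
    of a sample sorted in descending order of AD."""
    sample = sorted(ad.values(), reverse=True)[:sample_size]
    return sample[int(hot_ratio * sample_size) - 1]
```

With the paper's numbers (sample size 1000, ratio 0.1), `threshold_from_sample` would return the active degree of the 100th sampled vertex.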
Vertices whose active degree $AD(v_i)$ is greater than $T_1$ are marked as hot vertices and stored in the hot partition; vertices whose active degree is smaller are marked as cold vertices and stored in the cold partition. Physically, the hot and cold partitions are composed of cache blocks, so the hot and cold vertices are stored across multiple cache blocks. For instance, suppose vertices with active degree greater than 50 are hot and number 200, the remaining 2000 vertices are cold, and one cache block stores 100 vertices; then there are 2 hot partitions and 20 cold partitions. In particular, vertices of degree 0 neither affect nor are affected by other vertices, and converge in a single iteration. This paper places them together in a region of contiguous addresses, called the \emph{dead partition}. {\bf State Degree}\quad The active degree $AD(v_i)$ evaluates the activity of vertices based on the static structure of the input graph. As vertices converge during the iteration process, their activity changes. The \emph{state degree} $SD(v_i)$ therefore represents the change of a vertex's activity during iteration: the higher the state degree, the more the vertex's state has changed, and the more influence it exerts on its neighbors -- only when such a vertex converges can its neighbors converge. Vertices with low state degree, on the other hand, continue to be updated. The definition of the state degree and the way it is computed differ between algorithms; we elaborate on the state degree formulas for common graph algorithms in Section 3.3. The partition state degree $PSD(j)$ is the average of the accumulated state degrees of all vertices in partition $j$.
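The hot/cold split into cache blocks described above can be sketched as follows (an illustrative sketch; function names are ours, and the numbers follow the worked example in the text):

```python
def build_partitions(ad, t1, block_size):
    """Split vertices into hot and cold partitions of `block_size`
    vertices each, ordered by descending active degree.  Degree-0
    vertices are assumed to have been filtered into the dead
    partition beforehand."""
    hot = sorted((v for v, a in ad.items() if a > t1), key=lambda v: -ad[v])
    cold = sorted((v for v, a in ad.items() if a <= t1), key=lambda v: -ad[v])
    chunk = lambda vs: [vs[i:i + block_size]
                        for i in range(0, len(vs), block_size)]
    return chunk(hot), chunk(cold)

# 200 hot vertices and 2000 cold vertices, 100 vertices per cache block
ad = {v: 60.0 for v in range(200)}
ad.update({v: 10.0 for v in range(200, 2200)})
hot_parts, cold_parts = build_partitions(ad, t1=50.0, block_size=100)
# -> 2 hot partitions and 20 cold partitions, as in the example
```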
Because the separation is performed according to the active degree, hot vertices tend to have high state degree and cold vertices low state degree; this avoids the situation where a partition contains many low-state-degree vertices and a few high-state-degree vertices that inflate the partition state degree. It is therefore reasonable to regard the average of the accumulated vertex state degrees in a partition as the state degree of the whole partition. The vertex state degree $SD(v_i)$ and the partition state degree $PSD(j)$ are used to evaluate the activity of vertices and partitions, respectively. So that the whole graph can converge rapidly, and so that high-state-degree vertices can be loaded together to reduce cache misses, vertices and partitions with high state degree are processed with priority. The active degree $AD(v_i)$ and the state degree $SD(v_i)$ thus both play an important role in vertex separation. Sections 3.2 and 3.3 give details on how the input graph is initially partitioned and re-partitioned based on vertex activity. \subsection{Activity-based Partitioning} To reduce the average number of updates an input graph needs to converge, this paper proposes a structure-aware graph partitioning strategy that is general and whose benefit grows with the scale of the data. First, the active degree $AD(v_i)$ of each vertex is calculated from its in-degree and out-degree; the vertices are then sorted in descending order of $AD(v_i)$ and separated into partitions in that order. The size of a partition is an exact multiple of the cache-block size. This separation is performed only once, when the data is loaded, so the vertices are reordered only once in the whole run and the expense of the initial partitioning is amortized over all iterations.
This not only helps to improve the cache hit rate, but also reduces the amount of computation; the resulting performance improvement far outweighs the extra cost. Moreover, for large-scale input data, the relative cost per iteration becomes smaller, which gives the system good scalability. During the initial iteration, vertices with state degree 0 are identified and placed into the dead partition. We create a first vertex-degree table that stores each vertex's in-degree and out-degree, and a second vertex-degree table that stores the positions of its neighbors. A first and a second vertex-value table store the values of the current and the previous computation, respectively; from these two tables the vertex state degrees and partition state degrees can be derived. To store partition IDs and partition state degrees we create two further tables, the ID table and the partition state degree table. Vertices are then separated into hot and cold partitions based on their activity. As soon as all vertices have been marked and assigned to their partitions, the partition state degree table is initialized and the initial partitioning is output.
\begin{algorithm} \begin{footnotesize} \caption{Initial Activity-based Partitioning} \label{algorithm:initial-partition} \begin{algorithmic}[1] \Procedure{Activity\_Based\_Partition}{\emph{v$_i$, D$_o$(v$_i$), D$_i$(v$_i$)}} \State expected chunk size $\leftarrow$ remaining vertices $/$ remaining partitions \While{ $V$ has an unvisited vertex v$_i$} \If{ \emph{D$_i$(v$_i$) = $0$ and D$_o$(v$_i$) = $0$}} \State \emph{P$_{dead}$} $\leftarrow$ \emph{v$_i$} \ElsIf{ \emph{AD(v$_i$) $\geqslant$ T$_1$}} \State $hot\ edges$ $\leftarrow$ ${hot\ edges} \cup \emph{D$_o$(v$_i$)}$ \If{ ${hot\ edges}$ $>$ expected chunk size} \State $hot\ partitions$ $\leftarrow$ $hot\ partitions$ $+\ 1$ \EndIf \State \emph{P$_{hot}$} $\leftarrow$ \emph{v$_i$} \Else \State $cold\ edges$ $\leftarrow$ ${cold\ edges} \cup \emph{D$_o$(v$_i$)}$ \If{ ${cold\ edges}$ $>$ expected chunk size} \State $cold\ partitions$ $\leftarrow$ $cold\ partitions$ $+\ 1$ \EndIf \State \emph{P$_{cold}$} $\leftarrow$ \emph{v$_i$} \EndIf \EndWhile \EndProcedure \end{algorithmic} \end{footnotesize} \end{algorithm} Figure~5 gives an example of chunk-based partitioning, showing the vertex sets on three nodes with their corresponding dense-mode edge sets. Knowing the active degree of every vertex and sorting in descending order of $AD$, we separate the vertices into two groups of partitions, $P_{cold}$ and $P_{hot}$, each made up of equally sized cache blocks. To make reads convenient, the cache-block size is designed as an integral multiple of the cache-page size. A vertex of state degree 0 neither receives messages from its neighbors nor transfers updates to them, and a single iteration suffices for it to converge.
For this reason, we first filter out the 0-state-degree vertices and handle them separately: they are identified when the vertex state degrees are computed, stored apart, and computed with priority during the adaptive scheduling period. Once these vertices have completed their single iteration, no further action is required, which reduces the cost of iteration. \begin{figure*}[!tb] \centering \includegraphics[scale=0.28]{Dynamic_graph_partition_PG.png} \caption{Dynamic Structure-based graph partition for PageRank} \label{fig:7} \end{figure*} \begin{figure}[h] \centering \includegraphics[scale=0.26]{cold_and_hot.png} \caption{The comparison of cold partition and hot partition} \label{fig:6} \end{figure} Because the edge data and the in-/out-degrees are constant, we can preprocess the input data and distinguish hot vertices from cold vertices using the degree function, which increases the cache hit rate, decreases I/O cost, and also reduces the number of iterations. During the iteration process the common trend is for hot vertices to become cold; only a few cold vertices are made hot by their neighbors. Heat-based graph partitioning is therefore essential to system performance. However, as vertices converge during computation, the graph structure changes, so the initial partitioning can no longer guarantee that the most active vertices are computed first during iteration. Because of this, dynamic incremental graph partitioning is proposed: the initial partitions are redivided according to the state of the vertices. \subsection{Structure-based Partitioning} After a certain number of iterations, the number of hot vertices plummets as hot vertices keep converging. To cut this cost, the hot partitions are rescheduled, which does not itself incur much expense: only markedly changed partitions need to be updated, and the time complexity is $O(n)$.
Every $I_1$ iterations the accumulated vertex state degrees are gathered to obtain the average state degree of every hot and cold partition, and to determine whether any hot partition has a value smaller than the threshold $T_1$ and whether any cold partition has a value larger than $T_1$. A hot partition with decreasing activity is re-marked as cold and, similarly, a cold partition with increasing activity is re-marked as hot. Because the previous section partitions by active degree, the state degree of hot vertices is generally high while that of cold vertices is low, so the situation cannot arise in which many low-state-degree vertices share a partition with a few high-state-degree vertices that raise the state degree of the whole partition. It is therefore reasonable to use the average vertex state degree of a partition as the state degree of the entire partition. For some graph algorithms such as $PageRank$, however, the graph data shows an overall tendency from a dense to a sparse state, and a cold partition can never become hot again. To reduce the memory footprint in this case, a single border variable, the \emph{barrier}, is maintained to separate the cold from the hot vertices; as hot blocks gradually become cold, the barrier moves accordingly. Compared with the universal method above, which requires maintaining a table of tag variables, this method only needs to maintain a single $Vertex\_ID$ variable. For graph algorithms such as $SSSP$, by contrast, the graph data first tends to become dense and then sparse: cold vertices first become hot and then converge, so a single barrier variable cannot represent this tendency, and the universal method proposed first must be used.
\begin{algorithm} \begin{footnotesize} \caption{Dynamic Structure-based Partitioning} \label{algorithm:dynamic-partition} \begin{algorithmic}[1] \Function{Process\_Vertex}{\emph{v$_i$}, \emph{curr[ ]}, \emph{next[ ]}} \State \emph{\#Pragma omp parallel reduction($+$:reducer)} \While{ not all \emph{active vertices} have been visited} \State \emph{local\_reducer} $\leftarrow$ \emph{local\_reducer} $+$ \emph{Process(v$_i$, curr[v$_i$], next[v$_i$])} \State \emph{v$_i++$} \EndWhile \State \emph{reducer} $\leftarrow$ \emph{reducer} $+$ \emph{local\_reducer} \State \emph{end Pragma} \State \emph{global\_reducer} $\leftarrow$ \emph{global\_reducer} $+$ \emph{reducer} \State \Return{\emph{global\_reducer}} \EndFunction \State \Procedure{Structure\_Based\_Partition}{\emph{barrier}, \emph{curr$[$ $]$}, \emph{next$[$ $]$}} \If{\emph{iteration} == \emph{I$_1$}} \For{each \emph{Partition i} in \emph{P$_{hot}$} and \emph{P$_{cold}$}} \For{\emph{v$_i$} belongs to \emph{Partition i}} \State Process\_Vertex(\emph{v$_i$}, \emph{curr[ ]}, \emph{next[ ]}) \EndFor \If{\emph{SD(P$_i$)} \emph{$<$} \emph{T$_1$} and \emph{Partition i} $\in$ \emph{P$_{hot}$}} \State \emph{P$_{cold}$} $\leftarrow$ \emph{Partition i} \State $barrier$ $\leftarrow$ $i$ \EndIf \If{\emph{SD(P$_i$)} \emph{$>=$} \emph{T$_1$} and \emph{Partition i} $\in$ \emph{P$_{cold}$}} \State \emph{P$_{hot}$} $\leftarrow$ \emph{Partition i} \EndIf \EndFor \EndIf \EndProcedure \end{algorithmic} \end{footnotesize} \end{algorithm} Figure~\ref{fig:7} shows the process of dynamic graph partitioning for $PageRank$. From the accumulated state degree of each vertex, the average state degree of every hot partition is calculated; the partitions whose average state degree is less than $T_1$ are found, and the barrier is moved to the ID of the first vertex of the first such partition. This method repartitions only the hot vertices and has no effect on the cold ones. As the computation proceeds there are fewer and fewer hot vertices, so the scale of the rescheduling obviously shrinks.
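The barrier maintenance for monotone workloads such as $PageRank$ can be sketched as follows (an illustrative sketch; identifiers and the left-to-right partition layout are our assumptions):

```python
def advance_barrier(partition_sd, barrier, t1):
    """Partitions are ordered by descending activity: indices
    [0, barrier) are hot, [barrier, n) are cold.  For workloads where
    hot partitions only ever become cold (e.g. PageRank), a single
    barrier index replaces a per-partition tag table: move it left
    past every trailing hot partition whose average state degree has
    dropped below the threshold T_1."""
    while barrier > 0 and partition_sd[barrier - 1] < t1:
        barrier -= 1
    return barrier

# four partitions; the last two hot ones have cooled below T_1 = 1.0
barrier = advance_barrier([5.0, 3.0, 0.5, 0.2], barrier=4, t1=1.0)
# -> barrier == 2: partitions 0 and 1 remain hot
```

For algorithms like $SSSP$, where cold partitions can become hot again, this single-variable trick does not apply and the tag-table method is needed instead.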
When $PageRank$ is computed, rank is divided and propagated along edges, so the result depends on the in-edges and out-edges of a vertex; the in-degree and out-degree directly affect convergence. Hence the difference in $Rank$ can be used to evaluate the activity of a vertex: for $PageRank$, the state degree is defined as the accumulation of the differences between successive results. For example, assume the first result is the default 1, with no accumulation yet. If the second result is 5, the difference is 4 and the accumulation is 4. If the third result is 7, the difference between 7 and the previous result 5 is 2, so the accumulation becomes $4+2=6$. \begin{equation} \Delta_{PG} = \sum \left|Rank_{curr} - Rank_{next}\right| \end{equation} Figure~\ref{fig:8} shows the process of dynamic graph partitioning for $SSSP$. From the accumulated state degree of each vertex, the average state degree of each partition is calculated; a partition whose average state degree is at least $T_1$ is marked as hot, otherwise it is marked as cold. For $SSSP$ the method is the same, but because the shortest-path computation accumulates distances, the difference of successive results is not a suitable activity measure. Instead, the smaller of the edge data of two successive results is used, and its accumulation decides whether the activity of the vertex has changed. Consequently, for $SSSP$ the state degree is defined as the accumulation of the smaller of the current and the previous result. The analogous definition applies to $CC$, which takes a maximum clique: there the state degree is the accumulation of the larger of the current and the previous result. We omit a separate example for $CC$ here.
\begin{equation} \Delta_{SSSP} = \sum \min\{Edge\_data_{curr} , Edge\_data_{next}\} \end{equation} \begin{figure*}[!tb] \centering \includegraphics[scale=0.28]{Dynamic_graph_partition_SSSP.png} \caption{Dynamic Structure-based graph partition for SSSP} \label{fig:8} \end{figure*} The number of dynamic structure-based repartitionings is positively correlated with the number of iterations; for this reason, as the iteration count grows, the interval between reschedulings increases. Under the condition that the algorithm still produces correct results and extra expense is avoided, this improves the rate of convergence and decreases the cache miss rate. \section{Adaptive Partition Scheduling} In the adaptive scheduling period, since vertices converge at different rates and differ in state degree, the vertices whose accumulated state degree varies most frequently and most strongly are computed with priority, which increases the convergence rate of the graph vertices and shortens the running time of the algorithm. Every $I_1$ iterations the hot partitions are repartitioned. If hot partitions remain after the repartitioning, hot and cold partitions are scheduled adaptively for computation; if the whole graph has not yet converged and no hot partition remains, the cold partitions with the highest state degree are computed. While hot partitions remain, in each iteration the $n$ cold partitions and the $m$ hot partitions with the highest state degree are processed, where $m+n$ equals the number of CPUs; for example, with 10 CPUs, $m+n$ is 10. On an $I_2$ iteration the values of $m$ and $n$ are decided by the algorithm, usually subject to the condition $m>n$.
That is, on an $I_2$ iteration the $m$ highest-state-degree cache blocks are chosen from the hot partitions and the $n$ highest-state-degree cache blocks from the cold partitions. Otherwise, only the highest-state-degree hot partitions are used, so $n$ equals 0 and $m$ equals the number of CPUs, i.e., 10 in the example. Within a partition, vertices are stored ordered by ID, so computing a specific partition means reading it in ascending ID order. The sum of the state degrees of all partitions is computed from the values stored in the partition state degree table. The smaller the state degree, the closer the vertices are to convergence. When the sum of the partition state degrees is smaller than a minimum value $T_2$, the entire graph is regarded as converged; the computation then ends and its result is output. Preferably, the specific value of the convergence threshold is defined by the user, with a default value of 0.000001. In a preferred mode of execution, the graph processing method further includes the step of judging whether the current iteration is the initial one. In the first iteration, the block with the highest state degree in the hot partition is scheduled for computation after the dead partition has been computed, and after the iterative computation the convergence of the entire graph is determined from the sum of the state degrees of all partitions. If the entire graph has not converged, the subsequent iteration proceeds.
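The state-degree bookkeeping and the convergence test described above can be sketched as follows (a minimal sketch; function names are ours, and the example numbers follow the $PageRank$ illustration earlier):

```python
def accumulate_state_degree(prev, curr, acc):
    """Delta_PG: accumulate |Rank difference| per vertex across
    successive iterations, as in the worked example where results
    1 -> 5 -> 7 give an accumulation of 4 + 2 = 6."""
    for v in curr:
        acc[v] = acc.get(v, 0.0) + abs(curr[v] - prev[v])
    return acc

def graph_converged(partition_sd, t2=1e-6):
    """The entire graph is regarded as converged once the sum of all
    partition state degrees falls below the threshold T_2."""
    return sum(partition_sd.values()) < t2

acc = accumulate_state_degree({0: 1.0}, {0: 5.0}, {})
acc = accumulate_state_degree({0: 5.0}, {0: 7.0}, acc)  # acc[0] == 6.0
```

The default `t2` mirrors the paper's fixed convergence threshold of 0.000001.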
\begin{algorithm} \begin{footnotesize} \caption{Adaptive Partition Scheduling} \label{algorithm:adaptive-scheduling} \begin{algorithmic}[1] \Function{Process\_Active}{\emph{m, n}, Partition \emph{P$_{hot}$, P$_{cold}$}} \State \emph{threads} = \emph{numa\_num\_configured\_cpus()} \For{each Partition \emph{p}} \For{Vertex \emph{v$_i$} belongs to Partition \emph{p}} \State SD(p) $\leftarrow$ SD(p) + Process\_Vertices(Process($v_i$), V) \EndFor \EndFor \If{ \emph{P$_{hot}$} still remains} \If{iteration == I$_2$} \State \emph{active vertices} $\leftarrow$ \emph{m} * \emph{P$_{hot}$} + \emph{n} * \emph{P$_{cold}$} \Else \State \emph{active vertices} $\leftarrow$ \emph{threads} * \emph{P$_{hot}$} \EndIf \EndIf \If{ only \emph{P$_{cold}$} remains} \State \emph{active vertices} $\leftarrow$ \emph{threads} * \emph{P$_{cold}$} \EndIf \State \Return{\emph{active vertices}} \EndFunction \State \Procedure{Scheduling}{\emph{active vertices}} \For{each untraversed Partition \emph{p}} \State Send \emph{edge} in Partition \emph{p} to other nodes \EndFor \For{edges in Partition \emph{p} not all received} \State Receive \emph{edge} in Partition \emph{p} from other nodes \EndFor \If{ \emph{P$_{hot}$}} \State master $\leftarrow$ mirror vertex update \EndIf \If{ \emph{P$_{cold}$}} \State mirror $\leftarrow$ master vertex update \EndIf \EndProcedure \end{algorithmic} \end{footnotesize} \end{algorithm} \begin{figure}[tb] \centering \includegraphics[scale=0.36]{Adaptively_schedule.png} \caption{Adaptive Partition Scheduling} \label{fig:9} \end{figure} One of the challenges of adaptive scheduling is to ensure that the hot partitions are computed sufficiently. When the number of hot partitions is greater than the number of machine threads, a single iteration cannot compute all hot partitions. The scheduler must therefore guarantee that, as the activity of a hot partition decreases, the hotter partitions can still be computed.
It should be noted that repeatedly computing a hot partition until its activity declines to that of a cold partition is a long process: due to the complexity of its structure, a hot partition requires more computation to converge than a cold one, even as the number of computations increases. Once all hot partitions tend to converge, the entire graph tends to converge and the graph algorithm is close to the end of its computation. In addition, regarding the convergence threshold: (1) Whether the algorithm actually converges, and whether the computation is complete, has nothing to do with the convergence threshold. $T_2$ is merely a value for judging whether the current algorithm has converged, since the completion time cannot be known while the algorithm is running; the state degree is therefore sampled at regular intervals and compared with $T_2$. Consequently, it is not the case that the smaller $T_2$ is set, the faster the algorithm converges. (2) Setting different convergence thresholds for different algorithms or application cases makes little sense, because the state degree of every algorithm reaches 0 at convergence, and the stretch from 0.000001 down to 0 takes a relatively long time. To improve performance, a state degree of 0.000001 is considered convergence; the threshold is fixed at 0.000001, and the algorithm's result stays within the tolerance range. \section{Related Work} Previous graph processing systems, whether distributed~\cite{Pregel, X-Pregel, GraphLab, PowerGraph, Giraph, PowerSwitch, HybirdGraph, Gemini} or on a single multi-core platform~\cite{GraphChi, TurboGraph, VENUS, GridGraph, NXgraph, Mosaic}, have done plenty of work on efficient graph processing, such as load balancing and reducing communication overhead.
Most of these approaches treat the graph data as a black box: the graph is managed as a combination of vertices and edges (i.e., vertex-centric or edge-centric) rather than as a logical structure, and the difference between these views produces performance variability. In this section we give a brief summary of related categories of prior work. \textbf{Distributed Systems:} Pregel~\cite{Pregel} divides the graph by hashing the vertex ID, which ensures load balancing. However, Pregel uses a message-passing paradigm, so the number of messages to process becomes huge when vertices have many adjacent vertices; moreover, Pregel performs poorly on power-law graphs and only allows global synchronization. X-Pregel~\cite{X-Pregel} optimizes Pregel's messaging mechanism by reducing the number of messages that need to be delivered in each iteration. Giraph~\cite{Giraph} adds more features compared to Pregel, including master computation and out-of-core computation, but the poor locality of its data accesses limits its effectiveness. GraphLab~\cite{GraphLab} follows the vertex-centric GAS model, but its partitions are still obtained by random division; moreover, its shared-memory storage strategy may become a performance bottleneck for large graphs. PowerGraph~\cite{PowerGraph} works well on power-law graphs, but like GraphLab it includes no special optimizations for speeding up I/O access. PowerSwitch~\cite{PowerSwitch} proposes an adaptive graph processing method based on PowerGraph, adaptively switching between synchronous and asynchronous processing modes according to the number of vertices processed per unit time to achieve the best performance. However, it treats all vertices of the graph alike and does not handle convergence on a per-vertex basis during iteration.
PREDIcT~\cite{PREDIcT} proposes an experimental methodology for predicting the runtime of iterative algorithms, which optimizes cluster resource allocation among multiple iterative workloads. Maiter~\cite{Maiter} proposes delta-based accumulative iterative computation, which reduces costs and accelerates the computation. HybridGraph~\cite{HybirdGraph} puts forward an algorithm that adaptively switches between pull and push, focusing on performing graph analysis on a cluster I/O-efficiently. Compared to GraphLab, PowerGraph employs a vertex-cut mechanism that reduces the network cost of sending requests and transferring messages, at the expense of the space cost of vertex replication. GrapH~\cite{GrapH} focuses on minimizing the overall communication cost by using an adaptive edge-migration strategy that avoids frequent communication over expensive network links. Gemini~\cite{Gemini} is a computation-centric distributed graph processing system that uses a hybrid pull/push approach to propagate state updates and messages between graph vertices. \textbf{Single-machine Systems:} GraphChi~\cite{GraphChi} is a vertex-centric graph processing system that improves I/O access efficiency with its Parallel Sliding Windows processing strategy. However, the outgoing edges of all vertices have to be loaded into memory before computation, resulting in unnecessary transfers of disk data; in addition, all memory blocks have to be scanned when accessing neighboring vertices, which makes graph traversal inefficient. TurboGraph~\cite{TurboGraph} proposes a Pin-And-Slide (PAS) model to solve this problem; PAS processes local graph data without delay, but it only applies to certain parallel algorithms. Compared with the two systems above, VENUS~\cite{VENUS} supports nearly every algorithm and enables streamlined processing, performing computation while the data is streaming in; moreover, it uses a fixed buffer to cache the v-shard, which reduces random I/O.
GridGraph~\cite{GridGraph} uses a two-level hierarchical partitioning scheme to reduce the amount of data transfer, enable streamlined disk access, and maintain locality; but with its TurboGraph-like updating strategy it requires more disk data transfer, and without sorted edges it cannot fully utilize the parallelism of multi-threaded CPUs. NXgraph~\cite{NXgraph} proposes the Destination-Sorted Sub-Shard (DSSS) structure to store the graph, with three updating strategies -- SPU, DPU and MPU -- among which it adaptively chooses a suitable one to fully utilize the memory space and reduce the amount of data transfer. It achieves higher locality than the v-shards of VENUS~\cite{VENUS}, reduces the amount of data transfer, and enables a streamlined disk access pattern. Mosaic~\cite{Mosaic} combines fast host processors for concentrated memory-intensive operations with coprocessors for compute- and I/O-intensive components. Traditional graph systems, whether shared-memory or distributed, fail to take into account the variability of the graph structure, which manifests itself through the continual convergence of vertices during iteration and plays a significant role in program optimization. In this paper we present a novel graph structure-aware technique that provides adaptive graph partitioning and processing scheduling according to the variation of the graph structure. Our strategy reduces the overhead caused by inactive vertices and their loading time, and speeds up the convergence rate. \section{Conclusion} In this paper we adopted a structure-centric distributed graph processing method: through graph-structure perception, the structural features of unconverged vertices are obtained incrementally and analyzed, and suitable graph processing methods are scheduled adaptively.
Our development reveals that (1) dynamic incremental partitioning based on vertex degree and state degree can significantly reduce I/O resource overhead and the cache miss rate, and (2) the computation and communication overhead of less active vertices can be reduced by prioritizing graph partitions and scheduling them in a predetermined order, which also accelerates the convergence of the algorithms. Our experimental results on a variety of data sets with different graph structural features demonstrate the efficiency, effectiveness and scalability of our approach in comparison to state-of-the-art graph processing systems. \bibliographystyle{IEEEtran}
\section{Introduction and statement of the main result} \label{sec : Introduction and the statement of the main results } \subsection{Introduction} Let $n$ and $k$ be integers such that $n \geq k \geq 1$. If $\operatorname{Mat}_{k,n}$ denotes the space of all real $k\times n$ matrices of rank $k$, then the real Grassmannian $\operatorname{G}_k(\mathbb{R}^n)$ --- the space of all $k$-dimensional linear subspaces of $\mathbb{R}^n$ --- can be defined as the orbit space $\operatorname{G}_k(\mathbb{R}^n) = \operatorname{GL}_k \backslash \operatorname{Mat}_{k,n}$. The totally nonnegative part of the Grassmannian is defined quite analogously. \begin{definition}[Postnikov {\cite[Sec.\,3]{Postnikov2006}}] Let $n \geq k \geq 1$ be integers, let $\operatorname{Mat}_{k,n}^{\ge0}$ be the space of all real $k \times n$ matrices of rank $k$ all whose maximal minors are nonnegative, and let $\operatorname{GL}_k^+$ denote the group of all real $k\times k$ matrices with positive determinant, which acts freely on $\operatorname{Mat}_{k,n}^{\ge0}$ by matrix multiplication from the left. The \emph{totally nonnegative Grassmannian} $\operatorname{G}_k^{\ge0}(\mathbb{R}^n)$ is the orbit space $\operatorname{G}_k^{\ge0}(\mathbb{R}^n) = \operatorname{GL}_k^+ \backslash \operatorname{Mat}_{k,n}^{\ge0}$. \end{definition} The totally nonnegative Grassmannian was introduced and studied by Postnikov in 2006 \cite[Sec.\,3]{Postnikov2006}, building on works by Lusztig \cite{Lusztig1994} and by Fomin \& Zelevinsky \cite{Fomin1999}. Subsequently, the geometric and combinatorial properties of the totally nonnegative Grassmannian were studied intensively. Rietsch \& Williams showed that the totally nonnegative Grassmannian is contractible \cite[Thm.\,1.1]{Rietsch2010}; an earlier argument by Lusztig \cite[Sec.\,4.4]{Lusztig1998} can also be adapted to prove the same. 
Galashin, Karp \& Lam \cite[Thm.\,1.1]{Galashin2017} proved that $\operatorname{G}_k^{\ge0}(\mathbb{R}^n)$ is indeed homeomorphic to a closed $k(n-k)$-dimensional ball. In 2014, the physicists Arkani-Hamed \& Trnka \cite[Sec.\,9]{Arkani-Hamed2014} introduced the amplituhedra as certain images of the totally nonnegative Grassmannians. They conjectured that their geometry describes scattering amplitudes in some quantum field theories. For a gentle introduction to amplituhedra in physics and mathematics consult \cite{Bourjaily2018}. Shortly after, Lam introduced Grassmann polytopes \cite{Lam2016}, which generalize amplituhedra. Postnikov \cite[Def.\,3.2, Thm.\,3.5]{Postnikov2006} defined a CW structure on the totally nonnegative Grassmannian $\operatorname{G}_k^{\ge0}(\mathbb{R}^n)$ such that each cell, also called a \emph{positroid cell}, is indexed by the associated matroid -- a positroid -- of rank $k$ on $n$ elements, see also \cite{Postnikov2009}. Furthermore, Rietsch \& Williams \cite{Rietsch2010} showed that the closures of positroid cells are contractible and that their boundaries are homotopy equivalent to spheres. \begin{definition} \label{def:amplituhedron} Let $k\geq 1$, $m\ge0$ and $n \geq k+m$ be integers, and let $Z$ be a real $(k+m)\times n$ matrix such that the assignment \begin{equation} \label{eqn:definition map Z} \widetilde{Z}(\span(V))=\span(VZ^\top) \end{equation} induces a map \[ \widetilde{Z} \colon \operatorname{G}_k^{\ge0}(\mathbb{R}^n) \longrightarrow \operatorname{G}_k(\mathbb{R}^{k+m}). \] Here $V\in \operatorname{Mat}_{k,n}^{\geq 0}$, $\span$ denotes the row span of a matrix, and $Z^\top$ is the transpose of the matrix $Z$. The image $\widetilde{Z}(\bar{e})$ of a closed positroid cell $\bar{e}$ in the CW decomposition of the nonnegative Grassmannian $\operatorname{G}_k^{\ge0}(\mathbb{R}^n)$ is called a \emph{Grassmann polytope}, denoted by $\GP{e}$. 
If $e$ is the maximal cell, which for this CW decomposition means $\bar{e} = \operatorname{G}_k^{\ge0}(\mathbb{R}^n)$, and all $(k+m)\times (k+m)$ minors of the matrix $Z$ are positive, then the Grassmann polytope $\GP{e}$ is called an \emph{amplituhedron} and is denoted by $\mathcal{A}_{n,k,m}(Z)$. \end{definition} The previous definition in particular means that if $v_1, \dots, v_k \in \mathbb{R}^n$ are linearly independent row vectors, then \[ \widetilde{Z}(\span\{v_1, \dots, v_k\}) = \span\{v_1Z^\top, \dots, v_kZ^\top\}. \] The map $\widetilde{Z}$ is said to be \emph{well defined} if $\span(VZ^\top)$ is a $k$-dimensional subspace of $\mathbb{R}^{k+m}$ for every $V \in \operatorname{Mat}_{k,n}^{\ge0}$. The fact that the map $\widetilde{Z}$ is well defined when $Z$ is a matrix with positive maximal minors was established by Arkani-Hamed \& Trnka in \cite{Arkani-Hamed2014} and by Karp in \cite[Thm.\,4.2]{Karp2017b}. Lam \cite[Prop.\,15.2]{Lam2016}, however, considers a larger class of matrices $Z$ for which the map $\widetilde{Z}$ is still well defined. The structure of the amplituhedron is known only in a few cases. In the case $m=0$ all amplituhedra $\mathcal{A}_{n,k,0}(Z)$ are the point $\operatorname{G}_k(\mathbb{R}^{k})$, whereas when $m=1$ Karp \& Williams \cite[Cor.\,6.18]{Karp2017} have shown that the amplituhedron is homeomorphic to a ball. For $k=1$ the amplituhedron is a cyclic polytope of dimension $m$ on $n$ vertices \cite{Sturmfels1988}, and for $n=k+m$ the map $Z$ is a linear isomorphism, and consequently the amplituhedron is homeomorphic to the totally nonnegative Grassmannian $\operatorname{G}_k^{\ge0}(\mathbb{R}^n)$, which is a ball by \cite[Thm.\,1.1]{Galashin2017}. Finally, Galashin, Karp \& Lam \cite[Thm.\,1.2]{Galashin2017} proved that the cyclically symmetric amplituhedra, amplituhedra arising from particularly chosen matrices $Z$, are homeomorphic to balls whenever $m$ is even. The topology of other Grassmann polytopes is unknown. 
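For instance, the case $k=1$ can be made completely explicit: if the columns of $Z$ are points of the moment curve, all maximal minors of $Z$ are Vandermonde determinants, and $\widetilde{Z}$ sends the simplex $\operatorname{G}_1^{\ge0}(\mathbb{R}^n)$ onto the cyclic polytope spanned by the columns of $Z$. This can be checked numerically; a minimal sketch with hypothetical nodes, $m=2$ and $n=5$:

```python
import itertools
import numpy as np

# Hypothetical data for k = 1, m = 2, n = 5: the columns of Z are the
# points (1, t, t^2) of the moment curve at increasing nodes t_j, so
# every maximal minor of Z is a Vandermonde determinant, hence positive.
nodes = [1.0, 2.0, 3.0, 4.0, 5.0]
m = 2
Z = np.array([[t**i for t in nodes] for i in range(m + 1)])  # (m+1) x n

minors = [np.linalg.det(Z[:, list(c)])
          for c in itertools.combinations(range(len(nodes)), m + 1)]
assert all(d > 0 for d in minors)

# A point of G_1^{>=0}(R^5) is a nonzero nonnegative row vector v, up to
# positive scale; Z~ sends it to the line spanned by v Z^T, a convex
# combination of the columns of Z, i.e. a point of the cyclic polytope.
v = np.array([0.1, 0.2, 0.3, 0.25, 0.15])  # nonnegative, sums to 1
image = v @ Z.T
assert np.allclose(image, Z @ v)
```

The positivity of the minors is exactly the hypothesis on $Z$ in Definition \ref{def:amplituhedron}, and the last line illustrates why the image is the convex hull of the columns.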
\medskip \subsection{Main results} Our first result gives a family of contractible Grassmann polytopes. \begin{theorem} \label{thm:contractible} Let $k \geq 1$ and $m \ge0$ be integers, and let $Z$ be a real $(k+m)\times (k+m+1)$ matrix such that the map $\widetilde{Z}\colon \nng{k}{k+m+1} \longrightarrow \operatorname{G}_k(\mathbb{R}^{k+m})$ is well defined. Then the Grassmann polytope $\GP{e}$ is contractible for every positroid cell $e$ in the CW decomposition of $\operatorname{G}_k^{\ge0}(\mathbb{R}^{k+m+1})$. \end{theorem} The proof of Theorem \ref{thm:contractible} relies on classical results of Smale \cite[Main Thm.]{Smale1957} and Whitehead \cite[Thm.\,1]{Whitehead1949} combined with the fact that every Grassmann polytope admits a triangulation (as a topological space), see Theorem \ref{thm:triangulation}. \medskip The following is a consequence of Smale's result \cite[Main Thm.]{Smale1957}. \begin{theorem}[Smale] \label{thm:Smale} Let $X$ and $Y$ be path connected, locally compact, separable metric spaces, and in addition let $X$ be locally contractible. Let $f\colon X\longrightarrow Y$ be a continuous surjective proper map, that is, any inverse image of a compact set is compact. If for every $y\in Y$ the inverse image $f^{-1}(\{y\})$ is contractible, then the induced homomorphism \[ f_{\#} \colon \pi_i(X)\longrightarrow \pi_i(Y) \] is an isomorphism for all $i \geq 0$. \end{theorem} Recall that a continuous map $f\colon X\longrightarrow Y$ between topological spaces $X$ and $Y$ is a {\em weak homotopy equivalence} if the induced map on the path connected components $f_{\#}\colon\pi_0(X)\longrightarrow\pi_0(Y)$ is bijective, and for every point $x_0\in X$ and for every integer $n \geq 1$ the induced map $f_{\#}\colon\pi_n(X,x_0)\longrightarrow\pi_n(Y,f(x_0))$ is an isomorphism. \begin{theorem}[{\cite[Thm.\,1]{Whitehead1949}}] \label{thm:Whitehead} Let $X$ and $Y$ be topological spaces that are homotopy equivalent to CW complexes. 
Then a continuous map $f\colon X \longrightarrow Y$ is a weak homotopy equivalence if and only if it is a homotopy equivalence. \end{theorem} Since Theorem \ref{thm:Whitehead} requires that the spaces have the homotopy type of a CW complex, the following theorem is a necessary ingredient in the proof of Theorem \ref{thm:contractible}. \begin{theorem} \label{thm:triangulation} Every Grassmann polytope is semi-algebraic as a subset of a Grassmannian. In particular, every Grassmann polytope is homeomorphic to a semi-algebraic subset of some real affine space, and admits a triangulation. \end{theorem} Note that Theorem \ref{thm:triangulation} claims that every Grassmann polytope $\GP{e}$ can be triangulated in the classical sense: there exists a simplicial complex $T$ and a homeomorphism $T \longrightarrow \GP{e}$. A very similar argument to ours was also given by Arkani-Hamed, Bai \& Lam in \cite[Appendix~J]{Arkani-Hamed2017}. \medskip In order to apply Theorem \ref{thm:Smale} to the map $\widetilde{Z}$, we need to understand its fibers. Thus we prove the following theorem. \begin{theorem} \label{thm:fibers} Let $k \geq 1$ and $m \ge0$ be integers, and let $Z$ be a real $(k+m)\times (k+m+1)$ matrix such that the map $\widetilde{Z}$ is well defined. Then for every positroid cell $e$ and for every point $y \in \GP{e}$, the inverse image $(\widetilde{Z}|_{\bar{e}})^{-1}(\{y\}) = \widetilde{Z}^{-1}(\{y\}) \cap \bar{e}$ under the restriction map $\widetilde{Z} |_{\bar{e}} \colon \bar{e} \longrightarrow \GP{e}$ is contractible. \end{theorem} The proof of Theorem \ref{thm:fibers} is postponed to the next section. Here we show that Theorem \ref{thm:fibers} in combination with Theorems \ref{thm:Smale}--\ref{thm:triangulation} implies our main result. \begin{proof}[Proof of Theorem \ref{thm:contractible}] Let $e$ be a positroid cell in the CW decomposition of $\nng{k}{k+m+1}$. We apply Theorem \ref{thm:Smale} to the map $\widetilde{Z}: \bar{e} \longrightarrow \GP{e}$. 
The spaces $\bar{e}$ and $\GP{e}$, as well as the map $\widetilde{Z}$, satisfy the assumptions of Theorem \ref{thm:Smale}. Furthermore, Theorem \ref{thm:fibers} implies that for every $y \in \GP{e}$, the fiber $\widetilde{Z}^{-1}(\{y\})$ is contractible. Thus, from Theorem \ref{thm:Smale} we have that the map $\widetilde{Z}$ is a weak homotopy equivalence. The closed positroid cell $\bar{e}$ is a CW complex. Furthermore, the Grassmann polytope $\GP{e}$ is a CW complex, by Theorem \ref{thm:triangulation}. Thus, from Theorem \ref{thm:Whitehead}, we conclude that the map $\widetilde{Z}$ is a homotopy equivalence. Hence, the Grassmann polytope $\GP{e}$ is homotopy equivalent to the closed positroid cell $\bar{e}$, which is contractible, see \cite[Thm.\,1.1]{Rietsch2010}. \end{proof} Theorem \ref{thm:contractible} in particular implies that all amplituhedra $\mathcal{A}_{k+m+1,k,m}(Z)$ are contractible. Our next result shows that if in addition $m$ is even, they are homeomorphic to balls. \begin{theorem} \label{thm:homeomorphic} Let $k \geq 1$ be an integer, let $m \ge0$ be an even integer, and let $Z \in \operatorname{Mat}_{k+m,k+m+1}$ be a matrix with all $(k+m)\times (k+m)$ minors positive. Then the amplituhedron $\mathcal{A}_{k+m+1,k,m}(Z)$ induced by the matrix $Z$ is homeomorphic to a $km$-dimensional ball. \end{theorem} The proof of Theorem \ref{thm:homeomorphic} is presented in Section \ref{sec:homeomorphic}. We remark that the combinatorics of the amplituhedron in the case $n=k+m+1$ with $m$ even is identical to that of a cyclic polytope, see \cite{Galashin2018}. \subsection*{Acknowledgement} The authors thank Rainer Sinn for sharing his knowledge of semi-algebraic sets, Thomas Lam, whose observations increased the generality of the results in this paper, and Steven Karp for helpful comments. We are grateful to the referee for the careful reading of our manuscript and for useful suggestions that improved the quality of our paper. 
\section{Proof of Theorem \ref{thm:fibers}} \label{sec:contractible} Let $k \geq 1, m \ge0$ and $n \geq k+m$ be integers and let $Z$ be a real $(k+m)\times n$ matrix such that the map $\widetilde{Z}$ is well defined. Since the action of the group $\operatorname{GL}_k^+$ on $\operatorname{Mat}_{k,n}^{\geq 0}$ is free, there is a fibration \begin{equation} \label{eqn:fibration nonnegative Grassmannian} \operatorname{GL}_k^+ \longrightarrow \operatorname{Mat}_{k,n}^{\ge0} \longrightarrow \operatorname{G}_k^{\ge0}(\mathbb{R}^n). \end{equation} The matrix $Z$, as in Definition \ref{def:amplituhedron}, induces a map \begin{eqnarray*} \widehat{Z} : \operatorname{Mat}_{k,n}^{\ge0} & \longrightarrow & \operatorname{Mat}_{k,k+m},\\ V & \longmapsto & VZ^\top, \end{eqnarray*} which is again well defined, see for example \cite[Prop.\,15.2]{Lam2016}. Let $e$ be a positroid cell in the CW decomposition of $\nng{k}{n}$, and let $I_e \subseteq \binom{[n]}{k}$ be the family of nonbases (dependent sets) of cardinality $k$ of the matroid that defines the cell $e$. The maximal minors of a $k \times n$ matrix are indexed by the set $\binom{[n]}{k}$. Denote by $\operatorname{Mat}_{k,n}^{\ge0}(e)$ the set of all matrices $V \in \operatorname{Mat}_{k,n}^{\ge0}$ whose minors indexed by elements of $I_e$ are equal to zero. Then every point in $\bar{e} \subseteq \nng{k}{n}$ is represented by a matrix in $\operatorname{Mat}_{k,n}^{\ge0}(e)$, and the row span of every such matrix lies in $\bar{e}$. In other words, $\bar{e}= \operatorname{GL}_k^{+} \backslash \operatorname{Mat}_{k,n}^{\geq 0}(e)$. Thus the restriction of the fibration \eqref{eqn:fibration nonnegative Grassmannian} is a fibration \begin{equation} \label{eqn:fibration positroid cell} \operatorname{GL}_k^+ \longrightarrow \operatorname{Mat}_{k,n}^{\ge0}(e) \longrightarrow \bar{e}. 
\end{equation} Note that if $e$ is the maximal positroid cell, the set $\operatorname{Mat}_{k,n}^{\ge0}(e)$ is the whole set $\operatorname{Mat}_{k,n}^{\ge0}$. Denote by $\widehat{\GP{e}}$ the image of the set $\operatorname{Mat}_{k,n}^{\ge0}(e)$ under the map $\widehat{Z}$. With a usual abuse of notation, we consider maps $\widehat{Z}: \operatorname{Mat}_{k,n}^{\ge0}(e) \longrightarrow \widehat{\GP{e}}$ and $\widetilde{Z}:\bar{e} \longrightarrow \GP{e}$. Then there exists a commutative diagram of spaces and continuous maps \[ \begin{tikzcd} \operatorname{Mat}_{k,n}^{\ge0}(e) \arrow{r}{\widehat{Z}} \arrow{d} & \widehat{\GP{e}} \arrow{d} \\ \bar{e} \arrow{r}{\widetilde{Z}} & \GP{e}, \end{tikzcd} \] where vertical maps send any matrix to its row span. \medskip The proof of Theorem \ref{thm:fibers} splits into the following two lemmas. \begin{lemma} \label{lemma:fiber} Let $k \geq 1$ and $m\ge0$ be integers, $n=k+m+1$, and let $Z$ be a real $(k+m)\times n$ matrix such that the map $\widetilde{Z}$ is well defined. Then for every positroid cell $e$ in the CW decomposition of $\nng{k}{n}$ and for every $W \in \widehat{\GP{e}}$, the inverse image $\widehat{Z}^{-1}(\{W\}) \subseteq \operatorname{Mat}_{k,n}^{\ge0}(e)$ is nonempty and convex. \end{lemma} \begin{proof} The matrix $Z$ induces a linear map \begin{eqnarray} \label{eqn:map corank 1} \mathbb{R}^n &\longrightarrow& \mathbb{R}^{k+m}\\ v &\longmapsto& vZ^\top \nonumber, \end{eqnarray} where $v \in \mathbb{R}^n$ is a row vector. Since $n=k+m+1$, the kernel of the map \eqref{eqn:map corank 1} is $1$-dimensional. Fix a generator $a \in \mathbb{R}^n$ of that kernel. Choose an arbitrary point $W \in \widehat{\GP{e}}$, and let $U$ and $V$ be any two points in $\widehat{Z}^{-1}(\{W\})$. Our goal is to show that for every $\lambda \in [0,1]$ the convex combination $(1-\lambda)U+\lambda V$ also belongs to $\widehat{Z}^{-1}(\{W\})$. Since $UZ^\top=VZ^\top=W$, the rows of the matrix $V-U$ belong to $\ker(Z)$. 
Consequently, there exists a row vector $x \in \mathbb{R}^k$ such that $V-U = x^\top a$, where $a$ is also considered as a row vector. Thus we have to show that for every $\lambda \in [0,1]$ the convex combination \begin{equation} \label{eqn:convex combination} (1-\lambda)U + \lambda V = U + \lambda x^\top a \end{equation} belongs to the space $\operatorname{Mat}_{k,n}^{\ge0}(e)$; that is, every $k \times k$ minor of the matrix \eqref{eqn:convex combination} is nonnegative, and in addition all the minors of the matrix \eqref{eqn:convex combination} indexed by the nonbases $I_e \subseteq \binom{[n]}{k}$ of the matroid corresponding to $e$ are equal to zero. A $k \times k$ submatrix of the matrix \eqref{eqn:convex combination} is of the form \begin{equation} \label{eqn:matrix minor} \left( \begin{matrix} u_{1i_1}+\lambda x_1a_{i_1} & \dots & u_{1i_k}+\lambda x_1a_{i_k} \\ \vdots & & \vdots \\ u_{ki_1}+\lambda x_ka_{i_1} & \dots & u_{ki_k}+\lambda x_ka_{i_k} \end{matrix} \right), \end{equation} where \[ U=\left( \begin{matrix} u_{11} & \dots & u_{1n} \\ \vdots & &\vdots \\ u_{k1} & \dots & u_{kn} \end{matrix} \right), \ x=(x_1 \dots x_k), \ a=(a_1 \dots a_n), \] and $1 \leq i_1 < \dots <i_k \leq n$. The matrix \eqref{eqn:matrix minor} can be transformed using row operations into a matrix that contains the variable $\lambda$ only in one row. Therefore, every $k\times k$ minor of the matrix \eqref{eqn:convex combination} is a polynomial of degree at most $1$ in the variable $\lambda$. Since it takes nonnegative values for $\lambda=0$ and $\lambda=1$, it is also nonnegative for all $\lambda\in [0,1]$. Thus for every $\lambda\in [0,1]$, the point $(1-\lambda)U+\lambda V$ belongs to $\operatorname{Mat}_{k,n}^{\ge0}$. 
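As a sanity check, the degree bound can be observed numerically; a minimal sketch with random data and hypothetical sizes $k=2$, $n=5$ verifies that a $k\times k$ minor of $U+\lambda x^\top a$ is affine in $\lambda$ via the midpoint identity $f(1/2)=\tfrac{1}{2}\big(f(0)+f(1)\big)$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 5                        # hypothetical sizes
U = rng.standard_normal((k, n))    # stands in for the matrix U above
x = rng.standard_normal(k)
a = rng.standard_normal(n)

def minor(lam, cols=(0, 2)):
    """A k x k minor of U + lam * x^T a on the chosen columns."""
    M = U + lam * np.outer(x, a)   # rank-one perturbation in lam
    return np.linalg.det(M[:, list(cols)])

# a polynomial of degree <= 1 satisfies the midpoint identity exactly
assert abs(minor(0.5) - 0.5 * (minor(0.0) + minor(1.0))) < 1e-12
```

The check passes precisely because the $\lambda$-dependent part of the matrix has rank one, as exploited in the proof.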
Similarly, if $\{i_1, \dots, i_k\}$ is a nonbasis of the matroid corresponding to $e$, then the determinant of the matrix \eqref{eqn:matrix minor} is zero for $\lambda=0$ and $\lambda=1$, so, being a polynomial of degree at most $1$ in $\lambda$, it vanishes identically, meaning that the matrix \eqref{eqn:convex combination} belongs to $\operatorname{Mat}_{k,n}^{\ge0}(e)$ for every $\lambda \in [0,1]$. Consequently, the set $\widehat{Z}^{-1}(\{W\})$ is convex. \end{proof} \begin{lemma} \label{lemma:homeomorphism} Let $k\geq 1, m\ge0$ and $n\geq k+m$ be integers. For every positroid cell $e$ and for every $W \in \widehat{\GP{e}}$, the inverse images \[ \widehat{Z}^{-1}(\{W\}) \subseteq \operatorname{Mat}_{k,n}^{\ge0}(e) \subseteq \operatorname{Mat}_{k,n}^{\ge0} \qquad\text{and}\qquad \widetilde{Z}^{-1}(\{\span(W)\}) \subseteq \bar{e} \subseteq \operatorname{G}_k^{\ge0}(\mathbb{R}^n) \] are homeomorphic. \end{lemma} \begin{proof} Let $\varphi \colon \widehat{Z}^{-1}(\{W\}) \longrightarrow \widetilde{Z}^{-1}(\{\span(W)\})$ be defined by $\varphi(U)=\span(U)$, where $U\in \widehat{Z}^{-1}(\{W\})$, and $\span$ denotes the row span. We prove that $\varphi$ is a homeomorphism. Clearly, $\varphi$ is continuous, so it suffices to find a continuous map $\psi:\widetilde{Z}^{-1}(\{\span(W)\}) \longrightarrow \widehat{Z}^{-1}(\{W\})$ such that $\varphi \circ \psi$ is the identity map on $\widetilde{Z}^{-1}(\{\span(W)\})$ and $\psi \circ \varphi$ is the identity map on $\widehat{Z}^{-1}(\{W\})$. Let $L \in \widetilde{Z}^{-1}(\{\span(W)\})$. Then there exists a matrix $K \in \operatorname{Mat}_{k,n}^{\geq0}(e)$ whose rows span the subspace $L$. Since \[ \span(KZ^\top) = \span(W), \] there exists a unique $C \in \operatorname{GL}_k$ such that $KZ^\top=CW$. Now define $\psi$ as $\psi(L)=C^{-1}K$. It can be seen using the Cauchy--Binet formula that $\det(C)>0$, and thus $C^{-1}K \in \operatorname{Mat}_{k,n}^{\ge0}(e)$. 
Even though we have defined the map $\psi$ using an arbitrarily chosen matrix $K$ such that $\span(K)=L$, it can be checked directly that the definition of $\psi$ does not depend on the choice of $K$. In order to prove that the map $\psi$ is continuous, we need to show that the choice of a matrix $K$ can be made continuously on $\widetilde{Z}^{-1}(\{\span(W)\})$. The choice of a matrix $K$ is equivalent to the choice of a positively oriented basis for the subspace $L \subseteq \mathbb{R}^n$. Therefore, we need a continuous section of the fiber bundle \eqref{eqn:fibration positroid cell} restricted to the set $\widetilde{Z}^{-1}(\{\span(W)\})$. Since the base space $\bar{e}$ is contractible, the fiber bundle \eqref{eqn:fibration positroid cell} is trivial. In particular, its restriction to $\widetilde{Z}^{-1}(\{\span(W)\})$ is also trivial, so it admits a continuous section. Therefore, the bases for elements of $\widetilde{Z}^{-1}(\{\span(W)\})$ can be chosen continuously. On the other hand, the matrix $C$ is a solution of the linear system $KZ^\top=CW$, which depends continuously on $K$, and thus it also depends continuously on $L$. Lastly, \[ \varphi(\psi(L))=\varphi(C^{-1}K)=\span(C^{-1}K)=\span(K)=L, \] holds for every $L \in \widetilde{Z}^{-1}(\{\span(W)\})$, and \[ \psi(\varphi(U))=\psi(\span(U))=C^{-1}U, \] for every $U \in \widehat{Z}^{-1}(\{W\})$, where $C$ is the unique $k \times k$ matrix such that $W=\widehat{Z}(U)=UZ^\top=CW$, hence $C$ is the identity matrix. \end{proof} Finally, Lemma \ref{lemma:fiber} and Lemma \ref{lemma:homeomorphism} complete the proof of Theorem \ref{thm:fibers}. \section{Proof of Theorem \ref{thm:triangulation}} \label{sec:triangulation} Let us fix an arbitrary positroid cell $e$ in the CW decomposition of the totally nonnegative Grassmannian $\nng{k}{n}$. 
Furthermore, let $d=\binom{k+m}{k}$, and consider the Veronese embedding $\nu \colon \RP^{d-1} \longrightarrow \mathbb{R}^{d\times d}$ given by \[ x=(x_1: \ldots :x_d) \longmapsto \left( \frac{x_i x_j}{x_1^2+\dots +x_d^2} \right)_{1\leq i,j\leq d}, \] where $x=(x_1: \ldots :x_d)\in \RP^{d-1}$. The embedding $\nu$ maps every line $x \in \RP^{d-1}$ to the matrix of the projection $\mathbb{R}^d \longrightarrow x$. For more details on the Veronese embedding see for example \cite[Sec.\,3.4.2]{BochnakEtAl1998}. Consider next, with obvious abuse of notation, the continuous map $\nu\colon \mathbb{R}^d{\setminus}\{0\} \longrightarrow \mathbb{R}^{d\times d}$ given by \[ (x_1, \dots, x_d) \longmapsto \left( \frac{x_i x_j}{x_1^2+\dots +x_d^2} \right)_{1\leq i,j\leq d} \in \mathbb{R}^{d \times d}. \] In this way, we obtain the commutative diagram of spaces and maps \[ \begin{tikzcd} \operatorname{Mat}_{k,n}^{\ge0} \arrow{r}{\widehat{Z}} \arrow{d} & \operatorname{Mat}_{k,k+m} \arrow{r}{\gamma} \arrow{d} & \mathbb{R}^d \setminus \{0\} \arrow{d}{\pi} \arrow{r}{\nu} & \mathbb{R}^{d\times d} \arrow{d}{\operatorname{id}} \\ \nng{k}{n} \arrow{r}{\widetilde{Z}} & \operatorname{G}_k(\mathbb{R}^{k+m}) \arrow{r}{\gamma} & \RP^{d-1} \arrow{r}{\nu} & \mathbb{R}^{d\times d} , \end{tikzcd} \] where $\gamma\colon \operatorname{G}_k(\mathbb{R}^{k+m}) \longrightarrow \RP^{d-1}$ is the Pl\"ucker embedding, $\gamma\colon \operatorname{Mat}_{k,k+m} \longrightarrow \mathbb{R}^{d}\setminus \{0\}$ maps every matrix to the tuple of its $k \times k$ minors, and $\pi\colon \mathbb{R}^{d}\setminus \{0\} \longrightarrow \RP^{d-1}$ is the quotient map. The Grassmann polytope $\GP{e}=\widetilde{Z}(\bar{e})$ is embedded into $\RP^{d-1}$ via $\gamma$, the projective space $\RP^{d-1}$ is embedded into the Euclidean space $\mathbb{R}^{d \times d}$ via $\nu$, and thus the image $\nu(\gamma(\GP{e}))$ is homeomorphic to $\GP{e}$. 
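The only properties of $\nu$ used here are that $\nu(x)$ is the matrix of the orthogonal projection onto the line spanned by $x$, and that $\nu(x)$ is unchanged under rescaling of $x$, so that $\nu$ is well defined on $\RP^{d-1}$. These are immediate from the formula, and easy to confirm numerically; a quick sanity check with a hypothetical $d=4$:

```python
import numpy as np

def nu(x):
    """The Veronese-type map x -> (x_i x_j / |x|^2): the matrix of the
    orthogonal projection of R^d onto the line spanned by x."""
    x = np.asarray(x, dtype=float)
    return np.outer(x, x) / np.dot(x, x)

x = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical point, d = 4
P = nu(x)

assert np.allclose(P, P.T)             # symmetric
assert np.allclose(P @ P, P)           # idempotent: a projection
assert np.allclose(P @ x, x)           # fixes the line spanned by x
assert np.linalg.matrix_rank(P) == 1   # projects onto a line
assert np.allclose(nu(-2.5 * x), P)    # well defined on RP^{d-1}
```

In particular the last assertion is what makes the square in the diagram above commute: $\nu\circ\pi$ agrees with $\nu$ on $\mathbb{R}^d\setminus\{0\}$.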
First, we prove that the homeomorphic image $\nu(\gamma(\GP{e}))$ of the Grassmann polytope is semi-algebraic. The commutativity of the diagram above implies that \[ \nu(\gamma(\GP{e})) = \nu(\pi(\gamma(\widehat{\GP{e}}))) = \nu(\gamma(\widehat{\GP{e}}))= \nu(\gamma(\widehat{Z}(\operatorname{Mat}_{k,n}^{\geq 0}(e)))). \] The set $\operatorname{Mat}_{k,n}^{\ge0}(e) \subseteq \mathbb{R}^{k \times n}$ is semi-algebraic, being defined by finitely many polynomial equations and inequalities. Since the map $\widehat{Z}$ is multiplication by a matrix, the set $\widehat{\GP{e}}$ is also semi-algebraic \cite[Cor.\,2.4(2)]{Coste2002}. Furthermore, the map $\gamma\colon \operatorname{Mat}_{k,k+m} \longrightarrow \mathbb{R}^{d}\setminus \{0\}$ is a restriction of a polynomial map $\mathbb{R}^{k\times (k+m)}\longrightarrow\mathbb{R}^d$, and thus $\gamma(\widehat{\GP{e}}) \subseteq \mathbb{R}^{d} \setminus \{0\}$ is semi-algebraic by \cite[Cor.\,2.4(2)]{Coste2002} as well. Finally, the map $\nu\colon\mathbb{R}^d \longrightarrow \mathbb{R}^{d \times d}$ is a regular rational map, and consequently it maps semi-algebraic sets to semi-algebraic sets, see \cite[Prop.\,2.2.7]{BochnakEtAl1998} and \cite[Cor.\,2.9(1)]{Coste2002}. Hence, we have proved that $\nu(\gamma(\GP{e}))$ is semi-algebraic in $\mathbb{R}^{d\times d}$, and consequently the Grassmann polytope $\GP{e}$ is homeomorphic to a semi-algebraic set. In particular, since $\GP{e}$ is compact and homeomorphic to a semi-algebraic set, it admits a triangulation according to \cite[Thm.\,9.2.1]{BochnakEtAl1998} and \cite[Thm.\,3.11]{Coste2002}. \medskip Second, notice that we have obtained slightly more. The inverse image under $\pi$ of the Grassmann polytope $\GP{e}$, embedded via $\gamma$, can be presented as follows: \[ \pi^{-1}(\gamma(\GP{e}))=\gamma(\widehat{Z}(\operatorname{Mat}_{k,n}^{\ge0}(e)))\cup \big(-\gamma(\widehat{Z}(\operatorname{Mat}_{k,n}^{\geq 0}(e)))\big). 
\] Since we proved that $\gamma(\widehat{Z}(\operatorname{Mat}_{k,n}^{\geq 0}(e)))$ is semi-algebraic, we can conclude that $\pi^{-1}(\gamma(\GP{e}))$ is also a semi-algebraic subset of $\mathbb{R}^d$. Bearing in mind that every real projective variety is affine, we can declare a subset $X$ of the real projective space $\RP^{d-1}$ to be semi-algebraic if, for example, its preimage $\pi^{-1}(X)\subseteq\mathbb{R}^d$ under the defining quotient map $\mathbb{R}^d\longrightarrow \RP^{d-1}$ is semi-algebraic. Thus, we have proved that the Grassmann polytope $\GP{e}$, when embedded in $\RP^{d-1}$ via the Pl\"ucker embedding, is a semi-algebraic subset of the real projective space. \qed \section{Proof of Theorem \ref{thm:homeomorphic}} \label{sec:homeomorphic} Let $ k \geq 1$, $m \ge0$ and $n \geq k+m$ be integers, and suppose in addition that $m$ is even. Let $S \in \operatorname{GL}_n$ be given by \[ S(x_1, \dots, x_n) = (x_2, \dots, x_n, (-1)^{k-1}x_1). \] Denote by $Z_0 \in \operatorname{Mat}_{k+m,n}$ the matrix whose rows are the eigenvectors of the matrix $S+S^\top$ that correspond to the largest $k+m$ eigenvalues. It was shown in \cite[Lemma~3.1]{Galashin2017} that all $(k+m)\times (k+m)$ minors of the matrix $Z_0$ are positive, thus it defines an amplituhedron $\mathcal{A}_{n,k,m}(Z_0)$, called the \emph{cyclically symmetric amplituhedron}. Galashin, Karp \& Lam \cite[Thm.\,1.2]{Galashin2017} showed that $\mathcal{A}_{n,k,m}(Z_0)$ is homeomorphic to a closed $km$-dimensional ball whenever the parameter $m$ is even. We conclude the proof of Theorem \ref{thm:homeomorphic} by showing that the amplituhedra $\mathcal{A}_{n,k,m}(Z)$ and $\mathcal{A}_{n,k,m}(Z_0)$ are homeomorphic. From \cite[Cor.\,1.12(ii)]{Karp2017b} we know that the entries of every nonzero vector of $\ker(Z_0)$ and of $\ker(Z)$ are nonzero, and they alternate in sign. In the setting of Theorem \ref{thm:homeomorphic} we have $n=k+m+1$, so the kernels of the matrices $Z$ and $Z_0$ are $1$-dimensional. 
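The sign pattern just quoted is easy to observe numerically: for an $n=k+m+1$ matrix with positive maximal minors, Cramer's rule gives a kernel generator with entries $(-1)^i\det(Z_{\hat{\imath}})$ (up to a global sign), where $Z_{\hat{\imath}}$ denotes $Z$ with the $i$-th column deleted, so positivity of the minors forces strictly alternating signs. A sketch with a hypothetical Vandermonde matrix, $k+m=3$ and $n=4$:

```python
import itertools
import numpy as np

# Hypothetical 3 x 4 Vandermonde matrix: all maximal minors are
# Vandermonde determinants at increasing nodes, hence positive.
nodes = [1.0, 2.0, 3.0, 4.0]
Z = np.array([[t**i for t in nodes] for i in range(3)])

minors = [np.linalg.det(Z[:, list(c)])
          for c in itertools.combinations(range(4), 3)]
assert all(d > 0 for d in minors)

# a generator of the 1-dimensional kernel of v -> v Z^T: the last
# right-singular vector of Z (singular values are sorted decreasingly)
kernel = np.linalg.svd(Z)[2][-1]
signs = np.sign(kernel)
assert np.all(signs != 0)                      # all entries nonzero
assert np.all(signs[:-1] * signs[1:] == -1)    # strictly alternating
```

For these nodes the kernel is spanned by $(2,-6,6,-2)$, matching the Cramer's-rule description.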
Let $a=(a_1, \dots, a_n)\in \mathbb{R}^n$ be a generator of the kernel of $Z$ and let $b=(b_1, \dots, b_n)\in \mathbb{R}^n$ be a generator of the kernel of $Z_0$ (it follows from the cyclic symmetry of $Z_0$ that $b_i=(-1)^{i-1}$ for $1\leq i\leq n$, see~\cite{Galashin2017}). Choose them in such a way that $a_1$ and $b_1$ have the same sign. Consequently, for every $1 \leq i \leq n$, the entries $a_i$ and $b_i$ have the same sign. Let $D$ be an $n \times n$ diagonal matrix $D=\operatorname{diag}(\frac{a_1}{b_1}, \dots, \frac{a_n}{b_n})$. The matrix $ZD$ has the same kernel as the matrix $Z_0$, and since the diagonal entries of the matrix $D$ are positive, all maximal minors of the matrix $ZD$ are positive. The fact that the matrices $ZD$ and $Z_0$ have the same kernel implies that they have the same row spans, as well. In particular, there exists a matrix $C \in \operatorname{GL}_{k+m}^+$ such that $Z_0 = CZD$. Multiplication by $D$ on the right gives a homeomorphism $\widehat{D}: \operatorname{Mat}_{k,n}^{\ge0} \longrightarrow\operatorname{Mat}_{k,n}^{\ge0}$, which induces a homeomorphism $\widetilde{D}:\nng{k}{n} \longrightarrow \nng{k}{n}$. Furthermore, multiplication by $C^\top$ on the right gives a homeomorphism $\widehat{C}:\operatorname{Mat}_{k,k+m} \longrightarrow \operatorname{Mat}_{k,k+m}$, thus the induced map $\widetilde{C}: \operatorname{G}_k(\mathbb{R}^{k+m}) \longrightarrow \operatorname{G}_k(\mathbb{R}^{k+m})$ is also a homeomorphism. Hence, we obtain the commutative diagram of spaces and maps \[ \begin{tikzcd} \operatorname{Mat}_{k,n}^{\ge0} \arrow{r}{\widehat{D}} \arrow{d} & \operatorname{Mat}_{k,n}^{\ge0} \arrow{r}{\widehat{Z}} \arrow{d} &\operatorname{Mat}_{k,k+m} \arrow{r}{\widehat{C}} \arrow{d} & \operatorname{Mat}_{k,k+m} \arrow{d} \\ \nng{k}{n} \arrow{r}{\widetilde{D}} & \nng{k}{n} \arrow{r}{\widetilde{Z}} & \operatorname{G}_k(\mathbb{R}^{k+m}) \arrow{r}{\widetilde{C}} & \operatorname{G}_k(\mathbb{R}^{k+m}). 
\end{tikzcd} \] The image of the composition $\widetilde{C}\circ\widetilde{Z}\circ\widetilde{D}$ of the maps in the lower row of the diagram is the cyclically symmetric amplituhedron $\mathcal{A}_{n,k,m}(Z_0)$ and the image of the map $\widetilde{Z}$ is the amplituhedron $\mathcal{A}_{n,k,m}(Z)$. Since the maps $\widetilde{C}$ and $\widetilde{D}$ are homeomorphisms, these two amplituhedra are homeomorphic. Finally, the fact that the cyclically symmetric amplituhedron $\mathcal{A}_{n,k,m}(Z_0)$ is homeomorphic to a $km$-dimensional ball \cite[Thm.\,1.2]{Galashin2017}, when $m$ is even, concludes the argument that every amplituhedron $\mathcal{A}_{n,k,m}(Z)$ is homeomorphic to a $km$-dimensional ball whenever $n=k+m+1$ and $m$ is even.\qed \medskip The proof of Theorem \ref{thm:homeomorphic} gives even more. Let us say that two Grassmann polytopes $\GPZ{e}{Z},\GPZ{e'}{Z'}\subseteq \operatorname{G}_k(\mathbb{R}^{k+m})$ are \emph{projectively equivalent} if there exists a matrix $M\in \operatorname{GL}_{k+m}$ such that \[ \GPZ{e'}{Z'}=\{\widetilde{M}(x)\mid x\in \GPZ{e}{Z}\}. \] Here $\widetilde{M}$ denotes a map $\operatorname{G}_k(\mathbb{R}^{k+m})\longrightarrow \operatorname{G}_k(\mathbb{R}^{k+m})$ induced by the natural action of $M$ on $\mathbb{R}^{k+m}$. For $k=1$, this coincides with the standard notion of projective equivalence for polytopes in the projective space $\RP^{k+m-1}$. The proof of Theorem \ref{thm:homeomorphic} actually shows that for $n=k+m+1$, any two amplituhedra are projectively equivalent.
\section{Introduction} \indent The hybrid superconductor-semiconductor nanowire system is the prime candidate to realize, control, and manipulate Majorana zero modes (MZMs) for topological quantum information processing~\cite{NayakRevModPhys2008,PluggeNPJ2017,KarzigPRB2017}. Majorana zero modes can be engineered in these hybrid nanowire systems by combining the one-dimensional nature of the nanowire, strong spin-orbit coupling, superconductivity, and appropriate external electric (to control the chemical potential) and magnetic fields (to control the Zeeman energy) to drive the system into a topologically non-trivial phase~\cite{LutchynPRL2010,OregPRL2010}. To induce superconductivity in the semiconductor nanowire, it needs to be coupled to a superconductor. The electronic coupling between the two systems turns the nanowire superconducting~\cite{deGennesProximityEffect}, a phenomenon known as the proximity effect. Following this scheme, the first signatures of MZMs were observed in these hybrid systems, characterized by a zero bias peak (ZBP) in the tunneling conductance spectrum~\cite{MourikScience2012,DasNatPhys2012,DengNanoLett2012,ChurchillPRB2013}. Since then, significant progress has been made in Majorana experiments~\cite{GulBallisticMajorana2017,DengScience2016,ZhangNature2018,LutchynReview2018}, enabled by more uniform coupling between the superconductor and semiconductor nanowire. This has been achieved by improved interface engineering: through careful ex situ processing~\cite{GulHardGap2017,ZhangBallisticSC2017,GillarXiv2018}, by depositing the superconductor on the nanowires in situ~\cite{KrogstrupNatMat2015,ChangNatNano2015}, and a combination of in situ and ex situ techniques~\cite{SasaNature2017}, finally leading to the quantization of the Majorana conductance~\cite{ZhangNature2018}.\\ \indent However, the treatment of the superconductor-semiconductor coupling in the interpretation of experiments is often oversimplified. 
This coupling has recently been predicted to depend substantially on the confinement induced by external electric fields~\cite{AntipovarXiv2018}. In this work, we experimentally show that the superconductor-semiconductor coupling, as parameterized by the induced superconducting gap, is affected by gate induced electric fields. Due to the change in coupling, the renormalization of material parameters is altered, as evidenced by a change in the effective g-factor of the hybrid system. Furthermore, the electric field is shown to affect the spin-orbit interaction, revealed by a change in the level repulsion between Andreev states. Our experimental findings are corroborated by numerical simulations. \section{Experimental set-up} \begin{figure}[htbp] \includegraphics[width=8.6cm]{Figure_1.pdf} \centering \caption{\textbf{Device schematics}. (\textbf{a}) SEM of device A, with InSb nanowire in gray, superconducting aluminum shell in green, Cr/Au contacts in yellow, and local tunnel gate in red. Scale bar is 500 nm. (\textbf{b}) Schematic of experimental set-up. The substrate acts as a global back gate. The magnetic field is applied along the nanowire direction ($x$-axis). (\textbf{c}) Geometry used in the numerical simulations. A uniform potential $V_{\mathrm{Gate}}$ is applied as a boundary condition at the interface between substrate and dielectric. The superconductor (green) is kept at a fixed potential, which is set by the work function difference at the superconductor-semiconductor interface.} \end{figure} \begin{figure*}[htbp] \includegraphics[width=17.8cm]{Figure_2.pdf} \centering \caption{\textbf{Gate dependence of the induced superconducting gap.} (\textbf{a},\textbf{b}) Differential conductance d$I$/d$V$ measured in device A as a function of $V_{\mathrm{Bias}}$ and $V_{\mathrm{Tunnel}}$ for $V_{\mathrm{BG}}$ = -0.6\,V (\textbf{a}) and $V_{\mathrm{BG}}$ = -0.3\,V (\textbf{b}). 
Insets show the calculated electron density in the wire for $V_{\mathrm{Gate}}$ = -0.3\,V and $V_{\mathrm{Gate}}$ = 0.3\,V, respectively. (\textbf{c}) Line-cuts from (a) and (b), indicated by the colored bars, in linear (top) and logarithmic (bottom) scale. (\textbf{d}) Calculated DOS for the density profiles shown in the insets of (a) and (b), shown in red and black, respectively. (\textbf{e}) Induced gap magnitude $\Delta$ as a function of $V_{\mathrm{BG}}$, showing a decrease for more positive gate voltages. Top right inset: line traces showing the coherence peak position (indicated by the arrow) for $V_{\mathrm{BG}}$ = -0.6\,V (solid red line) and $V_{\mathrm{BG}}$ = -0.4\,V (dashed black line). Bottom left inset: induced gap from the calculated DOS as a function of $V_{\mathrm{Gate}}$, consistent with the experimental observation.} \end{figure*} \indent We have performed tunneling spectroscopy experiments on four InSb-Al hybrid nanowire devices, labeled A-D, all showing consistent behaviour. The nanowire growth procedure is described in ref.~\cite{SasaNature2017}. A scanning electron micrograph (SEM) of device A is shown in Fig.~1(a). Figure~1(b) shows a schematic of this device and the measurement set-up. For clarity, the wrap-around tunnel gate, tunnel gate dielectric and contacts have been removed on one side. A normal-superconductor (NS) junction is formed between the part of the nanowire covered by a thin shell of aluminum (10 nm thick, indicated in green, S), and the Cr/Au contact (yellow, N). The transmission of the junction is controlled by applying a voltage $V_{\mathrm{Tunnel}}$ to the tunnel gate (red), galvanically isolated from the nanowire by 35 nm of sputtered SiN$_x$ dielectric. The electric field is induced by a global back gate voltage $V_{\mathrm{BG}}$, except in the case of device B, where this role is played by the side gate voltage $V_{\mathrm{SG}}$ (not shown in Fig. 1)~\cite{Supplement}. 
To obtain information about the density of states in the proximitized nanowire, we measure the differential conductance d$I$/d$V_{\mathrm{Bias}}$ as a function of applied bias voltage $V_{\mathrm{Bias}}$. In the following, we will label this quantity as d$I$/d$V$ for brevity. A magnetic field is applied along the nanowire direction ($x$-axis in Figs.~1(b),1(c)). All measurements are performed in a dilution refrigerator with a base temperature of 20\,mK. \section{Theoretical model} \indent The device geometry used in the simulation is shown in Fig.~1(c). We consider a nanowire oriented along the $x$-direction, with a hexagonal cross-section in the $yz$-plane. The hybrid superconductor-nanowire system is described by the Bogoliubov-de Gennes Hamiltonian \begin{equation} \begin{aligned} H=&\left[\frac{\hbar^2 \mathbf k^2}{2m^*}-\mu-e\phi\right]\tau_z +\alpha_y (k_z \sigma_x - k_x \sigma_z)\tau_z\\ &+\alpha_z (k_x\sigma_y - k_y \sigma_x)\tau_z +\frac{1}{2}g\mu_\mathrm{B} B\sigma_x+\Delta \tau_x. \label{eq:ham} \end{aligned} \end{equation} The first term contains contributions from the kinetic energy and the chemical potential, as well as the electrostatic potential $\phi$. The second and third terms describe the Rashba spin-orbit coupling, with the coupling strength $\alpha_y$ ($\alpha_z$) depending on the $y$-component ($z$-component) of the electric field. The Zeeman energy contribution, proportional to $g$, the Land\'{e} g-factor, is given by the fourth term. Finally, the superconducting pairing $\Delta$ is included as the fifth term.
All material parameters are position dependent, taking different values in the InSb nanowire and the Al superconductor~\cite{Supplement}.\\ \indent If the coupling between the superconductor and semiconductor is small (compared to the bulk gap of the superconductor $\Delta$, known as weak coupling), superconductivity can be treated as a constant pairing potential term in the nanowire Hamiltonian, with the induced superconducting gap being proportional to the coupling strength~\cite{VolkovPhysicaC1995}. However, if the coupling becomes strong, the wave functions of the two materials hybridize, and the superconductor and semiconductor have to be considered on equal footing~\cite{ReegPRB2018}. We achieve this by solving the Schr\"{o}dinger equation in both materials simultaneously. When desired, the orbital effect of the magnetic field is added via Peierls substitution~\cite{NijholtPRB2016}. The simulations are performed using the \texttt{kwant} package~\cite{kwant}.\\ \indent The electrostatic potential in the nanowire cross-section is calculated from the Poisson equation, assuming an infinitely long wire. We use a fixed potential $V_{\mathrm{Gate}}$ as a boundary condition at the dielectric-substrate interface. The superconductor enters as the second boundary condition, with a fixed potential to account for the work function difference between superconductor and semiconductor~\cite{VuikNPJ2016}. We approximate the mobile charges in the nanowire by a 3D electron gas (Thomas-Fermi approximation). It has been demonstrated that the potentials calculated using this approximation give good agreement with results obtained by self-consistent Schr\"{o}dinger-Poisson simulations~\cite{MikkelsenarXiv2018}. 
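The essential ingredients of this numerical approach can be illustrated with a minimal sketch: a strictly one-dimensional tight-binding discretization of a Rashba wire with induced pairing, diagonalized densely with numpy. This is an editorial toy model with illustrative parameter values, not the three-dimensional \texttt{kwant} simulation used in this work; the function names and units are hypothetical.

```python
import numpy as np

# Pauli matrices; in np.kron(A, B) below, A acts on particle-hole (tau)
# space and B on spin (sigma) space, basis (u_up, u_dn, v_dn, -v_up) per site.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_chain(N=100, t=1.0, mu=0.5, alpha=0.5, Vz=0.1, Delta=0.3):
    """Dense BdG Hamiltonian of a discretized 1D Rashba wire with
    Zeeman field Vz and induced s-wave pairing Delta (illustrative units)."""
    onsite = (2*t - mu) * np.kron(sz, s0) + Vz * np.kron(s0, sx) \
             + Delta * np.kron(sx, s0)
    # nearest-neighbour hopping plus discretized Rashba term ~ alpha*k*sigma_y*tau_z
    hop = -t * np.kron(sz, s0) - 0.5j * alpha * np.kron(sz, sy)
    H = np.zeros((4*N, 4*N), dtype=complex)
    for j in range(N):
        H[4*j:4*j+4, 4*j:4*j+4] = onsite
        if j < N - 1:
            H[4*j:4*j+4, 4*(j+1):4*(j+1)+4] = hop
            H[4*(j+1):4*(j+1)+4, 4*j:4*j+4] = hop.conj().T
    return H

def induced_gap(H):
    """Smallest excitation energy of the BdG spectrum."""
    return np.min(np.abs(np.linalg.eigvalsh(H)))
```

Sweeping the chemical potential (standing in for the gate-controlled potential $\phi$) and recording `induced_gap` reproduces, qualitatively, the gate dependence of the induced gap discussed below.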
The calculated potential for a given $V_{\mathrm{Gate}}$ is then inserted into the Hamiltonian~(\ref{eq:ham}).\\ \indent By solving the Schr\"{o}dinger equation for a given electrostatic environment, we can see how the gate potential alters the electronic states in the nanowire, how they are coupled to the superconductor, and how this coupling affects parameters such as the induced gap, effective g-factor, and spin-orbit energy. \begin{figure*}[htbp] \includegraphics[width=17.8cm]{Figure_3.pdf} \centering \caption{\textbf{Effective g-factor.} (\textbf{a},\textbf{b}) d$I$/d$V$ measured in device A as a function of applied bias voltage $V_{\mathrm{Bias}}$ and magnetic field $B$ for $V_{\mathrm{BG}}$ = -0.59\,V and $V_{\mathrm{BG}}$ = -0.41\,V, respectively. The effective g-factor is extracted from a linear fit of the lowest energy state dispersion (dashed lines). (\textbf{c}) $g_{\mathrm{eff}}$ as a function of $V_{\mathrm{BG}}$, showing an increase as the gate voltage becomes more positive. Data from device A. (\textbf{d},\textbf{e}) Simulated DOS in the nanowire as a function of magnetic field for $V_{\mathrm{Gate}}$ = -0.6\,V and $V_{\mathrm{Gate}}$ = -0.3\,V, respectively. (\textbf{f}) Extracted $g_{\mathrm{eff}}$ (based on lowest energy state in the spectrum, black circles) and $g_{\mathrm{spin}}$ (based on the spectrum at $k$ = 0, red squares) from the simulation.} \end{figure*} \section{Gate voltage dependence of the induced superconducting gap} When the transmission of the NS-junction is sufficiently low (i.e., in the tunneling regime), the differential conductance d$I$/d$V$ is a direct measure of the density of states (DOS) in the proximitized nanowire~\cite{BardeenPRL1961}. In Fig.~2(a), we plot d$I$/d$V$ measured in device A as a function of applied bias voltage $V_{\mathrm{Bias}}$ and tunnel gate voltage $V_{\mathrm{Tunnel}}$, for $V_{\mathrm{BG}}$ = -0.6\,V. 
In the low transmission regime, we resolve the superconducting gap $\Delta$ around 250 $\mathrm{\mu}$eV, indicated by the position of the coherence peaks. The ratio of sub-gap to above-gap conductance (proportional to the normal state transmission of the junction, $T$) follows the behavior expected from BTK theory~\cite{BTK,BeenakkerPRB1992}, indicating the sub-gap conductance is dominated by Andreev reflection processes (proportional to $T^2$). This is generally referred to as a hard gap. However, for more positive back gate voltages, the sub-gap conductance is larger and shows more resonances, as is illustrated in Fig.~2(b) for $V_{\mathrm{BG}}$ = -0.3\,V. Fig.~2(c) shows line traces taken at a similar transmission (above-gap conductance) for both cases. The sub-gap conductance for $V_{\mathrm{BG}}$ = \nobreakdash-0.3\,V (black line) exceeds that of the hard gap case (red line) by an order of magnitude. This is indicative of a surplus of quasi-particle states inside the gap, referred to as a soft gap.\\ \indent The gate voltage induced transition from soft to hard gap is generically observed in multiple devices. To understand this phenomenology, we calculate the electron density in the nanowire cross-section for different values of $V_{\mathrm{Gate}}$. Because the charge neutrality point in our devices is unknown, there is a difference between the gate voltages used in the experiment and the values of $V_{\mathrm{Gate}}$ used in the simulation. By comparing the transition point between hard and soft gaps in the experiment and the simulation, we estimate that the experimental gate voltage range -0.6\,V~\textless~$V_{\mathrm{BG}}$~\textless~-0.4\,V roughly corresponds to the simulated gate voltage range -0.4\,V~\textless~$V_{\mathrm{Gate}}$~\textless~-0.2\,V.\\ \indent For more negative $V_{\mathrm{Gate}}$, the electric field from the gate pushes the electrons towards the interface with the superconductor (inset of Fig.~2(a)).
We solve the Schr\"{o}dinger equation for the calculated electrostatic potential and find that this stronger confinement near the interface leads to a stronger coupling. This results in a hard gap, as illustrated by the calculated energy spectrum (Fig.~2(d), red line). However, for more positive voltages, the electrons are attracted to the back gate, creating a high density pocket far away from the superconductor (inset of Fig.~2(b)). These states are weakly coupled to the superconductor, as demonstrated by a soft gap structure (Fig.~2(d), black line). We can therefore conclude that the electron tunneling between the semiconductor and the superconductor is strongly affected by the gate potential.\\ \indent The change in superconductor-semiconductor coupling does not just affect the hardness, but also the size of the gap. For each back gate voltage, we fit the BCS-Dynes expression~\cite{Dynes} for the DOS in order to extract the position of the coherence peaks, giving the gap size $\Delta$. The results are shown in Fig.~2(e). As $V_{\mathrm{BG}}$ becomes more positive, the superconductor-semiconductor coupling becomes weaker, reducing the size of the gap. From $V_{\mathrm{BG}}$ \textgreater \,-0.4\,V onward it becomes difficult to accurately determine the gap, as it tends to become too soft and the coherence peaks are not always clearly distinguishable. The top right inset shows the shift of the coherence peak (indicated by the arrows) to lower bias voltage as $V_{\mathrm{BG}}$ is increased. The lower left inset shows the extracted coherence peak position from the numerical simulations, showing the same trend. \section{Effective g-factor} \indent As the electric field induced by the back gate clearly has an important effect on the hybridization between the nanowire and the superconductor, we now look at the effect this has on the Zeeman term in the Hamiltonian. This term affects the energy dispersion of spinful states in a magnetic field. 
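The coherence-peak extraction via the BCS-Dynes expression can be illustrated with a small numpy-only sketch: synthetic d$I$/d$V$ data are generated from the Dynes density of states and fitted by brute-force least squares. The grids and the fitting strategy are illustrative assumptions; the actual fitting procedure behind Fig.~2(e) may differ.

```python
import numpy as np

def dynes_dos(E, Delta, Gamma):
    """BCS-Dynes density of states (arbitrary units):
    N(E) = |Re[(E - i*Gamma) / sqrt((E - i*Gamma)^2 - Delta^2)]|."""
    z = E - 1j * Gamma
    return np.abs(np.real(z / np.sqrt(z**2 - Delta**2)))

def fit_gap(bias, didv, Deltas, Gammas):
    """Brute-force least-squares fit over (Delta, Gamma) grids;
    returns the best-fitting pair."""
    best, args = np.inf, None
    for D in Deltas:
        for G in Gammas:
            r = np.sum((didv - dynes_dos(bias, D, G))**2)
            if r < best:
                best, args = r, (D, G)
    return args

# synthetic "measurement": Delta = 250 ueV, broadening Gamma = 10 ueV (in eV)
bias = np.linspace(-600e-6, 600e-6, 601)
data = dynes_dos(bias, 250e-6, 10e-6)
D_fit, G_fit = fit_gap(bias, data,
                       np.linspace(200e-6, 300e-6, 101),
                       np.linspace(5e-6, 20e-6, 16))
```

On real data one would weight out the strongly lifetime-broadened region and restrict the fit window around the coherence peaks, but the structure of the extraction is the same.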
We study the dispersion of the states in the nanowire by measuring d$I$/d$V$ in device A as a function of applied bias voltage and magnetic field, as shown in Fig.~3(a) and Fig.~3(b). We define the effective g-factor as $g_{\mathrm{eff}} = \frac{2}{\mu_B} \frac{\Delta E}{\Delta B}$, with $\frac{\Delta E}{\Delta B}$ the average slope of the observed peak in the differential conductance as it disperses in magnetic field. This effective g-factor is different from the pure spin g-factor $g_{\mathrm{spin}}$, as the dispersion used to estimate $g_{\mathrm{eff}}$ is generally not purely linear in magnetic field, and has additional contributions from the spin-orbit coupling, magnetic field induced changes in chemical potential, and orbital effects~\cite{VuikNPJ2016,AntipovarXiv2018,Winkler2017}. The effective g-factor is the parameter which determines the critical magnetic field required to drive the system through the topological phase transition~\cite{DasSarmaPRB2011}. We obtain the slope $\frac{\Delta E}{\Delta B}$ from a linear fit (shown as black dashed lines in Fig.~3(a),(b)) of the observed peak position. Fig.~3(c) shows the extracted $g_{\mathrm{eff}}$ for device A, with more positive back gate voltages leading to larger $g_{\mathrm{eff}}$ (visible as a steeper slope). A similar result has recently been reported in hybrid InAs-Al nanowires~\cite{VaitiekenasarXiv2017}.\\ \indent We use our numerical model to calculate the DOS in the nanowire as a function of applied magnetic field, shown in Fig.~3(d) and Fig.~3(e). From the calculated spectrum, we apply the same procedure used to fit the experimental data to extract $g_{\mathrm{eff}}$ (white dashed lines). The results for different values of $V_{\mathrm{Gate}}$ are given in Fig.~3(f) as black circles. The applied back gate voltage changes the hybridization of the states in the InSb ($\lvert g_{\mathrm{spin}}\rvert$ = 40~\cite{KammhuberNanoLett2016}) and the Al ($\lvert g_{\mathrm{spin}}\rvert$ = 2). 
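The definition $g_{\mathrm{eff}} = \frac{2}{\mu_B}\frac{\Delta E}{\Delta B}$ used above amounts to a linear fit of peak energy versus field. A minimal sketch of this extraction, with synthetic data at an assumed g-factor of 30 (illustrative values, not measured ones):

```python
import numpy as np

MU_B = 5.788e-5  # Bohr magneton in eV/T

def g_eff(B, E_peak):
    """Effective g-factor from a linear fit of peak energy vs magnetic field:
    g_eff = (2 / mu_B) * |dE/dB|."""
    slope = np.polyfit(B, E_peak, 1)[0]  # eV per tesla
    return 2.0 * abs(slope) / MU_B

# synthetic lowest-state dispersion with g = 30: E = Delta - (1/2) g mu_B B
B = np.linspace(0.05, 0.30, 11)
E = 250e-6 - 0.5 * 30 * MU_B * B
g = g_eff(B, E)
```

The caveat stated in the text applies directly to this sketch: the fitted slope absorbs spin-orbit, chemical-potential, and orbital contributions, so `g_eff` need not equal the spin g-factor in the Hamiltonian.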
As a more positive gate voltage increases the weight of the wave function in the InSb, we expect the renormalized g-factor to increase as the gate voltage is increased, consistent with the results of Fig.~3(c) and Fig.~3(f). \\ \indent To see how well $g_{\mathrm{eff}}$ describes the Zeeman term in the Hamiltonian, we turn our attention to the energy spectrum at $k$ = 0. At this point, the effect of spin-orbit coupling vanishes. If orbital effects are excluded, we can then define the pure spin g-factor as $g_{\mathrm{spin}} = \frac{2}{\mu_B} \frac{\Delta E(k=0)}{\Delta B}$. The resulting values for $g_{\mathrm{spin}}$ are shown as red squares in Fig.~3(f). By comparing the results for $g_{\mathrm{eff}}$ and $g_{\mathrm{spin}}$, we can conclude that when the lowest energy state has a momentum near $k$ = 0 (as is the case for $V_{\mathrm{Gate}} \textless$ -0.2\,V), the effect of spin-orbit coupling is negligible, and $g_{\mathrm{eff}}$ is a good proxy for the pure spin g-factor. However, when this is no longer the case, significant deviations can be observed, as is the case for $V_{\mathrm{Gate}} \geq$ -0.2\,V. As we expect the experimental gate voltage range of Fig.~3(c) to be comparable to values of $V_{\mathrm{Gate}}$\,\textless\,-0.2\,V, we conclude that the experimentally obtained $g_{\mathrm{eff}}$ is a reasonable approximation of $g_{\mathrm{spin}}$ in this parameter regime. However, we stress once more that in general, one needs to be careful when interpreting the $g_{\mathrm{eff}}$ extracted from experimental data as the g-factor entering the Hamiltonian in the Zeeman term.\\ \indent The increasing trend of $g_{\mathrm{eff}}$ does not change when the orbital effect of magnetic field is considered~\cite{Supplement}. However, there is a significant increase in the predicted values, in agreement with previous findings for InAs nanowires~\cite{Winkler2017}. 
These values are larger than the ones generally observed in our experiment, suggesting that the orbital effect is not a dominant mechanism in determining the effective g-factor in these devices. We note that the data from this device was taken solely in the hard gap regime, where one expects a strong confinement near the superconductor. This suppresses the orbital contribution of the magnetic field. Another possible explanation for the discrepancy between the results of the simulation and the experimental data is an overestimation of the density in the nanowire, as higher sub-bands have a stronger contribution from the orbital effect. Minimizing the orbital effect is desirable for Majorana physics, as the orbital contributions of the magnetic field are detrimental to the topological gap~\cite{NijholtPRB2016}. \section{Level repulsion due to spin-orbit coupling} \begin{figure*}[htbp] \includegraphics[width=17.8cm]{Figure_4.pdf} \centering \caption{\textbf{Spin-orbit coupling induced level repulsion.} (\textbf{a}-\textbf{c}) d$I$/d$V$ as a function of $V_{\mathrm{Bias}}$ for device B, showing the dispersion of subgap states in magnetic field, for $V_{\mathrm{SG}}$ = 1.98\,V, 2.325\,V, and 2.70\,V, respectively. The two lowest energy states $L_1$, $L_2$, and their particle-hole symmetric partners are indicated by the white dashed lines. (\textbf{d}) Calculated low energy spectrum of the finite nanowire system as a function of the Zeeman energy $E_\mathrm{Z}$ for $\alpha$ = 0\,eV\,\AA (dashed black lines) and $\alpha$ = 0.1\,eV\,\AA (solid red lines), showing the opening of an energy gap 2$\delta$ due to spin-orbit coupling. Inset: the energy gap 2$\delta$ as a function of the Rashba $\alpha$ parameter (solid line), and the estimate 2$\delta$ = $\alpha\pi/l$ (dashed line), with $l$ the nanowire length. All energy scales are in units of the superconducting gap $\Delta$. 
(\textbf{e}) Zoom-in of the anti-crossing in (\textbf{b}), showing the splitting $A$ and the coupling strength $\delta_{\mathrm{SO}}$. Green solid lines indicate a fit of the anti-crossing, with the dashed black lines showing the uncoupled energy levels. (\textbf{f}) Coupling $\delta_{\mathrm{SO}}$ (black circles) and splitting $A$ (red squares) as a function of $V_{\mathrm{SG}}$, showing opposite trends for these parameters.} \end{figure*} \begin{figure*}[htbp] \includegraphics[width=17.8cm]{Figure_5.pdf} \centering \caption{\textbf{Zero bias pinning due to strong level repulsion.} (\textbf{a}-\textbf{c}) d$I$/d$V$ as a function of $V_{\mathrm{Bias}}$ for device A, showing the dispersion of $L_1$ and $L_2$ as a function of magnetic field for $V_{\mathrm{BG}}$ = -0.3845\,V, -0.3835\,V, and -0.3825\,V, respectively. (\textbf{d}) Line traces at magnetic fields indicated by the colored bars in (b), showing the stable pinning of $L_1$ to zero bias voltage. (\textbf{e},\textbf{f}) d$I$/d$V$ measured as a function of $V_{\mathrm{BG}}$ at fixed magnetic field $B$ = 0.26\,T and 0.36\,T, respectively. Gate voltages from (a), (b), and (c) are indicated by orange square, purple triangle, and green circle, respectively.} \end{figure*} \indent The last term in the Hamiltonian that remains to be explored describes the Rashba spin-orbit coupling. The strength of the spin-orbit coupling is determined by the parameter $\alpha$, which depends on the material (and thus, on the superconductor-semiconductor coupling), and the electric field~\cite{NittaPRL1997,vanWeperenPRB2015,ScherublInAsSOI2016}. Therefore, we expect that this term will be affected by the gate potential as well. In finite systems, the spin-orbit interaction can couple states with different orbitals and spins~\cite{StanescuPRB2013}. 
These states are thus no longer orthogonal to each other, and the spin-orbit mediated overlap between them causes energy splitting, leading to level repulsion~\cite{LeeNatureNano2014,vanHeckPRB2017,OFarrellarXiv2018}. This level repulsion, which is generic in class D systems in the presence of superconductivity, magnetic field and spin-orbit coupling~\cite{PikulinNPJ2012}, can be extracted from the low energy nanowire spectrum as measured by tunneling spectroscopy.\\ \indent In Figs.~4(a)-(c), we show the evolution of the level repulsion between the two lowest energy sub-gap states (labeled $L_1$ and $L_2$, as indicated by the white dashed lines in panel c) in device B. For these measurements, the global back gate is grounded, with the electric field being induced by applying a voltage to the side gate~\cite{Supplement}.\\ \indent We parameterize the level repulsion by two quantities: the coupling strength $\delta_{\mathrm{SO}}$, and the splitting $A$, defined as the maximum deviation of $L_1$ from zero energy after the first zero crossing. This splitting has previously been linked to the overlap between two MZMs in a finite system~\cite{AlbrechtNature2016}. In Fig.~4(e), we zoom in on the anti-crossing feature of Fig.~4(b), showing the minimum energy difference between $L_1$ and $L_2$ (given by 2$\delta_{\mathrm{SO}}$) and the splitting $A$. We extract these parameters by a fit of the anti-crossing (solid green lines, with the uncoupled states shown by the dashed black lines)~\cite{Supplement}.\\ \indent Because we expect finite size effects to be relevant, we cannot use our previous theoretical model, as it is based on an infinitely long nanowire. Therefore, we modify the model to take into account the finite size of the nanowire system, and calculate the low energy spectrum for different values of the Rashba spin-orbit strength~\cite{Supplement}.
In Fig.~4(d), we plot the two lowest energy states in the nanowire as a function of the Zeeman energy ($E_\mathrm{Z} = \frac{1}{2}g\mu_B B$), in units of the superconducting gap $\Delta$. If $\alpha$ = 0 (no spin-orbit coupling, dashed black lines), there is no coupling between the states, and no level repulsion occurs. However, if spin-orbit coupling is included (e.g., $\alpha$ = 0.1\,eV\,\AA, solid red lines), the levels repel each other, with the magnitude of the anti-crossing given by 2$\delta$. The level repulsion strength scales with $\alpha$ (inset of Fig.~4(d)), providing a way to estimate $\alpha$ based on the low energy spectrum using 2$\delta \sim \alpha\pi/l$, where $l$ is the length of the nanowire.\\ \indent In Fig.~4(f), we plot $\delta_{\mathrm{SO}}$ (black circles) and $A$ (red squares) as a function of the applied side gate voltage. The two parameters follow opposite trends, with $A$ being maximal when $\delta_{\mathrm{SO}}$ is minimal. When $\delta_{\mathrm{SO}}$ is larger, the levels repel each other more, leading to $L_1$ being pushed closer to zero energy, reducing the splitting $A$. When $V_{\mathrm{SG}}$\,\textless \,2.0\,V, both parameters become smaller with decreasing $V_{\mathrm{SG}}$. At this point, other states at higher energies become relevant for the lowest energy dispersion (a situation demonstrated in Fig.~4(a)), and our method to extract these parameters breaks down. We expect this method to be reliable when the energetically lowest two states can be clearly separated from the rest.\\ \indent Because $\delta_{\mathrm{SO}}$ depends not only on $\alpha$, but also on the details of the confinement potential, as well as the coupling to the superconductor, a precise estimate goes beyond the current approximations in our model. 
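The anti-crossing fit described above is, at its core, the standard two-level model: two uncoupled levels $E_1(B)$, $E_2(B)$ repelled by a coupling $\delta$, whose eigenvalues are $\tfrac{1}{2}(E_1+E_2) \pm \sqrt{\tfrac{1}{4}(E_1-E_2)^2 + \delta^2}$. A sketch with hypothetical, illustrative parameters (slopes, crossing field, and $\delta$ are not values from the experiment):

```python
import numpy as np

def coupled_levels(E1, E2, delta):
    """Eigenvalues of the 2x2 model [[E1, delta], [delta, E2]] describing
    two levels repelled by a spin-orbit-induced coupling delta."""
    mean, diff = 0.5 * (E1 + E2), 0.5 * (E1 - E2)
    gap = np.sqrt(diff**2 + delta**2)
    return mean - gap, mean + gap

# uncoupled levels crossing at B = 0.3 T (illustrative parameters, energies in eV)
B = np.linspace(0.0, 0.6, 601)
E1 = 100e-6 * (B - 0.3)
E2 = -80e-6 * (B - 0.3)
lo, hi = coupled_levels(E1, E2, delta=15e-6)
min_split = np.min(hi - lo)  # minimum splitting equals 2*delta at the crossing
# a rough Rashba estimate then follows from 2*delta ~ alpha*pi/l (see text)
```

Fitting this form to the measured dispersion of $L_1$ and $L_2$ yields $\delta_{\mathrm{SO}}$ directly as half the minimum level spacing.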
That being said, based on the observed magnitude of $\delta_{\mathrm{SO}}$ and our simulations of the finite nanowire system, we can estimate the Rashba parameter $\alpha$ to be around 0.1\,eV\,\AA~in this gate voltage range. This value is comparable to the values reported in InSb nanowire based quantum dots~\cite{NadjPergePRL2012}, and smaller than the values measured in weak anti-localization experiments~\cite{vanWeperenPRB2015}. A large value of $\alpha$ is beneficial for Majorana physics, as it determines the maximum size of the topological gap~\cite{SauPRB2012}. \section{Zero Bias Peak in extended magnetic field range} \indent In the previous sections, we have described the effect of the gate induced electric field on the various terms in the Hamiltonian (\ref{eq:ham}). As this Hamiltonian is known to describe Majorana physics, we now turn our attention to possible signatures of MZMs in this system. In particular, when 2$\delta_{\mathrm{SO}}$ becomes comparable to the energy of $L_2$, we find that $L_1$ can become pinned close to zero bias over an extended range in magnetic field, as demonstrated in Fig.~5(b) (data from device A). Fig.~5(d) shows that the state stays pinned to zero energy over a range of over 0.2\,T, corresponding to a Zeeman energy of over 300\,$\mathrm{\mu}$eV, which is larger than the induced gap. The stability of the ZBP in terms of the ratio of Zeeman energy to induced gap is comparable to the most stable ZBPs reported in literature~\cite{DengScience2016,GulBallisticMajorana2017}. When we fix the magnetic field to $B$ = 0.26\,T and change the back gate voltage (Fig.~5(e)), it appears that there is a stable ZBP over a few mV as well.\\ \indent We might be tempted to conclude that this stability implies this is a Majorana zero mode. 
However, if we change either the gate voltage (Fig.~5(a), Fig.~5(c)) or the magnetic field (Fig.~5(f)) a little bit, we observe that this stability applies only to very particular combinations of gate voltage and magnetic field. One should keep in mind that in a finite system, MZMs are not expected to be stable with respect to local perturbations if the system size is comparable to the Majorana coherence length, which is likely the case in our devices. This further complicates the determination of the origin of the observed peaks. As we find no extended region of stability, we conclude that it is unlikely that this state pinned to zero energy is caused by a topological phase transition. Rather, this seems to be due to a fine-tuned coincidence in which the repulsion between two states combined with particle-hole symmetry leads to one of the states being pinned to $E$ = 0. We reiterate that simply having a stable zero energy state over an extended range in magnetic field is not sufficient to make claims about robust Majorana modes~\cite{KellsSmoothPotential2012,PradaNS2012,LiuTrivialABSMajorana2017}. Further experimental checks, such as stability of the ZBP in an extended region of the parameter space spanned by the relevant gate voltages~\cite{GulBallisticMajorana2017}, as well as magnetic field, are required in order to assign a possible Majorana origin. \section{Conclusion \& Outlook} \indent We have used InSb nanowires with epitaxial Al superconductor to investigate the effect of the gate voltage induced electric field on the superconductor-semiconductor coupling. This coupling is determined by the distribution of the wave function over the superconductor and semiconductor, and controls essential parameters of the Majorana Hamiltonian: the proximity induced superconducting gap, the effective g-factor, and spin-orbit coupling. 
Our observations show that the induced superconductivity, as parameterized by the hardness and size of the induced gap, is stronger when the electrons are confined to a region close to the superconductor. The stronger coupling leads to a lower effective g-factor. We also determine that the gate voltage dependence of the effective g-factor is dominated by the change in coupling to the superconductor, rather than by orbital effects of the magnetic field. Finally, we study the effect of level repulsion due to spin-orbit coupling. Appropriate tuning of the repulsion leads to level pinning to zero energy over extended parameter ranges, mimicking the behavior expected from MZMs. Our result deepens the understanding of a more realistic Majorana nanowire system. More importantly, it is relevant for the design and optimization of future advanced nanowire systems for topological quantum information applications.\\ \begin{acknowledgments} \indent We thank J.G. Kroll, A. Proutski, and S. Goswami for useful discussions. This work has been supported by the European Research Council, the Dutch Organization for Scientific Research, the Office of Naval Research, the Laboratory for Physical Sciences, and Microsoft Corporation Station Q. \end{acknowledgments} \section*{Author contributions} \indent M.W.A.d.M., J.D.S.B., D.X., and H.Z. fabricated the devices, performed the measurements, and analyzed the data. G.W.W., A.B., A.E.A., and R.M.L. performed the numerical simulations. N.v.L. and G.W. contributed to the device fabrication. R.L.M.o.h.V., S.G., and D.C. grew the InSb nanowires under the supervision of E.P.A.M.B.. J.A.L., M.P., and J.S.L. deposited the aluminum shell on the nanowires under the supervision of C.J.P.. L.P.K. and H.Z. supervised the project. M.W.A.d.M. and H.Z. wrote the manuscript with comments from all authors. M.W.A.d.M., J.D.S.B., and D.X. contributed equally to this work. Correspondence to H.Z. (H.Zhang-3@tudelft.nl). \input{Manuscript_v6.bbl} \end{document}
\section{} Solitons are fascinating mathematical objects which arise as solutions of certain integrable nonlinear evolution equations \cite{ablowitz}. Their remarkable collision behaviour and other dynamical properties have led them to find applications in several fields ranging from water waves \cite{whitham}, plasma physics \cite{infeld}, nonlinear optics \cite{kivshar} to Bose-Einstein condensates (BEC) \cite{kevrekidis}. Especially in optics, they result from a delicate balance between the natural dispersive spreading of the wave and an inherent nonlinearity, namely the Kerr nonlinearity. Solitons in single mode optical fibres were first predicted by Hasegawa and Tappert \cite{hase1} and experimentally confirmed subsequently by Mollenauer \textit{et al}. \cite{molle}. Following this, there has been a great surge of studies on optical solitons. One important aspect of these studies which has recently been receiving attention is multicomponent solitons (MSs) \cite{kevrekidis1}. Some examples of multicomponent solitons are partially coherent solitons \cite{akhmediev}, soliton complexes \cite{soto}, multi-mode pulses in optical fibers \cite{mecozzi}, symbiotic solitons \cite{abdullaev}, spinor solitons \cite{ieda}, etc. The workhorse for these extensive studies on MSs is the celebrated Manakov model \cite{man}, which is also known to be integrable \cite{radhakrishnan}. In optics, the Manakov system governs the propagation of a pair of orthogonally polarized high-intensity optical pulses in single mode optical fibers. The interplay between the dispersion and the self phase modulation (phase shift in a given mode depending upon its own intensity) as well as the cross phase modulation (phase shift due to the intensity of the co-propagating mode) effects results in optical solitons. Recent developments in photonic crystal fibers have led to significant progress in the experimental observation of such optical solitons \cite{philbin}.
The incoherently (intensity dependent nonlinearity) coupled nonlinear Schr\"odinger system describing the propagation of two orthogonally polarized high-intensity optical pulses in an elliptically birefringent fiber with high birefringence can be cast as \cite{agarwal1}, \begin{subequations} \begin{eqnarray} i(Q_{1\zeta}+\beta_{1x}Q_{1\tau})-\frac{\beta_{2}}{2}Q_{1\tau\tau}+\gamma(|Q_1|^2+B|Q_2|^2)Q_1=0,\\ i(Q_{2\zeta}+\beta_{1y}Q_{2\tau})-\frac{\beta_{2}}{2}Q_{2\tau\tau}+\gamma(|Q_2|^2+B|Q_1|^2)Q_2=0, \end{eqnarray} \label{1} \end{subequations} where $\zeta$ and $\tau$ are, respectively, the propagation direction and time, $Q_j$'s, $j=1,2,$ are complex slowly varying amplitudes, $\beta_{1x}$ and $\beta_{1y}$ are the inverse group velocities of the two modes, $\beta_{2}$ represents group velocity dispersion (GVD) and the effective Kerr nonlinearity coefficient $\gamma$ is defined as $\frac{8 n_2 \omega_0}{9 c A_{eff}}$, where $n_2$ is the nonlinear index coefficient, $\omega_0$ is the carrier frequency and $A_{eff}$ is the effective core area. Here $\gamma$ and $\beta_2$ are the same for both pulses as they are at the same wavelength. The cross phase modulation coupling parameter $B=\frac{2+2\sin^2\theta}{2+\cos^2\theta}$, where $\theta$ is the ellipticity angle which can vary between 0 and $\pi/2$.
For lossless fibres, after suitable transformations, the above equation (\ref{1}) can be expressed in the following dimensionless form using soliton units \cite{agarwal1}, \begin{subequations} \begin{eqnarray} i Q_{1z}-\frac{\mathrm{sgn}(\beta_2)}{2} Q_{1tt}+\mu^2(|Q_1|^2+B|Q_2|^2)Q_1=0,\\ i Q_{2z}-\frac{\mathrm{sgn}(\beta_2)}{2} Q_{2tt}+\mu^2(|Q_2|^2+B|Q_1|^2)Q_2=0, \end{eqnarray} \end{subequations} where the dimensionless length and retarded time are defined as $z=\frac{\zeta}{L_D}$, $t=\frac{T}{T_0}=\frac{\tau-\tilde\beta_1\zeta}{T_0}$ in which the dispersion length $L_D=\frac{T_0^2}{|\beta_2|}$, nonlinear length $L_{NL}=\frac{1}{\gamma P_0}$ and $\tilde\beta_1=\frac{1}{2}({\beta_{1x}+\beta_{1y}})$ with $T_0$ and $P_0$ being the initial width and peak power, $\mu^2=\frac{\gamma P_0 T_0^2}{|\beta_2|}$. In the anomalous (normal) dispersion regime, $\beta_{2}<0~(>0)$, where the high (low) frequency pulses travel faster than the low (high) frequency pulses, the above equation is referred to as the focusing (defocusing) coupled nonlinear Schr\"odinger (CNLS) equation and the fibre supports bright (dark and dark-bright) solitons. These are consequences of the polarization modulation instability \cite{baronio}. For the ellipticity angle $\theta=35^\circ$ (for which $B=1$) in the anomalous dispersion regime, with trivial transformations $z^{\prime}=\frac{z}{2}$, $q_j=\mu Q_j, j=1,2,$ and dropping the prime, we get the standard Manakov model in normalized form as \cite{man}, \begin{subequations} \begin{eqnarray} i q_{1z}+ q_{1tt}+2(|q_1|^2+|q_2|^2)q_1=0,\\ i q_{2z}+ q_{2tt}+2(|q_1|^2+|q_2|^2)q_2=0. \end{eqnarray} \label{manakov1} \end{subequations} The Manakov system (\ref{manakov1}) has been extensively studied (for details see \cite{man,hie,kannaprl,kannapre,dinda,kanapramana,epj} and references therein). The striking feature of this Manakov system is the fascinating energy sharing collision of bright solitons as a consequence of a change in the polarization vector during collision.
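The propagation governed by the Manakov system (\ref{manakov1}) is readily explored numerically with the standard split-step Fourier method. A minimal sketch, launching the one-soliton solution $q_j = c_j\,\mathrm{sech}(t)\,e^{iz}$ with $|c_1|^2+|c_2|^2=1$ (grid sizes and step lengths are illustrative choices, not taken from this work):

```python
import numpy as np

def manakov_step(q1, q2, dz, k2):
    """One symmetric (Strang) split step for
    i q_z + q_tt + 2(|q1|^2 + |q2|^2) q = 0."""
    lin = np.exp(-1j * k2 * dz / 2)          # half linear step in Fourier space
    q1 = np.fft.ifft(lin * np.fft.fft(q1))
    q2 = np.fft.ifft(lin * np.fft.fft(q2))
    phase = np.exp(2j * (np.abs(q1)**2 + np.abs(q2)**2) * dz)  # full nonlinear step
    q1, q2 = q1 * phase, q2 * phase
    q1 = np.fft.ifft(lin * np.fft.fft(q1))   # second half linear step
    q2 = np.fft.ifft(lin * np.fft.fft(q2))
    return q1, q2

# Manakov one-soliton initial condition: q_j = c_j sech(t), |c1|^2 + |c2|^2 = 1
t = np.linspace(-20, 20, 1024, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
c1, c2 = 0.6, 0.8
q1, q2 = c1 / np.cosh(t), c2 / np.cosh(t)
for _ in range(200):                         # propagate to z = 1
    q1, q2 = manakov_step(q1, q2, dz=0.005, k2=k**2)
```

The scheme conserves the total power exactly (each substep is unitary in $|q_j|$ or in Fourier amplitude), and the launched soliton retains its sech profile up to the $O(\mathrm{d}z^2)$ splitting error; multi-soliton collisions are simulated the same way from a superposition of displaced solitons.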
In such an energy sharing collision, the intensity of a soliton in a given component can be enhanced (suppressed) while the other soliton experiences the opposite effect. In the second component the solitons display the reverse scenario, thereby preserving the total intensity as well as the intensity of each individual component \cite{hie,kannaprl,kannapre,dinda,kanapramana,epj}. Following this, multisoliton interactions in various multicomponent CNLS-type systems, including the Manakov system, have been studied in Refs. \cite{kannapre,kanapramana,epj}. These Manakov bright solitons have been experimentally realized in $Al_x Ga_{1-x} As$ waveguides \cite{expt} and their energy sharing collision has also been demonstrated experimentally \cite{ana}. Recently, optical dark rogue waves have also been observed experimentally in the Manakov model with defocusing nonlinearity \cite{millot}. Thus Manakov solitons are suitable candidates for experimental realization and further technological applications. Moreover, Ref. \cite{josa} clearly demonstrated that the energy sharing property of Manakov solitons is preserved in the presence of fibre losses and is robust against strong environmental perturbations, which indicates that the present construction procedure will hold good even in the presence of losses. From a technological point of view, in the era of digital electronics, integrated circuits are usually built from the universal NOR and NAND gates, which are considered the basic modules of all other logic gates. However, present-day electronic computers have their own demerits, such as heat dissipation, limited processing speed, space requirements and limited transmission speed \cite{book,murali,sapin,coupler}. To overcome these difficulties, many researchers have proposed that light fields, especially solitons, can act as the carriers of information instead of the electrons employed in present-day computers \cite{ada_BZe,jaku,steig,rand,pramana,miller}.
As an important advancement in optical computing, the criteria required for practical optical logic (POL) are discussed in detail in Ref. \cite{miller}. In Ref. \cite{jaku}, the energy sharing collision of Manakov solitons was profitably used to perform nontrivial information transformation; in particular, sequences of solitons operating on other sequences of solitons effect logic operations. Later, Steiglitz \cite{steig} theoretically constructed various gates, such as the COPY, FANOUT, Z and Y converters, and combined them to realize the NAND gate. In Ref. \cite{steig}, separate sequential collisions of non-interacting data and operator solitons were considered through numerical simulation, where the operator soliton always remains unaffected. The main task of the present proposal is to use the pair-wise energy sharing collisions of bright Manakov solitons as such, without imposing any constraints on the colliding solitons, to realize the universal logic gate. In such a collision process, all solitons undergo energy sharing collisions and every soliton interacts with every other soliton involved in the process. To be specific, we employ just a four bright soliton collision process, in which each soliton undergoes three pair-wise energy sharing interactions. We have recently constructed single-input logic gates, such as the COPY gate, NOT gate and ONE gate, using the energy sharing collisions of three bright optical solitons associated with the three-soliton solution of the Manakov system \cite{one_gate}. However, it is not a straightforward task to extend this study to the realization of two-input gates, due to the cumbersome form of the $N$-soliton solution of the Manakov system for $N>3$; moreover, there is no a priori clue about the number of solitons required. Here, through a careful albeit tedious asymptotic analysis, we identify that the collision of four solitons is sufficient to construct the universal NOR gate in a more practical physical situation.
Indeed, the computation occurs through the pair-wise energy sharing collisions of solitons, where each soliton bears a finite state value before collision and state transformations occur at the time of the collisions between solitons. The novelty of the present work is to realize the universal NOR gate in a theoretical sense by utilizing the energy sharing collisions \cite{hie,kannaprl,kannapre} of only a minimal number of four solitons arising in a high birefringent telecommunication fibre. Other physical systems where the energy sharing collision can be observed, and which are hence suitable for computing, are multi-species BECs \cite{kevrekidis}, photorefractive materials \cite{photoref} and left-handed materials \cite{lhm}. Here, we consider the interaction of four bright solitons in the Manakov system, described by the four-soliton solution given in the supplemental material \cite{supplement}. As pointed out earlier, during an energy sharing collision the Manakov solitons experience a change in their states (polarizations) due to the enhancement or suppression of intensity, which is the desirable property for performing computation. Also, it is sufficient to examine these states well before (i.e., at the input) and well after (output) the collisions. The key idea is to define the asymptotic states of the $j^{th}$ soliton as $\rho^{j\pm}=\frac{q_1^j(z\rightarrow\pm\infty)}{q_2^j(z\rightarrow\pm\infty)}=\frac{A_1^{j\pm}}{A_2^{j\pm}},$ where $A_{1,2}^{j\pm}$ are the polarization components (1,2) of the $j^{th}$ soliton. Here the subscripts denote the components, $+(-)$ designates the state after (before) collision and the superscript $j$ represents the soliton number. Logic gates deal with binary logic, either $1$ or $0$. We define the state as $1(0)$ if the intensity $|\rho^{j\pm}|^{2}$ of the state vector exceeds (falls below) a particular reference value. This clearly shows that our construction procedure avoids critical biasing, a property for POL mentioned in Ref.
\cite{miller}. Additionally, as we are dealing with states defined by the ratio of intensities, the important differential signalling criterion required for POL \cite{miller} is also naturally satisfied. Also, we note that the Manakov solitons undergo pair-wise collisions. In our construction procedure, we consider four solitons which interact in a pair-wise manner, and we denote the input solitons as unprimed solitons $S_{j}$, $j=1,2,3,4$. We refer to the solitons emerging after the first, second and third pair-wise collisions as primed, double-primed and triple-primed solitons, respectively; in fact, $S_{j}^{'''}$ represent the output solitons. A schematic diagram of this collision process is shown in the supplemental material \cite{supplement}. The intensities of the four colliding solitons at the input and at the output are calculated analytically from a systematic but rather lengthy asymptotic analysis, presented in the supplemental material \cite{supplement}. Here, we assume that the soliton parameters satisfy $k_{jR}>0$ and $k_{1I}>k_{2I}>k_{3I}>k_{4I}$ ($k_{jR}=$Re$(k_{j})$, $k_{jI}=$Im$(k_{j})$), where the subscripts R and I denote the real and imaginary parts. For constructing the universal NOR gate, the inputs are fed into the solitons $S_1$ and $S_2$ before interaction, and the output is taken from the soliton $S_4$ after interaction. Thus the input and output solitons are treated separately, which prevents the input pulses from being reflected back into the output pulse, a criterion referred to as input/output isolation, necessary for POL. The explicit forms of the states of the solitons $S_1$ and $S_2$ before interaction are \begin{eqnarray} \rho^{1-}&=&\frac{\alpha_1^{(1)}}{\alpha_1^{(2)} },\\ \rho^{2-}&=&\frac{A_1^{2-}}{A_2^{2-}}=\frac{N_1^{2-}}{N_2^{2-}}=\frac{\alpha_1^{(1)} \kappa_{21}-\alpha_2^{(1)} \kappa_{11}}{\alpha_1^{(2)} \kappa_{21}-\alpha_2^{(2)} \kappa_{11}}.
\end{eqnarray} Similarly, the state of the soliton $S_4$ after collision is given by $\rho^{4+}=\frac{\alpha_4^{(1)}}{\alpha_4^{(2)} }$. In the above equations, $\alpha_{l}^{(m)}$, $l=1,2,4$, $m=1,2,$ represent the polarization parameters of the solitons $S_1$, $S_2$ and $S_4$, and they can take arbitrary complex values. The other quantities $\kappa_{11}$, $\kappa_{21}$, etc., are defined in terms of the soliton parameters $\alpha$'s and $k$'s (see the supplemental material \cite{supplement}). Though $S_3$ does not explicitly appear in the above expressions, it indirectly influences the energy sharing collisions. The ratios of intensities of the solitons $S_1$ and $S_2$ before interaction, as well as of the soliton $S_4$ after interaction, are obtained by taking the absolute squares of these complex states, namely $|\rho^{1-}|^2$, $|\rho^{2-}|^2$ and $|\rho^{4+}|^2$, respectively. Hence, one can measure the ratio of the intensities of the input/output solitons analytically from the asymptotic analysis. As mentioned before, if the ratio of intensities of a given soliton $S_j$ before interaction is greater (less) than a specific threshold value, say 1, we denote the input state of $S_j$ as the $``1"$ ($``0"$) state. Thus the $1$($0$) state of soliton $S_j$ corresponds to $|\rho^{j-}|^2>1~(<1)$. To achieve the output corresponding to the NOR gate from the asymptotic analysis, we deduce the following condition on the soliton parameters: \begin{eqnarray} \hspace{-0.3cm}\alpha_4^{(2)}=\left(\frac{\alpha_1^{(1)}}{\alpha_1^{(2)}}+\frac{\alpha_2^{(1)}\left((k_1-k_2)|\alpha_1^{(1)}|^2-(k_2+k_1^*)| \alpha_1^{(2)}|^2\right)+(k_1+k_1^*)\alpha_1^{(1)}\alpha_1^{(2)*}\alpha_2^{(2)}}{(k_1+k_1^*)\alpha_1^{(1)*}\alpha_2^{(1)}\alpha_1^{(2)}-\alpha_2^{(2)}\left((k_2-k_1) |\alpha_1^{(2)}|^2+(k_2+k_1^*)|\alpha_1^{(1)}|^2\right)}\right) \alpha_4^{(1)}.
\label{con2} \end{eqnarray} The above relation (\ref{con2}) is obtained by imposing the condition $\rho^{4+}=\left(\rho^{1-}+\rho^{2-}\right)^{-1}$ on the state vectors of the input solitons ($S_1$, $S_2$) and the output soliton ($S_4$), so that the Boolean algebra of the NOR gate is satisfied. Assigning $(0,0)$ input states to ($S_1$, $S_2$) by choosing $\alpha_1^{(1)}=2, \alpha_1^{(2)}=6, \alpha_2^{(1)}=2, \alpha_2^{(2)}=5,$ we achieve the $``1"$ output state from soliton $S_4$ for the parameter choices $k_1=0.5+i, k_2=1+0.5 i, k_3=0.9-0.5 i, k_4=1.3-i, \alpha_3^{(1)}=3, \alpha_3^{(2)}=1, \alpha_4^{(1)}=0.001-0.002 i$, along with the condition (\ref{con2}), as depicted in Fig.~\ref{nor_00}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1\linewidth]{fig1.png} \caption{NOR gate: The states of the input solitons ($S_1$ and $S_2$) are $``0"$ and $``0"$ and the state of the output soliton ($S_4$) is $``1"$. The first column (a) displays the mesh plots of the intensity profiles while the middle and last columns (b) and (c) depict the two dimensional plots of intensities at the input $(z=-15)$ and at the output $(z=15)$, respectively. }\label{nor_00} \end{center} \begin{center} \includegraphics[width=1 \linewidth]{fig2.png} \caption{NOR gate: (a) Input states $``0"$ and $``1"$, Output state $``0"$; (b) Input states $``1"$ and $``0"$, Output state $``0"$; (c) Input states $``1"$ and $``1"$, Output state $``0"$. }\label{nor_01_10_11} \end{center} \end{figure} The same parameter choice is used for the remaining three cases of the NOR gate, with only the input parameters $\alpha_{i}^{(j)}$, $i, j=1,2,$ varied. The quantities plotted in all the figures in our work are in dimensionless form. Fig.
2(a) demonstrates that choosing the ($0, 1$) input states for solitons ($S_1$, $S_2)$, with $\alpha_1^{(1)}=2, \alpha_1^{(2)}=6, \alpha_2^{(1)}=5, \alpha_2^{(2)}=2$, yields $``0"$ as the output state of $S_4$ after interaction. Next, the $(1, 0)$ input states are assigned to solitons ($S_1$, $S_2$) and the $``0"$ output state is observed at the output of soliton $S_4$ for $\alpha_1^{(1)}=6, \alpha_1^{(2)}=2, \alpha_2^{(1)}=2, \alpha_2^{(2)}=5$, as shown in Fig. 2(b). Finally, when the $(1, 1)$ input states are given to solitons ($S_1$, $S_2$), the $``0"$ output state results for soliton $S_4$ for the parameter choices $\alpha_1^{(1)}=6, \alpha_1^{(2)}=2, \alpha_2^{(1)}=5, \alpha_2^{(2)}=2$, as shown in Fig. 2(c). The two dimensional plots of intensities corresponding to Fig. 2 are given in the supplemental material for a better understanding. The truth table and the corresponding intensity table (calculated values of the ratios of intensities of the solitons) are given in Tables I and II. It is clear that the input states ($``0"$ and $``1"$) are attained by properly adjusting the $\alpha_i^{(l)}$, $i,l=1,2,$ parameters, as discussed above. All the other soliton parameters can take arbitrary values, except $\alpha_4^{(2)}$, which is fixed by the condition (\ref{con2}) so that the desired output fulfills the truth table. In a similar fashion, the OR gate can also be constructed from the condition $\rho^{4+}=\rho^{1-}+\rho^{2-}$. The explicit form of the condition and the collision scenario leading to the OR gate are provided in the supplemental material \cite{supplement}.
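The condition (\ref{con2}) can be checked numerically. The following sketch (variable names are ours; the parameter values are those of the $(0,0)$ case above, and $k_3$, $\alpha_3$ drop out of these asymptotic states) computes $\rho^{1-}$, $\rho^{2-}$ and the fixed $\alpha_4^{(2)}$, and verifies both the NOR relation $\rho^{4+}=(\rho^{1-}+\rho^{2-})^{-1}$ and the threshold logic $|\rho|^2\gtrless 1$:

```python
import numpy as np

# (0,0) input case from the text
k1, k2 = 0.5 + 1.0j, 1.0 + 0.5j
a1 = np.array([2.0, 6.0], dtype=complex)  # (alpha_1^(1), alpha_1^(2))
a2 = np.array([2.0, 5.0], dtype=complex)  # (alpha_2^(1), alpha_2^(2))
a4_1 = 0.001 - 0.002j                     # alpha_4^(1)

# kappa_{il} = sum_n alpha_i^(n) alpha_l^(n)* / (k_i + k_l*)
kappa11 = np.vdot(a1, a1) / (k1 + np.conj(k1))
kappa21 = np.vdot(a1, a2) / (k2 + np.conj(k1))

rho1 = a1[0] / a1[1]                                   # rho^{1-}
rho2 = ((a1[0] * kappa21 - a2[0] * kappa11)
        / (a1[1] * kappa21 - a2[1] * kappa11))         # rho^{2-}

# alpha_4^(2) fixed by condition (con2); the bracketed fraction equals rho^{2-}
num = (a2[0] * ((k1 - k2) * abs(a1[0])**2 - (k2 + np.conj(k1)) * abs(a1[1])**2)
       + (k1 + np.conj(k1)) * a1[0] * np.conj(a1[1]) * a2[1])
den = ((k1 + np.conj(k1)) * np.conj(a1[0]) * a2[0] * a1[1]
       - a2[1] * ((k2 - k1) * abs(a1[1])**2 + (k2 + np.conj(k1)) * abs(a1[0])**2))
a4_2 = (rho1 + num / den) * a4_1

rho4 = a4_1 / a4_2                                     # rho^{4+}
print(abs(rho1)**2, abs(rho2)**2, abs(rho4)**2)        # ~0.11, ~0.22, >1
```

With these numbers the identity $\rho^{4+}=(\rho^{1-}+\rho^{2-})^{-1}$ holds to machine precision and $|\rho^{4+}|^2>1$, i.e. the $(0,0)$ input indeed produces the $``1"$ output; substituting the other three input choices of $\alpha_{1,2}^{(m)}$ flips the inequalities as in the truth table.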
\begin{table}[!ht] \begin{minipage}{.5\linewidth} \caption{Truth table of NOR gate} \begin{tabular} {|c|c|c|} \hline Input 1 & Input 2 & Output \\ ($S_1$) & ($S_2$) & ($S_{4}'''$) \\\hline 0& 0 & 1 \\ \hline 0& 1& 0 \\ \hline 1& 0 & 0\\ \hline 1& 1 & 0 \\ \hline \end{tabular} \end{minipage}% \begin{minipage}{.5\linewidth} \centering \caption{Intensity table of NOR gate} \begin{tabular}{|c|c|c|} \hline Input 1 \;\; & Input 2 \;\;& Output \;\;\\ $|\rho^{1-}|^2$ &$|\rho^{2-}|^2$ &$|\rho^{4+}|^2$\\ \hline 0.1& 0.2 & 2 \\ \hline 0.1& 47& 0.02 \\ \hline 9& 0.02 & 0.1 \\ \hline 9& 5 & 0.03 \\ \hline \end{tabular} \end{minipage} \end{table} Another advantage of this theoretical construction procedure is that the universal two-input NOR gate can also be constructed by cascading the output of a one-input gate, a desirable property for POL. To elucidate this, we point out that in a four-soliton collision process the first three solitons can be used to realize the one-input COPY gate \cite{one_gate}, where the input at $S_2$ is copied at the output of $S_3$ (say $S_{3}''$). Now, this $S_{3}''$ and the input at $S_1$ can act as the two inputs of the NOR gate, while the output is taken from the soliton $S_4$ (say $S_{4}'''$) as usual. This demonstrates the possibility of cascadability. Similarly, another important criterion of POL, namely fan-out, can also be achieved within our present theoretical construction. Particularly, in a four-soliton collision process, if the input state is fed into the soliton $S_1$ before collision, then it can be switched to the outputs of any two other solitons after collision, say solitons ($S_2$ and $S_4$), by appropriately imposing conditions on the soliton parameters. This realizes fan-out, in which the state of one soliton ($S_1$) before collision drives the inputs of at least two solitons ($S_2$ and $S_4$) after collision. The details will be presented elsewhere.
Here the optical pulses propagate in the form of solitons, which are by nature well-localized structures that can travel over long distances without alteration in shape. This special property of solitons can restore the logic signal throughout its propagation in an optical fibre. Hence, our theoretical work satisfies all the criteria necessary for realizing POL. This clearly demonstrates the strength and versatility of our theoretical construction of a universal logic gate, which will have a significant impact on realizing optical logic. In summary, we have theoretically demonstrated the construction of the universal NOR gate, as well as the OR gate, using the energy sharing collisions of four bright solitons in a high birefringent fiber described by the celebrated Manakov system. Here the computing is performed by analyzing the asymptotic state variations of the colliding solitons, which follow from a detailed asymptotic analysis of the explicit four-soliton solution of the Manakov system. We have demonstrated systematically that by altering the polarization parameters of the input solitons $S_1$ and $S_2$, one can realize the desired output from the soliton $S_4$ without adjusting any other parameters. This implementation of the universal NOR gate is quite interesting and provides a gateway for experimentalists to realize optical logic gates, including the universal gate. We remark that our theoretical construction of logic gates satisfies all the criteria required for POL recently discussed in Ref. \cite{miller}. As the computation is performed in a conservative system, it has its own advantages, such as re-use of the output solitons. Another important point that can be inferred from the above construction is that the collision process in the Manakov system can be dynamically reconfigured to realize the NOR gate.
This successful theoretical construction of a two-input optical logic gate by exploiting the energy sharing collisions of the Manakov solitons suggests the possibility of employing the very same idea to implement quantum logic gates such as the X, Y, Z and Hadamard gates. Our work can also be extended to realize multi-state logic by considering multicomponent nonlinear Schr\"odinger systems with more than two components. Work is in progress in these directions. \begin{acknowledgments} M. V. acknowledges the support of Science and Engineering Research Board, Department of Science and Technology (DST-SERB), Government of India, Start Up Research Grant (Young Scientist: File No.YSS/2015/000629). The work of T. K. is supported by Science and Engineering Research Board, Department of Science and Technology (DST-SERB), Government of India, in the form of a major research project (File No. EMR/2015/001408). The work of M. L. is supported by a DST-SERB Distinguished Fellowship Programme and a DST/SERB project (Diary No.SERB/F/4307/2016-17). The authors thank the referees for their critical comments and for providing a few important references which helped them to present the material in a proper perspective.
\end{acknowledgments} \section*{Four soliton solution of the Manakov system} Using Hirota's bilinearization method, we obtain the four bright soliton solution of the Manakov system in Gram determinant form as below \cite{kannapre,epj}: \begin{eqnarray} q_s=\frac{g^{(s)}}{f}, \quad s=1,2,\nonumber \end{eqnarray} where \begin{eqnarray} g^{(s)}= \left| \begin{array}{ccc} A & I & \phi\\ -I & B & {\bf 0}^T\\ {\bf 0} & C_s & 0 \end{array} \right|, \quad f= \left| \begin{array}{cc} A & I\\ -I & B \end{array} \right|.\nonumber \end{eqnarray} In the above expression, $I$ is the $(4\times 4)$ identity matrix, \begin{eqnarray} C_s= -\left(\alpha_1^{(s)}, \alpha_2^{(s)}, \alpha_{3}^{(s)}, \alpha_{4}^{(s)}\right),\quad {\bf{0}}=(0, 0, 0, 0),\;\;\; \psi_j=\left( \begin{array}{c} \alpha_j^{(1)}\\ \alpha_j^{(2)}\\ \end{array} \right), \quad \phi=\left( \begin{array}{c} e^{\eta_1}\\ e^{\eta_2}\\ e^{\eta_3}\\ e^{\eta_{4}} \end{array} \right).\nonumber \end{eqnarray} The elements of the matrices $A$ and $B$ are given by \begin{eqnarray} A_{ij}= \frac{e^{\eta_i+\eta_j^*}}{(k_i+k_j^*)}, \quad B_{ij}=\kappa_{ji}=\frac{\psi_i^{\dagger} \psi_j}{(k_i^*+k_j)}, \quad i,j=1, 2, \ldots, 4, \label{omg}\nonumber \end{eqnarray} where $\dagger$ represents the transpose conjugate and $k_j=k_{jR}+i k_{jI}$, $j=1,\ldots,4$, in which the real parts $k_{jR}$ determine the amplitudes of the solitons and the imaginary parts $k_{jI}$ determine their velocities. One can refer to \cite{epj} for a detailed derivation of the above Gram determinant form of the four soliton solution. \newpage \section*{Schematic of four soliton collision process} The schematic pair-wise four soliton collision process considered in our work is shown below.
\begin{figure}[ht] \includegraphics[width=0.3\linewidth]{s_fig1.png} \caption{Collision picture of solitons $S_1$, $S_2$, $S_3$ and $S_4$.}\label{collision} \end{figure} \section*{Asymptotic analysis of four soliton solution of the Manakov system} Considering the above four soliton solution, without loss of generality, we assume that the quantities $k_{1R}$, $k_{2R}$, $k_{3R}$ and $k_{4R}$ are positive and $k_{1I}>k_{2I}>k_{3I}>k_{4I}$. Under this condition, the asymptotic behaviour of the variables $\eta_{iR}$, $i=1,2,3,4,$ for the four solitons ($S_1$, $S_2$, $S_3$ and $S_4$) is given below. (i) $\eta_{1R} \approx 0$, $\eta_{2R} \rightarrow \pm \infty$, $\eta_{3R} \rightarrow \pm \infty$, $\eta_{4R} \rightarrow \pm \infty$, as $z \rightarrow \pm \infty$, (ii) $\eta_{2R} \approx 0$, $\eta_{1R} \rightarrow \mp \infty$, $\eta_{3R} \rightarrow \pm \infty$, $\eta_{4R} \rightarrow \pm \infty$, as $z \rightarrow \pm \infty$, (iii) $\eta_{3R} \approx 0$, $\eta_{1R} \rightarrow \mp \infty$, $\eta_{2R} \rightarrow \mp \infty$, $\eta_{4R} \rightarrow \pm \infty$, as $z \rightarrow \pm \infty$, (iv) $\eta_{4R} \approx 0$, $\eta_{1R} \rightarrow \mp \infty$, $\eta_{2R} \rightarrow \mp \infty$, $\eta_{3R} \rightarrow \mp \infty$, as $z \rightarrow \pm \infty$.
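As a numerical sanity check on the Gram determinant form above (a sketch: the phase convention $\eta_j=k_j t+i k_j^2 z$, the sample point and the fourth polarization vector are our assumptions, not values from the text), note that both $A$ and $B$ are Hermitian, so the denominator $f$ must come out real (and here positive) at any $(z,t)$:

```python
import numpy as np

k = np.array([0.5 + 1.0j, 1.0 + 0.5j, 0.9 - 0.5j, 1.3 - 1.0j])
alpha = np.array([[2.0, 6.0],
                  [2.0, 5.0],
                  [3.0, 1.0],
                  [0.001 - 0.002j, 0.004 + 0.001j]])  # rows psi_j; last row illustrative

z, t = 0.3, -0.7
eta = k * t + 1j * k ** 2 * z  # assumed phase convention

# A_ij = e^{eta_i + eta_j*}/(k_i + k_j*),  B_ij = psi_i^dag psi_j/(k_i* + k_j)
A = np.exp(eta[:, None] + eta.conj()[None, :]) / (k[:, None] + k.conj()[None, :])
B = (alpha.conj() @ alpha.T) / (k.conj()[:, None] + k[None, :])
I4 = np.eye(4)

# Denominator tau function f = det([[A, I], [-I, B]])
f = np.linalg.det(np.block([[A, I4], [-I4, B]]))
print(f.real, abs(f.imag))  # imaginary part vanishes to round-off
```

Since $f=\det(A)\det(B+A^{-1})$ with $A$ positive definite and $B$ positive semidefinite (both are Gram matrices), $f$ never vanishes or picks up a phase, so $|q_s|^2=|g^{(s)}|^2/f^2$ is well defined everywhere.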
We have the following asymptotic forms of the above four-soliton solution.\\ \noindent\underline{(i) Before Collision (limit $ z \rightarrow -\infty$)}:\\ \noindent(a) \underline{\it Soliton 1} ($\eta_{1R} \approx 0$, $\eta_{2R} \rightarrow -\infty$, $\eta_{3R} \rightarrow -\infty$, $\eta_{4R} \rightarrow -\infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{1-} \\ A_2^{1-} \end{array} \right)k_{1R} {\mbox{sech}\,\left(\eta_{1R}+\frac{R_1}{2}\right)}e^{i\eta_{1I}} ,\nonumber\\ \left( \begin{array}{c} A_1^{1-} \\ A_2^{1-} \end{array} \right) &=& \left( \begin{array}{c} \alpha_1^{(1)}\\ \alpha_1^{(2)} \end{array} \right) \frac{e^{\frac{-R_1}{2}}}{(k_1+k_1^*)}.\nonumber \end{eqnarray} \noindent(b) \underline{\it Soliton 2} ($\eta_{2R} \approx 0$, $\eta_{1R} \rightarrow \infty$, $\eta_{3R} \rightarrow -\infty$, $\eta_{4R} \rightarrow -\infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx& \left( \begin{array}{c} A_1^{2-} \\ A_2^{2-} \end{array} \right)k_{2R} {\mbox{sech}\,\left(\eta_{2R}+\frac{R_4-R_1}{2}\right)}e^{i\eta_{2I}} ,\nonumber\\ \left( \begin{array}{c} A_1^{2-}\\ A_2^{2-} \end{array} \right) &=& \left( \begin{array}{c} e^{\delta_{11}}\\ e^{\delta_{12}} \end{array} \right)\frac{e^{-\frac{(R_1+R_4)}{2}}}{(k_2+k_2^*)}.\nonumber \end{eqnarray} \noindent (c) \underline{\it Soliton 3} ($\eta_{3R} \approx 0$, $\eta_{1R} \rightarrow \infty$, $\eta_{2R} \rightarrow \infty$, $\eta_{4R} \rightarrow -\infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx& \left( \begin{array}{c} A_1^{3-} \\ A_2^{3-} \end{array} \right)k_{3R} {\mbox{sech}\,\left(\eta_{3R}+\frac{R_7-R_4}{2}\right)}e^{i\eta_{3I}} ,\nonumber\\ \left( \begin{array}{c} A_1^{3-}\\ A_2^{3-} \end{array} \right) &=& \left( \begin{array}{c} e^{\tau_{11}}\\ e^{\tau_{12}} \end{array} \right)\frac{e^{-\frac{(R_4+R_7)}{2}}}{(k_3+k_3^*)}.\nonumber \end{eqnarray} \noindent(d)
\underline{\it Soliton 4} ($\eta_{4R} \approx 0$, $\eta_{1R} \rightarrow \infty$, $\eta_{2R} \rightarrow \infty$, $\eta_{3R} \rightarrow \infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{4-} \\ A_2^{4-} \end{array} \right)k_{4R} {\mbox{sech}\,\left(\eta_{4R}+\phi^{4-}\right)}e^{i\eta_{4I}},\nonumber\\ \left( \begin{array}{c} A_1^{4-} \\ A_2^{4-} \end{array} \right) &=& -\sqrt{\frac{n^{4-}}{2 k_{4R}(n^{4-})^{*}}}\left( \begin{array}{c} N_1^{4-}\\ N_2^{4-} \end{array} \right) \frac{1}{(D_1^{4-}D_2^{4-})^{1/2}},\nonumber \end{eqnarray} In the above equations, the various other quantities are defined below: \begin{eqnarray} e^{\delta_{1j}}=\frac{(k_1-k_2)(\alpha_1^{(j)}\kappa_{21}-\alpha_2^{(j)}\kappa_{11} )}{(k_1+k_1^*)(k_1^*+k_2)},\;\;j=1,2,\nonumber \end{eqnarray} \begin{eqnarray} &&e^{\tau_{1j}}=\frac{(k_2-k_1)(k_3-k_1)(k_3-k_2)(k_2^*-k_1^*)} {(k_1^*+k_1)(k_1^*+k_2)(k_1^*+k_3)(k_2^*+k_1)(k_2^*+k_2)(k_2^*+k_3)}\nonumber\\ &&\times \left[\alpha_1^{(j)}(\kappa_{21}\kappa_{32}-\kappa_{22}\kappa_{31}) +\alpha_2^{(j)}(\kappa_{12}\kappa_{31}-\kappa_{32}\kappa_{11}) +\alpha_3^{(j)}(\kappa_{11}\kappa_{22}-\kappa_{12}\kappa_{21}) \right],\nonumber \end{eqnarray} \begin{eqnarray} e^{R_1}=\frac{\kappa_{11}}{k_1+k_1^*}, \quad e^{R_4}=\frac{(k_2-k_1)(k_2^*-k_1^*)} {(k_1^*+k_1)(k_1^*+k_2)(k_1+k_2^*)(k_2^*+k_2)} \left[\kappa_{11}\kappa_{22}-\kappa_{12}\kappa_{21}\right],\nonumber \end{eqnarray} \begin{eqnarray} &&e^{R_7}= \frac{|k_1-k_2|^2|k_2-k_3|^2|k_3-k_1|^2} {(k_1+k_1^*)(k_2+k_2^*)(k_3+k_3^*)|k_1+k_2^*|^2|k_2+k_3^*|^2|k_3+k_1^*|^2} \nonumber\\ &&\times\left[(\kappa_{11}\kappa_{22}\kappa_{33}- \kappa_{11}\kappa_{23}\kappa_{32}) +(\kappa_{12}\kappa_{23}\kappa_{31}- \kappa_{12}\kappa_{21}\kappa_{33})+(\kappa_{21}\kappa_{13}\kappa_{32}- \kappa_{22}\kappa_{13}\kappa_{31})\right],\nonumber \end{eqnarray} \begin{eqnarray} &&n^{4-}=(k_4-k_1)(k_4-k_2)(k_4-k_3)(k_1+k_4^*)(k_2+k_4^*)(k_3+k_4^*),\nonumber\\ 
&&N_j^{4-}=\left| \begin{array}{cccc} \alpha_1^{(j)}&\alpha_2^{(j)}&\alpha_3^{(j)}&\alpha_4^{(j)}\\ \kappa_{11} &\kappa_{21} &\kappa_{31}&\kappa_{41}\\ \kappa_{12} &\kappa_{22}&\kappa_{32}&\kappa_{42}\\ \kappa_{13} &\kappa_{23}&\kappa_{33}&\kappa_{43}\\ \end{array} \right|,\quad j=1, 2,\nonumber\\ &&D_1^{4-}=\left| \begin{array}{ccc} \kappa_{11} &\kappa_{12} &\kappa_{13}\\ \kappa_{21} &\kappa_{22}&\kappa_{23}\\ \kappa_{31} &\kappa_{32}&\kappa_{33}\\ \end{array} \right|,\quad D_2^{4-}=\left| \begin{array}{cccc} \kappa_{11} &\kappa_{12} &\kappa_{13}&\kappa_{14}\\ \kappa_{21} &\kappa_{22}&\kappa_{23}&\kappa_{24}\\ \kappa_{31} &\kappa_{32}&\kappa_{33}&\kappa_{34}\\ \kappa_{41} &\kappa_{42}&\kappa_{43}&\kappa_{44}\\ \end{array} \right|,\nonumber\\ &&\phi^{4-} = \ln\Bigg[\frac{|k_4-k_1||k_2-k_4||k_3-k_4|}{|k_1+k_4^*||k_2+k_4^*||k_3+k_4^*|\sqrt{2 k_{4R}}}\Bigg]+\frac{1}{2}\ln\Bigg[\frac{D_2^{4-}}{D_1^{4-}}\Bigg],\nonumber \end{eqnarray} and \begin{eqnarray} \kappa_{il}= \frac{\sum_{n=1}^2\alpha_i^{(n)}\alpha_l^{(n)*}} {\left(k_i+k_l^*\right)},\;i,l=1,2,3,4.\nonumber \end{eqnarray} \noindent\underline{(ii) After Collision (limit $ z \rightarrow \infty$)}:\\ After interaction (as $ z \rightarrow \infty$) the forms of the solitons are given below.\\ \noindent(a) \underline{\it Soliton 1} ($\eta_{1R} \approx 0$, $\eta_{2R} \rightarrow \infty$, $\eta_{3R} \rightarrow \infty$, $\eta_{4R} \rightarrow \infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{1+} \\ A_2^{1+} \end{array} \right)k_{1R} {\mbox{sech}\,\left(\eta_{1R}+\phi^{1+}\right)}e^{i\eta_{1I}},\nonumber \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{c} A_1^{1+} \\ A_2^{1+} \end{array} \right) &=& -\sqrt{\frac{n^{1+}}{2 k_{1R}(n^{1+})^{*}}}\left( \begin{array}{c} N_1^{1+}\\ N_2^{1+} \end{array} \right) \frac{1}{(D_1^{1+}D_2^{4-})^{1/2}},\nonumber\\ \phi^{1+} &=&
\ln\Bigg[\frac{|k_1-k_2||k_4-k_1||k_3-k_1|}{|k_1+k_2^*||k_1+k_3^*||k_1+k_4^*|\sqrt{2 k_{1R}}}\Bigg]+\frac{1}{2}\ln\Bigg[\frac{D_2^{4-}}{D_1^{1+}}\Bigg].\nonumber \end{eqnarray} In the above equations, \begin{eqnarray} n^{1+}&=&(k_4-k_1)(k_3-k_1)(k_2-k_1)(k_2+k_1^*)(k_3+k_1^*)(k_4+k_1^*),\nonumber\\ N_j^{1+}&=&\left| \begin{array}{cccc} \alpha_1^{(j)}&\alpha_2^{(j)}&\alpha_3^{(j)}&\alpha_4^{(j)}\\ \kappa_{12} &\kappa_{22}&\kappa_{32}&\kappa_{42}\\ \kappa_{13} &\kappa_{23}&\kappa_{33}&\kappa_{43}\\ \kappa_{14} &\kappa_{24} &\kappa_{34}&\kappa_{44}\\ \end{array} \right|, j=1, 2, \quad D_1^{1+}=\left| \begin{array}{ccc} \kappa_{22} &\kappa_{23} &\kappa_{24}\\ \kappa_{32} &\kappa_{33}&\kappa_{34}\\ \kappa_{42} &\kappa_{43}&\kappa_{44}\\ \end{array} \right|.\nonumber \end{eqnarray} \noindent(b) \underline{\it Soliton 2} ($\eta_{2R} \approx 0$, $\eta_{1R} \rightarrow -\infty$, $\eta_{3R} \rightarrow \infty$, $\eta_{4R} \rightarrow \infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{2+} \\ A_2^{2+} \end{array} \right)k_{2R} {\mbox{sech}\,\left(\eta_{2R}+\phi^{2+}\right)}e^{i\eta_{2I}},\nonumber \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{c} A_1^{2+} \\ A_2^{2+} \end{array} \right) &=& \frac{\sqrt{\kappa_{22}}}{\sqrt{\mu \bigg(|\alpha_2^{(1)}|^2+|\alpha_2^{(2)}|^2}\bigg)}\left( \begin{array}{c} \frac{N_1^{2+}}{\alpha_2^{(1)}}\\ \frac{N_2^{2+}}{\alpha_2^{(2)}} \end{array} \right)\left( \frac{ f_1}{ f_1^*} \right)\left( \begin{array}{c} \alpha_2^{(1)}\\ \alpha_2^{(2)} \end{array} \right) \frac{1}{(D_2^{1+}D_2^{2+})^{1/2}},\nonumber\\ \phi^{2+} &=& \ln\Bigg[\frac{|k_2-k_3||k_2-k_4|}{|k_2+k_3^*||k_4+k_2^*|\sqrt{2 k_{2R}}}\Bigg]+\frac{1}{2}\ln\Bigg[\frac{D_2^{2+}}{D_2^{1+}}\Bigg].\nonumber \end{eqnarray} The quantities $f_1$, $N_1^{2+},$ $N_2^{2+},$ $D_2^{1+}$, and $D_2^{2+}$ are given below: \begin{eqnarray} f_1&=&\sqrt{(k_4+k_2^*)(k_2^*+k_3)(k_3-k_2)(k_4-k_2)},\nonumber\\ N_1^{2+}&=&\left|
\begin{array}{ccc} \alpha_2^{(1)} &\alpha_3^{(1)} &\alpha_4^{(1)}\\ \kappa_{23} &\kappa_{33}&\kappa_{43}\\ \kappa_{24} &\kappa_{34}&\kappa_{44}\\ \end{array} \right|,\quad N_2^{2+}=\left| \begin{array}{ccc} \alpha_2^{(2)} &\alpha_3^{(2)} &\alpha_4^{(2)}\\ \kappa_{23} &\kappa_{33}&\kappa_{43}\\ \kappa_{24} &\kappa_{34}&\kappa_{44}\\ \end{array} \right|,\nonumber\\ D_2^{1+}&=&\left| \begin{array}{cc} \kappa_{33} &\kappa_{34} \\ \kappa_{43} &\kappa_{44}\\ \end{array} \right|,\quad D_2^{2+}=\left| \begin{array}{ccc} \kappa_{22} &\kappa_{32} &\kappa_{42}\\ \kappa_{23} &\kappa_{33}&\kappa_{43}\\ \kappa_{24} &\kappa_{34}&\kappa_{44}\\ \end{array} \right|.\nonumber \end{eqnarray} \noindent(c) \underline{\it Soliton 3} ($\eta_{3R} \approx 0$, $\eta_{1R} \rightarrow -\infty$, $\eta_{2R} \rightarrow -\infty$, $\eta_{4R} \rightarrow \infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{3+} \\ A_2^{3+} \end{array} \right)k_{3R} {\mbox{sech}\,\left(\eta_{3R}+\phi^{3+}\right)}e^{i\eta_{3I}},\nonumber \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{c} A_1^{3+} \\ A_2^{3+} \end{array} \right) &=& \frac{1}{\sqrt{\mu \bigg(|\alpha_3^{(1)}|^2+|\alpha_3^{(2)}|^2}\bigg)}\left( \begin{array}{c} \frac{N_1^{3+}}{D_1^{3+}}\\ \frac{N_2^{3+}}{D_1^{3+}} \end{array} \right)\left( \frac {g_2} {g_2^*} \right)\Bigg(\frac{\kappa_{43}\kappa_{33}}{\kappa_{34}\kappa_{44}}\Bigg)^{\frac{1}{2}} ,\nonumber\\ \phi^{3+} &=& \ln\Bigg[\frac{|k_4-k_3|}{|k_3+k_4^*|\sqrt{2 k_{3R}}}\Bigg]+\ln\Bigg[\frac{D_1^{3+}}{\sqrt{\kappa_{44}}}\Bigg],\nonumber \end{eqnarray} Here \begin{eqnarray} g_2&=&(k_3^*+k_4)\sqrt{(k_3-k_4) (\alpha_3^{(1)}\alpha_4^{(1)*}+\alpha_3^{(2)}\alpha_4^{(2)*})},\nonumber\\ N_1^{3+}&=&\left| \begin{array}{cc} \alpha_3^{(1)} &\alpha_4^{(1)} \\ \kappa_{34} &\kappa_{44}\\ \end{array} \right|,\quad N_2^{3+}=\left| \begin{array}{cc} \alpha_3^{(2)} &\alpha _4^{(2)} \\ \kappa_{34} &\kappa_{44}\\ \end{array}\right|,\nonumber\\ 
D_1^{3+}&=&\left| \begin{array}{cc} \kappa_{33} &\kappa_{34} \\ \kappa_{43} &\kappa_{44}\\ \end{array} \right|^\frac{1}{2}.\nonumber \end{eqnarray} \noindent(d) \underline{\it Soliton 4} ($\eta_{4R} \approx 0$, $\eta_{1R} \rightarrow -\infty$, $\eta_{2R} \rightarrow -\infty$, $\eta_{3R} \rightarrow -\infty$): \begin{eqnarray} \left( \begin{array}{c} q_1\\ q_2 \end{array} \right) &\approx & \left( \begin{array}{c} A_1^{4+} \\ A_2^{4+} \end{array} \right)k_{4R} {\mbox{sech}\,\left(\eta_{4R}+\phi^{4+}\right)}e^{i\eta_{4I}}\nonumber, \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{c} A_1^{4+} \\ A_2^{4+} \end{array} \right) &=& \left( \begin{array}{c} \alpha_4^{(1)}\\ \alpha_4^{(2)} \end{array} \right)\frac{e^{D_1^{4+}/2}}{(k_4+k_4^*)},\nonumber\\ \phi^{4+} &=& \frac{D_1^{4+}}{2}, \quad e^{D_1^{4+}}=\frac{\mu (|\alpha_4^{(1)}|^2+|\alpha_4^{(2)}|^2)} {(k_4+k_4^*)^2}.\nonumber \end{eqnarray} \newpage \section*{Two dimensional plots of intensities of NOR gate} Figures 2-4, columns (a) and (b), depict the two dimensional plots of the intensities of the NOR gate at the input $(z=-15)$ and at the output $(z=15)$, respectively, corresponding to Figs. 2(a)-2(c) in the main text. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\linewidth]{s_fig2.png} \caption{NOR gate: Input states $``0"$ and $``1"$; Output state $``0"$.}\label{nor_01_2d} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.6\linewidth]{s_fig3.png} \caption{NOR gate: Input states $``1"$ and $``0"$; Output state $``0"$. }\label{nor_10_2d} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.6\linewidth]{s_fig4.png} \caption{NOR gate: Input states $``1"$ and $``1"$; Output state $``0"$. }\label{nor_11_2d} \end{center} \end{figure} \section*{Explicit construction of OR gate} To construct the OR gate, the two inputs are fed into the solitons $S_1$ and $S_2$ and the output is taken from soliton $S_4$.
In order to get the desired output satisfying the truth table of the OR gate, we make use of the condition $\rho^{4+}=\rho^{1-}+\rho^{2-}$ and choose \begin{eqnarray} \alpha_4^{(1)}= \left(\frac{\alpha_1^{(1)}}{\alpha_1^{(2)}}+\frac{\alpha_2^{(1)}\left((k_1-k_2)|\alpha_1^{(1)}|^2-(k_2+k_1^*)| \alpha_1^{(2)}|^2\right)+(k_1+k_1^*)\alpha_1^{(1)}\alpha_1^{(2)*}\alpha_2^{(2)}}{(k_1+k_1^*)\alpha_1^{(1)*}\alpha_2^{(1)}\alpha_1^{(2)}-\alpha_2^{(2)}\left((k_2-k_1) |\alpha_1^{(2)}|^2+(k_2+k_1^*)|\alpha_1^{(1)}|^2\right)}\right) \alpha_4^{(2)}\nonumber. \end{eqnarray} To realize the input states of the OR gate, namely $``0" \& ``0"$, $``0" \& ``1"$, $``1" \& ``0"$, and $``1" \& ``1"$, the polarization parameters $(\alpha_{l}^{(m)},\ l,m=1,2)$ of solitons $S_1$ and $S_2$ are chosen to be the same as those of the NOR gate. Similarly, all the other soliton parameters are chosen as in the case of the NOR gate, except that $\alpha_4^{(2)}=0.001-0.002 i$. Figures 5-8 depict the implementation of the OR gate. Tables I and II present the truth table and the intensity table of the OR gate, respectively. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{s_fig5.png} \caption{OR gate: Input states $``0"$ and $``0"$; Output state $``0"$.}\label{or_00} \end{center} \begin{center} \includegraphics[width=1 \linewidth]{s_fig6.png} \caption{OR gate: Input states $``0"$ and $``1"$; Output state $``1"$.}\label{or_01} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=1 \linewidth]{s_fig7.png} \caption{OR gate: Input states $``1"$ and $``0"$; Output state $``1"$.}\label{or_10} \end{center} \begin{center} \includegraphics[width=1\linewidth]{s_fig8.png} \caption{OR gate: Input states $``1"$ and $``1"$; Output state $``1"$.
}\label{or_11} \end{center} \end{figure} \begin{table}[!ht] \begin{minipage}{.5\linewidth} \caption{Truth table of OR gate} \centering \begin{tabular}{|c|c|c|} \hline Input 1 ($S_1$) & Input 2 ($S_2$) & Output ($S_{4}'''$) \\\hline 0& 0 & 0 \\ \hline 0& 1& 1 \\ \hline 1& 0 & 1 \\ \hline 1& 1 & 1 \\ \hline \end{tabular} \end{minipage}% \begin{minipage}{.5\linewidth} \caption{Intensity table of OR gate} \centering \begin{tabular}{|c|c|c|} \hline Input 1 ($S_1$) (W) & Input 2 ($S_2$) (W) & Output ($S_{4}'''$) (W) \\\hline $|\rho^{1-}|^2$ &$|\rho^{2-}|^2$ &$|\rho^{4+}|^2$ \\ \hline 0.1& 0.3 & 0.7 \\ \hline 0.1& 32& 40 \\ \hline 10& 0.02 & 8 \\ \hline 9& 5 & 27 \\ \hline \end{tabular} \end{minipage} \end{table} \newpage
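The closed-form choice of $\alpha_4^{(1)}$ above is straightforward to evaluate numerically. The following sketch (the wavenumbers $k_1$, $k_2$ and the polarization constants of $S_1$, $S_2$ are illustrative placeholders, not the values used to generate the figures; only $\alpha_4^{(2)}=0.001-0.002i$ is taken from the text) computes $\alpha_4^{(1)}$ from the quoted expression:

```python
# Hypothetical soliton parameters (illustrative choices only).
k1 = 1.0 + 0.5j
k2 = 1.2 - 0.3j
a11, a12 = 0.8 + 0.1j, 0.5 - 0.2j   # alpha_1^{(1)}, alpha_1^{(2)}
a21, a22 = 0.3 + 0.4j, 0.9 + 0.0j   # alpha_2^{(1)}, alpha_2^{(2)}
a42 = 0.001 - 0.002j                # alpha_4^{(2)} as quoted above

# alpha_4^{(1)} fixed by the condition rho^{4+} = rho^{1-} + rho^{2-}
num = (a21 * ((k1 - k2) * abs(a11)**2 - (k2 + k1.conjugate()) * abs(a12)**2)
       + (k1 + k1.conjugate()) * a11 * a12.conjugate() * a22)
den = ((k1 + k1.conjugate()) * a11.conjugate() * a21 * a12
       - a22 * ((k2 - k1) * abs(a12)**2 + (k2 + k1.conjugate()) * abs(a11)**2))
a41 = (a11 / a12 + num / den) * a42
print(a41)
```

For a concrete gate design one would substitute the parameter values actually used for the NOR gate above.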
\section{Introduction} Considerable attention has been paid to the quantum entanglement entropy, which has become an important physical concept for probing quantum features in a variety of areas of physics. Although the entanglement entropy is well defined in a quantum field theory (QFT) \cite{Holzhey:1994we,Vidal:2002rm,Latorre:2003kg,Casini:2004bw}, it is not easy to calculate for an interacting QFT. In this situation, holography, recently conjectured in string theory \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj,Witten:1998zw}, allows us to evaluate such a nontrivial entanglement entropy nonperturbatively, even for a strongly interacting theory \cite{Ryu:2006bv,Ryu:2006ef,Hubeny:2007xt,Solodukhin:2008dh,Nishioka:2009un,Casini:2009sr,Myers:2010tj,Takayanagi:2012kg, VanRaamsdonk:2010pw,Casini:2011kv}. We investigate the quantum entanglement of an expanding system and of an inflationary cosmology by applying holography. In order to calculate the entanglement entropy, one first divides a system into two subsystems, $A$ and $B$; the reduced density matrix of $A$ is then defined by tracing the total density matrix over the other subsystem $B$. In this case, the two subsystems are separated by an entangling surface, and an observer living in $A$ cannot receive any information from $B$. This situation is very similar to that of a black hole \cite{Bombelli:1986rw,Srednicki:1993im}. An observer living at the asymptotic boundary can never get any information from the inside of the black hole horizon. Because of this similarity, there have been many attempts to understand the Bekenstein-Hawking entropy in terms of the entanglement entropy \cite{Brustein:2005vx,Emparan:2006ni,Cadoni:2007vf,Solodukhin:2011gn,Casini:2012ei,Klebanov:2012yf,Nishioka:2014kpa,Myers:2012ed,Nozaki:2013wia,Nozaki:2014hna,Caputa:2014vaa,Park:2015hcz,Kim:2016jwu}.
Furthermore, the similarity between the black hole horizon and the entangling surface has led to a new and fascinating holographic formula for calculating the entanglement entropy on the dual gravity side. Although the holographic method has not been proved yet, it has been checked that the holographic formula perfectly reproduces the known results of a two-dimensional conformal field theory (CFT) \cite{Calabrese:2004eu,Calabrese:2005zw,Calabrese:2009qy,Lewkowycz:2013nqa,Kim:2016hig,Kim:2017lyx,Narayanan:2018ilr}. In a cosmological model described by a dS space \cite{Maldacena:2012xp,Liu:2012eea}, there exists a specific surface called the cosmic event horizon, which is similar to the black hole horizon. An observer living at the center of a dS space cannot see outside the cosmic event horizon, and the cosmic event horizon radiates similarly to the black hole horizon. From the quantum entanglement point of view, the cosmic event horizon naturally divides the universe into two subsystems. One is the visible universe, which we can eventually see, and the other is called the invisible universe. Here, the invisible universe denotes the region that an observer living in the visible universe cannot see, even after infinite time evolution. In general, the cosmic event horizon remains constant in the late inflation era. The cosmic event horizon, similar to the black hole horizon, provides a natural entangling surface dividing the universe into two parts. Although the visible and invisible universes are causally disconnected from each other, quantum correlation between them can still exist. Therefore, it would be interesting to investigate the quantum entanglement between the visible and invisible universes, which may give us new information about the outside of our visible universe and about the effect of the invisible universe on the cosmology of the visible universe.
In order to investigate the quantum entanglement between two subsystems in the expanding universe, we take into account an AdS space with a dS boundary space \cite{Bucher:1994gb,Sasaki:1994yt}. The minimal surface extended into such an AdS space corresponds to the entanglement entropy of an expanding space defined at the boundary of the AdS space \cite{Maldacena:2012xp}. Before studying the entanglement entropy of a visible universe, we first consider a system whose boundary expands in time, unlike the cosmic event horizon. In this case, the entanglement entropy in the early-time era increases by the square of the cosmological time $\tau$, whereas in the late-time era it grows exponentially as $e^{(d-2) H \tau}$ for a $d$-dimensional QFT. If we take the cosmic event horizon as an entangling surface, the entanglement entropy shows a totally different behavior. The cosmic event horizon at $\tau=0$ is located at the equator of a $(d-1)$-dimensional sphere and monotonically decreases as the cosmological time goes on. In the late inflation era, the cosmic event horizon approaches a constant value proportional to the inverse of the Hubble constant. Similarly, the corresponding entanglement entropy also monotonically decreases and approaches a constant value at $\tau=\infty$. The rest of this paper is organized as follows: In Sec. \ref{sec:2}, we briefly review an AdS space with a dS boundary. On this background, we study the entanglement entropy of an expanding system for the $d=2,3,4$ cases in Sec. \ref{sec:3}. In Sec. \ref{sec:4}, we introduce the cosmic event horizon and divide the universe into visible and invisible universes. On this background, we study the quantum correlation between the visible and invisible universes in the inflationary cosmology. Finally, we finish this work with concluding remarks in Sec. \ref{sec:5}.
\section{AdS space with a dS boundary} \label{sec:2} Consider a $(d+1)$-dimensional AdS space which can be embedded into a $(d+2)$-dimensional flat manifold with two time signatures. Denoting the $(d+2)$-dimensional flat metric as \begin{eqnarray} ds^2 = - dY_{-1}^2 - dY_{0}^2 + \delta_{ij} dY^i dY^j , \end{eqnarray} where $i$ and $j$ run from $1$ to $d$, the Lorentz group of this $(d+2)$-dimensional flat space is given by $SO(2,d)$. In order to obtain a $(d+1)$-dimensional AdS metric, we impose the following constraint \begin{eqnarray} - R^2 = - Y_{-1}^2 - Y_{0}^2 + \delta_{ij} Y^i Y^j . \end{eqnarray} Then, the hyper-surface satisfying this constraint represents a $(d+1)$-dimensional AdS space with an AdS radius $R$. Since the imposed constraint is also invariant under the $SO(2,d)$ transformation, the resulting AdS geometry becomes a $(d+1)$-dimensional space invariant under the $SO(2,d)$ transformation, which is nothing but the isometry group of the AdS space. There exists a variety of parametrizations satisfying the above constraint. In this work, we focus on the parametrization which allows a $d$-dimensional de Sitter (dS) space at the boundary. Now, let us parametrize the coordinates of the ambient space as \cite{Maldacena:2012xp} \begin{eqnarray} Y_{-1} = R \cosh \frac{\rho}{R} \quad , \quad Y_{0} = R \sinh \frac{\rho}{R} \sinh \frac{t}{R} \quad {\rm and} \quad Y^i = R n^i \sinh \frac{\rho}{R} \cosh \frac{t}{R} , \end{eqnarray} where $n^i$ indicates a $d$-dimensional unit vector satisfying $\delta_{ij} n^i n^j=1$. The resulting AdS metric then gives rise to \begin{eqnarray} \label{res:dp1metric} ds^2 = d\rho^2 + \sinh^2 \left( \frac{\rho}{R} \right) \left[ - d t^2 + R^2 \cosh^2 \left( \frac{t}{R} \right) \left( d \theta^2 + \sin^2 \theta d \Omega_{d-2}^2 \right) \right] , \end{eqnarray} where $d \theta^2 + \sin^2 \theta d \Omega_{d-2}^2$ indicates the metric of a $(d-1)$-dimensional unit sphere.
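As a quick numerical sanity check (a sketch with arbitrary sample values of $\rho$, $t$, and $R$; the unit vector $n^i$ is reduced to a single component), one can verify that the parametrization above satisfies the embedding constraint $- R^2 = - Y_{-1}^2 - Y_{0}^2 + \delta_{ij} Y^i Y^j$ as a consequence of $\cosh^2 x - \sinh^2 x = 1$:

```python
import math

R = 1.3  # AdS radius (arbitrary test value)
for rho, t in [(0.4, 0.0), (1.7, 0.9), (2.5, -1.2)]:
    # n^i reduced to a single unit component, so delta_ij n^i n^j = 1
    Ym1 = R * math.cosh(rho / R)
    Y0 = R * math.sinh(rho / R) * math.sinh(t / R)
    Yi = R * math.sinh(rho / R) * math.cosh(t / R)
    constraint = -Ym1**2 - Y0**2 + Yi**2
    # hyper-surface condition: constraint must equal -R^2
    assert abs(constraint + R**2) < 1e-9
```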
According to the AdS/CFT correspondence, the boundary of this AdS space defined at $\rho=\infty$ can be regarded as the space-time we live in. The boundary metric above is that of a dS space, which can describe an inflationary cosmology. In this work, after dividing the boundary space into two subsystems, we investigate the quantum correlation between them. In order to divide the boundary space into two subsystems, let us first assume that we are at $\theta=0$ and that the two subsystems share a border at $\theta_o$. For convenience, we call the subsystem we are in the observable system and the other subsystem the unobservable system. In general, the border is called the entangling surface in entanglement entropy studies. Although we cannot get any information from the unobservable system, the quantum state of the observable system can be affected by the unobservable system due to the nontrivial quantum entanglement. Assume that the entire system is given by a pure state $\left. |\Psi \right\rangle$ represented by the product of the two subsystems' states \cite{Ryu:2006bv,Ryu:2006ef,Casini:2011kv,Calabrese:2004eu,Calabrese:2005zw,Calabrese:2009qy,Rosenhaus:2014woa,Rosenhaus:2014ula,Rosenhaus:2014zza} \begin{eqnarray} \left. |\Psi \right\rangle = \left. |\psi \right\rangle_o \left. |\psi \right\rangle_{u} , \end{eqnarray} where $\left. |\psi \right\rangle_o$ and $\left. |\psi \right\rangle_{u}$ indicate the states of the observable and unobservable systems, respectively. Then, the reduced density matrix of the observable system is given by tracing over the unobservable part \begin{eqnarray} \rho_o = {\rm Tr}\,_{u} \left. |\Psi \right\rangle \left\langle \Psi | \right. , \end{eqnarray} and the entanglement entropy is described by the von Neumann entropy \begin{eqnarray} S_E = - {\rm Tr}\,_o \ \rho_o \log \rho_o . \end{eqnarray} Although the entanglement entropy is conceptually well defined in a quantum field theory, it is not easy to calculate in general cases.
Recently, Ryu and Takayanagi proposed a new method called the holographic entanglement entropy \cite{Ryu:2006bv,Ryu:2006ef}. According to the AdS/CFT correspondence, the entanglement entropy can be easily evaluated by calculating the area of the minimal surface extended into the dual geometry. Following this holographic proposal, we will discuss the entanglement entropy of the expanding system in \eq{res:dp1metric}. \section{Entanglement entropy on the expanding system } \label{sec:3} Until now, we have discussed the entanglement entropy between the observable and unobservable systems. However, it is still not clear how we should divide the observable and unobservable systems. One simple choice is to take a constant $\theta_o$. Under this simple ansatz, the sizes of the two subsystems gradually increase as time goes on. The set-up with a constant $\theta_o$ may be useful to describe an expanding material or to figure out the entanglement entropy of a time-dependent subsystem. On the other hand, it is also interesting to take into account a time-dependent $\theta_o$. In this section, we first investigate the entanglement entropy defined by a constant $\theta_o$ and then discuss further the entanglement entropy in the inflationary cosmology with a time-dependent $\theta_o$ in the next section. For simplicity, let us first consider the $d=2$ case, which gives us a solvable toy model. For $d=2$, the dual geometry reduces to the three-dimensional AdS space \begin{eqnarray} \label{res:d2metric} ds^2 = d\rho^2 + \sinh^2 \left( \frac{\rho}{R} \right) \left[ - d t^2 + R^2 \cosh^2 \left( \frac{t}{R} \right) \ d \theta^2 \right] . \end{eqnarray} If we focus on the boundary space of this AdS space at fixed $\rho$, the boundary metric takes the form of a time-dependent cosmological metric.
In order to describe the entanglement entropy on the time-dependent background, we assume that the observable system is in the range of \begin{eqnarray} - \frac{\theta_o}{2} \le \theta \le \frac{\theta_o}{2} . \end{eqnarray} In this section, we regard $\theta_o$ as a constant, as mentioned before. Then, $\pm \theta_o/2$ correspond to the two boundaries of the observable system. Since $\theta_o$ is constant, the size of the observable system expands. More precisely, the size of the observable system is given by $\sinh (\Lambda/R) \cosh (t/R) R \theta_o$ at the AdS boundary denoted by $\rho=\Lambda$. This shows that the size of the observable system grows as $\cosh (t/R)$. Since $\cosh (t/R)$ is invariant under time reversal, from now on we take into account only the non-negative time period, $0 \le t < \infty$. This implies that the observable system begins its expansion at $t=0$. Taking $\Lambda$ to infinity usually leads to a divergence. In the holographic set-up, this divergence is associated with a UV divergence of the dual field theory, and $\Lambda$ is introduced to regularize the UV divergence. If we take a finite but large energy scale for $\Lambda$, it may be associated with the energy scale at which the expansion begins. In order to get more physical intuition about the entanglement entropy on the time-dependent geometry, let us consider several particular limits. We first define the turning point $\rho_*$, which corresponds to the minimum value reached by the minimal surface. In the case with $ \rho_* /R \gg 1$, we can calculate the entanglement entropy analytically, albeit perturbatively, even for higher dimensional cases. This parameter range corresponds to the UV limit and may provide a good guideline for interpreting the numerical results.
For $\rho_* /R \gg 1$, the entanglement entropy is governed by \cite{Park:2015afa,Park:2015dia,Kim:2014yca,Kim:2014qpa,Kim:2018mgz} \begin{eqnarray} S_E = \frac{1}{4 G} \int_{-\theta_o/2}^{\theta_o/2} d \theta \sqrt{\rho'^2 + \frac{ R^2}{4} e^{2 \rho/R} \cosh^2 \left( t /R \right) } . \end{eqnarray} Solving the equation of motion derived from it, $\theta_o$ at given $t$ is determined by the turning point \begin{eqnarray} \theta_o = \frac{4 }{e^{\rho_*/R} \cosh ( t/R )} . \end{eqnarray} When $\theta_o$ and $t$ are given, inversely, the turning point can also be regarded as a function of $\theta_o$ and $t$ \begin{eqnarray} \label{res:sturnpt} e^{\rho_*/R} = \frac{4}{\theta_o } \frac{1}{ \cosh ( t/R ) } . \end{eqnarray} Note that $t/R$ must not be large to obtain a large $\rho_*/R$. This fact implies that the approximation with $\rho_*/R \gg 1$ is valid only in the early time. In addition, this result shows that the turning point goes into the interior of the AdS space as time evolves. Performing the integral of the entanglement entropy with the obtained solution, the resulting entanglement entropy reduces to \begin{eqnarray} S_E = \frac{ \left[ (\Lambda- \rho_*) + R \log 2 \right] }{2 G } . \end{eqnarray} This result together with \eq{res:sturnpt} shows that the entanglement entropy increases by $t^2$ for $t/R \ll 1$ \begin{eqnarray} \label{res:earlybe1} S_E \sim \frac{ R \log \theta_o + \Lambda - R\log 2 }{2 G}+\frac{ t^2 }{4 G R} . \end{eqnarray} If $ t/R > 1$, on the other hand, it increases linearly in time \begin{eqnarray} \label{res:earlybe2} S_E \sim \frac{ R \log \theta_o + \Lambda -2 R \log 2 }{2 G }+\frac{ t}{2 G} . \end{eqnarray} Now, let us take into account a more general case without the constraint $ \rho_*/R \gg 1$. The general form of the entanglement entropy reads from \eq{res:d2metric} \begin{eqnarray} S_E = \frac{1}{4 G} \int_{-\theta_o/2}^{\theta_o/2} d \theta \sqrt{\rho'^2 + R^2 \sinh^2 ( \rho/R) \cosh^2 (t/R) } . 
\end{eqnarray} After solving the equation of motion, performing the integral gives rise to \begin{eqnarray} \frac{\theta_o}{2} &=& \int_{\rho_*}^{\infty} d \rho \ \frac{ \sinh \left(\rho _* /R \right) }{ R \cosh(t/R) \sinh ( \rho/R ) \sqrt{\sinh ^2(\rho/R )-\sinh ^2\left(\rho _*/R \right)}} \nonumber\\ &=& \frac{1}{ \cosh (t/R) } \left[ \frac{\pi}{2} - \arctan \left( \sinh ( \rho_*/R) \right) \right] . \end{eqnarray} Rewriting it leads to the following relation \begin{eqnarray} \sinh (\rho_*/R) = \cot \left( \frac{\theta_o \cosh (t/R)}{2}\right) , \end{eqnarray} which reproduces the previous result in \eq{res:sturnpt} for $\rho_* /R \gg 1$. In the general case, the resulting entanglement entropy reads \begin{eqnarray} \label{res:exact2dHEE} S_E &=& \frac{\Lambda }{2 G} +\frac{ R \log \left[ \sin \left(\frac{1}{2} \theta_o \cosh (t/R) \right)\right]}{2 G }. \end{eqnarray} When $\theta_o \cosh (t/R) \ll 1$, this result again reproduces the previous ones obtained in the early-time era. It is worth noting that the resulting entanglement entropy is well defined only in the time range $0 \le t < t_f$, where $t_f$ satisfies $\theta_o \cosh (t_f /R)= 2 \pi$. After this critical time $t_f$, the logarithmic term of the entanglement entropy is not well defined. Now, let us define an additional critical time $t_m$ satisfying $\theta_o \cosh (t_m/R) = \pi$. At this critical time ($t=t_m$), the observable and unobservable systems have the same size. In this case, the turning point is located at $\rho_*=0$ and the entanglement entropy has a maximum value, $S_E=\Lambda/(2 G)$. Near $t_m$ ($t<t_m$), the entanglement entropy approaches this maximum value slowly, as $- (t_m - t)^2$ \begin{eqnarray} S_E \approx \frac{ \Lambda }{2 G} -\frac{\theta_o^2 \sinh ^2\left( t_m /R \right)}{16 G R} \ (t_m - t)^2 + {\cal O} \left( (t_m - t)^4 \right) .
\end{eqnarray} From these results, we can see that the entanglement entropy of the observable system increases by $t^2$ in the early time and saturates the maximum value at a finite time $t_m$. After $t_m$, the entanglement entropy rapidly decreases, as shown in Fig. \ref{d2afig1}. As a consequence, we can summarize the entanglement entropy of a one-dimensional expanding system as follows: \begin{itemize} \item In the early time with $\rho_*/R \gg1$ and $t/R \ll1$, the entanglement entropy increases by $t^2$. \item In the intermediate era with $\rho_*/R \gg1$ and $t/R > 1$, the entanglement entropy increases linearly as time evolves. \item In the late time with $ \rho_*/R \sim 0$ and $t \approx t_m$, the entanglement entropy slowly increases by $- (t_m - t)^2$ and finally saturates the maximum value at $t=t_m$. \item After $t_m$, the entanglement entropy rapidly decreases. \end{itemize} In Fig. \ref{d2afig1}, we plot the exact entanglement entropy given in \eq{res:exact2dHEE}, which shows the time dependence expected from the analytic calculation in the particular limits above. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig1} \caption{We take $\epsilon=1/1000$, $R=100$, $\theta_o = 1/10$ and $G=1$.} \label{d2afig1} \end{center} \end{figure} It is well known that the entanglement entropy of a two-dimensional CFT dual to an AdS$_3$ has a logarithmic divergence whose coefficient is proportional to the central charge of the dual CFT \cite{Ryu:2006bv,Ryu:2006ef}. However, the above result for AdS$_3$ with the dS$_2$ boundary shows a linear divergence ($\sim \Lambda$) instead of the logarithmic one. This is because the coordinate used in this work is different from the one usually used in Refs. \cite{Ryu:2006bv,Ryu:2006ef,Narayanan:2018ilr}. To see this, let us introduce a new coordinate \begin{eqnarray} \sinh ( \rho/R) = \frac{R}{ z} .
\end{eqnarray} Then, the three-dimensional AdS metric in \eq{res:d2metric} can be rewritten as \begin{eqnarray} ds^2 = \frac{R^2 d z^2}{ z^2 (1+z^2/R^2)} + \frac{R^2}{ z^2} \left[ - d t^2 + R^2 \cosh^2 ( t/R) \ d \theta^2 \right] , \end{eqnarray} where the boundary is located at $z=0$. In the UV limit ($z \to 0$) with $t/R \ll1$, the new coordinate is related to the original one by $e^{ \rho/R} \sim R/z$ and the above metric is further simplified to \begin{eqnarray} ds^2 \approx \frac{R^2 d z^2}{ z^2 } + \frac{R^2}{z^2} \left[ - d t^2 + \frac{R^2}{4} \ d \theta^2 \right] , \end{eqnarray} which is locally equivalent to the AdS space in the Poincare patch. Thus, the linear divergence appearing in \eq{res:exact2dHEE} can be reinterpreted as a logarithmic one in the new coordinate system, $\Lambda/R = - \log (\epsilon/R)$, where $\epsilon$ indicates the UV cut-off of the $z$-coordinate. As a result, the linear divergence obtained here is consistent with the known logarithmic one up to the coordinate transformation. \subsection{On higher dimensional expanding observable system} Now, let us take into account higher dimensional cases with $d \ge 3$. For convenience, we use the new coordinate $R/ z =\sinh ( \rho/R)$. Then, the previous $(d+1)$-dimensional AdS metric can be rewritten as \begin{eqnarray} \label{metric:general} ds^2 = \frac{R^2 d z^2}{z^2 (1+z^2/R^2)} + \frac{R^2}{z^2} \left[ - d t^2 + R^2 \cosh^2 (t/R) \ \left( d \theta^2 + \sin^2 \theta d \Omega_{d-2}^2 \right) \right] , \end{eqnarray} where the boundary is located at $z=0$. On this background, the holographic entanglement entropy is governed by \begin{equation} \label{eq:entangleform} S_E = \dfrac{\Omega_{d-2} R^{2 d-3} \cosh^{d-2} (t/R) }{4 G} \int_{0}^{\theta_o} d\theta \ \frac{\sin^{d-2}\theta }{z^{d-1}} \sqrt{\dfrac{z'^2}{1 + z^2/R^2} + R^2 \cosh^2 (t/R) } , \end{equation} where we take the range of $\theta$ as $0 \le \theta \le \theta_o$ instead of $-\theta_o /2 \le \theta \le \theta_o/2$. 
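Two of the $d=2$ relations above admit quick numerical checks (a sketch with arbitrary sample values, used purely as a consistency test): the inverted turning-point relation $\sinh (\rho_*/R) = \cot \left( \theta_o \cosh (t/R)/2 \right)$, which follows from the identity $\cot(\pi/2 - \arctan x) = x$, and the UV behavior of the coordinate map $\sinh(\rho/R)=R/z$, for which $e^{\rho/R} \to 2R/z$ as $z \to 0$:

```python
import math

R, t = 1.0, 0.7

# (i) turning-point relation: compute theta_o from the integrated
# minimal-surface condition, then invert and compare with sinh(rho_*/R)
for rho_star in (0.3, 1.0, 2.5):
    x = math.sinh(rho_star / R)
    theta_o = 2.0 * (math.pi / 2 - math.atan(x)) / math.cosh(t / R)
    cot = 1.0 / math.tan(theta_o * math.cosh(t / R) / 2.0)
    assert abs(cot - x) < 1e-9

# (ii) coordinate map in the UV: e^{rho/R} * z / R -> 2 as z -> 0
prev_gap = float("inf")
for z in (1e-2, 1e-4, 1e-6):
    rho = R * math.asinh(R / z)  # from sinh(rho/R) = R/z
    gap = abs(math.exp(rho / R) * z / R - 2.0)
    assert gap < prev_gap        # the gap shrinks as z -> 0
    prev_gap = gap
```

The factor of $2$ in check (ii) is the same $\log 2$ offset absorbed into the identification $\Lambda/R = -\log(\epsilon/R)$ up to the coordinate transformation.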
Varying this action at a given time, the configuration of the minimal surface is determined by a highly nontrivial differential equation. Since this equation does not allow us to write down an analytic solution, a numerical study is inevitable for a higher dimensional theory. However, if we focus on the early-time behavior of the entanglement entropy, we can find a perturbative, analytic solution and gain more intuition. In this section, we first study the entanglement entropy analytically in the early time and then look into the time evolution of the entanglement entropy numerically. Now, let us discuss the entanglement entropy of the observable system in the early time with $t/R \ll 1$. We first assume that the observable system is very tiny in the early time. Then, we can take $\theta_o \ll 1$. In this case, the minimal surface is extended only into the UV region represented as $0 \le z \le z_*$ with $z_*/R \ll 1$. This is because $z_*/R$ is usually proportional to $\theta_o$ at $t=0$, as will be seen. Due to the small size of the observable system, the AdS metric in the early time can be well approximated by \begin{eqnarray} \label{metric:pert} ds^2 \approx \frac{R^2 dz^2}{z^2 (1+z^2/R^2)} + \frac{R^2}{z^2} \left[ - d t^2 + R^2 \cosh^2 ( t/R) \ \left( d \theta^2 + \theta^2 d \Omega_{d-2}^2 \right) \right] . \end{eqnarray} On this background, the entanglement entropy is given by \begin{eqnarray} \label{act:original} S_E = \frac{\Omega_{d-2} R^{2d-3} \cosh ^{d-2} (t/R)}{4 G} \int_{0}^{\theta_o} d \theta \ \frac{\theta ^{d-2} }{z^{d-1} } \sqrt{ \frac{z'^2}{1+z^2/R^2} + R^2 \cosh ^2 (t/R) }. \end{eqnarray} In order to find a perturbative solution satisfying $z/R \le z_*/R \ll 1$, we introduce a small parameter $\lambda$ indicating the smallness of the solution. Then, the perturbative expansion of the solution can be parametrized as \begin{eqnarray} \label{ansatz:pertsol} z(\theta)= \lambda \left( z_0 (\theta) + \lambda z_1(\theta) + \lambda^2 z_2 (\theta)+\cdots \right) .
\end{eqnarray} When varying this perturbative solution with respect to $\theta$, it is worth noting that the derivative of the solution, $z'(\theta)$, must be expanded as \begin{eqnarray} z'(\theta)= z_0' (\theta) + \lambda z_1'(\theta) + \lambda^2 z_2'(\theta)+\cdots . \end{eqnarray} This is because $\theta$ is of the same order as $z/R$ in the early time. Before performing the explicit calculation, let us think about the parity transformation, $z \to - z$ and $\theta \to - \theta$. Under this parity transformation, we can easily see that the metric in \eq{metric:pert} and the entanglement entropy are invariant. If we transform $\lambda \to - \lambda$ instead of $z_n$ in \eq{ansatz:pertsol}, only the $z_{2n}$ terms transform consistently with $z \to - z$. For this reason, the $z_{2n+1}$ terms automatically vanish. As a consequence, we can set $z_1(\theta)=0$ without loss of generality. At leading order in $\lambda$, the entanglement entropy is given by \begin{eqnarray} \label{act:generalact} S_0 = \frac{\Omega_{d-2} R^{2 d-3} \cosh ^{d-2} (t/R)}{4 G} \int_{0}^{\theta_o} d \theta \ \frac{\theta ^{d-2} }{z_0^{d-1} } \sqrt{ z_0'^2 + R^2 \cosh ^2 (t/R) } . \end{eqnarray} In a higher dimensional theory, unlike the $d=2$ case, the entanglement entropy depends on $\theta$ explicitly. Thus, there is no well-defined conserved quantity, unlike in the $d=2$ case. This fact implies that we must solve the second order differential equation to obtain the entanglement entropy. At leading order, the minimal surface configuration can be determined by solving the equation of motion derived from $S_0$ \begin{eqnarray} \label{eq:generaleq} 0 &=& \frac{2 \cosh ^2(t/R) \theta z_0 z_0'' }{R^2} + \frac{2 (d-2) z_0 z_0'^3}{R^4 } + \frac{2 (d-1) \cosh ^2(t/R) \theta z_0'^2}{R^2 } \nonumber\\ && + \frac{2 (d-2) \cosh ^2(t/R) z_0 z_0' }{R^2} + 2 (d-1) \cosh ^4 ( t/R) \theta .
\end{eqnarray} Despite the complexity of the equation of motion, it allows the following simple and exact solution regardless of the dimension $d$ \begin{eqnarray} \label{res:leadingsol} \frac{z_0}{R} = \cosh (t/R) \sqrt{\theta_o^2 - \theta^2} . \end{eqnarray} From this, we see that the turning point denoted by $z_*$ is proportional to $\theta_o$, as mentioned before, \begin{eqnarray} \frac{z_*}{R} = \theta_o \cosh (t/R) . \end{eqnarray} Note that this relation is derived from the leading order entanglement entropy. If we further consider higher order corrections, the turning point can vary with some small corrections. When a UV cut-off denoted by $\epsilon$ is given, we can easily see from the background metric that the volume of the observable system is given by \begin{eqnarray} {\cal V}_{d-1} = \frac{\Omega_{d-2} R^{2(d-1)} \cosh^{d-1} (t/R)}{d-1} \ \frac{\theta_o^{d-1}}{\epsilon^{d-1}} , \end{eqnarray} while the area of the entangling surface becomes \begin{eqnarray} {\cal A}_{d-2} (t) = \Omega_{d-2} R^{2(d-2)} \cosh^{d-2} (t/R) \ \frac{\theta_o^{d-2}}{\epsilon^{d-2}} . \end{eqnarray} These formulae show that the area of the entangling surface increases by $\cosh^{d-2} (t/R)$ as time evolves. At $t=0$, in particular, the area reduces to \begin{eqnarray} \label{res:coshoarea} \bar{{\cal A}}_{d-2} = \Omega_{d-2} R^{2(d-2)} \ \frac{\theta_o^{d-2}}{\epsilon^{d-2}} , \end{eqnarray} which can be determined by two parameters, $\epsilon$ and $\theta_o$. In the holographic study, the minimal surface is extended only to $\epsilon \le z \le z_*$, so that $z_*> \epsilon $ must be satisfied for consistency. Recalling further that $ z_*/R = \theta_o$ at $t=0$, we finally obtain $\theta_o > \epsilon/R$. This fact implies that, when the expansion begins at $t=0$, the observable system and the entangling surface have the non-vanishing volume and area. Now, let us consider the $d=3$ case. 
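Incidentally, that \eq{res:leadingsol} solves \eq{eq:generaleq} for general $d$, including $d=3$, can be confirmed numerically; the following sketch (arbitrary sample parameters) evaluates the residual of the equation of motion using the exact derivatives of the solution:

```python
import math

def residual(theta, t, R, theta_o, d):
    """EOM residual for z0 = R cosh(t/R) sqrt(theta_o^2 - theta^2)."""
    c = math.cosh(t / R)
    s = math.sqrt(theta_o**2 - theta**2)
    z0 = R * c * s
    z0p = -R * c * theta / s                 # dz0/dtheta
    z0pp = -R * c * theta_o**2 / s**3        # d^2 z0/dtheta^2
    return (2 * c**2 * theta * z0 * z0pp / R**2
            + 2 * (d - 2) * z0 * z0p**3 / R**4
            + 2 * (d - 1) * c**2 * theta * z0p**2 / R**2
            + 2 * (d - 2) * c**2 * z0 * z0p / R**2
            + 2 * (d - 1) * c**4 * theta)

# the residual vanishes for any dimension d and any 0 < theta < theta_o
for d in (3, 4, 5):
    for theta in (0.01, 0.04, 0.07):
        assert abs(residual(theta, 0.5, 1.0, 0.1, d)) < 1e-9
```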
Using the perturbative expansion discussed before, the entanglement entropy is expanded into \begin{eqnarray} S_E = S_0 + S_2 + \cdots, \end{eqnarray} with \begin{eqnarray} S_0 &=& \frac{\Omega_1 R^3 \cosh (t/R)}{4 G} \int_0^{\theta_o-\theta_c} d \theta \ \frac{\theta}{ z_0^2} \sqrt{ z_0'^2 + R^2 \cosh^2 (t/R) } , \nonumber\\ S_2 &=& - \frac{\Omega _1 R^4 \cosh (t/R) }{8 G } \int_0^{\theta_o} d \theta \ \frac{\theta \left[ z_0^3 z_0'^2 + 4 R^2 z_2 z_0'^2- 2 R^2 z_0 z_0' z_2' +4 R^4 z_2 \cosh ^2 (t/R) \right] }{ R^3 z_0^3 \sqrt{z_0'^2+R^2 \cosh ^2 (t/R)}} , \end{eqnarray} where we set $\lambda=1$ and introduce $\theta_c$ as a UV cut-off in the $\theta$-direction. In the second integral, $\theta_c$ is removed because it does not give any additional UV divergence. Substituting the leading order solution in \eq{res:leadingsol} into $S_0$ and performing the integral, we obtain the leading contribution to the entanglement entropy \begin{eqnarray} S_0 = \frac{\Omega _1 R^2 \sqrt{\theta_o} }{4 \sqrt{2} G \sqrt{\theta_c}}-\frac{\Omega _1 R^2}{4 G } . \end{eqnarray} The first correction caused by $z_2(\theta)$ is determined by the following differential equation \begin{eqnarray} 0 = z_2''+ \frac{\left(\theta_o^2 - 2 \theta ^2\right) }{\theta \left(\theta_o^2 - \theta ^2 \right)} z_2' -\frac{2 \theta_o^2 }{\left(\theta_o^2 - \theta ^2 \right){}^2} z_2 + 2 R \sqrt{\theta_o^2-\theta ^2} \cosh ^3(t/R) . \end{eqnarray} This equation allows an exact solution \begin{eqnarray} z_2 =c_2 -\frac{c_2 \theta_o \tanh ^{-1}\left(\frac{\sqrt{\theta_o^2-\theta ^2}}{\theta_o}\right)}{\sqrt{\theta_o^2-\theta ^2}}+\frac{6 c_1 +\left(\theta ^4-4 \theta_o^2 \theta ^2+3 \theta_o^4+4 \theta_o^4 \log \theta \right) R \cosh^3(t/R)}{6 \sqrt{\theta_o^2-\theta ^2} } , \end{eqnarray} where $c_1$ and $c_2$ are two integration constants. These two constants must be fixed by imposing two appropriate boundary conditions. The natural boundary conditions are $z_2 (\theta_o) = 0$ and $z_2'(0)=0$.
The first condition implies that the end of the minimal surface is located at the boundary, while the second constraint is required to obtain a smooth minimal surface at $\theta=0$. These two boundary conditions determine the two integration constants to be \begin{eqnarray} c_1 &=& - \frac{2 \theta_o^4 R \log\theta_o \cosh^3 (t/R) }{3 } ,\nonumber\\ c_2 &=& - \frac{2 \theta_o^3 R \cosh^3 (t/R)}{3 } . \end{eqnarray} Substituting the perturbative solutions found above into $S_2$, the first correction to the entanglement entropy is given by \begin{eqnarray} S_2 = - \frac{5 \theta_o^2 \Omega_1 R^2 \cosh^2 (t/R)}{36 G } . \end{eqnarray} The regulator $\theta_c$ above is usually associated with the regulator $\epsilon$ in the $z$-direction. Using the perturbative solution we found, $\theta_c$ can be represented as a function of $\epsilon$ \begin{eqnarray} \theta_c = \frac{\epsilon^2 }{ 2 \theta_o R^2 \cosh^2 (t/R)} - \frac{2 \epsilon^3}{9 R^3 \cosh (t/R) } + {\cal O} (\epsilon^4) . \end{eqnarray} As a consequence, the resulting perturbative entanglement entropy leads to \begin{eqnarray} S_E =\frac{\theta_o \Omega_1 R^3 \cosh (t/R)}{4 G \epsilon}-\frac{\Omega_1 R^2 \left(\theta_o^2 \cosh ^2 (t/R)+3 \right)}{12 G } + {\cal O} \left( \epsilon \right). \end{eqnarray} Recalling the formula in \eq{res:coshoarea}, this entanglement entropy can be rewritten as \begin{eqnarray} S_E =\frac{ {{\cal A}}_1 (t) R}{4 G}-\frac{\Omega_1 R^2 \left(\theta_o^2 \cosh ^2 (t/R)+3 \right)}{12 G } + {\cal O} \left( \epsilon \right) , \end{eqnarray} where ${\cal A}_1 (t)$ indicates the area of the entangling surface at a given time $t$. The leading contribution to the entanglement entropy, as expected, satisfies the area law even in the time-dependent space.
Expanding it further in the early time, the entanglement entropy leads to \begin{eqnarray} S_E = \frac{ \bar{\cal A}_1 R}{4 G } -\frac{\Omega _1 R^2}{4 G } -\frac{\theta_o^2 \Omega _1R^2}{12 G } + \left( \frac{\bar{\cal A}_1}{8 G R} -\frac{\theta_o^2 \Omega _1}{12 G} \right) t^2 + {\cal O} \left( t^4 \right) , \end{eqnarray} where $\bar{\cal A}_1 = {\cal A}_1 (0)$. This result shows that the entanglement entropy in the early time increases by $t^2$ \begin{eqnarray} S_E (t) - S_E(0) \approx \left( \frac{\bar{\cal A}_1}{8 G R} -\frac{\theta_o^2 \Omega _1}{12 G} \right) t^2 . \end{eqnarray} It also shows that the increase of the entanglement entropy is proportional to the area of the entangling surface at leading order. In order to see the entanglement entropy in the late time, we must go beyond the perturbative expansion. After finding a numerical solution satisfying \eq{eq:generaleq}, we investigate how the corresponding entanglement entropy increases in time. In Fig. \ref{d3nfig1}, we depict the value of $S_E / \left( R^2 \cosh (t/R) \right)$ and its time derivative. In Fig. \ref{d3nfig1}(a), the value of $S_E /\left( R^2 \cosh (t/R) \right)$ approaches a constant in the late time. This fact becomes manifest in Fig. \ref{d3nfig1}(b), where the time derivative of $S_E / \left( R^2 \cosh (t/R) \right)$ approaches zero in the late time. Consequently, we can see that the entanglement entropy increases exponentially ($S_E \sim e^{t/R}$) in the late time (see Fig. \ref{d3nfig0}). 
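The quadratic early-time growth can be cross-checked numerically by expanding $S_E(t)$ above and comparing the $t^2$ coefficient against $\bar{\cal A}_1/(8 G R) - \theta_o^2 \Omega_1/(12 G)$. The parameter values in this sketch are illustrative:

```python
import math

# Numerical cross-check of the early-time expansion: the t^2 coefficient
# of S_E(t) should equal A1_bar/(8 G R) - theta_o^2 Omega_1/(12 G).
R, G, O1, to, eps = 2.0, 1.0, 2 * math.pi, 0.1, 1e-3   # illustrative values

def S_E(t):
    A1 = to * O1 * R**2 * math.cosh(t / R) / eps       # area of entangling surface
    return A1 * R / (4 * G) - O1 * R**2 * (to**2 * math.cosh(t / R)**2 + 3) / (12 * G)

h = 1e-3
coeff_num = (S_E(h) - 2 * S_E(0.0) + S_E(-h)) / (2 * h**2)   # t^2 coefficient
A1_bar = to * O1 * R**2 / eps                                # A1 at t = 0
coeff_exact = A1_bar / (8 * G * R) - to**2 * O1 / (12 * G)
print(coeff_num, coeff_exact)
```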
\begin{figure} \begin{center} \hspace{-0.0cm} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig21}} \hspace{-0.0cm} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig22}} \caption{The value of $S_E / \left( R^2 \cosh (t/R) \right)$ (a) and its time derivative (b) for $d=3$, where we take $\epsilon=1/1000$, $R=100$, $\theta_o = 1/20$ and $G=1$.} \label{d3nfig1} \end{center} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[width=0.45\textwidth]{fig3}} \caption{The entanglement entropy as a function of time for $d=3$, where we take $\epsilon=1/1000$, $R=100$, $\theta_o = 1/20$ and $G=1$.} \label{d3nfig0} \end{center} \end{figure} Repeating the same calculation for $d=4$, the entanglement entropy of the $d=4$ observable system, similar to the $d=3$ case, increases by $t^2$ in the early time and grows exponentially in the late time. In the late time, the entanglement entropy grows as $S_E \sim e^{t/R}$ for $d=3$ and $S_E \sim e^{2t/R}$ for $d=4$, which becomes manifest in Fig. \ref{d4nfig1}. These results imply that the entanglement entropy of the expanding observable system increases by $t^2$ in the early time regardless of $d$ and grows as $S_E \sim e^{(d-2) t/R}$ in the late time for a general $d$. For the black hole formation corresponding to the thermalization of the dual field theory, the entanglement entropy usually increases by $t^2$ in the early time, similar to the expanding observable system. However, in the late time of the thermalization the entanglement entropy is saturated and becomes a thermal entropy, while the entanglement entropy of the expanding observable system increases exponentially in the late time.
\begin{figure} \begin{center} \hspace{-0.0cm} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig41}} \hspace{-0.0cm} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig42}} \caption{The time dependence of the entanglement entropy for $d=4$, where we take $\epsilon=1/1000$, $R=100$, $\theta_o = 1/20$ and $G=1$.} \label{d4nfig1} \end{center} \end{figure} \section{Entanglement entropy of the visible universe in the inflationary cosmology} \label{sec:4} In the previous section, we studied the quantum entanglement of the expanding observable system which is described by the constant $\theta_o$. In this section, we investigate the entanglement entropy of the visible universe in the inflationary cosmology. In an inflationary model, there exists a natural way to divide the entire universe into two parts. Because of the growing scale factor in the inflationary model, there exists an invisible universe which we cannot see forever. On the other hand, the universe we can see is called the visible universe, and its boundary, called the cosmic event horizon, is the border between the visible and invisible universes. In this case, the invisible universe is causally disconnected from us. Due to the existence of this natural border between the two universes in the inflationary model, it would be interesting to study the quantum correlation between them. In this section, we will investigate such an entanglement entropy for a four-dimensional inflationary cosmology. Let us first define the cosmic event horizon as the boundary of the visible universe. From \eq{metric:general} for $d=4$, the boundary metric at $z=\epsilon$ reads \begin{eqnarray} ds_B^2 = \frac{R^2}{ \epsilon^2} \left[ - d t^2 + R^2 \cosh^2 (t/R) \ \left( d \theta^2 + \sin^2 \theta d \Omega_{d-2}^2 \right) \right] , \end{eqnarray} which describes ${\bf R}^+ \times {\bf S}^3$.
In order to interpret the boundary metric as the cosmological one, we introduce a cosmological time $\tau$ and Hubble constant $H$ such that \begin{eqnarray} \label{rel:timeH} \tau = \frac{R}{\epsilon} t \quad {\rm and} \quad H = \frac{\epsilon}{R^2} . \end{eqnarray} Then, the boundary metric reduces to the one representing an inflationary cosmology \begin{eqnarray} ds_B^2 = - d \tau^2 + \frac{\cosh^2 (H \tau)}{H^2} \ \left( d \theta^2 + \sin^2 \theta d \Omega_{d-2}^2 \right) , \end{eqnarray} where the scale factor is given by $a(\tau)= \cosh ( H \tau) /H $. Due to the nontrivial scale factor, the distance travelled by light is restricted to a finite region whose boundary by definition corresponds to the cosmic event horizon. More precisely, the cosmic event horizon in the above cosmological metric is determined by \begin{eqnarray} \label{res:ceventh} d(\tau) = a(\tau) \int_{\tau}^{\infty} \frac{c \ d \tau'}{a(\tau')} = \left[ \frac{\pi}{2} - 2 \arctan \left( \tanh \frac{H \tau}{2} \right) \right] \frac{\cosh H \tau}{H}, \end{eqnarray} where the light speed was taken to be $c=1$. In Fig. \ref{d4nfig2}(a), we plot how the cosmic event horizon changes as the cosmological time $\tau$ evolves. In the early inflation era, the cosmic event horizon decreases as time goes on, whereas it approaches a constant value $1/H$ in the late inflation era, which is a typical feature of the dS space. The existence of the cosmic event horizon indicates that the visible universe, the inside of the horizon, is causally disconnected from the invisible universe, the outside of the horizon \cite{Gibbons:1977mu,MargalefBentabol:2013bh,Anderson:1983nq,Anderson:1984jf}. In other words, if we are at the center of the visible universe, we can never receive any information from the invisible universe. Even in this situation, there can exist a nontrivial quantum correlation between them, which can be measured by the entanglement entropy.
From the viewpoint of the entanglement entropy, the cosmic event horizon naturally plays the role of an entangling surface which divides a system into two subsystems. Therefore, it would be interesting to investigate the entanglement entropy of the inflationary cosmology to know how our visible universe is quantum mechanically correlated with the invisible universe we cannot see forever. To go further, let us re-express the cosmic event horizon in terms of the angle appearing in the AdS space. To distinguish the cosmic event horizon from the previous expanding entangling surface parametrized by $\theta_o$, we use a different symbol, $\theta_v$, which, unlike $\theta_o$, is a function of $\tau$. Assuming that we are at the north pole of the three-dimensional sphere denoted by $\theta=0$, our visible universe can be characterized by $0 \le \theta \le \theta_v$. In this case, the radius of the entangling surface is determined from the AdS metric \begin{eqnarray} l = \int_0^{\theta_v} d \theta \ \frac{\cosh H \tau }{H} = \frac{\theta_v \cosh H \tau }{H} . \end{eqnarray} Because the radius of the entangling surface must be identified with the cosmic event horizon, comparing the two determines $\theta_v$ as a function of the cosmological time \begin{eqnarray} \label{rel:cosmicthv} \tan \left( \frac{\pi}{4} - \frac{\theta_v}{2} \right) = \tanh \frac{H \tau}{2} . \end{eqnarray} This result shows that $\theta_v$ starts at $\pi/2$ at $\tau=0$ and gradually decreases to $0$ at $\tau=\infty$, while the subsystem size $l$ approaches the constant value $1/H$. In the late inflation era, the cosmic event horizon becomes a constant independent of the cosmological time, $d(\tau) = 1/H$. In Fig. \ref{d4nfig2}(b), we plot $\theta_v$ as a function of the cosmological time. In this figure, $\theta_v$ starts from $\pi/2$ at $\tau=0$ and monotonically and rapidly decreases to $0$ as the cosmological time goes on.
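The relation \eq{rel:cosmicthv} can be verified numerically by computing $\theta_v = H d(\tau)/\cosh(H\tau)$ from the closed form of the event horizon and checking the identity, together with the limits $\theta_v(0) = \pi/2$ and $\theta_v \approx 2 e^{-H\tau}$ at late times. Here $H=1$ is chosen for illustration:

```python
import math

# theta_v from l = d(tau): theta_v = H d(tau)/cosh(H tau), with the closed
# form of d(tau). Check tan(pi/4 - theta_v/2) = tanh(H tau/2) at sample times.
H = 1.0

def theta_v(tau):
    d = (math.pi / 2 - 2 * math.atan(math.tanh(H * tau / 2))) * math.cosh(H * tau) / H
    return H * d / math.cosh(H * tau)

for tau in (0.0, 0.5, 1.0, 3.0, 6.0):
    tv = theta_v(tau)
    print(tau, math.tan(math.pi / 4 - tv / 2) - math.tanh(H * tau / 2))

# Late-time behaviour: theta_v ~ 2 exp(-H tau)
print(theta_v(6.0), 2 * math.exp(-6.0))
```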
\begin{figure} \begin{center} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig51}} \hspace{0.5cm} \subfigure[][]{\includegraphics[width=0.45\textwidth]{fig52}} \caption{The cosmic event horizon (a) and $\theta_v$ (b) as functions of the cosmological time $\tau$, where we take $H=1$.} \label{d4nfig2} \end{center} \end{figure} Using the $\theta_v$ found above, it is possible to calculate the entanglement entropy of the visible universe holographically. Before performing the calculation, it is worth noting that the cosmological time and the Hubble constant are defined only at the boundary. The minimal surface corresponding to the entanglement entropy of the visible universe is extended to the bulk of the dual geometry, so that we cannot exploit the definition of $\tau$ and $H$ in the course of calculating the area of the minimal surface. After the calculation, however, we can replace $t$ and $\epsilon$ with $\tau$ and $H$ through \eq{rel:timeH}. This is because the resulting area of the minimal surface represents the entanglement entropy defined at the boundary at which $\tau$ and $H$ are well defined. \subsection{Entanglement entropy at \texorpdfstring{$\tau=0$}{tau=0}} For simplicity, let us first consider the entanglement entropy at $\tau=0$. Using the relation in \eq{rel:timeH}, $\tau=0$ implies $t=0$ regardless of $\epsilon$. For $d=4$, the holographic entanglement entropy formula is given by \eq{eq:entangleform} with $t=0$ and $\theta_v$ instead of $\theta_o$. If we instead regard $\theta$ as a function of $z$, the corresponding entanglement entropy in the inflationary model can be rewritten as \begin{eqnarray} S_E = \dfrac{\Omega_{2} R^{5} }{4 G} \int_{\epsilon}^{\infty} d z \ \frac{\sin^{2}\theta }{z^{3}} \sqrt{ R^2 \dot{\theta}^2 + \dfrac{1}{1 + z^2/R^2} } , \end{eqnarray} where the dot indicates a derivative with respect to $z$.
The equation of motion derived from this action admits a specific solution satisfying $\dot{\theta}=0$ with $\theta = \pm \pi/2$. This solution corresponds to an equatorial plane of $S^{3}$. Performing the above integral with this equatorial plane solution, we finally obtain \begin{eqnarray} S_E = \dfrac{\Omega_{2} R^{5} }{4 G} \left( \frac{1}{2 \epsilon ^2} -\frac{1}{2 R^2} \log \frac{2 R}{\epsilon} + \frac{1}{4 R^2} \right) . \end{eqnarray} If we interpret $\epsilon$ as the UV cut-off, this result shows the power-law divergence together with the logarithmic divergence, as expected in the entanglement entropy calculation for $d=4$. Rewriting $\epsilon$ in terms of $H$ by using \eq{rel:timeH}, we finally obtain the following entanglement entropy at $\tau=0$ \begin{eqnarray} \label{res:HEEatt0} S_E = \dfrac{\Omega_{2} R }{8 G H^2} - \dfrac{\Omega_{2} R^{3} }{8 G} \log \frac{2}{H R} + \dfrac{\Omega_{2} R^{3} }{16 G} . \end{eqnarray} \subsection{Entanglement entropy in the late inflation era} In the inflationary cosmology, unlike the previous expanding system, a perturbative calculation of the entanglement entropy is possible in the late inflation era because $\theta_v$ becomes small at large $t$ or $\tau$, so we can apply the previous perturbative expansion of $z$. Using the perturbation of $z$, the leading contribution and the first correction to the entanglement entropy are given by \begin{eqnarray} S_0 &=& \frac{\Omega_2 R^5 \cosh^2 (t/R)}{4 G} \int_0^{\theta_v-\theta_c} d \theta \ \frac{\theta^2}{ z_0^3} \sqrt{ z_0'^2 +R^2 \cosh^2 (t/R) } , \\ S_2 &=& - \frac{\Omega _2 R^6 \cosh^2 (t/R) }{8 G } \int_0^{\theta_v-\theta_c} d \theta \ \frac{\theta \left(z_0^3 z_0'^2+6 R^2 z_2 z_0'^2-2 R^2 z_0 z_0' z_2'+6 R^4 z_2 \cosh ^2 (t/R) \right)}{ z_0^4 R^3 \sqrt{z_0'^2+R^2 \cosh ^2 (t/R) }} . \nonumber \end{eqnarray} Note that, unlike the $d=3$ case, the upper limit of the integral range in $S_2$ contains $\theta_c$.
This is because we need to reintroduce $\theta_c$ to regularize an additional divergence appearing in $S_2$ for $d=4$. Substituting the leading solution in \eq{res:leadingsol} into $S_0$, we obtain the following leading contribution to the entanglement entropy \begin{eqnarray} S_0 = \frac{\theta_v R^3 \Omega_2}{16 G \theta_c} - \frac{\Omega_2 R^3}{16 G } \log \frac{2 \theta_v}{\theta_c} - \frac{\Omega_2 R^3 }{32 G } . \end{eqnarray} In this result, we can see that, when $\theta_c \to 0$, the leading contribution leads to the expected power-law and logarithmic divergences for $d=4$. Now, let us consider the deformation of the minimal surface described by $z_2$, which is governed by the following differential equation \begin{eqnarray} 0= z_2'' +\frac{2 }{\theta } z_2'-\frac{3 \theta_v^2 }{\left(\theta_v^2-\theta ^2\right){}^2} z_2 +\frac{\left(3 \theta_v^2-2 \theta ^2\right) R \cosh ^3 (t/R) }{\sqrt{\theta_v^2-\theta ^2} } . \end{eqnarray} This equation allows us to find the following exact solution \begin{eqnarray} z_2 &=& \frac{c_1 \left(\theta_v -\theta\right) ^2 }{\theta \sqrt{\theta_v^2-\theta ^2}}+\frac{c_2}{\sqrt{\theta_v^2-\theta ^2}} +\frac{(\theta ^5-5 \theta_v^2 \theta ^3-2 \theta_v^3 \theta ^2-2 \theta_v^4 \theta -2 \theta_v^5) R \cosh ^3 (t/R) }{6 \theta \sqrt{\theta_v^2-\theta ^2} } \nonumber\\ && +\frac{\left\{ \left(\theta_v +\theta \right){}^2 \log \left(\theta_v +\theta \right) - \left(\theta_v -\theta\right){}^2 \log \left(\theta_v -\theta \right) \right\} \theta_v^3 R \cosh ^3 (t/R) }{2 \theta \sqrt{\theta_v^2-\theta ^2} } , \end{eqnarray} where $c_1$ and $c_2$ are two integration constants. Imposing the two boundary conditions, $z_2 (\theta_v) = 0$ and $z'(0)=0$, discussed in the previous section, $c_1$ and $c_2$ are determined to be \begin{eqnarray} c_1 &=& \frac{\theta_v^3 R \cosh^3 (t/R) }{3 } , \nonumber\\ c_2 &=& \frac{\theta_v^4 R \cosh^3 (t/R) \left[ 5 - 6 \log (2 \theta_v) \right]}{3} .
\end{eqnarray} Substituting the obtained solutions into $S_2$ and performing the integral results in \begin{eqnarray} S_2 = -\frac{3 \theta_v^2\Omega_2 R^3 \cosh^2 (t/R) }{32G} \log\frac{2\theta_v}{\theta_c}+ \frac{11 \theta_v^2\Omega_2 R^3 \cosh^2 (t/R) }{64G} . \end{eqnarray} When $\theta_c \to 0$, the first correction gives rise to an additional logarithmic divergence, which is absent in the standard entanglement entropy. From the solutions obtained perturbatively, $\theta_c$ is determined in terms of $\epsilon$ \begin{eqnarray} \theta_c &=& \frac{\epsilon^2}{ 2 \theta_v R^2 \cosh^2 (t/R) } +\frac{\epsilon^4 }{8 \theta_v^3 R^4 \cosh^4 (t/R) } + \frac{\epsilon^4}{48 \theta_v R^4 \cosh^2 (t/R) } \nonumber\\ && - \frac{\epsilon^4}{4 \theta_v R^4 \cosh^2 (t/R) } \log \frac{2 \theta_v R \cosh (t/R) }{ \epsilon} + {\cal O} \left( \epsilon^6\right) . \end{eqnarray} Using this relation, the resulting entanglement entropy leads to \begin{eqnarray} S_E &=&\frac{R {\cal A}_2 (t) }{8 G } -\frac{\Omega_2 R^3 }{16 G} \log \frac{4 {\cal A}_2 (t) }{\Omega_2 R^2} -\frac{\Omega _2 R^3}{16 G } \nonumber\\ && + \frac{\theta_v^2 \Omega _2 R^3 \cosh ^2 (t/R) }{6 G } - \frac{\theta_v^2 R^3 \Omega _2 \cosh ^2 (t/R) }{16 G} \log \frac{4 {\cal A}_2 (t) }{\Omega_2 R^2} , \end{eqnarray} where the area of the cosmic event horizon is given by \begin{eqnarray} {\cal A}_2 (t) = \frac{\theta_v^2 R^4 \Omega _2 \cosh ^2 (t/R) }{ \epsilon^2} . \end{eqnarray} Replacing $t$ and $\epsilon$ by $\tau$ and $H$ by using \eq{rel:timeH}, $\theta_v$ and the area of the cosmic event horizon in the late inflation era ($H\tau \gg 1$) are approximated by \begin{eqnarray} \theta_v &\approx& 2 e^{-H \tau} , \nonumber\\ {\cal A}_2 (\tau) &\approx& \frac{\Omega_2}{H^2} .
\end{eqnarray} As a result, the entanglement entropy of the visible universe in the late inflation era leads to the following expression \begin{eqnarray} \label{res:HEEinLIE} S_E &\approx&\frac{\Omega_2 R }{8 G H^2} -\frac{\Omega_2 R^3 }{4 G} \log \frac{2 }{ R H} + \frac{ 5 \Omega _2 R^3 }{48 G } . \end{eqnarray} This result shows that the entanglement entropy of the visible universe in the late inflation era is time-independent and determined by the Hubble constant and the area of the cosmic event horizon. This is because the cosmic event horizon remains constant in the late inflation era. The change of the entanglement entropy during the inflation era is given by \begin{eqnarray} \Delta S_E \equiv S_E (\infty) - S_E(0) = -\frac{\Omega_2 R^3 }{8 G} \log \frac{2 }{ R H} + \frac{ \Omega _2 R^3 }{24 G } , \end{eqnarray} where the result in \eq{res:HEEatt0} was used. Since $HR = \epsilon/R \ll 1$, $\Delta S_E$ always becomes negative. This indicates that the quantum correlation between the visible and invisible universes decreases with time. In Fig. \ref{d4ent}, we plot how the entanglement entropy of the visible universe changes as the cosmological time goes on. As expected from the perturbative and analytic calculations, the entanglement entropy gradually decreases and finally approaches a constant value after an infinite time. \begin{figure} \begin{center} \subfigure{\includegraphics[width=0.45\textwidth]{fig6}} \caption{The entanglement entropy of the visible universe as a function of the cosmological time, where we take $\epsilon=1$, $R=1$ and $G=1$ for simplicity.} \label{d4ent} \end{center} \end{figure} \section{Discussion} \label{sec:5} In this work, we have studied the quantum entanglement entropy of the expanding system and the inflationary universe. In order to take into account the expanding system and universe holographically, we considered an AdS space whose boundary is given by a dS space.
In order to describe the quantum entanglement on the expanding system and space, we have investigated the holographic entanglement entropy of a subsystem on the boundary of the AdS space. In this model, we took two different subsystems. One of them corresponds to an expanding system in which we determined the subsystem size with a fixed $\theta_o$. In this case, since the volume of the boundary space increases with the cosmological time, the subsystem size also increases. In the early time era, we found that the entanglement entropy of an expanding system increases by $t^2$ regardless of the dimensionality of the system. In the late time era, on the other hand, we showed that, when the boundary of the AdS space expands at the rate $e^{t/R}$, the increase of the entanglement entropy of a $d$-dimensional system is proportional to $e^{(d-2) t/R}$ for a $d$-dimensional space-time. For a dS space, there is an important length scale called the cosmic event horizon. If an observer is at the center of a dS space, they cannot see outside the cosmic event horizon even after an infinite time evolution. In other words, the observer at the center of dS can never receive any information from outside the cosmic event horizon. From the quantum information viewpoint, the cosmic event horizon, like a black hole horizon, resembles the entangling surface dividing a total system into two subsystems. In the present model, the cosmic event horizon starts with $\theta_v=\pi/2$ at $\tau=0$ and eventually approaches $\theta_v=0$ at $\tau=\infty$ with a fixed $\theta_v e^{H \tau}$. In the late inflation era, the cosmic event horizon is given by the inverse of the Hubble constant, $d (\infty)= 1/H$. We showed that the entanglement entropy of the visible universe in the inflationary cosmology decreases continuously as time evolves and that it finally approaches a finite value independent of the cosmological time. \vspace{1cm} \section*{\small Acknowledgement} S. Koh (NRF-2016R1D1A1B04932574), J.
H. Lee (NRF-2016R1A6A3A01010320), C. Park (NRF-2016R1D1A1B03932371) and D. Ro (NRF-2017R1D1A1B03029430) were supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education. J. H. Lee, C. Park and D. Ro were also supported by the Korea Ministry of Education, Science and Technology, Gyeongsangbuk-Do and Pohang City. \bibliographystyle{ref_jhep}
\section{Introduction} The concept of a Generative Adversarial Network (GAN) was recently proposed as a method for producing new samples that appear similar to those in a given dataset \cite{goodfellow2014generative}. It consists of two neural networks: a generator that attempts to transform an input latent vector into a realistic sample, and a discriminator that attempts to identify fake examples. When trained using seismic velocity models, the generator learns to produce plausible velocity models from latent vectors. As the dimension of the latent vector space is smaller than that of the model space, this can be considered to be a form of model order reduction. As the generator is differentiable, gradients may be backpropagated through it, allowing the latent vector to be optimized. My hypothesis is that a GAN trained to produce realistic wave speed models, from latent vectors with fewer parameters than the full models, can be used to reduce the number of parameters to be inverted in seismic Full-Waveform Inversion (FWI). Fewer parameters should make the method less prone to overfitting, as this reduces the flexibility of the model, constraining it to be realistic. Using a GAN in this way could therefore be thought of as a form of regularization. A regularizer that favors plausible features, such as deformed layers of constant velocity and salt bodies, is difficult to express mathematically, but the GAN constructs one automatically. Reducing the number of model parameters should also make using stochastic optimization to find an initial model more computationally feasible. \section{Materials and Methods} The proposed method consists of two components: a GAN that generates realistic seismic velocity models, and a modified FWI implementation that incorporates the GAN. \subsection{Generative Adversarial Network} To evaluate the method, I test it on a small 2D model, so a GAN generator that produces 2D samples is needed. 
The DCGAN \cite{radford2015unsupervised} network structure has been shown to be successful for producing realistic 2D images. This consists of a latent vector with 100 elements, and five convolutional layers in both the generator and discriminator (convolution transpose for the generator). I use it with only one modification: I add a term to the cost function of the generator that penalizes wave speeds outside the range 1450 -- 5000 m/s. In my implementation, this cost is equal to the magnitude of the deviation from this range. Training the GAN requires many velocity model samples. To produce these, I use a simple code to generate models with $64 \times 64$ cells that consist of sedimentary layers and salt bodies. The sedimentary layers are constant-velocity layers, with the velocity generally increasing with depth, that are randomly distorted. To produce the salt body that is added to each model, I randomly distort a circle, and fill it with a constant velocity of approximately 4600 m/s. I use $2^{17} = 131072$ of these models, and train for eight epochs with the Adam optimizer using a batch size of 64 and a learning rate of 0.0001. \subsection{Full-Waveform Inversion} In conventional FWI, the residual between the true data and data produced by forward propagating waves through a model is backpropagated through the model to calculate the gradient of the cost function with respect to the model parameters. This gradient is then used to update the model parameters. To incorporate the GAN, I modify this last step. Instead of updating the wave speed model directly, I further backpropagate the gradient through the GAN to update the latent vector that produced the model. The models considered by FWI are therefore limited to those that can be produced by the generator, ensuring that they are always plausible, and reducing the number of parameters to the dimension of the generator's latent vector space.
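The chain-rule update underlying this modification can be illustrated with a toy differentiable generator. In the sketch below, the linear generator and quadratic misfit are stand-ins (not the DCGAN or the wave-equation misfit used here), chosen only so the Jacobian is explicit:

```python
import random

# Toy illustration of the latent-vector update: an FWI model-space gradient
# is pulled back through a differentiable generator via the chain rule.
random.seed(0)
LATENT, MODEL = 4, 12            # toy sizes; the paper uses 100 -> 64 x 64

# Linear "generator" m = G(z) = W z (stand-in for the trained DCGAN generator).
W = [[random.gauss(0, 1) / MODEL ** 0.5 for _ in range(LATENT)]
     for _ in range(MODEL)]

def generate(z):
    return [sum(W[i][j] * z[j] for j in range(LATENT)) for i in range(MODEL)]

m_true = generate([0.5, -1.0, 0.25, 2.0])   # "true" model, in range of G

def cost_and_model_grad(m):      # C = 0.5 ||m - m_true||^2, dC/dm = residual
    r = [mi - ti for mi, ti in zip(m, m_true)]
    return 0.5 * sum(ri * ri for ri in r), r

z = [random.gauss(0, 1) for _ in range(LATENT)]
lr = 0.1
for _ in range(3000):
    cost, dc_dm = cost_and_model_grad(generate(z))
    # Chain rule: dC/dz_j = sum_i (dC/dm_i)(dm_i/dz_j) = (W^T dC/dm)_j
    dc_dz = [sum(dc_dm[i] * W[i][j] for i in range(MODEL))
             for j in range(LATENT)]
    z = [zj - lr * gj for zj, gj in zip(z, dc_dz)]

print(cost_and_model_grad(generate(z))[0])   # misfit after optimization
```

In the real method, the pull-back through the generator is computed by the deep-learning framework's automatic differentiation rather than with an explicit Jacobian.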
FWI requires an initial model from which it begins iteratively converging toward the true model. In the proposed method, the latent vector that produces this initial model must first be found. I use a random search to find the initial latent vector. To do this, I create random latent vectors and forward propagate through the resulting models to calculate the value of the FWI cost function associated with each. I scale the data residual by time to approximately compensate for spreading losses. To reduce computational cost, I sum all of the shots into a single shot with multiple sources. I use the vector with the lowest cost function value as the initial latent vector for gradient-based optimization. \subsection{Method overview} For clarity, the following describes the steps of the proposed method: \begin{enumerate} \item Train the GAN using example seismic models \item Find the initial latent vector \begin{enumerate} \item Initialize the latent vector with random numbers \item Apply the generator to the vector to create a model \item Forward propagate sources through the model to calculate its FWI cost function value \item Iterate until stopping criterion, and use latent vector with lowest cost \end{enumerate} \item Iteratively optimize the latent vector \begin{enumerate} \item Apply the generator to the vector to create a model \item Perform one iteration of FWI to calculate the gradient of the cost function with respect to the model \item Backpropagate the gradient through the generator to update the latent vector \item Iterate until stopping criterion \end{enumerate} \end{enumerate} \subsection{SEAM dataset} To test the ability of the method to perform seismic inversion, I use a model derived from the SEAM Phase I model~\cite{fehler2011seam}. I use a 2D section of the Vp model extracted from the 23900 m North line, covering from 9600 m to 16000 m in the horizontal direction and from 100 m to 6500 m in the depth direction.
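The random-search initialization described above (steps 2(a)--2(d)) can be sketched as follows. The misfit function here is a stand-in for forward propagating the summed shot through each generated model:

```python
import random

# Sketch of the random search for an initial latent vector. The misfit below
# is a placeholder: distance to a hidden vector, instead of the time-scaled
# data residual of a single summed shot used in the actual method.
random.seed(1)
LATENT = 100                      # latent dimension of the DCGAN
N_TRIALS = 50                     # number of random vectors, as in the paper

z_hidden = [random.gauss(0, 1) for _ in range(LATENT)]
def fwi_cost(z):
    return sum((a - b) ** 2 for a, b in zip(z, z_hidden))

costs = []
best_z, best_cost = None, float("inf")
for _ in range(N_TRIALS):
    z = [random.gauss(0, 1) for _ in range(LATENT)]   # draw a latent vector
    c = fwi_cost(z)                                   # evaluate its misfit
    costs.append(c)
    if c < best_cost:
        best_z, best_cost = z, c

print(best_cost)   # best_z seeds the gradient-based optimization
```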
The extracted model is interpolated onto a grid with 100 m cell spacing (making it $64 \times 64$ cells). The dataset contains shots forward modeled on this model, using a 1 Hz Ricker wavelet as the source with a source spacing of 500 m and receiver spacing of 100 m along the top surface. For conventional FWI, I use 20 LBFGS steps, each requiring that the cost and gradient of the entire dataset are calculated 20 times. With 13 shots, and the equivalent of two forward modeling steps to calculate the cost and gradient, this equates to $20 \times 20 \times 13 \times 2 = 10400$ forward shot modeling steps. I use a learning rate of 0.1 as, unlike the proposed method, large step sizes can lead to unrealistically high wave speeds that require the wave propagator to use a small time step size. I start from an initial model that increases by 0.5 m/s/m with depth from 1490 m/s at the surface. For the proposed method, I use 50 vectors in the random search for an initial latent vector. I then run 5 LBFGS steps with a learning rate of 1 to optimize this result. The combination of these involves $50 + 5 \times 20 \times 13 \times 2 = 2650$ forward shot modeling steps. \section{Results} I present results from the two stages of the method: training the GAN to produce realistic seismic models, and using the resulting generator during FWI to invert for a model. The code to reproduce these results is included in the ancillary files accompanying this article. \subsection{GAN training} \begin{figure} \includegraphics{gan_models.eps} \caption{Example models from the GAN training dataset and produced by the generator after training. The generated models look similar to the training models.} \label{fig:gan_models} \end{figure} A random selection of models used to train the GAN, and output models produced by the generator network from random latent vectors, are shown in Figure \ref{fig:gan_models}.
\section{Introduction} The concept of a Generative Adversarial Network (GAN) was recently proposed as a method for producing new samples that appear similar to those in a given dataset \cite{goodfellow2014generative}. It consists of two neural networks: a generator that attempts to transform an input latent vector into a realistic sample, and a discriminator that attempts to identify fake examples. When trained using seismic velocity models, the generator learns to produce plausible velocity models from latent vectors. As the dimension of the latent vector space is smaller than that of the model space, this can be considered to be a form of model order reduction. As the generator is differentiable, gradients may be backpropagated through it, allowing the latent vector to be optimized. My hypothesis is that a GAN trained to produce realistic wave speed models, from latent vectors with fewer parameters than the full models, can be used to reduce the number of parameters to be inverted in seismic Full-Waveform Inversion (FWI).
Fewer parameters should make the method less prone to overfitting, as this reduces the flexibility of the model, constraining it to be realistic. Using a GAN in this way could therefore be thought of as a form of regularization. A regularizer that favors plausible features, such as deformed layers of constant velocity and salt bodies, is difficult to express mathematically, but the GAN constructs one automatically. Reducing the number of model parameters should also make using stochastic optimization to find an initial model more computationally feasible. \section{Materials and Methods} The proposed method consists of two components: a GAN that generates realistic seismic velocity models, and a modified FWI implementation that incorporates the GAN. \subsection{Generative Adversarial Network} To evaluate the method, I test it on a small 2D model, so a GAN generator that produces 2D samples is needed. The DCGAN \cite{radford2015unsupervised} network structure has been shown to be successful for producing realistic 2D images. It consists of a latent vector with 100 elements, and five convolutional layers in both the generator and discriminator (convolution transpose for the generator). I use it with only one modification: I add a term to the cost function of the generator that penalizes wave speeds outside the range 1450 -- 5000 m/s. In my implementation, this cost is equal to the magnitude of the deviation from this range. Training the GAN requires many velocity model samples. To produce these, I use a simple code to generate models with $64 \times 64$ cells that consist of sedimentary layers and salt bodies. The sedimentary layers are randomly distorted constant-velocity layers, with the velocity generally increasing with depth. To produce the salt body that is added to each model, I randomly distort a circle, and fill it with a constant velocity of approximately 4600 m/s.
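The out-of-range wave-speed penalty described above can be sketched as follows. This is a minimal illustration, not the author's code; the function name is invented, and summing the per-cell deviations is one plausible reading of "magnitude of the deviation from this range":

```python
import numpy as np

def wavespeed_penalty(v, lo=1450.0, hi=5000.0):
    """Penalty equal to the magnitude of the deviation of wave speeds
    (m/s) from the allowed range [lo, hi]; zero inside the range."""
    v = np.asarray(v, dtype=float)
    return float(np.sum(np.maximum(lo - v, 0.0) + np.maximum(v - hi, 0.0)))
```

During GAN training, a term like this, evaluated on each generated model, would be added to the generator's cost.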
I use $2^{17} = 131072$ of these models, and train for eight epochs with the Adam optimizer using a batch size of 64 and a learning rate of 0.0001. \subsection{Full-Waveform Inversion} In conventional FWI, the residual between the true data and the data produced by forward propagating waves through a model is backpropagated through the model to calculate the gradient of the cost function with respect to the model parameters. This gradient is then used to update the model parameters. To incorporate the GAN, I modify this last step. Instead of updating the wave speed model directly, I further backpropagate the gradient through the GAN to update the latent vector that produced the model. The models considered by FWI are therefore limited to those that can be produced by the generator, ensuring that they are always plausible, and reducing the number of parameters to the dimension of the generator's latent vector space. FWI requires an initial model from which it begins iteratively converging toward the true model. In the proposed method, the latent vector that produces this initial model must first be found. I use a random search to find the initial latent vector. To do this, I create random latent vectors and forward propagate through the resulting models to calculate the value of the FWI cost function associated with each. I scale the data residual by time to approximately compensate for spreading losses. To reduce computational cost, I sum all of the shots into a single shot with multiple sources. I use the vector with the lowest cost function value as the initial latent vector for gradient-based optimization.
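The random-search initialization just described can be sketched as below. Here `generator` and `fwi_cost` are hypothetical placeholders standing in for the trained generator network and the (time-scaled, summed-shot) FWI cost evaluation:

```python
import numpy as np

def find_initial_latent(generator, fwi_cost, n_vectors=50, latent_dim=100, seed=0):
    """Evaluate the FWI cost of models generated from random latent
    vectors and return the vector achieving the lowest cost."""
    rng = np.random.default_rng(seed)
    best_z, best_cost = None, np.inf
    for _ in range(n_vectors):
        z = rng.standard_normal(latent_dim)
        cost = fwi_cost(generator(z))
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z, best_cost
```

The winning vector then seeds the gradient-based optimization stage.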
\subsection{Method overview} For clarity, the following describes the steps of the proposed method: \begin{enumerate} \item Train the GAN using example seismic models \item Find the initial latent vector \begin{enumerate} \item Initialize the latent vector with random numbers \item Apply the generator to the vector to create a model \item Forward propagate sources through the model to calculate its FWI cost function value \item Iterate until stopping criterion, and use latent vector with lowest cost \end{enumerate} \item Iteratively optimize the latent vector \begin{enumerate} \item Apply the generator to the vector to create a model \item Perform one iteration of FWI to calculate the gradient of the cost function with respect to the model \item Backpropagate the gradient through the generator to update the latent vector \item Iterate until stopping criterion \end{enumerate} \end{enumerate} \subsection{SEAM dataset} To test the ability of the method to perform seismic inversion, I use a model derived from the SEAM Phase I model~\cite{fehler2011seam}. I use a 2D section of the Vp model extracted from the 23900 m North line, covering from 9600 m to 16000 m in the horizontal direction and from 100 m to 6500 m in the depth direction. The extracted model is interpolated onto a grid with 100 m cell spacing (making it $64 \times 64$ cells). The dataset contains shots forward modeled on this model, using a 1 Hz Ricker wavelet as the source with a source spacing of 500 m and receiver spacing of 100 m along the top surface. For conventional FWI, I use 20 LBFGS steps, each requiring that the cost and gradient of the entire dataset are calculated 20 times. With 13 shots, and the equivalent of two forward modeling steps to calculate the cost and gradient, this equates to $20 \times 20 \times 13 \times 2 = 10400$ forward shot modeling steps. 
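Step 3(b)-(c) of the method overview amounts to a chain rule: the FWI gradient with respect to the model is pulled back through the generator to give a gradient with respect to the latent vector, $\mathrm{d}J/\mathrm{d}z = (\mathrm{d}m/\mathrm{d}z)^\top\, \mathrm{d}J/\mathrm{d}m$. A minimal numerical sketch, using a hypothetical toy generator and a finite-difference Jacobian in place of automatic differentiation through the real network:

```python
import numpy as np

def latent_gradient(generator, z, model_grad, eps=1e-6):
    """Chain rule dJ/dz = (dm/dz)^T dJ/dm, with dm/dz estimated by
    finite differences on the (toy) generator."""
    g0 = generator(z).ravel()
    grad_z = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad_z[i] = (generator(z + dz).ravel() - g0) @ model_grad.ravel() / eps
    return grad_z
```

A plain gradient-descent update would then be `z -= lr * latent_gradient(...)`; the paper instead feeds this gradient to LBFGS.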
I use a learning rate of 0.1 as, unlike in the proposed method, large step sizes can lead to unrealistically high wave speeds that require the wave propagator to use a small time step size. I start from an initial model that increases by 0.5 m/s/m with depth from 1490 m/s at the surface. For the proposed method, I use 50 vectors in the random search for an initial latent vector. I then run 5 LBFGS steps with a learning rate of 1 to optimize this result. The combination of these involves $50 + 5 \times 20 \times 13 \times 2 = 2650$ forward shot modeling steps. \section{Results} I present results from the two stages of the method: training the GAN to produce realistic seismic models, and using the resulting generator during FWI to invert for a model. The code to reproduce these results is included in the ancillary files accompanying this article. \subsection{GAN training} \begin{figure} \includegraphics{gan_models.eps} \caption{Example models from the GAN training dataset and produced by the generator after training. The generated models look similar to the training models.} \label{fig:gan_models} \end{figure} A random selection of models used to train the GAN, and output models produced by the generator network from random latent vectors, are shown in Figure \ref{fig:gan_models}. The generator appears to have learned to produce realistic seismic models, as the generated models are similar to the training models. \subsection{FWI} \begin{figure} \includegraphics{fwi.eps} \caption{The true model, conventional FWI result, initial GAN model, and optimized GAN model.} \label{fig:fwi} \end{figure} The results of conventional FWI and the proposed method are presented in Figure \ref{fig:fwi}. The conventional FWI result is quite smooth, as expected. It also contains only the top portion of the salt body, which then fades to the background model with depth. More of the salt may have been captured by using modifications such as those proposed by \cite{esser2018total}.
The initial model for the proposed method, found after a random search, is already quite close to the true model. Iterative optimization improves the result further. The result looks plausible. However, compared to the true model, the shape of the salt body is not fully correct, and the lowest portion of it is missing. The structure of the sedimentary layers also appears to be less accurate than in the conventional result, but the wave speed is closer to the truth. One may argue that, with its sharp salt body edges, the proposed method favors precision over accuracy, but the result is closer to the true model than that found by conventional FWI, with RMS model errors of 510 m/s and 758 m/s respectively. \section{Discussion} \paragraph{GAN training and plausible models} The generator network learns to produce models similar to the input training models during GAN training. The plausibility of the generated models thus depends on that of the training models. I use simple synthetic models during training. It is likely that better results would have been obtained from the modified FWI proposal if more realistic models were used for training. In addition to only producing plausible models, another concern is whether the generator can produce all possible plausible models. GAN training is notoriously susceptible to problems such as collapsing to a state where the generator always produces the same output. Care (and luck) is therefore needed to ensure that GAN training is successful. \paragraph{Computational cost} The proposed method has three differences from conventional FWI: the addition of a generator, a different method of finding an initial model, and the need to train the GAN. Forward modeling through the generator network to produce a model and backpropagating through it to update the latent vector are of negligible computational cost compared to wave propagation through the model. Adding the generator therefore does not noticeably increase the cost of FWI. 
In the results above, I find an initial latent vector using a random search. The computational cost of this depends on the number of random vectors that are evaluated, which I expect would need to be higher for more realistically sized models. Other initialization methods are available, as I discuss below. Using a GPU, training the GAN takes about the same amount of time as the FWI step for the results above. Once the generator is trained, however, it can be used for any dataset with the same model size. This is discussed further below. \paragraph{Initial model} If a good initial model is available, such as one derived from tomography, then it may be used instead of finding one with a random search. A latent vector that approximately produces the chosen initial model must be found. This can be achieved by starting with a random latent vector and using the mean squared error between the model generated from this and the desired initial model to iteratively optimize the latent vector. An outline of this approach follows. \begin{enumerate} \item Initialize the latent vector with random numbers \item Iteratively optimize the vector \begin{enumerate} \item Apply the generator to the vector to create a model \item Calculate the residual between the generated model and the desired initial model \item Backpropagate through the generator to update the latent vector \item Iterate until stopping criterion \end{enumerate} \end{enumerate} \paragraph{Generating different model sizes} In my implementation, the generator network produces a $64 \times 64$ array. This suggests that a new generator would need to be trained if another model size is desired. It is quite likely, however, that the models it produces would still look plausible even if they were stretched, for example to produce a $64 \times 128$ model. 
Stretching the outputs in this way would allow the same generator to be used for a variety of model sizes, as long as the model gradient is compressed back to $64 \times 64$ so that it can be backpropagated through the generator to update the latent vector during FWI. It may also be possible to stitch together multiple generated models to produce a larger model. \paragraph{Local minima} Conventional FWI suffers from a problem with local minima, and so a good initial model is often required to ensure convergence to a satisfactory result. I expect the proposed method to be similarly dependent on a good initial model. It is even possible that this dependence may be stronger. For example, I tried starting the proposed FWI method from a random initial latent vector, and it did not appear to converge toward the true model. This problem may be mitigated by the reduced cost of randomly searching for an initial model. \paragraph{Related work} The most closely related work appears to be \cite{mosser2018rapid}, in which the authors use GANs to transform a seismic image into a seismic wave speed model. As a wave speed model is needed to create a seismic image, this method may be applicable later in the processing workflow than my proposal. In another use of GANs for seismic applications, the authors of \cite{siahkoohiseismic} use one to replace missing seismic data. Although GANs are not used, \cite{lewis2017deep} has similarities as it uses deep neural networks during FWI, identifying areas likely to contain salt. \section{Conclusion} The hypothesis appears to be true. It is possible to train a GAN that generates plausible seismic models. This generator can be used in a random search to quickly find a good initial latent vector. Optimizing this vector, by combining conventional FWI with the generator, produces a realistic result that may be a good starting model for further refinement with conventional FWI.
\section{Introduction} \label{section_introduction} Quantum interference is often considered to be one of the fundamental features of quantum theory, responsible for quantum advantage in a number of computational tasks. However, there is a known limit to how much interference quantum theory can exhibit. Sorkin proposed a hierarchy of theories based on the maximum order of interference they exhibit \cite{Sorkin1,Sorkin2}, which is quantified by the maximum number of slits on which a theory shows an irreducible interference behaviour. Interference in quantum theory is limited to the second order: the interference pattern of two slits cannot be reduced to the pattern of single slits, but the interference pattern of three slits can be reduced to the pattern arising from pairs of slits and single slits. This limitation has recently been confirmed in various experiments \cite{sinha2010ruling,park2012three,sinha2015superposition,kauten2015obtaining,jin2017experimental}. A natural question arises: Why is interference in Nature limited to the second order? Does the presence of higher-order interference create any paradoxical consequences in Nature that conflict with some of the principles we believe to be fundamental? Recent work has shown that higher-order interference---i.e. interference of order higher than the second---is forbidden \cite{HOP} in physical theories which admit a fundamental level of description where everything is pure and reversible \cite{TowardsThermo,Purity}. Further work has ruled out higher-order interference based on thermodynamic considerations \cite{Barnum-thermo,TowardsThermo}. Other literature has instead focused on the analysis of specific features that theories with higher-order interference would possess, e.g.\ whether they would provide any advantage in certain computational tasks \cite{Lee-Selby-interference,Control-reversible,Lee-Selby-Grover,Oracles}.
It was also shown that theories having second-order interference and lacking interference of higher orders are relatively close to quantum theory \cite{Barnum-interference,Ududec-3slits,CozThesis,Niestegge}. Unfortunately, one of the major shortcomings in the study of higher-order interference is the scarcity of concrete models displaying such post-quantum features, so that it has so far been very hard to look for specific examples of paradoxical or counter-intuitive consequences. Two models---density cubes \cite{Density-cubes} and quartic quantum theory \cite{Quartic-theory}---have been proposed in the past, but are not fully defined operational theories, e.g.\ because they do not deal with composite systems \cite{Lee-Selby-interference}. This limitation precludes them from being used to study all possible consequences of higher-order interference, including a potential violation of the Tsirelson bound. In this article, we provide the first complete construction of a full-fledged operational theory exhibiting interference up to the fourth order. Our construction is inspired by the double-dilation construction of \cite{double-mixing} and is carried out within the framework of categorical probabilistic theories \cite{gogioso2017categorical}. The resulting theory of `density hypercubes' has composite systems, exhibits higher-order interference and possesses hyper-decoherence maps \cite{Quartic-theory,Lee-Selby-interference,lee2017no}. Quantum theory, with its second-order interference, is an extension of classical theory: the latter can be recovered by decoherence, which eliminates the second-order interference effects. Similarly, the theory of density hypercubes, with its third- and fourth-order interference, is an extension of quantum theory: the latter can now be recovered by hyper-decoherence, which eliminates third- and fourth-order interference effects. The paper is organized as follows.
In Section \ref{section_densityHypercubes}, we define the categorical probabilistic theory of density hypercubes using the double-dilation construction. In Section \ref{section_hyperDecoherence}, we define hyper-decoherence maps, and show that quantum theory is recovered in the Karoubi envelope. In Section \ref{section_higherOrderInterference}, we show that density hypercubes display interference of third- and fourth-order, but not of fifth-order and above. Finally, in Section \ref{section_conclusions}, we discuss open questions and future lines of research. Proofs of all results can be found in the Appendix. \vspace{12pt} \section{The Theory of Density Hypercubes} \label{section_densityHypercubes} \subsection{Construction of the theory} In this section, we define the categorical probabilistic theory of \defi{density hypercubes}, using a recently introduced construction known as \defi{double dilation} \cite{double-mixing}. The construction is done in two steps: first we define the category \DDCategory{\fHilbCategory}, containing hyper-quantum systems and processes between them, and only afterwards do we introduce quantum and classical systems, using (hyper-)decoherence and working in the Karoubi envelope \KaroubiEnvelope{\DDCategory{\fHilbCategory}}. The \defi{double-dilation category} \DDCategory{\fHilbCategory} is defined to be a symmetric monoidal subcategory of \CPMCategory{\fHilbCategory} with objects---the \defi{density hypercubes}---in the form $\DDCategory{H}:= \mathcal{H} \otimes \mathcal{H}$, where $H$ is a finite-dimensional Hilbert space and $\mathcal{H}:=H^\ast \otimes H$ is the corresponding doubled system in the CPM category. Even though \DDCategory{\fHilbCategory} is symmetric monoidal and has its own graphical calculus, in this work we will always use the graphical calculi of \CPMCategory{\fHilbCategory} and \fHilbCategory{} to talk about density hypercubes.
When working in \CPMCategory{\fHilbCategory}, we will use solid black lines for morphisms and calligraphic letters (e.g. $\mathcal{H}$) for objects. When working in \fHilbCategory, we will use solid grey lines for morphisms and plain letters (e.g. $H$) for objects. The morphisms $\DDCategory{H} \rightarrow \DDCategory{K}$ in \DDCategory{\fHilbCategory} are the CP maps $\mathcal{H}\otimes \mathcal{H} \rightarrow \mathcal{K} \otimes\mathcal{K}$ taking the following form for a doubled CP map $F$, some auxiliary systems $\mathcal{E},\mathcal{G}$ and some special commutative $\dagger$-Frobenius algebra $\ZdotSym$ (henceforth known as a \defi{classical structure}) on $G$ in \fHilbCategory: \begin{equation} \scalebox{0.8}{$ \input{pictures/doublyMixedCPmap.tikz} $} \end{equation} In the diagram above, $F$ is a doubled CP map $\mathcal{H} \rightarrow \mathcal{G} \otimes \mathcal{K} \otimes \mathcal{E}$ in \CPMCategory{\fHilbCategory}---i.e. one in the form $F = f^\ast \otimes f$ for some $f:H \rightarrow G \otimes K \otimes E$ in \fHilbCategory---and we have used $\bar{F}$ to denote the CP map obtained by inverting the tensor product ordering of inputs and outputs of $f$ (for purely aesthetic reasons). We will always use upper-case letters (e.g. $F$) to denote doubled CP maps in \CPMCategory{\fHilbCategory}, lower-case letters to denote the corresponding linear maps in \fHilbCategory, and we will always write discarding maps explicitly. 
Composition in \DDCategory{\fHilbCategory} is the same as composition of CP maps, while tensor product is only slightly adjusted to take into account the doubled format of our new morphisms: \begin{equation} \scalebox{0.5}{$ \input{pictures/tensorProductDHmaps1.tikz} \hspace{3mm} \bigotimes \hspace{3mm} \input{pictures/tensorProductDHmaps2.tikz} \hspace{3mm} = \hspace{3mm} \input{pictures/tensorProductDHmaps3.tikz} $} \end{equation} Just as was the case for CP maps, maps of density hypercubes can all be obtained as composition of a ``doubled'' map and one or two ``discarding'' maps: \begin{equation} \underbrace{\addstackgap[6pt]{$ \scalebox{0.8}{$\input{pictures/DHpureDiscardingMaps1.tikz}$} $}}_{\text{doubled map}} \hspace{3cm} \underbrace{\addstackgap[6pt]{$ \scalebox{0.8}{$\input{pictures/DHpureDiscardingMaps2.tikz}$} $}}_{\text{discarding maps}} \end{equation} We refer to the discarding map obtained by doubling $\trace{\mathcal{E}}$ as the ``forest'' and to the discarding map obtained from the classical structure $\ZdotSym$ as the ``bridge''. The scalars of \DDCategory{\fHilbCategory} are exactly the scalars $\reals^+$ of \CPMCategory{\fHilbCategory}, and hence the theory of density hypercubes is probabilistic. 
It is furthermore convex, because the following ``tree-on-a-bridge'' effects can be used to add up maps of density hypercubes---analogously to the way ordinary discarding maps $\trace{\mathcal{H}}$ can be used to add up CP maps in \CPMCategory{\fHilbCategory}---by expanding them in terms of the orthonormal bases $\ket{\psi_x}_{x \in X}$ associated \cite{coecke2013new} with the classical structures $\ZbwdotSym$: \begin{equation} \scalebox{0.7}{$ \input{pictures/DHclassicalDiscardingMaps2.tikz} $} \hspace{3mm} = \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHquantumDiscardingMaps1.tikz} $} \hspace{3mm} = \hspace{3mm} \sum_{x \in X} \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHclassicalDiscardingMaps3.tikz} $} \end{equation} \subsection{Component symmetries} States in the theory \DDCategory{\fHilbCategory} take the form of fourth-order tensors, an observation which prompted the choice of \inlineQuote{density hypercubes} as a name for the theory. If $(\ket{\psi_x})_{x \in X}$ is a choice of orthonormal basis for some finite-dimensional Hilbert space $H$, the states on $\DDCategory{H} = \mathcal{H} \otimes \mathcal{H}$ in \DDCategory{\fHilbCategory} can be expanded as follows in $\fHilbCategory$: \begin{equation} \scalebox{0.8}{$\input{pictures/DHtensorForm1.tikz}$} \hspace{3mm} = \hspace{3mm} \begin{color}{gray}\sum_{x_{00},x_{01},x_{10},x_{11} \in X} \end{color} \hspace{3mm} \scalebox{0.6}{$\input{pictures/DHtensorForm2.tikz}$} \end{equation} Recall that density matrices possess a $\integersMod{2}$ symmetry given by self-adjointness.
This symmetry can be understood in terms of the following action $\tau: \integersMod{2} \rightarrow \Aut{\complexs}$ of $\integersMod{2}$ on the complex numbers: \begin{equation} \begin{array}{rcrlcrcrl} \tau(0) &:=& z &\mapsto z &\hspace{2cm}& \tau(1) &:=& z &\mapsto z^\ast \end{array} \end{equation} The components of a density matrix $\rho$ then satisfy the following equation, for every $a \in \integersMod{2}$ (trivial for $a=0$, self-adjoint for $a = 1$): \begin{equation} \tau(a)( \rho_{\,x_0 \, x_1} ) = \rho_{x_{(0\oplus a)} \, x_{(1 \oplus a)}} \end{equation} Instead of a $\integersMod{2}$ symmetry, density hypercubes possess a $\integersMod{2} \times \integersMod{2}$ symmetry. This symmetry can be understood in terms of the following action $\tau: \integersMod{2} \times \integersMod{2} \rightarrow \Aut{\complexs}$ of $\integersMod{2} \times \integersMod{2}$ on the complex numbers: \begin{equation} \begin{array}{rcrlcrcrl} \tau(0,0) &:=& z &\mapsto z &\hspace{2cm} &\tau(0,1) &:=& z &\mapsto z^\ast \\ \tau(1,0) &:=& z &\mapsto z^\ast &\hspace{2cm} &\tau(1,1) &:=& z &\mapsto z \end{array} \end{equation} The components of a density hypercube $\rho$ satisfy the following equation for every $(a,b) \in \integersMod{2} \times \integersMod{2}$, where by $\oplus$ we have denoted addition in $\integersMod{2}$: \begin{equation} \tau(a,b)( \rho_{\,x_{(0,0)} \, x_{(0,1)} \, x_{(1,0)} \, x_{(1,1)}} ) = \rho_{\,x_{(0\oplus a,0\oplus b)} \, x_{(0\oplus a,1\oplus b)} \, x_{(1\oplus a,0\oplus b)} \, x_{(1\oplus a,1\oplus b)}} \end{equation} We see that the components are related by a trivial symmetry for $(a,b) = (0,0)$, by a self-conjugating symmetry for $(a,b)=(1,0)$ and $(a,b)=(0,1)$, and by a self-transposing symmetry for $(a,b)=(1,1)$.
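The $\integersMod{2}\times\integersMod{2}$ component symmetry can be checked numerically on a toy example. The rank-one components $\rho_{x_{00}x_{01}x_{10}x_{11}} = \psi^\ast_{x_{00}}\psi_{x_{01}}\psi_{x_{10}}\psi^\ast_{x_{11}}$ used below are an assumed concrete instance (one way to realize a doubled pure state), not the general form:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# rho[i,j,k,l] = conj(psi)_i psi_j psi_k conj(psi)_l, axes ordered (x00, x01, x10, x11)
rho = np.einsum('i,j,k,l->ijkl', psi.conj(), psi, psi, psi.conj())

# (a,b) = (0,1): conjugation together with the column flip x_{r,c} -> x_{r,c+1}
assert np.allclose(rho.conj(), rho.transpose(1, 0, 3, 2))
# (a,b) = (1,0): conjugation together with the row flip x_{r,c} -> x_{r+1,c}
assert np.allclose(rho.conj(), rho.transpose(2, 3, 0, 1))
# (a,b) = (1,1): trivial action together with the 180-degree rotation
assert np.allclose(rho, rho.transpose(3, 2, 1, 0))
```

The three assertions instantiate the symmetry equation above for $(a,b)=(0,1)$, $(1,0)$ and $(1,1)$ respectively.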
An alternative way to look at this symmetry is to observe that states of density hypercubes can all be expressed as certain sums of doubled states in the following form: \begin{equation} \scalebox{0.8}{$\input{pictures/DHdoubledState.tikz}$} \end{equation} For these states, we have the usual self-conjugating $\integersMod{2}$ symmetry of density matrices $\Phi \otimes \overline{\Phi} \mapsto \Phi^\ast \otimes \overline{\Phi^\ast}$ as well as an independent self-transposing $\integersMod{2}$ symmetry $\Phi \otimes \overline{\Phi} \mapsto \overline{\Phi \otimes \overline{\Phi}}$, which taken together give the same $\integersMod{2} \times \integersMod{2}$ symmetry described above in terms of components. In order to visualise the $\integersMod{2}\times\integersMod{2}$ symmetry action, we divide the components $\rho_{x_{00}x_{01}x_{10}x_{11}}$ of a $d$-dimensional density hypercube $\rho$ into 15 classes, depending on which indices $x_{00},x_{01},x_{10},x_{11}$ have same/distinct values chosen from the set $\{1,...,d\}$. We arrange the indices on a square: index $00$ is in the top left corner, $10$ acts as reflection about the vertical mid-line, $01$ acts as reflection about the horizontal mid-line and $11$ acts as 180\textsuperscript{o} rotation about the centre. We use colours as names for index values in $\{1,...,d\}$, with distinct colours denoting distinct values.
\begin{equation} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/allDistinct.tikz}}} }_{\text{4 distinct}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualRight.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualLeft.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualTop.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualBot.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualBLTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoEqualTLBR.tikz}}} }_{\text{3 distinct}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal.tikz}}} }_{\text{2 distinct}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/allEqual.tikz}}} }_{\text{all equal}} \end{equation} For example, the component $\rho_{0321}$ of a $4^{+}$-dimensional system will fall into the 1\textsuperscript{st} class from the left above, the component $\rho_{0122}$ will fall into the 2\textsuperscript{nd} class, the component $\rho_{0003}$ into the 8\textsuperscript{th} class, the component $\rho_{0011}$ into the 12\textsuperscript{th} class and the component $\rho_{0000}$ into the 15\textsuperscript{th} class. Then we look at the individual orbits of components in each class under the symmetry. 
Classes with components having orbits of order 4 are shown below: each orbit contributes a single independent complex value to the tensor, i.e. two independent real values, and each component class is annotated by the total number of independent real values contributed in dimension $d$. Just as we did above, we are using colours to denote values in $\{1,...,d\}$: the geometric action of $\integersMod{2}\times\integersMod{2}$ on the coloured vertices/edges of the squares exactly mirrors the algebraic action of $\integersMod{2}\times\integersMod{2}$ on the components in the different classes. \begin{equation} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/allDistinct.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/allDistinct10.tikz}} \\ \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} & \vspace{2mm} \hspace{0mm} \raisebox{-2mm}{$ \nearrow \hspace{-4mm} \nwarrow \hspace{-4mm} \searrow \hspace{-3.65mm} \swarrow \hspace{-1mm}\text{\scriptsize{11}} \hspace{-1mm} $} & \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} \\ \resizebox{!}{3mm}{\input{pictures/squares/allDistinct01.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/allDistinct11.tikz}} \\ \end{array} $}}_{2\frac{1}{4}d(d-1)(d-2)(d-3)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoEqualRight.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoEqualRight10.tikz}} \\ \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} & \vspace{2mm} \hspace{0mm} \raisebox{-2mm}{$ \nearrow \hspace{-4mm} \nwarrow \hspace{-4mm} \searrow \hspace{-3.65mm} \swarrow \hspace{-1mm}\text{\scriptsize{11}} \hspace{-1mm} $} & \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} \\ \resizebox{!}{3mm}{\input{pictures/squares/twoEqualRight01.tikz}} & \stackrel{10}{\leftrightarrow} & 
\resizebox{!}{3mm}{\input{pictures/squares/twoEqualRight11.tikz}} \\ \end{array} $}}_{2\frac{1}{4}d(d-1)(d-2)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTop.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTop10.tikz}} \\ \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} & \vspace{2mm} \hspace{0mm} \raisebox{-2mm}{$ \nearrow \hspace{-4mm} \nwarrow \hspace{-4mm} \searrow \hspace{-3.65mm} \swarrow \hspace{-1mm}\text{\scriptsize{11}} \hspace{-1mm} $} & \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} \\ \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTop01.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTop11.tikz}} \\ \end{array} $}}_{2\frac{1}{4}d(d-1)(d-2)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTLBR.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTLBR10.tikz}} \\ \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} & \vspace{2mm} \hspace{0mm} \raisebox{-2mm}{$ \nearrow \hspace{-4mm} \nwarrow \hspace{-4mm} \searrow \hspace{-3.65mm} \swarrow \hspace{-1mm}\text{\scriptsize{11}} \hspace{-1mm} $} & \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} \\ \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTLBR01.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoEqualTLBR11.tikz}} \\ \end{array} $}}_{2\frac{1}{4}d(d-1)(d-2)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR10.tikz}} \\ \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} & \vspace{2mm} \hspace{0mm} \raisebox{-2mm}{$ \nearrow \hspace{-4mm} \nwarrow \hspace{-4mm} \searrow \hspace{-3.65mm} \swarrow 
\hspace{-1mm}\text{\scriptsize{11}} \hspace{-1mm} $} & \raisebox{-2mm}{$\updownarrow \text{\scriptsize{01}}$} \\ \resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR01.tikz}} & \stackrel{10}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR11.tikz}} \\ \end{array} $}}_{2\frac{1}{4}d(d-1)} \end{equation} Classes with components having orbits of order 2 and 1 are shown below, each component class annotated by the total number of independent real values contributed in dimension $d$. Each orbit in the first, second and fourth classes contributes a single independent real value, because each component is stabilised by (at least) one self-adjoining symmetry; each orbit in the third class contributes instead two independent real values, because the components are only stabilised by a self-transposing symmetry. \begin{equation} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical.tikz}} & \stackrel{10,11}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical10.tikz}} \end{array} $}}_{\frac{1}{2}d(d-1)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal.tikz}} & \stackrel{01,11}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal01.tikz}} \end{array} $}}_{\frac{1}{2}d(d-1)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \begin{array}{ccc} \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal.tikz}} & \stackrel{10,01}{\leftrightarrow} & \resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal10.tikz}} \end{array} $}}_{2\frac{1}{2}d(d-1)} \hspace{5mm} \underbrace{\addstackgap[4pt]{$ \resizebox{!}{3mm}{\input{pictures/squares/allEqual.tikz}} $}}_{\text{d}} \end{equation} Adding up the contributions from all orbit classes, we see that the states of $d$-dimensional density hypercubes form a convex cone of real dimension $\frac{1}{2}(d^4-3d^3+7d^2-3d)$ 
within the $(2d^4)$-dimensional real vector space of complex fourth-order tensors. \subsection{Normalisation and causality} The ``forest'' discarding maps $\trace{\,\,\DDCategory{H}}:=\CPMCategory{\trace{\mathcal{H}}}$ in \DDCategory{\fHilbCategory} (i.e.\ the doubled versions of the discarding maps of \CPMCategory{\fHilbCategory}) form an environment structure \cite{gogioso2017categorical,coecke2010environment}, and we say that a map of density hypercubes is \defi{normalised} if the corresponding CP map is trace preserving (with normalised states as a special case): \begin{equation} \scalebox{0.7}{$\input{pictures/normalisedDHmap1.tikz}$} \hspace{3mm} \text{normalised} \hspace{3mm} \Leftrightarrow \hspace{3mm} \scalebox{0.7}{$\input{pictures/normalisedDHmap2.tikz}$} \hspace{3mm} = \hspace{3mm} \scalebox{0.7}{$\input{pictures/normalisedDHmap3.tikz}$} \end{equation} Normalised maps of density hypercubes form a sub-SMC of \DDCategory{\fHilbCategory}, which we refer to as the \defi{normalised sub-category}. \defi{Sub-normalised} maps of density hypercubes can be defined analogously by requiring the corresponding CP map to be trace non-increasing: they also form a sub-SMC of \DDCategory{\fHilbCategory}, which we refer to as the \defi{sub-normalised sub-category}. Despite the presence of several kinds of discarding maps, the following result shows that the sub-normalised sub-category is causal \cite{Chiribella-purification}, or equivalently that the normalised sub-category is terminal \cite{coecke2013causal,coecke2016terminality}.
\newcounter{proposition_causality_c} \setcounter{proposition_causality_c}{\value{theorem_c}} \begin{proposition} \label{proposition_causality} The process theory $\DDCategory{\fHilbCategory}$ is causal, in the following sense: for every object $\DDCategory{H}$, the only effect $\DDCategory{H} \rightarrow \reals^+$ in \DDCategory{\fHilbCategory} which yields the scalar $1$ on all normalised states of $\DDCategory{H}$ is the ``forest'' discarding map of density hypercubes $\trace{\,\,\DDCategory{H}}$. \end{proposition} \section{Decoherence and Hyper-decoherence} \label{section_hyperDecoherence} So far, we have constructed a symmetric monoidal category, which is enriched in convex cones and comes equipped with an environment structure providing a notion of normalisation. The final ingredient necessary for the definition of the \defi{categorical probabilistic theory of density hypercubes} is the demonstration that classical systems and quantum systems arise in the Karoubi envelope of \DDCategory{\fHilbCategory} by choosing some suitable family of decoherence and hyper-decoherence maps. \subsection{Decoherence to classical theory} Consider a finite-dimensional Hilbert space $H$ and a classical structure $\ZdotSym$ on it, associated with some orthonormal basis $(\ket{\psi_x})_{x \in X}$. We define the \defi{$\ZdotSym$-decoherence map} $\decoh{\ZdotSym}$ on the density hypercube $\DDCategory{H}$ to be the following morphism in \DDCategory{\fHilbCategory}: \begin{equation} \decoh{\ZdotSym} \hspace{3mm} := \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHdecoherence1.tikz} $} \hspace{3mm} = \hspace{3mm} \sum_{x \in X} \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHdecoherence2.tikz} $} \end{equation} The $\decoh{\ZdotSym}$ map defined above is idempotent, so it can be used to define classical systems via the Karoubi envelope construction---in the same way as ordinary decoherence maps give rise to classical systems in quantum theory.
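To illustrate the idempotence just mentioned on something concrete, the sketch below checks the analogous property for ordinary quantum decoherence in a fixed basis (an analogy only: the map $\decoh{\ZdotSym}$ itself is defined diagrammatically above, and, unlike the ordinary map simulated here, is merely sub-normalised).

```python
import numpy as np

# Ordinary basis decoherence: delta(rho) = sum_x <x|rho|x> |x><x|.
# Keeping only the diagonal of rho implements delta in the chosen basis.
def decohere(rho):
    return np.diag(np.diag(rho))

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T                 # positive semi-definite by construction
rho /= np.trace(rho)                 # a random normalised density matrix

once, twice = decohere(rho), decohere(decohere(rho))
assert np.allclose(once, twice)      # idempotent, as needed for a Karoubi projector
assert np.isclose(np.trace(once).real, 1.0)   # the ordinary map is trace-preserving
```

Idempotence is exactly what lets such a map serve as an object of the Karoubi envelope.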
It should be noted that decoherence maps defined this way are sub-normalised but not normalised, so that the hyper-quantum--to--classical transition in the theory of density hypercubes is not deterministic; we defer further discussion of this point to the next sub-section on hyper-decoherence. \newcounter{proposition_classical_c} \setcounter{proposition_classical_c}{\value{theorem_c}} \begin{proposition} \label{proposition_classical} Let $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}$ be the Karoubi envelope of \DDCategory{\fHilbCategory}, and write $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_K$ for the full subcategory of $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}$ spanned by objects in the form $(\DDCategory{H},\decoh{\ZdotSym})$. There is an $\reals^+$-linear monoidal equivalence of categories between $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_K$ and the probabilistic theory $\RMatCategory{\reals^+}$ of classical systems. Furthermore, classical stochastic maps correspond to the maps in $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_K$ normalised with respect to the discarding maps $\trace{\,\,(\DDCategory{H},\decoh{\ZdotSym})} := \trace{\,\,\DDCategory{H}} \circ \decoh{\ZdotSym}$, which we can write explicitly as follows: \newcounter{proposition_classical_c_eq} \setcounter{proposition_classical_c_eq}{\value{equation}} \begin{equation} \label{proposition_classical_eq_label} \trace{\,\,(\DDCategory{H},\decoh{\ZdotSym})} \hspace{3mm} := \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHclassicalDiscardingMaps1.tikz} $} \hspace{3mm} = \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHclassicalDiscardingMaps2.tikz} $} \end{equation} \end{proposition} \subsection{Hyper-decoherence to quantum theory} We now show that quantum systems arise in the Karoubi envelope as well, via suitable \defi{hyper-decoherence} maps.
Recall that the generic discarding map in the theory of density hypercubes involves two pieces: (the doubled version of) a traditional discarding map from \CPMCategory{\fHilbCategory} and a second ``tree-on-a-bridge'' discarding map derived from a classical structure $\ZdotSym$. In the previous sub-section, we saw that the latter is the discarding map of some classical system living in the Karoubi envelope \KaroubiEnvelope{\DDCategory{\fHilbCategory}}, and that it can be used to define the ``hyper-quantum--to--classical'' decoherence maps. In this sub-section, we shall see that this ``hyper-quantum--to--classical'' decoherence process can be understood in two steps: a ``hyper-quantum--to--quantum'' hyper-decoherence, followed by the usual ``quantum--to--classical'' decoherence. If $\ZdotSym$ is a classical structure on the Hilbert space $H$ underlying a density hypercube $\DDCategory{H}$, we define the \defi{$\ZdotSym$-hyper-decoherence map} $\hypdecoh{\ZdotSym}$ to be the following map of density hypercubes: \begin{equation} \hypdecoh{\ZdotSym} \hspace{3mm} := \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHhyperdecoherence.tikz} $} \end{equation} Hyper-decoherence maps are idempotent, and hence we can consider the full subcategory $\mathcal{C}$ of the Karoubi envelope \KaroubiEnvelope{\DDCategory{\fHilbCategory}} spanned by objects in the form $(\DDCategory{H},\hypdecoh{\ZdotSym})$: doing so allows us to prove that the hyper-decoherence maps defined above truly provide the desired ``hyper-quantum--to--quantum'' decoherence, as considered by \cite{Lee-Selby-interference,lee2017no}.
\newcounter{proposition_quantum_c} \setcounter{proposition_quantum_c}{\value{theorem_c}} \begin{proposition} \label{proposition_quantum} Let $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}$ be the Karoubi envelope of \DDCategory{\fHilbCategory}, and write $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_Q$ for the full subcategory of $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}$ spanned by objects in the form $(\DDCategory{H},\hypdecoh{\ZdotSym})$. There is an $\reals^+$-linear monoidal equivalence of categories between $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_Q$ and the probabilistic theory $\CPMCategory{\fHilbCategory}$ of quantum systems and CP maps between them. Furthermore, trace-preserving CP maps correspond to the maps in $\KaroubiEnvelope{\DDCategory{\fHilbCategory}}_Q$ normalised with respect to the discarding maps $\trace{\,\,(\DDCategory{H},\hypdecoh{\ZdotSym})} := \trace{\,\,\DDCategory{H}} \circ \hypdecoh{\ZdotSym}$, which we can write explicitly as follows: \newcounter{proposition_quantum_c_eq} \setcounter{proposition_quantum_c_eq}{\value{equation}} \begin{equation} \label{proposition_quantum_eq_label} \trace{\,\,(\DDCategory{H},\hypdecoh{\ZdotSym})} \hspace{3mm} := \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHquantumDiscardingMaps1.tikz} $} \hspace{3mm} = \hspace{3mm} \scalebox{0.7}{$ \input{pictures/DHclassicalDiscardingMaps2.tikz} $} \end{equation} \end{proposition} Taking the double-dilation construction together with the content of Propositions \ref{proposition_classical} and \ref{proposition_quantum}, we come to the following definition of a categorical probabilistic theory \cite{gogioso2017categorical} of density hypercubes. 
\begin{definition} The \defi{categorical probabilistic theory of density hypercubes} \DHCategory{\fHilbCategory} is defined to be the full sub-SMC of \KaroubiEnvelope{\DDCategory{\fHilbCategory}} spanned by objects in the following form: \begin{itemize} \item the \defi{density hypercubes} $(\DDCategory{H},\id{\DDCategory{H}})$; \item the \defi{quantum systems} $(\DDCategory{H},\hypdecoh{\ZdotSym})$, for all classical structures $\ZdotSym$ on $H$; \item the \defi{classical systems} $(\DDCategory{H},\decoh{\ZdotSym})$, for all classical structures $\ZdotSym$ on $H$. \end{itemize} The environment structure for the categorical probabilistic theory is given by the discarding maps $\trace{\,\,\DDCategory{H}}$, $\trace{\,\,(\DDCategory{H},\hypdecoh{\ZdotSym})}$ and $\trace{\,\,(\DDCategory{H},\decoh{\ZdotSym})}$ respectively. The classical sub-category for the categorical probabilistic theory is the full sub-SMC spanned by the classical systems. \end{definition} The hyper-quantum--to--classical and hyper-quantum--to--quantum decoherence maps of density hypercubes play well together with the quantum--to--classical decoherence map of quantum theory: the decoherence map $\decoh{\ZdotSym}:(\DDCategory{H},\id{\DDCategory{H}}) \rightarrow (\DDCategory{H},\decoh{\ZdotSym})$ of density hypercubes factors, as one would expect, into the hyper-decoherence map $\hypdecoh{\ZdotSym}:(\DDCategory{H},\id{\DDCategory{H}}) \rightarrow (\DDCategory{H},\hypdecoh{\ZdotSym})$ followed by the decoherence map $\decoh{\ZdotSym}:(\DDCategory{H},\hypdecoh{\ZdotSym}) \rightarrow (\DDCategory{H},\decoh{\ZdotSym})$ of quantum systems. From this, it is clear that the reason why the hyper-quantum--to--classical transition is sub-normalised is that the hyper-quantum--to--quantum transition itself is sub-normalised (see Appendix \ref{appendix_extension}).
The sub-normalisation of hyper-decoherence maps is a sign that the theory of density hypercubes presented here is still incomplete, and that some suitable extension will need to be researched in the future. What we know for sure is that the current theory does not satisfy the no-restriction condition on effects, and that an extension in which hyper-decoherence maps are normalised is possible: the additional effect needed for normalisation exists in \CPMCategory{\fHilbCategory} and is non-negative on all states of \DDCategory{\fHilbCategory} (see Appendix \ref{appendix_extension}). In line with the recent no-go theorem of \cite{lee2017no}, preliminary considerations seem to indicate that the addition of said effect would mean that the theory no longer satisfies purification. \newpage \section{Higher Order Interference} \label{section_higherOrderInterference} In this section, we will show that the theory of density hypercubes displays third- and fourth-order interference effects, broadly inspired by the framework for higher-order interference in GPTs presented by \cite{HOP,Lee-Selby-interference,Barnum-interference}. Because interference has to do with decompositions of the identity map in terms of certain projectors, we begin by introducing a handy graphical notation for keeping track of the various pieces that the identity map is composed of.
The identity map of hyper-quantum systems $\id{\DDCategory{H}} : \DDCategory{H} \rightarrow \DDCategory{H}$ takes the following explicit form in $\fHilbCategory$, for any orthonormal basis $(\ket{\psi_x})_{x \in X}$ of the Hilbert space ${H}$: \begin{equation} \scalebox{0.8}{$ \input{pictures/DHidentity1.tikz} $} \hspace{3mm} = \hspace{1mm} \begin{color}{gray} \sum_{x_{00},x_{01},x_{10},x_{11}\in X} \hspace{2mm} \end{color} \scalebox{0.8}{$ \input{pictures/DHidentity2.tikz} $} \end{equation} In order to denote the pieces in the decomposition corresponding to specific values $x_{00}, x_{01}, x_{10}, x_{11} \in X$ of the indices, we adopt the following graphical notation, inspired by the $\integersMod{2} \times \integersMod{2}$ symmetry of the components: \begin{equation} \scalebox{0.8}{$ \input{pictures/DHprojectorNotation1old.tikz} $} \hspace{3mm} := \hspace{1mm} \scalebox{0.8}{$ \input{pictures/DHidentity2.tikz} $} \end{equation} In fact, we will adopt the same colour-based notation for index values which we originally introduced in Section \ref{section_densityHypercubes}, so that the following is a decomposition piece involving two distinct index values $\{\begin{color}{red}\bullet\end{color},\begin{color}{blue}\bullet\end{color}\} \subseteq X$: \begin{equation} \scalebox{1}{$ \input{pictures/DHprojectorNotation2.tikz} $} \hspace{3mm} := \hspace{3mm} \scalebox{1}{$ \input{pictures/DHidentity2example.tikz} $} \end{equation} Using the colour-based notation defined above for its pieces, the identity on a 2-dimensional hyper-quantum system (with $X = \{\begin{color}{blue}\bullet\end{color},\begin{color}{red}\bullet\end{color}\}$) would be fully decomposed as follows: \begin{equation} \id{\complexs^2} \hspace{1mm} = \hspace{1mm} {\resizebox{!}{3mm}{\input{pictures/squares/allEqual.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/allEqualred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical.tikz}}} + 
{\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical10.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal01.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal10.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBRred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBL.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBLred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTL.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTLred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTR.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTRred.tikz}}} \end{equation} The same notation can be used to graphically decompose projectors corresponding to various subspaces determined by the orthonormal basis $(\ket{\psi_x})_{x \in X}$. For any non-empty subset $U \subseteq X$, we define the following projector on $\DDCategory{H}$: \begin{equation} P_{U} := \textnormal{DD}\left(\sum_{x \in U} \ket{\psi_x}\bra{\psi_x}\right) \end{equation} In particular, the $P_{\{\begin{color}{blue}\bullet\end{color}\}}$ for $\begin{color}{blue}\bullet\end{color} \in X$ are the projectors corresponding to the individual vectors $\ket{\psi_{\begin{color}{blue}\bullet\end{color}}}$ of the basis, while $P_{X}$ is the identity $\id{\DDCategory{H}}$. 
No matter how large $X$ is (with $\#X \geq 2$), the projectors $P_{\{\begin{color}{blue}\bullet\end{color},\begin{color}{red}\bullet\end{color}\}}$ corresponding to 2-element subsets $\{\begin{color}{blue}\bullet\end{color},\begin{color}{red}\bullet\end{color}\} \subseteq X$ are always decomposed as follows: \begin{equation} P_{\{\begin{color}{blue}\bullet\end{color},\begin{color}{red}\bullet\end{color}\}} \hspace{1mm} = \hspace{1mm} {\resizebox{!}{3mm}{\input{pictures/squares/allEqual.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/allEqualred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoVertical10.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoHorizontal01.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/twoAndTwoDiagonal10.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBR.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBRred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBL.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentBLred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTL.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTLred.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTR.tikz}}} + {\resizebox{!}{3mm}{\input{pictures/squares/oneDifferentTRred.tikz}}} \end{equation} The presence of higher order interference in the theory of density hypercubes is really a matter of shapes: when the dimension of $\mathcal{H}$ is at least 3, the identity contains pieces of shapes which do not appear in projectors for 1-element and 2-element subsets. 
Because of this, in the theory of density hypercubes the probabilities obtained from 1-slit and 2-slit interference experiments will not be enough to explain the probabilities obtained from 3-slit and/or 4-slit experiments; however, the probabilities obtained from 1-slit, 2-slit, 3-slit and 4-slit experiments will always be enough to explain the probabilities obtained in experiments with 5 or more slits. Below you can see an atlas of all possible shapes that pieces of the identity can take in our graphical notation, together with a note of the smallest dimension that a projector must have to contain pieces of that shape: \begin{equation} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/allEqual.tikz}}} }_{\text{1-dim}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentBR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentBL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentTL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoVertical.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoHorizontal.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoDiagonal.tikz}}} }_{\text{2-dim}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualRight.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualLeft.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualTop.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualBot.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualBLTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualTLBR.tikz}}} 
}_{\text{3-dim}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/allDistinct.tikz}}} }_{\text{4-dim}} \end{equation} The shape labelled as 1-dimensional only requires a single index value, and hence pieces of that shape appear in all projectors. The shapes labelled as 2-dimensional all require exactly two distinct index values, and hence pieces of those shapes can only appear in projectors for subsets with at least 2 elements. The shapes labelled as 3-dimensional all require exactly three distinct index values, and hence pieces of those shapes can only appear in projectors for subsets with at least 3 elements. Finally, the shape labelled as 4-dimensional requires exactly four index values, and hence pieces of that shape can only appear in projectors for subsets with at least 4 elements. Thanks to the graphical notation introduced above, we already have a first intuition of why density hypercubes display higher-order interference. However, a rigorous proof requires a complete set-up with states, projectors, measurements and probabilities for a $d$-slit interference experiment, so that is what we now endeavour to provide. \begin{enumerate} \item We choose a $d$-dimensional space $H \isom \complexs^d$, and we value our tensor indices in the set $X = \{1,...,d\}$ (the same set that we use to label the $d$ slits). \item We fix an orthonormal basis $(\ket{x})_{x \in X}$, and we interpret $\ket{x}$ to be the state in which the particle goes through slit $x$ with certainty. \item The initial state for the particle is the superposition state in which the particle goes through each slit with the same amplitude. More precisely, it is the pure normalised density hypercube state $\rho_+$ corresponding to the vector $\frac{1}{\sqrt{d}}\ket{\psi_+} := \frac{1}{\sqrt{d}}(\ket{1} + ... 
+ \ket{d})$: \begin{equation} \rho_+ \hspace{1mm} := \hspace{1mm} \frac{1}{d^2} \hspace{1mm} \scalebox{0.8}{$ \input{pictures/InterferenceExpInitialState.tikz} $} \end{equation} \item The particle goes through some non-empty subset $U \subseteq X$ of slits at random: afterwards, the experimenter knows which subset the particle passed through, but no more information than that is available in the universe. \item The particle is measured at the screen, and the experimenter estimates the probability $\mathbb{P}[+|U]$ that the particle is still in state $\rho_+$ after having passed through the given subset $U$ of the slits: \begin{equation} \mathbb{P}[+|U] \hspace{1mm} := \hspace{1mm} \frac{1}{d^2} \hspace{1mm} \scalebox{0.8}{$ \input{pictures/InterferenceExpProbability.tikz} $} \hspace{1mm} \frac{1}{d^2} \end{equation} \end{enumerate} It is immediate that the outcome probability $\mathbb{P}[+|U]$ depends solely on the number of pieces appearing in the decomposition of the projector $P_U$: \begin{equation} \mathbb{P}[+|U] = \frac{1}{d^4} \cdot \textnormal{number of pieces in }P_U \end{equation} To count the number of pieces in $P_U$, it is convenient to group them by shapes.
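The grouping by shapes, and the resulting interference pattern, can be checked by brute force. The sketch below enumerates the pieces of $P_U$ directly as 4-tuples with entries in $U$, recovers the shape census of the atlas ($1$, $7$, $6$, $1$ shapes requiring one to four colours) together with the piece counts displayed next, and verifies the closed form $\mathbb{P}[+|U] = (\# U)^4/d^4$ as well as the interference (in)equalities derived below.

```python
from itertools import product
from math import comb, factorial

def num_pieces(k):
    """Count the pieces of P_U (#U = k) by the number of distinct
    colours appearing among the four indices."""
    counts = {}
    for t in product(range(k), repeat=4):
        j = len(set(t))
        counts[j] = counts.get(j, 0) + 1
    return counts

for k in range(1, 6):
    counts = num_pieces(k)
    # shape census from the atlas: 1 shape needs 1 colour, 7 need 2,
    # 6 need 3, 1 needs 4; a shape with j colours has C(k,j) * j! pieces
    for j, n_shapes in enumerate([1, 7, 6, 1], start=1):
        assert counts.get(j, 0) == n_shapes * comb(k, j) * factorial(j)
    assert sum(counts.values()) == k**4   # hence P[+|U] = (#U)^4 / d^4

p = lambda k: k**4  # outcome probability, up to the common factor 1/d^4
assert p(3) != comb(3, 2)*p(2) - comb(3, 1)*p(1)                     # third order
assert p(4) != comb(4, 3)*p(3) - comb(4, 2)*p(2) + comb(4, 1)*p(1)   # fourth order
assert p(5) == comb(5, 4)*p(4) - comb(5, 3)*p(3) + comb(5, 2)*p(2) - comb(5, 1)*p(1)
```

The shape counts $1,7,6,1$ are the Stirling numbers of the second kind $S(4,j)$, and the standard identity $\sum_j S(4,j)\, k^{\underline{j}} = k^4$ is why the totals collapse to $(\# U)^4$.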
If $U$ is a subset of size $k$, standard combinatorial arguments can be used to obtain the number of pieces of each shape appearing in the decomposition (as a convention, we set ${k \choose{j}} = 0$ for $j > k$): \begin{equation} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/allEqual.tikz}}} }_{{k \choose{1}} \cdot 1!} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentBR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentBL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentTL.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/oneDifferentTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoVertical.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoHorizontal.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoAndTwoDiagonal.tikz}}} }_{\text{7 shapes, }{k \choose{2}} \cdot 2!\text{ each}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualRight.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualLeft.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualTop.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualBot.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualBLTR.tikz}}} \hspace{2mm} \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/twoEqualTLBR.tikz}}} }_{\text{6 shapes, }{k \choose{3}} \cdot 3! 
\text{ each}} \hspace{2mm} \underbrace{ \fbox{\resizebox{!}{3mm}{\input{pictures/squares/shapes/allDistinct.tikz}}} }_{{k \choose{4}} \cdot 4!} \end{equation} By adding up the contributions from pieces of each shape, we get the following closed expression for the outcome probability $\mathbb{P}[+|U]$: \begin{equation} \mathbb{P}[+|U] = \frac{1}{d^4}(\# U)^4 \end{equation} For $d\geq 3$ we observe third-order interference, witnessed (by definition) by the following inequality: \begin{eqnarray} \mathbb{P}[+|\{1,2,3\}] \neq \sum_{\stackrel{V \subset \{1,2,3\}}{\textnormal{s.t. }\#V = 2}}\hspace{-2mm}\mathbb{P}[+|V] \hspace{2mm}- \sum_{\stackrel{V \subset \{1,2,3\}}{\textnormal{s.t. }\#V = 1}}\hspace{-2mm}\mathbb{P}[+|V] \end{eqnarray} Indeed, the left hand side evaluates to $81/d^4$, while the right hand side evaluates to the following expression (again by standard combinatorial arguments): \begin{equation} \frac{1}{d^4}\Big[ {3\choose{2}}2^4 - {3\choose{1}}1^4 \Big] = \frac{1}{d^4}45 \neq \frac{1}{d^4}81 \end{equation} The difference between left and right hand sides is $36/d^4$, which is exactly the contribution $\frac{1}{d^4}6\cdot{3\choose{3}}\cdot3!$ of the 6 shapes requiring 3 distinct values (appearing in $P_{\{1,2,3\}}$ but not in any of the sub-projectors). For $d\geq 4$ we observe fourth-order interference, witnessed (by definition) by the following inequality: \begin{eqnarray} \mathbb{P}[+|\{1,2,3,4\}] \neq \sum_{\stackrel{V \subset \{1,2,3,4\}}{\textnormal{s.t. }\#V = 3}}\hspace{-2mm}\mathbb{P}[+|V]\hspace{2mm} - \sum_{\stackrel{V \subset \{1,2,3,4\}}{\textnormal{s.t. }\#V = 2}}\hspace{-2mm}\mathbb{P}[+|V] \hspace{2mm}+ \sum_{\stackrel{V \subset \{1,2,3,4\}}{\textnormal{s.t. 
}\#V = 1}}\hspace{-2mm}\mathbb{P}[+|V] \end{eqnarray} Indeed, the left hand side evaluates to $256/d^4$, while the right hand side evaluates to the following expression (again by standard combinatorial arguments): \begin{equation} \frac{1}{d^4}\Big[ {4\choose{3}}3^4 - {4\choose{2}}2^4 + {4\choose{1}}1^4 \Big] = \frac{1}{d^4}232 \neq \frac{1}{d^4}256 \end{equation} The difference between left and right hand sides is $24/d^4$, which is exactly the contribution $\frac{1}{d^4}{4\choose{4}}\cdot4!$ of the shape requiring 4 distinct values (appearing in $P_{\{1,2,3,4\}}$ but not in any of the sub-projectors). For $d \geq 5$, however, we observe absence of fifth-order (or higher-order) interference, witnessed (by definition) by the following equality: \begin{eqnarray} \mathbb{P}[+|\{1,2,3,4,5\}] &= \sum\limits_{\stackrel{V \subset \{1,2,3,4,5\}}{\textnormal{s.t. }\#V = 4}}\hspace{-2mm}\mathbb{P}[+|V] \hspace{2mm}- \sum\limits_{\stackrel{V \subset \{1,2,3,4,5\}}{\textnormal{s.t. }\#V = 3}}\hspace{-2mm}\mathbb{P}[+|V] \nonumber\\ &+ \sum\limits_{\stackrel{V \subset \{1,2,3,4,5\}}{\textnormal{s.t. }\#V = 2}}\hspace{-2mm}\mathbb{P}[+|V] \hspace{2mm}- \sum\limits_{\stackrel{V \subset \{1,2,3,4,5\}}{\textnormal{s.t. }\#V = 1}}\hspace{-2mm}\mathbb{P}[+|V] \end{eqnarray} Indeed, the left hand side evaluates to $625/d^4$, and the right hand side yields the same: \begin{equation} \frac{1}{d^4}\Big[ {5\choose{4}}4^4 - {5\choose{3}}3^4 + {5\choose{2}}2^4 - {5\choose{1}}1^4 \Big] = \frac{1}{d^4}625 \end{equation} \section{Conclusions} \label{section_conclusions} In this work, we used an iterated CPM construction known as double-dilation to construct a full-fledged probabilistic theory of density hypercubes, possessing hyper-decoherence maps and showing higher-order interference effects. We have defined all the necessary categorical structures. 
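As a sanity check on the combinatorial computations above, the following short Python sketch (our addition, not part of the original development) recomputes the interference terms directly from the closed-form expression $\mathbb{P}[+|U]=(\# U)^4/d^4$, working with numerators, i.e., in units of $1/d^4$:

```python
from itertools import combinations

def p_plus_numerator(U):
    """Numerator of the outcome probability P[+|U] = (#U)^4 / d^4."""
    return len(U) ** 4

def interference_numerator(U):
    """Numerator (in units of 1/d^4) of the (#U)-th order interference term:
    I(U) = P[+|U] + sum_{k=1}^{#U-1} (-1)^(#U-k) * sum_{#V=k} P[+|V].
    A nonzero value witnesses interference of order #U."""
    n = len(U)
    return p_plus_numerator(U) + sum(
        (-1) ** (n - k) * sum(p_plus_numerator(V) for V in combinations(U, k))
        for k in range(1, n)
    )

print(interference_numerator((1, 2, 3)))        # 36: third-order interference
print(interference_numerator((1, 2, 3, 4)))     # 24: fourth-order interference
print(interference_numerator((1, 2, 3, 4, 5)))  # 0: no fifth-order interference
```

The three printed values reproduce the differences $36/d^4$, $24/d^4$ and $0$ computed above.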
We have gone over the mathematical detail of the (hyper-)decoherence–induced relationship between our new theory, quantum theory and classical theory. We have imported diagrammatic reasoning from the familiar setting of mixed-state quantum theory. We have developed a graphical formalism to study the internal component symmetries of states and processes. Finally, we have shown that the theory displays interference effects of orders up to four, but not of orders five and above. A number of questions are left open and will be answered as part of future work. Firstly, we endeavour to carry out a more physically-oriented analysis of the theory, including a study of the structure of normalised states and effects and a characterisation of the normalised reversible transformations. Secondly, we need to investigate the physical significance and implications of sub-normalisation of the hyper-decoherence maps, and construct a suitable extension of our theory where said maps become normalised. Finally, we intend to look at concrete implementations of certain protocols in our theory, such as those previously studied \cite{Lee-Selby-Grover,Lee-Selby-interference} in the context of higher-order interference. From a categorical standpoint, we also wish to further understand the specific roles played by double-mixing and double-dilation in our theory. At present, we know that the former is enough for density hypercubes to show higher-order interference and decohere to classical systems, but the latter seems to be necessary for quantum systems to arise by hyper-decoherence. Further investigation will hopefully shed more light on the individual contributions of the two constructions. Finally, we endeavour to investigate the generalisation of our results to higher iterated dilation, and more generally to higher-order CPM constructions \cite{higherOrderCPM} (with finite abelian symmetry groups other than the $\integersMod{2}^N$ groups arising from iterated dilation). 
\newpage \bibliographystyle{eptcs}
\section{Introduction} \label{sec:intro} Modern data analysis and processing tasks typically involve large sets of structured data, where the structure carries critical information about the nature of the data. One can find numerous examples of such data sets in a wide diversity of application domains, including transportation networks, social networks, computer networks, and brain networks. Typically, graphs are used as mathematical tools to describe the structure of such data. They provide a flexible way of representing relationships between data entities. Numerous signal processing and machine learning algorithms have been introduced in the past decade for analyzing structured data on \emph{a priori} known graphs \cite{Zhu05,Fortunato10,Shuman13}. However, there are often settings where the graph is not readily available, and the structure of the data has to be estimated in order to permit effective {representation, processing, analysis or visualization of graph data.} In this case, a crucial task is to infer a graph topology that describes the characteristics of the {data observations}, hence capturing the underlying relationships between these entities. {Consider an example in brain signal analysis. Suppose we are given blood-oxygen-level-dependent (BOLD) signals, which are time series extracted from functional magnetic resonance imaging (fMRI) data that reflect the activities of different regions of the brain. An area of significant interest in neuroscience is to infer functional connectivity, i.e., to capture the relationships between brain regions that correlate or synchronize under a certain condition of a patient; this may help reveal the underpinnings of some neurodegenerative diseases (see Fig.~\ref{fig:brain-example} for an illustration).
This leads to the problem of inferring a graph structure given the multivariate BOLD time series data.} \begin{figure}[t] \centering \includegraphics[width=16cm]{Fig1} \caption{{Inferring functional connectivity between different regions of the brain. (a) BOLD time series recorded in different regions of the brain. (b) A functional connectivity graph where the vertices represent the brain regions and the edges (with thicker bars indicating heavier weights) represent the strength of functional connections between these regions. Figure adapted from \cite{Richiardi13} with permission.}} \label{fig:brain-example} \end{figure} {Formally, the problem of graph learning is the following: given $M$ observations on $N$ variables or data entities, represented in a data matrix $\mathbf{X} \in \mathbb{R}^{N \times M}$, and given some prior knowledge (e.g., distribution, data model, etc.) about the data, we would like to infer the relationships between these variables, expressed in the form of a graph $\mathcal{G}$. As a result, each column of the data matrix $\mathbf{X}$ becomes a graph signal defined on the node set of the estimated graph, and the observations can be represented as $\mathbf{X}=\mathcal{F}(\mathcal{G})$, where $\mathcal{F}$ represents a certain generative process or function on the graph.} {The graph learning problem is an important one because: 1) a graph may capture the actual geometry of structured data, which is essential to efficient processing, analysis and visualization; 2) learning relationships between data entities benefits numerous application domains, such as understanding functional connectivity between brain regions or behavioral influence among a group of people; 3) the inferred graph can help in predicting data evolution in the future.} Generally speaking, inferring graph topologies from observations is an ill-posed problem, and there are many ways of associating a topology with the observed data samples.
{Some of the most straightforward methods include computing sample correlation, or using a similarity function, e.g., a Gaussian RBF kernel function, to quantify the similarity between data samples. These methods are based purely on observations without any explicit prior or model of the data, hence they may be sensitive to noise, and their hyper-parameters may be difficult to tune.} A meaningful data model or accurate prior may, however, guide the graph inference process and lead to a graph topology that better reveals the intrinsic relationships among the data entities. Therefore, a main challenge in this problem is to define such a model for the generative process or function $\mathcal{F}$, such that it captures the relationship between the observed data $\mathbf{X}$ and the learned graph topology $\mathcal{G}$. Naturally, such models often correspond to specific criteria for describing or estimating structures between the data samples, e.g., models that put a smoothness assumption on the data, or that represent an information diffusion process on the graph. Historically, there have been two general approaches to learning graphs from data, one based on statistical models and one based on physically-motivated models. From the statistical perspective, $\mathcal{F}(\mathcal{G})$ is modeled as {a function that draws a realization from} a probability distribution over {the variables} that is determined by the structure of $\mathcal{G}$. One prominent example is found in probabilistic graphical models \cite{Koller09}, where the graph structure encodes conditional independence relationships among random variables that are represented by the vertices. Therefore, learning the graph structure is equivalent to learning a factorization of a \emph{joint probability distribution} of these random variables.
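For concreteness, the similarity-based baseline mentioned above can be sketched in a few lines (our own toy example; the bandwidth $\sigma$ is an assumed hyper-parameter):

```python
import numpy as np

def rbf_similarity_graph(X, sigma=1.0):
    """Build a weighted adjacency matrix from an N x M data matrix X,
    where row i holds the M observations of variable i. The edge weight
    w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) is a Gaussian RBF kernel
    on the pairwise row distances; self-loops on the diagonal are removed."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))   # N = 5 entities, M = 100 observations
W = rbf_similarity_graph(X, sigma=5.0)
print(np.allclose(W, W.T))          # True: symmetric by construction
```

As noted above, such a construction uses no explicit data model, and the choice of $\sigma$ strongly affects the resulting topology.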
{Typical application domains include inferring interactions between genes using gene expression profiles, and relationships between politicians given their voting behavior \cite{Banerjee08}.} \begin{figure}[t] \centering \includegraphics[width=12cm]{Fig2} \caption{A broad categorization of different approaches to the problem of graph learning.} \label{fig:approaches} \end{figure} For physically-motivated models, $\mathcal{F}(\mathcal{G})$ is defined based on the assumption of an underlying physical phenomenon or process on the graph. One popular process is \emph{network diffusion or cascades} \cite{GomezRodriguez_2010,Myers2010,GomezRodriguez2014,Nan2012}, where $\mathcal{F}(\mathcal{G})$ dictates the diffusion behavior on $\mathcal{G}$ that leads to the observation of $\mathbf{X}$, possibly at different time steps. In this case, the problem is equivalent to learning a graph structure on which the generative process of the observed signals may be {explained}. Practical applications include understanding information flowing over a network of online media sources \cite{GomezRodriguez_2010} or observing epidemics spreading over a network of human interactions \cite{Groendyke11}, {given the state of exposure or infection at certain time steps.} The fast-growing field of graph signal processing \cite{Shuman13,Sandryhaila13} offers a new perspective to the problem of graph learning. In this setting, the columns of the observation matrix $\mathbf{X}$ are {explicitly} considered as signals that are defined on the vertex set of a weighted graph $\mathcal{G}$.
The learning {problem} can then be cast as one of learning a graph $\mathcal{G}$ such that {$\mathcal{F}(\mathcal{G})$ permits to make certain properties or characteristics of the observations $\mathbf{X}$ explicit, e.g., smoothness with respect to $\mathcal{G}$ or sparsity in a basis related to $\mathcal{G}$.} {This \emph{signal representation} perspective is particularly interesting as it puts a strong and {explicit} emphasis on the relationship between the signal representation and the graph topology, where $\mathcal{F}(\mathcal{G})$ often comes with an interpretation of frequency-domain analysis or filtering operation of signals on the graph. For example, it is typical to adopt the eigenvectors of the graph Laplacian matrix associated with $\mathcal{G}$ as a surrogate for the Fourier basis for signals supported on $\mathcal{G}$ \cite{Shuman13,Ortega18}; we go deeper into the details of this view in Sec.~\ref{sec:gsp}.} One common representation of interest is a smooth representation in which $\mathbf{X}$ has a slow variation on $\mathcal{G}$, which can be interpreted as $\mathbf{X}$ mainly consisting of low frequency components in the graph spectral domain. {Such Fourier-like analysis on the graph leads to {novel graph inference methods compared to} approaches rooted in statistics or physics; more importantly,} it offers the {opportunity} to represent $\mathbf{X}$ in terms of its behavior in the graph spectral domain, which makes it possible to capture complex and non-typical behavior of graph signals that cannot be explicitly handled by classical tools, {for example bandlimited signals on graphs.} Therefore, {given potentially more accurate assumptions underlying the GSP models,} the inference of $\mathcal{G}$ given a specifically designed $\mathcal{F}$ may better reveal the intrinsic relationship between the data entities and benefit subsequent data processing applications. 
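To make the graph frequency view concrete, the following minimal sketch (a toy illustration with an assumed 4-node path graph, not an example from the literature) computes the combinatorial Laplacian, uses its eigenvectors as a graph Fourier basis, and measures smoothness with the Laplacian quadratic form $\mathbf{x}^T \mathbf{L} \mathbf{x}$:

```python
import numpy as np

# Weighted adjacency of a 4-node path graph (assumed toy example).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian L = D - W

# Eigenvectors of L serve as a graph Fourier basis; eigenvalues act as frequencies.
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 1.1, 0.9, 1.0])  # a slowly varying (smooth) graph signal
x_hat = U.T @ x                     # graph Fourier transform of x

smoothness = x @ L @ x              # Laplacian quadratic form x^T L x
print(eigvals[0])                   # ~0: the constant eigenvector is the "DC" component
print(smoothness)                   # small value: x varies little across edges
```

The quadratic form equals $\sum_{(i,j)} w_{ij}(x_i - x_j)^2$, so a signal concentrated on the low-frequency eigenvectors yields a small value.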
{Conceptually, as illustrated in Fig.~\ref{fig:approaches}, GSP-based graph learning approaches can thus be considered as a new family of methods that {have close connections} with classical methods while also offering certain unique advantages {in graph inference.}} In this tutorial overview, we first review well-established solutions to the problem of graph learning {that adopt} a statistics or a physics perspective. Next, we survey a series of recent GSP-based approaches and show how signal processing tools and concepts can be utilized to provide novel solutions to the {graph learning} problem. Finally, we showcase applications of GSP-based methods in a number of domains and conclude with open questions and challenges that are central to the design of future signal processing and machine learning algorithms for learning graphs from data. \section{Literature review} \label{sec:literature} The recent availability of a large amount of data collected in a variety of application domains leads to an increasing interest in estimating the structure, often encoded in the form of a network or a graph, that underlies the data. Two general approaches have been proposed in the literature, one based on statistical models and the other based on physically-motivated models. We provide a detailed review of these two approaches next. \subsection{Statistical models} \label{sec:statistical} The general philosophy behind the statistical view is that there exists a graph $\mathcal{G}$ whose structure determines the joint probability distribution of the observations on the data entities, i.e., {columns of the data matrix $\mathbf{X}$}. In this case, the function $\mathcal{F}(\mathcal{G})$ in our problem formulation is one that draws a collection of realizations, i.e., the columns of $\mathbf{X}$, from the distribution governed by $\mathcal{G}$. 
Such models are known as probabilistic graphical models \cite{Koller09,Meinshausen06,Banerjee08,Friedman08,Hsieh11}, {where the edges (or lack thereof) in the graph encode conditional independence relationships among the random variables represented by the vertices.} There are two main types of graphical models: 1) undirected graphical models, also known as Markov random fields (MRFs), in which local neighborhoods of the graph capture the independence structure of the variables; and 2) directed graphical models, also known as Bayesian networks or belief networks (BNs), which have a more complicated notion of independence by taking into account the direction of edges. Both MRFs and BNs have their respective advantages and disadvantages. In this {section}, we focus primarily on the approaches for learning MRFs, {which admit a simpler representation of conditional independence and also have connections to GSP-based methods, as we will see later.} Readers who are interested in the comparison between MRFs and BNs as well as approaches for learning BNs are referred to \cite{Koller09,Heckerman95}. An MRF with respect to a graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$, where $\mathcal{V}$ and $\mathcal{E}$ denote the vertex and edge set, respectively, is a set of random variables $\mathbf{x} = \{x_i : v_i \in \mathcal{V}\}$ that satisfy a Markov property. We are particularly interested in the pairwise Markov property: \begin{equation} (v_i,v_j) \notin \mathcal{E} \Leftrightarrow p(x_i | x_j, \mathbf{x} \setminus \{x_i, x_j\}) = p(x_i | \mathbf{x} \setminus \{x_i, x_j\}). \label{eq:markov} \end{equation} Eq.~(\ref{eq:markov}) states that two variables $x_i$ and $x_j$ are conditionally independent given the rest if and only if there is no edge between the corresponding vertices $v_i$ and $v_j$ in the graph.
Suppose we have $N$ random variables; then this condition holds for the {exponential family of distributions} with a parameter matrix $\mathbf{\Theta} \in \mathbb{R}^{N \times N}$: \begin{equation} p(\mathbf{x}|\mathbf{\Theta}) = \frac{1}{Z(\mathbf{\Theta})} \text{exp} \left( \sum_{v_i \in \mathcal{V}} \theta_{ii}x_i^2 + \sum_{(v_i,v_j) \in \mathcal{E}} \theta_{ij}x_i x_j \right), \end{equation} where $\theta_{ij}$ represents the $ij$-th entry of $\mathbf{\Theta}$, and $Z(\mathbf{\Theta})$ is a normalization constant. Pairwise MRFs consist of two main classes: 1) Gaussian graphical models or Gaussian MRFs (GMRFs), in which the variables are continuous; 2) discrete MRFs, in which the variables are discrete. In the case of a (zero-mean) GMRF, the joint probability can be written as follows: \begin{equation} p(\mathbf{x}|\mathbf{\Theta}) = \frac{|\mathbf{\Theta}|^{1/2}}{(2 \pi)^{N/2}} \text{exp} \big( -\frac{1}{2} \mathbf{x}^T \mathbf{\Theta} \mathbf{x} \big), \end{equation} where $\mathbf{\Theta}$ is the inverse covariance or \emph{precision} matrix. In this context, learning the graph structure boils down to learning the matrix $\mathbf{\Theta}$ that encodes pairwise conditional independence between the variables. It is common to assume, or take as a prior, that $\mathbf{\Theta}$ is sparse because: 1) real-world interactions are typically local; 2) the sparsity assumption makes learning computationally more tractable. In what follows, we review some key developments in learning Gaussian and discrete MRFs. For learning GMRFs, one of the first approaches is suggested in \cite{Dempster72}, where the author proposed to learn $\mathbf{\Theta}$ by sequentially pruning the smallest elements in the inverse of the sample covariance matrix $\widehat{\boldsymbol{\Sigma}} = \frac{1}{M-1}\mathbf{X} \mathbf{X}^T$ (see Fig.~\ref{fig:invcov}).
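This covariance-selection idea can be illustrated with a small simulation (a toy sketch of ours; the chain-structured ground truth and the pruning threshold of $0.3$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 20000

# Ground-truth sparse precision matrix Theta (a chain-structured GMRF).
Theta = 2.0 * np.eye(N)
for i in range(N - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.8

# Draw M samples x ~ N(0, Theta^{-1}); columns of X are the observations.
X = rng.multivariate_normal(np.zeros(N), np.linalg.inv(Theta), size=M).T

# Empirical covariance and its inverse (requires M > N for invertibility).
Sigma_hat = X @ X.T / (M - 1)
Theta_hat = np.linalg.inv(Sigma_hat)

# Prune: keep only off-diagonal entries above a magnitude threshold.
support = (np.abs(Theta_hat) > 0.3) & ~np.eye(N, dtype=bool)
print(support.astype(int))  # recovers the chain structure when M is large
```

With many samples the inverse sample covariance concentrates around $\mathbf{\Theta}$ and thresholding recovers the true edges; the text below explains why this simple rule degrades in the small-sample regime.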
Although it is based on a simple and effective rule, {this method does not perform well when the sample covariance is not a good approximation of the ``true'' covariance, often due to a small number of samples.} {In fact, the method cannot even be applied} when the sample size is smaller than the number of variables, in which case the sample covariance matrix is not invertible. \begin{figure}[t] \centering \subfloat[] {\includegraphics[width=3cm]{Fig3a}}~ \subfloat[] {\includegraphics[width=4.86cm]{Fig3b}}~ \subfloat[] {\includegraphics[width=3cm]{Fig3c}}~ \subfloat[] {\includegraphics[width=3cm]{Fig3d}} \caption{(a) A groundtruth precision $\mathbf{\Theta}$. (b) An observation matrix $\mathbf{X}$ drawn from a multivariate Gaussian distribution with $\mathbf{\Theta}$. (c) The sample covariance $\widehat{\boldsymbol{\Sigma}}$. (d) The inverse of the sample covariance $\widehat{\boldsymbol{\Sigma}}$.} \label{fig:invcov} \end{figure} {Since a graph is a representation of pairwise relationship, it is clear that learning a graph is equivalent to learning a neighborhood for each vertex, i.e., the other vertices to which it is connected. In this case, it is natural to assume that the observation at a particular vertex may be represented by observations at the neighboring vertices. Based on this assumption,} the authors in \cite{Meinshausen06} have proposed to approximate the observation at each variable as a sparse linear combination of the observations at other variables. 
For a variable $x_1$, for instance, this approximation leads to a Lasso regression problem \cite{Tibshirani96} of the form: \begin{equation} \underset{\boldsymbol{\beta}_1}{\text{min}}~ || \mathbf{X}_1 - \mathbf{X}_{\backslash 1}\boldsymbol{\beta}_1||_2^2 + \lambda || \boldsymbol{\beta}_1||_1, \label{eq:ns} \end{equation} where $\mathbf{X}_1$ and $\mathbf{X}_{\backslash 1}$ represent the observations on the variable $x_1$ (i.e., transpose of the first row of $\mathbf{X}$) and the rest of the variables, respectively, and $\boldsymbol{\beta}_1 \in \mathbb{R}^{N-1}$ is a vector of coefficients for $x_1$ (see Fig.~\ref{fig:ns}(a)-(b)). In Eq.~(\ref{eq:ns}), the first term can be interpreted as \errata{the negative} local log-likelihood of $\boldsymbol{\beta}_1$ and the $L^1$ penalty is added to enforce its sparsity, with a regularization parameter $\lambda$ balancing the two terms. The same procedure is then repeated for all the variables (or vertices). \errata{Finally, a connection between a pair of vertices $v_i$ and $v_j$ is established if either of $\beta_{ij}$ and $\beta_{ji}$ is nonzero, or both (notice that it should not be interpreted that $\beta_{ij}$ and $\beta_{ji}$ are directly related to the corresponding entries in the precision matrix $\boldsymbol{\Theta}$). 
This \emph{neighborhood selection} approach using the Lasso is intuitive with certain theoretical guarantees \cite{Meinshausen06}; however, it does not involve solving an optimization problem whose objective is an explicit function of $\boldsymbol{\Theta}$.} {Instead of per-node neighborhood selection, the works in \cite{Yuan06,Banerjee08,Friedman08} have proposed a popular method for estimating an inverse covariance or precision matrix at once, which is based on maximum likelihood estimation.} Specifically, the so-called \emph{graphical Lasso} method aims to solve the following problem: \begin{equation} \underset{\boldsymbol{\Theta}}{\text{max}}~\text{log}~\text{det} \boldsymbol{\Theta} - \mathrm{tr}(\widehat{\boldsymbol{\Sigma}}\boldsymbol{\Theta}) - \rho ||\boldsymbol{\Theta}||_1, \label{eq:gLasso} \end{equation} where $\widehat{\boldsymbol{\Sigma}}$ is the sample covariance matrix\footnote{\errata{In the graphical Lasso formulation, the sample covariance is computed as $\widehat{\boldsymbol{\Sigma}} = \frac{1}{M}\mathbf{X} \mathbf{X}^T$.}}, and $\text{det}(\cdot)$ and $\mathrm{tr}(\cdot)$ represent the determinant and trace operators, respectively. The first two terms together can be interpreted as the log-likelihood under a GMRF and {the entry-wise $L^1$ norm of $\boldsymbol{\Theta}$} is added to enforce sparsity of the connections with a regularization parameter $\rho$. The main difference between this approach and the neighborhood selection method of \cite{Meinshausen06} is that the optimization in the latter is decoupled for each vertex, while the one in graphical Lasso is coupled, which can be essential for stability under noise. Although the problem of Eq.~(\ref{eq:gLasso}) is convex, log-determinant programs are in general computationally demanding. Nevertheless, a number of efficient approaches have been proposed specifically for the graphical Lasso. 
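Problem formulations such as Eq.~(\ref{eq:gLasso}) can also be solved with off-the-shelf software. The sketch below uses scikit-learn's \texttt{GraphicalLasso} estimator on synthetic data (assuming scikit-learn is available; its \texttt{alpha} parameter plays the role of $\rho$, and it expects samples in rows, i.e., $\mathbf{X}^T$ in our notation):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
N, M = 5, 5000

# Synthetic GMRF with a chain-structured sparse precision matrix.
Theta = 2.0 * np.eye(N)
for i in range(N - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.8
samples = rng.multivariate_normal(np.zeros(N), np.linalg.inv(Theta), size=M)

# Graphical Lasso: L1-penalized maximum likelihood for the precision matrix;
# alpha corresponds to the regularization weight rho in Eq. (gLasso).
model = GraphicalLasso(alpha=0.05).fit(samples)
Theta_est = model.precision_

# Off-diagonal support of the estimate gives the learned graph.
support = (np.abs(Theta_est) > 1e-3) & ~np.eye(N, dtype=bool)
print(support.astype(int))
```

Unlike the per-node Lasso regressions, this solves a single coupled problem over the full matrix $\boldsymbol{\Theta}$.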
For example, the work in \cite{Hsieh11} proposes a quadratic approximation of the Gaussian negative log-likelihood that can significantly speed up optimization. \begin{figure}[t] \centering \subfloat[] {\includegraphics[width=4cm]{Fig4a}}~ \subfloat[] {\includegraphics[width=6cm]{Fig4b}}~ \subfloat[] {\includegraphics[width=6cm]{Fig4c}} \caption{(a) Learning graphical models by neighborhood selection. (b) Neighborhood selection via the Lasso regression for Gaussian MRFs. (c) Neighborhood selection via logistic regression for discrete MRFs.} \label{fig:ns} \end{figure} Unlike in GMRFs, the variables in discrete MRFs take discrete values. One popular example is the binary Ising model \cite{Cipra87}. Various learning methods may be applied in such cases, and one notable example is the approach proposed in \cite{Ravikumar10}, based on the idea of neighborhood selection similar to that in \cite{Meinshausen06}. Specifically, given the exponential family distribution introduced before, it is easy to verify that the conditional probability of one variable given the rest, e.g., $p(\mathbf{X}_{1m}|\mathbf{X}_{{\backslash 1}m})$ for variable $x_1$, where $\mathbf{X}_{1m}$ and $\mathbf{X}_{{\backslash 1}m}$ respectively represent the first entry and the rest of the $m$-th column of $\mathbf{X}$ (see Fig.~\ref{fig:ns}(c)), follows the form of a logistic function. Therefore, $x_1$ can be considered as the dependent variable in a logistic regression where all the other variables serve as independent variables.
To learn sparse connections within the neighborhood of this vertex, the authors of \cite{Ravikumar10} have proposed to solve an $L^1$-regularized logistic regression: \begin{equation} \errata{\underset{\boldsymbol{\beta}_1}{\text{max}}~ \sum_{m=1}^M \text{log}~ p_{\boldsymbol{\beta}_1} (\mathbf{X}_{1m}|\mathbf{X}_{{\backslash 1}m}) - \lambda || \boldsymbol{\beta}_1||_1.} \end{equation} The same procedure is then repeated for the rest of the vertices to compute the final connection matrix, similar to that in \cite{Meinshausen06}. Most previous approaches for learning GMRFs recover a precision matrix with both positive and negative entries. A positive off-diagonal entry in the precision matrix implies a negative partial correlation between the two random variables, which is difficult to interpret in some contexts, such as road traffic networks. For such application settings, it is therefore desirable to learn a graph topology with non-negative weights. To this end, the authors in \cite{Slawski15} have proposed to select the precision matrix from the family of the so-called M-matrices \cite{Poole74}, which are symmetric and positive definite matrices with non-positive off-diagonal entries, leading to the \emph{attractive} GMRFs. 
Since the graph Laplacian matrix $\mathbf{L}$ is a (singular) M-matrix that {uniquely determines the adjacency matrix $\mathbf{W}$}, {it is a popular modeling choice and numerous papers have focused on learning $\mathbf{L}$ as a specific instance of the precision matrices.} One notable example is the work in \cite{Lake10}, which adapts the graphical Lasso formulation of Eq.~(\ref{eq:gLasso}) and proposes to solve the following problem{\footnote{{The exact formulation of the optimization problem in \cite{Lake10} is in a slightly different but equivalent form, due to the following relationship: $||\boldsymbol{\Theta}||_1 = ||\mathbf{L}||_1 + \frac{1}{\sigma^2}N = 2||\mathbf{W}||_1 + \frac{1}{\sigma^2}N.$ We therefore choose the formulation in Eq.~(\ref{eq:lake}) as it illustrates the connection with the graphical Lasso formulation in a straightforward way.}}}: \begin{equation} \begin{split} \underset{\boldsymbol{\Theta},~\sigma^2}{\mbox{maximize}} ~~~ & \text{log}~\text{det} \boldsymbol{\Theta} - \mathrm{tr}(\frac{1}{M}\mathbf{X} \mathbf{X}^T \boldsymbol{\Theta}) - \rho ||\boldsymbol{\Theta}||_1, \\ \mbox{subject to} ~~~ & \mathbf{\Theta} = \mathbf{L} + \frac{1}{\sigma^2} \mathbf{I},~\mathbf{L} \in \mathcal{L}, \end{split} \label{eq:lake} \end{equation} \noindent where $\mathbf{I}$ is the identity matrix, ${\sigma^2>0}$ is the a priori feature variance, {$\mathcal{L}$ is the set of valid graph Laplacian matrices, and $||\cdot||_1$ represents the entry-wise $L^1$ norm.} In Eq.~(\ref{eq:lake}), the precision matrix $\boldsymbol{\Theta}$ is modeled as a regularized graph Laplacian matrix (hence full-rank). By solving for it, the authors obtain the graph Laplacian matrix, or in other words, an adjacency matrix with non-negative weights. 
Notice that the trace term in Eq.~(\ref{eq:lake}) includes the so-called Laplacian quadratic form $\mathbf{X}^T \mathbf{L} \mathbf{X}$, which measures the smoothness of the data on the graph and has also been used in other approaches that are not necessarily developed from the viewpoint of inverse covariance estimation. For instance, the works in \cite{Daitch09} and \cite{Hu15} have proposed to learn the graph by minimizing quadratic forms that involve powers of the graph Laplacian matrix $\mathbf{L}$. When the power of the Laplacian is set to two, this is equivalent to the locally linear embedding criterion proposed in \cite{Roweis00} for nonlinear dimensionality reduction. {As we shall see in the following section, the criterion of signal smoothness has also been adopted in one of the GSP models for graph inference.} \subsection{Physically-motivated models} \label{sec:physics} {While the above methods mostly exploit statistical properties for graph inference, in particular the conditional independence structure between random variables, another family of approaches tackles the problem by taking a physically-motivated perspective.} {In this case, the observations $\mathbf{X}$ are considered as outcomes of some physical phenomena on the graph, {specified by the function $\mathcal{F}(\mathcal{G})$,} and the inference problem consists in capturing the structure inherent to the physics of the observed data.} Two examples of such methods are 1) network tomography, where the physical process models data actually transmitted in a communication network, and 2) epidemic or information propagation models, where the physical process represents a disease spreading over a contact network or a meme spreading over social media. The field of \emph{network tomography} broadly concerns methods for inferring properties of networks from indirect observations \cite{Castro2004network}. 
It is most commonly used in the context of telecommunication networks, where the information to be inferred may include {the network routes}, or properties such as the available bandwidth or reliability of each link in the network. {For example, end-to-end measurements are acquired by sending a sequence of packets from one source to many destinations, and sequences of received packets are used to infer the internal network topology.} The seminal work on this problem aimed to infer the routing tree from one source to multiple destinations \cite{Ratnasamy1999inference}. Subsequent work considered interleaving measurements from multiple sources to the same destinations simultaneously to infer general topologies \cite{Rabbat2006multiple}. {These methods can be interpreted as choosing the function $\mathcal{F}(\mathcal{G})$ in our formulation as one that measures network responses by exhaustively sending probes between all possible pairs of end-hosts.} Consequently, this may impose a significant amount of measurement traffic on the network. In order to reduce this traffic, approaches based on active sampling have also been proposed \cite{Sattari2014active}. {\emph{Information propagation} models have been applied to infer latent biological, social and financial networks based on observations of epidemics, memes, or other signals diffusing over them (e.g., \cite{GomezRodriguez_2010,Myers2010,GomezRodriguez2014,Nan2012}). For simplicity and consistency, in our discussion, we adopt the terminology of epidemiology. This type of model is characterized by three main components: (a) the \emph{nodes}, (b) an \emph{infection process} (i.e., the change in the state of a node that is triggered by neighboring nodes in the network), and (c) the \emph{causality} (i.e., the underlying graph structure based on which the infection is propagated).
Given a known graph structure, epidemic processes over graphs have been well-studied through popular models in which nodes may be susceptible, infected, and possibly recovered \cite{PastorSatorras15}. On the other hand, when the structure is not known beforehand, it may be inferred by considering the propagation of contagions over the edges of an unknown network, usually given only the time steps when nodes became infected.} {A (fully-observed) cascade may be represented by the sequence of triples $\{(v'_{p}, v_{p}, t_p)\}_{p = 0}^P$, where $P \le N$, representing that node $v'_{p}$ infected its neighbor $v_{p}$ at time $t_p$. In many applications, one may observe when a node becomes infected, but not which neighbor infected it (see Fig.~\ref{fig:cascade} for an illustration). Then, the task is to recover a graph $\mathcal{G}$ given the (partial) observations $\{(v_{p}, t_p)\}_{p=0}^P$, usually for a number of such cascades. In this case, the set of nodes is given and the goal is to recover the edge structure. The common convention is to shift the infection times so that the initial infection in each cascade always occurs at time $t_0 = 0$. Equivalently, let $\mathbf{x}$ denote a length-$N$ vector where $x_i$ is the time when $v_i$ is infected, using the convention that $x_i = \infty$ if $v_i$ is not infected in this cascade. The observations from $M$ cascades can then be represented in an $N$-by-$M$ matrix $\mathbf{X} = \mathcal{F}(\mathcal{G})$.} {Methods for inferring networks from information cascades can be generally divided into two main categories depending on whether they are based on homogeneous or heterogeneous models. Methods based on \emph{homogeneous} models assume that cascades propagate in a statistically identical manner across all edges. For example, one model treats entries $w_{ij}$ of the (unknown) adjacency matrix as representing the conditional probability that $v_i$ infects $v_j$ given $v_i$ is infected~\cite{Myers2010}.
In addition, a transmission time model $h(t)$ is assumed known such that the likelihood that $v_i$ infects $v_j$ at time $x_j$ given that $v_i$ was infected at time $x_i < x_j$ is: \begin{equation} p(x_j | x_i, w_{ij}) = h(x_j - x_i) w_{ij}. \end{equation} Here, $h(t)$ is taken to be zero for $t < 0$, and typically $h(t)$ also decays to zero as $t \rightarrow \infty$.} {Assuming that the function $h(t)$ is given, the inference problem reduces to finding the conditional probabilities $w_{ij}$. Given the set of nodes infected as well as the time of infection in each observed cascade, and assuming that cascades are independent and identically distributed, the likelihood of a graph with adjacency matrix $\mathbf{W}$ (with $w_{ij}$ being the $ij$-th entry) is derived explicitly in \cite{Myers2010}, and it is further shown that maximizing this likelihood can be recast as an equivalent geometric program, so that convex optimization techniques can be applied to the problem of inferring $\mathbf{W}$.} {A similar model is considered in~\cite{GomezRodriguez_2010}, in which the conditional transmission probabilities are taken to be the same on all edges, i.e., $w_{ij} = \beta \cdot \mathbf{1}\{(v_i,v_j) \in \mathcal{E}\}$ where $\mathbf{1}\{\cdot \}$ is an indicator function, for a given constant $\beta \in (0,1)$. The task therefore reduces to determining where there are edges, which is a discrete optimization problem. The maximum likelihood objective is shown to be submodular in~\cite{GomezRodriguez_2010}, and an edge selection scheme based on greedy optimization obtains the optimal likelihood up to a constant factor. 
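As a toy illustration of the homogeneous cascade model above, the following sketch simulates cascades over a small directed graph and stacks the infection times into the $N$-by-$M$ observation matrix $\mathbf{X} = \mathcal{F}(\mathcal{G})$. All weights are hypothetical, and $h(t)$ is taken to be a unit-rate exponential for concreteness; infection times follow the earliest-arrival semantics of cascade spreading.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy homogeneous model: w[i, j] is the probability that v_i infects v_j once
# v_i is infected; transmission delays follow the density h(t) = Exp(1).
W = np.array([[0.0, 0.9, 0.6, 0.0],
              [0.0, 0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])
N = W.shape[0]

def simulate_cascade(W, source):
    """Return the vector of infection times (np.inf if never infected)."""
    # Sample, per edge, whether transmission succeeds and with what delay;
    # infection times are then the earliest-arrival (shortest-path) times.
    success = rng.random((N, N)) < W
    delay = np.where(success, rng.exponential(1.0, (N, N)), np.inf)
    x = np.full(N, np.inf)
    x[source] = 0.0                       # convention: initial infection at t_0 = 0
    done = np.zeros(N, dtype=bool)
    for _ in range(N):                    # Dijkstra over the sampled delays
        u = int(np.argmin(np.where(done, np.inf, x)))
        if np.isinf(x[u]):
            break
        done[u] = True
        x = np.minimum(x, x[u] + delay[u])
    return x

# M cascades stacked into the N-by-M observation matrix X = F(G).
M = 3
X = np.column_stack([simulate_cascade(W, source=0) for _ in range(M)])
```

The inference problem then runs in the opposite direction: given only `X`, recover `W`.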
The main drawback of homogeneous methods is the strong underlying assumption that cascades propagate in an identical manner across all edges in the network.} \begin{figure}[t] \centering \subfloat[] {\includegraphics[width=8cm]{Fig5a}}~ \subfloat[] {\includegraphics[width=8cm]{Fig5b}} \caption{(a) A graph with directed edges indicating possible directions of spreading. (b) Observations of cascades spreading over the graph. We observe the times when nodes became infected (i.e., the cascade reached a node) but do not observe from which neighbor it was infected. Figure inspired by the one in \cite{GomezRodriguez2014}.} \label{fig:cascade} \end{figure} {Methods based on \emph{heterogeneous} models relax this requirement and allow for cascades to propagate at different rates across different edges. The \textsc{NetRate} algorithm~\cite{GomezRodriguez2014} is a prototypical example of this category, in which one assumes a parametric form for the edge conditional likelihood $p(x_j | x_i, w_{ij})$. For example, in an exponential model, $p(x_j | x_i, w_{ij}) = w_{ij} e^{-w_{ij} (x_j - x_i)} \cdot \mathbf{1}\{x_j > x_i\}$. If we write $P(x_j | x_i, w_{ij}) = \int_{x_i}^{x_j} p(t | x_i, w_{ij}) \;dt$ for the cumulative distribution function, then the \emph{survival function} \begin{equation} \mathop{\operatorname{Sur}}(x_j | x_i, w_{ij}) := 1 - P(x_j | x_i, w_{ij}) \end{equation} is the probability that $v_j$ is not infected by $v_i$ by time $x_j$ given that $v_i$ was infected at time $x_i$.
Furthermore, the \emph{hazard function} \begin{equation} \mathop{\operatorname{Haz}}(x_j | x_i, w_{ij}) := \frac{p(x_j | x_i, w_{ij})}{\mathop{\operatorname{Sur}}(x_j | x_i, w_{ij})} \end{equation} is the instantaneous probability, at time $x_j$, that $v_j$ is infected by $v_i$ given that $v_i$ was infected at time $x_i$.} { With this notation, the likelihood of a given cascade observation $\mathbf{x}$ that is observed up to time $T = \max\{x_v < \infty \colon v \in \mathcal{V}\}$ is~\cite{GomezRodriguez2014}: \begin{equation} \begin{split} p(\mathbf{x} | \mathbf{W}) &= \prod_{i : x_i \le T} \prod_{j : x_j > T} \mathop{\operatorname{Sur}}(T | x_i, w_{ij}) \\ &\quad \times \prod_{k: x_k < x_i} \mathop{\operatorname{Sur}}(x_i | x_k, w_{ki}) \sum_{l : x_l < x_i} \mathop{\operatorname{Haz}}(x_i | x_l, w_{li}). \end{split} \end{equation} When the survival and hazard functions are log-concave (which is the case for exponentially-distributed edge conditional likelihoods, as well as others), then the resulting maximum likelihood inference problem is shown to be convex in~\cite{GomezRodriguez2014}. In fact, the overall maximum likelihood problem decomposes into per-node problems which can be solved using a soft-thresholding algorithm, in a manner similar to~\cite{Meinshausen06}. Furthermore, conditions are provided in~\cite{GomezRodriguez2016} under which the resulting estimate is shown to be consistent (as the number of observed cascades tends to infinity), and sample complexity results are provided, quantifying how quickly the error decays as a function of the number of observed cascades.} {The above heterogeneous approach requires adopting a parametric model for the edge conditional likelihood, which may be difficult to justify in some settings. The approach described in~\cite{Nan2012} uses kernel methods to estimate the edge conditional likelihoods in a non-parametric manner. 
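For concreteness, the exponential instance of the survival and hazard functions, together with the log of the cascade likelihood above, can be sketched numerically as follows. All weights are hypothetical; note that the source node of a cascade, having no earlier-infected node, contributes no hazard term.

```python
import numpy as np

def sur(xj, xi, w):
    """Survival function of the exponential model: P(v_j not infected by v_i at xj)."""
    return np.exp(-w * (xj - xi))

def haz(xj, xi, w):
    """Hazard function; for the exponential model p/Sur is just the rate w."""
    return w

def cascade_log_likelihood(x, W, T):
    """Log of the cascade likelihood for infection times x (np.inf = not infected)."""
    infected = [i for i in range(len(x)) if x[i] <= T]
    ll = 0.0
    for i in infected:
        # Nodes still uninfected at the horizon T survived the pressure from v_i.
        for j in range(len(x)):
            if x[j] > T:
                ll += np.log(sur(T, x[i], W[i, j]))
        earlier = [k for k in infected if x[k] < x[i]]
        # v_i survived every earlier-infected node until x[i] ...
        for k in earlier:
            ll += np.log(sur(x[i], x[k], W[k, i]))
        # ... and was infected at x[i] with the total hazard of the earlier nodes.
        if earlier:
            ll += np.log(sum(haz(x[i], x[l], W[l, i]) for l in earlier))
    return ll

# Three nodes: v0 infected at 0, v1 at 0.5, v2 never (within horizon T = 0.5).
x = np.array([0.0, 0.5, np.inf])
W = np.array([[0.0, 1.0, 0.4],
              [0.0, 0.0, 0.2],
              [0.0, 0.0, 0.0]])
ll = cascade_log_likelihood(x, W, T=0.5)
```

Maximizing this quantity over $\mathbf{W}$ across many cascades is precisely the (convex, for log-concave models) problem solved by \textsc{NetRate}.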
More recently, a Bayesian approach to infer a graph topology from diffusion observations has been proposed, in which the infection time is not directly observed \cite{Shaghaghian16}, but rather the state of each node (susceptible or infected) is a latent variable affecting the statistics of the signal which is observed at each node.} {In summary, many physically-motivated approaches consider the function $\mathcal{F}(\mathcal{G})$ to be an information propagation model on the network, and generally fall under the broader umbrella of probabilistic inference of the network from diffusion or epidemic data. Notice, however, that despite its probabilistic nature, such inference is carried out with a specific model of the physical phenomena in mind, instead of using a general probability distribution of the observations considered by statistical models in the previous section. In addition, for both methods in network tomography and those based on information propagation models, the recovered network typically indicates only the existence of edges and does not promote a specific graph-signal structure. As we shall see, this is a clear difference from the GSP models that are discussed in the following section.} \section{Graph learning: A signal representation perspective} \label{sec:gsp} {There is clearly a growing interest} in the signal processing community in analyzing signals that are supported on the vertex set of weighted graphs, leading to the fast-growing field of graph signal processing \cite{Shuman13,Sandryhaila13}. GSP enables the processing and analysis of signals that lie {on structured but irregular domains} by generalizing classical signal processing concepts, tools and methods, such as time-frequency analysis and filtering, on graphs \cite{Shuman13,Sandryhaila13,Ortega18}. Consider a weighted graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with the vertex set $\mathcal{V}$ of cardinality $N$ and edge set $\mathcal{E}$.
A graph signal is defined as a function $\mathbf{x}: \mathcal{V} \rightarrow \mathbb{R}$ that assigns a scalar value to each vertex, and it can be represented as a vector in $\mathbb{R}^N$. {When the graph is undirected,} the combinatorial or unnormalized graph Laplacian matrix $\mathbf{L}$ is defined as: \begin{equation} \mathbf{L}=\mathbf{D}-\mathbf{W}, \label{eq:laplacian} \end{equation} where $\mathbf{D}$ is the degree matrix that contains the degrees of the vertices along the diagonal, and $\mathbf{W}$ is the weighted adjacency matrix of $\mathcal{G}$. Since $\mathbf{L}$ is a real and symmetric matrix, it admits a complete set of orthonormal eigenvectors with the associated eigenvalues via the eigendecomposition: \begin{equation} \mathbf{L} = \boldsymbol{\chi} \mathbf{\Lambda} \boldsymbol{\chi}^T, \label{eq:eigendecomp} \end{equation} where $\boldsymbol{\chi}$ is the eigenvector matrix that contains the eigenvectors as columns, and $\boldsymbol{\Lambda}$ is the eigenvalue matrix $\textbf{diag}(\lambda_0, \lambda_1, \cdots, \lambda_{N-1})$ that contains the eigenvalues along the diagonal. Conventionally, the eigenvalues are sorted in an increasing order, and we have for a connected graph: $0 = \lambda_0 < \lambda_1 \leq \cdots \leq \lambda_{N-1}$. The Laplacian matrix $\mathbf{L}$ enables a generalization of the notion of frequency and Fourier transform for graph signals \cite{Hammond11}. Alternatively, a graph Fourier transform may also be defined using the adjacency matrix $\mathbf{W}$, and this definition can be used in directed graphs \cite{Sandryhaila13}. Furthermore, both $\mathbf{L}$ and $\mathbf{W}$ can be interpreted as a general class of shift operators on graphs \cite{Sandryhaila13}. {The above operators are used to represent and process signals on a graph in a similar way as in traditional signal processing.
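These definitions translate directly into code; a minimal numpy sketch (with a hypothetical toy adjacency matrix) computes the combinatorial Laplacian and its eigendecomposition:

```python
import numpy as np

# Adjacency matrix of a small weighted undirected graph (toy weights).
W = np.array([[0., 1., 0., 2.],
              [1., 0., 3., 0.],
              [0., 3., 0., 1.],
              [2., 0., 1., 0.]])

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # combinatorial graph Laplacian

# L is real and symmetric, so eigh returns orthonormal eigenvectors (columns
# of chi) and eigenvalues sorted in increasing order; for a connected graph
# 0 = lam_0 < lam_1 <= ... and the constant vector spans the nullspace.
lam, chi = np.linalg.eigh(L)
```

The columns of `chi` play the role of graph Fourier modes in what follows.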
To see this more clearly, consider two equations of central importance in signal processing: $\mathcal{D}\mathbf{c}=\mathbf{x}$ for the synthesis view and $\mathcal{A}\mathbf{x}=\mathbf{b}$ for the analysis view. In the synthesis view, the signal $\mathbf{x}$ is represented as a linear combination of atoms that are columns of a representation matrix $\mathcal{D}$, with $\mathbf{c}$ being the coefficient vector. In the context of GSP, the representation $\mathcal{D}$ of a signal on the graph $\mathcal{G}$ is realized via $\mathcal{F}(\mathcal{G})$, i.e., a function of $\mathcal{G}$. In the analysis view of GSP, given $\mathcal{G}$ and $\mathbf{x}$ and with a design for $\mathcal{F}$ (that defines $\mathcal{A}$), we study the characteristics of $\mathbf{x}$ encoded in {$\mathbf{b}$}. Examples include the generalization of the Fourier and wavelet transforms for graph signals \cite{Hammond11,Sandryhaila13}, which are defined based on mathematical properties of a given graph $\mathcal{G}$. Alternatively, graph dictionaries can be trained by taking into account information from both $\mathcal{G}$ and $\mathbf{x}$ \cite{Zhang12,Thanou14}. } {Although most GSP approaches focus on developing techniques for analyzing signals on a predefined or known graph, there is a growing interest in addressing the problem of learning graph topologies from observed signals, especially in the case when the topology is not readily available (i.e., not pre-defined given the application domain). This offers a new perspective to the problem of graph learning, especially by focusing on the representation of the observed signals on the learned graph. Indeed, this corresponds to a synthesis view of the signal processing model: given $\mathbf{x}$, with some designs for $\mathcal{F}$ and $\mathbf{c}$, we would like to infer $\mathcal{G}$. 
Of crucial importance is therefore a model that captures the relationship between the signal representation and the graph, which, together with graph operators such as the adjacency/Laplacian matrices or the graph shift operators \cite{Sandryhaila13}, contributes to specific designs for $\mathcal{F}$. Moreover, assumptions on the structure or properties of $\mathbf{c}$ also play an important role in determining the characteristics of the resulting signal representation. Graph learning frameworks that are developed from a signal representation perspective therefore have the unique advantage of enforcing certain desirable representations of the observed signals, by exploiting the notions of frequency-domain analysis and filtering operations on graphs. } A graph signal representation perspective is complementary to the existing ones that we discussed in the previous section. For instance, from the statistical perspective, the majority of approaches for learning graphical models do not lead directly to a graph topology with non-negative edge weights, {a property that is often desirable in real world applications,} and very little work has studied the case of inferring attractive GMRFs. Furthermore, the joint distribution of the random variables is mostly imposed in a global manner, while it is not easy to encourage localized behavior (i.e., about a subset of the variables) on the learned graph. The physics perspective, on the other hand, mostly focuses on a few conventional models such as network diffusion and cascades. {It remains however an open question how observations that do not necessarily come from a well-defined physical phenomenon can be exploited to infer the underlying structure of the data. The graph signal processing viewpoint introduces one more important ingredient that can be used as a regularizer for complicated inference problems: the frequency or spectral representation of the observations. 
In what follows, we will review three models for signal representation on graphs, which lead to various methodologies for inferring graph topologies from the observed signals.} \subsection{Models based on signal smoothness} \label{sec:smoothness} The first model we consider is a smoothness model, under which the signal takes similar values at neighboring vertices. Practical examples of this model could be temperature observed at different locations in a flat geographical region, or ratings on movies of individuals in a social network. The measure of smoothness of a signal $\mathbf{x}$ on the graph $\mathcal{G}$ is usually defined by the so-called Laplacian quadratic form: \begin{equation} \mathcal{Q}(\mathbf{L}) = \mathbf{x}^T \mathbf{L} \mathbf{x} = \frac{1}{2} \sum_{i,j} w_{ij} \left(\mathbf{x}(i)-\mathbf{x}(j)\right)^2, \label{eq:lapquad} \end{equation} {where $w_{ij}$ is the $ij$-th entry of the adjacency matrix $\mathbf{W}$ and $\mathbf{L}$ is the Laplacian matrix.} Clearly, $\mathcal{Q}(\mathbf{L}) =0$ when $\mathbf{x}$ is a constant signal over the graph (i.e., a DC signal with no variation). More generally, we can see that given the same $L^2$-norm, the smaller the value $\mathcal{Q}(\mathbf{L})$, the more similar are the signal values at neighboring vertices (i.e., the lower the variation of $\mathbf{x}$ is with respect to $\mathcal{G}$). One natural criterion is therefore to learn a graph (or equivalently its Laplacian matrix $\mathbf{L}$) such that the signal variation on the resulting graph, i.e., the Laplacian quadratic $\mathcal{Q}(\mathbf{L})$, is small. As an example, for the same signal, learning a graph in Fig.~\ref{fig:exa1}(a) leads to a smoother signal representation in terms of $\mathcal{Q}(\mathbf{L})$ than that by learning a graph in Fig.~\ref{fig:exa1}(c). 
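As a quick numerical check of the Laplacian quadratic form on a toy triangle graph (hypothetical unit weights): a constant signal has zero variation, while a varying one does not.

```python
import numpy as np

def laplacian_quadratic(x, W):
    """Q = x^T L x = 0.5 * sum_{i,j} w_ij (x_i - x_j)^2."""
    L = np.diag(W.sum(axis=1)) - W
    return float(x @ L @ x)

# Triangle graph with unit weights (toy example).
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

q_const = laplacian_quadratic(np.ones(3), W)             # DC signal -> 0
q_var = laplacian_quadratic(np.array([1., -1., 0.]), W)  # varying signal -> 6
```

The smaller the value, the smoother the signal on this particular graph, which is exactly the quantity the learning criteria below minimize.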
The criterion of minimizing $\mathcal{Q}(\mathbf{L})$ or its variants with powers of $\mathbf{L}$ has been proposed in a number of existing approaches, such as the ones in \cite{Lake10,Daitch09,Hu15}. \begin{figure}[t] \centering \subfloat[] { \includegraphics[width=5cm]{Fig6a} \label{exa1a}}~~~ \subfloat[] { \includegraphics[width=6cm]{Fig6b} \label{exa1b}}\\ \subfloat[] { \includegraphics[width=5cm]{Fig6c} \label{exa1c}}~~~ \subfloat[] { \includegraphics[width=6cm]{Fig6d} \label{exa1d}}\\ \caption{(a) A smooth signal on the graph with $\mathcal{Q}(\mathbf{L})=1$ and (b) its Fourier coefficients in the graph spectral domain. The signal forms a smooth representation on the graph as its values vary slowly along the edges of the graph, and it mainly consists of low frequency components in the graph spectral domain. (c) A less smooth signal on the graph with $\mathcal{Q}(\mathbf{L})=5$ and (d) its Fourier coefficients in the graph spectral domain. A different choice of the graph leads to a different representation of the same signal.} \label{fig:exa1} \end{figure} {A procedure to infer a graph that favors the smoothness of the graph signals can be obtained} using the synthesis model $\mathcal{F}(\mathcal{G}) \mathbf{c}=\mathbf{x}$, and this is the idea behind the approaches in \cite{Dong16,Kalofolias16}. Specifically, consider a factor analysis model with the choice of $\mathcal{F}(\mathcal{G}) = \boldsymbol{\chi}$ and: \begin{equation} \mathbf{x} = \boldsymbol{\chi} \mathbf{c} + \boldsymbol{\epsilon}, \end{equation} where $\boldsymbol{\chi}$ is the eigenvector matrix of the Laplacian $\mathbf{L}$ and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma_\epsilon^2 \mathbf{I})$ is additive Gaussian noise.
With a further assumption that $\mathbf{c}$ follows a Gaussian distribution with a precision matrix $\mathbf{\Lambda}$: \begin{equation} \mathbf{c} \sim \mathcal{N}(\mathbf{0},\mathbf{\Lambda}^\dagger), \end{equation} where $\mathbf{\Lambda}^\dagger$ is the Moore-Penrose pseudo-inverse of the eigenvalue matrix of $\mathbf{L}$, and $\mathbf{c}$ and $\boldsymbol{\epsilon}$ are statistically independent, it is shown in \cite{Dong16} that the signal $\mathbf{x}$ follows a GMRF model: \begin{equation} \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{L}^\dagger + \sigma_\epsilon^2 \mathbf{I}). \end{equation} This leads to formulating the problem of jointly inferring the graph Laplacian and the latent variable $\mathbf{c}$ as: \begin{equation} \min_{\boldsymbol{\chi}, \boldsymbol{\Lambda}, \mathbf{c}} \|\mathbf{x} - \boldsymbol{\chi} \mathbf{c}\|_2^2 + \alpha~\mathbf{c}^T \boldsymbol{\Lambda} \mathbf{c}, \end{equation} where $\alpha$ is a non-negative regularization parameter related to the assumed noise level $\sigma_\epsilon^2$. By making the change of variables $\mathbf{y} = \boldsymbol{\chi} \mathbf{c}$ and recalling that the matrix of Laplacian eigenvectors $\boldsymbol{\chi}$ is orthonormal, one arrives at the equivalent problem: \begin{equation} \min_{\mathbf{L}, \mathbf{y}} \|\mathbf{x} - \mathbf{y} \|_2^2 + \alpha~\mathbf{y}^T \mathbf{L} \mathbf{y}, \end{equation} in which the Laplacian quadratic form appears. Therefore, these particular modeling choices for $\mathcal{F}$ and $\mathbf{c}$ lead to a procedure for inferring a graph over which the observation $\mathbf{x}$ is smooth. Note that there is a one-to-one mapping between the Laplacian matrix $\mathbf{L}$ and a weighted undirected graph, so inferring {$\mathbf{L}$} is equivalent to inferring $\mathcal{G}$.
By taking the matrix form of the observations and adding an $L^2$ penalty, the authors of \cite{Dong16} propose to solve the following optimization problem: \begin{equation} \begin{split} \underset{\mathbf{L},~\mathbf{Y}}{\mbox{minimize}} ~~~ & ||\mathbf{X}-\mathbf{Y}||_F^2 + \alpha~\mathrm{tr}(\mathbf{Y}^T \mathbf{L} \mathbf{Y}) + \beta ||\mathbf{L}||_F^2, \\ \mbox{subject to} ~~~ & \mathrm{tr}(\mathbf{L}) = N,~\mathbf{L} \in \mathcal{L}, \end{split} \label{eq:smooth} \end{equation} where $\mathrm{tr}(\cdot)$ and $||\cdot||_F$ represent the trace and Frobenius norm of a matrix, respectively, and $\alpha$ and $\beta$ are non-negative regularization parameters. The trace constraint acts as a normalization factor that fixes the volume of the graph and $\mathcal{L}$ is the set of valid Laplacian matrices. This constitutes the problem of finding $\mathbf{Y}$ that is close to the data observations $\mathbf{X}$, while ensuring at the same time that $\mathbf{Y}$ is smooth on the learned graph represented by its Laplacian matrix $\mathbf{L}$. The Frobenius norm of $\mathbf{L}$ is added to control the distribution of the edge weights and is inspired by the approach in \cite{Hu15}. The problem is solved via alternating minimization in \cite{Dong16}, in which the step of solving for $\mathbf{L}$ bears similarity to the optimization in \cite{Hu15}. A formulation similar to Eq.~(\ref{eq:smooth}) has further been studied in \cite{Kalofolias16} where reformulating the problem in terms of the adjacency matrix $\mathbf{W}$ leads to a more efficient algorithm computationally. Both works emphasize the characteristics of GSP-based graph learning approaches, i.e., enforcing desirable signal representations through the learning process. As we have seen, the smoothness property of the graph signal is associated with a multivariate Gaussian distribution, which is also behind the idea of classical approaches for learning graphical models, such as the graphical Lasso. 
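The optimization in Eq.~(\ref{eq:smooth}) can be sketched with a naive alternating scheme: the $\mathbf{Y}$-step has the closed form $\mathbf{Y} = (\mathbf{I} + \alpha\mathbf{L})^{-1}\mathbf{X}$, while the $\mathbf{L}$-step is replaced here by a simple projected-gradient update on the adjacency matrix. This is an illustrative sketch under hypothetical parameters, not the solvers used in the cited works.

```python
import numpy as np

def pairwise_sq_dists(Y):
    # Z_ij = ||y_i - y_j||^2 for the row signals y_i of Y
    sq = (Y ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T

def learn_graph_smooth(X, alpha=1.0, beta=0.5, n_iter=50, step=0.05):
    """Naive alternating scheme for the smoothness objective (a sketch only)."""
    N = X.shape[0]
    W = np.ones((N, N)) - np.eye(N)            # dense initialization
    for _ in range(n_iter):
        L = np.diag(W.sum(axis=1)) - W
        # Y-step: argmin ||X - Y||_F^2 + alpha * tr(Y^T L Y) in closed form.
        Y = np.linalg.solve(np.eye(N) + alpha * L, X)
        Z = pairwise_sq_dists(Y)               # tr(Y^T L Y) = 0.5 * sum_ij w_ij Z_ij
        d = W.sum(axis=1)
        # Gradient w.r.t. the symmetric pair w_ij = w_ji of
        # alpha * tr(Y^T L Y) + beta * ||L||_F^2.
        grad = alpha * Z + 2.0 * beta * (d[:, None] + d[None, :] + 2.0 * W)
        W = np.maximum(W - step * grad, 0.0)   # project: non-negative weights,
        np.fill_diagonal(W, 0.0)               # no self-loops, symmetric
        W = 0.5 * (W + W.T)
        W *= N / max(W.sum(), 1e-12)           # enforce tr(L) = sum_ij w_ij = N
    return W

# Signals constant within two clusters {v0, v1} and {v2, v3}.
X = np.array([[1., 1., 1.],
              [1., 1., 1.],
              [-1., -1., -1.],
              [-1., -1., -1.]])
W_hat = learn_graph_smooth(X)
```

Even this crude scheme concentrates weight on within-cluster edges, over which the signals are smooth.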
Following the same design for $\mathcal{F}$ and slightly different ones for $\mathbf{\Lambda}$ compared to \cite{Dong16,Kalofolias16}, the authors of \cite{Egilmez17} have proposed to solve an objective similar to that of the graphical Lasso, but with the constraint that the solutions correspond to different types of graph Laplacian matrices (e.g., the combinatorial or generalized Laplacian). {The basic idea in the latter approach is to identify GMRF models such that the precision matrix has the form of a graph Laplacian. Their work generalizes the classical graphical Lasso formulation and the formulation proposed in \cite{Lake10} to precision matrices restricted to have a Laplacian form. From a probabilistic perspective, the problems of interest correspond to a maximum a posteriori (MAP) parameter estimation of GMRF models, whose precision matrix is a graph Laplacian. In addition, the proposed approach allows for incorporating prior knowledge on graph connectivity, which, if applicable, can help improve the performance of the graph inference algorithm.} It is also worth mentioning that the approaches in \cite{Dong16,Kalofolias16,Egilmez17} learn a graph topology without any explicit constraint on the density of the edges in the learned graph. This information, if available, can be incorporated in the learning process. For example, the work of \cite{Chepuri17} has proposed to learn a graph with a targeted number of edges by selecting the ones that lead to the smallest $\mathcal{Q}(\mathbf{L})$. {To summarize, in the global smoothness model, the objective of minimizing the original or a variant of the Laplacian quadratic form $\mathcal{Q}(\mathbf{L})$ can be interpreted as having $\mathcal{F}(\mathcal{G})=\boldsymbol{\chi}$ and $\mathbf{c}$ following a multivariate Gaussian distribution. However, different learning algorithms may differ in both the output of the algorithm and the computational complexity.
For instance, the approaches in \cite{Kalofolias16,Chepuri17} learn an adjacency matrix, while the ones in \cite{Dong16,Egilmez17} learn a graph Laplacian matrix or its variants. In terms of complexity, the approaches in \cite{Dong16}, \cite{Kalofolias16} and \cite{Egilmez17} all solve a quadratic program (QP), with efficient implementations provided in the latter two based on primal-dual techniques and block-coordinate descent algorithms, respectively. On the other hand, the method in \cite{Chepuri17} involves a sorting algorithm that scales with the desired number of edges.} Finally, it is important to notice that $\mathcal{Q}(\mathbf{L})$ is a measure for \emph{global} smoothness on $\mathcal{G}$ in the sense that a small $\mathcal{Q}(\mathbf{L})$ implies a small variation of signal values along \emph{all} the edges in the graph, and the signal energy is mostly concentrated in the low frequency components in the graph spectral domain. Although global smoothness is often a desirable property for the signal representation, it can also be limiting in other scenarios. The second class of models that we introduce in the following section relaxes this constraint, by allowing for a more flexible representation of the signal in terms of its spectral characteristics. \subsection{Models based on spectral filtering of graph signals} \label{sec:filtering} {The second graph signal model that we consider goes beyond the global smoothness of the signal on the graph and focuses more on the general family of graph signals that are generated by applying a filtering operation to a latent (input) signal. In particular, the filtering operation may correspond to the diffusion of an input signal on the graph.} Depending on the type of the graph filter, and the input signal, the generated signal can have different frequency characteristics (e.g., bandpass signals) and localization properties (e.g., locally smooth signals). 
Moreover, this family of algorithms is more appropriate {than the one based on a globally smooth signal model} for learning graph topologies when the observations are the result of a diffusion process on a graph. Particularly, the graph diffusion model can be widely applied in real-world scenarios to understand the distribution of heat (sources) \cite{Chung07}, such as the propagation of a heat wave in geographical spaces, the movement of people in buildings or vehicles in cities, and the shift of people's interest towards certain subjects on social media platforms \cite{Ma_2008}. {{In this type of model, the graph filters and the input signals may be interpreted as the functions $\mathcal{F}(\mathcal{G})$ and the coefficients $\mathbf{c}$ in our synthesis model, respectively. The existing methods in the literature therefore differ in the assumptions on $\mathcal{F}$ as well as the distribution of $\mathbf{c}$.} In particular, $\mathcal{F}$ may be defined as an arbitrary (polynomial) function of a matrix related to the graph \cite{Segarra17a,Pasdeloup18}, or a well-known diffusion kernel such as the heat diffusion kernel \cite{Thanou17} (see Fig.~\ref{fig:diffusion} for two examples). The assumptions on $\mathbf{c}$ can also vary, with the most prevalent ones being a zero-mean Gaussian distribution and sparsity. Broadly speaking, we can divide the graph learning algorithms belonging to this family into two different categories. {The first category models the graph signals as stationary processes on graphs, where the eigenvectors of a graph operator, such as the adjacency/Laplacian matrix or a shift operator, are estimated from the sample covariance matrix of the observations in the first step. The eigenvalues are then estimated in the second step to obtain the operator.} The second category poses the graph learning problem as a dictionary learning problem with a prior on the coefficients $\mathbf{c}$.
In what follows, we will give a few representative examples of both categories, which differ in terms of graph filters as well as input signal characteristics.} \begin{figure}[t] \centering \includegraphics[width=16cm]{Fig7} \caption{Diffusion processes on the graph defined by a heat diffusion kernel (top right) and a graph shift operator (bottom right).} \label{fig:diffusion} \end{figure} \subsubsection{{Stationarity based learning frameworks}} The main characteristic of this line of work is that, given a stationarity assumption, the eigenvectors of a graph operator are estimated from the empirical covariance matrix of the observations. In particular, the graph signal $\mathbf{x}$ can be generated from: \begin{equation} \label{eq:diff_model} \mathbf{x} = \beta_0 \prod_{k = 1}^{\infty} (\mathbf{I} - \beta_k \mathbf{S})\mathbf{c} = \sum_{k = 0}^{\infty}\alpha_k \mathbf{S}^k \mathbf{c}, \end{equation} for some sets of parameters $\{\alpha_k\}$ and $\{\beta_k\}$. The latter implies that there exists an underlying diffusion process in the graph operator $\mathbf{S}$, which can be the adjacency matrix, Laplacian, or a variation thereof, that produces the signal $\mathbf{x}$ from the input signal $\mathbf{c}$. By assuming a finite polynomial degree $K$, the generative signal model becomes: \begin{equation} \mathbf{x} = \mathcal{F}(\mathcal{G})\mathbf{c} = \sum_{k = 0}^{K}\alpha_k \mathbf{S}^k \mathbf{c}, \end{equation} where the connectivity matrix of $\mathcal{G}$ is captured through the graph operator $\mathbf{S}$. Usually, $\mathbf{c}$ is assumed to be a zero-mean graph signal with covariance matrix $\mathbf{\Sigma}_c = \mathbb{E}[\mathbf{c} \mathbf{c}^T]$. In addition, if $\mathbf{c}$ is white and $\boldsymbol{\Sigma}_c = \mathbf{I}$, Eq.~(\ref{eq:diff_model}) is equivalent to assuming that the graph process $\mathbf{x}$ is \emph{stationary} in $\mathbf{S}$. This assumption of stationarity is important for estimating the eigenvectors of the graph operator.
Indeed, since the graph operator $\mathbf{S}$ is often a real and symmetric matrix, its eigenvectors are also eigenvectors of the covariance matrix $\boldsymbol{\Sigma}_x$. As a matter of fact: \begin{equation} \begin{split} \boldsymbol{\Sigma}_x &= \mathbb{E}[\mathbf{x} \mathbf{x}^T] = \mathbb{E}\left[\sum_{k = 0}^{K}\alpha_k \mathbf{S}^k \mathbf{c} \big(\sum_{k = 0}^{K}\alpha_k \mathbf{S}^k \mathbf{c}\big)^T\right]\\ &= \sum_{k = 0}^{K}\alpha_k \mathbf{S}^k \big(\sum_{k = 0}^{K}\alpha_k \mathbf{S}^k\big)^T = \boldsymbol{\chi} \left(\sum_{k = 0}^{K}\alpha_k \mathbf{\Lambda}^k\right)^2 \boldsymbol{\chi}^T, \end{split} \label{eq:stationarity_cov_step2} \end{equation} where we have used the assumption that $\mathbf{\Sigma}_c = \mathbf{I}$ and the eigendecomposition $\mathbf{S} = \boldsymbol{\chi} \mathbf{\Lambda} \boldsymbol{\chi}^T$. Given a sufficient number of graph signals, the eigenvectors of the graph operator $\mathbf{S}$ can therefore be approximated by the eigenvectors of the empirical covariance matrix of the observations. {To recover $\mathbf{S}$, the second step of the process would then be to learn its eigenvalues.} The authors in \cite{Pasdeloup18} follow the aforementioned reasoning and model the diffusion process by powers of the normalized Laplacian matrix. More precisely, they propose an algorithm for characterizing and then computing a set of admissible diffusion matrices, which defines a polytope. In general, this polytope corresponds to a continuum of graphs that are all consistent with the observations. To obtain a particular solution, an additional criterion is required. Two such criteria are proposed: one which encourages the resulting graph to be sparse, and another which encourages the recovered graph to be \emph{simple} (i.e., a graph in which no vertex has a connection to itself, hence an adjacency matrix with only zeros along the diagonal).
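The two-step rationale of Eq.~(\ref{eq:stationarity_cov_step2}) can be checked numerically: for a polynomial filter of a symmetric operator $\mathbf{S}$ driven by white input, the exact covariance commutes with $\mathbf{S}$ (hence shares its eigenvectors), while the empirical covariance does so only approximately. A toy sketch, with operator, filter coefficients and sample size all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Graph operator S: Laplacian of a 4-cycle; filter h(S) = sum_k alpha_k S^k.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
S = np.diag(A.sum(axis=1)) - A
alphas = [1.0, -0.3, 0.05]
H = sum(a * np.linalg.matrix_power(S, k) for k, a in enumerate(alphas))

# Stationary signals: x = H c with white input c  =>  Sigma_x = H H^T = H^2.
n_samples = 20000
X = H @ rng.standard_normal((4, n_samples))
Sigma_emp = X @ X.T / n_samples

# The exact covariance commutes with S (shared eigenvectors); the empirical
# one only approximately, which is what two-step methods must cope with.
Sigma_exact = H @ H.T
```

The gap between `Sigma_emp` and `Sigma_exact` shrinks with the number of samples, which is why eigenvector estimates degrade when few observations are available relative to the number of vertices.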
Similarly, in \cite{Segarra17a}, after obtaining the eigenvectors of a graph shift operator, the graph learning problem is equivalent to learning its eigenvalues, under the constraints that the shift operator obeys some desired properties such as sparsity. The optimization problem of \cite{Segarra17a} can be written as: \begin{equation} \begin{split} \underset{\mathbf{S} ,~\mathbf{\Psi}}{\mbox{minimize}} ~~~ & f(\mathbf{S}, \mathbf{\Psi}), \\ \mbox{subject to} ~~~ & \mathbf{S} = \mathbf{\chi} \mathbf{\Psi} \mathbf{\chi}^T, ~\mathbf{S} \in \mathcal{S}, \end{split} \label{eq:opt_prob_Segarra} \end{equation} where $f(\cdot)$ is a convex function applied on $\mathbf{S}$ that imposes the desired properties of $\mathbf{S}$, e.g., sparsity via an {entry-wise} $L^1$-norm, and $\mathcal{S}$ is the constraint set of $\mathbf{S}$ being a valid graph operator, e.g., non-negativity of the edge weights. The stationarity assumption is further relaxed in \cite{Shafipour18a}. However, all these approaches are based on the assumption that the sample covariance of the observed data and the graph operator have the same set of eigenvectors. Thus, their performance depends on the accuracy of eigenvectors obtained from the sample covariance of data, which can be difficult to guarantee especially when the number of data samples is small relative to the number of vertices in the graph. Given the limitation in estimating the eigenvectors of the graph operator from the sample covariance, the work of \cite{Egilmez18} has proposed a different approach. They have formulated the problem of graph learning as a graph system identification problem where, by assuming that the observed signals are output of a system with a graph-based filter given certain input, the goal is to learn a weighted graph (a graph Laplacian matrix) and the graph-based filter (a function of the graph Laplacian matrices). 
{The algorithm is based on the minimization of a regularized maximum likelihood criterion and it is valid under the assumption that the graph filters are one-to-one functions, i.e., increasing or decreasing in the space of eigenvalues, such as a heat diffusion kernel. More specifically, the system input is assumed to be multivariate white Gaussian noise (hence the stationarity assumption on the observed signals), and Eq.~(\ref{eq:stationarity_cov_step2}) is again used for computing an initial estimate of the eigenvectors. However, different from \cite{Segarra17a,Pasdeloup18} where these eigenvectors are used directly in forming the graph operators, in \cite{Egilmez18} they are used to compute the graph Laplacian: after initializing the filter parameter, the algorithm iterates between the following three steps until convergence: (a) pre-filter the sample covariance using the inverse of the graph filter; (b) estimate a graph Laplacian from the pre-filtered covariance matrix by solving a maximum likelihood optimization criterion, using an algorithm proposed in \cite{Egilmez17}; (c) update the filter parameter based on the current estimate of the graph Laplacian. Compared to \cite{Segarra17a,Pasdeloup18}, this approach may therefore lead to a more accurate inference of the graph operator (graph Laplacian in this case).} \subsubsection{{Graph dictionary based learning frameworks}} {Methods belonging to this category are based on the notion of spectral graph dictionaries for efficient signal representation. Specifically, the authors in \cite{Thanou17,Maretic17} assume a different graph signal diffusion model,} where the data consist of (sparse) combinations of overlapping local patterns that reside on the graph. These patterns may describe localized events or specific processes appearing at different vertices of the graph, such as traffic bottlenecks in transportation networks or rumor sources in social networks.
The graph signals are then viewed as observations at different time instants of a few processes that start at different nodes of an unknown graph and diffuse with time. Such signals can be represented as the combination of graph heat kernels or, more generally, of localized graph kernels. {Both algorithms can be considered as a generalization of dictionary learning to graph signals. Dictionary learning \cite{Rubinstein10,Tosic11} is an area of research in signal processing and machine learning where the signals are represented as a linear combination of simple components, i.e., atoms, in an (often) overcomplete basis. Signal decompositions with overcomplete dictionaries offer a way to efficiently approximate or process signals, such that the important characteristics are revealed by the sparse signal representation. Due to these desirable properties, dictionary learning has been extended to the representation of graph signals, and eventually has been applied to the problem of graph inference.} {Next, we provide more details on one of the above-mentioned algorithms.} The authors in \cite{Thanou17} have focused on graph signals generated from heat diffusion processes, which are useful in identifying processes evolving near a starting seed node. An illustrative example of such a signal can be found in Fig.~\ref{fig:diffusion_example}, where the graph Laplacian matrix is used to model the diffusion of heat over the graph. The concatenation of a set of heat diffusion operators at different time instants defines a graph dictionary that is then used to represent the graph signals. Hence, the graph signal model becomes: \begin{equation} \mathbf{x} = \mathcal{F}(\mathcal{G}) \mathbf{c} = [e^{-\tau_1 \mathbf{L}} ~ e^{-\tau_2 \mathbf{L}} ~ \cdots ~ e^{-\tau_S \mathbf{L}} ~ ]\mathbf{c} = \sum_{s=1}^S e^{-\tau_s \mathbf{L}}\mathbf{c}_s, \end{equation} which is a linear combination of different heat diffusion processes evolving on the graph.
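As a concrete illustration, this synthesis model can be instantiated numerically; the path graph, diffusion times, and seed nodes below are arbitrary toy choices, not from the cited work.

```python
import numpy as np

# Sketch of the heat-diffusion synthesis model
# x = [e^{-tau_1 L} ... e^{-tau_S L}] c: a graph signal built as the sum
# of a few heat diffusion processes started at seed nodes.
N, taus = 8, [1.0, 4.0]
W = np.diag(np.ones(N - 1), 1); W = W + W.T          # path graph adjacency
L = np.diag(W.sum(1)) - W                            # combinatorial Laplacian
lam, V = np.linalg.eigh(L)

def heat(t):                                         # heat kernel e^{-t L}
    return V @ np.diag(np.exp(-t * lam)) @ V.T

D = np.hstack([heat(t) for t in taus])               # graph dictionary
c = np.zeros(len(taus) * N)
c[2] = 1.0          # a unit heat source at node 2, diffused for tau_1
c[N + 6] = 0.5      # a weaker source at node 6, diffused for tau_2
x = D @ c           # synthesized graph signal
print(np.round(x, 3))
```

Each nonzero entry of `c` selects a seed node and a diffusion time, so `x` is a superposition of localized diffusion patterns, as in Fig.~\ref{fig:diffusion_example}.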
In this synthesis model, the coefficients $\mathbf{c}_s$ corresponding to a subdictionary $e^{-\tau_s \mathbf{L}}$ can be seen as a graph signal that goes through a heat diffusion process on the graph. The signal component $e^{-\tau_s \mathbf{L}} \mathbf{c}_s$ can then be interpreted as the result of this diffusion process at time $\tau_s$. It is interesting to notice that the parameter $\tau_s$ in the model carries a notion of scale. In particular, when $\tau_s$ is small, the $i$-th column of $e^{-\tau_s \mathbf{L}}$, i.e., the atom centered at node $v_i$ of the graph, is mainly localized in a small neighborhood of $v_i$. As $\tau_s$ becomes larger, it reflects information about the graph at a larger scale around $v_i$. Thus, the signal model can be seen as an additive model of $S$ initial graph signals that undergo diffusion processes with different diffusion times. \begin{figure*}[!t] \centering {\includegraphics[width=16cm]{Fig8}} \caption{(a) A graph signal. (b-e) Its decomposition in four localized simple components. Each component is a heat diffusion process $(e^{-\tau\mathbf{L}})$ at time $\tau$ that has started from different network nodes. The size and the color of each ball indicate the value of the signal at each vertex of the graph. Figure from \cite{Thanou17}.} \label{fig:diffusion_example} \end{figure*} An additional assumption on the above signal model is that the diffusion processes are expected to start from only a few nodes of the graph, at specific times, and spread over the entire graph over time\footnote{When no locality assumptions are imposed (e.g., large $\tau_s$) and a single diffusion kernel is used in the dictionary, the model reduces to a global smoothness model.}. This assumption can be formally captured by imposing a sparsity constraint on the latent variable $\mathbf{c}$.
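With the dictionary held fixed, this sparsity assumption leads to a standard $\ell_1$-regularized sparse coding subproblem, which can be solved, for instance, by iterative soft thresholding (ISTA). Below is a minimal sketch with a random toy dictionary; this is only the coefficient-update step, and jointly learning the Laplacian and the diffusion times requires the full alternating solver of the cited work.

```python
import numpy as np

# ISTA for min_c 0.5*||x - D c||_2^2 + alpha*||c||_1 with a fixed
# (random, toy) dictionary D and a sparse ground-truth code.
rng = np.random.default_rng(2)
D = rng.standard_normal((20, 40))
c_true = np.zeros(40); c_true[[3, 17]] = [2.0, -1.5]   # 2-sparse ground truth
x = D @ c_true
alpha = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2                 # 1/Lipschitz constant

c = np.zeros(40)
for _ in range(500):
    g = c - step * D.T @ (D @ c - x)                   # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - step * alpha, 0.0)  # soft threshold
print(np.nonzero(np.abs(c) > 0.1)[0])                  # recovered support
```

The recovered coefficients concentrate on the true seed positions (with a small $\ell_1$ shrinkage bias), which is exactly the behavior the sparsity prior on $\mathbf{c}$ is meant to induce.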
The graph learning problem can be cast as a structured dictionary learning problem, where the dictionary is defined by the unknown graph Laplacian matrix. The latter can then be estimated as a solution of the following optimization problem: \begin{equation} \begin{split} \underset{\mathbf{L} ,~\mathbf{C}, ~\tau}{\mbox{minimize}} ~~~ & \| \mathbf{X} -{\mathcal{D}} \mathbf{C} \|^{2}_{F} + \alpha \sum_{m = 1}^M\|\mathbf{c}_m\|_1 + \beta \|\mathbf{L}\|_F^2, \\ \mbox{subject to} ~~~ & {\mathcal{D}} = [e^{-\tau_1 \mathbf{L}} ~ e^{-\tau_2 \mathbf{L}} ~ \cdots ~ e^{-\tau_S \mathbf{L}} ~ ], ~ \{\tau_s\}_{s=1}^S \ge 0, \\ ~~~ & \mathrm{tr}(\mathbf{L}) = N, ~ \mathbf{L} \in \mathcal{L}, \end{split} \label{eq:opt_prob_heat_kernel} \end{equation} where the constraint set $\mathcal{L}$ is the same as that in Eq.~(\ref{eq:smooth}). Following the same reasoning, the work in \cite{Maretic17} extends the heat diffusion dictionary to the more general family of polynomial graph kernels. In short, these approaches recover the graph Laplacian matrix by assuming that the graph signals can be sparsely represented by a dictionary that consists of graph diffusion kernels. {In summary, from the perspective of spectral filtering, and in particular network diffusion, the function $\mathcal{F}(\mathcal{G})$ is one that helps define a meaningful diffusion process on the graph via the graph Laplacian, heat diffusion kernel, or other more general graph shift operators. This directly leads to the slightly different output of the learning algorithms in \cite{Pasdeloup18,Segarra17a,Thanou17}. The choice of the coefficients $\mathbf{c}$, on the other hand, determines specific characteristics of the graph signals, such as stationarity or sparsity.
In terms of computational complexity, the methods in \cite{Pasdeloup18,Segarra17a,Thanou17} all involve the computation of eigenvectors, followed by solving a linear program (LP), a semidefinite program (SDP), and an SDP, respectively.} \subsection{Models based on causal dependencies on graphs} \label{sec:causal} The models described in the previous two sections are mainly designed for learning undirected graphs, which is also the predominant consideration in the current GSP literature. Undirected graphs are associated with symmetric Laplacian matrices $\mathbf{L}$, which admit a complete set of orthonormal eigenvectors and real eigenvalues that conveniently provide a notion of frequency for signals on graphs. In some application domains, however, learning directed graphs is more desirable, as the directions of edges may be interpreted as causal dependencies between the variables that the vertices represent. For example, in brain analysis, even though the inference of an undirected \emph{functional connectivity} between the regions of interest (ROIs) is certainly of interest, a directed \emph{effective connectivity} may reveal extra information about the causal dependencies between those regions \cite{Friston94,Shen16}. The third class of models that we discuss is therefore one that allows for the inference of such directed dependencies. The authors of \cite{Mei17} have proposed a causal graph process based on the idea of sparse vector autoregressive (SVAR) estimation \cite{Songsiri10,Bolstad11}.
In their model, the signal at time step $t$, $\mathbf{x}[t]$, is represented as a linear combination of its observations in the past $T$ time steps and a random noise process $\mathbf{n}[t]$: \begin{equation} \begin{split} \mathbf{x}[t] &= \mathbf{n}[t] + \sum_{j=1}^T P_j(\mathbf{W}) \mathbf{x}[t-j]\\ &= \mathbf{n}[t] + \sum_{j=1}^T \sum_{k=0}^j a_{jk} {\mathbf{W}}^k \mathbf{x}[t-j], \end{split} \label{eq:svar-model} \end{equation} \noindent where $P_j(\mathbf{W})$ is a degree $j$ polynomial of the (possibly directed) adjacency matrix $\mathbf{W}$ with coefficients $a_{jk}$ (see Fig.~\ref{fig:temporal} for an illustration). Clearly, this model admits the design of $\mathcal{F}(\mathcal{G})=P_j(\mathbf{W})$ and $\mathbf{c}=\mathbf{x}[t-j]$ for each time-lagged copy of the signal $\mathbf{x}[t]$. For temporal observations $\mathbf{X}= \big( \mathbf{x}[0]~\mathbf{x}[1]~\cdots~\mathbf{x}[M-1] \big)$, the authors have therefore proposed to solve the following optimization problem: \begin{equation} \min_{\mathbf{W},\mathbf{a}} ~ \frac{1}{2} \sum_{t=T}^{M-1} \Big\| \mathbf{x}[t] - \sum_{j=1}^T P_j(\mathbf{W}) \mathbf{x}[t-j] \Big\|_2^2 + \alpha~||\text{vec}(\mathbf{W})||_1 + \beta~||\mathbf{a}||_1, \label{eq:svar} \end{equation} where $\text{vec}(\mathbf{W})$ is the vectorized form of $\mathbf{W}$, $\mathbf{a} = \big( a_{10}~a_{11}~\cdots~a_{jk}~\cdots~a_{TT} \big)$ is a vector of all the polynomial coefficients $a_{jk}$, and the {entry-wise} $L^1$-norm is imposed on $\mathbf{W}$ and $\mathbf{a}$ for promoting sparsity. Due to the non-convexity introduced by the matrix polynomials, the problem in Eq.~(\ref{eq:svar}) is solved in three steps, by solving sequentially for $P_j(\mathbf{W})$, $\mathbf{W}$, and $\mathbf{a}$. In summary, in the SVAR model, the specific designs of $\mathcal{F}$ and $\mathbf{c}$ lead to a particular generative process of the observed signals on the learned graph.
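For intuition, the SVAR generative model of Eq.~(\ref{eq:svar-model}) can be simulated directly; the toy directed graph, the polynomial coefficients $a_{jk}$, and the horizon below are arbitrary illustrative choices, not values from the cited work.

```python
import numpy as np

# Simulate x[t] = n[t] + sum_j P_j(W) x[t-j], with P_j(W) a degree-j
# polynomial of a sparse directed adjacency matrix W.
rng = np.random.default_rng(3)
N, T, M = 5, 2, 200
W = np.triu(rng.random((N, N)) < 0.4, 1).astype(float)  # sparse directed adjacency
W /= max(1.0, np.linalg.norm(W, 2))                     # keep dynamics stable
a = {(1, 0): 0.1, (1, 1): 0.4,
     (2, 0): 0.0, (2, 1): 0.1, (2, 2): 0.2}             # toy coefficients a_jk

def P(j):                                               # P_j(W) = sum_k a_jk W^k
    return sum(a[(j, k)] * np.linalg.matrix_power(W, k) for k in range(j + 1))

X = [rng.standard_normal(N) for _ in range(T)]          # random initial samples
for t in range(T, M):
    X.append(sum(P(j) @ X[t - j] for j in range(1, T + 1))
             + 0.1 * rng.standard_normal(N))
X = np.array(X)
print(X.shape)
```

Fitting Eq.~(\ref{eq:svar}) to such observations amounts to inverting this simulation: recovering the sparse $\mathbf{W}$ and the coefficients $\mathbf{a}$ from $\mathbf{X}$ alone.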
Similar ideas can also be found in Granger causality or vector autoregressive models (VARMs) \cite{Roebroeck05,Goebel03}. \begin{figure}[t] \centering \includegraphics[width=16cm]{Fig9} \caption{A graph signal $\mathbf{x}$ at time step $t$ is modeled as a linear combination of its observations in the past $T$ time steps and a random noise process $\mathbf{n}[t]$.} \label{fig:temporal} \end{figure} Structural equation models (SEMs) are another popular approach for inferring directed graphs \cite{Kaplan09,Mclntosh94}. In the SEMs, the signal observation $\mathbf{x}$ at time step $t$ is modeled as: \begin{equation} \mathbf{x}[t] = \mathbf{W} \mathbf{x}[t] + \mathbf{E} \mathbf{y}[t] + \mathbf{n}[t], \label{eq:sem} \end{equation} where the first term in Eq.~(\ref{eq:sem}) consists of endogenous variables, which define the signal value at each variable as a linear combination of the values at its neighbors in the graph, and the second term represents exogenous variables $\mathbf{y}[t]$ with a coefficient matrix $\mathbf{E}$. The third term represents observation noise, similar to that in Eq.~(\ref{eq:svar-model}). The endogenous component of the signal implies a choice of $\mathcal{F}(\mathcal{G}) = \mathbf{W}$ (which can again be directed) and $\mathbf{c} = \mathbf{x}[t]$ and, similar to the SVAR model, enforces a certain generative process of the signal on the learned graph. {As we can see, causal dependencies on the graph, either between different components of the signal or between its present and past observations, can be conveniently modeled in a straightforward manner by choosing $\mathcal{F}(\mathcal{G})$ as a polynomial of the adjacency matrix of a directed graph and choosing the coefficients $\mathbf{c}$ as the present or past signal observations.
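Since Eq.~(\ref{eq:sem}) is linear in $\mathbf{x}[t]$, SEM-consistent signals can be generated by solving $(\mathbf{I} - \mathbf{W})\,\mathbf{x}[t] = \mathbf{E}\mathbf{y}[t] + \mathbf{n}[t]$. A small sketch with an arbitrary toy directed adjacency matrix (assumed here purely for illustration):

```python
import numpy as np

# Generate one SEM observation: x = W x + E y + n  =>  x = (I - W)^{-1} (E y + n).
rng = np.random.default_rng(4)
N = 4
W = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.0],
              [0.2, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.4, 0.0]])   # directed, asymmetric toy adjacency
E = np.eye(N)                           # exogenous inputs enter directly
y = rng.standard_normal(N)              # exogenous variables
n = 0.01 * rng.standard_normal(N)       # observation noise
x = np.linalg.solve(np.eye(N) - W, E @ y + n)

# Sanity check: the generated x satisfies the SEM identity.
assert np.allclose(x, W @ x + E @ y + n)
print(np.round(x, 3))
```

Solving requires $\mathbf{I} - \mathbf{W}$ to be invertible, which holds here because the spectral radius of the toy $\mathbf{W}$ is below one.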
As a consequence, methods in \cite{Mei17,Baingana17,Shen16} are all able to learn an asymmetric graph adjacency matrix, which is a potential advantage compared to methods based on the previous two models. Furthermore, the SEMs can be extended to track network topologies that evolve dynamically \cite{Baingana17} and deal with highly correlated data \cite{Traganitis17}, or combined with the SVAR model which leads to the structural vector autoregressive models (SVARMs) \cite{Chen11}. Interested readers are referred to \cite{Giannakis18} for a recent review of the related models. In these extensions of the classical models, the designs of $\mathcal{F}$ and $\mathbf{c}$ can be generalized accordingly to link the signal representation and the learned graph topology. Finally, as an overall comparison, the differences between methods that are based on the three models discussed in this review are summarized in Table~\ref{tab:comparison}.} \begin{table}[t] \centering \caption{Comparison between different GSP-based approaches to graph learning.} \label{tab:comparison} \scalebox{0.8}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Signal Model}} & \multicolumn{2}{c|}{\textbf{Assumption}} & \multirow{2}{*}{\textbf{Learning Output}} & \multirow{2}{*}{\textbf{Edge Directionality}} \\ \cline{3-4} & & $\mathcal{F}(\mathcal{G})$ & $\textbf{c}$ & & \\ \hline Dong et al. \cite{Dong16} & Global Smoothness & \begin{tabular}[c]{@{}c@{}}Eigenvector \\ Matrix\end{tabular} & Gaussian & Laplacian & Undirected \\ \hline Kalofolias et al. \cite{Kalofolias16} & Global Smoothness & \begin{tabular}[c]{@{}c@{}}Eigenvector \\ Matrix\end{tabular} & Gaussian & Adjacency Matrix & Undirected \\ \hline Egilmez et al. \cite{Egilmez17} & Global Smoothness & \begin{tabular}[c]{@{}c@{}}Eigenvector \\ Matrix\end{tabular} & Gaussian & Generalized Laplacian & Undirected \\ \hline Chepuri et al. 
\cite{Chepuri17} & Global Smoothness & \begin{tabular}[c]{@{}c@{}}Eigenvector \\ Matrix\end{tabular} & Gaussian & Adjacency Matrix & Undirected \\ \hline Pasdeloup et al. \cite{Pasdeloup18} & \begin{tabular}[c]{@{}c@{}}Spectral Filtering \\ (Diffusion by Adjacency)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Normalized \\ Adjacency Matrix\end{tabular} & IID Gaussian & \begin{tabular}[c]{@{}c@{}}Normalized Adjacency Matrix\\ Normalized Laplacian\end{tabular} & Undirected \\ \hline Segarra et al. \cite{Segarra17a} & \begin{tabular}[c]{@{}c@{}}Spectral Filtering \\ (Diffusion by Graph Shift Operator)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Graph Shift \\ Operator\end{tabular} & IID Gaussian & Graph Shift Operator & Undirected \\ \hline Thanou et al. \cite{Thanou17} & \begin{tabular}[c]{@{}c@{}}Spectral Filtering \\ (Heat diffusion)\end{tabular} & Heat Kernel & Sparsity & Laplacian & Undirected \\ \hline Mei and Moura \cite{Mei17} & \begin{tabular}[c]{@{}c@{}}Causal Dependency \\ (SVAR)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Polynomials of \\ Adjacency Matrix\end{tabular} & Past Signals & Adjacency Matrix & Directed \\ \hline Baingana et al. \cite{Baingana17} & \begin{tabular}[c]{@{}c@{}}Causal Dependency \\ (SEM)\end{tabular} & Adjacency Matrix & Present Signal & \begin{tabular}[c]{@{}c@{}}Time-Varying \\ Adjacency Matrix\end{tabular} & Directed \\ \hline Shen et al. \cite{Shen16} & \begin{tabular}[c]{@{}c@{}}Causal Dependency \\ (SVARM)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Polynomials of \\ Adjacency Matrix\end{tabular} & \begin{tabular}[c]{@{}c@{}}Past and \\ Present Signals\end{tabular} & Adjacency Matrix & Directed \\ \hline \end{tabular}} \end{table} \subsection{Connections with the broader literature} We have seen that GSP-based approaches can be unified by the viewpoint of learning graph topologies that enforce desirable representations of the signals on the learned graph.
This offers a new interpretation of the traditional statistical and physically-motivated models. First, as a typical example of approaches for learning graphical models, the graphical Lasso solves the optimization problem of Eq.~(\ref{eq:gLasso}) in which the trace term $\mathrm{tr}(\widehat{\boldsymbol{\Sigma}} \mathbf{\Theta}) = \frac{1}{M-1}\mathrm{tr} (\mathbf{X}^T \mathbf{\Theta} \mathbf{X})$ bears similarity to the Laplacian quadratic form $\mathcal{Q}(\mathbf{L})$ and the trace term in the problem of Eq.~(\ref{eq:smooth}), when the precision matrix $\mathbf{\Theta}$ is chosen to be the graph Laplacian $\mathbf{L}$. This is the case for the approach in \cite{Lake10}, which has proposed to consider $\mathbf{\Theta} = \mathbf{L} + \frac{1}{\sigma^2}\mathbf{I}$ (see Eq.~(\ref{eq:lake})) as a regularized Laplacian to fit into the formulation of Eq.~(\ref{eq:gLasso}). The graphical Lasso approach can therefore be interpreted as one that promotes global smoothness of the signals on the learned topology. Second, models based on spectral filtering and causal dependencies on graphs can generally be thought of as ones that define generative processes of the observed signals, in particular diffusion processes on the graph. This is achieved either explicitly by choosing $\mathcal{F}(\mathcal{G})$ as diffusion matrices, as in Section~\ref{sec:filtering}, or implicitly by defining the causal processes of signal generation, as in Section~\ref{sec:causal}. Both types of models share a similar philosophy with the ones developed from a physics viewpoint in Section~\ref{sec:physics}, in that they all propose to infer the graph topologies by modeling signals as outcomes of physical processes on the graph, especially diffusion and cascading processes. It is also interesting to notice that certain models can be interpreted from all three viewpoints, an example being the global smoothness model.
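The correspondence between the trace term and the Laplacian quadratic form can be checked numerically: for $\mathbf{\Theta} = \mathbf{L}$, the trace term (up to the $\frac{1}{M-1}$ factor) equals the edge-wise sum of squared signal differences, weighted by the edge weights. A toy verification (random graph and signals):

```python
import numpy as np

# Check tr(X^T L X) = 0.5 * sum_{i,j} w_ij * ||X[i] - X[j]||^2
# for L = D - W with symmetric non-negative weights W.
rng = np.random.default_rng(5)
N, M = 6, 3
W = rng.random((N, N)); W = np.triu(W, 1); W = W + W.T  # random symmetric weights
L = np.diag(W.sum(1)) - W                               # combinatorial Laplacian
X = rng.standard_normal((N, M))                         # M graph signals as columns

trace_term = np.trace(X.T @ L @ X)
quad_form = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                      for i in range(N) for j in range(N))
print(trace_term, quad_form)                            # identical up to rounding
```

This identity is exactly why minimizing the trace term with $\mathbf{\Theta}$ restricted to a Laplacian favors signals that vary little across strongly weighted edges.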
Indeed, in addition to the statistical and GSP perspectives described above, the property of global smoothness can also be observed in a square-lattice Ising model \cite{Cipra87}, hence admitting a physical interpretation. {Despite the connections with traditional approaches, however,} GSP-based approaches offer some unique advantages compared to the classical methods. On the one hand, the flexibility in designing the function $\mathcal{F}(\mathcal{G})$ allows for statistical properties of the observed signals that are not limited to a Gaussian distribution, which is, however, the predominant choice in many statistical machine learning methods. On the other hand, this also makes it easier to consider models that go beyond a simple diffusion or cascade model. For example, by the sparsity assumption on the coefficients $\mathbf{c}$, the method in \cite{Thanou17} defines the signals as the outcomes of possibly more than one diffusion process, originating at different parts of the graph and observed after possibly different numbers of time steps. Similarly, by choosing different $\mathcal{F}(\mathcal{G})$ and $\mathbf{c}$, the SVAR models \cite{Mei17} and the SEMs \cite{Baingana17} correspond to different generative processes of the signals, one based on temporal dynamics and the other on the static network structure. This design flexibility provides more powerful modeling of the signal representation for the graph inference process. \section{Applications of GSP-based graph learning methods} The field of GSP is strongly motivated by a wide range of applications where there exist inherent structures behind data observations. Naturally, GSP-based graph learning methods are appealing in areas where learning hidden structures behind data has been of constant interest. In particular, the emphasis on the modeling of the signal representation within the learning process has made them increasingly popular in a growing number of applications.
Currently, these methods mainly find applications in image coding and compression, brain signal analysis, and a few other diverse areas, {as described briefly below.} \subsection{Image coding and compression} Image representation and coding have been one of the main areas of interest for GSP-based methods. Images can be naturally thought of as graph signals defined on a regular grid structure, where the nodes are the image pixels and the edge weights capture the similarity between adjacent pixels. The design of new flexible graph signal representations has opened the door to new structure-aware transform coding techniques, and eventually to more efficient image compression frameworks \cite{Cheung18}. Such representations make it possible to go beyond traditional transform coding by moving from classical fixed transforms such as the discrete cosine transform (DCT) to graph-based transforms that are better adapted to the actual image structure. The design of the graph and the corresponding transform remains, however, one of the biggest challenges in graph-based image compression. A suitable graph for effective transform coding should lead to easily compressible signal coefficients, at the cost of a small overhead for coding the graph. Most graph-based coding techniques focus mainly on images, and they construct the graph by considering pairwise similarities among pixel intensities. A few attempts at adapting the graph topology and consequently the graph transform exist in the literature, as for example in \cite{Hu15Compression,Rotondo15}. However, they rely on the selection from a set of representative graph templates, without being fully adapted to the image signals.
\begin{figure}[t] \centering \subfloat[] {\includegraphics[width=12.6cm]{Fig10a} \label{compression-example-image}}\\ \subfloat[] { \includegraphics[width=7cm]{Fig10b} \label{fig:GFT-decay}} \subfloat[] { \includegraphics[width=7cm]{Fig10c} \label{fig:GFT_image_compression}} \caption{{Inferring a graph for image coding: (a) The graph learned on a random patch of the image Teddy using \cite{Fracastoro2017}. (b) Comparison between the GFT coefficients of the image signal on the learned graph and the four nearest neighbor grid graph. The coefficients are ordered decreasingly by log-magnitude. (c) The GFT coefficients of the graph weights.}} \label{fig:compression-example} \end{figure} Graph learning has been introduced only recently for this type of problem. A learning model based on signal smoothness, inspired by \cite{Dong16,Kalofolias17}, has been further extended in order to design a graph-based coding framework that takes into account the coding of the signal values as well as the cost of transmitting the graph in rate distortion terms \cite{Fracastoro2017}. In particular, the cost of coding the image signal is minimized by promoting its smoothness on the learned topology. The transmission cost of the graph itself is further controlled by adding an additional term in the optimization problem which promotes the sparsity of the graph Fourier coefficients of the edge weight signal. {An illustrative example of the graph-based transform coding proposed in \cite{Fracastoro2017}, as well as its application to image compression, is shown in Fig.~\ref{fig:compression-example}. Briefly, the compression algorithm consists of three important parts. First, the solution to an optimization problem that takes into account the rate approximation of the image signal at a patch level, as well as the cost of transmitting the graph, provides a graph topology (Fig.~\ref{fig:compression-example}(a)) that defines the optimal coding strategy.
Second, the GFT coefficients of the image signal on the learned graph can be used to compress the image efficiently. As we can see in Fig.~\ref{fig:compression-example}(b), the decay of these coefficients (in terms of their log-magnitude) is much faster than the decay of the GFT coefficients corresponding to a regular grid graph that does not involve any learning. Third, the weights of the learned graph are treated as a new edge weight signal that lies on a dual graph: each node of the dual graph represents an edge of the learned graph, and the signal value on that node is the corresponding edge weight. Two nodes are connected in this dual graph if and only if the two corresponding edges share one common node in the learned graph. The learned graph is then transmitted by the GFT coefficients of this edge weight signal, where the decay of these coefficients is shown in Fig.~\ref{fig:compression-example}(c). The obtained results confirm that the GFT coefficients of the graph weights are concentrated on the low frequencies, which indicates a highly compressible graph.} {Another example is the work in \cite{Lu2017} that introduces an efficient graph learning approach for fast graph Fourier transform that is based on \cite{Egilmez17}. The authors have considered a maximum likelihood estimation problem with additional constraints based on a matrix factorization of the graph Laplacian matrix, such that its eigenvector matrix is a product of a block diagonal matrix and a butterfly-like matrix. The learned graph leads to a fast non-separable transform for intra predictive residual blocks in video compression. Such efforts confirm that learning a meaningful graph can have a significant impact in graph-based image compression. These are only first attempts, which leave much room for improvement, especially in terms of coding performance.
Thus, we expect to see more research efforts in the future to fully exploit the potential of graph methods.} \subsection{Brain signal analysis} GSP has been shown to be a promising and powerful framework for brain network data, mainly due to the potential to jointly model the brain structure through the graph and the brain activity as a signal residing on the nodes of the graph. The overview paper \cite{Huang18} provides a summary of how a graph signal processing view on brain signals can provide additional insights into the functionality of the brain. Graph learning in particular has been successfully applied for inferring the structural and functional connectivity of the brain related to different diseases or external stimuli. For example, the work in \cite{Hu15} has introduced a graph regression model for learning brain \emph{structural} connectivity of patients with Alzheimer's disease, which is based on the signal smoothness model discussed in Section \ref{sec:smoothness}. A similar framework \cite{Liu2018}, extended to noisy settings, has been applied to a set of magnetoencephalography (MEG) signals to capture the brain activity under two categories of visual stimuli (i.e., the subject viewing face or non-face images). In addition to the smoothness assumption, the proposed framework is based on the assumption that the perturbation on the low-rank components of the noisy signals is sparse. The recovered \emph{functional} connectivity graphs under these assumptions are compatible with findings in the neuroscientific literature, which is a promising result indicating that graph learning can contribute to this application domain. Instead of the smoothness model adopted in \cite{Hu15,Liu2018}, the authors in \cite{Shen16} have utilized models based on causal dependencies and proposed to infer \emph{effective} connectivity networks of brain regions that may shed light on the understanding of the cause behind epilepsy.
The signals that they use are electrocorticography (ECoG) time series data recorded before and after the ictal onset of epileptic seizures. All these applications show the potential impact GSP-based graph learning methods may have on brain and, more generally, biomedical data analysis, where the inference of hidden functional connections can be crucial. \subsection{Other application domains} In addition to image processing and biomedical analysis, GSP-based graph learning methods have been applied to a number of other diverse areas. One notable example is meteorology, where it is of interest to understand the relationship between different locations based on the temperatures recorded at the weather stations in these locations. Interestingly, this is an area where all three major signal models introduced in this tutorial may be employed to learn graphs that lead to different insights. For instance, the authors of \cite{Dong16,Chepuri17} have proposed to learn a network of weather stations using the signal smoothness model, which essentially captures the relationship between these stations in terms of their altitude. Alternatively, the work in \cite{Pasdeloup18} has adopted the heat diffusion model in which the evolution of temperatures in different regions is modeled as a diffusion process on the learned geographical graph. The authors of \cite{Mei17} have further developed a framework based on causal dependencies to infer a directed temperature propagation network that is consistent with major wind directions over the United States. We note, however, that most of these studies are proofs of concept, and future research is expected to focus more on the perspective of practical applications in meteorology. Another area of interest is environmental monitoring. As an example, the author of \cite{Jablonski17} has proposed to apply the GSP-based graph learning framework of \cite{Kalofolias17} for the analysis of exemplary environmental data of ozone concentration in Poland.
More specifically, the paper has proposed to learn a network that reflects the relationship between different regions in terms of ozone concentration. Such relationship may be understood in a dynamic fashion using data from different temporal periods. Similarly, the work in \cite{Dong16} has analyzed evapotranspiration data collected in California to understand the relationship between regions with different geological features. Finally, GSP-based methods have also been applied to infer graphs that reveal urban traffic flows \cite{Thanou17}, patterns of news propagation on the Internet \cite{Baingana17}, inter-region political relationships \cite{Dong16}, similarity between animal species \cite{Egilmez17}, and ontologies of concepts \cite{Lake10}. The diversity of these areas has demonstrated the potential of applying GSP-based graph learning methods for understanding hidden relationships behind data observations in real-world applications. \section{Concluding remarks and future directions} Learning structures and graphs from data observations is an important problem in modern data analytics, and the novel signal processing approaches reviewed in this paper have both theoretical and practical significance. On the one hand, GSP provides a new theoretical framework for graph learning by utilizing signal processing tools, with a strong emphasis on the representation of the signals on the learned graph, which can be essential from a modeling viewpoint. As a result, the novel approaches developed in this field would benefit not only the inference of optimal graph topologies, but potentially also the subsequent signal processing and data analysis tasks. On the other hand, the novel signal and graph models designed from a GSP perspective may contribute uniquely to the understanding of the often complex data structure and generative processes of the observations made in real-world application domains, such as brain and social network analysis.
For these reasons, GSP-based approaches for graph learning have recently attracted an increasing amount of interest; there exist, however, many open issues and questions that are worthy of further investigation. In what follows, we {discuss five general directions for future work.} \subsection{Input signals of learning frameworks} \label{sec:input} The first important point that needs further investigation is the quality of the input signals. Most of the approaches in the literature have focused on the scenario where a complete set of data is observed for all the entities of interest (i.e., at all vertices in the graph). However, there are often situations when observations are only partially available, either due to failures in data acquisition from some sensors or simply because of the cost of making full observations. For example, in large-scale social, biomedical or environmental networks, sampling or active learning may need to be applied to select a limited number of sensors for observations \cite{Gadde14}. It is a challenge to design graph learning approaches that can handle such cases, and to study the extent to which the partial or missing observations affect the learning performance. Another scenario is dealing with sequential input data that come in an online and adaptive fashion, which has been studied in the recent work of \cite{Vlaski18}. \subsection{Outcome of learning frameworks} \label{sec:output} Compared to the input signals, it is perhaps even more important to rethink the potential outcome of the learning frameworks. Several important lines of thought remain largely unexplored in the current literature. First, while most of the existing work focuses on learning undirected graphs, it is certainly of interest to investigate approaches for learning directed ones.
Methods described in Section~\ref{sec:causal}, such as \cite{Mei17,Baingana17,Shen16}, are able to achieve this since they do not explicitly rely on the notion of frequency provided by the eigendecomposition of the symmetric graph adjacency or Laplacian matrices. However, it is certainly possible and desirable to extend the frequency interpretation obtained with undirected graphs to the case of directed ones. For example, alternative definitions of frequencies of graph signals have been recently proposed based on the normalization of the random walk Laplacian \cite{Mhaskar18}, a novel definition of the inner product of graph signals \cite{Girault18}, and an explicit optimization for an orthonormal basis on graphs \cite{Sardellitti17,Shafipour18b}. {How to design techniques that learn directed graphs by making use of these new developments in the frequency interpretation of graph signals remains an interesting question.} Second, in many real-world applications, notably social network interactions and brain functional connectivity, the network structure changes over time. It is therefore interesting to look into learning frameworks that can infer dynamic graph topologies. To this end, \cite{Baingana17} proposes a method to track a network structure that switches between a number of different states. Alternatively, \cite{Kalofolias17} has proposed to infer dynamic networks from observations within different time windows, with a penalty term imposed on the similarity between consecutive networks to be inferred. Such a notion of temporal smoothness is certainly an interesting question to study, which may draw inspiration from visualizations of dynamic networks recently proposed in \cite{DalCol17}.
Third, although the current lines of work reviewed in this survey mainly focus on the signal representation, it is also possible to put constraints directly on the learned graphs by enforcing certain graph properties that go beyond the common choice of sparsity, which has been adopted explicitly in the optimization problems in many existing methods such as the ones in \cite{Friedman08,Lake10,Chepuri17,Pasdeloup18,Segarra17a,Mei17,Baingana17}. One example is the work in \cite{Pavez18}, where the authors have proposed to infer graphs with monotone topology properties. Another example is the approach in \cite{Sudin17} which learns a sparse graph with connected components. Learning graphs with desirable properties inspired by a specific application domain (e.g., community detection \cite{Fortunato10}) can also have great potential benefit, and it is a topic worth investigating. Fourth, in some applications it might not be necessary to learn the full graph topology, but some other intermediate or graph-related representations. For example, this can be an embedding of the vertices in the graph for the purpose of clustering \cite{Dong14}, or a function of the graph such as graph filters for the subsequent signal processing tasks \cite{Segarra17b}. Another possibility is to learn graph properties such as the eigenvalues (for example using technique described in \cite{Pasdeloup18}) or degree distribution, or templates that constitute local regions of the graph. Similar to the previous point, in these scenarios, the learning framework needs to be designed accordingly with the end objective or application in mind. Finally, instead of learning a deterministic graph structure as in most existing methods, it would be interesting to explore the possibility of learning graphs in a probabilistic fashion in which we specify the confidence in building an edge between each pair of the vertices. 
This would benefit situations in which a soft decision is preferred to a hard decision, possibly due to anticipated measurement errors in the observations or other constraints. \subsection{Signal models} Throughout this tutorial, we have emphasized the important role a properly defined signal model plays in the design of the graph learning framework. The current literature predominantly focuses on either the globally or locally smooth models. Other models such as bandlimited signals, i.e., the ones that have limited support in the graph spectral domain, may also be considered for inferring graph topologies \cite{Sardellitti16}. More generally, flexible signal models that go beyond the smoothness-based criteria can be designed by taking into account general filtering operations of signals on the graph. The learning framework may also need to adapt to the specific input and output as outlined in Section~\ref{sec:input} and Section~\ref{sec:output}. For instance, given only partially available observations, it might make sense to consider a signal model tailored for the observed, instead of the whole, region of the graph. Another scenario would be that, in learning dynamic graph topologies, the signal model employed needs to be consistent with the temporal smoothness criteria adopted to learn the sequence of graphs. \subsection{Performance guarantees} Graph inference is an inherently difficult problem given the large number of unknown variables (generally on the order of $N^2$) and the relatively small amount of observations. As a result, learning algorithms need to be designed with additional assumptions or priors. In this case, it is desirable to have theoretical guarantees on the performance of graph recovery under the specific model and prior. It would also be interesting to put the errors in graph recovery into the context of the subsequent data processing tasks and study their impact.
Furthermore, for many graph learning algorithms, in addition to the empirical performance it is necessary to provide convergence guarantees when alternating minimization is employed, as well as to study the computational complexity that can be essential for learning large-scale graphs. These theoretical considerations remain largely unexplored in the current literature and hence require much further investigation, given their importance. \subsection{Objective of graph learning} {The final comment on future work is a reflection on the objective of the graph learning problem and, in particular, on how to better integrate the inference framework with the subsequent data analysis tasks. Clearly, the learned graph may be readily used for classical machine learning tasks such as clustering or semi-supervised learning, but it may also directly benefit the processing and analysis of the graph signals. In this setting, it is often the case that a cost related to the application is directly incorporated into the optimization for graph learning. For instance, the work in \cite{Yankelevsky16} has proposed a method for inferring graph topologies with a joint goal of dictionary learning, whose cost function is incorporated into the optimization problem. In many applications, such as image coding, accuracy is not the only interesting performance metric. Typically, there exist different trade-offs that are more complex and should be taken into consideration. For example, in image compression, the actual cost of coding the graph is at least as important as the cost of coding the image signal. Such constraints are indicated by the application, and they should be incorporated into the graph learning framework (e.g., \cite{Fracastoro2017}) in order to make the learning framework more targeted to a specific application.} \section{Acknowledgements} The authors would like to thank Giulia Fracastoro for her help with preparing Fig.~\ref{fig:compression-example}.
\bibliographystyle{IEEEtran}
{ "timestamp": "2019-05-21T02:32:44", "yymm": "1806", "arxiv_id": "1806.00848", "language": "en", "url": "https://arxiv.org/abs/1806.00848" }
\section{Introduction} A gamma-ray line signal, if robustly detected, would be deemed a smoking-gun signature of particle dark matter (DM), since no known astrophysical process can generate such a specific spectrum, while a line signal can in principle arise from the direct annihilation of DM particles into gamma-rays (i.e., $\chi\chi\rightarrow\gamma\gamma, \gamma{Z}$ or $\gamma{H}$). It may be captured if the annihilation cross section is large enough that the line signal exceeds the detection sensitivities of gamma-ray telescopes such as Fermi-LAT \cite{atwood09LAT} and DAMPE \cite{dampe}. Great efforts have been made to hunt for such a signal, but none has been conclusively detected so far \cite{pullen07EGRETline,fermi10line1,fermi12line2,bringmann12_130gev,weniger12_130gev,geringer12dsphLine,tempel12_130gev,huang12_130gev,su2012line,fermi13line,Hektor2013gcline,albert14line,fermi15line,anderson16gclsLine,liang16gclsLine,Profumo2016line,liang16dsphLine,Liang17line3}. Some tentative evidence of line signals has been suggested in the literature. By analyzing an optimized region around the Galactic center, a line-like excess at 130 GeV was found in 4 years of Fermi-LAT Pass 7 data \cite{bringmann12_130gev, weniger12_130gev}. This signal was also reported with lower significance in searches of galaxy clusters \cite{Hektor2013gcline}. However, an analysis by the Fermi-LAT collaboration using 5.8 years of Pass 8 data did not confirm this signal \cite{fermi15line}. More recently, a tentative line-like excess at 42.7 GeV was found in the stacked spectrum of 16 nearby galaxy clusters, with a global significance of only $\sim3.0\,\sigma$ \cite{liang16gclsLine,fl16}. The most promising site in which one might observe a line signal is the region around the center of our Milky Way.
Besides the Galactic center (GC), other regions that may produce considerable line signals include dwarf spheroidal galaxies (dSphs) \cite{geringer12dsphLine,liang16dsphLine}, DM subhalos \cite{tempel12_130gev,Liang17line3} and galaxy clusters \cite{Hektor2013gcline,anderson16gclsLine,liang16gclsLine}. In this work, we do not examine specific objects/regions, but instead perform blind searches for line signals over the whole sky using the Fermi-LAT data. We aim to identify regions with relatively high test statistic (TS) values. Due to the very large trial factor introduced in such an analysis, none of the weak excesses can be identified as a real signal. Nevertheless, our search results may be taken as a list of regions that are worth further attention, since the possibility that a few of them are DM line signals from subhalos cannot be ruled out. We also set limits on the DM properties utilizing the number of line-like excesses (see Sec. \ref{sec:limits}). \section{Fermi-LAT data and line signal search} \label{sec2} In this work, we use the Fermi-LAT data to perform the searches\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/}}. We search for signals with line energies from 5 GeV to 300 GeV, and thus take into account the Fermi-LAT data in the energy range of 1 GeV to 500 GeV to address the energy dispersion of the instrument. The time period of the data is from Aug. 4th, 2008 to Aug. 4th, 2017 (corresponding to MET 239557417-523497605). We apply the recommended zenith angle cut ($\theta_{\rm zenith}<90^{\circ}$) and data quality cut ({\tt DATA\_QUAL==1 \&\& LAT\_CONFIG==1}) to avoid contamination from Earth limb emission and to guarantee that the data are suitable for science use. To reduce the contamination from residual cosmic rays in the LAT data, and also to be consistent with our previous works \cite{liang16gclsLine,liang16dsphLine,Liang17line3}, we make use of the {\tt ULTRACLEAN} data.
To achieve a better energy resolution, we exclude the {\tt EDISP0} data in our analysis (${\tt evtype=896}$). We use the {\tt Fermi Science Tools} of version {\tt v10r0p5} for the data selection and the exposure calculation. To search for line signals in the whole sky, we select a total of 49152 ROIs, each with a radius of 2 degrees. The centers of the ROIs correspond to the {\tt HEALPix} \cite{healpix05} coordinate list with {$\tt nside = 64$}. Such a strategy ensures that the whole sky is covered by our ROI sample. Assuming a point-like spatial distribution for the line signal, the $2^\circ$ radius also ensures that most line signal photons are included in the ROI even if the signal is located at the edge of a {\tt HEALPix} pixel, considering that the point spread function (PSF) of Fermi-LAT is smaller than $1^\circ$ for $>5\,{\rm GeV}$ data \cite{fermi12cal} and the radius of a pixel is roughly 0.5 degrees\footnote{The shape of a {\tt HEALPix} pixel is in fact not a circle; the radius here is an estimate derived from the solid angle of each pixel.}. In each ROI, the sliding window technique \cite{pullen07EGRETline,weniger12_130gev,liang16gclsLine} is adopted to perform the search. For each putative line with energy $E_{\gamma}$, we perform unbinned likelihood fittings in a narrow window of ($E_{\gamma}-0.5E_{\gamma}$, $E_{\gamma}+0.5E_{\gamma}$). The test statistic (TS) is obtained by comparing the likelihoods of the null model (no line signal) and the signal model. We approximate the null model by a power-law function. Considering that the background, which mixes all astrophysical components, should be spectrally smooth and continuous, the power-law approximation is reasonable since we are using a very narrow energy window. For the signal model, we adopt the form of a line component ($\delta(E-E_\gamma)$) superposed on the power-law background. The line component is convolved with the energy dispersion function of the data.
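The ROI count and pixel radius quoted above follow directly from the HEALPix geometry, where the number of pixels is $12\times nside^2$. The following is a minimal Python sketch (no healpy dependency; the circular-equivalent radius mirrors the estimate described in the footnote, and the function name is illustrative, not part of any released analysis code):

```python
import math

def healpix_roi_geometry(nside):
    """Number of HEALPix pixels and the radius (deg) of a circle
    subtending the same solid angle as a single pixel."""
    npix = 12 * nside**2                       # HEALPix pixel count
    omega = 4.0 * math.pi / npix               # solid angle per pixel (sr)
    radius_deg = math.degrees(math.sqrt(omega / math.pi))
    return npix, radius_deg

# nside = 64 gives the 49152 ROIs used in the text, each pixel
# having a circular-equivalent radius of roughly 0.5 degrees.
npix, radius = healpix_roi_geometry(64)
```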
The method of searching for line signals in the Fermi-LAT data has been extensively introduced in Refs.~\cite{weniger12_130gev,fermi13line,fermi15line,liang16gclsLine}. We refer readers to these works for details. \section{Results} Adopting the aforementioned approach, we searched all 49152 ROIs for line signals. We summarize our search results in this section. Figure \ref{fig:tsdistribution} presents the $TS_{\rm max}$ distribution over all the ROIs. The $TS_{\rm max}$ denotes the maximum TS value among a series of attempted line energies\footnote{Explicitly, 110 $E_\gamma$ in the range of 5$-$300$\,{\rm GeV}$.} in each ROI. As expected, most of the ROIs give relatively low TS values ($TS_{\rm max}<9$ for 94\% of the ROIs). Theoretically, the $TS_{\rm max}$ for background-only data should follow a trial-corrected $\chi^2$ distribution \cite{weniger12_130gev,liang16gclsLine}. Fitting our results with this distribution gives $\chi^2_{\rm red}=202.5/58$, indicating that the best fit cannot match the data well. A considerable discrepancy between the best-fit curve and the TS distribution is clearly seen around $TS_{\rm max}\sim4-5$. To check whether the deviation is an artifact of our analysis method, we have performed some tests in Appendix \ref{app:test}. Using the same fitting code to analyze MC simulation data, we obtain results well consistent with the theoretical predictions. We therefore conclude that the deviation is not related to our fitting method. It may come from the non-Poissonian background of the real events, due to systematics related to instrument measurements or induced by the approximation of a power-law background in each energy window \cite{fermi15line}. However, since the tails of the curve match the histogram relatively well, it is still reasonable to use the best-fit function to approximate the null distribution (i.e., the distribution for background-only data) for large $TS_{\rm max}$.
\begin{figure}[!htb] \includegraphics[width=0.9\columnwidth]{tsdistr.pdf} \caption{The $TS_{\rm max}$ distribution of the 49152 searched ROIs. The dashed line is the best-fit trial-corrected $\chi^2$ distribution. The inset shows the same distribution with a logarithmic y-axis to better display the tails of the distribution.} \label{fig:tsdistribution} \end{figure} In our all-sky searches, no signal is found with a TS value greater than 25 ($TS=25$ corresponds to a local significance of $5\,\sigma$). The most significant line-like excess appears in the ROI centered on ($l=182.81$, $b=-15.09$), with a TS value of 24.3. The corresponding line energy is 74.9 GeV. The observed spectrum of this ROI can be found in Appendix \ref{spectrum}. Evidence of an excess is clearly seen for this ROI, indicating that our search strategy can effectively identify such a signal in the spectrum. In total, 50 ROIs give $TS_{\rm max}>16$ and 2953 give $TS_{\rm max}>9$. The 50 ROIs with the highest TS values are of particular interest to us, because they have a higher probability of arising from a real signal rather than from background fluctuations. We plot their positions in the sky in Figure \ref{fig:dis}. Though these regions have $TS_{\rm max}>16$, their global significances are intrinsically very low. The reason is that we have searched a large number of ROIs, and for each ROI a series of line energies, so an extremely large trial factor is introduced when converting the TS values to global significances. Utilizing the null distribution in Figure \ref{fig:tsdistribution} and attributing 49152 trials to the scan over multiple ROIs, the derived global significances are 0.54$\sigma$ and 0.11$\sigma$ for the first two ROIs with $TS_{\rm max}=24.3$ and $22.4$, respectively, and $<0.1\sigma$ for all other ROIs. These weak line-like excesses are most likely due to statistical fluctuations.
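The mechanics of the trial correction can be illustrated as follows: if $p_{\rm loc}$ is the chance probability of a given TS in a single search, the probability of exceeding it anywhere among $N$ independent searches is $1-(1-p_{\rm loc})^N$. The sketch below uses the raw $\chi^2_1$ tail for $p_{\rm loc}$ and is purely illustrative; the paper instead uses its fitted null distribution, which already absorbs the per-ROI energy trials, so the numbers produced here differ from the quoted $0.54\sigma$:

```python
import math

def chi2_1dof_sf(ts):
    """Survival function of a chi-square distribution with 1 dof."""
    return math.erfc(math.sqrt(ts / 2.0))

def global_p_value(ts, n_trials):
    """Sidak-style trial correction over n_trials independent searches.
    Illustrative only: assumes the raw chi^2_1 tail as the local p-value."""
    p_local = chi2_1dof_sf(ts)
    return 1.0 - (1.0 - p_local)**n_trials

# Local p-value for TS = 25 is below 1e-6 (the ~5 sigma level); after
# 49152 trials a TS of 24.3 is no longer rare.
p_glob = global_p_value(24.3, 49152)
```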
In view of the great importance of line signals, these regions still deserve further attention. If the same excesses were found in some of these regions by analyzing data from other gamma-ray telescopes, a statistical origin would be disfavored. We thus present the information on the ROIs with $TS_{\rm max}>16$ in Appendix \ref{list}; they can be treated as a list of potential line-signal regions for later studies. \begin{figure*}[!t] \includegraphics[width=0.9\textwidth]{ff.pdf} \caption{ROIs showing line-like excesses with $\rm TS >16$ overlaid on a Hammer-Aitoff projection of the Fermi-LAT counts map ($E>1\,{\rm GeV}$).} \label{fig:dis} \end{figure*} \section{Searching for counterparts of the weak line-like excesses} \label{sec4} In addition to waiting for observations from other instruments, we have also attempted to test the possible DM origin of these excesses by searching for their counterparts. The dark matter particles that generate the gamma-ray lines may simultaneously annihilate into other Standard Model particles (e.g., $b\bar{b}$, $\tau^+\tau^-$) \cite{Lefranc:2016fgn}, which could yield continuum gamma-ray emission at lower energies. This model-independent gamma-ray emission provides a way to test the possible DM origin of the line-like excesses. Specifically, for a given excess, if we could detect another gamma-ray component in the ROI whose spectrum and spatial distribution are compatible with DM continuum emission, it would be strong evidence that both the line and the continuum emission come from DM annihilation within a given subhalo. For this reason, we analyze the unassociated point sources within the ROIs with $TS_{\rm max}>16$. Due to the complicated gamma-ray backgrounds in the Galactic plane, we ignore the ROIs with latitudes $|b|<10^\circ$. In total, 13 unassociated point sources in FL8Y\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/fl8y/}} are found.
We apply the standard likelihood analysis of Fermi-LAT data to these unassociated sources in the energy range from 300 MeV to 300 GeV. The unassociated point sources are modeled with both the spectrum of DM annihilation\footnote{The DM spectra are implemented with DMFitFunction: \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html}} and the spectral function adopted in FL8Y. For the DM model, we consider the annihilation channels $b\bar{b}$ and $\tau^+\tau^-$, and the DM mass is fixed to the $E_\gamma$ giving the largest TS value in each ROI. The log-likelihood difference between these two models, $\Delta\ln{\mathcal L}=\ln{{\mathcal L}_{\rm DM}}-\ln{{\mathcal L}_{\rm FL8Y}}$, is used to determine whether a DM annihilation hypothesis is favored. Our analyses show that only 1 of the 13 sources, FL8Y J1656.4-0410, marginally favors the DM spectrum over the spectral model used in FL8Y. Its $\Delta\ln{\mathcal L}=7.0$ corresponds to a local significance of $<4\sigma$, which does not offer evidence of a DM signal from a subhalo. Besides, if one of the line-like excesses in Table \ref{tb:50roi} is a real DM signal, it could come from any of the channels $\chi\chi\rightarrow\gamma\gamma$, $\gamma{Z}$ or $\gamma{H}$, and it is then possible that the dark matter particles also annihilate through another of these channels \cite{su2012line}. For example, assuming the first line signal is from $\chi\chi\rightarrow\gamma\gamma$, the second line would be located at $E'_\gamma=m_\chi(1-m_X^2/4m_\chi^2)$, where $X$ could be $Z$ or $h$. If the second line were found with high significance, it would also indicate that the corresponding line-like excess in Table \ref{tb:50roi} is a real DM signal. Thus, for the 50 ROIs, we calculate the TS values of the second line signals at the corresponding energies. The largest TS values for the second lines are listed in Table \ref{tb:50roi} as well.
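The second-line energy above is simple two-body kinematics and can be evaluated directly. A minimal sketch (the helper name is illustrative, and only $m_Z\approx91.19$ GeV is assumed beyond the text):

```python
def second_line_energy(m_chi, m_x):
    """Photon energy (GeV) in chi chi -> gamma X for a partner of mass m_x,
    following E' = m_chi * (1 - m_x^2 / (4 m_chi^2))."""
    return m_chi * (1.0 - m_x**2 / (4.0 * m_chi**2))

# If a line at 100 GeV comes from chi chi -> gamma gamma (m_chi = 100 GeV),
# the companion gamma-Z line would sit near 79.2 GeV (m_Z ~ 91.19 GeV assumed).
e_second = second_line_energy(100.0, 91.19)
```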
We find that only 4 ROIs result in TS$_{\rm 2nd}>4$, and the highest one appears in the ROI {\tt \#35888}. The combined TS of this ROI reaches 27.1 for the two gamma-ray lines; however, considering the additional degrees of freedom and the very large trial factor, the global significance is still very low. \section{Constraining DM cross section with the non-detection of significant line signal} \label{sec:limits} In the cold dark matter paradigm, structure forms hierarchically, and it is predicted that a large number of DM subhalos exist around the Milky Way. Such a prediction is supported by numerical $N$-body simulations \cite{diemand08VL2,springel08Aquarius,garrison14ELVIS}. The concentration of DM in the subhalos leads to a higher DM annihilation rate. If massive subhalos are sufficiently close to the Earth, they may generate gamma-ray signals detectable by Fermi-LAT. For subhalos that are too small to capture enough baryonic matter (i.e., $M_{\rm sub}<10^{8}\,M_{\odot}$), the gamma-ray annihilation signals would be the only way to observe them. Thus, it has been suggested that some unassociated Fermi-LAT sources are potential DM subhalos \cite{fermi12dmsh,berlin14,bertoni15dmsh,schoonenberg16dmsh,bertoni16j2212,hooper16dmsh,calore16dmsh,wyp16dmsh,xzq16dmsh}, especially those that are spatially extended and have spectra compatible with DM signals \cite{bertoni16j2212,wyp16dmsh,xzq16dmsh}. \begin{figure*}[!htb] \includegraphics[width=0.45\textwidth]{phi0.pdf} \includegraphics[width=0.45\textwidth]{nobs_phi0.pdf} \caption{{\it Left panel}: The line signal detection threshold $\Phi_0$ as a function of the line energy $E_\gamma$. {\it Right panel}: The expected number of subhalos that can yield a line signal significantly detectable by Fermi-LAT as a function of the line signal cross section, assuming a DM mass of 100 GeV and a detection threshold of $1.0\times10^{-11}\,{\rm ph/cm^2/s}$.
Note that $\Phi_0$, $M_{\rm DM}$ and $\left<\sigma{v}\right>$ are actually three degenerate parameters; the curve in the right panel applies to any values of $\Phi_0$ and $M_{\rm DM}$.} \label{fig:phi0} \end{figure*} Since no significant line signal ($TS>25$) is found in our analysis, we can set limits on the DM cross section for annihilation to gamma-rays, $\left<\sigma{v}\right>_{\gamma\gamma}$. The basic idea is that a higher cross section leads to a brighter gamma-ray annihilation flux, such that more subhalos far from us can be detected \cite{fermi12dmsh,berlin14,bertoni15dmsh,schoonenberg16dmsh,hooper16dmsh,calore16dmsh}. The number of expected observable subhalos $N_{\rm exp}$ therefore increases with the cross section. According to Poisson statistics, for a model with $N_{\rm exp}$ observable subhalos, the probability distribution of the number of detected subhalos, $N_{\rm obs}$, is \begin{equation} p(N_{\rm obs}|N_{\rm exp})=\frac{N_{\rm exp}^{N_{\rm obs}}\,e^{-N_{\rm exp}}}{N_{\rm obs}!}. \end{equation} Thus, for a given number of actually observed subhalos $N'_{\rm obs}$, the 95\% upper limit on $N_{\rm exp}$ corresponds to the smallest value satisfying \begin{equation} \sum_{N_{\rm obs}>N'_{\rm obs}}{p(N_{\rm obs}|N_{\rm exp})}>0.95. \end{equation} Since we do not find any line-like excesses with $TS>25$, we set $N'_{\rm obs}=0$, leading to $N_{\rm exp}<3$ at the 95\% confidence level. Here we use the expression derived in Ref. \cite{hooper16dmsh} (hereafter H16) to give the predicted number of observable DM subhalos, \begin{eqnarray} N_\text{exp} &=& \Omega \int \int \int \int D^2 \, \frac{dN}{dMdV} \, \frac{dP}{d\gamma} \,\frac{dP}{dR_b} \, \nonumber\\ &&\Theta[\Phi_{\gamma}(M, D, R_b, \gamma)-\Phi_0]\, dM \, dD \, dR_b \, d\gamma, \nonumber \\ \label{eq:npred} \end{eqnarray} where $D$ and $M$ are the distance and mass of the subhalo, respectively.
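The $N_{\rm exp}<3$ limit can be checked directly: with zero detections, $p(0|N_{\rm exp})=e^{-N_{\rm exp}}$, and the 95\% upper limit is the smallest $N_{\rm exp}$ for which this drops below 0.05, i.e. $-\ln 0.05\approx3.0$. A short sketch that also generalizes to nonzero observed counts via bisection (function names are illustrative):

```python
import math

def poisson_pmf(n_obs, n_exp):
    """Poisson probability p(n_obs | n_exp)."""
    return n_exp**n_obs * math.exp(-n_exp) / math.factorial(n_obs)

def poisson_upper_limit(n_obs, cl=0.95, hi=100.0):
    """Smallest n_exp with P(N <= n_obs | n_exp) <= 1 - cl, by bisection."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        tail = sum(poisson_pmf(k, mid) for k in range(n_obs + 1))
        if tail > 1.0 - cl:
            lo = mid      # tail still too large -> need larger n_exp
        else:
            hi = mid
    return hi

# Zero observed subhalos -> upper limit -ln(0.05) ~ 3.0, as used in the text.
ul_zero = poisson_upper_limit(0)
```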
The gamma-ray flux of the line signal generated in a given subhalo is \begin{equation} \Phi_{\gamma} = \frac{{\langle \sigma v \rangle}_{\gamma\gamma}}{4 \pi m^2_\chi D^2} \int \rho^2(r) \, dV. \label{eq:flux} \end{equation} For the DM distribution $\rho(r)$ in the subhalo, following H16, a power-law density profile with an exponential cutoff (PLE) is adopted rather than the Navarro-Frenk-White \cite{navarro97nfw} one, \begin{equation} \rho(r) = \frac{\rho_0}{r^{\gamma}} \, \exp\left(-\frac{r}{R_b}\right)\,. \label{eq:dmprofile} \end{equation} It has been found that a PLE density profile better matches the characteristics found in the VL-II and ELVIS simulations when the effects of tidal stripping are considered \cite{hooper16dmsh}. In Eq. (\ref{eq:npred}), $dN/dMdV$, $dP/d\gamma$ and $dP/dR_b$ are the subhalo distribution and the distributions of the values of $\gamma$ and $R_b$ near the Earth's location. For these distributions we also utilize the formulae reported in H16, which are presented in Appendix \ref{distribution} as well. When deriving the distributions, their dependence on both the subhalo mass and the location relative to the Galactic center has been taken into account \cite{hooper16dmsh}. Note that these distributions in the integrand of Eq. (\ref{eq:npred}) are only valid in the local environment; in particular, to simplify the calculation, a uniform subhalo number density ($dN/dV=$const) is assumed following H16. We thus consider only the subhalos within a distance of $5\,{\rm kpc}$ \footnote{The bounds of the integral in Eq. (\ref{eq:npred}) for $M$, $R_b$ and $\gamma$ are [$10^5$, $10^{10}$] $M_\odot$, [0, 5] kpc and [0, 1.45], respectively.}. Subhalos at larger distances may also be detectable; our choice of $D_{\rm max}$ thus leads to relatively conservative results. The $\Phi_0$ in Eq. (\ref{eq:npred}) denotes the flux threshold above which a line signal will be significantly detected.
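For the PLE profile, the volume integral $\int\rho^2\,dV$ entering Eq. (\ref{eq:flux}) has the closed form $4\pi\rho_0^2\,\Gamma(3-2\gamma)\,(R_b/2)^{3-2\gamma}$ (valid for $\gamma<3/2$), which provides a useful cross-check of any numerical implementation. A sketch under arbitrary illustrative parameter values (the specific $\rho_0$, $\gamma$, $R_b$ choices below are not from the text):

```python
import math

def rho_ple(r, rho0, gamma, r_b):
    """Power law with exponential cutoff (PLE) density profile."""
    return rho0 * r**(-gamma) * math.exp(-r / r_b)

def j_numeric(rho0, gamma, r_b, n=200000, r_max_factor=40.0):
    """Midpoint-rule integration of 4*pi*r^2*rho(r)^2 from 0 to r_max."""
    dr = r_max_factor * r_b / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += 4.0 * math.pi * r**2 * rho_ple(r, rho0, gamma, r_b)**2 * dr
    return total

def j_analytic(rho0, gamma, r_b):
    """Closed form 4*pi*rho0^2*Gamma(3-2g)*(R_b/2)^(3-2g), for gamma < 1.5."""
    return (4.0 * math.pi * rho0**2 * math.gamma(3.0 - 2.0 * gamma)
            * (r_b / 2.0)**(3.0 - 2.0 * gamma))
```

The numerical and analytic forms agree to well under a percent for, e.g., $\gamma=0.74$, $R_b=1$.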
Since no line signal is found with $TS>25$, we make use of Monte Carlo simulations to derive $\Phi_0$. We model the Fermi-LAT observed spectrum averaged over all the sky (excluding the Galactic plane and the regions around bright gamma-ray sources, see below) with a PLE function and use this PLE spectrum to approximate the background in our line searches. Based on this PLE background spectrum, we generate pseudo photons in the 2$^\circ$ ROI. In addition, a line-like component is superposed onto the background, the profile of which is the energy dispersion function of the Fermi-LAT data used in this work. We apply the same search procedure as in Sec. \ref{sec2} to these pseudo data and derive the corresponding TS value of the line component. By varying the flux $\Phi$ of the input line component, for each $E_\gamma$ we determine the threshold above which the line component has a TS value greater than 25. We perform 100 Monte Carlo simulations and adopt the median value of the thresholds as $\Phi_0$. The resulting $\Phi_0$ curve is shown in the left panel of Figure \ref{fig:phi0}. \begin{figure*}[!t] \includegraphics[width=0.55\textwidth]{ul.pdf} \caption{The 95\% confidence level upper limits on the cross section of DM annihilation into a pair of $\gamma$-rays derived in our analysis. As a comparison, we also plot the constraints set by the Fermi-LAT observations of the Galactic central regions \cite{fermi15line}.} \label{fig:limits} \end{figure*} The background gamma-ray emission in the Galactic plane region and near bright gamma-ray point sources is much stronger than in other regions, thus lowering the detectability of a line signal from a subhalo. In this section, for both calculating the observed solid angle $\Omega$ and deriving the flux threshold $\Phi_0$, the region of $|b|<20^\circ$ and the regions within $2^\circ$ of the 100 brightest point sources in 3FGL \cite{fermi15_3fgl} are excluded.
With the elements described above, we can calculate the expected number of subhalos, $N_{\rm exp}$, that can yield line signals significantly detectable by Fermi-LAT. Assuming a detection threshold of $\Phi_0=1.0\times10^{-11}\,{\rm ph\,cm^{-2}\,s^{-1}}$, the $N_{\rm exp}$ as a function of the cross section for 100 GeV DM is shown in the right panel of Figure \ref{fig:phi0}. We apply Poisson statistics to $N_{\rm exp}$ (i.e., requiring $N_{\rm exp}<3$) to place a 95\% upper limit on the annihilation cross section for a given value of the DM mass. The obtained constraints are shown in Figure \ref{fig:limits}. As a comparison, also plotted are the constraints derived from the Fermi-LAT observations towards the regions around the Galactic center \cite{fermi15line}. We find that our constraints are not competitive with these Galactic ones (thin solid line for the isothermal density profile and dashed line for the NFW). We would like to emphasize that in the calculation we have considered only the subhalos within 5 kpc; the current constraints could be improved by including subhalos farther away. \section{Summary} In this work, we have analyzed the Fermi-LAT data to blindly search for potential line signals originating from anywhere in the sky. We make use of the sliding window technique to perform unbinned likelihood fittings in 49152 ROIs, which cover the whole sky. We did not find any line signal with $TS>25$. However, line-like excesses with $TS>16$ appear in the spectra of 50 regions. After the trial factor correction, the highest global significance among these excesses is only $0.54\sigma$. These excesses most likely originate from statistical fluctuations. In any case, the possibility that a few of them come from DM annihilation cannot be excluded. If the data observed by other/future survey-mode gamma-ray observatories were analyzed in the same regions, their origin (DM or statistical fluctuation) might be identified.
We thus suggest that these regions are worth further attention. All these regions are presented in Appendix \ref{list}. The DM particles that generate the line signals may simultaneously annihilate through other channels, thus leading to counterpart gamma-ray emission (either continuum emission at lower energies or a second gamma-ray line). If detected, these counterparts would provide indications of a DM origin of the line signals. In Section \ref{sec4}, we have attempted to search for the counterpart gamma-ray emission of the line-like excesses in Table \ref{tb:50roi}, by analyzing the Fermi-LAT unassociated point sources within selected ROIs (for continuum emission) and by examining the significances of the second lines at specific energies. No evidence of counterparts is found in these analyses. Previous works have pointed out that the number of DM subhalo candidates can be used to place constraints on the DM cross section. In our analysis, we do not find any significant line signal with $TS>25$, so the number of observed subhalos is zero. Based on this, we derive the expected number of subhalos as a function of the cross section of DM annihilation to gamma-ray lines, and then set constraints on the latter. We find that the constraints obtained here are weaker than those derived from the Fermi-LAT observations towards the Galactic central region. Nevertheless, our work offers a novel approach that supports these previous constraints independently. Finally, we point out that other on-orbit or proposed space-borne gamma-ray telescopes, such as DAMPE \cite{dampe}, Gamma-400 \cite{galper14gamma400} and HERD \cite{zhang14herd}, all of which have significantly better energy resolution compared to Fermi-LAT, will contribute significantly to the gamma-ray line search and may help to examine the origin of the line-like excesses found in this work. \begin{acknowledgments} We thank Samuel J. Witte for helpful discussion on the calculation of Eq.
(\ref{eq:npred}). This work is supported in part by the National Key Research and Development Program of China (No. 2016YFA0400200), the National Natural Science Foundation of China (Nos. 11525313, 11722328, 11773075, U1738210, U1738136). \end{acknowledgments} \bibliographystyle{apsrev4-1-lyf}
{ "timestamp": "2019-06-05T02:07:56", "yymm": "1806", "arxiv_id": "1806.00733", "language": "en", "url": "https://arxiv.org/abs/1806.00733" }
\section{Introduction} During solar flares, beams of accelerated electrons propagating along open magnetic field lines can produce the so-called type III radio bursts \citep{2008LRSP....5....1B,Holman2011}. They are observed as fast-drifting structures with high brightness temperature in radio dynamic spectra \citep[see][as recent reviews]{1985srph.book..289S,Pick2008,2008LRSP....5....1B}. The type III radio bursts can be traced from the solar corona into interplanetary space \citep[e.g.][]{1974SSRv...16..189L,2011SoPh..273..413K,Krupar2014,2015A&A...582A..52A}, where the corresponding electron beams and the beam-driven Langmuir waves near the electron plasma frequency $f_{\mathrm{pe}}$ can be observed in-situ \citep{Lin1985, Krucker2007}. In the dynamic spectra of type III bursts, the fundamental ($\approx f_{\mathrm{pe}}$) and harmonic ($\approx 2f_{\mathrm{pe}}$) parallel drifting components are usually identified \citep{McLean1985}. A large fraction of metric and decametric type III radio bursts reveal fine spectral structuring. In particular, the so-called type IIIb radio bursts \citep{deLaNoe1972,1979SoPh...62..145A,2017SoPh..292..155M} are characterized by multiple narrowband bursts with slow frequency drift, known as stria bursts. Together, these compose a fast-drifting spectral structure similar to that of usual type III radio bursts \citep{Ellis1967, Ellis1969, deLaNoe1972, Stewart1975, Takakura1975, Baselyan1974a}. Striae can be observed in both the fundamental and harmonic emission components, although the harmonic striae are more diffuse \citep{Baselyan1974b}; if present, they can form the so-called IIIb--IIIb pairs \citep{1979SoPh...62..145A,2015RRPRA..20...99B}. A typical stria bandwidth is about $30-300$ kHz, with a frequency drift rate of $0-150$ kHz $\textrm{s}^{-1}$ \citep{Bhonsle1979, Kruger1984}.
The duration of an individual stria depends on its frequency: it is about 1 second for the fundamental component and can be several times longer for the harmonic emission \citep{Bhonsle1979, Kruger1984}. The basic explanation of the striae origin is the existence of density variations along the electron beam path; this idea was first proposed by \inlinecite{Takakura1975}. Numerical modelling by \inlinecite{Kontar2001} demonstrated that the spatial distribution of Langmuir waves is strongly modulated by small-amplitude density fluctuations creating ``Langmuir wave clumps'' that could be responsible for individual striae. Estimates in that work showed that even relatively weak density perturbations ($\Delta n/n\sim 10^{-3}$, where $n$ is the thermal electron density) are sufficient to form the observed fine spectral structures. However, it is still not clear which magnetohydrodynamic (MHD) waves cause these density fluctuations; there is a long list of possible mechanisms responsible for the emission modulation \citep{Melrose1982}. Furthermore, the number of studies devoted to the spectral properties of striae at different frequencies has so far been limited, while a successful theory should be able to explain both the observed striae drift rates and bandwidths as well as their frequency dependencies. In this paper, we analyze two IIIb bursts observed with the LOw Frequency ARray \citep[LOFAR,][]{Haarlem2013} on 16 April 2015. The frequency resolution of LOFAR is sufficient to resolve striae; it also allows us to study the spatial characteristics of the emission sources. While the spatial dynamics of the striae sources in the mentioned event was analyzed in detail in the paper of \inlinecite{Kontar2017}, the main aim of this paper is to investigate the spectral characteristics of the striae (i.e., bandwidth and frequency drift) and their dependencies on the emission frequency.
Our particular interest is to verify the applicability of the density fluctuations model to explaining the observed striae properties. \section{LOFAR observations} The LOw Frequency ARray \citep[LOFAR,][]{Haarlem2013} was designed by the Netherlands Institute for Radio Astronomy (ASTRON). It operates in the metric-decametric wavelength range and is able to produce spatially resolved solar observations with high time cadence and excellent spectral resolution. In this work we analyze the low-band observations (in the 30-80 MHz range) made with the LOFAR core stations located near Exloo, Netherlands; all 24 core stations (scattered over an area of $\sim 3\times 2$ $\textrm{km}^2$) were used for the observations. Instead of a classical interferometric approach, the spatially-resolved spectroscopic observations were performed using the LOFAR beam-formed mode \citep{Stappers2011, Haarlem2013}, in which the data from the LOFAR core stations are combined to form a number of ``tied-array beams'' covering an area of the sky; the beam size is about $\lambda/D\sim 10'$ at 32 MHz, where $\lambda$ is the wavelength and $D$ is the maximum baseline. The advantage of using the tied-array beams is that they allow producing images with very high time resolution, which is not possible in the LOFAR interferometric mode \citep[see][for details]{Stappers2011,Morosan2014,Morosan2015,2017A&A...606A.141R,2018ApJ...856...73C}. In this work the LOFAR configuration included 127 beams covering the solar disk and adjacent areas with a separation between the beam centers of about $356''$ (see Figure \ref{ims}). The fluxes corresponding to each beam were recorded with high frequency and time resolution ($12.2$~kHz and 10 ms, respectively). Flux calibration was made using observations of the Crab nebula (Tau A), as demonstrated in \citet{Kontar2017,2018ApJ...856...73C}. In the light curves and dynamic spectra below, the pre-event (pre-burst) background is subtracted.
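The quoted tied-array beam size follows from simple diffraction arithmetic; a minimal sketch, assuming a maximum core baseline of $D \approx 3$ km (our assumption, consistent with the $\sim 3\times 2$ $\textrm{km}^2$ core area quoted above):

```python
import math

def tied_array_beam_arcmin(freq_mhz, baseline_km=3.0):
    """Diffraction-limited beam size lambda/D, converted to arcmin.

    baseline_km ~ 3 km is an assumed value based on the quoted
    ~3 x 2 km^2 extent of the LOFAR core.
    """
    wavelength_m = 299.8 / freq_mhz               # lambda = c/f, f in MHz
    beam_rad = wavelength_m / (baseline_km * 1e3)
    return math.degrees(beam_rad) * 60.0

# at 32 MHz the beam is ~10 arcmin, as stated in the text
print(round(tied_array_beam_arcmin(32.0), 1))
```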
\section{Main properties of the selected events} \subsection{LOFAR dynamic spectra of the selected type IIIb radio bursts}\label{DSp} The observations were made on 16 April 2015, around local noon; a number of type III bursts were detected. Among them, we selected two bursts shown in Figure \ref{dyn_spec}; these bursts were chosen because they are isolated, i.e., do not overlap with other bursts. The bursts occurred at $\sim$11:56:20 and $\sim$11:56:55 UT; below, they are referred to as burst 1 and burst 2, respectively. We note that there were no flares or coronal mass ejections during the considered time interval; the nearest flare (of GOES class C2) was at $\sim$10:45:00~UT. Both bursts reveal the typical two-component (fundamental-harmonic) structure of dynamic spectra; the fine spectral structures (striae) are well visible in both of them. Below we demonstrate in detail the analysis technique and results for burst 2 (the brighter one); the other burst is analyzed in the same way. The statistical results are shown for both bursts. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig1.eps} \caption{Spatially-integrated background-subtracted calibrated LOFAR dynamic spectrum of two subsequent type IIIb radio bursts.} \label{dyn_spec} \end{figure} In Figure \ref{zoom} we show zoomed dynamic spectra of burst 2 for the chosen frequency and time ranges. These ranges are marked by rectangular boxes in the top left panel of the figure. As noted above, the burst is composed of two distinct components: a bright (fundamental) component with well-pronounced striae is followed by a more diffuse drifting structure (harmonic); hence the analyzed bursts can be identified as type IIIb--III pairs. The drift rates of both components are comparable, of about 10 MHz $\textrm{s}^{-1}$, which corresponds to an electron beam speed of $\sim 0.3c$ assuming the one-fold Newkirk coronal density model \citep{Newkirk1961}.
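The drift-rate-to-speed conversion above can be reproduced numerically. A minimal sketch, assuming fundamental emission ($f = f_{\mathrm{pe}}$), radial beam propagation, and the standard one-fold Newkirk model $n(R) = n_0\,10^{4.32/R}$, which gives $\log_{10}(f/f_0) = 2.16/R$ with $f_0 \approx 1.84$ MHz; the function names are ours:

```python
import math

R_SUN_MM = 696.0   # solar radius [Mm]
C_MM_S = 299.8     # speed of light [Mm/s]
F0_MHZ = 1.84      # plasma frequency at the Newkirk base density [MHz]

def newkirk_radius(f_mhz):
    """Heliocentric distance [solar radii] where f_pe = f for the
    one-fold Newkirk model: log10(f/f0) = 2.16/R."""
    return 2.16 / math.log10(f_mhz / F0_MHZ)

def beam_speed_c(f_mhz, drift_mhz_s):
    """Radial exciter speed v = (df/dt)/(df/dr), in units of c."""
    r = newkirk_radius(f_mhz)
    # d(ln f)/dR = -2.16*ln(10)/R^2 per solar radius, for radial propagation
    dfdr = f_mhz * (-2.16 * math.log(10) / r**2) / R_SUN_MM   # [MHz/Mm]
    return (drift_mhz_s / dfdr) / C_MM_S

# a drift of -10 MHz/s observed around 40 MHz corresponds to v ~ 0.3c
print(round(beam_speed_c(40.0, -10.0), 2))
```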
The dynamic spectra of the diffuse component (harmonic) reveal fine spectral structures, too (see box 3 in Figure \ref{zoom}); these structures look like smoothed striae. Below, we analyze in detail the striae detected in the first bright (fundamental) component since they are more distinctive (see boxes 1 and 2 in Figure \ref{zoom}). As an example, we show in the bottom right panel of Figure \ref{zoom} the time profiles of the total (spatially-integrated) radio flux at two frequencies: $f_1=34.54$ MHz and $f_2=56.33$ MHz; the frequencies were chosen to ensure that the corresponding radio fluxes (i.e., the fundamental and harmonic components) peak at the same time. Therefore the harmonic-to-fundamental ratio at this particular time can be estimated as $f_2/f_1\approx 1.63$, which is relatively low but not unprecedented for type III bursts \citep{1985srph.book..289S}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig2.eps} \caption{a) Dynamic spectrum of burst 2; three rectangular boxes show the regions of the dynamic spectrum presented in other panels. b-d) Zoomed regions of the dynamic spectrum corresponding to the fundamental (b-c) and harmonic (d) components; the contours mark striae. e) Light curves at two selected frequencies demonstrating the fundamental-harmonic relation.} \label{zoom} \end{figure} In panels (b-d) of Figure \ref{zoom}, individual striae with high contrast are highlighted by contours; the contour levels are chosen to ensure that the contours are not too long (with the length below a threshold of 250 pixels of the dynamic spectrum) and hence they, as a rule, enclose separate striae rather than groups of striae. The stria durations vary in the range of $0.5-1$ s. One can notice that the striae have a frequency-dependent drift rate: the stria drift rate is larger at higher frequencies; this behaviour is typical of type III radio bursts in general.
Also, the high-frequency striae have a slightly larger bandwidth than the low-frequency ones. In Section \ref{stat_spec}, these frequency variations of the stria parameters are analyzed quantitatively. \subsection{Dynamics of the radio emission source} Figure \ref{ims} shows two examples of the LOFAR radio images (at different times and frequencies) obtained by interpolation of the fluxes corresponding to different beams using the radial basis function method. At low frequencies, we can see a single well-defined radio emission source located on the solar disk; the signal-to-noise ratio is remarkably high. At higher frequencies (see, e.g., the right panel in Figure \ref{ims}), additional weaker and smaller sources appear that are likely caused by the instrument sidelobes. For these reasons, we analyze the spatial characteristics of the emission source only up to a frequency of $\approx 50$ MHz; on the other hand, the spectral characteristics of the striae are analyzed up to 70 MHz. From Figure \ref{ims}, one can notice that with increasing emission frequency the emission source shifts eastward and its size decreases. The raw images do not allow us to inspect the dynamics of the source position with sufficient accuracy; therefore, to describe the source parameters quantitatively and to study their temporal dynamics, we fitted the source with an elliptical Gaussian defined by seven free parameters: normalization coefficient, center position ($x_0$ and $y_0$), size ($\sigma_x$ and $\sigma_y$), and tilt angle. Both the parameters and their confidence limits (errors) were estimated using the least-squares procedure. Only the LOFAR beams located within $1000''$ from the solar disk center were used in the fitting procedure. We should note that the observed radio map is a convolution of the real brightness distribution with the LOFAR beam.
In a simple case when both the emission source and the LOFAR beam have approximately Gaussian shapes, the real source area $A_{\mathrm{real}}$ is determined as $A_{\mathrm{real}} \approx A_{\mathrm{obs}} - A_{\mathrm{beam}}$, where $A_{\mathrm{obs}}$ and $A_{\mathrm{beam}}$ are the areas of the observed emission source and the LOFAR beam, respectively. Thus, first, the apparent increase of the emission source size with decreasing frequency seems to be caused mainly by the LOFAR beam broadening with decreasing frequency. \inlinecite{Kontar2017} estimated the real source size for the considered event as $17-22$ arcmin around 32~MHz, which is approximately twice as large as the LOFAR beam at that frequency. Second, the expansion/shrinking rate of the emission source at a fixed frequency (which is applicable to a narrowband stria) satisfies the relation $\mathrm{d}A_{\mathrm{real}}/\mathrm{d}t = \mathrm{d}A_{\mathrm{obs}}/\mathrm{d}t$ (because the LOFAR beam size at a fixed frequency is constant). That is why the expansion rates of individual striae (considered below) are not affected by the mentioned convolution effect. In addition, in the observed frequency range the ionospheric refraction can result in a displacement of the apparent radio emission source. LOFAR monitoring of the point-like source Tau~A revealed the absence of significant intensity scintillations on subsecond time scales; the level of ionospheric turbulence at the time of observations was low. To minimize the (possible) ionospheric effects, we focus on relative motions in the plane of sky at a single frequency, on timescales shorter than those of the observed ionospheric scintillations. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig3.eps} \caption{LOFAR radio images at two time-frequency points within burst 2. Centers of the 127 LOFAR beams are shown by small white squares. The solar limb is shown by a white thick circle.
The LOFAR beam sizes (at the $1/2$ level) at the considered frequencies are shown by white ellipses.} \label{ims} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig4.eps} \caption{Dynamics of the emission source position on the solar disk (for a sub-region of burst 2, fundamental component). a) Radial distance from the solar disk center shown by the colored background as a function of time and frequency; the emission intensity contours (white) are overplotted to mark striae. b) The corresponding dynamic spectrum. c) Time profile of the mentioned radial distance at a fixed frequency (31.71 MHz); the intensity lightcurve (in relative units) is overplotted.} \label{RST} \end{figure} Figure \ref{RST} presents a comparison of the fine structures seen in dynamic spectra with the radial shift of the emission source position, for a region of the dynamic spectrum corresponding to the fundamental component of burst 2; the time resolution is 12.5 ms. The radial position of the source is calculated as the offset of the Gaussian centroid from the solar disk center; in panel~a of Figure \ref{RST}, the source position is shown in a dynamic-spectrum style (by the colored background). Panel~b presents the corresponding region of the dynamic spectrum. In panel~c the dynamics of the radial distance at a chosen single frequency (31.71 MHz) is shown. The emission intensity is overplotted by white contours (in panel~a) or a red lightcurve (in panel~c). Both the time-frequency plots (for all striae) and the single-frequency time profiles demonstrate a complicated pattern reported earlier by \inlinecite{Kontar2017}: the emission source position is characterized by a gradually increasing radial distance (i.e., motion towards the limb) with a subsequent fast return motion.
The evolution of the source position is delayed with respect to the intensity enhancement; the maximum radial distance is achieved $\sim 1$ s after the intensity peak, consistent with the conclusion by \inlinecite{Kontar2017} that this behaviour most likely reflects radio emission propagation effects. \subsection{Parameters of individual striae}\label{case_study} Figures \ref{stria1_b1}--\ref{stria2_b1} present examples of individual striae. We have selected two striae within burst 1, with frequencies around 30.11 MHz (Figure \ref{stria1_b1}) and 41.77 MHz (Figure \ref{stria2_b1}); below in this Section, we refer to them as the ``low-frequency'' (LF) and ``high-frequency'' (HF) striae, respectively. To determine the stria parameters, we fitted the emission spectrum (containing several spectral channels which were selected manually for each stria burst) in each time bin by a Gaussian with a linear background. This provides us with the time-dependent values of the stria central frequency and bandwidth (determined at the one-sigma level); the radio flux and the emission source size and position shown in Figures \ref{stria1_b1}--\ref{stria2_b1} correspond to the mentioned (variable) central frequency. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig5.eps} \caption{An example of a ``low-frequency'' stria. a) Dynamic spectrum. Black dots mark the central frequency of the stria (obtained by Gaussian fitting) in each time bin. b) Emission intensity (at the central frequencies) vs. time. c) Area of the radio emission source (at the central frequencies) vs. time. d) Spectral bandwidth of the stria vs. time. e) Central frequency of the stria vs. time.
f) Source position on the solar disk at different times (color-coded, with the time increasing from violet to red); black lines show radial directions from the disk center.} \label{stria1_b1} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig6.eps} \caption{Same as in Figure \protect\ref{stria1_b1}, for a ``high-frequency'' stria.} \label{stria2_b1} \end{figure} One can see that the emission source experiences a gradual expansion with time: from 270 to 450 $\textrm{arcmin}^2$ for the LF stria and from 135 to 155 $\textrm{arcmin}^2$ for the HF stria. Also, both considered striae demonstrate motion of the emission sources in a mostly radial direction, similar to the behaviour observed at a fixed frequency (see Figure \ref{RST}). The real emission source size can be estimated using the above-mentioned relation $A_{\mathrm{real}} \approx A_{\mathrm{obs}} - A_{\mathrm{beam}}$, from which we estimate the linear source size (FWHM) in the plane of sky as $l_{\mathrm{real}}\approx 2\sigma_{\mathrm{real}}\sqrt{2\ln 2}\approx (2\sqrt{2\ln 2}/\pi) \sqrt{A_{\mathrm{obs}}-A_{\mathrm{beam}}}\approx 0.75\sqrt{A_{\mathrm{obs}}-A_{\mathrm{beam}}}$. For the LF stria peak, considering $A_{\mathrm{obs}} \approx 320$~arcmin$^2$ and $A_{\mathrm{beam}} \approx 110$~arcmin$^2$, we obtain $l_{\mathrm{real}} \approx 11$ arcmin; for the HF stria peak, we have $A_{\mathrm{obs}} \approx 145$~arcmin$^2$, $A_{\mathrm{beam}} \approx 60$~arcmin$^2$, and $l_{\mathrm{real}} \approx 7$ arcmin. The spatial extent of the source along the line-of-sight (including the effects of scattering) can be estimated from the FWHM of the temporal stria intensity profile in the same way as in the work of \inlinecite{Kontar2017}. For both the LF and HF striae, the duration at a fixed frequency $\Delta t$ is about 0.6 seconds; thus the upper limit of the spatial extent is $(l_{\mathrm{LOS}})_{\max} = c\Delta t\approx 180~\textrm{Mm}\approx 4$~arcmin.
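These size estimates are simple arithmetic and can be reproduced directly; a minimal sketch using the deconvolution relation $A_{\mathrm{real}} \approx A_{\mathrm{obs}} - A_{\mathrm{beam}}$ quoted above, with 1 arcmin taken as $\approx 43.5$ Mm at 1 AU (the helper names are ours):

```python
import math

FWHM_COEF = 2.0 * math.sqrt(2.0 * math.log(2.0)) / math.pi   # ~0.75
MM_PER_ARCMIN = 43.5   # linear size of 1 arcmin at 1 AU [Mm]
C_MM_S = 299.8         # speed of light [Mm/s]

def source_fwhm_arcmin(a_obs, a_beam):
    """Deconvolved plane-of-sky FWHM, l_real ~ 0.75*sqrt(A_obs - A_beam);
    areas in arcmin^2 (Gaussian source, Gaussian beam)."""
    return FWHM_COEF * math.sqrt(a_obs - a_beam)

def los_extent_arcmin(duration_s):
    """Upper limit on the line-of-sight extent, c*dt, in arcmin."""
    return C_MM_S * duration_s / MM_PER_ARCMIN

print(round(source_fwhm_arcmin(320.0, 110.0)))   # LF stria: ~11 arcmin
print(round(source_fwhm_arcmin(145.0, 60.0)))    # HF stria: ~7 arcmin
print(round(los_extent_arcmin(0.6)))             # ~4 arcmin
```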
The size of the stria emission source along the line-of-sight $l_{\mathrm{LOS}}$ is smaller than the size measured across the line-of-sight; thus, by studying separate striae we confirmed the conclusion of \inlinecite{Kontar2017} that the radio wave scattering in the corona is highly anisotropic. The LF stria bandwidth increases with time from 38 to 54 kHz, while the HF stria reveals no trend in its bandwidth evolution. To estimate a characteristic (mean) stria bandwidth, we calculated a cumulative stria spectrum by summation of the spectra in all relevant time bins (shifted to provide the same central frequency). Then the bandwidth of the resulting spectrum (at the one-sigma level) is considered to be the characteristic stria bandwidth; it is about 44 kHz for the LF stria and 68 kHz for the HF stria (the HF stria is wider). Also, the HF stria reveals a faster frequency drift compared with the LF stria: the drift rate can be estimated as $-27$ kHz $\textrm{s}^{-1}$ for the LF stria and $-98$ kHz $\textrm{s}^{-1}$ for the HF stria. A similar analysis was performed for all identified striae in bursts 1 and 2, with the results presented and summarized in the next Section. \section{Statistics of the striae parameters} \subsection{Dynamics of the emission sources}\label{stat_sourc} We have selected for quantitative analysis 43 striae in burst 1 and 40 striae in burst 2. The top panels in Figure \ref{stat_source} show the central positions of the radio emission sources for different times and frequencies; the average positions and the characteristic (average) velocities of the emission sources for the selected striae are shown as well. To determine an average vector of the emission source velocity in the plane of the sky, we used linear fits of the $x_0(t)$ and $y_0(t)$ dependencies, where $(x_0, y_0)$ are the central coordinates of the emission source; the corresponding average speeds (absolute magnitudes) are shown in the bottom panel (a) of Figure \ref{stat_source}.
A similar procedure (linear fitting) was used to determine a characteristic expansion rate of the emission source, $\mathrm{d}S/\mathrm{d}t$; see bottom panel (b) of Figure \ref{stat_source}. We do not show the error bars for $x_0$ and $y_0$ in the top panels of Figure \ref{stat_source} because this would make the figure unreadable due to the large number of data points; the typical error bars at two representative frequencies are shown in Figures \ref{stria1_b1}f and \ref{stria2_b1}f. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig7.eps} \caption{Upper panels: central positions of the radio emission sources; color marks the emission frequency and black dots show the average positions of the radio emission sources for different striae. Black lines show the average velocities (direction and relative magnitude) of the striae radio emission sources. Bottom panels: speeds (a) and area expansion rates (b) of the radio emission sources for different striae vs. the striae central frequencies; red and black colors mark bursts 1 and 2, respectively.} \label{stat_source} \end{figure} One can see that the obtained speeds of the striae radio emission sources $v_{\mathrm{c}}$ are mostly in the range of $(0.1-0.6)c$, and only a few points are outside this range. The area expansion rate of the emission source can be as high as $\sim 200$ $\textrm{arcmin}^2$ $\textrm{s}^{-1}$. Assuming that the emission source has a roughly circular shape, we can estimate the corresponding linear expansion rate as $\mathrm{d}r_{\bot}/\mathrm{d}t\approx (\mathrm{d}S/\mathrm{d}t)/(2\pi\sqrt{S})$; this value is also comparable with the speed of light and can be as high as $0.2c$. The expansion rate of the emission source tends to decrease with increasing emission frequency.
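The linear expansion estimate can be illustrated numerically; a minimal sketch using the relation quoted above, with an assumed source area of $S \sim 500$ $\textrm{arcmin}^2$ (our illustrative value; the observed source areas are of this order):

```python
import math

MM_PER_ARCMIN = 43.5   # linear size of 1 arcmin at 1 AU [Mm]
C_MM_S = 299.8         # speed of light [Mm/s]

def expansion_speed_c(ds_dt, s):
    """Linear expansion rate in units of c, from the relation
    dr/dt ~ (dS/dt)/(2*pi*sqrt(S)) quoted in the text;
    ds_dt in arcmin^2/s, s in arcmin^2."""
    dr_dt_arcmin_s = ds_dt / (2.0 * math.pi * math.sqrt(s))
    return dr_dt_arcmin_s * MM_PER_ARCMIN / C_MM_S

# dS/dt ~ 200 arcmin^2/s with an assumed source area of ~500 arcmin^2
print(round(expansion_speed_c(200.0, 500.0), 2))   # ~0.2
```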
\subsection{Statistics of the striae bandwidths and frequency drift rates}\label{stat_spec} The left panel of Figure \ref{stat_wdfdt} shows the striae bandwidths calculated according to the technique described in Section \ref{case_study}. The resulting data points are fitted by linear functions (separately for bursts 1 and 2) with the equations written within the plot. The striae bandwidth increases with frequency for both bursts; typical stria widths at 30 and 60 MHz are about 40 and 60 kHz, respectively (i.e., just a few LOFAR frequency channels). The relative bandwidth $\Delta f/f$ is weakly variable, being about 0.13\% and 0.10\% at the mentioned frequencies. We can conclude that the linear fits for both type IIIb bursts are similar to each other, and an average stria bandwidth increases with frequency by $\sim 0.6$ kHz per MHz. \begin{figure} \centering \includegraphics{fig8.eps} \caption{Statistics of the striae bandwidths (left) and frequency drift rates (right). Solid lines represent linear fits to the data points. Red and black colors show the results for bursts 1 and 2, respectively.} \label{stat_wdfdt} \end{figure} The frequency drift rate of the striae also increases with the emission frequency (see the right panel of Figure \ref{stat_wdfdt}); it varies from $\sim 30$ kHz $\textrm{s}^{-1}$ at 30 MHz up to $\sim 150$ kHz $\textrm{s}^{-1}$ at 60 MHz. The obtained linear fits (stria drift rate vs. frequency) for both considered type IIIb bursts are similar to each other: $\mathrm{d}f/\mathrm{d}t\simeq 0.004f$. Note that the striae drift rates are much smaller than the typical drift rate of usual type III bursts, whose magnitude normally increases with frequency as $|\mathrm{d}f/\mathrm{d}t|\simeq 0.01 f^{1.84}$ \citep{1973SoPh...29..197A}.
\section{Discussion} The most straightforward scenario for the striae formation in type III bursts is the existence of plasma density fluctuations along the electron beam path \citep{Takakura1975}. These small-amplitude density perturbations can substantially modulate the generation of Langmuir waves \citep{Kontar2001} and hence produce fine structures like striae in the dynamic radio spectra. The results obtained in this work (see Section \ref{stat_spec}) can be used to estimate the properties of the density irregularities that are responsible for striation. Assuming that the emission is produced at the local plasma frequency, we estimate the relative density variations corresponding to the stria bursts as $\Delta n/n\simeq 2\Delta f/f$, where $\Delta f$ is the bandwidth of a stria and $f$ is its central frequency. From the linear fits in the left panel of Figure \ref{stat_wdfdt}, we determined $\Delta n/n$ at different frequencies (see Figure \ref{esti}a) for both type IIIb bursts. The amplitude of the density fluctuations varies from $\sim 2.0\times10^{-3}$ at 70 MHz to $\sim 3.2\times 10^{-3}$ at 30 MHz; the error bars show the possible ranges of $\Delta n/n$ considering the errors of the linear fitting presented in Figure \ref{stat_wdfdt}. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{fig9.eps} \caption{Various parameters inferred from the striae bandwidths, durations, and drift rates: a) amplitude of the plasma density perturbations $\Delta n/n$; b) characteristic electron beam length $d$; c) characteristic length scale of the plasma inhomogeneities $l$; d) propagation speed of the plasma density perturbations.
The respective heights are calculated according to the Newkirk coronal density model.} \label{esti} \end{figure} The longitudinal (i.e., along the magnetic field) size of the electron cloud generating the radio emission can be estimated as $d \sim v_{\mathrm{b}}\tau$, where $v_{\mathrm{b}}$ is the electron beam speed and $\tau$ is the characteristic time of interaction between the electron cloud and a particular density inhomogeneity. We assume that the electron beam speed is $v_{\mathrm{b}}=0.3c$, which is consistent with the estimations obtained in Section \ref{DSp}; the time $\tau$ can be estimated as $\tau \sim \Delta f (\mathrm{d}f/\mathrm{d}t)^{-1}$, where $\Delta f$ is a stria bandwidth and $\mathrm{d}f/\mathrm{d}t$ is its frequency drift rate. Note that we do not take the stria duration as $\tau$, because the stria duration can be affected by propagation effects \citep{Kontar2017} resulting in a radio echo and extending the radio pulses; therefore the above expression characterizes the intrinsic properties of the radio emission source more accurately. The obtained values of the electron beam size vary from $\sim 20$ Mm to $\sim 150$ Mm and increase with a decrease of the stria frequency (see Figure \ref{esti}b). We interpret this as an expansion of the electron beam during its propagation in the solar corona, which is likely determined by the geometry of the open magnetic flux tube in which the electrons propagate. The characteristic length scale $l$ of the plasma inhomogeneities producing the stria bursts is related to the stria bandwidth as \begin{equation} l\approx 2n\left(\frac{\mathrm{d}n}{\mathrm{d}r}\right)^{-1}\frac{\Delta f}{f}\approx 653\left[\log_{10}\left(\frac{f}{f_0}\right)\right]^{-2}\frac{\Delta f}{f}, \end{equation} where $n$ is the plasma density and $r$ is the distance along the electron beam path.
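These estimates reduce to straightforward arithmetic and can be reproduced with a short sketch. It uses the Newkirk-model coefficients quoted in the text ($f_0=1.84$ MHz, with the numerical factors 653 Mm and $6.53\times 10^5$ km $\textrm{s}^{-1}$ for the inhomogeneity length scale and the perturbation speed, respectively) and representative fit values at 30 MHz ($\Delta f \approx 40$ kHz, $\mathrm{d}f/\mathrm{d}t \approx 30$ kHz $\textrm{s}^{-1}$); the function names are ours:

```python
import math

F0_MHZ = 1.84      # Newkirk base plasma frequency [MHz]
C_KM_S = 2.998e5   # speed of light [km/s]

def newkirk_scale_mm(f_mhz):
    """Density scale 2n*(dn/dr)^(-1) for the Newkirk model:
    ~653/[log10(f/f0)]^2, in Mm."""
    return 653.0 / math.log10(f_mhz / F0_MHZ) ** 2

def dens_fluct(f_mhz, bw_khz):
    """Relative density perturbation, dn/n ~ 2*df/f."""
    return 2.0 * (bw_khz * 1e-3) / f_mhz

def beam_length_mm(bw_khz, drift_khz_s, v_beam_c=0.3):
    """Electron beam length d ~ v_b*tau with tau ~ df/(df/dt), in Mm."""
    tau_s = bw_khz / drift_khz_s
    return v_beam_c * C_KM_S * tau_s / 1e3

def inhom_scale_mm(f_mhz, bw_khz):
    """Inhomogeneity length scale l from the relation above, in Mm."""
    return newkirk_scale_mm(f_mhz) * (bw_khz * 1e-3) / f_mhz

def perturb_speed_km_s(f_mhz, drift_khz_s):
    """Perturbation speed v_p from the analogous relation for the
    frequency drift (given in the text), in km/s."""
    return 6.53e5 / math.log10(f_mhz / F0_MHZ) ** 2 \
        * (drift_khz_s * 1e-3) / f_mhz

# representative fit values at 30 MHz: df ~ 40 kHz, df/dt ~ 30 kHz/s
print(round(dens_fluct(30.0, 40.0), 4))        # ~2.7e-3
print(round(beam_length_mm(40.0, 30.0)))       # ~120 Mm
print(round(inhom_scale_mm(30.0, 40.0), 2))    # ~0.6 Mm
print(round(perturb_speed_km_s(30.0, 30.0)))   # ~440 km/s
```

The outputs fall within the ranges quoted below in the text ($d \sim 20-150$ Mm, $l \sim 0.2-0.8$ Mm, $v_{\mathrm{p}} \sim 400-800$ km $\textrm{s}^{-1}$).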
The second expression in the above formula was obtained by assuming radial propagation of the energetic electrons and the Newkirk coronal density model \citep{Newkirk1961}, with $f_0=1.84$ MHz and the resulting value of $l$ in Mm units. The estimated values of $l$ vary in the range of $\sim (0.2-0.8)$ Mm; they increase with a decrease of the stria frequency (see Figure \ref{esti}c). These inhomogeneities are much smaller than the electron beam: $l/d\sim (0.2-2.6)$\%. Similarly, the characteristic propagation speed $v_{\mathrm{p}}$ of the plasma inhomogeneities is related to the stria frequency drift rate as \begin{equation} v_{\mathrm{p}}\approx 2n\left(\frac{\mathrm{d}n}{\mathrm{d}r}\right)^{-1}\frac{1}{f}\frac{\mathrm{d}f}{\mathrm{d}t}\approx 6.53\times 10^5\left[\log_{10}\left(\frac{f}{f_0}\right)\right]^{-2}\frac{1}{f}\frac{\mathrm{d}f}{\mathrm{d}t}, \end{equation} with the resulting value of $v_{\mathrm{p}}$ in the second expression in km $\textrm{s}^{-1}$ units. The estimated values of the propagation speed vary in the range of $400-800$~$\textrm{km~s}^{-1}$ (see Figure \ref{esti}d), which corresponds to typical speeds of MHD waves \citep{2000SoPh..193..139R}; thus, the striae frequency drift can be caused by the motion of the plasma density perturbations due to MHD waves \citep[see, e.g.,][for details]{2018arXiv180508282K}. These perturbations appear to be supersonic, with speeds $2-4$ times larger than the typical sound speed of $c_{\mathrm{s}}\approx 147\sqrt{T/[1~\mathrm{MK}]}\approx 200$ km $\textrm{s}^{-1}$ for a typical coronal plasma temperature of $T\approx 2$ MK \citep{2005psci.book.....A}. \section{Conclusion} We presented a detailed analysis of spatially-resolved multi-frequency LOFAR observations of two type IIIb radio bursts. The results obtained provide statistically significant properties of individual striae and hence essential constraints for the theories describing the fine spectral structure of type IIIb bursts.
The main results can be summarized as follows: \begin{itemize} \item The spatial positions of the radio emission sources are characterized by radial motion from the Sun centre in the sky plane. The motion is particularly well pronounced during the decay phase of a stria. \item The evolution of the spatial source position is delayed by $\sim 1$ s with respect to the radio intensity time profiles. \item The apparent speed of the radio emission sources in the plane of sky is $(0.1-0.6)c$; the expansion speed of the sources is up to $\sim 0.2c$. \item The instantaneous bandwidth of the striae increases with the central frequency; the bandwidths lie in the range of $20-100$ kHz, which corresponds to a relative bandwidth $\Delta f/f$ of about $0.06-0.12$\%. \item The frequency drift rate of the striae increases with the central frequency; the drift rates lie in the range of $0-0.3$ MHz~$\textrm{s}^{-1}$. \item The relative amplitudes of the plasma density fluctuations that may be responsible for the formation of the striae should be of about $(2-3)\times 10^{-3}$. \item The characteristic sizes and propagation speeds of the mentioned density fluctuations are expected to be of about $200-800$ km and $400-800$ km $\textrm{s}^{-1}$, respectively. The propagation speeds are substantially larger than the typical sound speed of 200 km $\textrm{s}^{-1}$ in the corona and closer to the typical Alfv\'en speed \citep{2008A&A...491..297R}. \item Estimations of the apparent radio source size (taking the scattering effects into account) indicate that the source size across the line-of-sight considerably exceeds the size along the line-of-sight; this implies that the scattering of the radio waves must be anisotropic. \end{itemize} The results obtained from the analysis of two type IIIb bursts with striae support the conclusion of \inlinecite{Kontar2017} that the sizes of the radio emission sources and their dynamics are determined by the radio wave propagation effects.
At the same time, the dynamics of the source motion does not support a simple scenario of an isotropic radio source and isotropic radio wave scattering \citep[e.g.,][]{1971A&A....10..362S,Arzner1999}; reproducing the observed source dynamics and diagnosing the scattering regime require more complicated scattering simulations. The narrowband ``striae'' bursts are most likely produced by small-scale, small-amplitude plasma density perturbations in the solar corona connected with propagating MHD waves. These MHD perturbations propagate with speeds larger than the sound speed and closer to the Alfv\'en (fast magnetoacoustic) speed in the corona. \begin{acks} The work has benefited from a Marie Curie International Research Staff Exchange Scheme ``Radiosun'' (PEOPLE-2011-IRSES-295272), an international team grant (\url{http://www.issibern.ch/teams/lofar/}) from ISSI Bern, Switzerland, the Program No. 28 of the RAS Presidium, and budgetary funding of Basic Research program II.16. E.P.K. was supported by Science and Technology Facilities Council (STFC) Grant No. ST/P000533/1. This paper is based (in part) on data obtained from facilities of the International LOFAR Telescope (ILT) under project code LC3-012. LOFAR \citep{Haarlem2013} is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universit\'e d'Orl\'eans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK. \end{acks} \bibliographystyle{spr-mp-sola}
\section*{Introduction} \label{sec:introduction} The \mbox{LHCb}\xspace collaboration has presented first experimental evidence that spin-carrying matter and antimatter differ~\cite{LHCb-PAPER-2016-030}. Differences in the behaviour of matter and antimatter are associated with the non-invariance of fundamental interactions under the combined charge-conjugation and parity transformations, known as {\ensuremath{C\!P}}\xspace violation. Up until then, {\ensuremath{C\!P}}\xspace violation had only been verified experimentally with spin-zero mesons; a brief historical review is given in Ref.~\cite{LHCb-PAPER-2016-030}. As pointed out recently, the \mbox{LHCb}\xspace measurement marks a first step into unexplored territory~\cite{Durieux:2017nps}. It is of the utmost importance to confirm the \mbox{LHCb}\xspace result with higher statistical significance, analysing the larger data samples now available from the second run of the Large Hadron Collider at CERN. Furthermore, numerous other decays of beauty baryons should be studied, to establish a diverse set of observations, thereby improving our understanding of {\ensuremath{C\!P}}\xspace violation. Diversity of results comes in two flavours, namely from the study of a variety of different systems, and via measurements of several physical quantities sensitive to {\ensuremath{C\!P}}\xspace violation. {\ensuremath{C\!P}}\xspace violation has far-reaching importance, being a crucial ingredient for the generation of the observed matter-antimatter asymmetry in the Universe. Unfortunately, our current theory and models can only explain a matter-antimatter asymmetry at least ten orders of magnitude smaller than the one observed. Additional sources of {\ensuremath{C\!P}}\xspace violation, yet to be discovered, are likely to explain the discrepancy. New sources of {\ensuremath{C\!P}}\xspace violation may be seen again in the quark sector or in a different sector of the theory. 
Since the visible Universe is made of spin-carrying particles such as the proton and the neutron, it seems natural to study purely baryonic decay processes, \mbox{\itshape i.e.}\xspace decay processes involving only spin-carrying particles. Any {\ensuremath{C\!P}}\xspace violating effects may have a more direct correspondence to the long-standing puzzle of the matter-antimatter asymmetry. These yet unexplored elementary processes may hold key information, in much the same way that the study of {\ensuremath{C\!P}}\xspace violation with $B$ mesons provided a more comprehensive understanding of {\ensuremath{C\!P}}\xspace violation once it was established in the decay of neutral kaons. Purely baryonic decay processes can exhibit a rich spin structure and provide complementary information to that obtained so far with mesonic decays or final states. For example, decays of baryons with spin of 1/2 or 3/2 can be used to construct time-reversal violating observables, which provide further tests of {\ensuremath{C\!P}}\xspace violation. We discuss in this letter the study of purely baryonic decay processes. For each beauty baryon we present the most promising decay mode to look for, taking into account experimental constraints. Theoretical predictions are provided for some decay branching fractions and, in some cases, for the {\ensuremath{C\!P}}\xspace violating asymmetries. \section*{Results} \label{sec:results} Elementary decay processes exclusively involving baryons are only kinematically allowed with beauty baryons. These purely baryonic decays require at least three final-state particles in order to fulfil the empirical law of baryon number conservation~\cite{Geng:2016drz}. The ``lowest-ground'' process is \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace {\ensuremath{\Pn}}\xspace}}{}, discussed in Ref.~\cite{Geng:2016drz}. 
We here focus our attention on the final states that are easiest to reconstruct experimentally in full, bearing in mind that the \mbox{LHCb}\xspace collaboration is the only running experiment capable of performing the search for these processes. The lowest-ground beauty baryons of interest are the {\ensuremath{\Lz^0_\bquark}}\xspace, the isospin doublet {\ensuremath{\Xires^0_\bquark}}\xspace and {\ensuremath{\Xires^-_\bquark}}\xspace, and the {\ensuremath{\Omegares^-_\bquark}}\xspace. The isotriplet $\Sigma_b$ baryons decay strongly, hence this family is of little interest for the study of {\ensuremath{C\!P}}\xspace violation in weak decay processes. The decay \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} has a fully reconstructible final state. It does, however, involve the reconstruction of a long-lived particle, the {\ensuremath{\PLambda}}\xspace baryon. In \mbox{LHCb}\xspace, long-lived particles are reconstructed with lower efficiencies than single charged hadrons. Typically, an order of magnitude in selection efficiency is lost due to the presence of any single fully reconstructible long-lived particle in the final state, such as a {\ensuremath{\PLambda}}\xspace or {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace, compared to the efficiency of reconstructing a charged hadron. Still, the ${\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace$ final state seems the best way to observe for the first time a purely baryonic decay of the {\ensuremath{\Lz^0_\bquark}}\xspace baryon. The {\ensuremath{\Xires^0_\bquark}}\xspace baryon can also decay to the ${\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace$ final state. This decay is the most promising mode in which to observe a purely baryonic decay of the {\ensuremath{\Xires^0_\bquark}}\xspace baryon. 
Indeed, moving up in complexity of reconstruction, both {\ensuremath{\Lz^0_\bquark}}\xspace and {\ensuremath{\Xires^0_\bquark}}\xspace can decay to the ${\ensuremath{\PLambda}}\xspace {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace {\ensuremath{\PLambda}}\xspace$ final state. This final state is unique in its own right, and in particular provides a natural ground in which to study the relatively poorly known decay modes of the charmonium ${\ensuremath{\Pc}}\xspace{\ensuremath{\overline \cquark}}\xspace$ resonances to the ${\ensuremath{\PLambda}}\xspace {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace$ final state. The reconstruction efficiency for three long-lived {\ensuremath{\PLambda}}\xspace baryons will unfortunately be very low, which puts the decay modes \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace {\ensuremath{\PLambda}}\xspace}}{} and \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace {\ensuremath{\PLambda}}\xspace}}{} out of reach until the \mbox{LHCb}\xspace experiment is upgraded in the 2020s. The search for purely baryonic decays of the {\ensuremath{\Xires^-_\bquark}}\xspace baryon is most easily performed by looking for the mode \texorpdfstring{\decay{{\ensuremath{\Xires^-_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\PLambda}}\xspace {\ensuremath{\overline \proton}}\xspace}}{}. The reconstruction efficiency will be low owing to the need to reconstruct two long-lived {\ensuremath{\PLambda}}\xspace baryons. The observation of a purely baryonic decay of the {\ensuremath{\Omegares^-_\bquark}}\xspace will require large samples yet to be collected by an upgraded \mbox{LHCb}\xspace experiment, and is presently out of reach. 
On the one hand, the production rate of {\ensuremath{\Omegares^-_\bquark}}\xspace is rather small compared to the production of {\ensuremath{\Lz^0_\bquark}}\xspace baryons. On the other hand, the simplest decay mode is \texorpdfstring{\decay{{\ensuremath{\Omegares^-_\bquark}}\xspace}{{\ensuremath{\Xires^-}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{}, which involves the cascade {\ensuremath{\Xires^-}}\xspace in the final state and hence the decay chain of two long-lived particles, as the {\ensuremath{\Xires^-}}\xspace baryon is typically reconstructed in the ${\ensuremath{\PLambda}}\xspace {\ensuremath{\pion^-}}\xspace$ final state. The resulting efficiency in the reconstruction of the full decay chain is very low. \subsection*{Branching fractions} As mentioned above, purely baryonic decay processes were first considered in Ref.~\cite{Geng:2016drz}, which focused attention on the simplest decay involving the lightest possible baryons, \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace {\ensuremath{\Pn}}\xspace}}{}. Its branching fraction is predicted to be ${\cal B}(\texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace {\ensuremath{\Pn}}\xspace}}{}) = (2.0^{+0.3}_{-0.2})\times 10^{-6}$~\cite{Geng:2016drz}. The decays \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} and \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} should be the easiest purely baryonic decay processes to observe experimentally. 
Their branching fractions are predicted to be $( 3.2 ^{+0.8}_{-0.3} \pm 0.4 \pm 0.7) \times 10^{-6}$ and $(1.4\pm 0.1\pm 0.1\pm 0.4)\times 10^{-7}$, where the uncertainties arise from non-factorisable effects, CKM matrix elements, and hadronic form factors, respectively. \subsection*{{\ensuremath{C\!P}}\xspace asymmetries} The study of triple-product correlations (TPCs) in three-body decays is hampered by the fact that the definitions of these TPCs involve the spin of one of the final-state particles. Such an issue does not arise in four-body decays, where TPCs depend only on the momenta of the final-state particles. The issue can nevertheless be overcome in specific cases, when dealing with so-called self-tagging decay modes. The decay mode \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} is such a decay. The charge of the proton from the \texorpdfstring{\decay{{\ensuremath{\PLambda}}\xspace}{{\ensuremath{\Pp}}\xspace {\ensuremath{\pion^-}}\xspace}}{} decay automatically determines whether the decay is that of the {\ensuremath{\Lz^0_\bquark}}\xspace baryon or its {\ensuremath{\Lbar{}^0_\bquark}}\xspace antiparticle. The direct {\ensuremath{C\!P}}\xspace asymmetry of the \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay is predicted to be $( 3.4 \pm 0.1 \pm 0.1 \pm 1.0 ) \%$. Similarly, the direct {\ensuremath{C\!P}}\xspace asymmetry of the \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay is predicted to be $(-13.0\pm 0.5\pm 1.5\pm 1.1)\%$. 
Here, the first uncertainties account for non-factorisable effects, the second reflect the experimental knowledge of the CKM matrix elements, and the third correspond to those on the hadronic form factors (see Methods for a discussion of the latter). The relatively large direct {\ensuremath{C\!P}}\xspace asymmetry predicted for the \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay mode makes it especially interesting from an experimental point of view. \subsection*{Baryon-antibaryon enhancement near threshold} Many $B$-meson decays to baryonic final states present a characteristic enhancement at (production) threshold in the baryon-antibaryon mass spectrum of multi-body decays~\cite{Hou:2000bz,Bevan:2014iga,LHCb-PAPER-2017-012,LHCb-PAPER-2017-005}, a fact that is still not fully understood. Such enhancements are not observed in mesonic final states. This same baryon-antibaryon enhancement near threshold is expected to be present in the decays of $b$ baryons too. It awaits experimental confirmation. Because of the participating Feynman diagrams, the \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} and \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay modes are expected not to exhibit a threshold enhancement in the same baryon-antibaryon system. 
A threshold enhancement in ${\ensuremath{\PLambda}}\xspace {\ensuremath{\overline \proton}}\xspace$ is expected for the \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay whereas it is the invariant mass of the ${\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace$ system that is expected to peak near threshold in the case of the \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay. The expected dibaryon invariant mass spectra are displayed in Figure~\ref{mBB}. These are clear signatures of the underlying QCD phenomenological framework used. \begin{figure}[t!] \centering \includegraphics[width=5.30in]{Fig1.eps} \caption{The dibaryon invariant mass spectra for the (a) \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} and (b) \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decays.}\label{mBB} \end{figure} \section*{Discussion} \label{sec:discussion} The study of hadronic decays of $b$ hadrons has proved to be a rich playground for a better understanding of {\ensuremath{C\!P}}\xspace violation and for searches of manifestations of physics beyond the Standard Model. 
The study of charmless decays, in particular, has provided a wealth of crucial results and milestones in flavour physics, notably the discovery of direct {\ensuremath{C\!P}}\xspace violation in the ${\ensuremath{\B^0}}\xspace \to {\ensuremath{\kaon^+}}\xspace {\ensuremath{\pion^-}}\xspace$ decay~\cite{Aubert:2004qm,Chao:2004jy}, the first observation of {\ensuremath{C\!P}}\xspace violation in the {\ensuremath{\B^0_\squark}}\xspace-meson system~\cite{LHCb-PAPER-2013-018}, and the first evidence for {\ensuremath{C\!P}}\xspace violation in the decay of a baryon, \mbox{\itshape i.e.}\xspace, in the decay of a spin-carrying particle~\cite{LHCb-PAPER-2016-030}. Charmless decays, namely decays to final states with no charm flavour content, typically involve charged ($b \to u$) and neutral ($b \to s$ and $b \to d$) flavour transitions, which are suppressed with respect to the favoured $b \to c$ transition to open-charm final states. In the years to come, the \mbox{LHCb}\xspace collaboration will remain the only running experiment capable of studying beauty baryons. We urge the collaboration to expand its presently ongoing programme of studies of $b$-hadron decays and to investigate purely baryonic decays, which, for the first time, would allow a study of {\ensuremath{C\!P}}\xspace violation in decay processes involving only spin-carrying particles. These yet unexplored elementary processes may hold key information towards a better understanding of the {\ensuremath{C\!P}}\xspace violating phenomena that are needed in order to explain the observed matter-antimatter asymmetry of the Universe. 
The decay modes \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} and \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} are the most promising candidates for the first observation of decay processes exclusively involving spin-carrying particles. Regarding {\ensuremath{C\!P}}\xspace violation, although the current sensitivity of the \mbox{LHCb}\xspace experiment is unlikely to reach the level predicted in the Standard Model, it is still worthwhile to explore the {\ensuremath{C\!P}}\xspace violating asymmetries of fully reconstructed baryonic decays, as they could be large in models of physics beyond the Standard Model. \section*{Methods} \label{sec:methods} Figure~\ref{diagrams} displays the dominant Feynman diagrams describing the purely baryonic decays ${\bf B_b}\to {\bf B_1\bar B_2 B_3}$ (${\bf B}$ denotes a baryon), which proceed with a $\bf B_b\to B_3$ transition and a $\bf B_1\bar B_2$-pair production. The decay mode \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} is taken as an example. Similar diagrams can be drawn for the \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decay. According to Figure~\ref{diagrams}, the typical amplitude combines two matrix elements: ${\cal A}({\bf B_b}\to {\bf B_1\bar B_2 B_3})\sim \langle {\bf B_1\bar B_2}|(\bar q_1 q_2)|0\rangle \langle {\bf B_3}|(\bar q_3 b)| {\bf B_b}\rangle$, where $(\bar q_1 q_2)(\bar q_3 b)$ are (axial)vector or (pseudo)scalar currents from the quark-level effective Hamiltonian for charmless $b\to q_1\bar q_2 q_3$ transitions. 
In the amplitude, the two matrix elements can be expressed in terms of the timelike baryonic form factors and the ${\bf B_b\to B_3}$ transition form factors~\cite{Geng:2016fdw,Hsiao:2017nga,Hsiao:2018umx}, together with the parameters for factorisable effects, decomposed into effective Wilson coefficients~\cite{Ali:1998eb}, the Fermi constant, and the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements~\cite{CKM1,CKM2}. The extractions of the form factors with their uncertainties can be found in Refs.~\cite{Geng:2016fdw,Hsiao:2017nga,Hsiao:2018umx}. The form factors have been used to calculate ${\cal B}(\bar B^0_s\to \Lambda \bar p K^+,\bar \Lambda p K^-)$~\cite{Geng:2016fdw}, whose value is in agreement with the measurement published by the LHCb collaboration~\cite{LHCb-PAPER-2017-012}. Likewise, the prediction of the branching fraction ${\cal B}(\bar B^0\to \Lambda\bar p K^+ K^-)$ has been validated by the recent measurement by the \mbox{Belle}\xspace collaboration~\cite{Lu:2018qbw}. \begin{figure}[tbhp] \centering \includegraphics[width=5.0in]{Fig2.eps} \caption{Feynman diagrams describing the purely baryonic decay \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{}.} \label{diagrams} \end{figure} Following the techniques described in previous work~\cite{Geng:2016drz}, the branching fractions for the three-body purely baryonic decays discussed in this letter are predicted to be in the range $10^{-7}-10^{-6}$, specifically ${\cal B}(\texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{}) = (3.2 ^{+0.8}_{-0.3} \pm 0.4 \pm 0.7) \times 10^{-6}$ and ${\cal B}(\texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{}) =(1.4\pm 0.1\pm 0.1\pm 0.4)\times 
10^{-7}$, where the first uncertainties account for non-factorisable effects, the second reflect the experimental knowledge of the CKM matrix elements, and the third arise from those on the form factors~\cite{Geng:2016fdw,Hsiao:2017nga,Hsiao:2018umx}. The direct {\ensuremath{C\!P}}\xspace violating rate ($\Gamma$) asymmetry can be defined by \begin{eqnarray}\label{acp1} {\cal A}_{CP}=\frac{ \Gamma( {\bf B}_h\to{\bf B}_{l_1} \bar {\bf B}_{l_2} {\bf B}_{l_3}) -\Gamma( {\bf\bar B}_h\to{\bf \bar B}_{l_1} {\bf B}_{l_2} {\bf\bar B}_{l_3})} {\Gamma( {\bf B}_h\to{\bf B}_{l_1} \bar {\bf B}_{l_2} {\bf B}_{l_3}) +\Gamma( {\bf\bar B}_h\to{\bf \bar B}_{l_1} {\bf B}_{l_2} {\bf\bar B}_{l_3})}\;. \end{eqnarray} If both weak ($\gamma$) and strong ($\delta$) phases are non-vanishing, one has that ${\cal A}_{CP}\propto \sin\gamma\sin\delta$. The direct {\ensuremath{C\!P}}\xspace asymmetries of \texorpdfstring{\decay{{\ensuremath{\Lz^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} and \texorpdfstring{\decay{{\ensuremath{\Xires^0_\bquark}}\xspace}{{\ensuremath{\PLambda}}\xspace {\ensuremath{\Pp}}\xspace {\ensuremath{\overline \proton}}\xspace}}{} decays are predicted to be $( 3.4 \pm 0.1 \pm 0.1 \pm 1.0 ) \%$ and $(-13.0\pm 0.5\pm 1.5\pm 1.1)\%$, respectively, with the uncertainties as described above. 
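The structure of Eq.~(\ref{acp1}) and the $\sin\gamma\sin\delta$ dependence can be checked numerically. The sketch below models the decay amplitude as two interfering terms with a strong phase $\delta$ and a weak phase $\gamma$; the amplitude magnitudes are hypothetical illustration values, not derived from the form factors above.

```python
import cmath

def a_cp(rate, rate_bar):
    """Direct CP rate asymmetry: (Gamma - Gamma_bar) / (Gamma + Gamma_bar)."""
    return (rate - rate_bar) / (rate + rate_bar)

def rates(a1, a2, delta, gamma):
    """Rates for two interfering amplitudes A = a1 + a2*exp(i*(delta + gamma)).
    CP conjugation flips the sign of the weak phase gamma only."""
    amp = a1 + a2 * cmath.exp(1j * (delta + gamma))
    amp_bar = a1 + a2 * cmath.exp(1j * (delta - gamma))
    return abs(amp) ** 2, abs(amp_bar) ** 2

# The asymmetry vanishes when either phase vanishes ...
for delta, gamma in [(0.0, 1.0), (1.0, 0.0)]:
    assert abs(a_cp(*rates(1.0, 0.3, delta, gamma))) < 1e-12

# ... and is nonzero when both phases are present.
print(a_cp(*rates(1.0, 0.3, 0.5, 0.5)))
```

Analytically, this toy model gives ${\cal A}_{CP} = -2 a_1 a_2 \sin\delta\sin\gamma / (a_1^2 + a_2^2 + 2 a_1 a_2 \cos\delta\cos\gamma)$, which exhibits the quoted proportionality.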
\addcontentsline{toc}{section}{References} \setboolean{inbibliography}{true} \ifx\mcitethebibliography\mciteundefinedmacro \PackageError{LHCb.bst}{mciteplus.sty has not been loaded} {This bibstyle requires the use of the mciteplus package.}\fi \providecommand{\href}[2]{#2} \begin{mcitethebibliography}{10} \mciteSetBstSublistMode{n} \mciteSetBstMaxWidthForm{subitem}{\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space} {\relax}{\relax} \bibitem{LHCb-PAPER-2016-030} LHCb collaboration, R.~Aaij {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{Measurement of matter-antimatter differences in beauty baryon decays}}, }{}\href{https://doi.org/10.1038/nphys4021}{Nat.\ Phys.\ \textbf{13}, 391 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Durieux:2017nps} G.~Durieux and Y.~Grossman, \ifthenelse{\boolean{articletitles}}{\emph{{CP violation: Another piece of the puzzle}}, }{}\href{https://doi.org/10.1038/nphys4068}{Nat.\ Phys.\ \textbf{13}, 322 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Geng:2016drz} C.~Q. Geng, Y.~K. Hsiao, and E.~Rodrigues, \ifthenelse{\boolean{articletitles}}{\emph{{Exploring the simplest purely baryonic decay processes}}, }{}\href{https://doi.org/10.1103/PhysRevD.94.014027}{Phys.\ Rev.\ \textbf{D94}, 014027 (2016)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Hou:2000bz} W.-S. 
Hou and A.~Soni, \ifthenelse{\boolean{articletitles}}{\emph{{Pathways to rare baryonic B decays}}, }{}\href{https://doi.org/10.1103/PhysRevLett.86.4247}{Phys.\ Rev.\ Lett.\ \textbf{86}, 4247 (2001)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Bevan:2014iga} \mbox{BaBar}\xspace and \mbox{Belle}\xspace collaborations, A.~J. Bevan {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{The physics of the B factories}}, }{}\href{https://doi.org/10.1140/epjc/s10052-014-3026-9}{Eur.\ Phys.\ J.\ \textbf{C74}, 3026 (2014)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{LHCb-PAPER-2017-012} LHCb collaboration, R.~Aaij {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{First observation of a baryonic $B^0_s$ decay}}, }{}\href{https://doi.org/10.1103/PhysRevLett.119.041802}{Phys.\ Rev.\ Lett.\ \textbf{119}, 041802 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{LHCb-PAPER-2017-005} LHCb collaboration, R.~Aaij {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{Observation of charmless baryonic decays $B^0_{(s)}\to{\ensuremath{\Pp}}\xspace{\ensuremath{\overline \proton}}\xspace h^+ h^{\prime -}$}}, }{}\href{https://doi.org/10.1103/PhysRevD.96.051103}{Phys.\ Rev.\ \textbf{D96}, 051103 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Aubert:2004qm} BaBar collaboration, B.~Aubert {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{Observation of direct CP violation in $B^0 \to K^+ \pi^-$ decays}}, }{}\href{https://doi.org/10.1103/PhysRevLett.93.131801}{Phys.\ Rev.\ Lett.\ \textbf{93}, 131801 
(2004)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Chao:2004jy} Belle collaboration, Y.~Chao {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{Improved measurements of partial rate asymmetry in $B \to h h$ decays}}, }{}\href{https://doi.org/10.1103/PhysRevD.71.031502}{Phys.\ Rev.\ \textbf{D71}, 031502 (2005)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{LHCb-PAPER-2013-018} LHCb collaboration, R.~Aaij {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{First observation of ${\ensuremath{C\!P}}\xspace$ violation in the decays of ${\ensuremath{\B^0_\squark}}\xspace$ mesons}}, }{}\href{https://doi.org/10.1103/PhysRevLett.110.221601}{Phys.\ Rev.\ Lett.\ \textbf{110}, 221601 (2013)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Geng:2016fdw} C.~Q. Geng, Y.~K. Hsiao, and E.~Rodrigues, \ifthenelse{\boolean{articletitles}}{\emph{{Three-body charmless baryonic $\overline B^0_s$ decays}}, }{}\href{https://doi.org/10.1016/j.physletb.2017.02.001}{Phys.\ Lett.\ \textbf{B767}, 205 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Hsiao:2017nga} Y.~K. Hsiao and C.~Q. Geng, \ifthenelse{\boolean{articletitles}}{\emph{{Four-body baryonic decays of $B\to p \bar{p} \pi^+\pi^-(\pi^+K^-)$ and $\Lambda \bar{p} \pi^+\pi^-(K^+K^-)$}}, }{}\href{https://doi.org/10.1016/j.physletb.2017.04.067}{Phys.\ Lett.\ \textbf{B770}, 348 (2017)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Hsiao:2018umx} Y.~K. 
Hsiao, C.~Q. Geng, Y.~Yu, and H.~J. Zhao, \ifthenelse{\boolean{articletitles}}{\emph{{Study of $B^-\to \Lambda\bar p\eta^{(')}$ and $\bar B^0_s\to \Lambda\bar\Lambda\eta^{(')}$ decays}}, }{}\href{http://arxiv.org/abs/1803.05161}{{\normalfont\ttfamily arXiv:1803.05161}}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Ali:1998eb} A.~Ali, G.~Kramer, and C.-D. Lu, \ifthenelse{\boolean{articletitles}}{\emph{{Experimental tests of factorization in charmless nonleptonic two-body $B$ decays}}, }{}\href{https://doi.org/10.1103/PhysRevD.58.094009}{Phys.\ Rev.\ \textbf{D58}, 094009 (1998)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{CKM1} N.~Cabibbo, \ifthenelse{\boolean{articletitles}}{\emph{{Unitary symmetry and leptonic decays}}, }{}\href{https://doi.org/10.1103/PhysRevLett.10.531}{Phys.\ Rev.\ Lett.\ \textbf{10}, 531 (1963)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{CKM2} M.~Kobayashi and T.~Maskawa, \ifthenelse{\boolean{articletitles}}{\emph{{{\ensuremath{C\!P}}\xspace Violation in the renormalizable theory of weak interaction}}, }{}\href{https://doi.org/10.1143/PTP.49.652}{Prog.\ Theor.\ Phys.\ \textbf{49}, 652 (1973)}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \bibitem{Lu:2018qbw} P.-C. 
Lu {\em et~al.}, \ifthenelse{\boolean{articletitles}}{\emph{{Observation of $B^{+} \rightarrow p\bar{\Lambda} K^+ K^-$ and $B^{+} \rightarrow \bar{p}\Lambda K^+ K^+$}}, }{}\href{http://arxiv.org/abs/1807.10503}{{\normalfont\ttfamily arXiv:1807.10503}}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \EndOfBibitem \end{mcitethebibliography} \section*{Acknowledgments} The work of C.Q.~Geng and Y.K.~Hsiao was supported in part by the National Center for Theoretical Sciences, MoST (MoST-104-2112-M-007-003-MY3), and the National Science Foundation of China (11675030). The work of E. Rodrigues was supported in part by the United States National Science Foundation award ACI-1450319. E. R. wishes to thank the National Center for Theoretical Sciences at Hsinchu, Taiwan, for its warm hospitality. \section*{Author Contributions} C.Q.G. and Y.K.H. performed the calculations and produced the figures. All authors interpreted and analysed the results. E.R. wrote the manuscript. All authors reviewed the manuscript. \section*{Additional Information} {\bf Competing Interests:} The authors declare no competing interests. \end{document}
\section{Introduction} This work presents a new machine learning-based approach applied to high-performance computing (HPC). In particular, we seek to train a supervised inductive learning system to predict when jobs submitted to a compute cluster may be subject to failure due to insufficient requested resources. Various open-source software packages can help administrators to manage HPC systems, such as the Sun Grid Engine (SGE) [1] and Slurm [2]. These provide real-time monitoring platforms allowing both users and administrators to check job statuses. However, there does not yet exist software that can fully automate the allocation of HPC resources or anticipate resource needs reliably by generalizing over historical data, such as determining the number of processor cores and the amount of memory needed as a function of requests and outcomes on previous job submissions. Machine learning (ML) applied to decision support in estimating resource needs for an HPC task, or to predicting resource usage, is the subject of several previous studies [3]--[7]. Our continuing work is based on a new predictive test bed developed using {\it Beocat}, the primary HPC platform at Kansas State University, and a data set compiled by the Department of Computer Science from the job submission and monitoring logs of {\it Beocat}. \\ \indent Our purpose in this paper is to apply supervised inductive learning over historical log files from large-scale compute clusters that contain a mixture of CPUs and GPUs. We seek initially to develop a data model containing ground attributes of jobs that can help users or administrators to classify jobs into equivalence classes by likelihood of failure, based on aggregate demographic and historical profile information regarding the user who submitted each job. The quantities to be estimated include (1) the failure probability of a job at submission time and (2) the expected resource utilization level given submission history. 
We address the predictive task with the prospective goal of selecting helpful, personalized runtime or postmortem feedback to help the user make better cost-benefit tradeoffs. Our preliminary results show that the probability of failed jobs is associated with information freely available at job submission time and may thus be usable by a learning system for user modeling that gives personalized feedback to users. \\ \indent {\it Beocat} is the primary HPC system of Kansas State University (KSU). When submitting jobs to the managed queues of {\it Beocat}, users need to specify their estimated running time and memory. The {\it Beocat} system then schedules jobs based on these job requirements, the availability of system resources, and job-dependent factors such as static properties of the executable. However, users cannot in general estimate the usage of their jobs with high accuracy, as it is challenging even for trained and experienced users to estimate how much time and memory a job requires. A central hypothesis of this work, based upon observation of users over the 20-year operating life of {\it Beocat} to date, is that estimation accuracy is correlated with user experience in particular use cases, such as the type of HPC codes and data they are working with. Underestimation of resource requirements risks wasting some of these resources: if a user submits a job that will fail during execution because the user underestimated its resource needs at submission, the job will occupy resources from submission until the queue management software of the compute cluster identifies it as having failed. This situation affects not only the available resources but also other jobs in the cluster's queues. This is a pervasive issue in HPC, not specific to {\it Beocat}, and cannot be solved solely by proactive HPC management. We therefore address it by using supervised learning to build a model from historical logs.
This can help save resources in HPC systems and also yield {\bf recommendations} to users, such as the estimated CPU/RAM usage of a job, allowing them to submit more robust job specifications rather than waiting until jobs fail. \\ \indent In the remainder of this paper, we lay out the machine learning task and approach for {\it Beocat}, surveying algorithms and describing the development of an experimental test bed. \section{Machine Learning and Related Work} As related above, the core focus and novel contribution of this work is the application of supervised machine learning to resource allocation in HPC systems, particularly a predictive task defined on compute clusters. This remains an open problem, as there is as yet no single machine learning representation and algorithm that can reliably help users to predict job memory requirements in an HPC system, as has been noted by researchers at IBM [8]. However, it is still worthwhile to seek machine learning technologies that could help administrators better anticipate resource needs and help users make more cost-effective allocation choices in an HPC system. Different HPC systems have different environments; our goal is to improve resource allocation in our HPC system. Our objectives go beyond simple CPU and memory usage prediction towards data mining and decision support. [9] There are thus two machine learning tasks on which we are focused: {\bf (1) regression} to predict usage of CPU and memory, and {\bf (2) classification} over job submission instances to predict job failure after submission. We also train different models with different machine learning algorithms; the algorithms used in our experiments are described below. \subsection{Test bed for machine learning and decision support} A motivating goal of this work is to develop an open test bed for machine learning and decision support using {\it Beocat} data.
{\it Beocat} is at present the largest HPC system in the state of Kansas. It is also the central component of a regional compute cluster that provides a platform for academic HPC research, including many interdisciplinary and transdisciplinary users. This is significant because user experience can vary greatly by familiarity with both computational methods in their domain of application and the HPC platform. Examples of application domain-specific computational methods include data integration and modeling, data transformations needed by the users on their own raw data, and algorithms used to solve their specific problems. Meanwhile, the HPC platform depends on design choices such as programming languages, scientific computing libraries, the parallel computing architecture (e.g., a parallel programming library versus MapReduce or ad hoc task parallelism), and load balancing and process migration methods, if any. {\bf Precursors: feature sets and ancillary estimation targets.} Attendant upon the development of a test bed are specific technical objectives in the form of actionable decision support tasks such as: ``Approximately what will it cost to migrate this job to a compute cloud such as Amazon Web Services, Azure, or Google Cloud, what will it cost to run on {\it Beocat}, and what are the costs of various hybrid solutions?'' and ``Which of the following is expected to be most cost-effective based on these estimates and historical data?'' This in turn requires data preparation steps including integration and transformation. {\bf Prediction targets and ground truth.} Data transformations on these logs allow us to define a basic regression task of predicting the CPU and memory usage of jobs, and the ancillary task of determining whether a job submission will fail due to this usage exceeding the allocation based on the resources requested by a user.
The ground truth for these tasks comes from historical data collected from {\it Beocat} over several years, which presents an open question of how to validate this ground truth across multiple HPC systems. This is a challenge beyond the scope of the present work but an important reason for having open access data: so that the potential for cross-system transfer can be assessed as a criterion, and cross-domain transfer learning methods can be evaluated. \subsection{Defining learning tasks: regression vs. classification} As explained above, this work concerns {\it learning to predict} for two questions: the numerical question of estimating the quantity of resources used (CPU cycles and RAM), and the yes-no question of whether a job will be killed. The first question, CPU/RAM estimation, is by definition a discrete estimation task. However, in its most general form, the integer precision needed to obtain multiple decision support estimates such as those discussed in the previous section, and to then generate actionable recommendations for the available options, makes this in essence a continuous estimation task (i.e., regression). The second question is binary classification (i.e., concept learning, with the concept being ``job killed due to resource underestimate''). Beyond the single-job classification task, we are interested in the formulation of classification tasks for users - that is, assessing their level of expertise and experience. These may be independent factors as documented in the description of the test bed. \subsection{Ground Features and Relevance} A key question is that of {\it relevance determination} - how to deal with increasing numbers of irrelevant features. [9]--[10] In this data set, ground features are primitive attributes of the relational schema (and simple observable or computable variables).
For our HPC predictive analytics task, this is initially a less salient concern, because the naive version of the task uses only {\bf per-job} features or predominantly such features, but it becomes increasingly relevant as {\bf per-user} features are introduced. \indent {\bf Linear regression} is a simple but powerful algorithm that we selected as a baseline for our prediction task. We chose to use multiple linear regression to simultaneously predict both CPU and memory usage for an HPC system. The dependent variables thus include CPU and memory usage, while the independent variables are the features presented in Table I. We seek a linear equation such as $ y = a_0 + a_1x_1 + a_2x_2 + \cdots + a_nx_n $ that fits the data set with minimum residual at each data point. There are various loss functions (e.g., ordinary least squares, ridge) to measure the fit of linear regression models; by minimizing a loss function, we optimize the model. We use the following regression and classification models in the experiment. \indent {\bf Ordinary Least Squares}: this model fits a target function by minimizing the sum of squares of the differences between the observations and the values predicted by a linear approximation. \indent {\bf LassoLarsIC}: this linear model is trained with an L1 regularizer, which penalizes the least-squares objective by the sum of the absolute values of the coefficients. Such models are favored when many weights are zero (i.e., many features are irrelevant). This model can also be selected using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC).
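As a minimal sketch of this baseline, the least-squares fit of such a linear equation can be computed with NumPy; the features and coefficients below are illustrative stand-ins, not values drawn from the {\it Beocat} data:

```python
import numpy as np

# Toy stand-ins for two submission-time features (e.g., requested time
# and requested memory) and an observed usage target -- not Beocat data.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 2))
y = 3.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1]  # y = a0 + a1*x1 + a2*x2

# Ordinary least squares: minimize the sum of squared residuals.
A = np.column_stack([np.ones(len(X)), X])    # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(coef, 3))  # recovers [3.0, 1.5, -0.5]
```

Because the toy target is noiseless, the solver recovers the generating coefficients exactly; on real log data the fit would only approximate them.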
\indent {\bf ElasticNetCV}: this linear model is trained with a weighted combination of the L1 and L2 regularizers; the mixing parameter that determines the balance between them (L1, L2, or a combination) is selected by cross-validation. \indent {\bf Ridge Regression}: this linear model is trained with an L2 regularizer, which mitigates overfitting. Compared with Ordinary Least Squares, Ridge Regression is more stable because it introduces a penalty that reduces the size of the coefficients and avoids the overfitting problems that Ordinary Least Squares may have. \indent {\bf CART}: Classification and Regression Trees (CART) is a non-linear model that we also selected for our experiment. This model includes two types of decision trees: classification trees and regression trees. Classification trees have categorical target variables and are used to label the class of instances. Regression trees have continuous target variables and are used for prediction. \indent {\bf Logistic Regression}: this model is a predictive analysis technique employed when the dependent variable is binary. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables. \indent {\bf Gaussian Naive Bayes}: this model is Naive Bayes with Gaussian class-conditional likelihoods. Gaussian Naive Bayes extends Naive Bayes to real-valued attributes by estimating the mean and standard deviation of each attribute from the data set, and computes class probabilities for input values using these stored means and standard deviations. \indent {\bf Random Forest Classification}: this model works by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees.
It is a supervised learning algorithm based on an ensemble of decision trees as the learning representation. \section{Experiment Design} In this section, we describe the acquisition and preparation of training data for machine learning, the principles governing our feature extraction and selection process, and the design choices for learning algorithms and their parameters. \subsection{Data Preparation and Feature Analysis} Our experiment is based on the SGE log file recorded and used by the Beocat system management software. The raw data set covers an 8-year history of jobs run on Beocat from 2009 to 2017. It contains around twenty million instances, with forty-five attributes per instance. \\ \indent The purpose of the machine learning experiment is to train a regression model to predict CPU and memory usage, and to determine whether submitted jobs will fail, based on requested resources and other {\em submission-time} attributes, i.e., those known after a job is submitted but before it is executed.
At that time, a monitoring system can only provide very limited information: basic demographic information about the submitting user, such as their name, institution, and department, plus information available to the job scheduling system such as resource requests, particularly the estimated job running time and the expected maximum memory needed during runtime. However, the raw data set does not include additional forensic user information, such as the user's job title (faculty, staff, student, or other), degree program for a student (graduate or undergraduate), home department, etc. We obtain this forensic information about users by using public services such as the Lightweight Directory Access Protocol (LDAP) [11] command on the HPC system, and use it to augment the raw data set. \\ \indent Because only limited features can be used at job submission time, we also consider the behavior of users, by analyzing features for user modeling [12]--[13] and using them for resource usage prediction by regression and job outcome prediction by classification. \subsection{Feature Construction for User Modeling} A driving hypothesis of this work is that augmenting {\em per-job} features with {\em per-user} features can provide a {\bf persistent context of user expertise} that can improve the accuracy, precision, and recall of learned models. The rationale is that the skills and expertise level of a user are variables that: (1) change at a time scale much greater than job-by-job; (2) are based on low-level submission-time attributes; and (3) can be estimated more accurately as a user submits more jobs. These are tied to mid-range objectives of this research, namely, formulating testable statistical hypotheses regarding observable variables of the user model.
For example, this work is motivated by the conjectures that: (1) the expertise level expressed as a job failure rate can be predicted over time; (2) per-user features can incrementally improve the precision, recall, and accuracy of the predictive models for resource usage and job failure achieved by training with submission-time per-job features only; and (3) the variance of per-user features tends to be reduced by collecting more historical data. \\ \indent Support for these hypotheses would indicate that individual user statistics known prior to job submission time, such as the job failure rate and margin of underestimates in resource requests for a user, can be cumulatively estimated. This would also pose further interesting questions of whether social models of users, from rudimentary "expertise clusters" to detectable communities of users with similar training needs and error profiles, can be formed by unsupervised learning. A long-term goal of this research is to identify such communities by applying network analysis (link mining or graph mining) algorithms to linked data such as property graphs of users and their jobs. [14]--[17] \\ \indent The primitive per-user attributes that are computed are simply average usage statistics for CPU and memory across all jobs submitted by a user. We begin with these ground attributes because the defined performance element for this research is based on the initial machine learning task of training regression and classification models to predict per-job usage of CPU and memory. 
Subsequent modeling objectives, such as forming a {\em causal explanation} for this prediction target, also depend on capturing potential influents of job failure such as user inexperience, and imputing a quantifiable and relevant experience level based on user self-efficacy and track record (i.e., per-user attributes as a time series).\\ \indent We chose the average CPU usage, average memory usage, average running time requested, and average memory requested for each user as our per-user behavioral features. The data transformations used to preprocess the raw data (consisting of one tuple per job) included aggregation (roll-up) across jobs. This groups across jobs, by user, computing the average value of CPU usage, memory usage, requested time, and requested memory from the raw data set, and projects the resulting columns: \begin{center} {\bf $ \mathcal{G}_{average(CPU, memory, reqTime, reqMem)}(User) $ } \end{center} We then re-joined the per-user aggregates (rolled-up values resulting from the above grouping operation across jobs) into the raw data set: \begin{center} {\bf $ New\:data := per-user\:aggregates \bowtie_{user} raw\:dataset $ } \end{center} This results in one row per job again, with the values from each per-user relation replicated across all jobs submitted by that user. \indent The raw data set stretches over 8 years and includes some invalid values before cleaning, such as records with missing values. We restricted the data set to the past three years to enforce a higher standard of data quality. This more recent historical data consists of 16 million instances and admits a schema of forty-five raw attributes. To produce a representative data set that can be used to train models using machine learning libraries on personal computers, we selected one million instances from this data set uniformly at random. Because the average number of submitted jobs per person is more than two thousand, we filtered out users who submitted fewer than two hundred jobs.
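The roll-up and re-join described above can be sketched in pandas; the miniature job log and its values here are hypothetical stand-ins, reusing the column names from our schema:

```python
import pandas as pd

# Miniature stand-in for the raw log: one row per job (toy values).
jobs = pd.DataFrame({
    "user":    ["alice", "alice", "bob", "bob", "bob"],
    "cpu":     [1.0, 3.0, 2.0, 4.0, 6.0],
    "reqTime": [10.0, 20.0, 5.0, 5.0, 20.0],
})

# Roll up across jobs, by user: average CPU usage and requested time.
per_user = (
    jobs.groupby("user")[["cpu", "reqTime"]]
        .mean()
        .rename(columns={"cpu": "aCPU", "reqTime": "aReqtime"})
        .reset_index()
)

# Re-join the per-user aggregates into the per-job table, replicating
# each user's aggregate values across all of that user's jobs.
augmented = jobs.merge(per_user, on="user")
print(augmented)
```

The result again has one row per job, with the per-user columns (aCPU, aReqtime) repeated on every job the user submitted, which is exactly the replication noted above.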
The schema was also reduced to eliminate some raw attributes that are known to be redundant, correlated, or ontologically irrelevant. This resulted in a data set with one million instances and 18 selected features, which is described in Table 1. \begin{table}[htb]\centering \caption{Features Selected}\label{t_sim} \begin{tabular}{@{}lcl@{}} \toprule Feature & Type & Description \\ \midrule failed& Numeric& Indicates whether the job failed (0/1)\\ cpu& Numeric& CPU usage (predicted variable)\\ maxvmem&Numeric& Memory usage (predicted variable)\\ id& Numeric &User id\\ reqMem& Numeric &Memory requested at job submission\\ reqTime &Numeric&Time requested at job submission\\ project&Numeric&Project assigned to the job\\ aCPU&Aggregate&Average CPU usage for user\\ aMaxmem&Aggregate&Average memory usage for user\\ aReqtime&Aggregate&Average running time requested by user\\ aReqmem&Aggregate&Average memory requested by user\\ p\_Faculty&Categorical&Role of user\\ p\_Graduate&Categorical&Role of user\\ p\_PostDoc&Categorical&Role of user\\ p\_ResearchAss&Categorical&Role of user\\ p\_Staff&Categorical&Role of user\\ p\_UnderGra&Categorical&Role of user\\ p\_Unknowing&Categorical&Role of user\\ \bottomrule \end{tabular} \end{table} \indent Entries of type ``Aggregate'' in Table 1 constitute all (and only) the per-user behavioral ground features. \subsection{Machine Learning Implementation} To handle the various types of data, we standardized the data set before training the prediction models. We used the {\tt scikit-learn} [18]--[19] Python library for our experiment implementation. \subsection*{Prediction Techniques} \begin{itemize} \item Linear Regression: we use the default parameters, such as normalize = False and n\_jobs = 1. \item LassoLarsIC Regression: we choose `aic' (Akaike information criterion) as the criterion parameter, which is used to assess goodness of fit.
\item ElasticNetCV Regression: the l1\_ratio parameter in this model controls which regularization (L1, L2, or a combination of them) is used when training the model. We choose 0.5 (the penalty is a combination of L1 and L2), which is the default value for this parameter, as other values did not improve the results on our data set. We choose the default value `None' for the alpha parameter. \item Ridge Regression: in this model, alpha represents the strength of the regularization; we choose 0.5, the default value for this parameter. The solver parameter is set to `auto', which indicates that the model chooses the solver automatically based on the type of data set (e.g., svd, cholesky). \item CART Regression: we use `mse' (mean squared error) as the criterion parameter, and `best' rather than `random' as the splitter parameter. We use `None' for the max\_depth parameter, because this option does not affect the model results on our data set. \end{itemize} \subsection*{Classification Techniques} \begin{itemize} \item Logistic Regression: we choose L2 regularization - `l2' - as the penalty parameter and `liblinear' as the solver parameter, which refers to the optimization method for finding the optimum of the objective function. \item CART Classification: we choose `gini' as the criterion parameter rather than `entropy'. `best' is used as the splitter parameter. The maximum tree depth parameter max\textunderscore depth is set to `None'. \item GaussianNB Classification: the only parameter, `priors', is left at its default value `None'. This parameter gives the prior probabilities of the classes. \item Random Forest Classification: `gini' is chosen as the criterion parameter rather than `entropy', and `None' is used for the max\textunderscore depth parameter.
\end{itemize} \section{Evaluation} In this section, we describe the evaluation strategy used for the experiment and present results consisting of quantitative metrics, followed by their qualitative interpretation in the context of the prediction tasks defined in the preceding experiment design section. \\ \indent The applicative phase generates predicted CPU and memory usage from new instances (unlabeled examples). The long-term performance of a user is taken into consideration via per-user features given as input to the regression model, such as the average CPU usage across all jobs submitted by a user, rather than only the jobs submitted by that user up to the present job. That is, per-user features are replicated rather than cumulatively calculated. We computed per-user features by aggregating (rolling up) values across tuples (one row per job) to obtain one row per user, with one new column per aggregation operator. In general, different aggregation operations such as {\tt AVERAGE}, {\tt MIN}, {\tt MAX}, and {\tt COUNT} can be used; in the experiments reported in this work, {\tt AVERAGE} is the only aggregate calculated. \\ \indent The changes in accuracy and F1 from the baseline data set to this new training data set are computed, to assess the incremental gain of adding per-user aggregate features. For predicting CPU usage, these aggregates consist of the average CPU usage (aCPU) and average requested time (aReqtime), computed across previously submitted jobs for each user, to produce the CPU training data. Similarly, for memory usage prediction, the average memory usage (aMaxmem) and average requested memory (aReqmem) across previously submitted jobs for each user are computed to produce the training data. We use the R-squared statistic, a common measure of goodness of fit, to quantitatively evaluate our regression models, and compare the R-squared values of the different models.
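For reference, the R-squared statistic can be computed directly from its definition; this sketch uses made-up vectors rather than actual model output:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))                       # perfect prediction: 1.0
print(r_squared(y, [2.5, 2.5, 2.5, 2.5]))    # mean-only prediction: 0.0
```

Note that a model worse than simply predicting the mean yields a negative R-squared, which is why values below zero can appear in regression results.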
Tables 2 and 3 describe and list results for different machine learning algorithms applied to the version of the data set with per-user aggregate features (marked ``True'' in the second column) and without them (marked ``False''). The legend for Tables 2 and 3 is as follows: \begin{itemize} \item LLIC: LassoLarsIC Regression \item ENCV: ElasticNetCV Regression \item Ridge: Ridge Regression \item CART: CART Regression \item Aggregate Features (Table 2): aCPU + aReqtime \item Aggregate Features (Table 3): aMaxmem + aReqMem \end{itemize} \begin{table}[htb]\centering \caption{CPU usage prediction with Regression}\label{t_sim} \begin{tabular}{@{}lccc@{}} \toprule Model & Per-User Features & R squared (\%) & Time (seconds) \\ \midrule LinearRegression & True & 15.86 & 0.448 \\ LinearRegression & False & 6.01 & 0.343 \\ LLIC & True & 15.85 & 0.445 \\ LLIC & False & 6.01 & 0.398 \\ ENCV & True & 14.99 & 6.679 \\ ENCV & False & 6.01 & 6.381 \\ Ridge & True & 15.86 & 0.224 \\ Ridge & False & 6.01 & 0.211 \\ CART & True & 27.86 & 3.090 \\ CART & False & 29.90 & 2.205 \\ \bottomrule \end{tabular} \end{table} \begin{table}[htb]\centering \caption{Memory usage prediction with Regression}\label{t_sim} \begin{tabular}{@{}lccc@{}} \toprule Model & Per-User Features & R squared (\%) & Time (seconds) \\ \midrule LinearRegression & True & 23.11 & 0.406 \\ LinearRegression & False & 16.70 & 0.348 \\ LLIC & True & 23.11 & 0.451 \\ LLIC & False & 16.70 & 0.410 \\ ENCV & True & 23.11 & 6.387 \\ ENCV & False & 16.70 & 7.273 \\ Ridge & True & 23.11 & 0.249 \\ Ridge & False & 16.70 & 0.202 \\ CART & True & -23.23 & 2.108 \\ CART & False & -27.12 & 1.472 \\ \bottomrule \end{tabular} \end{table} \indent For the classification task, per-user features are also taken into account.
In the classification task, we again re-joined the per-user, across-job numeric aggregates (in Table 1), which differs from the regression tasks. The accuracy score provided by {\tt scikit-learn} [18] was used to measure the classification models. The accuracy score represents the ratio of correctly classified samples to the total number of samples; a higher accuracy score implies a more accurate classification model. The F1 score, also known as the {\it F-measure}, is the harmonic mean of the precision and recall of a model, ranging from 0 (worst model) to 1 (best model); it weights precision and recall equally. Table 4 shows the results of various measurements for different classification models. \begin{table}[htb]\centering \caption{Classification results}\label{t_sim} \begin{tabular}{@{}lcccc@{}} \toprule Model & Per-User Features & Accuracy(\%) & F1(\%) & Time(seconds) \\ \midrule LR & True & 95.20 & 93 & 245.642 \\ LR & False & 95.22 & 93 & 71.680 \\ CART & True & 96.87 & 96 & 30.910 \\ CART & False & 96.87 & 96 & 19.642 \\ GNB & True & 92.43 & 92 & 8.465 \\ GNB & False & 92.00 & 92 & 6.107 \\ RF & True & 96.87 & 96 & 64.356 \\ RF & False & 98.86 & 96 & 59.743 \\ \bottomrule \end{tabular} \end{table} The legend for Table 4 is as follows: \begin{itemize} \item LR: Logistic Regression \item CART: CART Classification \item GNB: Gaussian Naive Bayes Classification \item RF: Random Forest Classification \end{itemize} As Tables 2 and 3 indicate, there is a substantial gain in (the relatively low positive values of) R-squared, the coefficient of determination, as a result of using per-user aggregates, and this is borne out across different regression models.
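The two classification metrics can likewise be computed from their definitions; the label vectors below are illustrative only:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of samples classified correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def f1(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

y_true = [1, 1, 1, 0, 0, 0]   # e.g., 1 = job failed
y_pred = [1, 1, 0, 0, 0, 1]
print(accuracy(y_true, y_pred))  # 4 of 6 correct
print(f1(y_true, y_pred))        # precision = recall = 2/3 here
```

Because F1 is a harmonic mean, it is pulled toward whichever of precision or recall is worse, which is why it complements plain accuracy on imbalanced failed/succeeded labels.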
However, for the classification models trained in this work, these gains are not matched by a commensurate gain in accuracy, precision, or recall. The marginal increase in R-squared suggests that the approach of joining per-job tuples with replicated per-user statistics is still promising. \section{Conclusions and Future Work} In this section, we conclude with a brief discussion of findings, immediate priorities for continuing work, and next steps for the overall approach presented in this work. \subsection{Summary} In this work we have developed the initial version of a test bed for machine learning for predictive analytics and decision support on a compute cluster. As derived, our regression-based usage prediction and classification tasks are disparate and require different machine learning models. Our experimental results show that where regression can be used for the prediction task and classification for the overall concept of a job being killed due to resource underestimates, the best-performing learning {\bf algorithms} also differ in most cases. \subsection{Findings and Conclusions} The results demonstrate some potential for learnability of the regression and classification target functions using the historical data in our initial test bed. We can use CART regression to predict CPU usage, as that target appears to be distributed nonlinearly, but CART regression does not handle memory usage prediction at all, because memory usage is distributed nearly linearly. For the classification task, all of the models achieved high accuracy scores. Thus, we should consider the model that is the least time-consuming, which is Gaussian Naive Bayes in our experiment. Our experiment suggests a possible approach to applying machine learning to HPC systems, but we still need to focus on improving the prediction accuracy for CPU and memory usage.
Predicting CPU and memory usage at job submission time remains a difficult task, due to the insufficiency of the available features: there are no conspicuous relationships between CPU and memory usage and the existing features in the data set that could be captured for prediction. An incremental gain in R-squared is observed for per-user aggregate features, but no appreciable gain in accuracy or F1 is observed for any of the inducers (models for supervised inductive learning) that were used. Thus, the experiment is inconclusive as to measurable impact, and calls for further experimentation with other causal models of user behavior, such as Bayesian networks and other graphical models of probability. \subsection{Current and Future Work} Some promising future directions suggested by the above results and interpretation include exploring the user modeling aspect of predictive analytics for HPC systems. We are developing a survey instrument to collect user self-efficacy and background information about training. This can be used to segment users in a data-driven way: based on clustering algorithms rather than merely stratifying them by years of experience, even by category. This would also facilitate exploration of the hypotheses outlined in \textsection 3.2 regarding change in imputed user expertise. Moreover, given the availability of user demographic data, there appears to be potential to incorporate a social aspect of the data to broaden the scope and applicability of the test bed - by learning from linked data and building network models, including but not limited to detecting "communities of expertise" within the set of users on an HPC system. [14]--[17] \begin{figure}[h] \caption{Types of links in an example social network. [17]} \includegraphics[width=\linewidth]{Figure1.png} \end{figure} The existence of a relationship between two users in a social network can be identified by an inference process or by simple classification.
Although the inference steps may be probabilistic, logical, or both, the links themselves tend to be categorical. As Figure 1 shows, they can depend purely on single nodes, local topology, or exogenous information. [17] In addition to using the structure of the known graph, common features of candidates for link existence (in this domain, profile similarity; in others, friendship, trust, or mutual community membership) include similarity measures such as the number of job failures of the same type, or some semantically-weighted measure of similarity over job outcome events. \section{Acknowledgements} This work was supported in part by the U.S. National Science Foundation (NSF) under grant CNS-MRI-1429316. The authors thank Dave Turner, postdoctoral research associate, and Kyle Hutson, a system administrator of Beocat, for consultation on the original SGE and Slurm log file formats. The authors also thank all of the students of CIS 732 {\em Machine Learning and Pattern Recognition} in spring 2017 and spring 2018 who worked on term projects related to the prediction tasks presented in this paper, including: Mohammedhossein Amini, Kaiming Bi, Poojitha Bikki, Yuying Chen, Shubh Chopra, Sandeep Dasari, Pruthvidharreddy Dhodda, Sravani Donepudi, Cortez Gray, Sneha Gullapalli, Nithin Kakkireni, Blake Knedler, Atef Khan, Ryan Kruse, Mahesh Reddy Mandadi, Tracy Marshall, Reza Mazloom, Akash Rathore, Debarshi Saha, Maitreyi Tata, Thaddeus Tuck, Sharmila Vegesana, Sindhu Velumula, Vijay Kumar Venkatamuniyappa, Nitesh Verma, Chaoxin Wang, and Jingyi Zhou.